<p>The important thing to remember about a <a href="http://en.wikipedia.org/wiki/PID_controller" rel="nofollow">PID</a> control loop is that each term is intended to dominate control at different times during a move.</p> <p>The proportional term is intended to dominate and provide a larger torque (or in your case speed) the further you are away from your target position.</p> <p>The derivative term is intended to dominate during the 'cruise' phase of your typical trapezoidal move. It helps to rein in a very high proportional term and limit runaway acceleration when you are far from your destination, but it can also help increase the speed at which you converge on your destination when you get close to it and the proportional term contributes much less.</p> <p>If you are using a velocity controller rather than a torque controller then the derivative term may actually be hidden inside your speed controller and not directly accessible to your PID loop. This can make control simpler (typically it will just accelerate as quickly as it can up to the desired speed or maximum speed, whichever is lower) but it can also make it less predictable. Often an overly aggressive D (or P) term can result in getting into a <a href="http://en.wikipedia.org/wiki/Limit_cycle" rel="nofollow">limit cycle</a> (often incorrectly called <a href="http://en.wikipedia.org/wiki/Mechanical_resonance" rel="nofollow">resonance</a> or <a href="http://en.wikipedia.org/wiki/Oscillation" rel="nofollow">oscillation</a> due to the sound of the motors humming or even screaming in this state, though <em>limit cycle</em> is a much more accurate description).</p> <p>The integral term is there to correct for residual <a href="http://en.wikipedia.org/wiki/PID_controller#Integral_term" rel="nofollow">steady-state error</a>, that is, where there is a persistent, long-term difference between where you are being asked to go and where you actually are. 
Your current <code>correction</code> (really just a tolerance) value works like the opposite of an integral term: it cuts the motor entirely when you are within a <a href="http://en.wikipedia.org/wiki/Deadband" rel="nofollow">deadband</a> around the desired position.</p> <p>Due to these factors, you will gain little from implementing a full PID loop unless you also plan in a velocity profile with distinct accelerate, cruise &amp; decelerate phases.</p> <p>Also bear in mind that the deadband and lack of an I term will mean that the final position will always be somewhat random and will most likely differ depending on which direction you approach the desired position from. As such, your bi-directional repeatability could be much worse than your standard repeatability.</p> <p>For more information on the difference between accuracy, repeatability and resolution, see <a href="http://www.me.mtu.edu/~microweb/chap2/ch2-1.htm" rel="nofollow">this excellent description</a>. In your case, your resolution is set by your compass sensor, while both accuracy and repeatability are most likely limited by your <code>correction</code> value, since if the <code>correction</code> value is larger than your compass resolution, you are throwing away some of your positional accuracy in return for being able to turn off your motor when you are close.</p>
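<p>To make the roles of the three terms concrete, here is an illustrative PID update run against a toy first-order plant. This is only a sketch: the plant model, gains and time step are all made up and would need tuning for a real axis:</p>

```cpp
#include <cassert>
#include <cmath>

// Minimal PID state: one integral accumulator and the previous error.
struct Pid {
    double kp, ki, kd;  // gains (hypothetical values, must be tuned)
    double integral;
    double prevError;

    // One control update; error = setpoint - measurement.
    double update(double error, double dt) {
        integral += error * dt;                       // I: soaks up steady-state error
        double derivative = (error - prevError) / dt; // D: reins in fast changes
        prevError = error;
        return kp * error + ki * integral + kd * derivative;
    }
};

// Drive a toy plant (velocity = command, position integrates velocity)
// to a setpoint of 1.0 and return the remaining position error.
double pidDemoFinalError() {
    Pid pid{2.0, 0.5, 0.1, 0.0, 0.0};
    double position = 0.0;
    const double setpoint = 1.0, dt = 0.01;
    for (int i = 0; i < 5000; ++i)
        position += pid.update(setpoint - position, dt) * dt;
    return std::fabs(setpoint - position);
}
```

Note there is no deadband here: the integral term is what removes the residual error that a <code>correction</code>-style deadband otherwise has to hide.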
1
2012-10-23T19:38:18.867
|soccer|control|
<p>Imagine programming a 3-wheel soccer robot. What type of controller would you use for spinning it? P? PID?</p> <p>The goal for this controller is that it should make the robot stand at a defined angle (0 degrees) and turn back if rotated by hand or by another robot.</p> <p>I use stepper motors for my robot and not servos, so I need to implement this in my software!</p> <p>I have written a sample P-type controller already and the movement is fairly good. But I would like to make it better if possible. The code is as follows:</p> <pre><code>void spinSpeed(int devidedValue, int addedValue, int correction) { if(degree&lt;correction &amp;&amp; degree&gt;-correction) { motorSpeed = 0; } else { if(degree &gt; 0) { motorSpeed = ((degree)/(devidedValue) + (addedValue)); } else { motorSpeed = ((degree)/(devidedValue) - (addedValue)); } } } </code></pre> <p><code>correction</code> is a range in which the robot makes no movement. <code>degree</code> is a number between -127 and 128 which is returned from the compass. <code>motorSpeed</code> is a number between 0 and 255 which is applied to the PWM.</p>
What is the right approach to write the spin controller for a soccer robot?
<p>Don't, instead get yourself some <a href="http://www.pololu.com/catalog/category/51" rel="nofollow noreferrer">decent motors</a> and drive them via an h-bridge like you'll find on this <a href="http://arduino.cc/en/Main/ArduinoMotorShieldR3" rel="nofollow noreferrer">motor shield</a>.</p> <hr> <p><img src="https://i.stack.imgur.com/fNMxg.jpg" alt="Pololu motors"></p> <p><img src="https://i.stack.imgur.com/6uKlD.jpg" alt="Arduino Motor Shield"></p>
2
2012-10-23T19:42:56.030
|control|rcservo|
<p>I've got some hobby servos (<a href="http://www.servodatabase.com/servo/power-hd/hd-1501mg">Power HD 1501MGs</a>) and I'd like to be able to control them (via an Arduino) so they will either go to the angle I set, or put them in a 'free running' mode, where the load will take them wherever it goes.</p> <p>Is this even possible, or am I just going to end up stripping the gears?</p> <p>My first thought is to simply kill the power to the servo, but the force required to move them in that state is more than I'd like.</p> <p>If it is possible, am I looking at a hardware change, or could I do it in software?</p>
How can I modify a low cost hobby servo to run 'freely'?
<p>You should use an ARM. Then you can run full Linux or Android and have access to powerful libraries, high-level functional languages, and a package manager and community. You can use gcc or LLVM, and a modern debugger like gdb.</p> <p>ARMs used to be too expensive and/or too big, but nowadays you can get an ARM for $5 that's only 13x13 mm. You have to use reflow soldering, but you will anyway if you want to make a professional-quality robot.</p> <p><a href="http://www.eetimes.com/electronics-products/electronic-product-reviews/processors/4230227/TI-debuts--5-Sitara-AM335x-ARM-processors" rel="nofollow">http://www.eetimes.com/electronics-products/electronic-product-reviews/processors/4230227/TI-debuts--5-Sitara-AM335x-ARM-processors</a></p> <p>All other instruction sets have lost the competition. If you pick something like AVR, you will be forever stuck with inferior toolchains, weaker MIPS/dollar, and a much smaller community.</p> <p>If you don't want to engineer the whole motherboard, then Gumstix, BeagleBone, BeagleBoard, and Raspberry Pi are all excellent pre-existing ARM-based devkits, and processor vendors also offer a devkit for every processor they make, bringing out at least a display bus and some serial busses.</p>
4
2012-10-23T19:43:47.140
|microcontroller|arduino|raspberry-pi|
<p>I am looking for a starting point for my project, preferably using popular systems (ones there is a lot of support for). I have an Arduino Uno, a Raspberry Pi, and a lot of willpower :) Anyone here built a project using the systems above?</p> <p>Observation: I'd like to start with a simple line-following vehicle and build up afterwards.</p>
Good Microcontrollers/SOCs for a Robotics Project
<p>A simple option for approximate nearest-neighbor search in non-Euclidean spaces is using $\sqrt{N}$ samples in the search. This has been implemented in OMPL, see <a href="http://ompl.kavrakilab.org/classompl_1_1NearestNeighborsSqrtApprox.html" rel="nofollow noreferrer"><code>ompl::NearestNeighborsSqrtApprox&lt; _T &gt;</code> Class Template Reference</a>.</p> <p>This reduces the per-query complexity to $O(\sqrt{N})$.</p>
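<p>As a rough illustration of the idea (not OMPL's actual code), the technique is simply: draw about $\sqrt{N}$ random candidates and keep the closest one under whatever metric the space uses, here the SO(2) arc distance:</p>

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

// Shortest arc between two angles in radians: a valid metric on SO(2),
// which a classic axis-splitting kd-tree cannot use directly.
double so2Distance(double a, double b) {
    const double kTwoPi = 2.0 * std::acos(-1.0);
    double d = std::fmod(std::fabs(a - b), kTwoPi);
    return std::min(d, kTwoPi - d);
}

// Approximate nearest neighbor: examine only ~sqrt(N) random candidates
// instead of all N, trading a little accuracy for O(sqrt(N)) queries.
std::size_t sqrtApproxNearest(const std::vector<double>& angles, double query,
                              std::mt19937& rng) {
    std::size_t samples =
        static_cast<std::size_t>(std::ceil(std::sqrt(double(angles.size()))));
    std::uniform_int_distribution<std::size_t> pick(0, angles.size() - 1);
    std::size_t best = pick(rng);
    for (std::size_t i = 1; i < samples; ++i) {
        std::size_t j = pick(rng);
        if (so2Distance(angles[j], query) < so2Distance(angles[best], query))
            best = j;
    }
    return best;
}
```

Because candidates are compared only through the metric, this works for any configuration space where a distance can be defined, at the cost of returning an approximate (not guaranteed exact) neighbor.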
5
2012-10-23T19:43:48.463
|motion-planning|rrt|
<p>I'm trying to implement a nearest-neighbor structure for use in an RRT motion planner. In order to do better than a linear brute-force nearest-neighbor search, I'd like to implement something like a kd-tree. However, it seems like the classical implementation of the kd-tree assumes that each dimension of the space can be split into "left" and "right". This notion doesn't seem to apply to non-Euclidean spaces like SO(2), for instance.</p> <p>I'm working with a serial manipulator arm with fully rotational links, meaning that each dimension of the robot's configuration space is SO(2), and therefore non-Euclidean. Can the kd-tree algorithm be modified to handle these kinds of subspaces? If not, is there another nearest-neighbor structure that can handle these non-Euclidean subspaces while still being easy to update and query? I also took a look at <a href="http://www.cs.ubc.ca/~mariusm/index.php/FLANN/FLANN">FLANN</a>, but it wasn't clear to me from their documentation whether they can handle non-Euclidean subspaces.</p>
Nearest-neighbor data structure for non-Euclidean configuration space
<p>Just to add to <a href="https://robotics.stackexchange.com/a/125/40">user65's good answer</a>.</p> <p>Rapid rotation about a horizontal axis will come from a large difference between the thrust produced by opposite motors. The larger the difference, the greater the rotation rate. So motors that can produce much more thrust than is needed to simply hold the 'copter in the air will help to achieve fast rotations.</p> <p>Rapid rotation about the vertical axis comes from a large difference in torque between one opposite pair and the other. So now you need not only plenty of spare thrust, but also a lot of propeller drag.</p> <p>Sadly, both of these requirements (spare thrust, and prop drag) imply lower efficiency.</p>
25
2012-10-23T20:34:39.070
|quadcopter|
<p>There are many sites which explain briefly this problem and even propose combinations. I however would like a much more detailed explanation. What is going to give my quad the most agility? Do I need bigger motors/props in a heavy quad to achieve the same level of agility than in a lighter quad?</p> <p><strong>EDIT:</strong> Here is what I have understood on the subject:</p> <ul> <li>A quadcopter doesn't need high revving motors as there are 4 propellers providing thrust and high revving motors require more battery power.</li> <li>Larger propellers give more thrust per revolution from the motor.</li> </ul> <p>The question is focused more on the general characteristics of various combinations but some specific questions do spring to mind: </p> <ol> <li>For a given combination what would be the effect of upgrading propeller size in comparison to installing higher revving motors?</li> <li>What changes would need to be made to lift a heavier quad?</li> <li>How can I achieve more agility in my quad?</li> </ol>
How to choose the right propeller/motor combination for a quadcopter?
<p>The robotics team I lead at NCSU has implemented the ability to drive our holonomic robot using a Leap Motion controller. Here's a <a href="https://www.youtube.com/watch?v=BDgmRz1Gb7Q#t=0" rel="nofollow">demo video</a>.</p> <p>Our codebase is in Python, so we used the Python Leap library, which was quite simple and friendly. Our Leap-related code is only about 150 lines. Our client basically takes pitch, yaw and roll data from the Leap controller and converts it into movements, which it uses to make calls to our bot's API using ZMQ. I can share the code with you if you're interested in additional details (it's currently in a private GitHub repo pending our contest, but it's open sourced under the BSD two-clause license).</p>
39
2012-10-23T21:12:46.500
|kinect|input|
<p>As in the title, I'd like to implement gesture recognition on my robot and I'm looking for the pros and cons between the Kinect and the Xtion - and also if there are any other sensible options available.</p> <p>I'm thinking of the following, but open to other suggestions:</p> <ul> <li>Accuracy</li> <li>Price</li> <li>Driver quality</li> <li>Power draw</li> </ul>
I'd like to use gesture based input for my robot. What are the pros and cons between the Xtion Live and the Kinect?
<p>There is very little difference between a robotic off road vehicle and a normal vehicle with a driver. What kind of vehicles are suitable for off-road conditions?</p> <p>Why - tractors of course!</p> <p><img src="https://i.stack.imgur.com/ho71x.jpg" alt="Tractor"></p> <p>Look at those bad boys! Great for all around the farm. Nobody seems to talk about the importance of cleaning the mud off them, except to read the part number to order replacements.</p> <p>In fact, why not other off-road vehicles? Like Land Rovers:</p> <p><img src="https://i.stack.imgur.com/3WOAg.jpg" alt="Land Rover"></p> <p>They seem to be able to undertake long expeditions without requiring cleaning.</p> <p>Or Monster trucks:</p> <p><img src="https://i.stack.imgur.com/tW98e.jpg" alt="Monster Truck"></p> <p>What you'll notice about the treads on these types of tyres is the alternating half-chevron patterns. They point in such a way that the mud sliding on the tyre is always pushed towards the outside of the tyre, eventually falling off. In a way they're self cleaning.</p>
42
2012-10-23T21:18:17.723
|wheel|
<p>I'm looking to potentially build an autonomous robot that will frequently venture off road, and remain autonomous for up to 6 hours at a time. I've found limited information however about the best tyre tread for this purpose, what could be most suitable?</p> <p>I'm especially looking for a tread pattern that won't need regular cleaning, to save setting this up automatically (a tread that gets "clogged" very quickly clearly won't be that effective at tackling tough terrain autonomously.)</p>
What tyre tread would be best suited to an off road robot expected to deal with frequently muddy conditions?
<p>The simplest controller is a linear state feedback controller. There are essentially 4 different states that you need a gain for. These are tilt angle, tilt rate, speed and position.</p> <hr /> <p><strong>LQR</strong> (linear quadratic regulator) is a method to design these gains (after obtaining a linearized state-space representation of your system). If you do not have a state space representation (you probably don't), you can get equations of motion and measure the parameters. If you do not have a state space representation, you should just tune the gains manually (without LQR or other methods such as <strong>pole placement</strong>).</p> <hr /> <h1>Tuning gains manually:</h1> <p>Assuming tilt angle, position/speed and wheel torques are all directed forwards (if positive), you want a positive gain on your tilt angle and tilt rate, as well as a positive gain on position and speed.</p> <p>Start with a gain on tilt angle, and tilt rate. This will get it to balance initially. Once it remains balanced, you can control position and speed by adding a gain to them. If it is unstable, increase the gain on tilt rate (which helps to damp the system).</p> <p>The position/speed control will control both states to zero. To control to some other value, you just need a reference tracking controller, by replacing the states with their errors before feeding it into your controller (eg. current speed - speed reference).</p> <p>Yaw control can be done independently (with the differences in wheel torques added to the main balance/speed/position controller).</p>
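<p>A toy simulation of the inner loop described above (balancing from tilt angle and tilt rate alone). The plant constant and gains below are invented purely for illustration; on real hardware you would tune them as described:</p>

```cpp
#include <cassert>
#include <cmath>

// Toy linearized tilt dynamics: thetaDDot = a*theta + u, with a > 0, i.e. an
// unstable upright equilibrium like an inverted pendulum (a is made up).
// State feedback u = -(kTheta*theta + kRate*thetaDot) uses positive gains on
// tilt angle and tilt rate, the two inner-loop gains described above.
double finalTilt(double kTheta, double kRate) {
    const double a = 25.0, dt = 0.001;
    double theta = 0.1, thetaDot = 0.0;  // start 0.1 rad off vertical
    for (int i = 0; i < 5000; ++i) {     // simulate 5 seconds
        double u = -(kTheta * theta + kRate * thetaDot);
        thetaDot += (a * theta + u) * dt;
        theta += thetaDot * dt;
    }
    return std::fabs(theta);
}
```

With <code>kTheta</code> large enough to overcome the unstable pole and a non-zero <code>kRate</code> to damp it, the tilt decays to zero; dropping the rate gain leaves the system undamped and unstable, which is the "increase the gain on tilt rate" advice above. Position and speed gains would be added on top in the same way.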
43
2012-10-23T21:22:09.763
|control|gyroscope|balance|two-wheeled|
<p>Is there a good, popular and reliable algorithm I can use by taking input from a gyroscope and using this to control two independent wheels to keep such a balanced robot reliably upright? I'm looking for an algorithm that will let me use it to drive a robot around as well as keep it upright when stationary. The ability to deal with inclines and people <em>nudging</em> it would also be a bonus, but not essential.</p>
What algorithm should I use for balancing a two wheeled robot using a gyroscope?
<p>Lithium Ion Polymer batteries are probably your best bet. </p> <ol> <li>They will present the least amount of magnetic interference (followed by Alkaline... NiCD interferes the most). </li> <li>Unlike the regular lithium ion cells (which are usually round), these ones will stack nicely. </li> <li>I hope you'll check the manual before trying this -- many of them are pressure tolerant. I know of some AUV labs that just put stacks of cellphone-sized batteries into rectangular boxes filled with oil (for pressure compensation) and go with it. For example, Autosub 6000's battery packs -- depth-rated to 6000 meters, without the aid of a cylindrical or spherical pressure housing: <img src="https://i.stack.imgur.com/dxHKf.jpg" alt="Pressure Compensated Battery Pack"> (<a href="http://www.thesearethevoyages.net/jc44/autosub6000_page1.html" rel="nofollow noreferrer">http://www.thesearethevoyages.net/jc44/autosub6000_page1.html</a>). For other battery technologies, you'll have to decide how to pack them into your pressure housing, which may be a bit more complicated.</li> </ol>
46
2012-10-23T21:31:31.177
|underwater|battery|auv|
<p>I'm looking to build an underwater glider robot that will need to remain autonomous for long periods of time, perhaps even months. Power draw should be minimal, and I'm thinking of including some form of charging device (such as a solar charger) however I'd also like the battery capacity to be large enough so I don't hugely need to worry about this. Large current draw isn't really needed, but the battery does need to hold its charge effectively for long periods of time. Considering this is an underwater vehicle, weight and size are also a concern.</p> <p>Cost isn't too much of an issue, as long as it's within reason of a hobbyist project.</p> <p>I am looking to understand the pros and cons of each technology (Lead acid, LiPo, NiCad, fuel cell?), so I can decide what type of battery would be best suited to my purpose. As such, I'm looking at battery technology rather than looking for a specific shopping recommendation.</p>
What's the most effective type of rechargeable battery when taking into account size / weight / Ah?
<p>It will take a lot to shake components off of a PCB/PWB so for the most part it should be safe if you make sure that the mounting is correct. One thing that people forget is that if there is vibration then there may also be flex, and even tiny amounts of flex transmitted into a PWB can be damaging. FR4 is stiff and in the wrong location will take a lot of the stress loads. But this is easily fixed with the right kind of mounting that doesn't allow the force to transfer through the board - anchored on one side, semi-rigid on the other side.</p>
48
2012-10-23T21:46:04.840
|electronics|protection|
<p>It's common for components on some types of robots to experience large environmental stresses, one key one being vibration. Is this something I need to worry about with typical electronics and other sensitive components, or not really? If it is, then how do I secure such components? </p> <p>I've heard of two main philosophies behind this, the first being that you should use a damping system such as with springs to absorb the shock. The second is that you should keep everything rigidly in place so it can't move, and therefore can't hit against anything else and break.</p> <p>Which one should I follow, or if the answer is "it depends" what should I use as a guide as to best protect sensitive components?</p>
How can I best protect sensitive components against damage through vibration?
<p>I actually work as a programmer for the tractors that bit-pirate mentioned.</p> <p>There are several ways mentioned that can get you more accurate results. It depends largely on your application and what you are trying to accomplish.</p> <ol> <li><p><a href="http://en.wikipedia.org/wiki/Wide_Area_Augmentation_System" rel="nofollow">WAAS</a> uses an augmentation signal to correct GPS error; it is not as accurate as some other methods, but you will get decent results.</p></li> <li><p>If your platform is fixed you can just average the position over time and get good results.</p></li> <li><p>RTK is very accurate, and the library mentioned by Jakob looks pretty cool. Some states like Iowa have a government-run RTK system that you can use.</p></li> <li><p>I found <a href="http://gizmodo.com/no-gps-signal-no-problem-this-little-chip-knows-where-1518907218" rel="nofollow">this</a> the other day for dead reckoning.</p></li> <li><p>On a mobile system the antenna offset from the center of gravity will affect the accuracy. E.g. on a steep hill the GPS antenna will be off center.</p></li> </ol> <p>What are you trying to accomplish? The atmospheric distortion for multiple antennas in the same location would be very similar, so multiple receivers side by side would tend to be off by a similar amount.</p>
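<p>For point 2 (a fixed platform), the averaging itself is trivial; the only care needed with real latitude/longitude is to convert to a local frame first. A sketch with hypothetical local-frame fixes in meters:</p>

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// A GPS fix expressed in a local tangent frame (east/north in meters).
// Averaging in a local frame avoids the wrap-around problems that
// averaging raw longitude degrees can cause near the antimeridian.
struct Fix { double east, north; };

// Mean of repeated fixes from a *stationary* receiver: random receiver
// noise averages out over time, leaving mostly the slowly varying biases.
Fix averageFix(const std::vector<Fix>& fixes) {
    Fix mean{0.0, 0.0};
    for (const Fix& f : fixes) {
        mean.east += f.east;
        mean.north += f.north;
    }
    mean.east /= fixes.size();
    mean.north /= fixes.size();
    return mean;
}
```

Note that averaging only beats down the random part of the error; correlated errors such as atmospheric distortion remain, which is why techniques like WAAS and RTK exist.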
49
2012-10-23T22:01:52.997
|localization|gps|
<p>Obviously GPS is the most obvious and accessible technology for obtaining a locational "fix" for a robot at any particular time. However, while it's great sometimes, in other locations and situations it's not as accurate as I'd like, so I'm investigating whether there's a relatively easy way to improve on this accuracy (or not, if that turns out to be the case.)</p> <p>I've considered the following options, but found limited information online:</p> <ul> <li><p>Would using a much better antenna help, especially for low signal areas? I'm thinking yes to this, but if so how would I construct such an antenna and know that it's an improvement? Are there any good guides on how to do this? I could use a ready made antenna if they're not too expensive.</p></li> <li><p>Would using multiple separate receivers in tandem help, or would they likely all be off by a similar amount, or would I not be able to extract a meaningful average with this approach?</p></li> <li><p>What sort of characteristics should I look for when choosing a good GPS receiver to help accuracy?</p></li> <li><p>Is there anything else I should consider which I've missed?</p></li> </ul>
What's the most accurate way to obtain a locational fix using GPS?
<p>To do SLAM, you will need a relatively good estimate of position.</p> <p>Robots that use laser scanners can make do with just odometry, because the data is relatively accurate, and the scanner data can be used to help localize in subsequent time steps.</p> <p>Ultrasound sensors are very fuzzy: they generally have an angular uncertainty of 20+ degrees, and anything in the general direction will be detected.</p> <p>Thus, they are of negligible help for localization (except in very structured environments).</p> <p>A GPS/IMU combination can be used to get reasonable localization. Of course, this depends on the scale of the robot, and if it is indoors, GPS may not be practical.</p> <p>If you are able to carefully control wheel slippage, wheel odometry can significantly improve localization in the short term (although an absolute method of localization is preferred). Without an absolute reference (e.g. GPS), even with a laser scanner, you will need to be able to solve the problem of "closing the loop".</p> <p>Structured environments may have a lower accuracy requirement. For example, a maze-like environment with walls at regular square grid distances, where it is simple to detect the presence of a wall in each direction of a grid cell.</p>
53
2012-10-23T22:44:20.217
|slam|localization|gps|mapping|acoustic-rangefinder|
<p>Ultrasound sensors are incredibly cheap these days which makes them a popular choice for many hobbyist robotic applications, and I'd like to use a bunch of them (say 10) around a robot with an algorithm to build a rough map of an area (as the robot explores it.) I'm not interested in dealing with moving objects at this stage, just pinpointing stationary ones, and I'll be using GPS for location. I realise that other components such as a laser scanner would produce much more accurate results, however such devices are also astronomically more expensive.</p> <p>Does an algorithm exist for this purpose?</p>
What algorithm can I use for constructing a map of an explored area using a number of ultrasound sensors?
<p>The problem with the Dynamixel servos is that they're trapped in an insulating plastic jacket. There is no convection, and no good thermally conductive material to let heat out efficiently.</p> <p>The first thing I'd suggest is to get some air flow. Moving air has a surprising ability to cool something hot. You hardly need any movement to get good results. If you're willing to hack the servo, then hack away at the plastic casing. First dismantle it, removing the PCB and motor too if you can get the screws undone. Place the gears somewhere clean and safe in such a way that you can remember which order they go back in.</p> <p>Now get your Dremel out and carefully remove some of the casing around the motor (green areas). You're aiming to remove enough so that air can flow <em>through</em> the casing. In one side, out the other. Note the plastic surface under the gears; there's no point cutting through to the gear cavity. In fact doing this will allow debris to enter the gear chain and stall it, so keep your cutting behind this plastic surface.</p> <p><img src="https://i.stack.imgur.com/ZfKBE.jpg" alt="Cut the casing of a Dynamixel servo"></p> <p>Also be careful not to remove any of the material around the screws (red areas). You'll want that when you re-assemble the motor.</p> <p>Just doing this should give you some extra power margin before it overheats, especially if you can mount the motor so that the cool air enters at the bottom and warm air leaves at the top. It will be even better if you can blow air through the casing using a small fan. If you have several motors close together, you might be able to blow air through all of them in series with a single fan. </p>
55
2012-10-23T22:49:37.247
|otherservos|heat-management|cooling|
<p>Hobby servos are generally not sufficient for real robotics for a number of reasons: Quality, precision, range of motion, torque, etc.</p> <p>Meanwhile, industrial servos, such as ABB, Emerson, GE, etc, are generally both heavy and expensive, and not suitable for small-humanoid scale actuation. Similarly, building your own servo from motors, gearboxes, and encoders, is akin to trying to design your own CPU just to control a motor -- too much detail getting in the way of real work.</p> <p>There exists an in-between level of servos -- reasonably priced, reasonable performance, and reasonably controllable -- in the form of the competing brands of Dynamixel and HerkuleX servos.</p> <p>The smallest offerings in those lines generally are not strong enough for real-world interaction, but the next step up holds a lot of promise. For the Robotis Dynamixel line, this is the RX-24F servo (priced between the cheap AX-12F and the next step up the MX-28R.) Asking around, it seems that the specs and interface on that servo are great, but that it shuts down from thermal overload if you actually try to run it at rated load -- something that I'd expect from a hobby servo, but not a robotics servo.</p> <p>Now, stepping up to the MX-28R doubles the price. Thus, if the RX-24F heat flaw could be fixed, it would be positioned at a nice price/performance point.</p> <p>Does anyone have experience in providing additional cooling for this servo? Anything from thermal-gluing heat sinks to the case, to drilling holes and running cooling fluid tubing to any hot parts on the interior would be reasonable approaches. However, before I spend significant time and effort investigating this, I'd like a second opinion -- is it possible, does anyone have experience doing this, is it worth it?</p>
Adding external heat sinking to a Dynamixel RX-24F servo?
<p>Temperature is a very simple number to measure and is a good aggregation of all the other factors since a weakening motor will be driven harder.</p> <p>Generally a motor that is about to fail will be significantly hotter than the rest of the motors.</p>
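<p>A sketch of how that comparison might be automated, assuming one temperature sensor per drive motor; the 15&nbsp;°C margin is a hypothetical threshold, not a datasheet value:</p>

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Flag a motor whose temperature is far above the median of its peers.
// Comparing against the median (rather than a fixed limit) filters out
// "everything is hot because the terrain is tough" false alarms.
int suspectMotor(const std::vector<double>& tempsC, double marginC) {
    std::vector<double> sorted = tempsC;
    std::sort(sorted.begin(), sorted.end());
    double median = sorted[sorted.size() / 2];
    for (std::size_t i = 0; i < tempsC.size(); ++i)
        if (tempsC[i] > median + marginC)
            return static_cast<int>(i);  // index of the suspect motor
    return -1;                           // all motors within margin
}
```

On a real robot you would feed this from thermistors mounted on each motor case and latch a warning rather than acting on a single reading.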
57
2012-10-23T22:51:07.573
|sensors|failure|motor|
<p>What characteristics can I look for which could be reliable early warning signs that a DC motor on my robot, say one used for the drive, could be failing? I'm looking for an answer that deals in terms of sensors rather than manual inspection, so a circuit could be constructed to warn of a potential failure before it happens.</p> <p>I have a few ideas such as an increase in current draw or decrease in rotation speed / voltage, but I want to guard against false warnings caused by reasonable wear and tear, or just the robot struggling on tough terrain.</p> <p>Obviously such a system will never be foolproof, but are there any points I can look out for?</p>
How can I detect if a DC motor on a robot is starting to fail?
<p>Mathematically, the fact that the wheels are now rotating to propel the robot (mostly) eliminates wheel position as a possible control parameter. Basically you would have to redesign your algorithm to accept a large and variable angular-velocity component while still using angular velocity in your feedback. The less noisy this signal is, the better the probable outcome, simply because you are likely to differentiate position to derive another control parameter; or rather, that is one way. So it does look like you need another control input, but it need not be an accelerometer. You could use a horizon sensor, a fixed marker location, or even tilt sensors.</p>
60
2012-10-23T23:02:36.980
|two-wheeled|stability|
<p>With two wheeled robot <a href="http://www.pages.drexel.edu/~dml46/Tutorials/BalancingBot/files/nxt_icon.jpg">like this one</a>, I have managed to stabilize it while keeping it stationary. This was done using a digital feedback control system by reading the position of the wheels to determine position, and the natural back electromotive force from the wheel motors was used in the feedback loop to determine velocity. It was kept stable with a PID controller, which was designed using a root locus algorithm to keep it stable and modulate the performance parameters (such as percent overshoot, settling time, etc.). I wanted to attempt to keep it stable while simultaneously propelling it forward, but I couldn't figure out how to go about designing a linear controller that could do that. Is it possible to both propel the robot forward <strong>and</strong> keep it stable using a feedback controller on the wheels, or is a gyroscope necessary?</p>
Is it possible to both move and stabilize a two wheeled robot with no gyroscopes?
<p>You might want to check out FRC 1114 Simbotics' website, especially <a href="http://www.simbotics.org/resources/mobility" rel="nofollow noreferrer">http://www.simbotics.org/resources/mobility</a> . The FTC team I coach has used that as a starting point for their own work. </p>
65
2012-10-24T00:17:07.893
|mobile-robot|design|movement|wheel|first-robotics|
<p>I'm part of a <a href="http://usfirst.org" rel="noreferrer">FIRST Robotics</a> team, and we're looking into using <a href="http://en.wikipedia.org/wiki/Mecanum_wheel" rel="noreferrer">Mecanum wheels</a> for our robot.</p> <p>What are the advantages and disadvantages of using Mecanum wheel versus regular ones? From looking through Google, it looks like Mecanum wheels give more mobility but don't have as much traction. Are there any other advantages or disadvantages?</p> <p>Compared to regular wheels, are Mecanum wheels less efficient or more efficient in any way? And if so, is there a quantifiable way to determine by how much?</p> <p>Are there equations I can use to calculate efficiency (or inefficiency) and/or speed of moving forwards, sideways, or at arbitrary angles?</p> <p><em>A picture of a robot with mecanum wheels:</em></p> <p><img src="https://i.stack.imgur.com/kLSuG.png" alt="robot using mecanum wheels"></p>
Calculating the efficiency of Mecanum wheels
<p>You are going about this incorrectly. The reason why Pololu is telling you to connect the two pins is that the SLEEP pin has a pullup resistor on their breakout board.</p> <p>Connecting RESET to the SLEEP pin is equivalent to connecting the RESET pin to high.</p> <p>You can achieve your goal by connecting the RESET pin to high (5V through a pullup resistor) and connecting the SLEEP pin directly to your Arduino, just like the STEP/DIR pins.</p>
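<p>A sketch of the resulting Arduino-side code. The pin number is an arbitrary choice, and the <code>pinMode</code>/<code>digitalWrite</code> stand-ins below exist only so the logic can be compiled and checked off-target; on a real board you would delete them and use the Arduino core's own functions:</p>

```cpp
#include <cassert>
#include <map>

// --- Minimal stand-ins for the Arduino core (off-target checking only) ---
const int OUTPUT = 1, HIGH = 1, LOW = 0;
std::map<int, int> pinLevel;                       // last level written per pin
void pinMode(int, int) {}
void digitalWrite(int pin, int level) { pinLevel[pin] = level; }

// --- Sketch logic: RESET is tied high in hardware; only SLEEP is driven ---
const int SLEEP_PIN = 8;   // arbitrary free digital pin

void setup() {
    pinMode(SLEEP_PIN, OUTPUT);
    digitalWrite(SLEEP_PIN, LOW);    // start with the driver asleep (no heat)
}

// HIGH wakes the A4988 so it can drive the stepper; LOW puts it to sleep.
void stepperPower(bool on) {
    digitalWrite(SLEEP_PIN, on ? HIGH : LOW);
}
```

Per the A4988 data sheet, allow roughly 1&nbsp;ms after raising SLEEP before issuing step pulses so the charge pump can stabilize.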
75
2012-10-24T04:13:21.397
|arduino|logic-control|stepper-motor|stepper-driver|
<p>I've got this driver: <a href="http://www.pololu.com/catalog/product/1182" rel="nofollow noreferrer">http://www.pololu.com/catalog/product/1182</a></p> <p>... an A4988 stepper motor driver carrier</p> <p><img src="https://i.stack.imgur.com/MPEI5.png" alt="enter image description here"></p> <p>I'm attempting to control a connection between the RESET and SLEEP pins with logic (code) running on my Arduino. The motor runs perfectly when these two pins are connected; however, I'd like to control when the stepper is powered off from my Arduino (and thus not generating extra heat)</p> <p>I'd like to:</p> <ol> <li>designate a pin to control the connection between these two pins</li> <li>use a "digitalWrite" to the above pin with a HIGH or LOW to switch power on and off from the stepper</li> </ol> <p>NOTE: The <a href="http://www.pololu.com/file/0J450/a4988_DMOS_microstepping_driver_with_translator.pdf" rel="nofollow noreferrer">data sheet</a> mentioned that for the driver to be powering the stepper, both RESET and SLEEP needed to be switched on (HIGH)</p>
Using an Arduino to control an ON / OFF connection between two pins
<p>Have a look at the paper <a href="http://www.dynalloy.com/pdfs/TCF1140.pdf" rel="nofollow noreferrer">Technical Characteristics of Flexinol Actuator Wires</a>.</p> <p>What you'll want to do is devise a structure that leverages the available contraction of the nitinol wire to achieve the desired stroke and force for your application. The paper gives a couple of example structures:</p> <p><img src="https://i.stack.imgur.com/i1giv.png" alt="structures 1"></p> <p><img src="https://i.stack.imgur.com/jKiFI.png" alt="structures 2"></p> <p><img src="https://i.stack.imgur.com/i6vwx.png" alt="stroke and force"></p> <p>The percentage of contraction of nitinol is related to its temperature. However, the relationship is non-linear and differs between the heating phase and the cooling phase. These differences will need to be taken into account.</p> <p><img src="https://i.stack.imgur.com/dPrWV.png" alt="temp vs. contraction"></p> <p>In the article <a href="http://robotics.hobbizine.com/flexinolresist.html" rel="nofollow noreferrer">precision flexinol position control using arduino</a> the author describes how to use the properties of nitinol so that the wire can act as its own feedback sensor:</p> <blockquote> <p>Flexinol, also known as Muscle Wire, is a strong, lightweight wire made from Nitinol that can be made to contract when conducting electricity. In this article I'll present an approach to precision control of this effect based on controlling the voltage in the Flexinol circuit. In addition, taking advantage of the fact that the resistance of Flexinol drops predictably as it contracts, the mechanism described here uses the wire itself as a sensor in a feedback control loop.
Some advantages of eliminating the need for a separate sensor are reduced parts count and reduced mechanical complexity.</p> </blockquote> <p>So by using PWM to vary the voltage across the wire and using an ADC to read that voltage drop, you can design closed loop control of the percentage of contraction of the nitinol wire. Then, using an appropriate mechanical structure, you can translate that contraction into the desired stroke and force needed for your application.</p>
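<p>As a rough illustration of that closed loop (PWM duty adjusted from measured resistance), here is a sketch with a toy simulated wire standing in for the real Flexinol and ADC reading. All constants below are made up for illustration and are not Flexinol specifications:</p>

```python
# Illustrative only: proportional control of wire contraction using
# resistance as feedback. The "plant" is a toy first-order model standing
# in for the real wire + voltage-drop measurement.

def simulate_wire(duty, state):
    """Toy wire model: contraction (0..1) follows PWM duty with a lag;
    resistance drops linearly as the wire contracts (hypothetical ohms)."""
    state["contraction"] += 0.2 * (duty - state["contraction"])
    r_max, r_min = 10.0, 8.0
    return r_max - (r_max - r_min) * state["contraction"]

def control_step(r_measured, r_target, duty, gain=0.5):
    """One proportional update: resistance above target means the wire is
    too relaxed, so increase duty; clamp duty to [0, 1]."""
    duty += gain * (r_measured - r_target)
    return min(max(duty, 0.0), 1.0)

state = {"contraction": 0.0}
duty = 0.0
r_target = 9.0   # corresponds to ~50% contraction in this toy model
for _ in range(100):
    r = simulate_wire(duty, state)
    duty = control_step(r, r_target, duty)
```

<p>On real hardware the simulated plant would be replaced by a PWM write and an ADC read of the voltage across the wire, and the gain would have to be tuned against the hysteresis shown in the temperature/contraction curve above.</p>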
85
2012-10-24T10:21:01.003
|mobile-robot|
<p>For a robotic gripper arm we are designing for factory floor use on very small components, we propose to use electrically activated Shape Memory Alloy (SMA) wire harnesses for actuation. </p> <p>The device being designed is akin to Pick &amp; Place machines used for circuit assembly, but moves over an aircraft-hanger sized work surface on wheels. It manipulates irregular shaped and porous objects between 0.5 cu.cm and 8 cu.cm each - hence the traditional vacuum P&amp;P mechanism does not appeal. Also, individual objects in the assembly line have varying hardness and weights.</p> <p>Our design constraints are: </p> <ul> <li>Ensuring minimal to zero vibration and sound </li> <li>Using minimal volume within the mechanism (batteries are at the wheelbase, providing stability, so their weight is not a concern)</li> <li>Fine variation of gripper pressure</li> </ul> <p>We believe SMA meets the first two constraints well, but need some guidance on achieving constraint 3, i.e. different levels of pressure of the gripper controlled electronically.</p> <p>My questions:</p> <ul> <li>Can PWM of a current above the activation threshold (320 mA for <a href="http://www.dynalloy.com/TechDataWire.php">0.005 inch Flexinol HT</a>) provide variable, repeatable actuation force? </li> <li>Would we need pressure sensors on each fingertip and a closed loop control for grip, or can the gripper be calibrated periodically and maintain repeatable force?</li> <li>Is there any well-documented precedent or study we should be referring to?</li> </ul>
Shape Memory Alloy wire for robot gripper arm actuation: How to vary grip pressure?
<p>How about Omnidirectional wheels?</p> <p><img src="https://i.stack.imgur.com/DMSs2.jpg" alt="Omnidirectional wheel"></p> <p>You could drive the sphere on two pairs of such wheels, with each pair driven by one motor. This would give you two-axis control of the sphere, i.e. you can drive forwards or sideways.</p> <p>Or you could use three wheels and three motors, just like this robot:</p> <p><img src="https://i.stack.imgur.com/sjYFw.jpg" alt="Bowling ball robot"><img src="https://i.stack.imgur.com/npENu.jpg" alt="Bowling ball robot"></p> <p>This allows you to spin the sphere about the vertical axis, if that's helpful.</p> <p>The good thing about this solution is that the wheels both drive the sphere and bear the weight.</p>
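<p>For reference, here is a common kinematic sketch for three such wheels spaced 120&#176; apart around the ball. The angles, radius, and sign conventions are assumptions for illustration, not measurements from the pictured robot:</p>

```python
import math

# Map a desired body velocity (vx, vy) and spin rate omega to the
# tangential speed each omnidirectional wheel must produce at its
# contact point with the ball.

def wheel_speeds(vx, vy, omega, R=0.05, angles_deg=(0, 120, 240)):
    """Standard three-wheel omnidrive kinematics; R is the effective
    contact radius (assumed value)."""
    speeds = []
    for a in angles_deg:
        th = math.radians(a)
        speeds.append(-math.sin(th) * vx + math.cos(th) * vy + R * omega)
    return speeds
```

<p>A pure spin command gives all three wheels the same speed, while a pure translation command makes the three speeds sum to zero, which is a quick sanity check on the geometry.</p>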
88
2012-10-24T10:41:24.373
|otherservos|mobile-robot|stepper-motor|
<p>Design goal is to have a mobile robot that operates on 3 large casters, essentially 2 to 4 inch diameter steel ball bearings, that are motorized. No other mechanism would touch the surface. The robot should thus be able to move in any XY direction on a flat surface, with steering being achieved by varying the speed and rolling direction of these wheels. The robot has no designated "front" side, so it does not need to (and should not have to) bodily turn, in order to move off in any given direction. </p> <p>Conventional wheels or tracks are not the preferred approach. </p> <p>Looking for suggested mechanical layouts of multiple rubber wheels, pressing down onto the steel ball from within the castor housing, to drive the ball in any direction. A single wheel on a stepper, rotated around the vertical axis using a sail-winch servo, is one approach under consideration. Would this be ideal, or are there any serious flaws in this approach?</p> <p>Alternatively, is there any other suggested method of driving such a steel ball in any arbitrary direction under electronic control?</p>
Mechanical design for motorized spherical caster wheels
<p>Whilst the two answers in place focus on mechanical noise (which is what I think you are asking about) there is also, of course, electrical noise, which manifests itself as Electro-Magnetic Interference (EMI; the related discipline of controlling it is Electro-Magnetic Compatibility, EMC)</p> <p>Anything that contains a motor is likely to generate a level of EMI... normally a small enough level that you don't notice it, but with bigger motors and/or high switching speeds it will become noticeable.</p> <p>Symptoms include picture breakthrough on your TV (etc.) while the motor is running.</p>
91
2012-10-24T13:42:36.257
|motor|actuator|noise|
<p><a href="http://www.youtube.com/watch?v=ZHJf365p_zw">Many robots</a> and other mechanical devices produce the signature whirring noise as they move, <a href="http://www.youtube.com/watch?v=67CUudkjEG4">some</a> produce less. What makes the difference? What restrictions a silence requirement places on a robot?</p>
What determines the amount of noise an actuator produces?
<p>It sounds like you're observing the symptom of "lost bytes". There are several different things that can cause an AVR to lose bytes.</p> <p>It sounds like you are <em>guessing</em> that it's losing bytes when new bytes are coming in while the buffer is full. While there are several remedies to fix or at least ameliorate that particular problem, those approaches are useless and counter-productive if the real cause of the "lost bytes" is something else.</p> <p>The first thing I would do in your shoes is to set up some sort of "status debug" port that gives me a clue of why exactly bytes have been lost -- at least a status LED. Once you know why bytes are being lost, you can apply the appropriate remedy.</p> <p>Most modern protocols have some sort of check value at the end of each packet. It's nice if your status debug system can report on the packet goodput rate and the packet error rate -- at least blink a green LED for each validated packet and a red LED for each failed packet check.</p> <p>Since (especially with radio connections) the occasional corrupted byte is pretty much inevitable, most modern protocols are carefully designed such that if <em>any</em> byte is corrupted or lost, the system will eventually discard that packet, and eventually re-sync and correctly handle future packets.</p> <h2>Bytes lost because the interrupt handler somehow never put them into the buffer</h2> <p>Often bytes are lost because the interrupt handler somehow never put them into the buffer. 
There are several causes, each with a different remedy: external problems, and internal interrupts turned off too long.</p> <p><strong>External problems</strong>:</p> <ul> <li>Line noise causing errors</li> <li>Physical wired connections accidentally getting temporarily unplugged</li> <li>Loss of signal on radio connections</li> </ul> <p>Typically we clip an oscilloscope to the input pin and -- if we're lucky -- we can see the problem and try various techniques to see if that cleans up the signal.</p> <p>Even when the signal at the input pin looks perfect, we can still have data loss issues.</p> <p>Immediately after the last bit of a byte comes in a <a href="https://en.wikipedia.org/wiki/Universal_asynchronous_receiver/transmitter#Synchronous_transmission" rel="nofollow noreferrer">USART</a> or <a href="http://en.wikipedia.org/wiki/Serial_Peripheral_Interface_Bus" rel="nofollow noreferrer">SPI</a> port, normally the interrupt handler for that port is triggered, and that interrupt handler pulls that byte and sticks it into a circular buffer. However, if the <strong>interrupts are turned off too long</strong>, the next byte to come in that port will inevitably overwrite and lose the first byte -- the interrupt handler for that port never sees that first byte. The four "ways an interrupt handler can be turned off too long" are listed at <a href="https://electronics.stackexchange.com/questions/35835/what-can-be-the-cause-of-an-exceptionally-large-latency-for-the-uart-receive-int/44779#44779">"What can be the cause of an exceptionally large latency for the UART receive interrupt?"</a>.</p> <p>To fix this problem, you need to get the longest time an interrupt handler is ever turned off to be less than the time to transfer one character.
So you must either</p> <ul> <li>Reduce the amount of time interrupts are turned off; or</li> <li>Slow down the communication bit rate to increase the time to transfer one character;</li> <li>Or both.</li> </ul> <p>It's very tempting to write the interrupt routine such that, immediately after it puts a byte into the circular buffer, the same interrupt routine then checks to see if it's a complete packet and, if so, completely parse and handle it. Alas, parsing usually takes so long that any further bytes coming in the same or any other port are lost.</p> <p>We typically fix this by reducing each interrupt handler (and therefore the time interrupts are disabled while processing this handler) to the minimum possible to grab a byte and stick it in the circular buffer and return from interrupt. All the packet-parsing and packet-handling stuff executes with interrupts enabled.</p> <p>The simplest way is for the main loop (AKA the "background task") to periodically call a function that checks if there is a complete packet in the circular buffer, and if so parse and handle it. Other more complex approaches involve second-level interrupt handlers, <a href="http://www.nongnu.org/avr-libc/user-manual/group__avr__interrupts.html" rel="nofollow noreferrer">nested interrupts</a>, etc.</p> <p>However, even when the interrupt handler perfectly receives every byte and correctly puts it into the buffer, sometimes a system can still lose bytes from buffer overflow:</p> <h2>Bytes lost from buffer overflow</h2> <p>Many people write packet handlers that don't do anything until the handler sees a complete packet in the buffer -- then the handler processes the entire packet as a whole. 
That approach overflows the buffer and data is lost if any incoming packet is larger than you are expecting, too big to fit in the buffer -- <strong>single-packet overflow</strong>.</p> <p>Even if your buffer is plenty big enough to hold the largest possible packet, sometimes during the time you're processing <em>that</em> packet, the next packet is so large that it overflows the circular queue before your packet-handler gets around to removing the first packet from the queue to make room for future packets -- <strong>two-packet overflow</strong>.</p> <p>Is there some way your status debug system could detect and signal if a byte comes in and it has to be thrown away because there's no more room in the queue? The simple solution to both these problems is to increase the size of the buffer to hold at least two maximum-size packets, or somehow change the thing that's sending the packets to send smaller packets -- so two packets will fit in the space you have now.</p> <p>Sometimes incoming data fills the buffers faster than the packet handler can pull data out of the buffer. Increasing the size of the buffer only briefly delays the problem, and sending smaller packets (but more of them) will probably only make things worse. A real-time system <em>must</em> process incoming data at least as fast as that data can come in; otherwise the <strong>processor overload</strong> will only make the processor get further and further behind. Is there some way your status debug system could detect and signal this sort of overflow?</p> <p>If this overflow only happens rarely, perhaps the simplest "solution" (arguably merely a hack) is to handle it in more-or-less the same way you would handle a (hopefully rare) power glitch or loss-of-signal on a radio connection: when an overflow is detected, have the AVR erase the entire buffer and pretend it never received those bytes. 
Most modern protocols are carefully designed such that, if any packet is lost, the system will eventually re-sync and correctly handle future packets.</p> <p>To really fix this problem requires somehow making the "time to process a packet" less than "time from the end of one packet to the end of the next packet". So you must either</p> <ul> <li>Reduce the bit rate.</li> <li>Modify the sender to give the AVR more time to process a packet -- perhaps unconditionally send 50 additional "dummy bytes" in the packet preamble -- or however many are needed to give the AVR more than enough time to completely process the last packet and get ready for the next packet.</li> <li>Decrease the time to process a packet</li> <li>Or some combination.</li> </ul> <p>The wall-clock time to process a packet involves both the time the AVR spends in actually processing the packet, and also the time the AVR spends doing "other stuff" such as dealing with all the <em>other</em> I/O ports and interrupt handlers.</p> <p>Some methods of decreasing the time to actually process a packet are:</p> <ul> <li>Sometimes it's faster to copy the packet out of the queue into some other buffer for further processing, removing it from the circular queue. It makes the packet-handler simpler if the packet starts at the beginning of that other buffer, so key parts of the packet are a fixed constant offset from the beginning of that buffer. (This has the advantage of making it impossible for the serial interrupt handler, which only writes into the circular queue, to accidentally corrupt that packet after it's been copied to that other buffer.)
(This approach lets you use tested and "optimized" and known-good functions that handle numbers represented as <a href="http://en.wikipedia.org/wiki/ASCII" rel="nofollow noreferrer">ASCII</a> strings of hex digits or decimal digits in consecutive order, which may run faster operating on that linear buffer than "equivalent" functions that also have to deal with the wrap-around split of a circular buffer). This requires both the queue and the other buffer to each be at least the size of the maximum possible packet.</li> <li>Sometimes it's faster to leave the packet in the queue while parsing it and remove it from the queue only after the packet handler is completely done with it.</li> <li>Sometimes a pair of "ping-pong" buffers is faster than a circular queue.</li> <li>Many systems use only a single buffer large enough for the largest possible valid packet, and completely disable interrupts from that port until the interrupt handler has finished with the last packet and is ready for the next packet.</li> <li>Somehow actually do less work per packet.</li> </ul> <p>More general approaches to dealing with situations where "other stuff" is eating so much time that there's not enough time to deal with the packet in the buffer (and may also help reduce the time to actually process the packet):</p> <ul> <li>If you're lucky, you can find some algorithmic tweaks to effectively do the same work in fewer cycles.</li> <li>Load-shedding: do less important stuff less often; or perhaps don't do them at all in times of heavy load. 
(As implemented in the <a href="http://en.wikipedia.org/wiki/Apollo_Guidance_Computer#PGNCS_trouble" rel="nofollow noreferrer">Apollo 11 AGC</a>).</li> <li>yield() more often: if your main loop does fair round-robin cycling between "if we have a complete packet from port 1, handle it" and "if we have a complete packet from port 2, handle it", and the packet parser for either one takes so long that the buffer for the <em>other</em> port overflows, it may help to break the packet parser up into shorter pieces and do only a little processing each time through the main loop, giving the <em>other</em> packet parser a chance to deal with packets before its buffer overflows. Perhaps even consider switching to a pre-emptive task scheduler or a full-up RTOS.</li> <li>yield() less often: sometimes a processor spends more time in "task switching" or "multitasking overhead" than actually doing anything productive.</li> <li>Reduce the time spent processing interrupt handlers. (<a href="http://www.cs.utah.edu/~regehr/papers/lctes05/regehr-lctes05.pdf" rel="nofollow noreferrer">Preventing Interrupt Overload</a> ). High-frequency pulses on one interrupt line can pause main loop tasks indefinitely. It may be necessary to program each low-priority interrupt handler to recognize high-load situations and completely disable its own particular interrupt before re-enabling global interrupts at its return-from-interrupt instruction, and have the main loop recognize low-load situations and re-enable those interrupts.</li> <li>Increase the clock rate of the processor.</li> <li>Switch to a more powerful processor. A "more powerful" processor is <em>not</em> the same as "a processor with a faster clock rate". 
You might be able to find a processor that does the same work in <em>fewer</em> cycles than your particular 8-bit AVR -- perhaps some other AVR with specialized peripherals that do work in hardware that you're currently doing in software, freeing up software cycles for other things; or perhaps a 32-bit processor that can do certain tasks in software in fewer cycles than any 8-bit processor.</li> <li>Occasionally in robotics "one big fast processor" is better than lots of little processors: the amount of code and the communication latency to copy a message from one task to another task on the same processor is always much less than to copy that same message from one task to another task on some other processor.</li> <li>Often in robotics it makes things simpler and more "provably correct" to add more processors, perhaps one processor per leg or even <a href="http://openservo.com" rel="nofollow noreferrer">one processor per servo</a>. If you have one task per processor, then many of the above potential problems can't ever happen.</li> </ul>
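<p>The circular-queue-with-overflow-detection idea above can be sketched as follows (illustrative Python rather than AVR C, with the "ISR side" and "main-loop side" shown as methods; on a real AVR the overflow counter would feed your status-debug LED):</p>

```python
# Fixed-size circular queue whose push side detects and counts overflow
# instead of silently wrapping, so lost bytes become observable.

class CircularQueue:
    def __init__(self, size):
        self.buf = [0] * size
        self.head = 0          # next write position ("ISR" side)
        self.tail = 0          # next read position (main-loop side)
        self.count = 0
        self.overflows = 0     # bytes dropped because the queue was full

    def push(self, byte):
        """Called from the interrupt handler: drop the byte but record
        the loss when the queue is full."""
        if self.count == len(self.buf):
            self.overflows += 1
            return False
        self.buf[self.head] = byte
        self.head = (self.head + 1) % len(self.buf)
        self.count += 1
        return True

    def pop(self):
        """Called from the main loop / packet parser; None when empty."""
        if self.count == 0:
            return None
        byte = self.buf[self.tail]
        self.tail = (self.tail + 1) % len(self.buf)
        self.count -= 1
        return byte
```

<p>In the C version on the AVR, the push would run with interrupts disabled (inside the ISR) and the pop from the background task, so the head/tail updates never race.</p>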
99
2012-10-24T15:07:52.240
|microcontroller|avr|interrupts|
<p>I have a number of interrupt service routines on an <a href="https://en.wikipedia.org/wiki/Atmel_AVR" rel="nofollow">AVR</a>. These include interrupts for <a href="https://en.wikipedia.org/wiki/Universal_asynchronous_receiver/transmitter#Synchronous_transmission" rel="nofollow">USART</a> serial communication, timers, and <a href="http://en.wikipedia.org/wiki/Serial_Peripheral_Interface_Bus" rel="nofollow">SPI</a> communication.</p> <p>For all of these, I use circular queues (using a start and end pointer, without boundary checking).</p> <p>Some problems start to occur if the AVR starts to get overloaded. The circular queues will lose chunks of data. To solve this, I can reduce the load on the AVR (for example, by skipping some cycles in the timer). However, this is a manual process, where it is reduced if the AVR appears to have problems. This is partly because I do want relatively consistent timer periods.</p> <p>However, even at 70% average processor load, the queues can fill up randomly by chance.</p> <p>In case of spurious overloading at times, how can I make this more adaptive to avoid queue overflows?</p>
How to manage interrupts on an AVR?
<p>Maybe too expensive for a home project, but the MIL-DTL-38999 series connector (available from <a href="http://www.glenair.com/interconnects/mildtl38999/" rel="nofollow">Glenair</a> and many other suppliers) is the connector of choice for military equipment.</p> <p>OK, it fails your "they are not simple plug-in connectors" requirement, but they are ideal for disconnect/reconnect in all kinds of environments.</p> <p>And no, I don't work for Glenair :)</p>
100
2012-10-24T15:17:21.853
|wiring|
<p>If you have used connectors for signal wiring for any length of time, you may find that they are unreliable.</p> <p>Specifically, I find these to be unreliable when used for a long time, with a number of disconnections and re-connections:</p> <p><img src="https://i.stack.imgur.com/RDJQV.jpg" alt="enter image description here"></p> <p>This is due to the loss of springy-ness of the crimped metal end on the wire, which causes contact problems.</p> <p>Which connectors (with rapid connection time) are reliable for multiple re-connections for simple signal wiring?</p> <p>This excludes screw terminals and connectors with screws (eg. D-subminiature connectors), because they are not simple plug-in connectors.</p>
What connectors are most reliable?
<p>If you are not near the robot but require an e-stop, you'll want to consider a watchdog circuit watching a remote signal. The circuit keeps an eye out for a pulsing signal and, if the signal stops pulsing, activates the e-stop circuitry. You want to have this as close to a physical circuit as you can, which means the pulse can't be interpreted by a computer nor a microcontroller, because either of those are likely to have failure states that will prevent them from adequately actuating the e-stop.</p> <p>As many of the other answers are stating, the ideal e-stop is one in which any failure mode will cause an emergency stop, which means that the laws of physics are doing most of the work of the e-stopping, rather than something resembling software. Of course, trying to interpret that stop correctly depends greatly on what the robot is meant to do and the consequences of it stopping that action, but failure of an e-stop system should not ensure that your robot keeps running.</p>
112
2012-10-24T22:08:20.767
|mobile-robot|errors|
<p>Emergency stops are obviously a good idea on most robots, how should they be wired? What systems should be killed immediately, and what should stay working?</p>
How should emergency stops be wired?
<p>This subject is covered quite nicely in the <a href="http://www.probabilistic-robotics.org/">Probabilistic Robotics</a> book by Thrun et al. I don't have a direct reference, but some of his papers (such as <a href="http://www.cs.cmu.edu/~thrun/papers/thrun.robust-mcl.ps.gz">Robust Monte Carlo Localization for Mobile Robots</a>, <a href="http://www.cs.cmu.edu/~thrun/papers/thrun.robust-mcl.pdf">pdf</a>) essentially include the same information. Usually what is used is a mixed error model, where the probability density function consists of several parts:</p> <ul> <li>A Gaussian error around the true distance reading</li> <li>A part which accounts for false positives like dynamic obstacles and so on. This is larger with smaller distances.</li> <li>A constant part which accounts for false negative readings, where the sensor gives an out-of-range reading.</li> </ul> <p>The model needs to be fitted to your sensor and application.</p>
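<p>A minimal sketch of such a mixed model follows. The weights and parameters here are placeholders; as noted above, they must be fitted to the actual sensor:</p>

```python
import math

# Unnormalized mixture likelihood p(z | z_true) for one range reading:
# a Gaussian around the true range, a short-reading term for dynamic
# obstacles, a uniform random term, and a point mass at max range for
# false negatives. All weights are illustrative.

def beam_model(z, z_true, z_max, sigma=0.1, lam=1.0,
               w_hit=0.7, w_short=0.1, w_max=0.1, w_rand=0.1):
    p = 0.0
    if 0 <= z <= z_max:
        # Gaussian error around the true distance
        p += w_hit * math.exp(-0.5 * ((z - z_true) / sigma) ** 2) \
             / (sigma * math.sqrt(2 * math.pi))
        # false positives (dynamic obstacles): larger at short range
        if z < z_true:
            p += w_short * lam * math.exp(-lam * z)
        # uniform random measurement noise
        p += w_rand / z_max
    # false negatives: spike at the out-of-range reading
    if abs(z - z_max) < 1e-9:
        p += w_max
    return p
```

<p>Evaluating this for each beam against the ranges predicted by the map gives the measurement weight used in a particle filter update.</p>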
113
2012-10-24T22:10:07.020
|sensors|noise|
<p>Range sensors (for example sonar, infrared, and lidar) are notoriously noisy. How can I characterize the noise characteristics to include these in a probabilistic localization sensor model?</p>
How to model the noise in a range sensor's return?
<p>In general there is a difference between 4- and 8-neighborhoods in many algorithms which are <strong>graph based</strong> (such as A*, MRF (Markov random field), edge detection algorithms, etc.).</p> <p>The choice is a trade-off between computational complexity, time, and performance.</p> <p>Some examples:</p> <ul> <li>in the MRF algorithm you can choose a 4-neighborhood for each pixel (left, right, up, down)</li> <li>or you can choose an 8-neighborhood for each pixel (left, right, up, down, top-right, top-left, bottom-right, bottom-left)</li> <li>or you can again choose an 8-neighborhood for each pixel, where 4 of the neighbors come from the current frame and 4 from the corresponding pixels of the previous frame</li> </ul> <p><a href="https://i.stack.imgur.com/JJiWF.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JJiWF.jpg" alt="enter image description here"></a></p>
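<p>A small sketch of the trade-off (plain breadth-first search on an empty grid, standing in for any graph-based planner): 8-connectivity allows diagonal moves, so paths take fewer steps, at the cost of expanding twice as many successors per node.</p>

```python
from collections import deque

# Neighbor offsets for 4- and 8-connectivity on a grid.
N4 = [(1, 0), (-1, 0), (0, 1), (0, -1)]
N8 = N4 + [(1, 1), (1, -1), (-1, 1), (-1, -1)]

def bfs_path_length(start, goal, size, moves):
    """Number of moves from start to goal on an empty size x size grid,
    using breadth-first search with the given neighbor set."""
    dist = {start: 0}
    q = deque([start])
    while q:
        x, y = q.popleft()
        if (x, y) == goal:
            return dist[(x, y)]
        for dx, dy in moves:
            nxt = (x + dx, y + dy)
            if 0 <= nxt[0] < size and 0 <= nxt[1] < size and nxt not in dist:
                dist[nxt] = dist[(x, y)] + 1
                q.append(nxt)
    return None
```

<p>Crossing a 5x5 grid corner to corner takes 8 moves with N4 but only 4 with N8; note that with 8-connectivity the diagonal step should usually cost sqrt(2) in a weighted planner like A*, otherwise paths are distorted.</p>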
117
2012-10-25T02:01:30.993
|motion-planning|artificial-intelligence|planning|
<p>In graph-based planning (say, A*), states are connected to their neighbors. How should one decide whether to connect to the 4 neighbors or the 8 neighbors? What are the pros and cons of each approach?</p>
What is the difference between 4-point and 8-point connectivity in graph based planning?
<p>The reason why you can't see the nxtOSEK logo, is that the enhanced firmware looks the same as the regular firmware, but with several performance and feature enhancements. It isn't specific to nxtOSEK.</p> <p>To check if the firmware installed properly, visit the <strong>Settings</strong> menu on the NXT, and then the <strong>NXT Version</strong> submenu. The top line should read: <code>FW NBC/NXC x.xx</code>, where <code>x.xx</code> is the version number of the firmware.</p>
118
2012-10-25T02:41:22.373
|mindstorms|nxt|
<p>I am attempting to upload a custom firmware to a Lego Mindstorms NXT and am having issues.</p> <p>First of all, I'm attempting to use <a href="http://lejos-osek.sourceforge.net/">nxtOSEK</a>, which would let me run C++ programs on it. The problem is, everytime I put it into firmware update mode, the download doesn't seem to actually occur.</p> <p>What I mean by this is that, according to the output in my terminal (both Mac and Windows), the download was successful, however when the NXT reboots, I still see the normal logo (not nxtOSEK).</p> <p>So, what I'm doing is first holding down the <code>Reset</code> button for a few seconds, then hitting the orange button, giving me that tic-tic-tic sound. Then I run the firmware update (either using the Windows NextTool or Mac OSX GUI NextTool) and attempt the download. I get a success message, yet the robot is still using the old firmware.</p> <p>What could be the cause of this problem and how can I solve it?</p>
Properly flashing the firmware on a Lego Mindstorms NXT
<p>Just to add a little to <a href="https://robotics.stackexchange.com/a/120/37">ronalchn's excellent answer</a>:</p> <p>Changing the weight distribution in the robot in a way that doesn't change the position of the centre of mass will affect the <a href="http://en.wikipedia.org/wiki/Moment_of_inertia" rel="nofollow noreferrer">Moment of Inertia</a>, which in turn will affect the robot's ability to accelerate its rate of rotation.</p> <p>For most robots that trundle along, this is unlikely to be a problem. However, I could imagine that for an <em>extremely</em> dynamic robot, that travels very quickly and takes tight turns, being able to rotate its body rapidly might be an advantage, and might even be able to save itself from tipping in some extreme cases.</p> <p>Therefore concentrating the mass near the centre would be best for this type of robot.</p>
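<p>A toy calculation of the effect (point-mass approximation, made-up numbers): the same total mass placed near the center versus out over the wheels gives very different moments of inertia about the vertical axis, and hence different rotational acceleration for the same torque (alpha = torque / I).</p>

```python
# I = sum(m * r^2) for point masses at radius r from the spin axis.

def moment_of_inertia(point_masses):
    return sum(m * r ** 2 for m, r in point_masses)

central = moment_of_inertia([(10.0, 0.05)])      # 10 kg near the center
spread = moment_of_inertia([(2.5, 0.3)] * 4)     # 10 kg over four wheels
```

<p>Here the spread-out layout has 36 times the moment of inertia of the centered one, so the centered robot can change its rotation rate far more quickly.</p>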
119
2012-10-25T04:45:29.487
|mobile-robot|design|stability|wheeled-robot|
<p>When designing a standard 4 or 6 wheel robot, is it better to have the weight distributed primarily in the center of the robot, or over the wheels, or is there no difference?</p> <p>Specifically, which weight distribution will make the robot less likely to tip over?</p>
Is it better to have weight distributed over the wheels or the center of the robot?
<p>One good solution is to do BOTH. A design I use places "hard" real-time functions, such as tight servo control loops, on a small microcontroller. Then there is another, larger CPU running Linux and ROS. I let ROS handle the higher-level tasks and the uP handle things like controlling a stepper motor or running a PID loop.</p> <p>As others have said above, you CAN allow a non-real-time OS to run 1 kHz control loops, but for this to work you need a grossly over-sized computer that spends most of its time in an idle loop. If you run the Linux/ROS computer at near 100% CPU utilization, real-time performance is poor. Using a two-tier design allows you to always get very good RT performance and also use a smaller, less power-hungry computer (possibly a Pi 2) for the higher-level tasks. My uP currently does not run any OS, but I'm moving to FreeRTOS.</p>
128
2012-10-25T09:49:05.213
|software|platform|real-time|
<p><strong>Edit:</strong> I don't know why, but this question seems to be confusing many people. I am aware of when/where/why/how to use real-time. I am interested in knowing whether people who have a real-time task would actually care enough to implement it in real-time or not.</p> <p>There's no need to mention why real-time operations are important for a robot. My question is however, how much is it actually used in robotics?</p> <p>Take <a href="https://robotics.stackexchange.com/q/6/158">this question</a> for example. Only one answer mentions any platform with real-time capabilities, and it is far from the top too. ROS apparently, being a very popular platform which is not real-time.</p> <p>In the real-time world however, RTAI<sup>1</sup> seems to be the only workable <em>free</em> real-time platform of use. It is however, limited to Linux (no problem), badly documented and slowly developed.</p> <p><del>So, how much is real-time behavior sought after among robotics developers?</del> The question is, how much are developers inclined to write real-time applications when real-time behavior is actually needed? If not much, why?</p> <p>For example, reflexive behavior based on tactile data, cannot go through ROS because it would lose its real-time property. But do people really come up with a real-time solution or use ROS anyway, ignoring the real-time property?</p> <p><sup>1</sup> or similarly Xenomai</p>
How mature is real-time programming in robotics?
<p>It's been a while since I've used the AVR, but there were two libraries that would do what you are looking for. Unfortunately, Atmel doesn't provide proper firmware libraries itself (as e.g. STM does).</p> <ul> <li><p><a href="http://www.procyonengineering.com/embedded/avr/avrlib/" rel="nofollow">Procyon AVRlib</a> is quite outdated, but also very lightweight. You can just use the parts that you are interested in.</p></li> <li><p><a href="http://www.nongnu.org/avr-libc/" rel="nofollow">AVR Libc</a> is another library which is slightly more recent and in my experience quite comprehensive. I think it is actually bundled with a lot of toolchains.</p></li> </ul>
131
2012-10-25T14:06:19.100
|software|microcontroller|i2c|
<p>I have an ATmega16 MCU, which is the master on the I2C bus, and an ATmega8 MCU, which is the slave.</p> <p>I have connected the SDA and SCL pins of the two MCUs to each other, along with pull-up resistors.</p> <p>Now I want to read a register from the ATmega8 using the ATmega16. </p> <p>The problem is that I don't want to assign all the variables manually. Are there any libs or headers that will do this for me?</p>
Connecting two microcontrollers using I2C
<p>Kiva uses a multi-agent system to control its fleet of robots and movable shells. Their website provides a <a href="http://www.kivasystems.com/solutions/software/" rel="noreferrer">high-level overview of their system architecture</a> and there's also a <a href="http://www.slideshare.net/devbp/kiva-system" rel="noreferrer">presentation</a> online that goes into more detail. The A* algorithm is used for wayfinding and a heuristic algorithm decides on the order of things to pick up (travelling salesman problem) and which shelves to move where (more frequently used shelves are put nearer the packing stations to reduce travel time).</p> <p>According to <a href="https://www.aaai.org/ojs/index.php/aimagazine/article/viewArticle/2082" rel="noreferrer">this paper</a>, the whole system is implemented in Java and MySQL.</p> <p>I couldn't find any direct quote about the sensors in their mobile platforms, but from the picture I'd be guessing a number of sonars and IRs</p> <p><img src="https://i.stack.imgur.com/MQhwO.png" alt="Kiva Systems Robot"></p>
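To illustrate the wayfinding part: Kiva's actual implementation is Java/MySQL as noted above, but the core idea of A* on a warehouse grid can be sketched in a few lines of Python. The grid encoding, cost function, and Manhattan heuristic here are my own toy choices, not Kiva's:

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected grid; 0 = free cell, 1 = obstacle.

    Returns the list of cells from start to goal, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    # Manhattan distance: admissible for 4-connected unit-cost moves.
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]   # (f, g, node, path)
    seen = set()
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None  # no route around the obstacles
```

In a real warehouse system the edge costs would also encode congestion and reserved cells, and the planner would be re-run as shelves move.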
142
2012-10-25T21:02:12.810
|mobile-robot|industrial-robot|
<p>What kind of sensors and algorithms are the mobile robots of <a href="http://www.kivasystems.com/">Kiva Systems</a> equipped with? </p>
Technology behind Kiva Systems mobile robots
<p>You do not want to use stainless steel with linear bearings because that type of steel is not hard enough and the bearing will wear grooves into the rod. You want high carbon steel with hardness greater than HRC 60.</p>
146
2012-10-26T00:30:03.880
|reprap|3d-printing|linear-bearing|
<p>For the 3D printer <a href="http://www.reprap.org/wiki/Prusa_Mendel_Assembly">RepRap Prusa</a> there are several rails (smooth rods) that guide the printer head on the different axes. The printer head uses several linear bearings to glide along the rails. </p> <p>There isn't any specification on what kind of material would be best suited for use with the linear bearings. My first assumption would be stainless steel, because it won't corrode (rust) on the surface, but I'm not sure if this is true for all printers (whether they are 3D printers or not), as a different material may allow the linear bearings to glide more easily. Aluminum would have been my second choice, but I have the same reservations, along with uncertainty about which grade would be least prone to wear. </p> <p>This <a href="http://www.lm76.com/linear_shaft_selector.htm">resource</a> has some limited information but does not help with which would be best suited for this particular application.</p> <p>What material is best suited for this purpose?</p>
What rail material is best used for linear bearings?
<p>You should read the paper published by Microsoft Research on the actual algorithm behind the human motion tracking.</p> <p>Real-Time Human Pose Recognition in Parts from a Single Depth Image, Shotton et al., <a href="http://research.microsoft.com/apps/pubs/default.aspx?id=145347">http://research.microsoft.com/apps/pubs/default.aspx?id=145347</a></p> <p>It relies on large labeled training data from the human body. That is why the Nao cannot just be tracked with the same method out of the box. To achieve that, you would need to re-train the algorithm with labeled data from the Nao in different poses.</p>
148
2012-10-26T04:48:02.503
|kinect|
<p>I am not sure if this has been tried before but I am trying to use <a href="http://en.wikipedia.org/wiki/Kinect" rel="nofollow noreferrer">Kinect</a> and detect gestures made by the <a href="http://en.wikipedia.org/wiki/Nao_%28robot%29" rel="nofollow noreferrer">Nao robot</a>.</p> <p>I have made a Kinect application, <a href="http://youtu.be/v8SumS-I1qo" rel="nofollow noreferrer">a gesture based picture viewer</a> and it detects humans fine(Obviously it does!) What I wanted to try was (lazy as I am), to see if I could use some (say, voice) command to tell the Nao to do a Swipe Right gesture and have my application identify that gesture. The Nao can easily identify my command and do some gesture. The problem however is, when I put the Nao in front of the Kinect sensor, the Kinect does not track it. </p> <p>What I want to know is, are there some basics behind Kinect's human body motion tracking that essentially fails when a robot is placed in front of it instead of a human?</p> <p>PS: I have kept the Nao at the right distance from the sensor. I have also checked if the entire robot is in the field of view of the sensor.</p> <p>EDIT: This has been posted <a href="https://stackoverflow.com/q/13068945/1473556">on stackoverflow</a> and <a href="http://social.msdn.microsoft.com/Forums/en-US/kinectsdk/thread/723a1072-f637-466f-a721-196984e7763a/#723a1072-f637-466f-a721-196984e7763a" rel="nofollow noreferrer">on msdn</a> by me so as to target a large audience as this problem has not been encountered by anyone in the past.</p>
Detect Nao robot in Kinect
<p>I wanted to create a submerged water pump as part of a school science project. I had a really hard time making a normal DC motor and its shaft waterproof; I failed miserably. Later I came up with an idea. I removed the shaft and magnets from the motor, removed the coil, and attached magnets to the shaft. I then took a small piece of PVC pipe, wound a coil around it, and kept the shaft inside the coiled PVC pipe. Now when you pass AC current through the coil, the shaft rotates. So you no longer have to waterproof the rotating shaft, only the holes made in the wall for the wires, which is very easy.</p> <p>Now the coils are waterproof, and the shaft carries no coil or electric circuit that could short out: just a magnetic field under water.</p>
150
2012-10-26T07:23:58.103
|motor|underwater|auv|protection|
<p>Whenever building an aquatic bot, we always have to take care to prevent leakages, for obvious reasons. Now, holes for wires can be made watertight easily--but what about motors? We can easily seal the casing in place (and fill in any holes in the casing), but the part where the axle meets the casing is still left unprotected. </p> <p><img src="https://i.stack.imgur.com/xahVT.jpg" alt="enter image description here"></p> <p>Water leaking into the motor is still quite bad. I doubt there's any way to seal up this area properly, since any solid seal will not let the axle move, and any liquid seal (or something like grease) will rub off eventually.</p> <p>I was thinking of putting a second casing around the motor, maybe with a custom rubber orifice for the shaft. Something like (forgive the bad drawing, not used to GIMP):</p> <p><img src="https://i.stack.imgur.com/z0E4K.png" alt="enter image description here"></p> <p>This would probably stop leakage, but would reduce the torque/rpm significantly via friction.</p> <p>So, how does one prevent water from leaking into a motor without significantly affecting the motor's performance?</p> <p>(To clarify, I don't want to buy a special underwater motor, I'd prefer a way to make my own motors watertight)</p>
Preventing leaks in motor shafts for underwater bots
<p>It would be nice if we could tell the compiler the range and precision of each fixed-point input variable (perhaps no two having the radix point in the same location), and it would automagically -- at compile time -- use the correct range and precision and rescaling operations for the intermediate values and final values in a series of calculations. I've heard rumors that it may be possible to do that in <a href="http://en.wikibooks.org/wiki/Ada_Programming/Types/delta" rel="nofollow noreferrer">the Ada programming language</a> or in C++ templates.</p> <p>Alas, the closest I've seen is fixed-point arithmetic libraries that require you, the programmer, to manually choose the correct representation and manually verify that each operation maintains adequate range and precision. Sometimes they make multiplication and division between mixed variables easier. Such as:</p> <ul> <li><a href="http://sourceforge.net/projects/avrfix/" rel="nofollow noreferrer">AVRfix</a>: a library for fixed point calculation in s15.16, s7.24 and s7.8 format, entirely written in ANSI C</li> <li><a href="http://en.wikibooks.org/wiki/Embedded_Systems/Floating_Point_Unit" rel="nofollow noreferrer">Embedded Systems: fixed point FFT</a> lists some libraries for fixed-point FFT calculation</li> <li><a href="http://www.microchip.com/stellent/idcplg?IdcService=SS_GET_PAGE&amp;nodeId=1824&amp;appnote=en010962" rel="nofollow noreferrer">AN617: fixed point routines for the Microchip PICmicro</a></li> <li><a href="https://sourceforge.net/directory/?q=%22fixed%20point%22" rel="nofollow noreferrer">"fixed point"</a> projects on SourceForge.</li> <li>gcc has built-in fixed-point libraries <a href="https://gcc.gnu.org/onlinedocs/gcc/Fixed-Point.html" rel="nofollow noreferrer">a</a> <a href="http://gcc.gnu.org/wiki/FixedPointArithmetic" rel="nofollow noreferrer">b</a></li> <li><a href="http://www.ti.com/tool/SPRC087" rel="nofollow noreferrer">TI IQMath Library</a> ( and <a 
href="http://www.ti.com/tool/sprc542" rel="nofollow noreferrer">source</a> -- Thank you, <a href="https://robotics.stackexchange.com/a/163/37">embedded.kyle</a> ).</li> </ul>
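For illustration, here is roughly the bookkeeping such libraries do under the hood, sketched in Python with a Q16.16 format (my own choice of format for the example; Python's arbitrary-precision integers stand in for the `int32_t`/`int64_t` intermediates you would use in C on the microcontroller):

```python
# Q16.16 fixed point: the stored integer represents value * 2**16.
Q = 16  # number of fractional bits

def to_fix(x):
    """Convert a float to Q16.16 (rounding to the nearest raw integer)."""
    return int(round(x * (1 << Q)))

def to_float(r):
    """Convert a Q16.16 raw integer back to a float, for inspection."""
    return r / (1 << Q)

def fix_mul(a, b):
    # The product of two Q16.16 numbers has 32 fractional bits;
    # shift right by Q to return to Q16.16.
    return (a * b) >> Q

def fix_div(a, b):
    # Pre-shift the dividend so the quotient keeps Q fractional bits.
    return (a << Q) // b

a, b = to_fix(3.5), to_fix(0.25)
print(to_float(fix_mul(a, b)))  # 0.875
```

The rescaling after every multiply and before every divide is exactly the manual step the question is trying to automate; a library (or C++ template / Ada fixed type) just makes the radix point part of the variable's type so the compiler inserts these shifts for you.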
154
2012-10-26T09:45:37.170
|microcontroller|c|
<p>Often we use microcontrollers to do things in our robots, but need to make some calculations in decimal. Using floating point variables is <strong>very</strong> slow, because a software floating point library is automatically included (unless you have a high-end microcontroller). Therefore, we generally use fixed point arithmetic.</p> <p>Whenever I do this, I just use an integer, and remember where the decimal place is. However, it does take some care to ensure that everything is consistent, especially when calculations involve variables where the decimal point is in a different place.</p> <p>I have implemented a fixed point atan2 function, but because I was trying to squeeze every last drop of limited precision (16 bits), I would often change the definition of where the decimal point is, and it would change as I tweaked it. In addition, I would have some constants, as a quasi look-up table, which themselves have an implied decimal point somewhere.</p> <p>I want to know if there is a better way. Is there a library, or set of macros, that can simplify the use of fixed point variables, making multiplication and division between mixed variables easier, and allowing declaration of decimal numbers or constant expressions, but automatically converting to the desired fixed point representation at compile time?</p>
Fixed point arithmetic on microcontrollers
<p>I assume that you are looking for an IMU that provides you with an orientation estimate. The complete package is usually called an Attitude and Heading Reference System (AHRS). The most defining criterion is really your budget. Doing better than 3 degrees/s should be within reach though. </p> <ul> <li><p>We have been working with the <a href="http://www.xsens.com/en/general/mti" rel="noreferrer">XSens MTi</a> and had good enough results for navigation of ground vehicles. They have a new line out, which has improved on the accuracy quite a bit. </p></li> <li><p>Budget options are also available, <a href="https://www.sparkfun.com/products/11028" rel="noreferrer">this one</a> looks quite promising as it is a single chip solution. There is also an <a href="https://www.sparkfun.com/pages/accel_gyro_guide" rel="noreferrer">IMU buyers guide</a> at Sparkfun. </p></li> <li><p>Usually pitch and roll are fine with most IMUs for ground vehicles, as the gravity vector can be used to compensate the drift. Not so with the yaw axis, which is often a problem even when compensated with a magnetometer. For that reason we often use a single <a href="http://www.kvh.com/dsp3000comm" rel="noreferrer">fibre optic gyro</a> to minimize the heading drift.</p></li> </ul>
155
2012-10-26T09:58:11.633
|ros|imu|odometry|gyroscope|ugv|
<p>At our lab, we have a several "Kurt" type robots (about the size of a Pioneer, six wheels, differential drive). The built-in gyroscopes are by now really outdated; the main problem is that the gyroscopes have a large drift that increases as the gyro heats up (the error is up to 3°/s). We mainly use the IMU (inertial measurement unit) to get initial pose estimates that are later corrected by some localization algorithm, but even doing that the large initial pose error caused by the IMU is often annoying.</p> <p>We've temporarily used an Android phone (Galaxy S2) as a replacement IMU, and the results are so much better compared to the old IMUs. However, I don't like depending on a WiFi connection between IMU and the control computer (a laptop running ROS/Ubuntu), so we're looking to buy a new IMU.</p> <p>What IMU should we choose? What criteria are important to consider for our application?</p>
How to choose a good IMU for a wheeled robot?
<h1>If you want accurate PID coefficients:</h1> <ul> <li>Collect statistical data from your actuator (i.e. log the <kbd>input voltages</kbd> + <kbd>encoder pulses</kbd> at a constant rate (every <code>0.01 sec</code>) for <code>20 secs</code>, as a <code>3</code>x<code>2000</code> matrix)</li> <li>Get the transfer function (TF) of your actuator (using the <a href="https://www.mathworks.com/videos/introduction-to-system-identification-toolbox-68901.html" rel="nofollow noreferrer">Ident</a> toolbox in MATLAB).</li> <li>Finally, use the <a href="https://www.mathworks.com/videos/pid-control-design-with-control-system-toolbox-68748.html" rel="nofollow noreferrer">PIDTool</a> toolbox in MATLAB and load your transfer function.</li> </ul> <hr /> <h1>Or, in a quick and simple manner:</h1> <p>There is a quicker approach called <a href="https://en.wikipedia.org/wiki/Ziegler%E2%80%93Nichols_method" rel="nofollow noreferrer">Ziegler–Nichols</a>: <a href="https://i.stack.imgur.com/CF9m6.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CF9m6.jpg" alt="Ziegler–Nichols tuning table" /></a></p> <p>And this image demonstrates the <strong>effects of the PID parameters</strong>:<br /> <a href="https://i.stack.imgur.com/387Rz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/387Rz.png" alt="PID parameters effects" /></a></p>
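The classic Ziegler–Nichols table can be encoded directly once you have measured the ultimate gain Ku (the proportional gain at which the loop oscillates steadily) and the oscillation period Tu. A sketch in Python (the function name and rule encoding are my own):

```python
def ziegler_nichols(Ku, Tu, rule="pid"):
    """Classic Ziegler-Nichols gains from the ultimate gain Ku and
    the ultimate oscillation period Tu (seconds).

    Table: P   -> Kp = 0.50 Ku
           PI  -> Kp = 0.45 Ku, Ti = Tu / 1.2
           PID -> Kp = 0.60 Ku, Ti = Tu / 2, Td = Tu / 8
    Returned as the parallel-form gains (Kp, Ki, Kd) with
    Ki = Kp / Ti and Kd = Kp * Td.
    """
    table = {
        "p":   (0.50, None, None),
        "pi":  (0.45, 1.2,  None),
        "pid": (0.60, 2.0,  8.0),
    }
    kp_factor, ti_divisor, td_divisor = table[rule]
    Kp = kp_factor * Ku
    Ki = Kp / (Tu / ti_divisor) if ti_divisor else 0.0
    Kd = Kp * (Tu / td_divisor) if td_divisor else 0.0
    return Kp, Ki, Kd
```

For example, a loop that oscillates with Ku = 10 and Tu = 2 s would get Kp = 6, Ki = 6, Kd = 1.5 as a starting point; Ziegler–Nichols gains are deliberately aggressive, so expect to back them off.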
167
2012-10-26T19:11:31.040
|control|pid|tuning|
<p>Tuning controller gains can be difficult, what <strong>general</strong> strategies work well to get a stable system that converges to the right solution?</p>
What are good strategies for tuning PID loops?
<p>I know this is an old question but I will just add a bit to the currently existing answers. First, this is a very complex problem that everyone is trying to tackle, including Google with their <a href="https://www.google.com/atap/project-tango/" rel="noreferrer">Tango project</a>. In general, to localise indoors you either need to rely on your internal sensors, or get assistance from an indoor infrastructure deployed to help you locate yourself. </p> <ul> <li>Relying on onboard sensors: <ul> <li>Using sensors like LIDAR/lasers, cameras, RGBD sensors, IMUs</li> <li>Perform complex algorithmic sensor fusion to perform some sort of accurate iterative localisation. SLAM (Simultaneous Localisation and Map building) is commonly used. I previously developed a method called <a href="http://playerstage.sourceforge.net/doc/Player-svn/player/group__driver__mricp.html" rel="noreferrer">MRICP</a> (Map Reference Iterative Closest Point) to perform a simple, but error-prone, localisation. Plenty of literature to look at on that front, including the recent monocular and stereoscopic visual odometry, which is quite promising (check the VI sensor from <a href="http://skybotix.com/" rel="noreferrer">skybotix</a> or <a href="https://github.com/uzh-rpg/rpg_svo" rel="noreferrer">SVO</a>). </li> </ul></li> <li>Relying on infrastructure: <ul> <li>Beacons (Bluetooth, ultra-wideband, wireless ...)</li> <li>Mocap (motion capture cameras: Vicon, Visualeyez ...)</li> <li>Encoded positioning in light bulbs (Philips has recently been experimenting with this)</li> </ul></li> </ul> <p>In general it really depends on what accuracy you are trying to achieve. In mobile robotics, from my experience, you really need to focus on globally consistent maps, and locally accurate positioning.
This means you need to roughly know where you are from a high-level topological manner (this room is connected to the other room on the left, vs the next room on the left is 2.323m away), but locally you should have an accurate position estimation (lasers + IMUs can do this accurately).</p> <p>Hope this helps.</p>
172
2012-10-27T05:11:23.017
|localization|gps|sensors|slam|
<p>Using an IMU a robot can estimate its current position relative to its starting position, but this incurs error over time. GPS is especially useful for providing position information not biased by local error accumulation. But GPS cannot be used indoors, and even outdoors it can be spotty.</p> <p>So what are some methods or sensors that a robot can use to localize (relative to some frame of reference) without using GPS? </p>
Absolute positioning without GPS
<p>This isn't a published study, but many people use the <a href="http://en.wikipedia.org/wiki/Technology_readiness_level" rel="nofollow">Technology Readiness Level (TRL)</a> system to assess the state of technologies. </p> <p>Granted, it doesn't really tell you <em>when</em> a technology might reach a particular stage, but it can be useful in establishing a common framework for this sort of question. </p> <p>It also can be useful for prioritizing and comparing different subsystems. For example, in a robotic system, if you deem that the actuators are at TRL 6 but your power storage technology is only at TRL 1, you can see what is holding the whole system back and put more effort ($) there.</p>
184
2012-10-27T21:47:42.403
|research|
<p>Robotics has always been one of those engineering fields which has promised so much, but is taking a long time to deliver all the things that people imagine.</p> <p>When someone asks: "How long before we have [X] type of robots?" Are there any resources we can call upon to try to calculate a rough answer. These resources might include:</p> <ul> <li>Rate of progress of computational power, and some estimate of how much will be needed for various types of AI.</li> <li>Rate of progress of electrical energy storage density, and some estimate of how much will be needed for various types of robot.</li> <li>Rate of progress of actuation systems, and some estimate of what would be needed for various types of robot.</li> <li>Lists of milestones towards various types of robot, and which ones have been achieved and when.</li> </ul> <p>Are these types of studies performed, and are the results published?</p> <hr> <p>Added:</p> <p>In response to Jakob's comment, I am not looking for opinions or discussions on this subject. What I am looking for are published studies which might shed light on this question.</p>
Robotics Trends
<p>A co-worker and I once implemented a <a href="https://en.wikipedia.org/wiki/Simplex_algorithm" rel="nofollow noreferrer">simplex algorithm</a> for on-the-fly tuning of the PID parameters of a current control loop for a motor. Essentially the algorithm would modify one parameter at a time and then collect data on some feedback parameter that was our measure of goodness. Ours was percent deviation from a current target setpoint. Based on whether the feedback parameter got better or worse, the next parameter was modified accordingly.</p> <p>Or, in Wikipedia speak:</p> <blockquote> <p>Let a linear program be given by a canonical tableau. The simplex algorithm proceeds by performing successive pivot operations which each give an improved basic feasible solution; the choice of pivot element at each step is largely determined by the requirement that this pivot does improve the solution.</p> </blockquote> <p>Technically we used the <a href="https://en.wikipedia.org/wiki/Nelder%E2%80%93Mead_method" rel="nofollow noreferrer">Nelder-Mead method</a> which is a type of simplex. It could also be described as a <a href="https://en.wikipedia.org/wiki/Hill_climbing" rel="nofollow noreferrer">hill climbing algorithm</a> as well if you watch how it modifies its input parameters as it searches for an optimum output parameter.</p> <p><img src="https://i.stack.imgur.com/FdWKc.gif" alt="Nelder-Mead animation"></p> <p>Nelder-Mead worked best in our case because it can chase a setpoint. This was important because our current target setpoint changed as torque demand increased.</p> <blockquote> <p>the Nelder–Mead technique is a heuristic search method that can converge to non-stationary points</p> </blockquote>
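A toy version of the same idea, written as a plain hill climber rather than true Nelder–Mead, and run against a simulated first-order plant instead of a real motor (all of the plant, cost function, and step sizes here are my own illustrative choices, not the implementation described above):

```python
def step_cost(kp, ki, kd, setpoint=1.0, dt=0.01, steps=300):
    """Cost (integral of absolute error) of a PID step response
    on a toy first-order plant y' = -y + u."""
    y = integ = prev_err = 0.0
    cost = 0.0
    for _ in range(steps):
        err = setpoint - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integ + kd * deriv
        y += (-y + u) * dt          # Euler step of the plant
        cost += abs(err) * dt       # accumulate IAE
    return cost

def hill_climb(gains, step=0.5, iters=50):
    """Perturb one gain at a time; keep each change only if the
    measured cost improves, shrinking the step as we converge."""
    best = step_cost(*gains)
    for _ in range(iters):
        for i in range(3):
            for delta in (+step, -step):
                trial = list(gains)
                trial[i] = max(0.0, trial[i] + delta)  # gains stay non-negative
                c = step_cost(*trial)
                if c < best:
                    best, gains = c, trial
        step *= 0.7                 # contract the search, simplex-style
    return gains, best
```

On a real robot the "cost" would be measured from the running hardware (e.g. percent deviation from the setpoint over a window), which is why the accept/reject structure matters: a destabilizing gain change is simply measured as worse and rolled back.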
210
2012-10-28T19:03:57.037
|control|pid|automatic|tuning|
<p>I have a simple servo system that uses a PID controller implemented in an MCU to perform the feedback. However, the properties of the system change dynamically, and so the PID parameters can never be tuned for all circumstances.</p> <p>My robot is a light weight arm with back-drivable electric motors, similar to this one:</p> <p><img src="https://i.stack.imgur.com/sp7VH.jpg" alt="Lightweight robot arm"></p> <p>The arm performs several tasks, including picking up heavy weights, pushing and pulling objects across the desk. Each of these tasks requires different PID tuning parameters which I cannot easily predict.</p> <p>What I would really like is for some higher level function which can carefully adjust the parameters in response to the arm's behaviour. For example, if it notices that the arm is oscillating, it might reduce P and increase D. Or if it noticed that the arm wasn't reaching its target, it might increase I.</p> <p>Do such algorithms exist? I would be happy even if the algorithm didn't perfect the parameters immediately. E.G. the arm could oscillate a few times before the parameters were adjusted to their new values. </p>
How can I automatically adjust PID parameters on the fly?
<p>Many recent Android devices, such as the Nexus 7, support USB On The Go (OTG) which is a fancy way of saying that 'USB Host Mode' is supported. In these cases, with the right cable which can be had for less than $1, any compatible USB device can be used. You don't need a specialty version of the Arduino that has hardware for USB Host mode.</p> <p>I've experimented with this enough to know that a 'normal' Arduino such as the Leonardo or Teensy 2 appears to an Android phone as a TTY (e.g. serial port) device when connected with an OTG cable.</p> <p><a href="http://android.serverbox.ch/?p=549" rel="nofollow">This article</a> goes into depth on how to use the Android USB Host API to communicate to the Arduino from an Android app. It requires Android 3.1+ and does not require you to root the device.</p>
213
2012-10-29T05:30:30.627
|actuator|
<p>Smart phones these days typically come with a gyroscope, accelerometer, compass, camera, and GPS sensor all on board. They also usually have a connection to the internet with Wifi and mobile data networks. I've seen many cases of using a phone as a remote control for a robot, but to me, it seems like the phone itself is a perfect lightweight computing and sensing platform for an autonomous robot.</p> <p>The main obstacle I see is interfacing with actuators. Being able to control motors to steer even a table-top robot, or control servos, for example. Connecting and communicating to a microcontroller could be an obstacle as well. </p> <p>As a robot hobbyist, I'd like to know how I can overcome these and other obstacles to be able to harness the power of my smart phone with my robotics projects.</p>
How can I integrate a smart phone with my robotics project?
<p>It sounds like your enthusiasm is in the right place, but I think you're trying to make intelligent design decisions without knowing what you're designing. This is good! These are all things to be worried about when designing a robot, whether as big as your shoe or as big as your car. But they aren't what you should be worried about right now.</p> <p>In your situation, I'd choose a robotics kit that's based on an Arduino. That will give you a good starting place in terms of seeing how other people solve the problems of materials, motors, etc. The Arduino has a huge user base and is pretty simple to program and re-program. You can always add your own hardware and software to a kit, to make it do the things you want -- keep pushing the envelope. Also, get some electronic kits (non-robotic kits are perfectly fine); they will teach you a bit about electronics and circuits that would be less fun to learn from a book.</p> <p><strong>Make as many mistakes as you can</strong>. There are no right answers or silver bullets when it comes to building robots... It's an iterative process that comes with bursts of inspiration. If you run out of I/O ports on the Arduino, start looking for another microcontroller board that has more of them. If you find you need more user interaction (LCD, buttons, etc), get a board that supports that. </p> <p>Just don't try to solve all the problems before you take your first step.</p>
215
2012-10-29T08:02:34.200
|arduino|microcontroller|
<p>I'd like to start making robots and tinkering with microcontrollers. Where do I start, and what do I need?</p> <p>I'd like to make my own robots. I'm comfortable with programming (assembly and C) so I've got that part covered, but my electronics/circuits knowledge is a little weak. I have no idea what material to start with and which tools I need, nor how to put stuff together.</p> <p>For the microcontroller, I'm thinking about going with the Pololu Orangutan LV-168 or the Arduino Duemilanove, although I'm leaning more towards the Orangutan because of the built-in LCD and pushbuttons (which I would expect to use, especially for debugging and user interaction). Am I on the right track? It seems to me like the number of I/O ports is small, but is that the case in practice?</p>
Starting out advice on making robots and tinkering with microcontrollers
<p>This has been merged as of six weeks ago:</p> <p><a href="https://github.com/ros/ros_comm/commit/8a1f92fc0c888099fe47005f609883601b7d8d72#diff-1" rel="nofollow">https://github.com/ros/ros_comm/commit/8a1f92fc0c888099fe47005f609883601b7d8d72#diff-1</a></p>
229
2012-10-30T12:59:16.537
|ros|python|
<p>In ROS, I cannot get [ERROR] logs to print in red when I use python. How can I make them appear in red instead of black?</p> <p><br></p> <p>For example, the following Python:</p> <pre><code>rospy.logerr("No analog input received.") </code></pre> <p>produces this output in <strong>black</strong>:</p> <blockquote> <p>[ERROR] [WallTime: 135601422.876123] No analog input received.</p> </blockquote> <p><br></p> <p>whereas the following C++:</p> <pre><code>ROS_ERROR("No analog input received."); </code></pre> <p>produces the following output in <strong>red</strong>.</p> <blockquote> <p>[ERROR] [135601551.192412]: No analog input received.</p> </blockquote>
Red [ERROR] output in python in ROS
<p>As of 2020/2021, the answer is yes, ROS runs on Raspberry Pi.</p> <ul> <li><a href="https://roboticsbackend.com/install-ros2-on-raspberry-pi/" rel="nofollow noreferrer">https://roboticsbackend.com/install-ros2-on-raspberry-pi/</a></li> <li><a href="https://www.youtube.com/watch?v=AmuLiA840fA" rel="nofollow noreferrer">https://www.youtube.com/watch?v=AmuLiA840fA</a></li> </ul> <p>Both of these resources use Ubuntu; Raspberry Pi is a first-class target for Ubuntu now: <a href="https://ubuntu.com/raspberry-pi" rel="nofollow noreferrer">https://ubuntu.com/raspberry-pi</a></p>
230
2012-10-30T14:09:21.773
|ros|raspberry-pi|wifi|
<p>Can <a href="http://www.ros.org/wiki/">ROS</a> run on a Raspberry Pi?</p> <p>ROS is designed to run on a network of machines, with different machines, or even different cores on the same machine, doing different jobs. Can one of those machines be a Raspberry Pi?</p> <p>I am considering using an R-Pi as the <a href="http://en.wikipedia.org/wiki/EtherCAT">EtherCAT</a> master on a mobile robot, communicating with the main PC over WiFi, using a dongle.</p> <ul> <li>Can an R-Pi even run ROS at all?</li> <li>Would an R-Pi have enough processing power to do some 1kHz servoing?</li> <li>Would it be possible to run some servoing on the host through the WiFi connection?</li> </ul>
Can ROS run on a Raspberry Pi?
<blockquote> <p>At 1st glance:</p> <p>115200 == 115.2 bits per millisecond == ~12.8 bytes per millisecond (assuming 1 stop bit)</p> <p>Is that a valid way to calculate timing for serial transmissions?</p> </blockquote> <p>Very simplistically, this is OK - but don't forget any start and parity bits also, and don't forget any protocol overhead along the bluetooth links</p> <blockquote> <p>Also, given my specific setup:</p> <p>PC Program &lt;--> Bluetooth Serial Profile Driver &lt;--> Bluetooth Transceiver &lt;-*-> BlueSMIRF Wireless Modem &lt;--> Parallax Propellor Program</p> </blockquote> <p>I think it is reasonable to assume that the PC through to the Modem are capable of handling the traffic at 115200 bps, so you can eliminate these from the equation.</p> <p>However, having multiple hops like this does prevent use of flow-control signals, without introducing response messages... which would slow down response time.</p> <p>Now, take the worst case, of no protocol overhead, 115200 bps means your Parallax will be receiving a byte every 69us - adding in start, stop or parity bits will slow this rate down a little bit, but assuming worst case gives you some leeway.</p> <p>This means that your controller has to handle receiving a byte every 69us, as well as doing its normal stuff (calculations etc).</p> <p>But in reality, you will be sending a message string of (n) bytes, which need to be buffered, and processed as a string - whilst still doing the normal stuff. Calculating the buffer is an art in itself, but I would normally work on (a minimum of) twice the size of the longest string (if you have the RAM space). Anything less has the potential for losing messages, if one is not processed before reception of the next commences.</p> <p>If you are restricted to only the length of the longest message, you need to be 100% sure that you can process that message between the receipt of the last byte, and the receipt of the first byte of the next message. 
(Obviously less of a problem with smaller messages).</p>
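The framing arithmetic discussed above (start, data, parity, and stop bits per byte) and the "twice the longest message" buffer rule can be sketched in a few lines of Python. The function names and the doubling heuristic are my own encoding of the advice, not a standard API:

```python
import math

def bytes_per_ms(baud, data_bits=8, stop_bits=1, parity=False, start_bits=1):
    """Effective payload rate once UART framing bits are included.
    At 115200 8N1 each byte costs 10 bits on the wire, so ~11.5 bytes/ms
    (versus 12.8 bytes/ms if you count only the 8 data bits + stop bit)."""
    bits_per_frame = start_bits + data_bits + stop_bits + (1 if parity else 0)
    return baud / bits_per_frame / 1000.0

def min_buffer(baud, service_interval_ms, **framing):
    """Bytes that can arrive while the MCU is busy between buffer reads,
    doubled for safety margin as suggested above."""
    return 2 * math.ceil(bytes_per_ms(baud, **framing) * service_interval_ms)
```

For example, at 115200 8N1, a controller that can only service its receive buffer every 5 ms can see up to 58 bytes arrive in between, so this rule would size the buffer at 116 bytes.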
237
2012-10-30T17:41:41.563
|microcontroller|serial|rs232|
<p>A common scenario is to have a PC that sends commands to a microcontroller via RS232. My PC program is sending commands (each of which are composed of multiple bytes) as fast as it can to a small robot. The microcontroller on the robot is a Parallax Propellor.</p> <p>I have noticed that if I don't process bytes quickly enough on the microcontroller side of things, it can very quickly overflow the default buffers in the popular serial port drivers that are available for the Propellor. (The buffers are generally anywhere from 16 to 256 bytes). I can arbitrarily increase these buffers or create my own larger circular buffer, but I would like to have a more methodical approach to determining appropriate size requirements and/or the minimal amount of time I can wait before pulling bytes out of the serial port driver buffer.</p> <p>At 1st glance:</p> <ul> <li>115200 == 115.2 bits per millisecond == ~12.8 bytes per millisecond (assuming 1 stop bit)</li> </ul> <p><strong>1) Is that a valid way to calculate timing for serial transmissions?</strong></p> <p>Also, given my specific setup:</p> <ul> <li>PC Program &lt;--> Bluetooth Serial Profile Driver &lt;--> Bluetooth Transceiver &lt;-<em>*</em>-> BlueSMIRF Wireless Modem &lt;--> Parallax Propellor Program</li> </ul> <p><strong>2) What is the maximum amount of data I can send for a given period of time consistently without eventually running in to problems?</strong></p> <p>Maybe I'm over complicating things, but it seems like there are potentially multiple buffers involved in the transmission chain above. How do others commonly deal with this? Do they throttle the PC sending to a known safe rate? Implement flow control? If implementing flow control, how does that affect bandwidth and response times?</p> <p>(If it matters, my experiment is to use a joystick on the PC to control multiple servos with instant reaction to the joystick movements. 
So every small movement of the joystick results in multiple commands being sent to the microcontroller. The commands are not just simple positional commands though, they also involve acceleration/deacceleration of servos over time and this is the reason that the microcontroller spends a significant amount of clock cycles before processing new bytes.)</p>
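A quick way to check this arithmetic (a sketch; note that standard 8N1 framing spends a start bit as well as a stop bit, so each byte costs 10 bits on the wire):

```python
# Sketch: serial throughput and buffer-fill time, assuming 8N1 framing.
# (1 start + 8 data + 1 stop = 10 bits per byte on the wire.)

def bytes_per_ms(baud, data_bits=8, start_bits=1, stop_bits=1):
    bits_per_byte = start_bits + data_bits + stop_bits
    return baud / bits_per_byte / 1000.0

def ms_to_fill(buffer_bytes, baud):
    """How long an unread buffer of the given size lasts at full line rate."""
    return buffer_bytes / bytes_per_ms(baud)

rate = bytes_per_ms(115200)      # ~11.52 bytes/ms, not 12.8
fill = ms_to_fill(256, 115200)   # ~22 ms before a 256-byte buffer overflows
```

So at 115200 baud the microcontroller has on the order of 20 ms to service a 256-byte buffer before data is lost.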
How to calculate serial speed and buffer requirements for PC to microcontroller communications?
<p>In industry, there is a strong preference for low-maintenance brushless motors over relatively high-maintenance brushed motors. While the former may be more expensive in terms of the motor itself and the drive electronics, the reduction in the long-term cost of maintenance usually outweighs the extra capital cost.</p> <p>As <a href="https://robotics.stackexchange.com/a/260/37">user65</a> suggests, you may need sinusoidal commutation to avoid torque ripple at low speeds, depending on precisely how you design your system and how fine you need your velocity control to be.</p> <p>The paper <a href="http://www.magnelab.com/uploads/4c51d9ba6fe5a.pdf" rel="nofollow noreferrer">A Comparison Study of the Commutation Methods for ...</a> has some interesting information on commutation methods, which might be of use.</p> <p>Ultimately though, I think that avoiding encoders is a false economy.</p> <p>Unlike Hall sensors, they have the distinct advantage that they <em>aren't tied to the motor rotation</em> - i.e. they don't have to go on the motor shaft. You could place them on the load side of the gearbox, which will allow you to quantify the precise effects of the backlash in your gearbox.</p> <p>This would allow you to perform backlash compensation in software, run dual servo loops (one for position tracking with backlash compensation and another for more immediate velocity control) and generally take much more precise control of your system at both high and low speeds.</p>
254
2012-11-01T21:00:11.233
|brushless-motor|servomotor|
<p>I have a robot that uses brushed motors in its servo system. These are Maxon 3W motors, with 131:1 planetary gearboxes. The motors are controlled by a PIC microcontroller, running a 1kHz PID controller. The servos are for a low speed high torque application. There is significant backlash between the sensor and the motor.</p> <p>Maxon offer 12W brushless motors which are the same size. These are better in many ways: double the torque, better heat dissipation, higher efficiency.</p> <p>The problem, obviously, is that they require more complex drive electronics. Also, I have heard a couple of people mention that brushed motors are better for servo applications, though they never explained why.</p> <ul> <li>Has anyone else implemented this kind of system?</li> <li>Are there any gotchas when using brushless motors for servos?</li> <li>Is it possible to servo it at low speeds if I only have the 3 integral digital Hall sensors, and no encoder? (I would prefer not to add an encoder because of the money and space cost)</li> <li>Is torque ripple likely to be a problem?</li> </ul>
Should I switch my servo system from brushed to brushless motors?
<p>Parallax offers some products in the space you are looking at, but I think what you really want to do is go to HobbyKing:</p> <p><a href="http://www.hobbyking.com/hobbyking/store/__657__501__Multi_Rotors_Parts-Flight_Controller.html" rel="nofollow">http://www.hobbyking.com/hobbyking/store/_<em>657</em>_501__Multi_Rotors_Parts-Flight_Controller.html</a></p> <p>Make no mistake, they ARE cheap copies of other products, and if that bothers you, go back to Parallax and spend the full amount. I'd go with the 550 frame with motors, and add speed controllers and props.</p> <p>Then look at <a href="http://store.openpilot.org/" rel="nofollow">http://store.openpilot.org/</a> for open source modifiable hardware and software.</p> <p>HobbyKing also has a few controllers, including an Arduino-based one, but I'd prefer the Wii-based one if it were me (just personal preference).</p> <p>This is not a plug and play adventure!! You will at the very least end up soldering your own cables, and if you are not comfortable with that, stay clear.</p>
255
2012-11-01T22:47:12.250
|uav|kit|
<p>I'm really new to robotics, however I am a programmer familiar with several different languages. I don't have a ton of money to spend and I was wondering what is a really good starter kit. </p> <p>My criteria is for the kit to be inexpensive and <strong>powerful</strong>, in that its functionality is extensible -- something that would allow the builder to be creative and possibly invent new ways to use it, not just a glorified model kit. </p> <p>Being extendable to smartphones is a plus.</p> <p>I'm not looking for something easy or introductory, just something powerful, flexible, and cost effective.</p>
What UAV kit(s) would be suitable for a beginner roboticist with programming experience?
<p>As other people mentioned, an RF beacon will probably be difficult and vision is definitely a viable option. The common difficulty with vision-based solutions is that they are computationally expensive, making them difficult to run on-board. </p> <p>You can try using the PixArt IR tracking sensor from a Nintendo Wii remote, which communicates over I2C and can thus be easily connected to, for example, an Arduino. Place a few active IR beacons on the ground, which are then picked up by the sensor. A simple pose estimation algorithm will then get you an accurate estimate of your position.</p> <p>A regular colour camera could also be used, but unless you have something like a Beagleboard or Gumstix on-board it would be difficult to process the images in real time (although there is nothing stopping you from doing the calculations on the ground, of course).</p>
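The geometry behind such a beacon setup is simple (a sketch with assumed numbers; the focal length here is illustrative, not the PixArt sensor's actual calibration):

```python
# Sketch: range to a pair of IR beacons with known physical separation,
# using the pinhole-camera model. focal_px is an assumed calibration value.

def range_from_blobs(beacon_separation_m, pixel_separation, focal_px):
    # similar triangles: pixel_separation / focal_px = beacon_separation / range
    return focal_px * beacon_separation_m / pixel_separation

# beacons 0.2 m apart, imaged 50 px apart, assumed focal length of 1300 px
d = range_from_blobs(0.2, 50, 1300.0)  # 5.2 m
```

With two or more beacons at known positions, the bearing to each blob can be combined the same way to recover the full pose, not just range.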
256
2012-11-02T00:19:32.710
|localization|quadcopter|gps|
<p>I want to use an RF beacon to localize my quadcopter for autolanding, when GPS is not precise enough, for example, when my driveway is only 10 feet wide, and the GPS is only showing 20-30 ft. accuracy (with a proverbial lake of lava on either side). The quadcopter would use the GPS to fly to the rough location until it had a strong enough signal off the beacon, when it would begin to use that signal to come to a landing in a precise location, referenced off said beacon. Can someone please explain to me the concepts and theories behind building the beacon and its accompanying receiver (suitable for connection to an Arduino via any digital or analog method) and achieving, say, a 4" or better horizontal and vertical accuracy within a 50' sphere? Minimally, the quad should have range and altitude, i.e. "I am 10 feet away from the beacon and 2 feet above it". How much added complexity would it take to make the robot fully position-aware relative to the beacon, i.e. "x ft. South, y ft. West and z ft. above it", where the coordinate system is determined by the beacon and not linked to any sort of geographic coordinate system? If the beacon is mounted on a, say, 10 ft pole, are there any changes to be made versus having it on the ground and presuming that all activity takes place above its x-y plane?</p> <p>Last note: this thing would preferably operate in the 72MHz band; please presume that where I'm operating, there are no other devices operating on the same band.</p>
Quadcopter Localization Beacon
<p>The diagram you show looks like it would produce a pretty rough trapezoidal Back-EMF. I'm assuming that the gates that are at 100% are the lower legs of the motor drive bridge. I can't think of a reason you would want to do this. In general you want the gate voltage of the return leg to be the complement of the gate voltage of the supply leg.</p> <p>In six-step trapezoidal commutation, you typically ramp the PWM up to 100%, leave it there for a while (~30 electrical degrees of rotation), and then ramp it back down again.</p> <p><a href="http://www.dian-deng.com/index_motor_3phase_6step.htm" rel="noreferrer"><img src="https://i.stack.imgur.com/5EHWn.png" alt="six-step trapezoidal"></a></p> <p>In sinusoidal commutation, the PWM duty cycle is continuously varied in sinusoidal values. Here is a good diagram showing the difference between sinusoidal drive and trapezoidal drive PWM and phase signals:</p> <p><a href="http://www.embedded.com/design/embedded/4006783/Implementing-Embedded-Speed-Control-for-Brushless-DC-Motors-Part-4" rel="noreferrer"><img src="https://i.stack.imgur.com/9FvgT.jpg" alt="sine versus trapezoidal"></a></p> <p>This Fairchild app note shows the PWM though a full 360° rotation:</p> <p><a href="http://www.fairchildsemi.com/ds/FC/FCM8202.pdf" rel="noreferrer"><img src="https://i.stack.imgur.com/4FmHz.png" alt="360 sine rotation"></a></p> <p><a href="http://www.imakenews.com/eletra/mod_print_view.cfm?this_id=287045&amp;u=getoshiba_mve-news&amp;issue_id=000057909&amp;show=F,T,T,T,F,Article,F,F,F,F,T,T,F,F,T,T" rel="noreferrer"><img src="https://i.stack.imgur.com/sVmnF.jpg" alt="sine drive single"></a></p> <p>It's useful to look at what's going on in the signal up close. What you're really doing is gradually varying the current in a triangular wave so that it slowly builds up in the stator of the motor. 
You have more control over this buildup if you drive the supply and return gates in a complementary fashion rather than holding the lower leg open.</p> <p><a href="http://www.screenlightandgrip.com/html/emailnewsletter_generators.html" rel="noreferrer"><img src="https://i.stack.imgur.com/JsxNP.jpg" alt="current variance"></a></p> <p>Computing a sine wave is more computationally intensive (unless you use a lookup table) than a simple ramp up, hold, ramp down. But it produces a much smoother drive.</p> <p>Space-vector commutation is even more computationally intensive. And while it has more torque ripple than a sinusoidal drive, it makes higher utilization of the bus voltage and is therefore more efficient in terms of power.</p> <p>The phase voltage in space vector drive ends up looking like this:</p> <p><a href="http://www.powerlab-tr.com/?p=693" rel="noreferrer"><img src="https://i.stack.imgur.com/FZVFU.png" alt="space vector voltage"></a></p> <p>This is done by varying the PWM duty cycle in all three phases at the same time. This is opposed to having just a single phase driven as in two-quadrant drive or having two phases driven in complementary pairs as in four-quadrant drive.</p> <p><a href="https://www.sciencedirect.com/science/article/pii/S037877960900056X" rel="noreferrer"><img src="https://i.stack.imgur.com/UXgZ2.jpg" alt="space-vector PWM"></a></p>
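The lookup-table idea mentioned above can be sketched like this (illustrative; the table size and the 0-1 duty range are arbitrary choices, and a real drive would scale these to its PWM timer resolution):

```python
# Sketch: three-phase sinusoidal duty cycles from one shared lookup table.
import math

TABLE_SIZE = 256  # arbitrary angular resolution

# duty cycles in [0, 1], centred on 0.5 so each phase swings symmetrically
SINE_TABLE = [0.5 + 0.5 * math.sin(2 * math.pi * i / TABLE_SIZE)
              for i in range(TABLE_SIZE)]

def phase_duties(angle_index):
    """Duty cycles for phases A, B, C, indexed 120 electrical degrees apart."""
    offset = TABLE_SIZE // 3
    a = SINE_TABLE[angle_index % TABLE_SIZE]
    b = SINE_TABLE[(angle_index + offset) % TABLE_SIZE]
    c = SINE_TABLE[(angle_index + 2 * offset) % TABLE_SIZE]
    return a, b, c
```

Stepping `angle_index` at a rate set by the rotor position sweeps all three phases through the sinusoid, keeping their edges synchronised by construction.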
261
2012-11-03T00:56:08.123
|brushless-motor|pwm|
<p>I have seen waveforms for driving a brushless motor.</p> <p><img src="https://i.stack.imgur.com/UHO04.jpg" alt="Brushless motor waveform"></p> <p>I guess this is the waveform used for the simpler block commutation. But if I want to do sinusoidal waveforms, what does the PWM signal look like now? Is there a need to carefully synchronise the edges on the three phases?</p>
What do the commutation waveforms look like for a brushless motor?
<p>Practical point of view: traction of the wheel.</p> <p>The traction of the wheel (friction between wheel and surface) is proportional to the mass of the robot; it also depends on the friction coefficient of the wheel material and the area of contact between wheel and surface. Small mobile robots with low mass can have low wheel traction and can easily slip on the surface. Even small slippage can make a big difference in accumulated odometry error.</p> <p>Summarizing: the higher the mass of the robot, the better the traction of the wheel on the surface.</p> <p>The type of wheel/tire will be one of the important design considerations as the mass of the robot varies. </p> <p>I wasted lots of time trying to make a mobile robot (differential drive with 2 driven wheels + caster wheel, size around 10x15cm, mass &lt; 1 kg) follow a specified trajectory and get proper odometry data. It had hard plastic wheels with rubber band tires. In normal operation slippage was barely noticeable, but it used to slip at every small variation in the floor. It also used to get stuck easily at small obstacles. Even though the motor had lots of torque, the wheels would just spin in place due to lack of sufficient traction. At one point I added lots of deadweight to improve the traction. </p>
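The slip condition itself is easy to write down (a sketch with made-up numbers): a driven wheel slips when the force it tries to transmit exceeds the friction coefficient times the normal force, and the normal force grows with the robot's mass.

```python
# Sketch: simple wheel-slip check (illustrative numbers, even weight split).
G = 9.81  # m/s^2

def wheel_slips(robot_mass_kg, n_driven_wheels, mu, drive_force_n):
    # normal force per driven wheel, assuming even weight distribution
    normal_per_wheel = robot_mass_kg * G / n_driven_wheels
    return drive_force_n > mu * normal_per_wheel

# a 1 kg robot, two driven wheels, mu = 0.6, 5 N pushed per wheel: slips
light = wheel_slips(1.0, 2, 0.6, 5.0)  # True
# the same drive force on a 5 kg robot: grips
heavy = wheel_slips(5.0, 2, 0.6, 5.0)  # False
```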
262
2012-11-03T06:16:39.650
|mobile-robot|design|dynamics|
<p>Do mobile and/or autonomous robots become more or less effective the bigger they get? For example, a bigger robot has bigger batteries, and thus bigger motors, whereas a smaller robot has the exact opposite, making it need less energy, but also have smaller motors. Is there any known theorem that models this?</p>
Effectiveness of a Mobile Robot In Relation To Mass
<p>Capacitors are used with motors in two different ways. Sometimes the same motor will have both techniques applied, and be associated with two significantly different-looking capacitors.</p> <ul> <li><p>When motors with brushes are running normally, the motor brushes produce sparks, which cause noise "from DC to daylight". This has nothing to do with PWM -- it happens even when these motors are connected directly across a battery, without any PWM. If we did nothing, the cable running from the electronics board (or directly from the battery) to the motor would act like an antenna, radiating TV and other radio interference. One way people fix that problem is to attach small ceramic capacitors directly to the motor to absorb much of that noise. <a href="http://www.beam-wiki.org/wiki/Reducing_Motor_Noise">b</a> <a href="http://www.pololu.com/docs/0J15/9">c</a> <a href="http://www.mabuchi-motor.co.jp/en_US/technic/t_0203.html">d</a> <a href="http://www.robotshop.com/PDF/motor-noise-reduction.pdf">e</a></p></li> <li><p>When using PWM to drive the motor, when the transistors turn "on", the motor may pull a current spike / surge current -- the above noise-filtering capacitors make that current spike worse. When the transistors turn "off", the motor inductance may cause voltage spikes from the motor inductance -- the above noise-filtering capacitors help a little. More complex filters attached directly to the motor can help these two problems. <a href="http://hydraraptor.blogspot.com/2007/09/dc-to-daylight.html">a</a> <a href="http://www.stefanv.com/rcstuff/qf200005.html">b</a></p></li> <li><p>When a motor -- even a motor that doesn't have brushes -- is first turned on at a dead stop, and also when the robot hits an obstruction and stalls the motor, the motor pulls much higher currents than it does in normal operation -- currents that may last for several seconds. 
This high current may pull down the battery power rail enough to reset all the digital electronics in the system (or perhaps reset just <em>some</em> of the digital electronics, causing half-brain syndrome).</p> <p>One work-around has 2 parts:</p> <ol> <li>add large electrolytic capacitors directly across the battery (or across the battery input to the PWM motor driver, or across the battery input to the digital electronics, or often capacitors in all three locations) -- these capacitors work better at supplying high currents for a few milliseconds than the battery does.</li> <li>In the few milliseconds we have before the stalled motor pulls all the energy from those big capacitors and then pulls the power rails low enough to start resetting things, program the digital system to somehow recognize that the motor has stalled and kill the power to that motor. Then that motor no longer drags down the power rail, and the digital electronics and all the other motors continue to operate normally. ("soft-start", "current-limiting", "torque-limiting", etc. are more sophisticated forms of this idea). (Those big capacitors, also absorb some of the energy that comes out of the motor when the PWM turns "off", and later put that energy back into the motor when the PWM turns "on").</li> </ol></li> </ul> <p>The above capacitors protect other things <em>from</em> the motor's electrical interference. I suppose one could argue that step (2) above prevents a stalled motor from eventually, after many seconds, overheating and failing -- but that's not really its primary purpose.</p>
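The bulk-capacitor half of that work-around can be sized with the hold-up relation C = I·Δt/ΔV (a sketch; the spike current, duration and allowed droop below are made-up figures):

```python
# Sketch: bulk capacitance to ride through a motor current spike.
# C = I * dt / dV (charge delivered over an allowed rail droop).

def bulk_capacitance_farads(spike_current_a, duration_s, allowed_droop_v):
    return spike_current_a * duration_s / allowed_droop_v

# e.g. a 2 A spike for 5 ms with at most 0.5 V of rail droop
c = bulk_capacitance_farads(2.0, 0.005, 0.5)  # 0.02 F, i.e. 20,000 uF
```

which is why the capacitors in this role are large electrolytics rather than the small ceramics used for brush-noise filtering.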
267
2012-11-03T16:58:44.830
|motor|protection|
<p>I've seen many motors having capacitors attached in parallel in bots. Apparently, this is for the "safety" of the motor. As I understand it, all these will do is smoothen any fluctuations--and I doubt that fluctuations can have any adverse effects on a motor. Apparently these protect the motor if the shaft is being slowed/blocked, but I fail to see how.</p> <p>What exactly is the function of such a capacitor? What does it prevent, and how?</p>
Why are capacitors added to motors (in parallel); what is their purpose?
<p>With a bit of additional research I have found one other project very similar to the OpenServo project. They are called supermodified servos. Some of the web sources are not available anymore, but there is the source for the project, which is quite advanced on google code and on github:</p> <ul> <li><a href="http://code.google.com/p/zosupermodified/" rel="nofollow">zosupermodified</a> - source code for the controller</li> <li><a href="https://github.com/jandetlefsen/Supermodified-Servo" rel="nofollow">Supermodified-Servo</a> - eagle files</li> </ul>
274
2012-11-05T08:29:56.787
|otherservos|i2c|
<p>Some years ago, there where some projects that provided hardware and software to perform modifications on standard hobby servos to convert them to digital servos, with all the advantages that come with it. </p> <ul> <li><a href="http://openservo.com/" rel="nofollow">OpenServo</a> is a little outdated, and does not seem to be worked on anymore, and there is no hardware to buy.</li> <li>Sparkfun has its <a href="https://www.sparkfun.com/products/9014" rel="nofollow">own version</a> of the OpenServo, which at least is available for buying.</li> </ul> <p>Do you know if there are other mods, or even complete low cost digital servos? I am mostly interested in position feedback, and servo chaining.</p>
Low-cost servo with digital control interfaces?
<p>A Kalman Filter is an algorithm that is commonly used in UAVs to fuse multiple sensor measurements together to provide an "optimal" estimate of the position and/or orientation of the UAV. For example, a Kalman Filter can fuse accelerometer, gyro and magnetometer measurements with a velocity estimate to estimate the UAV's yaw, pitch and roll.</p> <p>For more information on the sensors and algorithms used in UAV state estimation, try the stand-alone article <a href="http://www.jhuapl.edu/techdigest/TD/td3102/31_02-Barton.pdf">Fundamentals of Small Unmanned Aircraft Flight</a>. </p> <p>The article also links to accompanying Matlab code implementing the described Kalman Filter UAV state estimation algorithms.</p>
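To make the idea concrete, here is a minimal one-dimensional Kalman filter (a sketch only; a real UAV estimator tracks many states and fuses several sensors, but the predict/update blending is the same):

```python
# Sketch: 1-D Kalman filter estimating a constant value from noisy readings.
def kalman_1d(measurements, meas_var, init_est=0.0, init_var=1000.0):
    est, var = init_est, init_var
    for z in measurements:
        k = var / (var + meas_var)   # Kalman gain: trust in the new reading
        est = est + k * (z - est)    # blend estimate toward the measurement
        var = (1.0 - k) * var        # uncertainty shrinks with each update
    return est, var

# noisy readings of a quantity whose true value is 5.0
est, var = kalman_1d([5.2, 4.9, 5.1, 4.8, 5.05], meas_var=0.25)
```

Note how `var` falls below the single-measurement variance after a few updates: the fused estimate is better than any individual reading, which is the whole point of the filter.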
277
2012-11-05T23:18:26.770
|kalman-filter|uav|
<p>I am designing an unmanned aerial vehicle, which will include several types of sensors: </p> <ul> <li>3-axis accelerometer</li> <li>3-axis gyroscope </li> <li>3-axis magnetometer</li> <li>horizon sensor</li> <li>GPS </li> <li>downward facing ultrasound.</li> </ul> <p>A friend of mine told me that I will need to put all of this sensor data through a Kalman filter, but I don't understand why. Why can't I just put this straight into my micro controller. How does the Kalman filter help me about my sensor data?</p>
Why do I need a Kalman filter?
<p>There are two broad approaches:</p> <ul> <li>analytic solutions: given an end-effector pose, directly compute the joint coordinates. In general the solution is not unique, so you can compute a set of possible joint coordinates. Some may cause the robot to hit things in its environment (or itself), or your task might help you choose a particular solution, i.e. you might prefer the elbow to be up (or down), or the robot to have its arm to the left (or right) of its trunk. In general there are constraints on obtaining an analytic solution; for 6-axis robots, a spherical wrist (all axes intersect) is assumed. The analytic solutions for many different types of robots have been computed over the decades and you can probably find a paper that gives a solution for your robot.</li> <li>numeric solutions, as described in the other answers, use an optimisation approach to adjust the joint coordinates until the forward kinematics gives the right solution. Again, there's a huge literature on this, and lots of software.</li> </ul> <p>Using my Robotics Toolbox for MATLAB, I create a model of a well known 6-axis robot using Denavit-Hartenberg parameters</p> <pre><code>&gt;&gt; mdl_puma560
&gt;&gt; p560

p560 =

Puma 560 [Unimation]:: 6 axis, RRRRRR, stdDH, fastRNE
 - viscous friction; params of 8/95;
+---+-----------+-----------+-----------+-----------+-----------+
| j |     theta |         d |         a |     alpha |    offset |
+---+-----------+-----------+-----------+-----------+-----------+
|  1|         q1|          0|          0|     1.5708|          0|
|  2|         q2|          0|     0.4318|          0|          0|
|  3|         q3|    0.15005|     0.0203|    -1.5708|          0|
|  4|         q4|     0.4318|          0|     1.5708|          0|
|  5|         q5|          0|          0|    -1.5708|          0|
|  6|         q6|          0|          0|          0|          0|
+---+-----------+-----------+-----------+-----------+-----------+
</code></pre> <p>then choose a random joint coordinate</p> <pre><code>&gt;&gt; q = rand(1,6)
q =
    0.7922    0.9595    0.6557    0.0357    0.8491    0.9340
</code></pre> <p>then compute the forward kinematics</p> <pre><code>&gt;&gt; T = p560.fkine(q)
T =
   -0.9065    0.0311   -0.4210  -0.02271
    0.2451    0.8507   -0.4649   -0.2367
    0.3437   -0.5247   -0.7788    0.3547
         0         0         0         1
</code></pre> <p>Now we can compute the inverse kinematics using a published analytic solution for a robot with 6 joints and a spherical wrist</p> <pre><code>&gt;&gt; p560.ikine6s(T)
ans =
    0.7922    0.9595    0.6557    0.0357    0.8491    0.9340
</code></pre> <p>and voila, we have the original joint coordinates.</p> <p>The numerical solution</p> <pre><code>&gt;&gt; p560.ikine(T)
Warning: ikine: rejected-step limit 100 exceeded (pose 1), final err 0.63042
 &gt; In SerialLink/ikine (line 244)
Warning: failed to converge: try a different initial value of joint coordinates
 &gt; In SerialLink/ikine (line 273)
ans =
     []
</code></pre> <p>has failed, and this is a common problem since these methods typically need a good initial solution. Let's try</p> <pre><code>&gt;&gt; p560.ikine(T, 'q0', [1 1 0 0 0 0])
ans =
    0.7922    0.9595    0.6557    0.0357    0.8491    0.9340
</code></pre> <p>which now gives an answer, but it is different to the analytic solution. That's ok though, since there are multiple solutions to the IK problem. We can verify that our solution is correct by computing the forward kinematics</p> <pre><code>&gt;&gt; p560.fkine(ans)
ans =
   -0.9065    0.0311   -0.4210  -0.02271
    0.2451    0.8507   -0.4649   -0.2367
    0.3437   -0.5247   -0.7788    0.3547
         0         0         0         1
</code></pre> <p>and checking that it is the same as the transform we started with (which it is).</p> <p>Other resources:</p> <ul> <li>free video lectures on this topic, search for kinematics at the <a href="http://www.robotacademy.net.au" rel="nofollow noreferrer">Robot Academy</a></li> <li><a href="http://petercorke.com/wordpress/books/book" rel="nofollow noreferrer">Chapter 7 of <em>Robotics, Vision &amp; Control (2e)</em>, Corke, Springer 2017</a>.</li> </ul>
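The numerical approach can be illustrated outside MATLAB with a 2-link planar arm (a sketch, not Toolbox code): step the joint angles along the Jacobian-transpose direction until the forward kinematics reaches the target.

```python
# Sketch: Jacobian-transpose numerical IK for a 2-link planar arm.
import math

L1, L2 = 1.0, 1.0  # assumed link lengths

def fk(q1, q2):
    """Forward kinematics: end-effector (x, y) for joint angles q1, q2."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

def ik(target, q=(0.3, 0.3), alpha=0.1, iters=5000):
    """Gradient-style IK: q += alpha * J^T * error; needs a sane start q."""
    q1, q2 = q
    for _ in range(iters):
        x, y = fk(q1, q2)
        ex, ey = target[0] - x, target[1] - y
        # Jacobian of the planar arm
        j11 = -L1 * math.sin(q1) - L2 * math.sin(q1 + q2)
        j12 = -L2 * math.sin(q1 + q2)
        j21 = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
        j22 = L2 * math.cos(q1 + q2)
        q1 += alpha * (j11 * ex + j21 * ey)
        q2 += alpha * (j12 * ex + j22 * ey)
    return q1, q2

q1, q2 = ik((1.2, 0.8))  # fk(q1, q2) is now close to (1.2, 0.8)
```

A poor initial guess (or an unreachable target) makes this fail to converge, which is the usual weakness of numerical IK.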
299
2012-11-08T00:59:40.857
|inverse-kinematics|kinematics|joint|robotic-arm|
<p>The forward kinematics of a robot arm can be solved easily. We can represent each joint using <a href="http://en.wikipedia.org/wiki/Denavit%E2%80%93Hartenberg_parameters">Denavit–Hartenberg</a> transformation matrices.</p> <p>For example, if the $i^{th}$ joint is a linear actuator, it may have the transformation matrix:</p> <p>$T_i = \left[\begin{matrix} 1&amp;0&amp;0&amp;0\\ 0&amp;1&amp;0&amp;0\\ 0&amp;0&amp;1&amp;d_i\\ 0&amp;0&amp;0&amp;1 \end{matrix} \right]$ where the extension length is defined by $d_i$</p> <p>whereas, a rotating link may be:</p> <p>$T_i = \left[\begin{matrix} 1&amp;0&amp;0&amp;L\\ 0&amp;\cos\alpha_i&amp;-\sin\alpha_i&amp;0\\ 0&amp;\sin\alpha_i&amp;\cos\alpha_i&amp;0\\ 0&amp;0&amp;0&amp;1 \end{matrix} \right]$ where $\alpha$ is the angle, and $L$ is the length of the link.</p> <p>We can then find the position and orientation of the end effector by multiplying all the transformation matrices: $\prod{T_i}$.</p> <p>The question is, how do we solve the inverse problem?</p> <p>Mathematically, for a desired end effector position $M$, find the parameters $d_i$, $\alpha_i$ such that $\prod{T_i} = M$. What methods exist to solve this equation?</p>
How can the inverse kinematics problem be solved?
<p>Short answer: no, you really need to do things quite a bit differently.</p> <p>Long, incomplete answer: let me give you some pseudocode appropriate for RobotC, that puts you on a better path. First, do not use tasks - this is NOT what RobotC tasks are for. They could be made to work, maybe, maybe not (and you need quite a few changes to even try).</p> <pre><code>// global variables
int distance;
int light;

main()
{
  while (true)
  {
    distance = read_distance;
    light = read_light;
    if (task1_wantsToRun())
      task1_run();
    if (task2_wantsToRun())
      task2_run();
  }
}
</code></pre> <p>There are a couple of things here: priority becomes irrelevant. As nice as it seems to have tasks in RobotC with priorities, they are not a good choice for a subsumption implementation in my experience. For reasons like: priorities are not always honored; tasks can not be interrupted (sometimes), so when a higher priority event occurs, it is not going to react like you expect; RobotC only recently became re-entrant, so things like accessing a sensor from more than 1 task may be risky (I2C timing issues), though in some cases it is not (automatically polled sensors).</p> <p>You can add your own priority implementation to the above loop as you get things working, but it really is not needed to start with.</p> <p>Your comment &quot;//box the obstruction&quot; describes a ballistic behavior. Those are a bit tricky to implement using multi-tasking. The simple loop I used makes it a lot easier, and better for starters/learning.</p> <p>The other thing I will leave you with is that subsumption, while being neat and appropriate for a lot of things, is not a good way to implement what is better done traditionally. Indeed the 'evade' portion may be a good candidate for subsumption, but honestly your other task should be called 'GoOnAboutYourBusiness'. I say this because you probably do not want to change from searching to following with subsumption. Handle those with traditional programming loops. 
With a single sensor: is the light sensed darker or lighter than it was last loop? If it got darker (assuming a black line), keep turning the same direction; if it got lighter, turn the other way; if it stayed the same, go straight. You probably need to add some PID and use a steering curve instead of just turning left and right to be smoother.</p> <p>And yes, multiple sensors help. <a href="http://www.mindsensors.com/" rel="nofollow noreferrer">http://www.mindsensors.com/</a> - yeah, that's me in the movie currently (11/10/2012)</p> <h3>Update: actual code</h3> <p>I will try this out in a little while, but it compiles and illustrates what I wrote above:</p> <pre><code>#pragma config(Sensor, S1, S_LIGHT, sensorLightActive)
#pragma config(Sensor, S2, S_DISTANCE, sensorSONAR)
#pragma config(Motor, motorB, LEFT, tmotorNXT, PIDControl, encoder)
#pragma config(Motor, motorC, RIGHT, tmotorNXT, PIDControl, encoder)
//*!!Code automatically generated by 'ROBOTC' configuration wizard !!*//

int distance_value, light_value;

bool evade_wantsToRun()
{
  return distance_value &lt; 30;
}

void evade_task()
{
  // full stop
  motor[LEFT] = 0;
  // evade the object ballistically (i.e. in full control)
  // turn left, drive
  nSyncedTurnRatio = 0;
  motor[LEFT] = -20;
  Sleep(500);
  nSyncedTurnRatio = 100;
  Sleep(1000);
  // turn right, drive
  nSyncedTurnRatio = 0;
  motor[LEFT] = 20;
  Sleep(500);
  nSyncedTurnRatio = 100;
  Sleep(1000);
  // turn right, drive
  nSyncedTurnRatio = 0;
  motor[LEFT] = 20;
  Sleep(500);
  nSyncedTurnRatio = 100;
  Sleep(1000);
  // turn left, resume
  nSyncedTurnRatio = 0;
  motor[LEFT] = 20;
  Sleep(500);
  motor[LEFT] = 0;
}

///////////////////////////////

void TurnBySteer(int d)
{
  // normalize -100 100 to 0 200
  nSyncedTurnRatio = d + 100;
}

///////////////////////////////

typedef enum programPhase { starting, searching, following, finished };
programPhase phase = starting;

// these 'tasks' are called from a loop, thus do not need to loop themselves
void initialize()
{
  nSyncedTurnRatio = 50;
  nSyncedMotors = synchBC;
  motor[LEFT] = 30; // start a spiral drive
  phase = searching;
}

void search()
{
  if (light_value &lt; 24)
  {
    nSyncedTurnRatio = 100;
    phase = following;
  }
}

int lastLight = -1;
int currentSteer = 0;

void follow()
{
  // if it is solid white we have lost the line and must stop
  // if lightSensor detects dark, we are on line
  // if it got lighter, we are going more off line
  // if it got darker we are headed in a good direction, slow down turn in anticipation
  // +++PID will be even smoother
  if (light_value &gt; 64)
  {
    motor[LEFT] = 0;
    phase = finished;
    return;
  }
  if (light_value &lt; 24)
    currentSteer = 0;
  else if (light_value &gt; lastLight)
    currentSteer += sgn(currentSteer) * 1;
  else // implied (light_value &lt; lastLight)
    currentSteer -= sgn(currentSteer) * 1;
  TurnBySteer(currentSteer);
}

bool regularProcessing_wantsToRun()
{
  return phase != finished;
}

void regularProcessing_task()
{
  switch (phase)
  {
    case starting: initialize(); break;
    case searching: search(); break;
    case following: follow();
  }
}

task main()
{
  // subsumption tasks in priority order
  while (true)
  {
    // read sensors once per loop
    distance_value = SensorValue[S_DISTANCE];
    light_value = SensorValue[S_LIGHT];
    if (evade_wantsToRun())
      evade_task();
    if (regularProcessing_wantsToRun())
      regularProcessing_task();
    else
      StopAllTasks();
    EndTimeSlice(); // give others a chance, but make it as short as possible
  }
}
</code></pre>
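The wantsToRun/run pattern in that loop is not RobotC-specific; the arbiter can be sketched in a few lines of Python:

```python
# Sketch: minimal subsumption-style arbiter. Behaviours are listed in
# priority order; the first one that wants control on this pass runs.
def arbitrate(behaviours, state):
    for wants, act in behaviours:
        if wants(state):
            return act(state)
    return None

# toy behaviours: evading an obstacle subsumes line-following
evade = (lambda s: s["distance"] < 30, lambda s: "evade")
follow = (lambda s: True, lambda s: "follow")

a = arbitrate([evade, follow], {"distance": 10})   # "evade"
b = arbitrate([evade, follow], {"distance": 100})  # "follow"
```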
309
2012-11-08T12:07:23.013
|mobile-robot|software|two-wheeled|robotc|
<p>I've been doing a lot of reading lately about <a href="http://en.wikipedia.org/wiki/Subsumption_architecture" rel="nofollow">Subsumption Architecture</a> and there are a few different ways people seem to advocate. </p> <p>For instance some people use a global "flag" variable to have a task take control. Others use the <code>endTimeSlice()</code> and allow the arbiter to really choose. And I think this is correct. </p> <p>I have this small section of <a href="http://en.wikipedia.org/wiki/Robotc" rel="nofollow">RobotC</a> code that I'm working on for a line following robot but am not sure I am doing it right as currently the track method will always take over the find method. The correct flow should be that find should guide the robot to the line using a spiral path to find the line. Once the line is found track should take over. </p> <pre><code>task evade(){ if(SensorValue(forwardSonarSensor) &gt; threshold){ //box the obstruction } } task find(){ if(SensorValue(lightSensor) &gt; threshold){ //spiral the robot } } task track(){ if(SensorValue(lightSensor) &lt; threshold){ //go straight }else{ //execute turns to follow the line } } task main(){ while(true){ StartTask(evade,9); StartTask(track,8); StartTask(find,7); wait1Msec(250); } } </code></pre> <p>I've just used some comments here rather than the actual code to keep it brief. Are my if statements not good enough as conditions because when the robot is off the line, <code>track()</code> takes over. Is this due to the else statement within track? If so, how to have <code>track()</code> perform turns when it looses the line without taking over from forage at the start of the program? </p>
Correct way to use Subsumption Architecture with Robot C
<p>That script is calling "mex" but it's getting the pdfTeX "mex" tool instead of the Matlab compiler. </p> <p>Before running that script you need to adjust your <code>PATH</code> environment variable to put the Matlab binaries before your system ones. Assuming your Matlab binaries are located in <code>/opt/matlab/bin</code>, that would look something like this:</p> <pre><code>export PATH=/opt/matlab/bin:$PATH
roscd ipc_roslib &amp;&amp; make
</code></pre>
323
2012-11-08T19:23:49.510
|mobile-robot|software|slam|ros|
<p>Is anyone able to help me get IPC-bridge working on my Ubuntu Lucid installation (with Matlab 2012a)? I'm not able to finish the last step here (compiling the messages folders): <a href="https://alliance.seas.upenn.edu/~meam620/wiki/index.php?n=Roslab.IpcBridge#Installation" rel="nofollow">https://alliance.seas.upenn.edu/~meam620/wiki/index.php?n=Roslab.IpcBridge#Installation</a></p> <p>I'm able to rosmake the ipc_bridge_ros, however when I enter "roscd ipc_roslib &amp;&amp; make", it seems mex does not recognize the commands. Here is what I get (screen shot): <a href="http://img13.imageshack.us/img13/6031/screenshot20121108at191.png" rel="nofollow">http://img13.imageshack.us/img13/6031/screenshot20121108at191.png</a></p> <p>NOTE: I'm going to use IPC-bridge so that I can control a Pioneer 3DX and implement a FastSLAM algorithm in Matlab.</p>
IPC-Bridge problem
<p>I have built a lot of walking robots. In my experience, if you can't get it to walk by programming a gait, you are not going to get it to learn, because you don't know what it is supposed to do and the search space is too large. Using an Arduino, you may be able to get it to fine-tune some movements, but only if you can define good movements.</p>
327
2012-11-08T23:07:14.817
|arduino|microcontroller|machine-learning|walking-robot|
<p>I'm building a 4 legged robot (quadruped) with 3 Degrees of freedom per leg. The goal of my project is to make this robot able to learn how to walk. What learning algorithms will I need to implement for it to work?</p> <p>I'm using an <a href="http://www.arduino.cc/" rel="nofollow">Arduino Uno</a> for the microcontroller.</p>
Learning Algorithms for Walking Quadruped
<p>As you probably guessed from the lack of answers, subsumption architecture is not an active area of research any more. Most papers on this have been published in the late 80's / early 90's.</p> <p>This doesn't mean that subsumption architecture is dead; it has been very influential in robotics, and it's still used in education for example, but it is just not a hot topic in research any more.</p>
354
2012-11-10T20:26:08.600
|control|research|
<p>I am interested in learning more about <a href="http://en.wikipedia.org/wiki/Subsumption_architecture" rel="noreferrer">subsumption architecture</a>. I have read a number of books that talk about the idea but none of them go into great detail. I have also read a fair number of Dr. Brooks papers on the topic however he hasn't published much on the topic in recent years.</p> <p>Is this still an active area of research? Are there are any must read papers on the topic?</p>
Is subsumption architecture still an active area of research?
<p>I am not sure what type of robot you have, but I have been doing RoboCup Rescue Line for a few years now. I have come to realise that if you want to follow a line well, PID is a good option. Let me expand on this. If you imagine two light sensors, one on either side of the line, you would want them to read equal values so that the line is in the middle. You can then use the difference between the two sensor values to set the turning percentage of the robot. With this technique it is possible to get a robot to follow a line at extraordinary speeds.</p> <p>As for making the robot learn to improve its line-tracking abilities: what I came up with is to start with your initial PID values higher than you want them to be, and use a gyroscopic sensor to measure the frequency of oscillation of the robot as it tracks the line. From there you can create your own function to determine how much to lower your values, which acts as an automated reinforcement-learning parameter: if the oscillations are too high or become unstable, you mark that set of parameters as bad.</p>
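<p>A minimal sketch of that two-sensor approach (gain, base speed, and sensor readings below are made-up placeholders; the sign convention depends on how your sensors are mounted):</p>

```python
def line_follow_step(left, right, base_speed=60.0, kp=0.5):
    """Proportional steering from two light sensors straddling the line.

    left/right are raw sensor readings; when the line is centered the
    difference is zero and the robot drives straight.
    """
    error = left - right          # signed offset from the line's center
    turn = kp * error             # proportional correction
    left_motor = base_speed + turn
    right_motor = base_speed - turn
    return left_motor, right_motor

# Line centered: both motors run at base speed
print(line_follow_step(50, 50))   # (60.0, 60.0)
# Line offset to one side: motor speeds diverge to steer back toward it
print(line_follow_step(40, 60))
```

<p>The same <code>error</code> term is what you would feed the I and D terms of a full PID loop once plain proportional control starts oscillating.</p>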
361
2012-11-11T19:48:28.747
|machine-learning|artificial-intelligence|reinforcement-learning|line-following|
<p>I am considering programming a line following robot using reinforcement learning algorithms. The question I am pondering over is how can I get the algorithm to learn navigating through any arbitrary path?</p> <p>Having followed the <a href="http://webdocs.cs.ualberta.ca/~sutton/book/the-book.html" rel="noreferrer">Sutton &amp; Barto Book</a> for reinforcement learning, I did solve an exercise problem involving a racetrack where in the car agent learnt not to go off the track and regulate its speed. However, that exercise problem got the agent to learn how to navigate the track it trained on.</p> <p>Is it in the scope of reinforcement learning to get a robot to navigate arbitrary paths? Does the agent <strong>absolutely</strong> have to have a map of the race circuit or path? What parameters could I possibly use for my state space?</p>
Programming a line following robot with reinforcement learning
<p>If you consider e.g. a robotic arm, the inverse kinematics tell you how to choose the arm's joint angles to move the arm to some position and orientation where you want it to be. </p> <p>In contrast to determining the forward kinemactics of some mechanism, determining it's <em>inverse kinematics</em> is usually hard and sometimes, there isn't even an analytic solution. Industrial robots, however, are often designed in such a way that they have an analytic solution for the inverse kinematics. This can be achieved by e.g. clever alignment of joint axes. </p> <p><em>Reinforcement Learning</em>, on the other hand, is a machine learning technique. Like any other machine learning technique or algorithm, it can be used to determine a function which you don't know - given that you choose a good reward function that is related to the problem you want to solve. </p> <p>So, in a nutshell: you can use Reinforcement Learning to determine the inverse kinematics of a robot (if there is no analytical solution or determining one would be ridiculously hard). </p>
369
2012-11-12T18:50:20.770
|inverse-kinematics|reinforcement-learning|machine-learning|
<p>Are <a href="http://en.wikipedia.org/wiki/Inverse_kinematics" rel="noreferrer">inverse kinematics</a> and <a href="http://en.wikipedia.org/wiki/Reinforcement_learning" rel="noreferrer">reinforcement learning</a> techniques contending techniques to solve the same problem viz. movement of robotic manipulators or arm?</p> <p>By a glance through the wikipedia article, it appears that inverse kinematics seems to attempt to achieve a <a href="http://en.wikipedia.org/wiki/Equation_solving" rel="noreferrer">solution</a> as opposed to reinforcement learning which attempts to <a href="http://en.wikipedia.org/wiki/Optimization" rel="noreferrer">optimizes</a> the problem. Have I misunderstood anything?</p>
Are inverse kinematics and reinforcement learning competitive techniques?
<p>This is a much bigger project than you think, and processing power is the least of your concerns. You may want to skip straight to step #9. Otherwise, here are some questions you need to ask yourself, based on my experience of building something very similar as a hobby project (I used a desktop PC for processing):</p> <p>1: <strong>Exactly</strong> how far should the robot reach? You say 1.25 meters - but at what height, and in what direction? If it absolutely must reach anything within a 1.25 meter dome around its base, then either it's maximum reach must be longer, or you have a very big engineering challenge. Look at some of the robots from Fanuc or Kuka; their online catalogs have cutout views that show how far the arm will reach in each direction, and those areas are never perfect domes. Also, how close to the base must the robot reach? If the first arm (from the base) is significantly longer than the 2nd arm, then it won't be able to reach anything close to the base. Also, 6 DOF DOESN'T MEAN IT CAN HAVE ANY POSITION WITH ANY MOVEMENT! The shoulder and elbow joints of robot arms are almost always in the same DOF (and sometimes the wrist too). You can move the arm in any direction or hold the grasper in any orientation, but you can't do both at the same time. If you look at the human arm, the shoulder has 3 DOF, the elbow has one, the forearm has one, and the wrist has two. That's 7 in total, and that's what you need for total flexibility. If you search for 7 DOF, you'll see examples showing how they can move in ways that 6 DOF arms can't. This may not be important for your project, but it's important to know because otherwise you may do a lot of needless debugging.</p> <p>2: Start with the grasper first. There are companies that make these specially, and it's best to look at their work to see what you might need. These can be very expensive parts if they need motors or sensors. 
This is the most important part of the arm, and fortunately it's the first part you can prototype and test. Figure out what you want, cheaply and quickly fabricate a version yourself, and move it around by hand to ensure it actually does what you want it to do. You may find that it needs to do more or less than you expected. I've seen people learn a lot simply by using a pair of popsicle sticks.</p> <p>3: <strong>Exactly</strong> how much force should the arm lift? Start with the outermost part, the end effector (aka grasper/gripper/whatever). That needs to have a grip capable of holding 2kg, or roughly 20 newtons (you'll be doing all forces in newtons now; it's a lot easier than pounds because a 1 kg mass weighs 9.8 N, and it's best to round up). How does it hold it? Does it pinch sideways? If so, it needs to have some amount of grip. If it lifts from the bottom like a fork, it needs to have a certain amount of torque in the last up/down DOF. We'll come back to this.</p>
But each of these options has problems; stepper motors are quite weak and they don't know their position, while DC and AC motors simply don't know their positions. You'll need rotary encoders for DC/AC motors (incremental encoders at minimum), and may need absolute encoders no matter which motor type you choose. Absolute encoders tell you the exact angle of the arm; incremental ones will only tell you the change in angle. So, absolute encoders are better, but they are also much more expensive. You can avoid this problem by purchasing industrial servo motors (which look a lot like the stepper motors), but those are much more expensive because they have rotary encoders already built in. But those servos may be worth it just for the easier integration, if the maker has a nice software package that you can use.</p> <p>6: You'll need to power all these motors (probably through drivers, which are another piece of hardware), and also power all the sensors and controllers, and they probably won't require the same voltage. Hopefully your motors can at least share a single voltage. You can buy velcro loops and labels in bulk.</p> <p>7: Don't forget the weight of the motors in the arm; you'll need to include that when calculating the torque required for the next motor. </p> <p>8: Make sure you have some arrangement for fast and cheap shipping of new components, and whatever discounts possible.</p> <p>9: Realize this is way too complicated, and look up a robot refurbisher/supplier in your area. Retired industrial robots are much cheaper than new ones, and they can help you get set up and figure out what adjustments you need.</p>
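<p>The worst-case torque budget from steps 3-4 can be sketched numerically. This assumes the arm is held fully outstretched and horizontal (the worst case mentioned above); the segment lengths and masses are made-up placeholders, not a recommendation:</p>

```python
G = 9.8  # newtons per kilogram, rounding up as suggested above

def joint_torques(segments, payload_kg):
    """Worst-case holding torque (N*m) at each joint, arm horizontal.

    segments: list of (length_m, mass_kg) from base to tip; the payload
    hangs at the tip. Each segment's weight acts at its midpoint, and
    everything outboard of a joint contributes weight * moment arm.
    """
    torques = []
    for i in range(len(segments)):
        torque = 0.0
        dist = 0.0
        for length, mass in segments[i:]:
            torque += mass * G * (dist + length / 2.0)  # segment weight
            dist += length
        torque += payload_kg * G * dist                 # payload at tip
        torques.append(torque)
    return torques

# Hypothetical two-segment arm: 0.7 m / 1.2 kg upper, 0.55 m / 0.8 kg forearm
print(joint_torques([(0.7, 1.2), (0.55, 0.8)], payload_kg=2.0))
```

<p>Note how the base joint dominates: most of the torque comes from the payload and motors far out on the arm, which is why step 7 (motor weight) matters so much.</p>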
380
2012-11-13T09:34:55.917
|software|inverse-kinematics|robotic-arm|rs232|
<p>I want to build a robot arm that'll be approximately 1.25 meter long and will be able to lift up to 2 kilograms. It'll have 6 dof and it is an expensive project. And most importantly, i am the only programmer in this brand new robotics facility of ours. :)</p> <p>The robot that i want to build will be led by Inverse Kinematics, so with all these parameters and matrices, i think that i'll need a tough processor (Not so sure).</p> <p>Assuming that my robots control interface will be on an Android tablet, i thought that i also could develop my program for Android, and send necessary commands to the control chip via RS-232 interface.</p> <p>So, my question is, are standart 1 GHz Android tablets suitable for these tasks? If not, has anybody got an advice for me?</p>
Processor and command interface preference for a robot arm
<p>You can greatly simplify the problem in most common cases:</p> <ul> <li>A lot of "commercial grade" IMUs (e.g. Xsens) have very noisy accelerometers. Don't even bother fusing them to get speed; the odometry is already orders of magnitude better. The only usable data the IMU is going to provide is the pitch and roll, and to some extent the heading (see next point)</li> <li>Heading from IMUs is not that trustworthy. It uses magnetometers, and will show huge drifts (up to 25 degrees over 2m in our case) near ferromagnetic masses, such as the ones you can find in building walls. What we did to solve this is to use the IMU heading, but estimate a heading bias.</li> <li>If you are outdoors, don't forget that travelling 10m on a 10 degree incline does not lead to the same change in X and Y as travelling 10m on flat terrain. This is usually accounted for by estimating Z, but I guess it can be estimated differently.</li> <li>GPS is also a lying bitch, typically in high-multipath environments. Plus low-grade (and even in some conditions high-grade) GPSes have a tendency to report very wrong standard deviations. We used some simple chi-square tests to check whether a particular GPS measurement should be integrated (i.e. checking that it matches the current filter estimate up to a certain point), which gave us decent results.</li> </ul> <p>The "typical" solution for us is to use odometry + IMU to get an ego-motion estimate and then use GPS to correct X,Y,Z and heading bias.</p> <p><a href="http://www.rock-robotics.org/stable/pkg/slam/pose_ekf/index.html">Here is an EKF implementation that we extensively used.</a> If you need to estimate the IMU's orientation (i.e. if it does not already have a built-in filter), you can also use one of these two filters: <a href="http://www.rock-robotics.org/master/pkg/slam/quater_ukf/index.html">UKF</a> and <a href="http://www.rock-robotics.org/master/pkg/slam/quater_ekf/index.html">EKF</a>.</p>
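<p>The chi-square gate on GPS measurements can be sketched like this (scalar case for brevity; the gate value and noise figures are illustrative, not from the filter linked above):</p>

```python
def accept_gps_fix(z, x_est, p_est, r_gps, gate=9.0):
    """Innovation (chi-square) test for a scalar GPS measurement.

    z: measurement, x_est: current filter estimate, p_est: estimate
    variance, r_gps: the receiver's reported measurement variance.
    Reject the fix when the normalized innovation squared exceeds the
    gate (9.0 is roughly a 3-sigma bound for one degree of freedom).
    """
    innovation = z - x_est
    s = p_est + r_gps                  # innovation variance
    nis = innovation * innovation / s  # normalized innovation squared
    return nis <= gate

print(accept_gps_fix(10.2, 10.0, 0.5, 1.0))  # small innovation: accept
print(accept_gps_fix(25.0, 10.0, 0.5, 1.0))  # multipath-sized jump: reject
```

<p>In the multivariate case the test becomes <code>innovation' * inv(S) * innovation</code> against a chi-square quantile for the measurement dimension; the idea is the same.</p>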
382
2012-11-13T10:51:35.660
|sensors|kalman-filter|sensor-fusion|
<p>My team and I are setting up an outdoor robot that has encoders, a commercial-grade <a href="http://en.wikipedia.org/wiki/IMU" rel="noreferrer">IMU</a>, and <a href="http://en.wikipedia.org/wiki/Global_Positioning_System" rel="noreferrer">GPS</a> sensor. The robot has a basic tank drive, so the encoders sufficiently supply ticks from the left and right wheels. The IMU gives roll, pitch, yaw, and linear accelerations in x, y, and z. We could later add other IMUs, which would give redundancy, but could also additionally provide angular rates of roll, pitch, and yaw. The GPS publishes global x, y, and z coordinates.</p> <p>Knowing the robot's x y position and heading will useful for the robot to localize and map out its environment to navigate. The robot's velocity could also be useful for making smooth movement decisions. It's a ground-based robot, so we don't care too much about the z axis. The robot also has a <a href="http://en.wikipedia.org/wiki/LIDAR" rel="noreferrer">lidar</a> sensor and a camera--so roll and pitch will be useful for transforming the lidar and camera data for better orientation.</p> <p>I'm trying to figure out how to fuse all these numbers together in a way that optimally takes advantage of all sensors' accuracy. Right now, we're using a <a href="http://en.wikipedia.org/wiki/Kalman_filter" rel="noreferrer">Kalman filter</a> to generate an estimate of <code>[x, x-vel, x-accel, y, y-vel, y-accel]</code> with the simple transition matrix:</p> <pre><code>[[1, dt, .5*dt*dt, 0, 0, 0], [0, 1, dt, 0, 0, 0], [0, 0, 1, 0, 0, 0], [0, 0, 0, 1, dt, .5*dt*dt], [0, 0, 0, 0, 1, dt], [0, 0, 0, 0, 0, 1]] </code></pre> <p>The filter estimates state exclusively based on the accelerations provided by the IMU. The IMU isn't the best quality; within about 30 seconds it will show the robot (at rest) drifting a good 20 meters from its initial location. 
I want to know how to use roll, pitch, and yaw from the IMU, and potentially roll, pitch, and yaw rates, encoder data from the wheels, and GPS data to improve the state estimate.</p> <p>Using a bit of math, we can use the two encoders to generate x, y, and heading information on the robot, as well as linear and angular velocities. The encoders are very accurate, but they can be susceptible to slippage on an outdoor field.</p> <p>It seems to me that there are two separate sets of data here, which are difficult to fuse:</p> <ol> <li>Estimates of x, x-vel, x-accel, y, y-vel, y-accel</li> <li>Estimates of roll, pitch, yaw, and rates of roll, pitch, and yaw</li> </ol> <p>Even though there's crossover between these two sets, I'm having trouble reasoning about how to put them together. For example, if the robot is going at a constant speed, the direction of the robot, determined by its x-vel and y-vel, will be the same as its yaw. However, if the robot is at rest, the yaw cannot be accurately determined by the x and y velocities. Also, data provided by the encoders, translated to angular velocity, could be an update to the yaw rate... but how could an update to the yaw rate end up providing better positional estimates?</p> <p>Does it make sense to put all 12 numbers into the same filter, or are they normally kept separate? Is there already a well-developed way of dealing with this type of problem?</p>
How to fuse linear and angular data from sensors?
<p>The short answer is yes, this can work. The long answer is &quot;Yes, but you need to do a lot of sensor fusion&quot;.</p> <p>The technology you're after was conceived about 5 years ago, the academic work is here: <a href="https://people.csail.mit.edu/wojciech/MoCap/index.html" rel="nofollow noreferrer">https://people.csail.mit.edu/wojciech/MoCap/index.html</a> <a href="https://i.stack.imgur.com/wAjOk.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wAjOk.jpg" alt="Motion capture in the real world" /></a><br /> <sub>(source: <a href="https://people.csail.mit.edu/wojciech/MoCap/teaser.jpg" rel="nofollow noreferrer">mit.edu</a>)</sub></p> <p>They combined the accelerometer data with ultrasonic ranges between the joints to improve the position estimation. In this case, it was for full-body motion capture but doing it for just fingers should be a similar problem.</p> <p>This project immediately became a company that began making motion capture devices (for 3D movies, video games, etc), but I can't remember the company name. A google search led me to &quot;animazoo&quot;, but I don't think that's what it was called before. Maybe it's the same company with a new name. In either case, those are the only relevant resources I know of but they seem to support your idea.</p>
390
2012-11-13T23:27:22.663
|imu|sensor-fusion|hri|
<p>I have been using the Cyberglove to control a humanoid robot hand, but found it disappointing as it doesn't measure the posture of the human hand very accurately.</p> <p><img src="https://i.stack.imgur.com/T7TYQ.png" alt="Cyberglove"></p> <p>I have been wondering about the possibility of using Inertial Measurement Units (<a href="http://en.wikipedia.org/wiki/Inertial_measurement_unit" rel="nofollow noreferrer">IMU</a>s) mounted on the fingers to track position and measure posture. But I'm not sure how feasible it is.</p> <ul> <li>Would an IMU return enough data to make tracking reliable in all circumstances?</li> <li>Would it be possible to fool the system into incorrectly tracking the fingers?</li> <li>Might it be possible to get away with using simple 3-axis accelerometers, or would it need 9-axis (accelerometer, gyro, and magnetometer)?</li> </ul>
Can I use IMUs to improve the position/posture measurement of fingers in a "data glove"?
<p>There's a good, if slightly overly optimistic study done by KPMG and the Center for Automotive Research called <a href="http://www.kpmg.com/US/en/IssuesAndInsights/ArticlesPublications/Documents/self-driving-cars-next-revolution.pdf" rel="nofollow">Self Driving Cars: the Next Revolution</a>. You'll be seeing more and more semi-autonomous features within the decade, starting in higher end vehicles (all the major automakers are working on such systems and we already have advanced cruise control and lane departure warning). However full autonomy is still a fair ways off, for cost as well as perception and institutional/legal issues. One of the problems with commercializing Google's car is that the sensor systems cost over $250,000 (see <a href="http://www.businessinsider.com/google-self-driving-car-sensor-cost-2012-9" rel="nofollow">http://www.businessinsider.com/google-self-driving-car-sensor-cost-2012-9</a>). Even with economies of scale, we're a ways off from that system being affordable. </p> <p>Google's accomplishments are VERY impressive. They've expanded to more vehicle types and more uses for testing (e.g., commuting to and from Google). They still don't work in all environments (snow being one issue). </p> <p>By the way, though, California and Florida have joined Nevada in passing laws regulating experimental use of autonomous vehicles on public roads. </p> <p>Hope this gives you some info.</p>
403
2012-11-14T16:52:39.187
|ugv|
<p>I am aware of the legislation in Nevada, but what is happening with the technology currently? When is it expected to be commercialized?</p>
What is the current state of the Google Self Driving Car Project?
<p>As Rocketmagnet mentioned, just because a motor is rated at 2.5 W doesn't mean it will be pulling 2.5 W <em>all the time</em>. Most robots have at most 1 or 2 servos that are running at full power at any one time; the rest have very low mechanical loads (and therefore pull much less electrical power) or are "off" and therefore pull practically zero electrical power.</p> <p>This leads to 2 very different approaches to power supplies:</p> <ul> <li><p>Tethered robots and desktop computers use a power supply and heat-sinks that can handle the maximum possible worst-case power draw -- when everything pulls the maximum power at the same time. 27 servos * 2.5 W @ 5V requires a 5 VDC and at least 14 A power supply (or perhaps several 5 VDC supplies that add up to at least 14 A).</p></li> <li><p>Autonomous robots and modern laptops use a power supply and heat-sinks that can handle some <a href="http://en.wikipedia.org/wiki/thermal_design_power" rel="nofollow noreferrer">thermal design power</a>. Some human arbitrarily picks some the TDP, which is much smaller than the worst-case power, but somewhat above the power required in "typical situations". Then the power supply is designed so it can handle any load from 0 to <em>slightly above</em> the TDP. And the rest of the system is designed so it <em>never exceeds</em> the TDP -- except perhaps for a few milliseconds. The simplest approach is to have something that measures the total current draw -- then when the current exceeds the TDP, assume that things have already gone horribly wrong, and shut everything down for a few seconds. More sophisticated approaches measure the current of each motor individually: When some motor stalls, "limp mode" kills the power to that one motor, so the robot continues to use the other motors at full power. 
When lots of motors pull a total current that is too high, "tired mode" reduces the power to all the motors so the robot continues to use all the motors at a slower speed.</p></li> </ul> <blockquote> <p>5 V fuses?</p> </blockquote> <p>You could install one big 14 A fuse. Or you could install 27 individual 0.5 A fuses, one in the +5V power line of each motor. Or both. You'll probably find it easier to find "12 V" or "250 V" fuses, which will work just fine in your application.</p> <p>There are many cheap polyfuses available (designed to protect 5V USB ports from excessive current). Alas, polyfuses take several seconds to "blow" -- too late to protect stuff from permanent damage, but quick enough to keep stuff from heating up, catching on fire, and burning down your house.</p> <p>possibly related: <a href="https://electronics.stackexchange.com/questions/44845/how-to-do-a-simple-overcurrent-protection-circuit-breaker-circuit-for-12v-1-2a">How to do a simple overcurrent protection/circuit breaker circuit for 12V 1-2A?</a></p> <blockquote> <p>convert 12 V to 5 V</p> </blockquote> <p>Most people using servo motors use an off-the-shelf DC-DC converter to convert whatever voltage the batteries supply to the 5V required by the servos.<a href="http://www.societyofrobots.com/schematics_powerregulation.shtml" rel="nofollow noreferrer">(c)</a> I see that <em>some</em> 18650 battery box (<a href="http://www.aliexpress.com/item/Free-Shipping-DIY-2A-5V-Mobile-Power-Supply-USB-Battery-Charger-18650-Box-with-voltage-meter/654255654.html" rel="nofollow noreferrer">a</a>) include a little DC-DC converter to convert the battery power to 5 VDC "USB battery charger". (A few people use servomotors designed to be connected directly to 12 VDC. 
<a href="http://www.pololu.com/catalog/product/1390/specs" rel="nofollow noreferrer">a</a>)</p> <p>Many DC-DC converters are set up so that they never pull more than some maximum current from the battery -- when the motor connected to their output stalls, the converter switches to a "constant-current" mode at some lower output voltage, pulling <em>less</em> power from the batteries. If you put such a DC-DC converter on each servo, it automatically goes into and comes out of "limp mode" appropriately.</p> <blockquote> <p>batteries</p> </blockquote> <p>"Selecting the proper battery for your robot" <a href="http://letsmakerobots.com/node/28427" rel="nofollow noreferrer">(a)</a></p> <p>"Robot batteries" <a href="http://www.backyardrobots.com/parts/parts.shtml" rel="nofollow noreferrer">(b)</a></p> <p>"Batteries I use in my Robotics" <a href="http://robosapienv2-4mem8.page.tl/Batteries.htm" rel="nofollow noreferrer">(c)</a></p> <p>etc. <a href="http://www.societyofrobots.com/batteries.shtml" rel="nofollow noreferrer">a</a> <a href="http://www.robotmarketplace.com/products/battery_build_main.html" rel="nofollow noreferrer">b</a> <a href="http://www.protechrobotics.com/products.php?cat=39" rel="nofollow noreferrer">c</a> <a href="http://www.terrorhurtz.com/a123/" rel="nofollow noreferrer">d</a> <a href="http://www.seattlerobotics.org/encoder/200210/lithiumion.htm" rel="nofollow noreferrer">e</a> <a href="http://www.batteryspace.com/robotbatteries.aspx" rel="nofollow noreferrer">f</a></p>
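<p>To make the worst-case supply sizing vs. TDP distinction concrete, here is the arithmetic for the 27-servo example from the question (the one-third "typical load" fraction is a pure assumption you would pick for your own design):</p>

```python
n_servos = 27
watts_each = 2.5   # per-servo power from the question
volts = 5.0

# Tethered / desktop approach: size for every servo at full power at once.
worst_case_amps = n_servos * watts_each / volts
print(worst_case_amps)   # 13.5, so round up to a 14 A supply

# Autonomous / laptop approach: pick a thermal design power well below
# worst case, assuming at most ~1/3 of servos load heavily at any instant.
tdp_amps = worst_case_amps / 3.0
print(round(tdp_amps, 2))
```

<p>With the TDP approach, the current monitor then enforces the budget: trip "limp mode" on one stalled servo, or "tired mode" (scale everything down) when the total exceeds <code>tdp_amps</code>.</p>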
416
2012-11-14T21:36:19.967
|servomotor|design|power|rcservo|
<p>I apologize if this question may sound a little vague. I am working on a robotics project that will contain 27 servos of various sizes and I am having trouble figuring out how they should be powered.</p> <p>I was hoping to use several (3-6) 5 W 18650 battery boxes to power them, but the smallest motors would use 2.5 W each, so 1 battery box can only power two. The larger servos, obviously, use even more current, so this plan of using a small number of 18650's becomes infeasible.</p> <p>There is not enough room on the robot for a 12 V car battery, and adding one would require recalculating the sizes of the servomotors that would be needed. Furthermore, I am not sure how to convert the car battery 12 V output down to 5 V input for the servomotors.</p> <p>P.S. What about the stall current of the motors? Should the power supply be able to supply the stall current of all the motors it supplies (at the same time) or just the working current? Should I use a fuse to handle when (if?) the servomotors stall? Should I use a fuse or a circuit breaker? Do they make 5 V fuses? If so, where can I get one?</p> <p>Something like a larger version of the 18650 box would be most preferable.</p>
What is the best way to power a large number (27) servos at 5 V?
<p>To get relative displacement between two time instants all you need to do is integrate the values given off by the accelerometer (twice for linear displacement) and gyro (once for angular displacement).</p> <p>Due to measurement errors, which can many times be adequately modeled as Gaussian (you might have to estimate a bias and/or scale factor to the measurement), there will be drift in your estimate (i.e. errors accumulate and your estimate diverges). Because of that, if you plan to use the IMU to obtain position and orientation estimates relative to a fixed frame, you will also have to use more information to correct that estimate. These corrections can be made using a <a href="http://tom.pycke.be/mav/71/kalman-filtering-of-imu-data" rel="nofollow">Kalman Filtering</a> approach.</p> <p>Many people use the accelerometer and magnetometer data to do that, assuming your robot isn't moving too fast (i.e. $g \gg a_{robot}$), there aren't many magnetic field disturbances (i.e. $m_{earth} \gg m_{other\_stuff}$), and both vectors are perpendicular and have fixed orientation with respect to the ground. See, for instance, the <a href="http://en.wikipedia.org/wiki/Triad_method" rel="nofollow">TRIAD algorithm</a>.</p> <p>But then again, back to your question, if what you mean by "robot has displaced itself forward since a previous measurement" is:</p> <ul> <li><p><strong>Relative to his own body</strong>, regardless of his orientation in space, all you need to do is check the sign of the accelerometer output in the forward direction (you might want to set a minimum threshold or perform median filtering due to sensor noise)</p></li> <li><p><strong>Relative to a fixed frame</strong>, you have to take everything I talked about into consideration</p></li> </ul>
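<p>A bare-bones sketch of that double integration (one axis, trapezoidal rule), mainly to show where the drift comes from: a constant accelerometer bias grows linearly in velocity and quadratically in displacement.</p>

```python
def integrate_imu(accels, dt):
    """Double-integrate a 1-axis accelerometer log into displacement.

    accels: acceleration samples (m/s^2), dt: sample period (s).
    Returns (velocity, displacement). A constant bias b appears as
    b*t in velocity and ~0.5*b*t^2 in displacement, which is why
    IMU-only position estimates diverge without corrections.
    """
    v = 0.0
    x = 0.0
    prev_a = accels[0]
    for a in accels[1:]:
        v += 0.5 * (prev_a + a) * dt   # trapezoidal velocity update
        x += v * dt                    # rectangle rule on velocity
        prev_a = a
    return v, x

# Robot at rest, but the sensor has a 0.1 m/s^2 bias: after 10 s the
# estimate already reports ~5 m of "displacement", purely from drift.
print(integrate_imu([0.1] * 1001, 0.01))
```

<p>This is exactly the divergence the Kalman/TRIAD corrections above are meant to bound.</p>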
419
2012-11-14T23:06:21.523
|software|imu|deduced-reckoning|artificial-intelligence|
<p>I'm building a walking robot that will need to know when it moves forward. I'm using on-board intelligence and I plan on using accelerometers, gyros, and magnetometers (if needed) to be able to detect if the robot moves forward. The problem is, I don't know how to program an Inertial Navigation System or an IMU. What software algorithms are needed?</p> <p>To clarify my problem, I need to know how to program the microcontroller to read the sensors and be able to tell if the robot has displaced itself <em>forward</em> since a previous measurement. Also, if I used <a href="http://store.diydrones.com/ArduIMU_V3_p/kt-arduimu-30.htm" rel="nofollow">this sensor board</a> (or similar), could I use it to determine the displacement? </p>
How do you implement an INS from an accelerometer and (optionally) gyros and a magnetometer?
<p>These are matrix-vector math libraries. They are related to inverse kinematics only because inverse kinematics involves matrices, vectors, and math. If you need an implementation of an inverse kinematics library, that is a different question. But any of those would be fine for doing the required math, once you solve the inverse kinematics in closed form or by some other method. Personally, I've used JAMA (it works).</p>
431
2012-11-15T12:23:57.450
|inverse-kinematics|programming-languages|
<p>I'm planning to write an Inverse Kinematics controlled 6 dof virtual robot for Android. I did some research on packages avaliable and couldn't choose the right one which will satisfy my needs on this project. I've seen a work with Eigen in C++, and used it, it was just fine. But since i'm not so experienced in Java, i wanted to ask before i start, if someone knows some appropiate packages for these operations.</p> <p>Here is what i found so far:</p> <p>JAMA, Vecmath, Jmathtools, EJML, JAMPACK I ask this because i really dont want to get stuck in the middle of my project. Thanks in advance.</p>
Inverse Kinematics in Java
<p>Holding torque has little to do with the torque needed to drive lead screws. The holding torque rating only applies when the axis is stationary and holding its position. What you want is to maximize your stepper driver's output current, using a stepper motor with the same or a slightly lower current rating.</p>
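For a rough estimate of the torque actually needed to turn a lead screw against a cutting or acceleration force, the standard power-screw relation can be sketched as below. The force, lead, and efficiency values here are illustrative assumptions; check the specifications of your own screw:

```python
import math

def leadscrew_torque(axial_force_n, lead_m, efficiency=0.4):
    """Torque (N*m) to drive a lead screw against an axial force.

    T = F * lead / (2 * pi * efficiency) -- standard power-screw relation.
    Typical ACME screw efficiency is ~0.3-0.5; ball screws reach ~0.9.
    """
    return axial_force_n * lead_m / (2 * math.pi * efficiency)

# Example: 50 N of cutting/acceleration force on a 2 mm lead ACME screw
t_nm = leadscrew_torque(50.0, 0.002)   # ~0.04 N*m
t_ozin = t_nm * 141.6                  # convert N*m to oz-in
```

With these assumed numbers the result is only a few oz-in, well under a 31.2 oz-in motor rating, which leaves margin for friction, misalignment, and acceleration.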
446
2012-11-16T21:03:22.347
|stepper-motor|reprap|
<p>I have a handful of 31.2oz-in stepper motors (<a href="http://www.mouser.com/Search/ProductDetail.aspx?R=HT17-268Dvirtualkey55010000virtualkey834-HT17-268D">Mouser.com - Applied Motion: HT17-268D</a>), and I was curious if they would be big enough to run a 3D printing/cutting/etching type (think RepRap) of machine. I had in mind to attach them via a simple gear to a screw-type drive to run the tool head back and forth. </p> <ul> <li>Maximum bed size would probably be ~1.5'<sup>3</sup>. </li> <li>Heaviest tool head would be something about half the weight of a Dremel tool.</li> <li>Hardest substances I would use it on would probably be hardwoods (with high speed cutter) and copper (for PCB etching).</li> </ul> <p>How do I figure the amount of torque needed to drive the head, and would the motors that I already have be big enough to do the job?</p>
How much torque do I need for a CNC machine?
<p>You can use an <a href="http://www.ebay.co.uk/itm/391201790274?_trksid=p2055119.m1438.l2649&amp;ssPageName=STRK%3AMEBIDX%3AIT" rel="nofollow noreferrer">RC battery checker</a>, </p> <p><a href="https://i.stack.imgur.com/39992.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/39992.jpg" alt="Cell capacity Controller"></a></p> <p>to verify the individual cells in a battery pack. These are cheap, at about $3.</p> <p>If any cell of a LiPo battery reads below 3 V, it should be considered dead or dying, and the pack should be replaced.</p>
453
2012-11-18T05:44:20.077
|battery|troubleshooting|
<p>In our lab we use LiPo batteries to power our quadrotors. Lately we have been experiencing stability issues when using certain batteries. The batteries seem to charge and balance normally and our battery monitor indicates they are fine even when putting them under load. However when we attempt to fly the quadrotor with one of these batteries, manually or autonomously, it has a severe tendency to pitch and/or roll. My guess is that the battery is not supplying sufficient power to all the motors which brings me to my question. Is this behavior indicative of a LiPo going bad? If so what is the best way to test a battery to confirm my suspicions?</p>
How can one determine whether a LiPo battery is going bad?
<p>Speaking about navigation among moving obstacles, look at Jur van den Berg's thesis. LaValle also provided a tutorial at ICRA 2012 <a href="http://msl.cs.uiuc.edu/~lavalle/icra12/" rel="nofollow">http://msl.cs.uiuc.edu/~lavalle/icra12/</a>, and the videos are here: <a href="http://techtalks.tv/events/105/" rel="nofollow">http://techtalks.tv/events/105/</a>. Neither, however, mentions the problem of bipeds.</p>
469
2012-11-20T04:54:02.230
|navigation|
<p>I would like to have a better understanding of work in the field of "Navigation Among Movable Obstacles". I started off with <a href="http://www.ri.cmu.edu/pub_files/pub4/stilman_michael_2007_1/stilman_michael_2007_1.pdf" rel="nofollow">Michael Stilman's thesis under James Kuffner</a>, but that has not yet sated my appetite.</p> <p>I am currently trying to simulate a scenario where debris (Tables and Table parts) from a disaster scenario block pathways. The debris forms part of a movable obstacle. The robot which will be used is a bipedal humanoid.</p> <p>The thesis describes an approach to define the search space of possible actions leading from the start point to the goal. However, it assumes a mobile robot which works via gliding. </p> <p>I think the state space definitions would change for a bi-pedal robot. Why is why I wonder what other work is being done in this field. Perhaps the work of other research groups could give me clues as to how to design and perhaps reduce the search space for a bipedal humanoid robot.</p> <p>An implementation of Navigation among Movable Obstacles would also aid me in understanding how to reduce the search space of possible actions.</p> <p>So does anyone know of a working implementation of Navigation among movable obstacles? </p> <p>Any supporting information about other professors or research groups working on similar problems would also be very useful.</p> <p>I hope this edit is sufficient for the problem description.</p>
Is there a working implementation of "Navigation Among Movable Obstacles" for a bi-pedal robot?
<p>Communication latency is another reason why USB 1.1 or USB 2.0 is not always a good replacement for RS232. Data on a USB 1.1 or USB 2.0 bus is formatted into 1 ms or 125 µs frames respectively, which forces a minimum receive-to-transmit latency of two frame periods (often more in practice). This is an important consideration for modern PC hardware, which usually doesn't have a built-in RS232 port, so a USB/RS232 converter is used instead.</p>
474
2012-11-20T15:46:48.243
|rs232|usb|
<p>RS232 is not popular as it used to be and is mainly replaced by USB [<a href="http://en.wikipedia.org/wiki/RS-232#Role_in_modern_personal_computers" rel="nofollow noreferrer">wikipedia</a>]. Problems such as mentioned in <a href="https://robotics.stackexchange.com/q/331/158">this question</a> doesn't help its reputation either.</p> <p>In a new system design therefore, one could think of using USB instead of Serial Port for communication. However, it still seems like RS232 is the serial communication protocol/port of choice.</p> <p>Why is that? I understand changing old machinery that work with RS232 is costly, but what prevents <em>new</em> system designers from using USB instead of RS232?</p>
USB instead of RS232
<p>The "math" way to do this is to generate a speed/torque curve for your stepper at its given voltage, and determine the maximum force that will be applied to the object in motion (in this case, the largest force is likely to be acceleration).</p> <p>Note that most hobby 3d printers are so flimsy that the frame bending under acceleration will harm print quality long before you start missing steps. In this case, you could model the deflection of the frame under load.</p> <p>In my experience, 90% of missed steps are caused by mechanical or software problems such as the axes binding or pulses being missed. Try moving everything by hand to check if it is smooth.</p> <p>Overly aggressive speeds and accelerations should be visible (and audible) on a 3d printer long before the steppers stall. </p> <p>Generally a single missed step will also cause the next steps to also be missed (since the motor must now work even harder to catch up) and you get a few seconds of buzzing instead of motion.</p>
478
2012-11-21T10:59:55.920
|control|stepper-motor|tuning|
<p>As an industrial roboticist I spent most of my time working with robots and machines which used brushless DC motors or linear motors, so I have lots of experience tuning PID parameters for those motors.</p> <p>Now I'm moving to doing hobby robotics using stepper motors (I'm building my first <a href="http://tvrrug.org.uk/">RepRap</a>), I wonder what I need to do differently. </p> <p>Obviously without encoder feedback I need to be much more conservative in requests to the motor, making sure that I always keep within the envelope of what is possible, but how do I find out whether my tuning is optimal, sub optimal or (worst case) marginally unstable?</p> <p>Obviously for a given load (in my case the extruder head) I need to generate step pulse trains which cause a demanded acceleration and speed that the motor can cope with, without missing steps.</p> <p>My first thought is to do some test sequences, for instance:</p> <ul> <li>Home motor precisely on it's home sensor.</li> <li>Move $C$ steps away from home slowly.</li> <li>Move $M$ steps away from home with a conservative move profile.</li> <li>Move $N$ steps with the test acceleration/speed profile.</li> <li>Move $N$ steps back to the start of the test move with a conservative move profile.</li> <li>Move $M$ steps back to home with a conservative move profile.</li> <li>Move $C$ steps back to the home sensor slowly, verifying that the sensor is triggered at the correct position.</li> <li>Repeat for a variety of $N$, $M$, acceleration/speed &amp; load profiles.</li> </ul> <p>This should reliably detect missed steps in the test profile move, but it does seem like an awfully large space to test through however, so I wonder what techniques have been developed to optimise stepper motor control parameters.</p>
How can I optimise control parameters for a stepper motor?
<p>I use @narayan's approach to implement my particle filter:</p> <pre><code>new_sample = numpy.random.choice(a=particles, size=number_of_particles, replace=True, p=importance_weights) </code></pre> <p><code>a</code> is the vector of particles to sample from, <code>size</code> is the number of particles to draw, and <code>p</code> is the vector of their normalized weights. <code>replace=True</code> handles bootstrap sampling with replacement. The return value is a vector of new particle objects.</p>
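A self-contained sketch of the same idea, using index-based multinomial resampling so you always get exactly N particles regardless of rounding (the particle list and weights here are toy values, and the fixed seed is only for reproducibility):

```python
import numpy as np

def resample(particles, weights, rng=np.random.default_rng(0)):
    """Multinomial resampling: draw N indices with replacement,
    with probability proportional to each particle's weight."""
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()                  # normalize to a distribution
    n = len(particles)
    idx = rng.choice(n, size=n, replace=True, p=weights)
    new_particles = [particles[i] for i in idx]
    new_weights = np.full(n, 1.0 / n)         # uniform weights after resampling
    return new_particles, new_weights

# Toy example: particle 'd' carries almost all of the weight,
# so it should dominate the resampled set.
particles = ['a', 'b', 'c', 'd']
new_p, new_w = resample(particles, [0.1, 0.1, 0.1, 9.7])
```

Because sampling is done over indices rather than rounded counts, this sidesteps the roundoff problem in the question entirely: the output always has the same number of particles as the input.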
479
2012-11-21T15:36:22.960
|localization|particle-filter|
<p>I understand the basic principle of a particle filter and tried to implement one. However, I got hung up on the resampling part. </p> <p>Theoretically speaking, it is quite simple: From the old (and weighted) set of particles, draw a new set of particles with replacement. While doing so, favor those particles that have high weights. Particles with high weights get drawn more often and particles with low weights less often. Perhaps only once or not at all. After resampling, all weights get assigned the same weight.</p> <p>My first idea on how to implement this was essentially this:</p> <ol> <li>Normalize the weights</li> <li>Multiply each weight by the total number of particles</li> <li>Round those scaled weights to the nearest integer (e.g. with <code>int()</code> in Python)</li> </ol> <p>Now I should know how often to draw each particle, <em>but</em> due to the roundoff errors, I end up having <em>less particles</em> than before the resampling step. </p> <p>The Question: How do I "fill up" the missing particles in order to get to the same number of particles as before the resampling step? Or, in case I am completely off track here, how do I resample correctly?</p>
Particle filters: How to do resampling?
<p>I'd stick with a <a href="https://beagleboard.org/bone" rel="nofollow noreferrer">BeagleBone</a> for the brains plus an <a href="https://www.sparkfun.com/products/11286" rel="nofollow noreferrer">Arduino Leonardo</a> for the brawn. This gets you a full computer capable of running Linux or Android with 512 MB of RAM and 800 MHz of CPU to write your AI and all the hardware capability that the Arduino ecosystem enables. Connect the two together over USB and use a remote control library like <a href="https://github.com/JayBeavers/Reflecta" rel="nofollow noreferrer">Reflecta</a> or <a href="https://github.com/jgautier/firmata" rel="nofollow noreferrer">Firmata</a> to turn the Arduino into a set of 'remote hardware ports' for the Beaglebone.</p> <p>A new entry into this space is the <a href="https://www.sparkfun.com/products/11712" rel="nofollow noreferrer">PCDuino from SparkFun</a>. Theoretically this combines both boards into one. I haven't tried it out myself to validate however.</p> <p>You should be able to put together a robot based on these parts for around your price range of $300.</p> <h2>BeagleBone</h2> <p><img src="https://beagleboard.org/static/bonescript/bone101/bone_connectors.jpg" alt="BeagleBone"></p> <h2>Arduino Leonardo</h2> <p><img src="https://dlnmh9ip6v2uc.cloudfront.net/images/products/1/1/2/8/6/11286-01_i_ma.jpg" alt="Arduino Leonardo"></p> <h2>PCDuino</h2> <p><img src="https://i.stack.imgur.com/bSp2H.jpg" alt="PCDuino"></p>
494
2012-11-24T10:28:54.920
|nxt|
<p>I'm interested to build Robot from my imagination, and I was looking to purchase a robotic kit.</p> <p>I find the Lego Mindstorm NXT 2.0 really interesting for many reasons : You can plug whatever brick you want, and you can develop in the language you want.</p> <p>I am a developer, and the use of this kind of robotic would be interaction mostly (not moving, so the servo motors are useless to me, at least now).</p> <p>But regarding the spec of the NXT main component, I feel it's a bit low (proc, ram &amp; rom).</p> <p>That made me wonder if any of you know something similar (where I can plug whatever I want on it, and most importantly, program the reaction), but with a more powerful hardware ?</p> <p>Price will also be a limitation : I like the NXT also because I can build what I want under 300 USD. I don't want to spend 10k USD on my first kit, but I would appreciate buying a better piece of robotic if the price isn't too distant from the NXT's.</p> <p>Do you have some alternatives to check out ?</p> <p>Thanks for your help ! :)</p>
More powerful alternatives to Lego Mindstorm NXT 2.0?
<p>While there are specific outlets, as the other answers show, you shouldn't underestimate plain old <a href="http://youtube.com" rel="nofollow">YouTube</a>. Most research labs and companies simply host their videos there (e.g. <a href="https://www.youtube.com/user/mitmedialab" rel="nofollow">MIT</a>, <a href="https://www.youtube.com/user/BostonDynamics" rel="nofollow">BostonDynamics</a> and the likes), as do <a href="https://www.youtube.com/watch?v=NqDTE6dHpJw" rel="nofollow">individuals</a>.</p>
502
2012-11-24T21:27:31.757
|publications|
<p>Robots are somewhat videogenic, and the old saying "show me, don't tell me" is especially applicable.</p> <p>But of course, a video is not a <em>question</em>, so it doesn't fit the Stack Exchange format. Maybe video links would be more suitable in a CodeProject post. It just seems like this board hits the right cross-section of people, whose projects I would be interested in seeing.</p>
Is there a place for posting "look at what I did" videos?
<p>There are really nice on-line <a href="http://www.societyofrobots.com/calculator.shtml" rel="nofollow">calculators on societyofrobots</a> including a <a href="http://www.societyofrobots.com/RMF_calculator.shtml" rel="nofollow">robot motor factor calculator</a> that could be helpful for you.</p>
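The physics behind those calculators can be sketched as follows. The numbers in the example are illustrative; plug in your own mass, wheel size, and target acceleration, and note that rolling resistance and slopes are ignored here, so add margin:

```python
def wheel_motor_requirements(mass_kg, wheel_diameter_m, accel_mps2,
                             top_speed_mps, n_drive_motors=2):
    """Per-motor torque (N*m) and wheel rpm for a differential-drive
    robot on flat ground. Torque comes from F = m*a at the wheel radius,
    split across the driven wheels; rpm from the target top speed."""
    r = wheel_diameter_m / 2.0
    force = mass_kg * accel_mps2                       # F = m*a
    torque_per_motor = force * r / n_drive_motors
    rpm = top_speed_mps / (3.14159 * wheel_diameter_m) * 60.0
    return torque_per_motor, rpm

# 2 kg robot, 6 cm wheels, 0.5 m/s^2 acceleration, 0.5 m/s top speed
torque, rpm = wheel_motor_requirements(2.0, 0.06, 0.5, 0.5)
```

A DC motor's speed drops roughly linearly from its no-load rpm to zero at stall torque, so pick a motor whose stall torque is several times the computed value to keep usable rpm under load.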
504
2012-11-24T21:43:42.497
|motor|
<p>I am making a 2 wheel drive robot.</p> <p>Suppose I know that my robot is going to weight x kg when finished and I know the diameter of the wheels y (geared motors will be connected directly to the wheels). I can choose from several geared motors and I know the peak torque of each motor and the idling speed.</p> <p>How can I calculate the load that a specific motor can take? I.e. will a motor with a given torque be able to move my robot without being too overloaded? What rpm will the motor have when it has load?</p>
Choosing motors for 2 wheel drive robot
<p>I think your best ally will be Google. A quick search for "arduino juniper cutedigi" led me to the <a href="http://www.cutedigi.com/wireless/wifi/juniper-wifi-shield-for-arduino-based-on-gainspan-module.html" rel="nofollow">manufacturer's website</a>, which includes a good amount of documentation and sample code under the "Download:" heading at the end of the page. Here's an extra <a href="http://arduino.cc/forum/index.php/topic,118268.0.html" rel="nofollow">troubleshooting forum post</a> that may be of use.</p> <p>Even if you don't find the best documented code out there, finding several sample scripts that do similar things might help you understand the basics.</p>
511
2012-11-25T18:30:52.343
|arduino|electronics|wifi|
<p>I recently got an arduino wifi shield known as "juniper" (I believe it was by cutedigi). I've tried to find code examples, but when I saw code, it was un-commented and very little explained, I could really use a tutorial or some sample code with a good explanation, can anyone help me find a place to start? I found a piece of code here: <a href="http://arduino.cc/forum/index.php?action=printpage;topic=103582.0" rel="nofollow">http://arduino.cc/forum/index.php?action=printpage;topic=103582.0</a> and I just want to connect to a network, maybe send some get requests, or open a socket.</p> <p>EDIT: after poking around for a while, i found documentation, but I still can't get it to work. my code: <a href="http://pastie.org/5455603" rel="nofollow">http://pastie.org/5455603</a> I can't seem to get any input at all from the wifi shield.</p>
Where can I find a tutorial or sample code for the Juniper WiFi Arduino Shield?
<p>One thing to watch out for is that many of the additive or extrusion-based 3D printers will leave small gaps between the beads of plastic that they lay down. Even if you polish the surface, the interior will be very poor optically.</p> <p>You may want to look at <a href="http://en.wikipedia.org/wiki/Selective_laser_sintering" rel="nofollow">Selective Laser Sintering (SLS)</a> to create a finished product that is more uniformly solid.</p>
520
2012-11-26T07:34:40.657
|3d-printing|manufacturing|
<p>A lot of awesome optics projects like hacking cameras and projectors become possible with CAD lens modelling software<sup>1</sup>, if we can also easily prototype the lenses we design.</p> <p>What are some materials and additive or subtractive 3D fabrication strategies that can make a clear lens with strong refraction and the ability to be polished?</p> <p><sup>1</sup> <a href="http://www.optenso.com/links/links.html#lds" rel="nofollow">Here is a helpful list of 37 different lens design &amp; simulation programs</a>.</p>
Is it practical to 3D print a refractive lens?
<p>One way is to find the Jacobian numerically, using the perturbation method.</p> <p>Basically, you compute the forward kinematics and approximate each column by a finite difference: perturb one joint variable at a time and divide the change in the end-effector pose by the perturbation:</p> <p>$J_1 = \dfrac{f(\theta_1+\Delta\theta, \theta_2, \theta_3, \ldots) - f(\theta_1, \theta_2, \theta_3, \ldots)}{\Delta\theta}$</p> <p>$J_2 = \dfrac{f(\theta_1, \theta_2+\Delta\theta, \theta_3, \ldots) - f(\theta_1, \theta_2, \theta_3, \ldots)}{\Delta\theta}$</p> <p>and so on for each degree of freedom, whether rotational or translational.</p>
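In code, the perturbation method described above looks roughly like this (a forward-difference sketch; the planar 2-link arm is only a stand-in forward-kinematics function so the result can be checked against the analytic Jacobian):

```python
import numpy as np

def numeric_jacobian(fk, q, dq=1e-6):
    """Finite-difference Jacobian of a forward-kinematics function fk.

    fk: maps a joint vector q (n,) to an end-effector pose vector (m,)
    Column j is (fk(q + dq*e_j) - fk(q)) / dq.
    """
    q = np.asarray(q, dtype=float)
    f0 = fk(q)
    J = np.zeros((len(f0), len(q)))
    for j in range(len(q)):
        q_pert = q.copy()
        q_pert[j] += dq                # perturb one joint variable at a time
        J[:, j] = (fk(q_pert) - f0) / dq
    return J

def fk_2link(q, l1=1.0, l2=1.0):
    """End-effector (x, y) of a planar 2-link arm with unit link lengths."""
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

# At q = (0, pi/2) the analytic Jacobian is [[-1, -1], [1, 0]]
J = numeric_jacobian(fk_2link, [0.0, np.pi / 2])
```

This approach handles joints with any number of degrees of freedom, since each column simply corresponds to one scalar variable being perturbed, which sidesteps the repeated-column problem in the question.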
521
2012-11-26T08:00:05.007
|inverse-kinematics|kinematics|
<p>When computing the Jacobian matrix for solving an Inverse Kinematic analytically, I read from many places that I could use this formula to create each of the columns of a joint in the Jacobian matrix:</p> <p><span class="math-container">$$\mathbf{J}_{i}=\frac{\partial \mathbf{e}}{\partial \phi_{i}}=\left[\begin{array}{c}{\left[\mathbf{a}_{i}^{\prime} \times\left(\mathbf{e}_{p o s}-\mathbf{r}_{i}^{\prime}\right)\right]^{T}} \\ {\left[\mathbf{a}_{i}^{\prime}\right]^{T}}\end{array}\right]$$</span></p> <p>Such that <span class="math-container">$a'$</span> is the rotation axis in world space, <span class="math-container">$r'$</span> is the pivot point in world space, and <span class="math-container">$e_{pos}$</span> is the position of the end effector in world space.</p> <p>However, I don't understand how this can work when the joints have more than one DOFs. Take the following as an example:</p> <p><img src="https://i.stack.imgur.com/7mVwI.png" alt="enter image description here"></p> <p>The <span class="math-container">$\theta$</span> are the rotational DOF, the <span class="math-container">$e$</span> is the end effector, the <span class="math-container">$g$</span> is the goal of the end effector, the <span class="math-container">$P_1$</span>, <span class="math-container">$P_2$</span> and <span class="math-container">$P_3$</span> are the joints.</p> <p>First, if I were to compute the Jacobian matrix based on the formula above for the diagram, I will get something like this:</p> <p><span class="math-container">$$J=\begin{bmatrix} ((0,0,1)\times \vec { e } )_{ x } &amp; ((0,0,1)\times (\vec { e } -\vec { P_{ 1 } } ))_{ x } &amp; ((0,0,1)\times (\vec { e } -\vec { P_{ 2 } } ))_{ x } \\ ((0,0,1)\times \vec { e } )_{ y } &amp; ((0,0,1)\times (\vec { e } -\vec { P_{ 1 } } ))_{ y } &amp; ((0,0,1)\times (\vec { e } -\vec { P_{ 2 } } ))_{ y } \\ ((0,0,1)\times \vec { e } )_{ z } &amp; ((0,0,1)\times (\vec { e } -\vec { P_{ 1 } } ))_{ z } &amp; ((0,0,1)\times (\vec { e } 
-\vec { P_{ 2 } } ))_{ z } \\ 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 \\ 1 &amp; 1 &amp; 1 \end{bmatrix} $$</span></p> <p>This is assumed that all the rotation axes are <span class="math-container">$(0,0,1)$</span> and all of them only have one rotational DOF. So, I believe each column is for one DOF, in this case, the <span class="math-container">$\theta_\#$</span>.</p> <p>Now, here's the problem: What if all the joints have full 6 DOFs? Say now, for every joint, I have rotational DOFs in all axes, <span class="math-container">$\theta_x$</span>, <span class="math-container">$\theta_y$</span> and <span class="math-container">$\theta_z$</span>, and also translational DOFs in all axes, <span class="math-container">$t_x$</span>, <span class="math-container">$t_y$</span> and <span class="math-container">$t_z$</span>.</p> <p>To make my question clearer, suppose if I were to "forcefully" apply the formula above to all the DOFs of all the joints, then I probably will get a Jacobian matrix like this:</p> <p><a href="https://i.stack.imgur.com/f6Fm7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/f6Fm7.png" alt="enter image description here"></a></p> <p>(click for full size)</p> <p>But this is incredibly weird because all the 6 columns of the DOF for every joint is repeating the same thing.</p> <p>How can I use the same formula to build the Jacobian matrix with all the DOFs? How would the Jacobian matrix look like in this case?</p>
Computing the Jacobian matrix for Inverse Kinematics
<p>It looks like the Rainbowduino 3.0 uses an ATmega328, so first be sure to choose a board that's using that.</p> <p>If that doesn't solve the problem, try looking at this <a href="http://www.seeedstudio.com/wiki/Rainbowduino_v3.0" rel="nofollow">wiki article about the Rainbowduino v3.0</a>.</p>
530
2012-11-26T22:14:31.157
|software|arduino|programming-languages|
<p>OK, not really robotics, but has anyone been able to upload to a Rainboduino v3.0 using the Arduino IDE? I can't seem to figure it out, and there is virutally no documentation online. I followed <a href="http://www.anyware.co.uk/2005/2012/01/17/getting-started-with-arduino-rainbowduino/" rel="nofollow">this blog entry</a>, but got no connection to the board. </p> <p>If anyone can give me some suggestions, I would appreciate it! </p>
Rainbowduino 3.0 - Arduino IDE fails to upload