Columns: Answer · Id · CreationDate · Tags · Body · Title
<p>First of all, if you want to do it yourself using an Arduino, then I would recommend the Arduino Nano to save on weight. </p> <p>If you don't want to rely on noises (as used by some battery monitors) then you could program the Arduino to trigger something, perhaps a visible LED or some other physical indicator on the quadcopter. Or it could simply reduce power gently to all motors so that the quad lands slowly and safely.</p> <p>You could also consider using telemetry (i.e. sending back information from the quadcopter) if your r/c transmitter supports it. This will give you a real-time reading of the battery voltage as well as temperature, etc. </p> <p>Many r/c transmitters also allow you to set a simple timer on the transmitter, which you would set based on ground testing to know approximately when you should bring the quadcopter back to base. This is the least technical of the solutions, but it works well if you allow a bit of safety leeway to compensate for varying flight conditions.</p>
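A minimal sketch of the low-voltage check the answer describes, written in Python for clarity rather than as Arduino code. All the numbers (divider ratio, ADC resolution, 3.5 V/cell cutoff) are illustrative assumptions, not values from the answer:

```python
# Illustrative LiPo low-voltage detection: a resistor divider scales the pack
# voltage into the ADC range; we convert the raw reading back to a pack
# voltage and compare the average per-cell voltage against a cutoff.

ADC_MAX = 1023          # 10-bit ADC (e.g. Arduino Nano)
V_REF = 5.0             # ADC reference voltage, volts
DIVIDER_RATIO = 3.0     # e.g. 20k:10k divider -> pack voltage = 3 x ADC voltage
CELLS = 3               # 3S pack
LOW_PER_CELL = 3.5      # conservative cutoff, volts per cell

def pack_voltage(adc_counts: int) -> float:
    """Convert a raw ADC reading back to the full pack voltage."""
    return adc_counts / ADC_MAX * V_REF * DIVIDER_RATIO

def battery_low(adc_counts: int) -> bool:
    """True when the average per-cell voltage drops below the cutoff."""
    return pack_voltage(adc_counts) / CELLS < LOW_PER_CELL
```

On the real quadcopter this check would run in the Arduino loop and drive the LED or the gentle power reduction mentioned above.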
1163
2013-04-04T18:45:28.617
|battery|
<p>I'm building a quadcopter and I've seen that a Li-Po battery must not be entirely discharged, otherwise it could be damaged.</p> <p>How do you know when you have to stop your quadcopter or robot in order to prevent damage, since the voltage doesn't drop noticeably? Which part should monitor the battery charge? The ESCs? The BEC? The flight controller?</p>
How to know when a Li-Po battery is discharged?
<p>I will try to answer 2 and 3.</p> <h1>2</h1> <p>I am pretty sure the cheap ones won't, and I doubt even expensive ones would.</p> <p>In addition, the viscosity isn't the only thing you have to take into account when it comes to accuracy: distance, height, wear of the tubes, wear of the motor, liquids that can accumulate in the tube, and so on.</p> <p>That said, I wouldn't design the solution expecting a constant rate. I would rather provide an easy interface to calibrate the pumps for a particular situation. I would also recommend that the user re-check calibration on a regular basis (something like every 6 months, in my experience).</p> <h1>3</h1> <p>It doesn't really matter; you can use a simple relay, a solid state relay or a MOSFET. A professional-level custom circuit would use a MOSFET, as it is silent, small and fits better on a PCB. Personally, I would rather use a relay board for DIY stuff, which is surprisingly cheap and easy to find in single units these days. It also requires no soldering and doesn't require you to learn how to drive a MOSFET.</p>
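The per-fluid calibration the answer recommends can be sketched as a table of measured flow rates and a helper that converts a target volume into a pump run time. The fluid names and rates below are made up for illustration:

```python
# Time-based dosing with per-fluid calibration: run the pump for a known
# time, measure the delivered volume, and store the resulting rate. Dosing
# then converts a target volume into a run time for that specific fluid.

calibration = {          # ml per second, measured empirically per fluid
    "water": 15.0,
    "syrup": 9.5,        # more viscous -> slower, hence per-fluid rates
}

def run_time_ms(fluid: str, volume_ml: float) -> int:
    """Milliseconds to run the pump to deliver volume_ml of the given fluid."""
    rate_ml_per_s = calibration[fluid]
    return round(volume_ml / rate_ml_per_s * 1000)
```

Re-running the calibration step every few months, as suggested above, just means re-measuring and updating the table entries.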
1167
2013-04-05T04:02:25.527
|arduino|
<p>I'm building an Arduino-controlled pump system to be able to move fluids around. I need this to be fairly accurate, but extreme precision isn't required. Since there will be a variety of liquids moved through the pump, I've determined a peristaltic pump is the best fit. But I don't think I fully understand them, and have a few questions.</p> <ol> <li><p>Since I'll need to purge the system... Can a peristaltic pump push air? Let's assume you have 2m of tubing, and you pump a bunch of water through it. Can you remove the tube from the water reservoir so it is open to the air, and effectively purge the system of any remaining water?</p></li> <li><p>Since I want to fairly accurately measure flow, could I simply count milliseconds instead of using a flowmeter? ... Will a peristaltic pump ALWAYS pump at a constant rate, regardless of the viscosity of the fluid? That is, will maple syrup come out at the same rate as water?</p></li> <li><p>Shopping question, ignore I suppose ... Anyone know where I may find a fast/high flow peristaltic pump? I'm looking to be able to pump, at a minimum, .5oz/sec</p></li> <li><p>This depends on #3 ... What sort of relay would I want for toggling this on/off with an Arduino?</p></li> </ol>
Can the rate of peristaltic pump's flow be accurate across changes in fluid viscosity?
<p>Just a short note: there is a significant difference between information filters and Kalman filters. Although they are mathematically similar (inverses of each other), marginalization is simple in Kalman filters and complicated in information filters. Conversely, smoothing is simpler in information filters and complicated in Kalman filters. Since modern state estimation techniques tend to employ smoothing to mitigate the effects of non-linearities, enhance precision and allow loop closures, information filters are now on the rise.</p>
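The duality the note mentions can be written compactly. This is a standard textbook sketch with notation assumed here rather than defined in the note: $\Sigma$ and $\mu$ are the belief covariance and mean, $C$ the measurement matrix, $Q$ the measurement noise covariance, and $z$ the measurement.

```latex
% Canonical (information) parametrization of a Gaussian belief:
\Omega = \Sigma^{-1}, \qquad \xi = \Sigma^{-1}\mu

% The measurement update is a simple addition in information form:
\Omega' = \Omega + C^{\top} Q^{-1} C, \qquad
\xi' = \xi + C^{\top} Q^{-1} z

% ...while the Kalman (moments) form requires computing the gain:
K = \Sigma C^{\top} \left( C \Sigma C^{\top} + Q \right)^{-1}
```

Marginalization, by contrast, is a simple sub-block selection in moments form but requires a Schur complement in information form, which is the trade-off the note describes.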
1180
2013-04-08T23:18:35.460
|kalman-filter|algorithm|sensor-fusion|
<p>I have read many sources about the Kalman filter, yet none about the other approach to filtering, where the canonical parametrization is used instead of the moments parametrization. </p> <p>What is the difference?</p> <hr> <p>Other questions:</p> <ol> <li><p>Using the IF, can I forget the KF, as long as I remember that prediction is more complicated? <a href="http://en.wikipedia.org/wiki/Kalman_filter#Information_filter" rel="nofollow noreferrer">link</a> </p></li> <li><p>How can I visualize the uncertainty matrix as an ellipse? (Generally I see that the area represents uncertainty, but I mean the boundaries.) </p></li> <li><p>Is the simple addition of information in the IF possible only under the assumption that each sensor reads a different object? (Hence no association problem, which I posted <a href="https://robotics.stackexchange.com/questions/1181/object-level-sensor-fusion-for-multiobject-tracking">here</a>.)</p></li> </ol>
information filter instead of Kalman filter approach
<p>This is called "data association" in tracking literature. When you measure the position of an object, you need to know which object it was you measured. If you can estimate this probability, then you are free to choose the most likely association. This is a heavily researched topic, but boils down to Bayesian analysis. </p> <p>Here's a simple way:</p> <p>Assume we have two objects $o_1$ and $o_2$, an estimate of their position $x_1$ and $x_2$, and a measurement $z$. Suppose $z$ is just a position measurement. We'd like to update $o_1$ <strong>or</strong> $o_2$'s estimated position, but we have no idea which object we just measured. So, we find the <em>most likely</em> object, and update that.</p> <ol> <li><p>Estimate $p(z|x_1)$ and $p(z|x_2)$. The Kalman filter gives you the tools to do this.</p></li> <li><p>IF $p(z|x_1)&gt;p(z|x_2)$ AND $p(z|x_1)&gt;\gamma$, then update $x_1$ using $z$; the Kalman filter gives you the tools to do this as well. </p></li> <li><p>ELSE IF $p(z|x_2)&gt;p(z|x_1)$ AND $p(z|x_2)&gt;\gamma$, then update $x_2$ using $z$; the Kalman filter gives you the tools to do this as well. </p></li> <li><p>ELSE, no probability is greater than $\gamma$, so we drop the measurement. You can set $\gamma$ to zero if you don't care. </p></li> </ol> <p>Two things: First, you also need to know how to <em>initialize</em> an object. That's why $\gamma$ is important. If the measurement doesn't <em>seem</em> to match any known estimates, you might have just detected a new object.</p> <p>Second, I have a suspicion that your question will be heavily edited, so I'm not putting a lot of details in just now.</p> <hr> <p><strong>EDIT</strong> To address your newer, revised question:</p> <p>You have a different problem, but it is totally related. First, you want to fuse objects. This is as simple as estimating the probability that all measurements of object 1 and all measurements of object 2 are in fact of the same object. 
The algorithm is simple, but nearly intractable computationally. This is the same as track merging.</p> <p>To really do this with objects in 3D, you need a model of the object which allows you to estimate the probability that two partially overlapping (or possibly just nearby) objects are in fact the same object. But this is exactly the same as the acceptance gate approach.</p> <p><em>note</em></p> <p>There are only a few really useful ways to merge measurements, and they are well defined. I strongly suggest, if you wish to continue with abstract tracking and estimation tasks, that you read Bar-Shalom's <a href="http://rads.stackoverflow.com/amzn/click/047141655X" rel="nofollow">Tracking and Estimation book</a>. Please get it from a library if you can. I just don't know of a better reference.</p>
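The gating steps above can be sketched in a 1-D toy example. The Gaussian likelihood and the numeric values are illustrative assumptions (a real tracker would use the multivariate innovation covariance from its Kalman filter):

```python
import math

# Likelihood-based data association in 1-D: score each track by the Gaussian
# measurement likelihood p(z | x_i) with innovation variance S_i, then accept
# the best-scoring track only if it clears the gate threshold (gamma).

def likelihood(z: float, x: float, S: float) -> float:
    """Gaussian measurement likelihood p(z | x) with innovation variance S."""
    return math.exp(-0.5 * (z - x) ** 2 / S) / math.sqrt(2 * math.pi * S)

def associate(z: float, tracks: list, gate: float):
    """Return the index of the most likely track, or None if below the gate
    (in which case the measurement might initialize a new object)."""
    scores = [likelihood(z, x, S) for (x, S) in tracks]
    best = max(range(len(scores)), key=lambda i: scores[i])
    return best if scores[best] > gate else None
```

A measurement far from every track falls below the gate and is returned as `None`, which is exactly the "new object" branch described above.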
1181
2013-04-08T23:25:40.900
|kalman-filter|algorithm|sensor-fusion|
<p>I want to fuse objects coming from several sensors with different (sometimes overlapping!) fields of view. Having object lists, how can I determine whether some objects observed by different sensors are in fact the same object? Only then can I write an algorithm to predict the future state of such an object. </p> <p>From the literature I have read about these 4 steps:</p> <ol> <li>Plot-to-track association (first update track estimates and then associate by an "acceptance gate", or by a statistical approach such as PDAF or JPDAF)</li> <li>Track smoothing (lots of algorithms for generating a new improved estimate, e.g.: EKF, UKF, PF)</li> <li>Track initiation (create new tracks from unassociated plots)</li> <li>Track maintenance (delete a track if it was not associated for the last M turns; also: for those tracks that were associated, predict their new location based on previous heading and speed)</li> </ol> <p>So basically I am questioning point 1, the acceptance gate. For a single sensor I can imagine it can be just a comparison of the xy position of the object with the sensor measurement, and possibly velocity and heading. In my case, however, I already have object lists from each sensor in every cycle. There are algorithms for merging information about an object collected by different sensors (a great source is e.g. here: <a href="http://www.mathworks.de/matlabcentral/fileexchange/37807-measurement-fusion-state-vector-fusion" rel="noreferrer">http://www.mathworks.de/matlabcentral/fileexchange/37807-measurement-fusion-state-vector-fusion</a>), but the question is how to decide which objects should be fused, and which left as they are. Fields of view may overlap partly, not totally.</p>
object level sensor fusion for multiobject tracking
<p>Reading through some of the articles linked to in the Wikipedia article, I'll respectfully disagree with <a href="https://robotics.stackexchange.com/a/1183/37">@Theran</a>. The distinction seems quite well grounded, although Wikipedia does a poor job of making it. </p> <p>The term <em>embedded systems</em> (ES) has been around since the 60s and can, arguably, refer to anything from an airplane to a Furby. I think the term <em>cyber-physical systems</em> (CPS) was coined to distinguish it from what are traditionally thought of as embedded systems, namely closed-loop, non-networked "boxes" that operate in a very well-defined and constrained domain with a limited power to affect physical systems. CPS, on the other hand, embody the idea of <em>think globally, act locally</em> (my apologies to <a href="https://en.wikipedia.org/wiki/Think_Globally,_Act_Locally#Origin_in_town_planning" rel="nofollow noreferrer">Patrick Geddes</a>), that is, they are usually highly networked systems that bring about change in a local physical system dependent on the state and actions of other entities in the wider network. </p> <p>While many robotic applications fit this definition, and can therefore be termed cyber-physical systems, many are not. What bestows the honour on MIT's robotic garden, I believe, is the fact that the robots form <a href="https://www2.lirmm.fr/lirmm/interne/BIBLI/CDROM/ROB/2009/IROS_2009/papers/1119.pdf" rel="nofollow noreferrer">part of a wider, decentralised system</a> (PDF). It is the plants, equipped with sensors, that decide when to request watering or other services from the robots, while it is the robots that then decide between them which one will fulfil that request. 
Furthermore, not all CPS are thought of as "robotic", for example, an intelligent power grid.</p> <p><a href="https://en.wikipedia.org/wiki/Cybernetics" rel="nofollow noreferrer">Cybernetics</a>, as @Theran has noted, is occupied with the study of control systems, and so will form a core part of studying CPS, but also has a broader range of applications in fields such as mathematics, economics, and sociology, for example.</p> <p>This <a href="http://www.eecs.berkeley.edu/Pubs/TechRpts/2008/EECS-2008-8.pdf" rel="nofollow noreferrer">report on cyber-physical systems</a> (PDF), by Edward Lee from UC Berkeley, makes clear that CPS are a next step in the evolution of embedded systems with many of the same constraints (real-time capabilities, reliability) plus a few extra ones (robustness, adaptability, intelligence, interconnectedness). As such, the field of CPS is, in parts, concerned with developing completely new approaches to hard- and software architecture. For example:</p> <blockquote> <p>But I believe that to realize its full potential, CPS systems will require fundamentally new technologies [...] One approach that is very much a bottom-up approach is to modify computer architectures to deliver precision timing [...] Complementing bottom-up approaches are top-down solutions that center on the concept of model-based design [...] In this approach, "programs" are replaced by "models" that represent system behaviors of interest. Software is synthesized from the models. </p> </blockquote> <p>Lee's thoughts are echoed in this <a href="http://www.jiafuwan.net/download/cyber_physical_systems.pdf" rel="nofollow noreferrer">Embedded Computing column</a> (PDF) by Wayne Wolf of Georgia Tech.</p> <blockquote> <p>After all, we've had computers attached to stuff for a long time. Why, you may ask, do we need a new term to describe what we've been doing for years? [...] We have a surprisingly small amount of theory to tell us how to design computer-based control systems. 
Cyber-physical systems theory attempts to correct this deficiency. [...] Cyber-physical systems actively engage with the real world in real time and expend real energy. This requires a new understanding of computing as a physical act—a big change for computing.</p> </blockquote> <p>I recommend reading both articles for a good view on how CPS are different from "mere" embedded systems. <a href="http://cyberphysicalsystems.org" rel="nofollow noreferrer">Cyberphysicalsystems.org</a> has a concept map of CPS on their homepage that nicely illustrates many of the aspects involved in developing CPS. </p> <p>As for the origin of the term, none of the sources I found attributed it to anyone. Many papers defined it without attribution while clearly not being the first to use them. The term first crops up in the literature in <a href="http://scholar.google.com/scholar?q=%22cyber-physical+systems%22&amp;hl=en&amp;as_sdt=0%2C5&amp;as_ylo=&amp;as_yhi=2006" rel="nofollow noreferrer">2006</a> but, by that time, the US National Science Foundation had already organised a <a href="http://varma.ece.cmu.edu/cps/" rel="nofollow noreferrer">Workshop on Cyber-Physical Systems</a>, suggesting that the term was already in use by then.</p>
1182
2013-04-09T12:52:45.160
|distributed-systems|embedded-systems|
<p>These days, one often hears of <a href="http://en.wikipedia.org/wiki/Cyber-physical_system" rel="noreferrer">cyber-physical systems</a>. Reading on the subject, though, it is very unclear how those systems differ from distributed and/or embedded systems. Examples from Wikipedia itself only make them look more like traditional distributed systems. For example:</p> <blockquote> <p>A real-world example of such a system is the Distributed Robot Garden at MIT in which a team of robots tend a garden of tomato plants. This system combines distributed sensing (each plant is equipped with a sensor node monitoring its status), navigation, manipulation and wireless networking.</p> </blockquote> <p>Obviously, <em>any</em> distributed system consists of sensing, actuation (which can easily include navigation) and networking.</p> <p>My question is, how exactly do cyber-physical systems differ from traditional distributed systems? Is it just a fancy name, or is there something considerably different about it?</p>
Shedding light on "cyber-physical systems"
<p>With a \$100-200 budget it's unlikely you can duplicate the functionality of a \$695 commercially-produced product. Your device may be slower, bulkier, noisier, less reliable.</p> <p>One possible approach is to use dot matrix printer print-heads to drive pins. See example of pins being driven in a <a href="http://vimeo.com/12648709" rel="nofollow noreferrer">vimeo.com</a> video called “Dot Matrix Print Head - Testing”. You might need to replace the pins with push wires so that you could go from a 3-by-9 pin layout to a 4-by-4 or 4-by-5 arrangement. Some fairly old (and bulky) print-heads already contain push wires.</p> <p>If the print-head pins are driven ballistically they might not be able to work with the high duty cycle that a Braille dot would require. If you use push wires, you could have a cam that moves a latch plate back and forth, alternately latching wires in place or releasing them so they can move up or down. Or you could have a cam that lifts all the Braille pins at once, and a print head that drives some latch pins sideways to catch those Braille pins that that should remain up. Another cam would clear all the latch pins at the beginning of each character cycle. </p> <p>There are miniature solenoids as shown below with roughly 1cm square cross-section, with a 1.5mm diameter pin that moves about 3mm. (Listed on ebay as “Miniature Electric Solenoid - 9 V - Push Type - 2 oz. Force - 3 mm Stoke”, about \$7.) The duty cycle of this solenoid is 10%, so like a ballistic print head it would require some latch mechanism, as well as a spring or gravity return. You would need to attach push wires to the end of the shaft because the solenoids won't cluster closely enough.</p> <p><img src="https://i.stack.imgur.com/VvxM2.jpg" alt="Miniature Electric Solenoid - 9 V - Push Type"></p>
1198
2013-04-17T17:06:16.400
|design|
<p>I'm a newbie in robotics, and I'm doing a project on a dynamic Braille interface. Basically it's an 8*8 array of pins, each of which can be either fully up or fully down. How can I use as few motors as possible?</p> <p>I'm thinking of using an Arduino for easy interfacing with a computer.</p>
Dynamic braille interface
<p>If I understood correctly, you are referring to <code>robotic tendons</code>.</p> <p><img src="https://i.stack.imgur.com/RyYGi.jpg" alt="enter image description here"></p> <p>There is a lot of material on the subject if you search Google.</p>
1200
2013-04-18T08:17:48.590
|microcontroller|robotic-arm|movement|
<p>I am looking for the specific name of the wire used to control robotic arm movement, and where I can find some of it online. I want to control it using a microcontroller, so please suggest a good development kit.</p>
What is the wire used for robot hand movement called, and where can I find it online?
<p>None of the commercial air bearings I've seen has attempted to <em>seal</em> like this, so I think that your problems with vibration may lie elsewhere.</p> <hr> <p>The problems I have seen with air-bearing systems have been related to mechanical <a href="http://forums.reprap.org/read.php?14,131155,131155" rel="nofollow noreferrer">over constraint</a>.</p> <p>A high-performance linear bearing stage I once worked on used six bearings like these:</p> <p><a href="http://www.nelsonair.com/NA_prods_flatpad_fp.htm" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gHOsJ.gif" alt="Copyright of Nelson Air Corp assumed, Image used without permission but with attribution."></a></p> <p>Arranged in 3 pairs like this: <code>/\em_</code></p> <p>Where <code>/</code>, <code>\</code> &amp; <code>_</code> are a pair of bearings (side by side along the track), <code>e</code> was the encoder strip and <code>m</code> was a linear motor.</p> <p>The problem that we had was that no matter how we tuned the servo loop, if it was tuned tightly enough to get up to the speed and acceleration we needed (3m/s &amp; 2g) then the system would often get into a <a href="http://en.wikipedia.org/wiki/Limit_cycle" rel="nofollow noreferrer">limit cycle</a> (sometimes erroneously called <a href="http://en.wikipedia.org/wiki/Resonance" rel="nofollow noreferrer">resonance</a>) when stopping (i.e. 
it would sit humming).</p> <p>The way we solved this was to remove one of the air bearings on the <code>/</code> row, relocating it to the middle:</p> <pre><code>   Front view      Side
Unstable  Stable   view
o o       o o       |o
 m         m        |m
 e         e        |e
o o       o o       \o
o o        o        /o
</code></pre> <p>By removing the excess constraint, we appeared to remove the tendency for a perturbation in one air bearing to affect another bearing and then ripple through to the other bearings.</p> <p>Depending on your typical direction of motion and the flatness of your surface, you may find that a three point bearing system may work better for your system:</p> <p><a href="http://www.newwayairbearings.com/design/technical-resources/new-way-technical-reports/tech-report-2-vacuum-preloaded-x-y-stage" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JzlXh.jpg" alt="Copyright of New Way Air Bearings, Image used without permission but with attribution."></a></p> <p>You may also want to consider purchasing commercial air-bearing modules and attaching them to a frame (at least for a prototype) rather than attempting to manufacture your own air bearings. That way you can leverage the knowledge and support provided by the air-bearing manufacturer.</p> <p>One other point is that we used ordinary <a href="http://en.wikipedia.org/wiki/Polyimide" rel="nofollow noreferrer">Polyimide</a> tape on our air bearings. We considered various more permanent methods, but in the end decided that being able to easily remove old, scarred or scored tape and replace it with fresh tape quickly and easily made the most sense for our application.</p>
1202
2013-04-18T10:21:39.047
|linear-bearing|
<p>We have an air bearing for planar xy motion. Today it consists of four pockets according to the picture.</p> <p><img src="https://i.stack.imgur.com/dET39.png" alt="Current design" /></p> <p>In the current design there are no sealings around the peripheries of the pockets, and we suspect that is the reason we get vibrations.</p> <p>In the current design we control the pressure, the same for all four recesses. The flow is adjustable individually for each recess. In practice it is very hard to tune.</p> <p>For the non-recess surfaces we have used <a href="http://www.tss.trelleborg.com/remotemedia/media/globalformastercontent/downloadsautomaticlycreatedbyscript/catalogs/turcite_b_slydway_gb_en.pdf" rel="nofollow noreferrer">Slydway</a>, as we need to be able to operate it without pressure occasionally.</p> <p>To try to solve the problem, we plan to develop a prototype where we can try out the effect of using sealings around the periphery of the pockets. The idea is something like this: <img src="https://i.stack.imgur.com/7yKuf.png" alt="Idea for new design" /></p> <p><strong>Questions</strong></p> <ul> <li><p>Is the idea of adding sealings good? (sanity check)</p> </li> <li><p>Suggestions for sealings? (I'm thinking of a porous material like felt or cigarette filter.)</p> <p>Of course all suggestions are welcome.</p> </li> </ul> <p><em>Edit</em></p> <p>I'm going to try to add grooves around the recesses to evacuate the air that leaks. My thinking is that this will give us a more defined area under pressure.</p>
Problem with vibrations in air bearing
<p>IR sensors are well suited to detecting objects, but they are quite sensitive to other IR light sources and sunlight. So, in conditions where stray IR and sunlight are not a concern, you can always use an IR LED and a photodiode together as a simple IR sensor pair.</p>
1211
2013-04-21T13:33:17.253
|motor|sensors|
<p>I have a small device that's picking up small rocks from a pile and moving them to another place. It's a kind of crude way of trying to push the whole pile onto a bigger gear and hoping one of the rocks is pushed into one of the spaces between the gear teeth, carried around, and dropped off on the other side of the spinning gear. At that point I want to know whether the machine successfully got a rock there; if not, it should keep spinning the gear until it turns up a single rock on the other side. If a rock is present at the spot, the gear should stop spinning until the rock is taken care of by the rest of the machine.</p> <p>What kind of device can I use to sense whether I succeeded in getting a rock to the other side of the gear?</p> <p><img src="https://i.stack.imgur.com/JEHag.png" alt="enter image description here" /></p> <p>This is just a part of a bigger system. To sum up, I need the sensor to signal when a rock is singled out and separated from the rest, so the machine can continue work on that single rock.</p> <p>I am building this using an Arduino to move the gear around, so the sensor needs to be something that can be controlled by an Arduino.</p>
What kind of sensor do I need for knowing that something is placed at a position?
<p>I don't think this requires parts that don't yet exist in robotics. The elevator itself is pretty much a cabin moving on a rail. It may not be built from off-the-shelf components, but building it shouldn't be difficult.</p> <p>There are other reasons, however, some of which are mentioned in the same Wikipedia article you have linked to. So in addition to "not having cables strong enough" and "the slow speed":</p> <ul> <li><p>Cable wear: The cable used for the elevator will be partially below the clouds (subject to <strong>warmth</strong>, <strong>dry air</strong>, <strong>rain</strong>, <strong>snow</strong>, <strong>wind</strong>, <strong>tornadoes</strong> etc), partially in and above clouds (<strong>humid</strong>, <strong>cold</strong>), partially in vacuum (<strong>extreme cold</strong>, <strong>no air pressure</strong>), sometimes in the sun (<strong>extremely hot</strong> outside the atmosphere), sometimes in the shade (<strong>near 0 degrees</strong> outside the atmosphere) etc. So any cable used for the elevator should be able to resist <em>all</em> these environments.</p> <p>The situation is made worse by considering the <strong>friction</strong> between the "wheels" of the elevator and the track, <strong>chemical reactions</strong> with the environment, <strong>radiation</strong>, <strong>solar flares</strong> and even <strong>biological</strong> reasons (for example an eagle deciding he doesn't like the cable, or a sort of moss evolving to take advantage of (i.e. eat) the material)</p></li> <li>Cable repair: Sooner or later, the cable will need repair. There is no such thing as "indestructible". Now imagine, how would you repair such a long cable? Surely, you can't just cut out the bad parts and patch it, because the "welded" parts would probably not be strong enough. Replacing the whole cable would be an option, but quite an expensive one, given the special tough material and the long length. 
So, at least this issue would delay building the space elevator until they are sure it's doable.</li> <li>Safety of the elevator: One challenge is to be able to provide safety for the elevator. For example, how would you prevent asteroids from damaging it?</li> <li>Safety of the passengers: Another challenge is to provide safety for the passengers. At a slow speed, passing through the <a href="http://en.wikipedia.org/wiki/Van_Allen_radiation_belts" rel="nofollow">Van Allen radiation belts</a> is not healthy. This means that there would have to be adequate radiation shielding on the elevator, adding weight to it and making it harder to build.</li> <li>Safety of others: Yet another issue is providing safety for others, such as airplanes or space satellites. Most probably they would all need to become aware of the existence of this new structure, involving reprogramming a large number of air/space-craft.</li> </ul> <p>In conclusion, probably the most important thing hindering the construction of the space elevator is the construction of the cable. Not only should it be strong enough to withstand the stress, but it should also be able to endure practically any kind of environment. The other reasons are minor nuisances in comparison, but they would still further delay the deployment of the space elevator.</p>
1214
2013-04-22T12:04:46.710
|mobile-robot|research|
<p>In order to build and operate a <a href="http://en.wikipedia.org/wiki/Space_elevator">space elevator</a> moving crafts and people into space, there are two big challenges that have not been solved yet:</p> <ul> <li>Finding a cable with enough <a href="http://simple.wikipedia.org/wiki/Tensile_strength">tensile strength</a>,</li> <li>Moving stuff along the cable at a reasonable speed.</li> </ul> <p>Apart from those two, what are the other technical challenges to solve, especially things that do not exist yet in robotics and need to be invented?</p>
Space elevator: What is still needed, apart from the cable and propulsion?
<p>One way to do this is complicated, involving computer vision and a robot arm or other manipulator that can directly affect the orientation of each rock.</p> <p>The low-tech way to do it would be to use a separate conveyor that gave you one rock at a time, and use walls to funnel it into a gate that matches the internal dimensions of the straw. You would then just detect when the rock begins to push on the funnel walls (instead of travelling through) and knock it backwards to try again. After a few tries, it should end up in the proper orientation (or you can give up and reject it at that point). An even simpler way would be to skip the detection and just oscillate the funnel continuously, which would be safe if you're assured that each rock is guaranteed to have a working orientation.</p> <p>It's similar to <a href="http://www.faqs.org/patents/app/20130075226" rel="nofollow">this patent</a> which suggests a single wall, with the other side being open for improperly-aligned objects to fall off the belt.</p>
1218
2013-04-23T14:54:11.400
|arduino|motor|microcontroller|motion|
<p>This is part two of my larger robot; it follows up what happens with the small rocks here: <a href="https://robotics.stackexchange.com/questions/1211/what-kind-of-sensor-do-i-need-for-knowing-that-something-is-placed-at-a-position">What kind of sensor do i need for knowing that something is placed at a position?</a></p> <p>Now I am taking the rocks down a tube for placement. In this case they need to be adjusted so they always stand up before they enter the tube. Obviously a rectangular rock won't fit if it comes in sideways. The dimensions here are pretty small. The rocks are about 15 mm x 10 mm. The tube I use is actually a plastic drinking straw, and the material I use for the rest of the robot is Lego, powered by stepper motors which drive the conveyor belts that move the rocks. The controller is an Arduino.</p> <p><img src="https://i.stack.imgur.com/J5nzR.png" alt="enter image description here"></p> <p>(Sorry for the lousy illustration; if you know a good paint program for Mac like the one used to draw the picture in my other post, please tell me :-))</p> <p>The rocks will always enter one at a time and have as much time as they need to be adjusted to fit and enter the tube so they fall down. The question is, how do I ensure all rocks are turned the right way when they get to the straw? I'm not sure if using Lego to build the robot is off topic here, but a solution involving Lego is preferable. And it has to be controlled by an Arduino. </p> <p>General tips on how to split a complex task into subtasks robots can do are also welcome; is there any theory behind the most common subtasks a job requires when designing multiple robots to do it?</p>
How do I adjust objects on a conveyor belt into the proper orientation?
<p>The solution is actually not quite linear. There are at least two cases: 1) The fastest solution does not require maximum turning at all times. 2) The fastest solution does require maximum turning at all times.</p> <p>For an example of 1), consider the goal straight ahead of the drone. For an example of 2), consider a goal that is very close, but straight behind the drone; the fastest solution is a circle-like path.</p> <p>Finally, the problem is somewhat under-specified, unless you also add a desired velocity/spin at the time of arrival.</p> <p>So, the problem as stated (even with adding mass) requires solving discontinuous differential equations for an underspecified problem. There may exist a closed-form solution to this, but when I've had to do this (in simulation systems, not real-world systems) I've ended up using simple planners and PID-type behaviors, rather than finding the "optimal" solution.</p> <p>For example, one simple planner could be to first apply torque in closed form to turn to face the desired target point with spin zero (this is a simple quadratic equation) and then turn on max thrust; you will get to the point (at a pretty high velocity). If you want to get to the point with velocity 0, then you have to reverse thrust at the middle of the trajectory.</p>
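The "turn to face the target with spin zero" step of the simple planner can be sketched as a bang-bang rotation. This is a sketch under the assumption that the drone starts at zero spin; the symbols follow the question's notation (`U` max torque, `I` moment of inertia):

```python
import math

# Bang-bang rest-to-rest rotation: apply max torque for half the angle, then
# reverse torque for the other half. With angular acceleration a = U / I,
# the half-angle satisfies theta/2 = 0.5 * a * (t/2)**2, which gives the
# "simple quadratic equation" mentioned above:
#   t = 2 * sqrt(theta / a)

def turn_time(theta: float, U: float, I: float) -> float:
    """Minimum rest-to-rest time to rotate through |theta| rad at max torque U."""
    alpha = U / I                      # max angular acceleration
    return 2.0 * math.sqrt(abs(theta) / alpha)
```

The full planner would then point the thrust axis at the target and burn, reversing thrust at the midpoint if arrival at zero velocity is desired.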
1219
2013-04-23T15:56:36.040
|design|algorithm|kinematics|navigation|
<p>Imagine a "drone" and a target point on a 2d plane. Assuming the target is stationary, there are eight parameters:</p> <pre><code>P = my position Q = target's position V = my velocity I = my moment of inertia w = my angular velocity s = my angular position T = max thrust U = max torque </code></pre> <p>The drone's job is to get to the target as fast as possible, obeying max torque and max thrust. There are only two ways to apply the torque, since this is only in a 2d plane. Thrust is restricted to only go in one direction relative to the orientation of the craft, and cannot be aimed without rotating the drone. Neglect any resistance, you can just pretend it is floating around in 2d outer space. Let's say the drone checks an equation at time interval <code>t</code> (maybe something like every .01 seconds), plugs in the parameters, and adjusts its torque and thrust accordingly.</p> <ul> <li>What should the equations for thrust and torque be?</li> </ul> <h3>What have we tried?</h3> <p>We know that the time it takes for the drone to reach the target in the x-direction has to be the same for the same time in the y-direction. There is going to have to be some integral over time in each dimension to account for the changing thrust based on total thrust, and total thrust in each direction given the changing angular position. I have no idea how to tie the torque and thrust together in a practical way where a function can just be called to give what thrust and torque should be applied over the interval <code>t</code> unless there is some other technique.</p>
Drone targeting
<p>I updated ArduIMU's firmware and successfully got 100 Hz of output without disabling normalization.</p> <hr> <p><strong>Update:</strong></p> <p>Thanks to ArduIMU's Kalman filter firmware I got up to 180 Hz of output, plus all the noise removed.</p>
1221
2013-04-24T11:22:53.977
|arduino|quadcopter|imu|pid|
<p>We are using <a href="http://code.google.com/p/ardu-imu/" rel="nofollow noreferrer">ArduIMU (V3)</a> as our quadrotor's inertial measurement unit. (We have a separate board to control all motors; it is not done by the ArduIMU itself.)</p> <p>As mentioned <a href="http://code.google.com/p/ardu-imu/wiki/Output#Output_Rate" rel="nofollow noreferrer">here</a>, the output rate of this module is only about 8 Hz.</p> <p>Isn't that super slow for controlling a quadrotor? I'm asking because, as mentioned in <a href="https://robotics.stackexchange.com/a/246/1137">this answer</a>, a quadrotor needs at least 200 Hz of control frequency to easily stay in one spot, and our ESCs are configured to work with a 450 Hz refresh rate. Any working PID controller I have seen before for quadrotors used at least 200-400 Hz of control frequency.</p> <p>I asked a similar question of Ahmad Byagowi (one of the developers of ArduIMU) and he answered:</p> <blockquote> <p>The arduimu calculates the dcm matrices and that makes it so slow. If you disable the dcm output, you can get up to 100 hz gyro, acc and so on.</p> </blockquote> <p>So, what will happen if I disable DCM in the firmware? Is it really important? We did a simulation before and our PID controller works pretty well without DCM.</p>
Quadrotor control using ArduIMU
<p>There's more than one way to do it (<a href="http://en.wikipedia.org/wiki/There%27s_more_than_one_way_to_do_it">TMTOWTDI</a>).</p> <p>There are a several ways to connect 8 analog inputs to an Arduino.</p> <ul> <li>Add an analog multiplexer, as georgebrindeiro suggested. Such as: <a href="http://shieldlist.org/appliedplatonics/analoginput">(a)</a>, <a href="http://shieldlist.org/mayhewlabs/mux">(b)</a>, <a href="http://shieldlist.org/criticalvelocity/monstermux">(c)</a>, <a href="http://blueberryde.com/shop/24-channel-analog-expander-shield/">(d)</a>, etc.</li> <li>Replace the Arduino with one that has enough analog inputs already built-in. Such as the Arduino Mini with 8 analog inputs, the Arduino Due with 12 analog inputs <a href="http://arduino.cc/en/Main/ArduinoBoardDue">(b)</a>, <a href="http://arduino.cc/en/Main/ArduinoBoardMini">(a)</a> the Arduino Mega with 16 analog inputs <a href="http://arduino.cc/en/Main/arduinoBoardMega">(b)</a>, the Teensy 3.0 with 14 analog inputs <a href="http://forum.jeelabs.net/node/1429.html">(c)</a>, etc.</li> <li>Add one or more external ADCs, and connect it to digital pins of your CPU. 
Such as: Arduino 4-20mA Shield with 16-bit ADC <a href="http://erdosmiller.com/products/arduino-4-20ma-shield">(a)</a>; ADS1115 16-Bit ADC -- with 4 of these boards, 2 digital pins from the Arduino are used to read 16 analog inputs <a href="http://learn.adafruit.com/adafruit-4-channel-adc-breakouts">(b)</a>; MCP3208 8 channel 12 bit SPI ADC <a href="http://arduino.cc/forum/index.php/topic,18827.0.html">(c)</a>; 8-channel 16-bit Raspi analog board <a href="http://hackaday.com/2012/09/14/a-truly-professional-raspi-analog-input/">(d)</a>; Arduino and the LTC2440 24bit ADC <a href="http://blog.arduino.cc/2012/07/11/arduino-and-the-ltc2440-24bit-adc/">(e)</a> <a href="http://blog.arduino.cc/2010/11/29/tired-of-a-10-bit-res-hook-up-a-better-analog-to-digital-converter/">(f)</a>; "The Brick" 8 Single Ended 16 Bit Analog Inputs <a href="http://arduino.cc/forum/index.php?topic=69358.0">(g)</a>; etc.</li> </ul> <p>p.s.: Multiplexing analog inputs requires an "analog multiplexer" -- most multiplexers are "digital muxes" which won't work with analog inputs.</p> <p>p.p.s.: All the ultrasonic distance sensors I've used only require digital I/O pins. What kind of distance sensors are you using?</p>
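Whichever analog multiplexer route you take (e.g. an 8-channel 4051-type part), selecting a channel is just driving the select pins from the binary channel number before each read; a hedged sketch of that bit math (the function name and LSB-first pin ordering are my own assumptions):

```python
def mux_select_bits(channel, n_select_pins=3):
    """Return HIGH/LOW states (LSB first) for the mux select pins.

    E.g. channel 5 on a 3-select-pin (8-channel) mux -> [1, 0, 1],
    meaning S0 HIGH, S1 LOW, S2 HIGH.
    """
    if not 0 <= channel < 2 ** n_select_pins:
        raise ValueError("channel out of range for this mux")
    return [(channel >> bit) & 1 for bit in range(n_select_pins)]
```

On the Arduino side you would write these levels to three digital pins, wait briefly for the mux to settle, then call a single analog read on the shared input.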
1227
2013-04-25T17:21:33.500
|arduino|microcontroller|input|
<p>I'm in the planning stages for a project using the Arduino Uno to control 8 distance sensors, and have run into a little road block: the Uno only has six analog input pins. So I'm wondering, is there any way for this to work? If so, how?</p>
Connecting More Than Six Analog Input Pins to arduino
<p>I was looking for something similar, and I found <a href="http://www.alicesastroinfo.com/2012/07/mars-rover-rocker-bogie-differential/" rel="nofollow noreferrer">Mars Rover Rocker-Bogie Differential</a> to be really helpful.</p> <p>With my level of understanding it took me a while, but the link my professor provided really helped me: it has decent animations to aid understanding of the concept.</p> <p>Okay, so here's my understanding of the mechanism. The differential system essentially consists of either of these two mechanisms: the differential gearbox or the differential bar.</p> <p>The differential gearbox consists of three gears. The one in the middle (2) is connected to the body, while the ones on the side (1) connect to the rockers of the system.</p> <p><img src="https://i.stack.imgur.com/sZMdI.png" alt=""></p> <p>If you were to pick the rocker-bogie up, hold the body intact, and tilt one of the side rockers downwards, the gears in the gearbox would make the rocker on the other side tilt upwards. If you tilt it upwards, the other one would tilt downwards. More complex systems use more gears to make the whole system more sensitive to the movements.</p> <p>As for the differential bar, it's not used on a rover where it would interfere with the solar panel, but it works in a similar way. It is a rod that connects the two rockers, and this rod is pivoted in the middle onto the body.</p> <p>So, as far as my understanding goes, look at the picture and consider the gears on either side (gears numbered 1). If one of those gears (let's say the top one) is turning clockwise, it would make gear no. 2 (which sits in the grooves of both of the other gears) move anti-clockwise, which in turn would make the other no. 1 gear turn anti-clockwise.</p> <p><strong>NOTE:</strong> ALL THE GEARS FIT INTO THE GROOVES OF EACH OTHER. So any rotation that takes place in this system affects the other two gears, and the gears are attached or arranged in such a way that the no. 1 gears run in opposite directions to each other.</p> <p><img src="https://i.stack.imgur.com/icHbj.jpg" alt="Here&#39;s the differential Bar on Curiosity"></p> <p>Here you can see the differential bar on Curiosity.</p>
1235
2013-04-25T22:40:55.590
|wheeled-robot|rocker-bogie|
<p>How does the rocker-bogie mechanism keep the body flat / keep the solar panel almost flat all the time? I know there is a differential system that connects both rockers (left and right) together. But how does it actually work?</p> <p><em><strong>Edited:</strong></em> Please provide relevant references.</p>
How does rocker bogie keep the body almost flat?
<p>I found a web site, endurance-rc.com, that uses WiFi, if that would help with the quadcopter. They have some neat applications. I've put in an inquiry through their email chain.</p>
1246
2013-04-29T04:41:07.797
|quadcopter|
<p>So I have a quadrocopter. It does come with a remote, but I intend to make certain modifications to the copter, like installing a camera, a mechanical manipulator, and other random modifications. The remote that comes with the copter isn't flexible enough to help with such functions, and it lacks spare buttons.</p> <p>I was wondering if I could somehow program the quadrocopter to respond to my Xbox controller. I was planning on using my laptop's Bluetooth connection to talk to the copter. The Xbox controller connected to the computer would then be used to control the quadrocopter. So my question is, how exactly do I program the controller? How do I go about making all of this possible?</p> <p>I understand this question is really vague and that there are too many options out there, but I do need help figuring this out.</p>
Using an Xbox controller to fly a Quadrocopter
<p>These are just terms to describe the "layers" of control on the robot. The "joint level" means the position of each actuator (leg), and the "task level" means the current goal of the robot (like go forward, go east, go to location X, etc). </p> <p>This paragraph is about sensing. There are (apparently) position sensors in all of the leg joints, so the robot is capable of <a href="http://en.wikipedia.org/wiki/PD_controller">closed loop</a> control at that level -- no user intervention needed. ("Proprioceptive" is a $5 word for "I know where my legs are because I can sense them".)</p> <p>However, having no external sensors means that the robot can't tell where it is in relation to the world and as a result it can't determine what action it should take to accomplish its goals. So, it must run as <a href="http://en.wikipedia.org/wiki/Open-loop_controller">open loop</a>. Presumably this means that they are either giving it a pre-scripted set of actions, or directing it via remote control. </p> <p>So, the task-level control handles the "go forward" concept, but the joint-level control handles the uneven terrain underfoot.</p>
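To make the layering concrete, here is a hedged sketch (the gains, names, and unit-inertia joint are invented for illustration, not RHex's actual controller). The task level would only ever change `target`; this joint-level closed loop runs at high rate on the proprioceptive position sensors:

```python
def pd_torque(target, position, velocity, kp=8.0, kd=3.0):
    """Joint-level PD control: torque from position error and joint rate."""
    return kp * (target - position) - kd * velocity

def simulate_joint(target, steps=2000, dt=0.01):
    """Tiny Euler simulation of one unit-inertia joint tracking a target."""
    pos, vel = 0.0, 0.0
    for _ in range(steps):
        vel += pd_torque(target, pos, vel) * dt  # torque = accel (inertia 1)
        pos += vel * dt
    return pos
```

The "task space open loop" part is then just the absence of any feedback on `target` itself: the commanded leg trajectory is pre-scripted or remote-controlled rather than corrected from body-state sensing.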
1253
2013-04-30T11:34:27.403
|mobile-robot|control|walking-robot|hexapod|
<p>While doing a literature review of mobile robots in general and mobile hexapods in particular I came across a control system defined as "Task level open loop" and "Joint level closed loop" system.</p> <blockquote> <p>The present prototype robot has no external sensors by which its body state may be estimated. Thus, in our simulations and experiments, we have used joint space closed loop (“proprioceptive”) but task space open loop control strategies.</p> </blockquote> <p>The relevant paper is <a href="http://ai.eecs.umich.edu/RHex/Papers/ijrr2001.pdf" rel="nofollow">A simple and highly mobile hexapod</a></p> <p>What is the meaning of the terms "joint-level" and "task-level" in the context of the Rhex hexapod?</p>
What is the difference between Task-Level and Joint-Level Control Systems?
<p>Flow-rate meters based on measuring Coriolis-force reactions are widely used in process control systems. The measurement principles are mentioned or illustrated at numerous web sites, including youtube video e2NGvrkmP8Y, <a href="http://www.youtube.com/watch?feature=player_embedded&amp;v=e2NGvrkmP8Y" rel="nofollow noreferrer">Back to Basics: Flow Measurement (Corilolis)</a> (via oscillating-tube phase changes) and Schenck Americas' <a href="http://www.schenckamericas.com/prod_solidsflowmeters.html" rel="nofollow noreferrer">Solids Flow Meters</a> page, Coriolis Principle section (via vaned-rotor reaction forces) . Coriolis-based meters measure the delivered mass of solids, liquids, and gases. Delivered volume can be calculated if density is known. Purchase of a Coriolis-based meter seems far more feasible than building one from scratch.</p> <p>The Schenck page also illustrates impact-plate-based measuring systems and deflection-chute-based measuring systems. These integrate impulses or deflection force to measure flow. Again, purchase seems more feasible than building one yourself.</p> <p>At a hobby level, I think two approaches should be considered first: (A) calibrate a constant-flow-rate system, and control delivery time; (B) implement a fill-and-dump system, and control number of cycles.</p> <p>(A) Using a method from previous questions (<a href="https://robotics.stackexchange.com/questions/1145/what-would-be-the-best-way-to-handle-food-grains">1</a>,<a href="https://robotics.stackexchange.com/questions/1167/can-the-rate-of-peristaltic-pumps-flow-be-accurate-across-changes-in-fluid-visc">2</a>) set up a system that moves your substance at a reasonably constant rate; measure that rate; thereafter, control how long or how fast the system runs to deliver a desired amount of substance.</p> <p>(B) Again using methods from previous questions, set up a system to fill a cup, box, chute, or transfer bin some number of times, and dump it into a receiver. 
Issues like detecting when the cup is full, preventing or recycling overflow of the cup, and dumping the cup arise but can be resolved. The issue of limited resolution (where the delivered amount is an integer multiple of cup size) can be treated by using a smaller cup, or using different cups for different jobs, or using a box with an adjustable side or bottom, or having the box sit on a load cell during filling.</p>
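Option (B) reduces to simple integer bookkeeping; a sketch (the function name and units are my own choices):

```python
def fill_and_dump_plan(target_amount, cup_size):
    """Number of full cups to dump, plus the leftover a fixed cup cannot
    deliver. The leftover is the resolution limit mentioned above: shrink
    the cup, or use an adjustable box, to shrink it."""
    cycles = int(target_amount // cup_size)
    leftover = target_amount - cycles * cup_size
    return cycles, leftover
```

For example, dispensing 250 g with a 40 g cup means six dump cycles and 10 g undeliverable without a smaller cup or a load cell under the box.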
1259
2013-05-03T07:44:58.167
|mechanism|manufacturing|
<p>I've been watching too much <a href="http://science.discovery.com/tv-shows/how-its-made" rel="nofollow">How It's Made</a>, and I've been wondering how they build devices that spray/inject/dispense a finite amount of liquid (to within some amount of error). I wanted to try this for a hobby project. I'm working on a device that dispenses dry goods in the amount I specify.</p> <p>Do I use some kind of special nozzle/valve which can open and close at high speeds? How can I dispense a known quantity from a reservoir of a fluid substance onto each individual unit passing along an assembly line, or an amount specified by the user into another container?</p>
How to measure and dispense a finite amount of powder or liquid
<p>There is now an easy way of installing ROS on OSX using conda: <code>conda install ros-noetic-desktop -c robostack</code>. More info on <a href="https://medium.com/robostack/cross-platform-conda-packages-for-ros-fa1974fd1de3" rel="nofollow noreferrer">https://medium.com/robostack/cross-platform-conda-packages-for-ros-fa1974fd1de3</a> and <a href="https://github.com/RoboStack/ros-noetic" rel="nofollow noreferrer">https://github.com/RoboStack/ros-noetic</a></p>
1263
2013-05-05T21:22:41.277
|ros|
<p>The latest OSX documentation I found on the website is from 2011, and the latest build is from over a year ago. I'm a complete n00b to all things ROS and wanted to start playing with it. What is the easiest way?</p> <p><strong>Edit:</strong> <a href="http://www.ros.org/wiki/groovy/Installation/OSX/Homebrew/Source" rel="noreferrer">this version</a> of the installation instructions is more recent (April 2013), but it says that</p> <blockquote> <p>OSX is not officially supported by ROS and the installation might fail for several reasons. This page does not (yet) contain instructions for most higher level ROS packages, only for the base system. This includes the middleware and command line tools but not much more.</p> </blockquote> <p>"Does not contain instructions" also means it doesn't work? What do OSX users who work on ROS usually do? Run it on an Ubuntu VM? Install it just fine on their own on OSX, even though there aren't detailed instructions on the website?</p>
What is the easiest way to install ROS on OSX Mountain Lion?
<p>The optimal reward function depends on the learning objective, i.e. what is to be learned. For simple problems it may be possible to find a closed form representation for the optimal reward function. In fact for really simple problems I'm confident it is possible though I know of no formal methods for doing so (I suspect utility theory would address this question). For more complex problems I would argue that it is not possible to find a closed form solution.</p> <p>Instead of seeking the optimal function we could look to an expert for a good reward function. One approach to doing so is a technique called Inverse Reinforcement Learning (IRL). It formulates a learning problem as a reinforcement learning problem where the reward function is unknown and the objective of the learning process. The paper Apprenticeship Learning via Inverse Reinforcement Learning by <a href="http://www.cs.berkeley.edu/~pabbeel/" rel="nofollow">Pieter Abbeel</a> and <a href="http://ai.stanford.edu/~ang/" rel="nofollow">Andrew Ng</a> is a good place to start learning about IRL.</p>
1266
2013-05-07T13:25:22.977
|machine-learning|
<p>Let's think of the following situations:</p> <ul> <li>You are teaching a robot to play ping pong</li> <li>You are teaching a program to calculate square root</li> <li>You are teaching math to a kid in school</li> </ul> <p>These situations (i.e. supervised learning), and many others have one thing (among others) in common: the learner gets a reward based on its performance.</p> <p>My question is, what should the reward function look like? Is there a "best" answer, or does it depend on the situation? If it depends on the situation, how does one determine which reward function to pick?</p> <p>For example, take the following three reward functions:</p> <p><img src="https://i.stack.imgur.com/UvrHs.png" alt="enter image description here"></p> <ul> <li>Function <code>A</code> says: <ul> <li>below a certain point, bad or worse are the same: you get nothing</li> <li>there is a clear difference between almost good and perfect</li> </ul></li> <li>Function <code>B</code> says: <ul> <li>you get reward linearly proportional to your performance</li> </ul></li> <li>Function <code>C</code> says: <ul> <li>if your performance is bad, it's ok, you did your best: you still get some reward</li> <li>there is not much difference between perfect and almost good</li> </ul></li> </ul> <p>Intuitively, I'd think <code>A</code> would make the robot very focused and learn the exact pattern, but become stupid when dealing with similar patterns, while <code>C</code> would make it more adaptable to change at the cost of losing perfection.</p> <p>One might also think of more complex functions, just to show but few:</p> <p><img src="https://i.stack.imgur.com/eCS4R.png" alt="enter image description here"></p> <p>So, how does one know which function to pick? Is it known which behavior would emerge from (at least) the basic <code>A</code>, <code>B</code> and <code>C</code> functions?</p> <hr> <p>A side question is would this be fundamentally different for robots and human kids?</p>
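For concreteness, the three basic shapes could be written roughly like this (the threshold and exponent are arbitrary choices of mine, with performance p in [0, 1]):

```python
def reward_a(p, threshold=0.6):
    """A: below the threshold you get nothing; above it, reward grows fast."""
    return 0.0 if p < threshold else (p - threshold) / (1 - threshold)

def reward_b(p):
    """B: reward linearly proportional to performance."""
    return p

def reward_c(p):
    """C: some reward even for bad performance; flat near perfection."""
    return p ** 0.25
```

Plotting these over p in [0, 1] reproduces the three curves sketched above.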
What reward function results in optimal learning?
<p>Read <a href="http://learn.adafruit.com/ir-sensor" rel="nofollow">Adafruit's IR sensor tutorial</a> along with the many other IR tutorials out there.</p> <p>Then, for an Arduino, I would use (and have used) <a href="http://www.righto.com/2009/08/multi-protocol-infrared-remote-library.html" rel="nofollow">Ken's IRremote library</a>, as it has several examples, supports several device protocols, requires only a <a href="https://www.sparkfun.com/products/10266" rel="nofollow">TSOP38238</a>, and is quickly adaptable to others.</p> <p>Look at the GitHub network of forks, as others have added further device protocols.</p>
1274
2013-05-08T13:30:52.200
|mobile-robot|arduino|raspberry-pi|
<p>I bought my kid a <a href="http://www.gigo.com.tw/english/_toy/detail.php?MID=100&amp;SID=101&amp;ID=974" rel="nofollow">robotics kit</a> with several motors and an infrared remote control (you can steer the robot using IR remote control).</p> <p>Now I want to take it to the next level and control the robots from a PC or a Raspberry Pi.</p> <p>What is the simplest approach to do this?</p> <p>I am thinking about 2 possible ways:</p> <ol> <li>Find out the protocol the existing remote control uses and then emulate the IR signals using Arduino (Arduino is sending the IR signals).</li> <li>Find a piece of hardware, which presses the buttons on the remote control and control it via to Arduino (Arduino is sending signals to the button pushers, the remote control is sending the IR signals to the robot).</li> </ol>
How to connect an infrared remote control to PC or Arduino or Raspberry Pi?
<p>You need to command your motors by torque, not speed.</p> <p>With n motors you get n force vectors; combine them into one resultant vector, regulate that according to the desired speed, and from it compute the torque for each of the three motors.</p> <p>To control an AC motor by current you need an inverter capable of Field Oriented Control, and for the vectoring an Arduino could be fine.</p>
1276
2013-05-08T17:33:04.103
|control|motor|
<p>I have a three wheeled vehicle in a tricycle configuration attached to a fixed frame. Each wheel is powered by an AC electric motor. The AC motors are fed by motor controllers that take a speed demand. The single main wheel (which is also steerable) has a lower gear ratio than the rear wheels so it has a theoretical higher top speed. </p> <p>When the vehicle drives in a straight line each of the motor controllers are given identical speed requests. Unfortunately feedback from the controller indicates that some motors are pushing while some are pulling. In particular we have a common scenario where one rear wheel is pushing while the front wheel is trying to slow down. The third wheel will often have almost no current. </p> <p>What can be done to make all three motors work together and avoid situations where they fight? Is there a way to change the request to the motor controller to encourage the drives to work together? Do we have to switch from a speed request setup to a current control setup? If so what is the appropriate way to control the motors then? </p> <p>Let me know if I haven't included any important details and I will update my question.</p>
How can a load be balanced between multiple AC electric drive motors?
<p>I've wanted to do something similar, so hearing of your experiences would be great. </p> <p>The best thing I think would be to get a colorimeter, and build a calibration table into your code. Since paint can vary by manufacturer or medium additive (possibly even output levels if the tubes clog some), you could have the robot print its own test pattern. This would also calibrate to the resolution of the color mixer, since you might not have 255 distinct and evenly spaced levels of paint release per color.</p> <p>Let's say your magenta was stepped with a screw thread attached to a stepper motor. If this thread only had an obscure number of 82 turns to adjust paint release levels, with higher output in the middle of the curve, then the test pattern would show the result of 82 different output levels of your magenta.</p> <p>Using a colorimeter under diffused ambient lighting, you can determine what colors are output by which flow settings. Then, create a lookup table in your program to do the translation. To increase color resolution, you could even account for premixed colors (shades of brown, green, gray, etc). </p> <p>Let me know how it goes</p>
1277
2013-05-08T18:52:41.650
|robotic-arm|
<p>I am developing a robot which paints using an airbrush (3D painting). I intend to use several colors as a CMYK printer, but I do not know how to do the conversion of RGB colors in the computer to the dosage of colors in CMYK.</p>
How can I convert RGB colors to CMYK for my airbrush robot?
<p>I think you're mixing two meanings of the word "sensitive". </p> <h2>Input Sensitivity</h2> <p>If you have an IMU that gives very raw ("twitchy") readings, then you should be able to generate the output of a less input-sensitive IMU by simply computing a <a href="https://en.wikipedia.org/wiki/Moving_average" rel="nofollow">moving average</a> or using a Kalman filter on the output.</p> <p>This will produce an effect similar to running a low-pass filter on the output signal; the ability of your filter to reject noise will determine how much accuracy you salvage in this process of subsampling.</p> <h2>Output Sensitivity</h2> <p>Accelerometers produce analog output, and in many accelerometers you can adjust the number of $mV/g$ that the reading will be. This is just the scale factor that lets you match the (expected) output of the IMU to the range of your ADC, in order to get the most precision. So, your intuition is correct -- you could get 2 accelerometers and mix low and high sensitivity to get better precision when reading slight movements.</p> <h2>Whether It Matters</h2> <p>Ultimately, an IMU that you can buy for less than $50,000 has a limit to its accuracy, and will never be accurate enough for you to use it without other navigational aids -- compasses, GPS units, etc. Since your vehicle is really only able to react to a certain range of forces, its accelerometers should be tuned to that range. So, if you have 2 different operating modes that involve 2 different ranges of forces then it might be a good idea to use multiple accelerometers; otherwise, it will produce data that is interesting but not practical (and give you one more source of noise to calibrate).</p>
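The moving-average idea from the first section is a one-liner to prototype; a sketch with an arbitrary window size (class and parameter names are my own):

```python
from collections import deque

class MovingAverage:
    """Simple box filter to tame a 'twitchy' high-sensitivity IMU axis."""

    def __init__(self, window=8):
        self.samples = deque(maxlen=window)  # oldest sample drops out

    def update(self, reading):
        self.samples.append(reading)
        return sum(self.samples) / len(self.samples)
```

Run one instance per axis; the window size trades noise rejection against added lag, which is the accuracy-versus-responsiveness point made above.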
1286
2013-05-10T21:09:58.213
|kalman-filter|quadcopter|imu|uav|
<p>I noticed that some IMU units are tuned to be sensitive to small changes, other to large changes and some that can be adjusted between different sensitivities. I am familiar with the use of a Kalman filter to normalize readings, but I was wondering if my UAV could benefit from a second IMU where the two are set at high and low sensitivities to get even more accurate and timely information.</p>
Is there a benefit to using 2 IMU units on a UAV set at different sensitivities?
<p>Okay, so most (if not all) translational drift robots, or meltys, use an accelerometer to indicate the heading of the robot. What this accelerometer does is calculate the rate of rotation based on the G-forces at a given radius. The data accumulated from the accelerometer is then used to light up an LED once per rotation, giving an indication of the heading of the robot. The driver then uses this LED, together with the remote control, to choose the direction in which to travel.</p> <p>As for the moving, as mundane as it might sound, it is done by switching the motors on and off. The control program turns a motor on when that motor is in the correct position to result in a net movement in the direction the robot is commanded to go. So, for example, if the LED is lit between 30 and 120 degrees, it can be interpreted as the heading of the robot being around 90 degrees. (Keep in mind that the indication of the heading using the LED is not 100% accurate; it might be off by more than some tens of degrees, usually because of the time taken for all the data processing and for switching the LEDs on and off.) If you now wanted to travel toward 90 degrees, the motors would need to be on between 270 and 450 (360+90) degrees during each rotation, after which the net direction of travel will be 90 degrees.</p> <p>Likewise, if the robot wants to go to the left, that is 180 degrees, while its current heading is 90, the motors will need to be on between 0 and 180 degrees.</p> <p>Hope that helps.</p>
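That on/off decision is just an angular-window test run once per control cycle; a sketch matching the worked numbers above (the function name and degree convention are my own):

```python
def motor_on(heading_deg, desired_deg):
    """Gate a drive motor for one half-turn per revolution.

    Follows the worked examples: for a desired direction of 90 degrees the
    motor fires while the heading sweeps 270..450, i.e. while
    (heading - desired) mod 360 lies in the half-turn 180..360.
    """
    return 180 <= (heading_deg - desired_deg) % 360 < 360
```

The control loop reads the estimated heading (from the accelerometer-derived spin rate), evaluates this test, and switches the motor accordingly, many times per revolution.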
1294
2013-05-14T19:46:34.603
|navigation|movement|
<p>Concerning robots which rotate at high speed by spinning the drive motors in opposite directions, while still being able to simultaneously move in a direction (translate):</p> <p>As far as I know this originated with competitive fighting robots, where it is known as "melty brain" or "tornado drive," according to wikipedia, and is based on alternately slowing down the motors on either side as they revolve around the centre of mass.</p> <p>However, with the whole body spinning so fast how is the current "heading" of the robot established and maintained?</p>
What are the mechanics of translational drift?
<p>There are at least two modalities along which servos (continuous or otherwise) usually fail: gear problems and motor breakdown.</p> <p>When the gear fails (broken tooth, hard point, etc.), the servo may get stuck, free moving, or any combination. When the motor breaks (usually the brushes inside the DC motor are the culprit), the servo stops working altogether (as if it was unplugged) or overheats and burns. In your case, since it's humming and behaves differently when actuated by hand, I'd think your servo has a gear problem. It's relatively easy to take a look at the gearbox, as it is the first area apparent when opening the servo, and it can easily be re-assembled afterward (on the contrary, checking the motor's brushes usually means breaking the motor open), so I'd have a look to confirm the diagnosis.</p>
1301
2013-05-17T19:08:13.547
|arduino|microcontroller|otherservos|wheeled-robot|
<p>I just got a kit and I'm not sure if it's me or not, but it appears one of the continuous servos might be broken. When I first plugged it into the microcontroller, it made a humming sound when I sent it commands. The second continuous servo didn't work at all.</p> <p>I played around with different ports on the Arduino-based board, to no avail: just a <strong>hum</strong>. Then I removed the humming servo altogether and placed the second servo alone; the second continuous servo started to move in whatever direction I asked it to.</p> <p>When I plugged the first one back in, only the second moved.</p> <p>Then I tried spinning them by hand: the second has much resistance, while the first one has dramatically less, maybe 60% easier to spin by hand.</p> <p>Is this something I can fix? Has anyone experienced these problems before?</p> <p>Thanks in advance, you guys are great!</p>
What are the signs that a servo might be broken?
<p>Your calculation is correct in magnitude but incorrect in sign, because gear B rotates oppositely to A (when the axis of D is fixed and D is not locked).</p> <p>If D is locked (ie, the gear is not free to rotate in its plane) then A and B are locked together and rotate identically. </p> <p>If the body V to which the axis of D is fixed rotates during rotation of A, then the rotation rate of B will differ from that of A. Example: Let rotation directions for A, B, and V be stated relative to a view from the left, and for D relative to a view from above. With V fixed, suppose A rotates CW at 40 rpm. Then D rotates CCW at 10 rpm, driving B CCW at 40 rpm. If V now begins to rotate CW at 20 rpm, D's rotation rate drops to 5 rpm, so that B begins to rotate at -20 rpm relative to A, 0 rpm relative to V, and 20 rpm to the frame of reference.</p>
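The carrier-fixed case in the first paragraph can be checked mechanically; a sketch using the question's ratios (the function name and sign handling are my own, and it only covers the case where the body V is held fixed):

```python
def opposite_side_rotation(omega_a, ratio_ad=4.0, ratio_db=0.25):
    """Rotation of gear B for a given rotation of gear A, carrier fixed.

    A:D of 4:1 followed by D:B of 1:4 cancel in magnitude; the bevel
    idler D makes B spin opposite to A, hence the overall sign flip,
    which is the correction to the question's calculation.
    """
    return -omega_a * (1.0 / ratio_ad) / ratio_db
```

So for the question's 0.05 rad input, B rotates 0.05 rad in the opposite sense: equal magnitude, flipped sign.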
1302
2013-05-18T04:01:29.997
|wheeled-robot|rocker-bogie|
<p>Following the <a href="https://robotics.stackexchange.com/questions/1235/how-does-rocker-bogie-keep-the-body-almost-flat">previous question</a>, I am trying to calculate how much one rocker would rotate when the other is being rotated. I attached my calculation here.</p> <p>I am trying to calculate the rotation of gear B, which connects to the right rocker. Given that gear A rotates 0.05 rad, what is the rotation of gear B in rad? Gear ratio A:D is 4:1, and D:B is 1:4.</p> <p>At the end, I ended up with rotation of gear A = rotation of gear B. This somewhat puzzles me. Is my calculation correct?</p> <p><img src="https://i.stack.imgur.com/HDtsx.jpg" alt="My calculation given rotation of gear A that connects to left rocker"></p>
Rotation ratio between left rocker and right rocker in rocker-bogie system
<p>The motors in ABB robots have integrated brakes that are <strong>engaged when not energized</strong>. They hold the position of the joint and prevent the arm from falling if power is lost. The flex pendant enable switch actually drives a relay in the controller that removes or applies all power to the motors. This is done for safety reasons, so that in the event of a mishap (i.e., you drop the pendant), the robot becomes completely de-energized.</p> <p>When you enable the robot for jogging, it disengages the brake, applies power to the servo, and uses the static torque to hold the joint position instead of the brake. When you release the switch or cut the power in some other way, the brakes engage again.</p> <p>Search Google for "Product manual Motor Unit MU10/20/30" for more details.</p>
1306
2013-05-19T04:25:44.473
|industrial-robot|servomotor|
<p>I was jogging the ABB IRB1410 and I noticed that the servo motors are humming even when the joints are not moving. The motor cuts off only when the guard switch on the flex pendant is released.</p> <p><strong>What kind of mechanism would require the drive motors to keep running even when the joints are not moving?</strong> I went through the manual but no luck. I suppose the holding torque is provided by some braking mechanism, so I think I can rule that out.</p>
Why are the IRB 1410's servos running even when the joints are not moving?
<p>I finally got the problem solved. The problem was that I had selected Arduino UNO. After checking the datasheets for my board I found that it had the same specs as the Arduino Duemilanove ATmega328. After selecting that board I got things working. So the general solution is: check the datasheets if you are using any such modded board, and then select the board in the IDE that matches the specs. Now my board is working successfully on both Windows and Linux.</p>
1320
2013-05-23T18:21:33.780
|arduino|
<p>I bought a new Roboduino ATmega328 board. Basically, Roboduino is a modded version of the Arduino UNO made by robokits.co.in. The problem is:</p> <p><strong>On Windows Platform:</strong> When I tried to upload a simple Blink program that's listed in the examples of Arduino IDE 1.0.4, I got the error <strong>avrdude: stk500_getsync(): not in sync: resp=0x00</strong></p> <p>I chose the correct COM port after verifying it with the Device Manager. I installed the Prolific drivers for the board. I selected the board as Arduino UNO in the Arduino IDE.</p> <p>The complete verbose output for the upload is as follows:</p> <pre><code>D:\Softwares\Installed Files\arduino-1.0.4\hardware/tools/avr/bin/avrdude -CD:\Softwares\Installed Files\arduino-1.0.4\hardware/tools/avr/etc/avrdude.conf -v -v -v -v -patmega328p -carduino -P\\.\COM10 -b115200 -D -Uflash:w:C:\Users\ANKITS~1\AppData\Local\Temp\build5865304215250534760.tmp\Blink.cpp.hex:i

avrdude: Version 5.11, compiled on Sep 2 2011 at 19:38:36
         Copyright (c) 2000-2005 Brian Dean, http://www.bdmicro.com/
         Copyright (c) 2007-2009 Joerg Wunsch

         System wide configuration file is "D:\Softwares\Installed Files\arduino-1.0.4\hardware/tools/avr/etc/avrdude.conf"

         Using Port               : \\.\COM10
         Using Programmer         : arduino
         Overriding Baud Rate     : 115200
avrdude: Send: 0 [30] [20]
avrdude: Send: 0 [30] [20]
avrdude: Send: 0 [30] [20]
avrdude: Recv:
avrdude: stk500_getsync(): not in sync: resp=0x00

avrdude done. Thank you.
</code></pre> <p>When I plug in the board the power LED is on. The pin 13 LED blinks once. When the IDE shows uploading, the pin 13 LED blinks 3-4 times and then the error appears on the screen. In between it also sometimes blinks randomly 5-6 times. I also tried other example programs but the same follows. I'm using 32-bit Windows 7 Ultimate and the baud rate is set to 9600.</p> <p><strong>On Ubuntu 13.04:</strong> I downloaded the IDE from the Software Center. I was added to the dialout group on the first run. After connecting the board to my PC I ran two commands. <code>lsusb</code> returned the following output:</p> <pre><code>Bus 004 Device 003: ID 067b:2303 Prolific Technology, Inc. PL2303 Serial Port
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
</code></pre> <p>and then <code>dmesg</code>. After this, when I tried to upload the same Blink program, it gave me the following error: <strong>avrdude: stk500_recv(): programmer is not responding</strong>. I'm using 64-bit Ubuntu 13.04 and selected Arduino UNO as the board.</p> <p>Thank you for reading this long. Please provide me suggestions for the problem.</p>
Problem uploading to Roboduino AtMega328
<p>I cannot comment yet; this is a supplement to ronalchn's answer.</p> <p>Using an opto-isolator between the logic output and the motor control input 100% isolates the two circuits from each other and makes [multiply-]split supplies trivial.</p> <p>DGND > Digital > Opto &lt; Analog &lt; AGND</p> <p>Under extreme demands, a second opto could engage an entirely different set of batteries, charged to a higher voltage. You would want to be careful not to turn both on at the same time...</p>
1324
2013-05-24T07:13:49.387
|motor|power|
<p>How can I provide more power to a DC motor that is in series behind a receiver circuit hacked out of a cheap RC car, without burning up the receiver board? The board runs off two AAs at about 3V. I'm replacing the stock motor with a slightly larger one (12V, taken from a printer) and remounting it on a chassis for a homebrew robotics project... just messing around to learn more. I imagine I could go safely to 4.5V or even 6V with the receiver, but I don't want to go much higher since half the stuff is epoxied and I can't really tell what's in there.</p> <p>What I'd like to be able to do is add an additional two AA batteries behind the receiver to run the receiver system at 6V, but add another two 3V 123A batteries to run the motor at 12V with the ability to handle the higher current draw due to the heavier load the motor will carry on its fancy new chassis... but without pulling that current through the receiver circuit.</p> <p>My first thought is to simply connect my 123As negative to the motor and positive to a common ground... but I'm really not sure, and I want to be careful not to damage the circuit or batteries. My next thought is to simply build a single power supply out of my 123As and use a current divider, but I've only read about them and never actually built one.</p> <p>I've been doing some of those kiddie "electronic playgrounds," a few books, and probably cost Google an extra few bucks in energy costs, and I'm still kinda at a loss.</p>
Additional Power to DC Motor via Second Power Source
<p>For the motor, a stall current of 1A means that at stall, supplying 6V will cause the motor to draw 1A. The no-load current of 250mA means that at no load, supplying 6V will cause the motor to draw 250mA.</p> <p>The half-bridge IC you have given has no current overload protection. Its peak current of 1.2A is high enough, but the 600mA continuous current is not. Exceeding it would normally cause the IC to overheat, and possibly break. Under the absolute maximum ratings, it has a peak current of 1.2A for &lt;100us. Normally, it may be possible to heat an IC a little more than recommended if a good heatsink is used. However, the &lt;100us implies that this is not possible, because the heat would not be able to conduct out of the package fast enough.</p> <p>These are your options:</p> <ul> <li><p>You can supply your own current foldback protection. By using a current sensor of some sort, you can detect when there is too much current flow; you can then scale down the voltage, possibly down to $6\times0.6\div1 = 3.6V$. Assuming a fast control circuit, this is probably fine (due to the peak current allowed).</p></li> <li><p>You can run the motor at only 3.6V. This limits the stall current, torque and maximum no-load speed, but means that you do not have to do any current foldback.</p></li> <li><p>If you can measure the speed of the motor, and assuming you know the back EMF constant of your motor, you can convert speed to back EMF voltage. This is the additional voltage that you can supply to your motor (in addition to the 3.6V which gives 0.6A of current). This is because the back EMF voltage is the applied voltage at which zero current would flow.</p></li> </ul> <p>Note that when you run the motor at 3.6V, this is just the average voltage. Assuming you use pulse width modulation, this is equivalent to a power supply of 6V, with a duty cycle of 60%. 
Of course, the actual supply voltage must be within the operating condition of the device (36V).</p> <p>You might be tempted to use two sets of half-bridges for each direction of the same motor. However, if you do this, you should be aware that it is quite difficult to control how much of the current comes from each parallel supply. Therefore, due to slight differences in transistor characteristics, one will supply more current than the other.</p> <p>The specifications do not appear to mention anything about compensating current with two motors.</p> <p>Why don't you just use the L293 quad half-bridge drivers (which allow 1A continuous)?</p>
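To make the numbers above concrete, here is a small illustrative calculation using the motor specs from the question (6 V supply, 1 A stall, 600 mA driver limit); the back-EMF term is optional and defaults to zero:

```python
# Illustrative current-limit calculation for a 6 V / 1 A-stall motor on a
# driver rated 600 mA continuous.
SUPPLY_V = 6.0
STALL_A = 1.0
WINDING_OHMS = SUPPLY_V / STALL_A   # 6 ohms of effective winding resistance
DRIVER_LIMIT_A = 0.6

def max_safe_voltage(back_emf_v=0.0):
    """Highest average voltage keeping winding current under the driver limit."""
    return DRIVER_LIMIT_A * WINDING_OHMS + back_emf_v

def duty_cycle(avg_voltage, supply=SUPPLY_V):
    """PWM duty cycle that produces the given average voltage."""
    return min(avg_voltage / supply, 1.0)

print(max_safe_voltage())              # ~3.6 V allowed at stall
print(duty_cycle(max_safe_voltage()))  # ~60% duty from the 6 V supply
```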
1327
2013-05-24T13:34:10.740
|mobile-robot|motor|
<p>I ran into confusion while reading about motors.</p> <p>Consider a motor with these specs:</p> <ul> <li>Maximum motor voltage - 6VDC</li> <li>No load current - 250mA max.</li> <li>Stall current - around 1A</li> </ul> <p>I am considering using the Texas Instruments L293D, with these specs:</p> <ul> <li>Output Current - 600 mA Per Channel</li> <li>Peak Output Current - 1.2 A Per Channel</li> </ul> <p>If I use the L293D to run 1 motor (back and forth), is this safe? What would happen if my motor requires more than 600mA? Does this simply mean I need a different driver IC?</p> <p>Also, the specs say that if I want to drive 2 motors then I'll need to compensate for the current. Is it current from my power supply or from the motor driver?</p>
How do I interpret these specs for a motor and motor driver?
<p>You have to be very careful with those packs as they don't usually have under-voltage protection for themselves. Your pack is likely fine, but the charger isn't seeing the minimum voltage required to begin charging and so it won't.</p> <p>I found myself in nearly the same situation not too long ago and managed to bring the battery back to a visible voltage by charging it with a Pb charge algorithm of similar voltage for a few minutes.</p> <p>The multiple plugs are because there are multiple cells in the pack and each one requires individual charging (in parallel - not in series). They all have a common ground, so you put the black end of the multimeter in the 4-pin black hole and put the red end in each of the 3 other colored wires. You will get 3 different, but probably close, voltages.</p> <p>I have this exact charger, which is what I used to revive a similar pack: <a href="https://www.sparkfun.com/products/10473" rel="nofollow">https://www.sparkfun.com/products/10473</a></p> <hr> <p>Edit: If you wanted to get a little risky with the battery, a NiMH charger could be used for a short period as you mentioned. It charges at an acceptably low rate, but the cells differ a lot more than they do between Pb and Li. A multi-cell NiMH pack, for instance, can have all of its cells charged at once while this is generally not possible for Pb/Li. This means that a 12v NiMH charger is not going to work for a 12v Li battery, since each Li cell needs to be charged individually.</p> <p>If you happen to have a NiMH AA charger, you could retrofit that to be a 3v Li cell booster provided it slow-charges, by putting two 1.5v terminals in series with the Li cell.</p> <p>I would highly suggest getting an inexpensive smart charger as its uses are nearly endless and it could save you many headaches that dumb chargers often create.</p> <p>Additionally, you may want to build a low-voltage cutoff circuit into your design to prevent this issue in the future.</p>
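Since each balance-lead tap shares the common ground, the individual cell voltages are the successive differences between the tap readings. A quick sketch (the tap voltages here are example numbers, not measurements from this pack):

```python
# Convert balance-connector tap readings (each measured against the common
# ground pin) into individual cell voltages.
def cell_voltages(tap_readings):
    """tap_readings: tap voltages vs. ground, in order from the lowest cell."""
    cells = []
    prev = 0.0
    for tap in tap_readings:
        cells.append(tap - prev)   # this cell sits between adjacent taps
        prev = tap
    return cells

# e.g. taps at 3.7 V, 7.4 V and 11.1 V indicate three healthy ~3.7 V cells
print(cell_voltages([3.7, 7.4, 11.1]))
```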
1329
2013-05-24T17:32:40.517
|battery|
<p>I used a <a href="http://www.hobbyking.com/hobbyking/store/__8934__Turnigy_2200mAh_3S_25C_Lipo_Pack.html" rel="nofollow">Turnigy 2200mAh 3S 25C Lipo</a> battery pack with <a href="http://www.hobbyking.com/hobbyking/store/__7637__Turnigy_balancer_Charger_2S_3S.html" rel="nofollow">Turnigy balancer &amp; Charger 2S-3S</a> for about a month. Yesterday I left the battery plugged into four ESCs of my quadrocopter. Today I've found the battery totally discharged. When I tried to charge it, the charger showed it as faulty. After replugging it to the charger it showed it as fully charged.</p> <p>How can I charge it now?</p> <p>P.S. I've got a multimeter, but I do not know what and how to measure... The battery pack has two plugs: one is connected to the charger and the other to the ESCs...</p>
Battery pack discharged
<ul> <li>Consider integrating an Android smartphone with the Arduino board, as then you get cameras, GPS, accelerometers and even voice command.</li> <li>If you have 2 units and you want to build a camera into one, why not have it automatically follow the other one? This should not be too hard if you put a marker / beacon on the one without the camera, or even a couple of different coloured markers so you can differentiate which side you are looking at. You could then have a 'robot cop' chase the user-controlled car. </li> </ul>
1334
2013-05-27T10:38:08.170
|arduino|sensors|radio-control|automatic|
<p>I bought an RC car about a year ago. A few days later I integrated an Arduino Nano into the vehicle. The only thing the Arduino does is receive the RC signal and pass it on to the ESC/servo. So, basically it just does a big amount of NOTHING :)</p> <p>Right now the wiring looks like this:</p> <blockquote> <p>[Remote] -> [rc receiver] -> [arduino] -> [servo/esc/lights]</p> </blockquote> <p>I added lights and I did some experiments with distance sensors, and I will try to integrate car control via XBee + Processing. This works via serial already.</p> <p>What else could be possible with a setup like that? Here are some of my ideas:</p> <ul> <li>perhaps some sort of autonomous driving? The car is built for offroad and the suspension is not too bad, but it is pretty fast (40 km/h) so a crash would be fatal.</li> <li>FPV (first person view) driving? I could add another servo to move a small camera.</li> <li>"swarm intelligence"? I have built two of those vehicles. Both feature the Arduino Nano, a ZigBee module and LED front lights.</li> <li>steering correction? I could integrate a gyro sensor to check if the car is not driving straight when it should.</li> <li>telemetry to another Arduino? I could build some sort of Arduino-ZigBee handheld that shows me information for both cars like motor temperature, current speed, uptime, battery voltage, sensor values etc.</li> </ul> <p>Any ideas, anyone? Right now it is just driving like it normally would. I integrated an Arduino into an RC toy that does an awesome amount of NOTHING. Makes me feel pretty stupid.</p>
Arduino Controlled RC Car. What now?
<p>You can fly anything you want in class G airspace. Generally speaking, if you're far from any airport or military training area, you are probably walking around in class G. To be sure, though, you should consult a <a href="http://aeronav.faa.gov/index.asp?xml=aeronav/applications/VFR/chartlist_sect" rel="nofollow">current FAA sectional chart</a> for your area.</p> <p>Class G is considered "uncontrolled" by the FAA. Note however that there are other regulations and agencies that restrict flight activities there. Sadly there is no "easy" answer in U.S. airspace.</p> <p>In most cases, except for special situations like in the mountains, Class G extends up to 1200 ft above the ground (AGL).</p> <p>I'm quite familiar with airspace rules because of my previous job (UAV operator). I guarantee you can fly anything you want, but see for yourself: <a href="http://en.wikipedia.org/wiki/Airspace_class_(United_States)#Class_G" rel="nofollow">http://en.wikipedia.org/wiki/Airspace_class_(United_States)#Class_G</a></p>
1339
2013-05-29T00:41:05.047
|quadcopter|
<p>Okay, this might sound like a stupid question but, is there some sort of a permission in the US I might require to fly a quadcopter or a UAV for that matter? I couldn't find much help anywhere else. </p>
Permission to fly UAVs
<p>CF is lighter, but you definitely need to know your source; many of the inexpensive tubes are quite brittle. In any case, I do not recommend drilling holes in the tube - use clamps. Aluminium is somewhat heavier and will transmit vibration (balance your motors and props), but it is readily available, cheaper, and easier to work with. For a smaller craft I would use aluminium; for anything over a 550-size frame, quality carbon.</p>
1344
2013-05-29T17:47:11.450
|design|quadcopter|
<p>Building a quadrocopter from scratch involves a LOT of decision making, and I need some input on the material choice. </p> <p>I have shortlisted aluminum and carbon fiber for the arms and supports of the quadrocopter. I am a little short on cash to experiment with both of them. </p> <p>So, assuming that I have enough money to buy either of them, and that I have access to general tools like a table saw, horizontal band saw, CNC router and a water jet:</p> <p>Which would be the better material to work with?</p> <p>EDIT:<br> I will be deciding the specs around the frame so as to allow me some design liberty. So right now, my goal is to assemble a very durable, as-light-as-possible frame which can withstand a lot of experimentation on the electrical side.</p>
Aluminum vs Carbon Fiber
<p>Keep in mind that "heavier" is not quite the term you're looking for. You should be trimming your SeaPerch to be neutrally buoyant, so unless the new payload creates significantly more drag than your old one, your existing motors should work fine (just with decreased acceleration in response to the increased mass).</p> <p>The answer to the question of torque vs RPM really depends on the propeller you use -- they should be evenly matched. There is a fair amount of theory on this (for example, <a href="http://web.mit.edu/drela/Public/web/qprop/motorprop.pdf" rel="nofollow noreferrer">Mark Drela's lecture notes from MIT</a>), but you can also just approach it in a practical way.</p> <p>The basic idea is to work with the power curves of the propeller and motor (both of which are measurable):</p> <p><a href="http://www.tunnel2funnel.com/2013/01/significance-of-propeller-curve.html" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dFuVS.png" alt="Engine and propeller power curves"></a> (more <a href="http://www.tunnel2funnel.com/2013/01/significance-of-propeller-curve.html" rel="nofollow noreferrer">here</a>)</p> <p>Based on this, you should be able to work iteratively with a few propeller designs after you pick your motor. There are also a lot of guides for this (like <a href="http://youboat.net/boatPower4.aspx" rel="nofollow noreferrer">this one</a>, providing the image below) that explain how to choose the propeller based on the power of the motor that you've already selected. You might not be measuring your engine power in units as large as <a href="http://en.wikipedia.org/wiki/Horsepower#Brake_horsepower" rel="nofollow noreferrer">BHP</a>, but the technique is the same. 
<a href="http://youboat.net/boatPower4.aspx" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/u5r76.png" alt="Rpm vs propeller size for given HP"></a></p> <p>As always, you'll have to run your own thrust-vs-rpm tests on a variety of propellers to verify your calculations.</p>
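The intersection of the two curves in the figures above is the operating point, and it is easy to find numerically once you have fitted both curves. The sketch below uses invented coefficients (a linear motor torque line and a quadratic propeller load torque), not measured data from these motors:

```python
# Find where a DC motor's torque line meets a propeller's quadratic load
# torque.  All coefficients below are placeholders for illustration.
STALL_TORQUE = 0.5      # N*m at zero speed (made-up)
NO_LOAD_SPEED = 300.0   # rad/s (made-up)
PROP_K = 2.0e-5         # prop load torque = PROP_K * omega**2 (made-up)

def motor_torque(omega):
    return STALL_TORQUE * (1.0 - omega / NO_LOAD_SPEED)

def prop_torque(omega):
    return PROP_K * omega ** 2

def operating_point(lo=0.0, hi=NO_LOAD_SPEED, iters=60):
    """Bisect on motor_torque - prop_torque, which falls from + to - on [lo, hi]."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if motor_torque(mid) > prop_torque(mid):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

omega = operating_point()
print(omega, prop_torque(omega))   # equilibrium speed and absorbed torque
```

Swapping in your own fitted curves (from bench tests of the motor and prop) turns this into a quick way to compare candidate propellers before building.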
1346
2013-05-29T20:19:28.193
|motor|underwater|
<p>I am looking to upgrade the motors for SeaPerch underwater ROVs so we can carry heavier payloads and more equipment. </p> <p>My question is, should I look for motors which have a higher RPM and lower torque, or with lower RPM but higher torque to gain a substantial power increase? If the latter, what threshold of RPMs should I stay above to maintain speed? </p> <p>We are currently running <a href="http://www.jameco.com/Jameco/Products/ProdDS/232022.pdf" rel="nofollow noreferrer">Jameco <code>PN 232022</code> motors</a> with ~1 1/2" props (same setup as <a href="https://robotics.stackexchange.com/a/313/1397">here</a>). They are mainly run at max power as our ESC currently consists of a fuse and a toggle switch. </p>
Upgrading the motors on a SeaPerch ROV - more torque, or more RPMs?
<p>First, you haven't provided enough information. Your equation is nonlinear, which means that the behavior of the system as described depends not just on the coefficients of the difference equation, but on the range of values that $y$ can take on. From the looks of things, the closer that $y$ is restricted to 0, the more you can treat the whole thing like a linear system and just apply PID control to it.</p> <p>Second, Ziegler-Nichols just isn't indicated here. Z-N was invented as an ad-hoc tuning method for industrial plants with unknown characteristics. It guarantees neither stability nor performance of the result -- and assuming that you know the values of $a$, $b$, and $c$, you have a plant with known characteristics!!!</p> <p>Even if the plant characteristics are known, Z-N is really only a good way to arrive at a starting point for tuning. Z-N tends to result in an underdamped system, and as I mentioned, it guarantees neither stability nor performance.</p> <p>In systems that are amenable, I vastly prefer to do swept-sine measurements (<a href="http://www.wescottdesign.com/articles/FreqMeas/freq_meas.html" rel="nofollow">Measuring Frequency Response</a>). If you don't like the buzzing and shrieking noises, and the uninitiated diving for cover behind lab equipment, you can use some sort of a system identification based on a step response instead -- but in my experience with servo systems, a frequency response-based design is superior when you need to go from measured values.</p> <p>Whatever you do, you want to account for the effect that the variation in the value of $y$ is going to have on your system. If the derivative $$\frac{d}{dy}y\,\sqrt{1-y}$$ varies by more than a few percent, then you need to do your design to accommodate the differing systems characteristics. 
If that derivative varies by a factor of two or more, then you need to consider the possibility of some sort of a nonlinear controller, however difficult and perverted your design may end up being, or you need to just pull in your horns and accept that you're going to have a severely detuned system for most ranges of $y$.</p> <p>See <a href="http://www.wescottdesign.com/actfes/actfes.html" rel="nofollow">Applied Control Theory for Embedded Systems</a> for more information.</p>
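Whatever tuning route is taken, the difference equation is cheap to simulate directly, which makes hand-tuning and stability checks easy. The sketch below uses invented values for $a$, $b$, $c$ and the gains (not coefficients identified from a real Apache server) with a simple discrete PI loop:

```python
import math

# Simulate y[k+1] = a*y[k]*sqrt(1 - y[k]) + b*y[k-1] + c*u[k] under a
# discrete PI controller.  a, b, c and the gains are made-up values.
a, b, c = 0.2, 0.1, 0.5
kp, ki = 0.5, 0.1
setpoint = 0.5

y_prev, y_curr, integral = 0.0, 0.0, 0.0
for _ in range(500):
    error = setpoint - y_curr
    integral += error
    u = kp * error + ki * integral
    # guard the sqrt in case overshoot pushes y above 1
    y_next = a * y_curr * math.sqrt(max(0.0, 1.0 - y_curr)) + b * y_prev + c * u
    y_prev, y_curr = y_curr, y_next

print(y_curr)   # settles close to the 0.5 setpoint for these gains
```

With a simulation like this you can also probe the sensitivity to the operating point directly: sweep the setpoint over the range of $y$ you expect and check whether one set of gains holds up everywhere.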
1356
2013-05-31T16:38:17.993
|pid|
<p>I'm trying to use a PID to stabilize a system described from the following difference equation:</p> <p>$$y_{k+1} = a y_k \sqrt{(1-y_k)}~~~ + b y_{k-1} ~+ c u_k$$</p> <p>Can I use <a href="http://en.wikipedia.org/wiki/Ziegler%E2%80%93Nichols_method" rel="noreferrer">Ziegler-Nichols's rules</a> to find PID parameters in this situation?</p> <p>To be more precise. My system is an <a href="http://en.wikipedia.org/wiki/Apache_HTTP_Server" rel="noreferrer">Apache Http Server</a>, in particular I'm trying to model how the CPU load can change in function of <a href="http://httpd.apache.org/docs/current/mod/core.html#keepalivetimeout" rel="noreferrer">KeepAlive</a> parameter. When KeepAlive grows the cpu load should decrease.</p> <p>So:</p> <p>$$cpu_{k+1} = a \cdot cpu_k \sqrt{(1-cpu_k)}~~~ + b \cdot cpu_{k-1} ~+ c \cdot keepAlive_k$$</p> <p>Obviously the Cpu load is a scalar $\in [0,1]$ , $keepAlive$ is just a time and the $a,b,c$ parameters are known to me through experimental data and multiple regression on them.</p>
Can I use Ziegler-Nichols's rules to find PID parameters for a non linear system
<p>This has to do with the torque, or moment, the rotors induce on the body of the quadcopter/multirotor. If all of the rotors were to spin the same direction they would all induce a torque in the same direction causing the craft to yaw. Of course this is undesirable for many reasons. By spinning half of the rotors the opposite direction the torques are theoretically canceled preventing the craft from yawing.</p>
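The cancellation is just a signed sum of the rotors' reaction torques. A minimal sketch, assuming every rotor produces the same drag-torque magnitude (+1 marks a clockwise prop, -1 a counter-clockwise one):

```python
# Net yaw reaction torque on a multirotor frame, assuming equal drag-torque
# magnitude per rotor.  +1 = clockwise prop, -1 = counter-clockwise prop.
def net_yaw_torque(spin_directions, torque_magnitude=1.0):
    # Each rotor's reaction torque on the frame opposes its spin direction.
    return -torque_magnitude * sum(spin_directions)

print(net_yaw_torque([+1, -1, +1, -1]))  # balanced quad: cancels to zero
print(net_yaw_torque([+1, +1, +1, +1]))  # all one direction: frame yaws
```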
1367
2013-06-02T05:11:59.240
|quadcopter|multi-rotor|
<p>While looking up information for the right propellers for my quadcopter, I realized that they had different orientations i.e. Clockwise and Counterclockwise. On further research I found that all multi-rotors have different combinations of these orientations. So my question is WHY? How does it matter if the propeller is turning clockwise or anti-clockwise? </p>
Prop Orientation on a Multirotor
<p>For a basic multicopter, in the absence of any other way to control yaw, orientation does matter. A quadrotor needs a balanced set of rotation directions so that it can easily control yaw direction, and at equilibrium, all propellers can rotate at the same speed. This is primarily because all rotors in a quadrotor are fixed (usually). The quadrotor can also control yaw by varying the speed of propellers that rotate in one direction versus another.</p> <p>Propeller orientation does not matter so much when it has some other method to control yaw. For example, helicopters have a tail rotor, which corrects for any yaw moment caused by the main propeller. Thus, if it was to set to rotate in the opposite direction, the tail rotor can just rotate in the opposite direction to counter this.</p> <p>The standard tricopter has two front rotors, and a third rear rotor. However, the rear rotor can change its axis. It tilts left or right. When it tilts left or right, a portion of its thrust acts in the yaw direction. Therefore, where front rotors rotate in the same direction, the rear rotor must tilt more the counter the moment from the front rotors. Where the front rotors rotate in opposite directions, the rear rotor tilts less. This is explained in <a href="http://vimeo.com/22778782" rel="nofollow">the video</a> in <a href="http://www.rcgroups.com/forums/showpost.php?s=768e670da2be9e8df7c955f722722400&amp;p=24951580&amp;postcount=4" rel="nofollow">message #4</a> of the link you posted.</p> <p>The reason why the controllers does not even need to know the direction of rotation in the tri-copter is because it is a feedback controller. Thus, when it senses undesired yaw, it simply tilts the rear rotor in the opposite direction to correct the yaw direction.</p> <p>Note that the rear rotor, when tilted in one direction, also causes a lateral force. This is counter-balanced by having the whole tricopter tilt in the opposite direction. 
Hence, the front rotors are tilted (to a lesser degree), and part of their force is oriented laterally.</p> <p>Therefore, tricopters are more stable and perform better when the front rotors are counter-rotating, because the rear rotor can tilt less (as well as the whole tricopter tilting less).</p> <p>In summary, the general rule is that propeller orientation is a consideration <strong>unless</strong> there is some other mechanism to correct for yaw. With more than 4 rotors (which are unable to tilt), it is not necessary for the number of rotors in one direction to exactly equal the number in the other direction, because the rotor speeds can be changed independently to exactly cancel out the moments. However, this may cause uneven utilisation of the rotors - we tend to like symmetry.</p>
1370
2013-06-02T20:34:24.683
|multi-rotor|
<p>This question stems from a previous <a href="https://robotics.stackexchange.com/questions/1367/prop-orientation-on-a-multirotor">question</a>, where I asked why the prop orientation matters so much for a multirotor. But on further research<sup>&dagger;</sup> I found that these reasons need not apply to a tricopter. Then again: why?</p> <p>Are these reasons general for all multirotors with an odd number of motors? Or an even number?</p> <p>&dagger; <a href="http://www.rcgroups.com/forums/showthread.php?t=1889111" rel="nofollow noreferrer">This forum</a> talks a lot about tricopters and prop orientations, but nothing really answers the question. </p>
Prop orientation for tricopters
<p>The LEDs on the board indicate the status of the Ardupilot. See the <a href="http://code.google.com/p/arducopter/wiki/AC2_LEDs" rel="nofollow">LEDs on the boards</a> on the Arducopter wiki for more detail. Of interest to you are likely the A (red), B (yellow), and C (blue) LEDs. LED A is solid when the motors are armed and blinks otherwise. LED B flashes while the Ardupilot calibrates. Finally LED C indicates the status of the GPS module if there is one.</p> <p>Edit: I just noticed you are using Arduplane in which case you want to reference the <a href="http://code.google.com/p/ardupilot-mega/wiki/LED" rel="nofollow">LEDs and their meanings</a> page on the Arduplane wiki. The meanings are basically the same though it appears you do not have the option arm/disarm the motors.</p>
1378
2013-06-04T07:07:59.703
|uav|
<p>Using ArduPilot software (fixed wing, ArduPlane), I know that after I boot up I need to keep the system sit still while the gyros initialise.</p> <p>When I have ground station in the field it's easy to know when it's safe to launch because the telemetry message tells me. But I don't always fly with a ground station. In these situations I currently just sit and wait for a while before arming, then again before launching. </p> <p>Is there some reliable rule of thumb? information in the blinking of the arming switch or buzzing that I haven't worked out yet? This UAV has PX4 autopilot hardware (with both Px4FMU and PX4IOBoard), including with buzzer and illuminated arming switch. The LEDs on the board are obscured (but I could make light channels from them if required).</p> <p>(Note: I'm asking this question here to test the theory that robotics stack exchange night be an appropriate forum for these sorts of questions, which has been suggested a couple of times in response to the Area51 drones proposal.)</p>
How to tell when an ArduPilot has finished initialising its gyros (without referencing telemetry)?
<p>You can use an Ethernet-enabled microcontroller as someone mentioned above. The board can be used as the interface between the network and the mechanism that you will be using to push the button.</p> <p>For the mechanism you can use a servo motor or a pneumatic that releases with less force(you can control this by adjusting the inputs to the pneumatics manually). Any thing you use will have to have a control or a drive circuitry that triggers the mechanism.</p> <p>You will have to write a code block for the conversion of the frame received via the Ethernet to a control signal understandable by the control circuitry.</p>
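As a sketch of the software side, a minimal HTTP listener that fires a button-press routine could look like the following; the body of `press_button()` is a placeholder for whatever actually drives your servo or relay, and the port number is arbitrary:

```python
# Minimal HTTP endpoint that triggers a button press when /press is fetched.
# The action in press_button() is a placeholder; wire in your own driver.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

def press_button():
    print("pressing the button")   # placeholder: drive your servo/relay here

class PressHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/press":
            press_button()
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"pressed\n")
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):   # keep the console quiet
        pass

def start_server(port=8000):
    server = HTTPServer(("0.0.0.0", port), PressHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# start_server(); then fetch http://<board-ip>:8000/press to push the button
```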
1408
2013-06-07T13:41:23.617
|arduino|actuator|
<p>So I want to program something that will simply push a button, but controllable over ethernet. I'm new to robotics so I don't know where to start. What's the best way to control an actuator over a network connection?</p>
Pushing Buttons Remotely over Ethernet
<p>Despite its price, the Raspberry Pi is a rather powerful platform that can run most Linux desktop applications but the most CPU/memory hungry. For example, Quake 3 has been shown to <a href="http://www.youtube.com/watch?v=e_mDuJuvZjI" rel="nofollow">run in 1080p</a> on it.</p>
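The database-lookup scheme the asker describes is also computationally cheap; a toy version of the match-and-respond loop (the pattern table below is invented, not real Xaiml content) is well within a Pi's abilities:

```python
# Toy pattern-matching chatbot in the spirit of the database lookup the
# asker describes.  The pattern/response table is invented for illustration.
RESPONSES = {
    "hello": "Hi there!",
    "your name": "I am a small wheeled robot.",
    "bye": "Goodbye!",
}

def respond(user_input, default="I do not understand."):
    """Return the reply for the first pattern found in the input."""
    text = user_input.lower()
    for pattern, reply in RESPONSES.items():
        if pattern in text:
            return reply
    return default

print(respond("Hello, robot"))        # Hi there!
print(respond("What is your name?"))  # I am a small wheeled robot.
```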
1413
2013-06-09T21:29:14.680
|mobile-robot|raspberry-pi|artificial-intelligence|
<p>In general, is a Raspberry Pi processor powerful enough for a mobile chatbot? I want to make a small mobile robot that is like a chatbot. Is a Raspberry Pi processor powerful enough for any type of AI robotics? As far as a mobile robot, I want to make a wheeled robot about one foot in every dimension. The chatbot abilities will be from ProgramPY-SH, a new chatbot program that uses Xaiml databases. The chatbot works by looking through a database for a match of the user's input (vocal or text-based). It then acts according to the instructions given by the XML-like database.</p>
Is a Raspberry Pi processor powerful enough for a mobile chatbot?
<p>It sounds like you have 6 legs with some number of PID-controlled joints on each leg. You would like to move 3 legs at a time, while the other 3 legs stand in a stable tripod configuration.</p> <p><strong>Instead of figuring out how to move each set of 3 legs as one unit, you should be treating them as individual legs.</strong> You will send a leg a set of desired joint positions, and the PID control on each joint motor will effect that change. By sending a steady stream of desired joint positions, you will be describing a trajectory for the leg to follow.</p> <p>Coordinating the actions of all 6 legs simply involves a series of checkpoints to make sure that no leg gets too far behind the others. You need not keep all 6 legs on the same checkpoint; for example, you might keep each set of tripods tightly in sync but only sync the 2 sets at the point in the gait where the weight is shifted. </p> <p><em>Another way to imagine the checkpoints is to consider the gait to be a series of repeated movements of each leg, where each leg is slightly phase-shifted. In that case, the checkpoints would be the "constraints" on the allowed phase differences (e.g. <a href="http://www.msl.ri.cmu.edu/publications/pdfs/haynes-icra-2006.pdf" rel="nofollow">this CMU paper</a>).</em></p>
1414
2013-06-10T16:08:43.623
|mobile-robot|pid|legged|walking-robot|hexapod|
<p>I am currently working on a legged hexapod which moves around using a tripod gait. I have two sets of code to control the tripod. </p> <p>Set 1: Time-based control</p> <p>In this code set, I set the tripod motor set to move at its rated rpm for a required amount of time before shifting to the other tripod motor set.</p> <p>PID control would be based on counting the number of transitions using an optical speed encoder, calculating the error based on the difference between actual speed and required speed, and then adjusting the error with fixed Kd and Ki values.</p> <p>Set 2: Transition-based control</p> <p>In this code set I count the number of transitions required to complete one rotation of the leg (tripod motor set) before starting the other leg (tripod motor set).</p> <p>PID control would be time based. The error would be the difference in time taken for individual motors of the motor set.</p> <p>Query: Set 2 shows promising results even without PID control, but the first set does not. Why is that? The motors are basically set to move 1 rotation before the other set moves. </p> <p>Would the speed differences between the motors cause it to destabilize?</p> <p>How often do I update the PID loop?</p> <p>My robot seems to drag a little bit. How do I solve this?</p>
PID Conundrums for Legged Robots
<p>In short, yes, there are a number of robotics companies catering to just your needs. Specifically which apply to you depends on what type of robot you desire. Naturally cost varies with the complexity and durability. For example, if you would be satisfied with a differential drive robot for indoor use then the <a href="http://www.irobot.com/us/learn/Educators/Create.aspx" rel="nofollow">iRobot Create</a> may suit your needs. If you need something for outside use then you could look at <a href="http://www.clearpathrobotics.com/" rel="nofollow">Clearpath Robotics</a> products. For a full humanoid robot the <a href="http://www.aldebaran-robotics.com/en/" rel="nofollow">Aldebaran Nao</a> is a good choice. For quadrotors check out the <a href="http://ardrone2.parrot.com/" rel="nofollow">Parrot AR.Drone</a>. And I understand <a href="http://liquidr.com/technology/wave-glider.html" rel="nofollow">Liquid Robotics</a> offers a good selection of water-based robots. </p>
1416
2013-06-11T00:11:39.623
|mobile-robot|battery|reinforcement-learning|
<p>I'm involved in research on psychologically plausible models of reinforcement learning, and as such I thought it'd be nice to try and see how well some of the models out there perform in the real world (i.e. sensory-motor learning on a mobile robot). This has already been done in some robotics labs, such as Sutton's <a href="http://webdocs.cs.ualberta.ca/~sutton/papers/horde-aamas-11.pdf" rel="nofollow">implementation of the Horde Architecture on the "Critterbot"</a>. However, these implementations involve robots custom-built by robotics experts in order to deal with the trials and tribulations of learning on a long time-scale: </p> <blockquote> <p>"The robot has been designed to withstand the rigors of reinforcement learning experiments; it can drive into walls for hours without damage or burning out its motors, it can dock autonomously with its charging station, and it can run continuously for twelve hours without recharging."</p> </blockquote> <p>Unfortunately I'm no expert when it comes to designing robots, and I don't have access to a high-quality machine shop in any case; I'm stuck with whatever I can buy off-the-shelf or assemble by hand. Are these constraints common enough for amateur robotics suppliers to cater to, or should I expect to have to start from scratch?</p>
Building robots with high reliability, durability, and battery life
<p>It may be worthwhile to consider how laser scanners work. We know that it is possible to send a beam of light at an object, and detect how long it takes to be reflected back to the sensor to measure its distance.</p> <p>First of all, we use lasers because reflection of the light from the object is so important. lasers keep the light concentrated in a narrow beam, with minimal refraction. We'll come back to that later.</p> <p>There are several ways to measure distance. The first is triangulation. This typically depends on good placement of optics, and the CCD sensor the return beam shines onto. You can easily see the problem with a large distance - the angle detected gets increasingly close to $0^\circ$, so that we need sensor components which are very accurate at small scales. We would also like a beam width which is narrow for accurate detection of where the beam is, but there is greater diffraction at narrow beam widths, meaning the return beam of light is not that narrow. Over long distances, the beam gets increasingly wider, because of the small amount of diffraction.</p> <p>The second way is to measure the round-trip time. Light travels very fast at $3\times 10^8 \textrm{m/s}$. This means that even detecting something at $150 \textrm{m}$ will take $1 \textrm{μs}$. Embedded computers typically have a clock rate of only several megahertz, so even at this large distance, a system would struggle to provide an accuracy of 10%, and that is assuming the data acquisition is as fast - often the peripheral features have a slower clock rate.</p> <p>The data acquisition is typically much slower, allowing cheaper systems to be built. As to how they can get away with this - an interesting phenomenon is used. If a sinusoidal signal is embedded into the light (via the intensity), the return signal will also be a sinusoidal signal. However, depending on the distance, the signal will have a phase shift. 
That is, at $0 \textrm{m}$, there is no phase shift, and if the wavelength is $20 \textrm{m}$, then an object at $5 \textrm{m}$ will mean that the light travels $10 \textrm{m}$, creating a phase shift of $180^\circ$. If the signals are normalized to the same amplitude, taking the difference between the outbound signal and the return signal, we get another analog sinusoidal signal (note: this is only one method). The amplitude of this depends on the phase shift. At $180^\circ$, we get the maximum amplitude - double that of the original outbound signal. We can easily detect the amplitude of this signal even with slow digital circuitry.</p> <p>The main problem is that the choice of wavelength limits the maximum range. For example, a wavelength of $20 \textrm{m}$ limits the detection range to only $5 \textrm{m}$. If we simply chose a much longer wavelength, we would get the same percentage accuracy. In many cases, if we want greater absolute accuracy, the system will have a greater cost, since we must more accurately measure amplitude in the face of environmental noise. This may involve a number of changes - for example, a larger laser beam, greater light intensity, more accurate electronic components, more shielding from noise, and more power to run the system. If any one of these is not good enough, accuracy is affected.</p> <p>The other problem which affects all methods is the intensity of the return beam. As the distance increases, the intensity decreases. There is not that much scatter of light in air, so attenuation is not so much caused by this. However, even though laser light is used, there is a very small amount of diffraction. It is not possible to remove this diffraction completely, although it can be decreased with a wider beam width. Diffraction reduces the amount of the returning light beam incident on the sensor. Because the beam width gradually increases, the sensor only receives a small proportion of the returning light. 
This affects the accuracy of measurements. Using a larger sensor can also increase noise. Therefore, here we also see another limit on the distance. We can increase the distance by increasing light intensity, or increasing beam width. However, increasing light intensity increases power consumption and heat, while increasing beam width requires larger optic lenses, and larger distances between optic components.</p> <p>Essentially, there is a balance between cost, accuracy, power consumption, and size of the system, and longer distances often requires better components to maintain accuracy.</p>
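The phase-shift ranging described above is easy to sketch numerically. This is a hypothetical illustration of the geometry only, not code from any real scanner:

```python
def phase_shift_range(phase_deg, modulation_wavelength_m):
    """Distance implied by a measured phase shift in a phase-shift rangefinder.

    The light travels out and back, so the one-way distance is half the path
    length implied by the phase.  The result is unambiguous only for phases
    in [0, 360), i.e. distances up to half the modulation wavelength.
    """
    round_trip_m = (phase_deg / 360.0) * modulation_wavelength_m
    return round_trip_m / 2.0

# With a 20 m modulation wavelength, a 180 degree shift means a 5 m target,
# matching the example in the text.
print(phase_shift_range(180.0, 20.0))  # 5.0
```

This also makes the range/wavelength trade-off visible: doubling the modulation wavelength doubles the unambiguous range, but the same phase-measurement error then maps to twice the distance error.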
1435
2013-06-14T22:04:40.517
|localization|
<p>I'm looking at laser scanners and I see a huge range of detection distances. The furthest I've see are 30m if you exclude the very expensive ones that claim up to 150m. My question is what is the reason for the huge difference in range/price. I would think that with a laser it wouldn't be that difficult to detect something at distances greater than 30m. What's the physical limitation that makes it so much more expensive to go above 30m for a laser scanner or is there one?</p>
laser scanner distance
<p>If adjacent motors are 1 unit apart, the overall path-length of an X form is 2√2, or about 2.8284, and the overall path-length of a Steiner form is 1+√3, or about 2.732, which is 3.5% less. It is likely that hub mounting will decrease either or both of the total lengths by a few percent, but I will ignore that factor for the moment and merely say that the length improvement provided by using a Steiner form is quite small.</p> <p>In addition, the Steiner form will require either two hubs or one rather-long hub, and may require mounting hardware for spokes at six points instead of four. Because hub weight often is comparable to the weight of one or more spokes, the Steiner form has a weight disadvantage. (See “<a href="http://www.rcgroups.com/forums/showthread.php?t=1718951" rel="nofollow">Intro to Multirotors, Theory, Build Log and Tips</a>” for a series of pictures that illustrate relative hub and spoke sizes for a variety of quadcopters.) </p> <p>A central point, as available with an X-frame, is a convenient location for a battery and a controller that serves all the motors. For balance, an extra pad for mounting the battery and controller would be needed on the middle bar of a Steiner frame (unless multiple batteries and controllers are used).</p> <p>Note, the <a href="http://www.wowhobbies.com/quacopters-drones-quad-flyers.aspx" rel="nofollow">quacopters-drones-quad-flyers</a> page at wowhobbies.com includes a quadcopter, hexacopter and octocopter buying guide, in which several units are characterized by <em>IAS</em>, meaning there is “Inadequate Structural Support (eg: booms are too long and center frame is too weak to support the size of the quadcopter structure or any substantial payload)”. As size scales up, the crossbar in the Steiner form needs to scale up more than the rest of the arm structure (which will also overpower that form's 3.5% length advantage) else the frame will become structurally inadequate.</p>
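The two path lengths quoted above can be checked with a few lines of arithmetic (motors assumed to sit on the corners of a unit square):

```python
import math

# Four motors at the corners of a unit square (adjacent motors 1 unit apart).
x_frame = 2 * math.sqrt(2)   # X form: two full diagonals crossing in the centre
steiner = 1 + math.sqrt(3)   # Steiner tree: >-< form with 120-degree junctions

print(round(x_frame, 4))   # 2.8284
print(round(steiner, 4))   # 2.7321
print(round(100 * (x_frame - steiner) / x_frame, 1))  # saving, in percent
```

The saving comes out at a few percent either way you take the ratio, which is the point of the answer: the material advantage of the Steiner form is real but small.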
1437
2013-06-15T22:30:49.067
|design|quadcopter|
<p>I haven't made or flown any quadcopter, but I am fascinated by them. But when looking at the frame of a lot of designs, after whachting <a href="http://www.youtube.com/watch?v=dAyDi1aa40E" rel="nofollow">this video</a> I wondered why a lot the frames are in an X shape. Since the most efficient shape would according to the video be something like this >-&lt;, where each corner is 120°. I also did a quick search on the internet and found <a href="http://deltaquadlog.blogspot.nl/2013/03/frame-design.html" rel="nofollow">this blog</a> which stated the same (however he did not mention the exact angle) and said: "Even though this is not entirely a new idea, it has not yet been widely accepted by the community."</p>
Most material efficient quadcopter frame
<p>&quot;<a href="http://en.wikipedia.org/wiki/Least_significant_bit" rel="nofollow noreferrer">LSB</a> <a href="https://en.wikipedia.org/wiki/Root_mean_square" rel="nofollow noreferrer">RMS</a>&quot; means the root-mean-squared value of the total noise in least significant bits of the digital channel. Roughly, that's the standard deviation of the noise times the weight of one step of the digital value.</p> <p>&quot;<span class="math-container">$\mu g/\sqrt{Hz}$</span>&quot; means the power spectral density in micro-g's (<span class="math-container">$1\mu g \simeq 0.0000098 m/s^2$</span>). If the power spectral density is flat, then you can square this number, multiply it by the bandwidth, take the square root, and get to the noise as LSB RMS. The power spectral density is useful when you try to figure out the effect of sensor noise on velocity and/or position when integrating.</p> <p>&quot;<span class="math-container">$^\circ / s$</span>-rms&quot; has a similar meaning to the &quot;LSB RMS&quot;, except the noise is referred to the rate measurement rather than the ADC bits.</p> <p>&quot;<span class="math-container">$^\circ /s / \sqrt{Hz}$</span>&quot; is, again, the power spectral density.</p> <p>It should be obvious at this point what the magnetometer specifications mean.</p> <p>If you haven't got a background in random processes and signal analysis then you're going to have a rough time relating this back to real-world numbers, particularly if you're doing any kind of sensor fusion. Even the &quot;big boys&quot; in the sensor fusion game can't easily map sensor noise to system behavior without lots of simulation and head-scratching.</p> <p>Simpler problems will yield to analysis, however.</p>
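The density-to-RMS conversion described above ("square this number, multiply it by the bandwidth, take the square root") collapses to a one-liner for a flat spectrum. The sensor numbers below are hypothetical, chosen only to show the arithmetic:

```python
import math

def noise_rms(density, bandwidth_hz):
    """RMS noise from a flat noise density over a given bandwidth.

    density is in (units)/sqrt(Hz); the result comes back in the same
    base units.  Equivalent to sqrt(density**2 * bandwidth).
    """
    return density * math.sqrt(bandwidth_hz)

# Hypothetical accelerometer: 300 ug/sqrt(Hz) density, 100 Hz bandwidth.
rms_micro_g = noise_rms(300.0, 100.0)      # 3000 ug RMS
rms_ms2 = rms_micro_g * 1e-6 * 9.80665     # micro-g -> m/s^2
print(rms_micro_g, rms_ms2)
```

The same function works for the gyro ($^\circ/s/\sqrt{Hz}$ in, $^\circ/s$ RMS out) and magnetometer specs; for the accelerometer you still need one more step (the datasheet's LSB weight) to get from g's to "LSB RMS".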
1449
2013-06-20T07:12:43.997
|imu|accelerometer|gyroscope|noise|magnetometer|
<p>I'm trying to understand how noise is represented for accelerometers, gyroscopes, and magnetometers so that I can match the requirements of my project with the standard specs of these sensors. </p> <p>I want the output of a 3 axis inertial sensor to be values in meters/sec/sec, gauss, and radians/sec for each axis, and noise to be represented by a range around the true value (so X as in +/- X m/s/s, gauss, and radians/sec) for accelerometers, magnetometers, and gyroscopes respectively. Switching out gauss for teslas, meters for feet, radians for degrees, etc. would all be fine.</p> <p>After looking at a few datasheets, I'm surprised to find that... </p> <ul> <li>Accelerometer noise is measured in "LSB rms" and "μg/√Hz"(<a href="https://www.sparkfun.com/datasheets/Sensors/Accelerometer/ADXL345.pdf" rel="noreferrer">https://www.sparkfun.com/datasheets/Sensors/Accelerometer/ADXL345.pdf</a>, <a href="http://dlnmh9ip6v2uc.cloudfront.net/datasheets/Sensors/Accelerometers/MMA8452Q.pdf" rel="noreferrer">http://dlnmh9ip6v2uc.cloudfront.net/datasheets/Sensors/Accelerometers/MMA8452Q.pdf</a>)</li> <li>Gyroscope noise is measured in "º/s-rms" and "º/s/√Hz" (<a href="https://www.sparkfun.com/datasheets/Sensors/Gyro/PS-ITG-3200-00-01.4.pdf" rel="noreferrer">https://www.sparkfun.com/datasheets/Sensors/Gyro/PS-ITG-3200-00-01.4.pdf</a>)</li> <li>Magnetometer noise is measured in "µT rms" and "Gauss/√Hz" (<a href="http://dlnmh9ip6v2uc.cloudfront.net/datasheets/Sensors/Magneto/MAG3110.pdf" rel="noreferrer">http://dlnmh9ip6v2uc.cloudfront.net/datasheets/Sensors/Magneto/MAG3110.pdf</a>, <a href="http://www.vectornav.com/products/vn200-rug?id=54" rel="noreferrer">http://www.vectornav.com/products/vn200-rug?id=54</a>)</li> </ul> <p>What do these units mean, and how do they (or can they) translate into what I want?</p>
How are units of noise measurement related to units of a sensor's data measurement?
<p>The whole point of the chipselect pin is so that multiple devices (in this case, shields) can share the same SPI bus.</p> <p>As long as no two chipselect pins are the same, you ought to be fine. Make sure only one chip select pin is on at any given time.</p>
1454
2013-06-22T00:59:26.140
|arduino|
<p>I was wondering if it is possible to plug a motor shield on top of an Ethernet shield, even though the direction pins on the motor shield would be connected to the same pins as the spi bus. I was thinking that it would work if, in the coding, I disabled both chip selects on the Ethernet shield before I used the motors.</p>
Motor and Ethernet shields together
<p>Is an underwater quadrotor possible? Absolutely. Whether it's practical in that configuration is a different matter. </p> <p>In air, viscosity and buoyancy are negligible; in water, they are not. An aerial quadrotor will expend energy fighting gravity, while an underwater quadrotor can simply rely on positively buoyant materials to keep it from sinking. You can experience this yourself &mdash; you can swim in water (or use a small piece of foam for flotation), but not in air.</p> <p>An underwater quadrotor will expend energy fighting the water's fluid friction, while an aerial quadrotor will move easily. You can experience this effect as well &mdash; try throwing a frisbee in air, then see how far you can "throw" it underwater.</p> <p>If you were to actually operate a quadrotor underwater, you would most likely fly it with the rotors oriented horizontally &mdash; in the direction of travel, instead of being vertical and just tilting slightly in the direction of travel. </p> <p><strong>Update 2015-12-21:</strong> Someone did this exact thing. <a href="https://www.youtube.com/watch?v=FC9EJhs0pc0" rel="nofollow noreferrer"><img src="https://i.imgur.com/FZy52am.gif" alt="underwater quadrotor"></a></p> <p>While a torpedo shape is typically favored for cruising in a forward direction, there are other vehicles with 4 thrusters (positioned for maneuverability), such as Bluefin's HAUV:</p> <p><img src="https://i.stack.imgur.com/JlXXy.jpg" alt="Bluefin HAUV Prototype"></p>
1466
2013-06-24T05:26:18.397
|quadcopter|
<p>Just out of curiosity, can the concept of a Quadcopter be applied to an ROV? Would it work the same way underwater as it would be in Air? If not what kind of modifications it would take to implement that idea, underwater?</p>
Quadcopter application underwater
<p>In case you're not already aware, the problem you are asking about is generally referred to as the <a href="https://en.wikipedia.org/wiki/Boundary_value_problem" rel="nofollow">two-point boundary value problem</a>. For some systems, a closed form solution may be extremely difficult to calculate and may not even exist. As such it would help to know more about your dynamics.</p> <p>Your description would seem to imply that you are working with a double integrator system. If that's accurate then you could use a fixed-final-state-free-final-time controller. Because the dynamics matrix is nilpotent you can find a closed form solution in terms of time, then minimize the function to find the optimal arrival time. From there you can use your favorite numerical integration method to verify that the accelerations are within the desired bound. This approach will actually work for any linear system with a nilpotent dynamics matrix. For additional details please see section IV, Optimally Connecting a Pair of States, of my paper <a href="http://arl.cs.utah.edu/pubs/ICRA2013-1.pdf" rel="nofollow">Kinodynamic RRT$^*$: Asymptotically Optimal Motion Planning for Robots with Linear Dynamics</a>.</p>
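The general fixed-final-state, free-final-time machinery from the paper doesn't fit in a snippet, but the simplest special case — a 1D double integrator moving between two rest states under a symmetric acceleration bound — has a well-known bang-bang closed form, sketched here purely for illustration:

```python
import math

def min_time_rest_to_rest(distance, a_max):
    """Bang-bang minimum time for a 1D double integrator, zero start/end velocity.

    Accelerate at +a_max for half the distance, then decelerate at -a_max:
    d/2 = a_max * (t/2)**2 / 2  =>  t = 2 * sqrt(d / a_max).
    """
    return 2.0 * math.sqrt(abs(distance) / a_max)

print(min_time_rest_to_rest(8.0, 2.0))  # 4.0
```

With nonzero boundary velocities the solution is still bang-bang in 1D but requires solving for the switching time; and as the answer notes, applying this per-axis only bounds acceleration per dimension, not its overall magnitude.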
1483
2013-06-26T22:51:52.753
|control|motion-planning|
<p>I'm trying to find the optimal-time trajectory for an object (point mass) from point <code>p0</code> with initial velocity <code>v0</code> to point <code>p1</code> with velocity <code>v1</code>. I'm considering a simple environment without any obstacles to move around. The only constraint is a maximum acceleration in any direction of <code>a_max</code> and optionally a maximum speed of <code>s_max</code>.</p> <p>This is easy to work out in 1D but I struggle with the solution 2D and 3D. I could apply the 1D solution separately to each dimension, but this would only limit acceleration and speed in a single dimension. The actual acceleration might be larger in multiple dimensions.</p> <p>Are there any textbook, closed-form solutions to this problem? If not, how can I solve this in a discrete-event simulation?</p>
Optimal-time trajectory in 2D and 3D with simple constraints
<p>Moving items around requires, apart from the mechanics, free and therefore unused space. Wouldn't you rather know <em>what</em> is in <em>which</em> position? As for the barcodes: they can become unreadable under ice. I'm thinking of using RFID for a similar project covering about 10 fridges, each holding 30 boxes.</p>
1489
2013-06-27T10:05:35.743
|arduino|motor|sensors|cameras|
<p>I'm endeavoring to prototype a challenging sorting mechanism inside a fridge and would appreciate any constructive tips on how to get from the specs to a plausible design.</p> <h3>Problem</h3> <p>The aim of the game is to identify and sort food items in the limited space of a fridge - such that a user would push their unsorted shopping into a chamber at the top of the enclosure - and the machine inside would then try to identify the contents with help of bar-codes (first big problem) - and then sort and move the items according to their identities into different chambers below (second big problem). </p> <h3>Solution?</h3> <p>Are there any existing devices that already serve such functions (automatic bar-coding and sorting), the designs of which could perhaps inform the mechanics of the device I'm planning to construct?</p> <ul> <li>I'm thinking maybe manufacturing plants</li> <li>or packing factories with conveyor belts etc may use systems that already solve such problems? </li> <li>Or filtering mechanisms in candy dispensers,</li> <li>mechanized lifting forks? </li> <li>Textbook engineering mechanisms?</li> </ul>
automatic sorting with barcode identification inside a refrigerator
<p>Lithium thionyl chloride / SOCl2 batteries have excellent <a href="http://www.allaboutbatteries.com/Battery-Energy.html" rel="nofollow noreferrer">energy density</a> but are not aimed at high-current applications and are not rechargeable, so it seems likely that you can save weight and cost by using some other kind of batteries. </p> <p>Quadcopter batteries (typically <a href="http://en.wikipedia.org/wiki/Lithium_polymer_battery" rel="nofollow noreferrer">lithium ion polymer</a> technology, rated for 5C to 15C discharge rates and rechargeable hundreds of times) might be a better choice. You could use half-a-dozen 2200 mAh packs, or a couple of 5500 mAh packs.</p> <p>Previous question <a href="https://robotics.stackexchange.com/q/554">Quadcopter liPo battery weight/capacity trade off</a> includes some charts comparing energy densities, etc., for several battery technologies.</p> <p>Note, if your application is controlling several motors, with several motor controllers, you may be able to use higher voltages to the controllers, which use PWM to deliver desired amounts of energy to the motors. This technique is usable when you have a high-voltage, low-current power source. However (see below) that doesn't help in the present case. Likewise, a DC/DC buck converter (that converts high voltages to lower voltages, at ~ 95% efficiency) doesn't help.</p> <p>The 20A, 10Ah parameters of your SOCl2 example imply 30 minutes of operation. The 20A, 14.4V parameters imply energy rate is 288 W. The current drain per cell in the example is 1A (which exceeds the rating of a typical AA SOCl2 cell). If we limit per-cell current to 1A, we get 3.6 W per cell. Then the number of cells required is 288/3.6 = 80 cells, regardless of the cells' series or parallel configuration, which is why a higher voltage to the motor controller, or to a buck converter, doesn't help.</p>
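The cell-count argument in the last paragraph can be written out as a tiny calculation; the function name here is just for illustration:

```python
import math

def cells_required(load_watts, cell_volts, cell_max_amps):
    """Minimum number of cells when each cell may deliver at most cell_max_amps.

    The series/parallel arrangement doesn't change this count, only the pack
    voltage -- which is why a higher controller voltage or a buck converter
    doesn't help in this case.
    """
    return math.ceil(load_watts / (cell_volts * cell_max_amps))

# Figures from the answer: 20 A at 14.4 V = 288 W; SOCl2 AA cell: 3.6 V, 1 A max.
print(cells_required(288.0, 3.6, 1.0))  # 80
```

At 19 g per cell that 80-cell floor already fixes the ~1.52 kg pack weight, independent of how the cells are wired.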
1500
2013-06-29T00:30:43.873
|power|
<p>I am building a robot where power density is critical, and looking into using lithium thionyl chloride (SOCl2) batteries. I will be drawing around 20A constantly and need between 12 and 17V. The batteries I have found so far are AA-sized and designed to deliver 100mA, 3.6v, 2.4Ah, and weigh 19g each. I could picture a pack with 4 blocks of these batteries in series, where each block has 20 batteries in parallel. That would mean 48Ah, which is way more than the ~10Ah that I need, and 1.52kg, which is more than I would like the robot be carrying.</p> <p>So, the question is, is there a way to achieve 10Ah at 20A and 14.4V (i.e. for 5 hours) using SOCl2, carrying much less weight than 1.52kg?</p>
lithium thionyl chloride batteries to generate 20A @ 14.4V
<p>This is an extremely general question. What code you will use will depend on the type of "robotic movement" you are attempting to achieve, and what platform you are running the code on.</p> <p>Let's say you want to control a stepper motor from Python running on a Raspberry Pi. Stephen Phillips has written <a href="http://blog.scphillips.com/2012/12/a-python-class-to-move-the-stepper-motor/" rel="nofollow">a nice Python class</a> for this purpose. The code was for Python 2.x, but the only changes you need to make to port it to Python 3 are to add parentheses to the print statements and change the backticks to repr(), since Python 3 removed both the print statement and the backtick repr shorthand. See the output of Python's 2to3 command below:</p> <pre><code>@@ -41,10 +41,10 @@
         steps = (steps % self.steps_per_rev)
         if steps &gt; self.steps_per_rev / 2:
             steps -= self.steps_per_rev
-            print "moving " + `steps` + " steps"
+            print("moving " + repr(steps) + " steps")
             self._move_acw(-steps / 8)
         else:
-            print "moving " + `steps` + " steps"
+            print("moving " + repr(steps) + " steps")
             self._move_cw(steps / 8)
         self.step_angle = target_step_angle
@@ -98,7 +98,7 @@
     GPIO.setmode(GPIO.BOARD)
     m = Motor([18,22,24,26])
     m.rpm = 5
-    print "Pause in seconds: " + `m._T`
+    print("Pause in seconds: " + repr(m._T))
     m.move_to(90)
     sleep(1)
     m.move_to(0)
</code></pre>
1503
2013-06-29T22:41:17.110
|python|
<p>Are there any Python3 modules used to program robotic movement by declaring a device or component and then providing the instructions? I am not looking for modules that test the components.</p>
Python3 Modules for motor movement
<p>You need to use a PID algorithm to control the speed of the motors; that way you'll be able to make them spin as slowly as you need while still getting torque. The PID algorithm needs to measure the speed of your motor somehow, and the typical way is to place optical encoders on the wheels. This is a must for any rover; besides the speed, it will give you a very good estimate of position, which helps with navigation and making turns.</p> <p>You can steal the optical encoders from an old mouse, the kind with a ball. </p>
1514
2013-06-30T18:23:12.643
|motor|pwm|
<p>So I built three little bots so far. </p> <p>One with a raspberry-pi, (6V motors), one with an arduino (12V motors), and another with an arduino but a hacked remote car (7ish, volt motors):<br> <a href="https://i.stack.imgur.com/AJ45l.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/AJ45ls.jpg" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/f6NUt.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/f6NUts.jpg" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/Rr5fH.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/Rr5fHs.jpg" alt="enter image description here"></a></p> <p>The problem I have with all these is that the motors spin so fast the car just bumps into something in a few seconds. (I have a small house)</p> <p>I tried to use PWM to control the speed, but at a little less than full throttle (255) they jam up and they can't take the weight I put on them.</p> <p>Should I buy one of these chassis that slow the motor down and give it torque with a gearbox, or is there anything else I can do?</p>
Reducing motor speed without jamming up
<p>The snow problem you're referring to seems to be that the car can no longer tell where the edge of the road is.</p> <p>Even in heavy traffic, a good driver still maintains enough space from the car in front that plenty of road is visible, and the side of the road identifiable from LiDAR. This is not common practice, but I imagine autonomous cars would be programmed to do so. In that case, plenty of road is always visible for the car.</p> <p>Google's car seems to have <a href="http://www.youtube.com/watch?v=BrmorE5W1tM" rel="nofollow">no problems in heavy traffic</a>.</p>
1518
2013-07-01T16:27:51.393
|ugv|lidar|
<p>If a street is extremely crowded to an extent that the terrain is not visible from the point of view of the LIDAR (e.g. in google's self driving car), can it still manage to localize itself and continue to operate? I recall Sebastian Thrun saying that Google's car cannot navigate through snow filled roads since the onboard LIDAR cannot map the terrain beneath the snow (<a href="http://www.economist.com/node/21560989" rel="nofollow">e.g. here</a>).</p> <p>[Edit : Based on the comments] Clarifying the context, here "not visible" means there is an obstacle between the LIDAR and the terrain</p>
How do visual obstructions impact the ability to localize using LIDAR?
<p>You may want to check out <a href="https://robotics.stackexchange.com/questions/167/what-are-good-strategies-for-tuning-pid-loops">this question on how to tune PIDs</a>. </p> <p>It sounds like you're confusing the role of <code>error</code> in this calculation. The PID equation takes the error measurement (actual position - desired position) and determines how much "response" should be commanded in the controller (e.g. how much force). </p> <p>So instead of saying <code>if (error_value &gt; 0) { }</code>, you should be saying something like <code>response = get_PID_calculation(error_value)</code>. Depending on the construction of your specific vehicle, that response would be sent to the drive motor as the throttle amount, the steering as desired angle, or some other control input.</p>
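To make the error-in, response-out structure concrete, here is a minimal, hypothetical PID sketch (written in Python for readability; the same structure ports directly to Arduino C). All names are illustrative:

```python
class PID:
    """Minimal PID controller: maps an error measurement to a control response."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        """error = current position - setpoint; dt = loop period in seconds."""
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=1.0, ki=0.1, kd=0.05)
error = 2.0                             # e.g. line-sensor offset from centre
response = pid.update(error, dt=0.01)   # feed this to the motors / steering
print(response)
```

Note there is no `if (error > 0)` branch at all: the sign and magnitude of `response` carry that information, so positive and negative errors are handled symmetrically.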
1524
2013-07-02T12:06:21.443
|arduino|pid|line-following|
<p>I'm working on building a line follower robot and want to optimize its performance. It was suggested that I use a PID algorithm. I read a lot about PID but am a bit confused regarding the following:</p> <p>I've calculated the error_value using $k_p * proportional + ...$ But regarding the change in the motor speed, I'm confused as to what to use in the comparison: the difference (i.e. current position - setpoint) or the error_value. That is, should I use </p> <pre><code>if (difference &gt; 0)
{
    //code for changing appropriate motor's speed using error_value
}
</code></pre> <p>or </p> <pre><code>if (error_value &gt; 0)
{
    //code for changing appropriate motor's speed using error_value
}
</code></pre> <p>Also, is there any specified range for the values of the constants $k_p$, $k_i$ and $k_d$? I'm using a <a href="https://en.wikipedia.org/wiki/Differential_wheeled_robot" rel="noreferrer">differential wheeled robot</a> for my line follower.</p> <p>I would also be happy if someone suggested any other advanced optimization algorithm for improving the line follower robot.</p>
Line Follower optimization
<p><sub>This should probably have been a comment on @RobertEnglish's answer ... but comments don't seem to allow pictures</sub></p> <p>Bevel gears. Plastic ones should be rather inexpensive.</p> <p><a href="https://i.stack.imgur.com/RzlZL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RzlZL.png" alt="enter image description here" /></a></p>
1538
2013-07-05T03:47:23.300
|motor|motion|movement|driver|linear-bearing|
<p>I am interested in using this miniature motor <a href="http://www.newscaletech.com/technology/squiggle-motors.php" rel="nofollow noreferrer">Squiggle Micro Motor</a> to create very tiny <strong>horizontal</strong> movements. However, due to very limited space, I can only place it <strong>vertically</strong> within my project.</p> <p>Assuming this motor is placed as follows, how can one adapt it to simultaneous movement at a right angle? (Ideally with the X-axis movement matched to the Y-axis movement as well as possible.) <img src="https://i.stack.imgur.com/sdyln.png" alt="enter image description here" /></p>
How to convert vertical motion to horizontal?
<p>A great way to learn C/C++ for Arduino is the website <a href="http://ardx.org/CIRC01" rel="nofollow">ardx.org/CIRC01</a>.</p> <p>This website is completely free and offers great mini projects that involve electronics, each with example code and a clear explanation of how the code works. The site consists of 11 projects covering the basics of C/C++ for Arduino, and it also teaches you how to read schematics and data sheets. After you finish project #1, change the CIRC01 at the end of the URL to CIRC02, and each time you complete a project move on to the next by increasing the number at the end of CIRC until you reach 11. By the time you get to project #11 you should have a good understanding of programming for Arduino and a better understanding of electronics as well. Another way I learned C++ is by reading the book "C++ for Dummies" by Stephen R. Davis. It teaches C++ in a simple yet effective fashion, though it is quite tedious to read cover to cover; if it were me, I would start with the projects on ardx.org and then, once I had finished them all and wanted even more knowledge of C/C++, read the book. Good luck!</p>
1544
2013-07-06T13:25:20.640
|arduino|
<p>I have no problem reading circuit schematics, connecting wires, resistors etc. (based on, for example, instructables.com), but I've only tried to learn Java (a while ago) and it's difficult for me to work out what's going on with all this C code. Are there any tutorials that are focused on the programming part? Thanks!</p>
Arduino C/C++ programming tutorials
<p>If you want to maximise efficiency, you go with two rotors: a main rotor, and a tail rotor. I.e. a traditional helicopter. With the common CCPM setup (Cyclic Collective Pitch Mixing), you have three servos controlling the main rotor. Together they can tilt it any direction and control the pitch on the blades. As you can imagine, this is mechanically complicated.</p> <p>The advantage of a quadcopter is that it's very simple; you have no servos. Adjacent arms are counter-rotating to control yaw.</p> <p>Tricopters have a yaw imbalance that is corrected by dynamically tilting one of the motors.</p> <p>So, with a tricopter you lose the "no servo" advantage that made people switch from traditional 2-rotor helis to quads.</p>
1552
2013-07-08T00:46:21.340
|quadcopter|multi-rotor|
<p>Hi, I just thought it would be useful to post my idea here.</p> <p>I've seen videos about automated quadcopters: <a href="http://www.ted.com/talks/raffaello_d_andrea_the_astounding_athletic_power_of_quadcopters.html" rel="nofollow">http://www.ted.com/talks/raffaello_d_andrea_the_astounding_athletic_power_of_quadcopters.html</a> and <a href="http://www.ted.com/talks/vijay_kumar_robots_that_fly_and_cooperate.html" rel="nofollow">http://www.ted.com/talks/vijay_kumar_robots_that_fly_and_cooperate.html</a>.</p> <p>I surfed pages from the companies presenting this research and other information on the internet, but I haven't found why they use quadcopters specifically. I understand how accelerating, rotating and rolling work in those quadcopters - it's simple - but they claim that quadcopters have the minimum number of working parts to fulfill their needs, which I don't agree with: I think tricopters are better here (duocopters can't rotate horizontally, but tricopters can, by inclining and then powering the remaining left or right propeller).</p> <p>I read forums, calculated, and drew drafts of both tri and quad, and found that the tri is much more efficient than the quad in just about everything with the same props and battery, taking into account that the copter with the best properties is the smallest one with the largest props. So: 3:4 moving parts (no vectored yaw on the tri), 9.5:16 size, building a Y instead of an X construction takes far less material (1.5:2.82), lower maximum power input (3:4), and better power efficiency gives a longer flight time; tricopters also have improved agility over quadcopters.</p> <p>The only disadvantage I see is the somewhat complicated horizontal rotation of a tricopter without vectored yaw, which can be a problem in human-controlled machines but is easily solved by simple algorithms in automated machines: so it's not a real disadvantage, just a small amount of work to be done. I was thinking about doing this in my bachelor thesis, but for now I am looking for your opinions. Thanks!</p> <p>EDIT: Maybe the torque is the problem, because on tricopters you can have all 3 props spinning in one direction, or 2 in one direction and 1 in the opposite, and it's symmetrical in neither case - but I'm not sure if this is the main problem...</p>
Use of automated tricopters instead of quadcopters?
<p>If you want sub-meter tracking of the moving object, GPS alone will not be sufficient. I concur that passive vision-based techniques will require a huge amount of engineering to implement, not to mention substantial size, weight, and power. I recommend using differential GPS techniques -- not exactly easy, but simpler from a system standpoint.</p> <p>You will need a high-update-rate GPS receiver that outputs Carrier Phase and Pseudo-Range data as well as a traditional GPS solution. You will also need to transmit this additional data to your quadcopter from both a nearby reference station and the moving object. It is useful to have a stationary reference station since its position is exactly known, so using the satellite ephemeris you can determine the propagation and timing errors from each satellite very precisely. Since you know the approximate geometry to both the non-moving reference station and the moving object, and the paths to the satellites are approximately the same for both cases, you can adopt the precisely calculated propagation and timing errors learned from the reference station and apply them to the moving object.</p> <p>Additional accuracy is gained by computing a relative position solution for both the quadcopter and the moving object that explains the observed changes in Carrier Phase over time as both of these move. Since these are phase changes, the integer number of cycles is unknown and must be solved for using a well-known technique for finding integer solutions to least-squares problems (name escapes me at the moment). A Kalman Filter or Particle Filter is also typically needed to converge to the correct position solution using the carrier data after fusing with the pseudo-range-derived data. Check out work done at Auburn University's NavLab for examples of this type of work.</p>
1554
2013-07-08T06:46:38.617
|arduino|quadcopter|gps|
<p>If this has already been answered, by all means please point me to it.</p> <p>I am in the process of building a quadcopter which I eventually plan to run autonomously by allowing it to track an object and take a video feed of it moving.</p> <p>GPS is one of the options I've considered, basically:</p> <ul> <li>GPS antenna on the moving object (person, car, bike, surfer)</li> <li>GPS antenna on the quadcopter</li> <li>Radio to transmit coordinates from the moving object to the quadcopter</li> </ul> <p>Some of the challenges I can foresee are:</p> <ul> <li>Line of sight for the camera. How does the camera know exactly where to point?</li> <li>Angle: how can I pre-program the quad to always record, say, 10m to the right of the moving object, or even better, program a set of angles to record from whilst keeping up with the object?</li> <li>GPS accuracy: what happens if the GPS lock is weak?</li> </ul> <p>What are some of my other options? I saw this <a href="http://www.youtube.com/watch?v=w2itwFJCgFQ">TED Talk</a> where the quads follow a ball-shaped sensor. I believe it uses Kinect cameras, and lots of them, which is not really an option for this challenge.</p> <p>So I'm open to hearing some ideas before I start research and development of these features.</p>
Track a moving object
<p>Here's an intuitive way to think about this. A quadrotor has 4 motor speeds that you can adjust to control 4 things: vertical lift, pitch angle, roll angle, heading angle.</p> <p>A trirotor has 3 motor speeds and therefore only lets you control 3 things; one of the four previously listed has to go. Using @srim's notation, and if A is the front, then thrust from A+B+C is vertical lift. The differential thrust between A and (B+C) controls pitch angle. The differential thrust between B and C controls roll angle. Once you have set the speeds of the 3 rotors to control these 3 things, you can compute the reaction/drag torques that will arise, but you have no degrees of freedom left to control them. In general these torques will not be balanced, so the trirotor would spin around its vertical axis; maybe a big tail fin could keep that in check...</p>
1562
2013-07-09T11:15:30.823
|design|quadcopter|uav|
<p><a href="https://robotics.stackexchange.com/a/549/37">An answer</a> to the question <a href="https://robotics.stackexchange.com/q/543/37">Why are quadcopters more common in robotics than other configurations?</a> said:</p> <blockquote> <p>You need 4 degrees of freedom to control yaw, pitch, roll and thrust.</p> <p>Four props is therefore the minimum number of actuators required. Tricoptors require a servo to tilt one or more rotors which is more mechanically complicated.</p> </blockquote> <p>In a comment, I asked:</p> <blockquote> <p>How do you get pure yaw motion with a quadcoptor and if that's possible why won't this work with a tricoptor? I don't understand how can you get yaw motion with any system where all rotors are in a plane without first tilting and moving. I would have thought that the main difference between quadcopters and tricoptors would be the kinematic calculations would be more complex.</p> </blockquote> <p><a href="https://robotics.stackexchange.com/a/1453/37">Another answer</a> explained:</p> <blockquote> <p>you get pure yaw in the following way:</p> <p>North and South motors rotating the same speed but collectively at a higher (or lower) speed than East and West Motors which are also at the same speed.</p> </blockquote> <p>This explains why it works with a quadcopter, but doesn't explain why it won't work with a tricopter.</p> <p>Is it simply the fact that the asymmetry means that you can't imbalance the <a href="http://en.wikipedia.org/wiki/Torque_effect" rel="nofollow noreferrer">torque effects</a> to provide yaw movement while still keeping the thrusts balanced to keep pitch and roll constant?</p>
If you can create pure yaw motion with a quadcoptor then why won't this work with a tricoptor?
<p>Ian's answer is mathematically correct. However, on most processors division takes longer than multiplication. So if you're at all pressed for processor resources you would want to precalculate a gain and offset for each channel, and apply it:</p> <p>$k_{zz} = \frac{1}{S_{zz}}$</p> <p>$b_{zz} = -\frac{B_z^{0g}}{S_{zz}}$</p> <p>$a_{z1}^{corrected} = k_{zz} a_{z1} + b_{zz}$</p>
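As a sketch of that precalculation (with made-up calibration numbers, not from a real sensor), the per-sample correction reduces to one multiply and one add:

```python
# Hypothetical calibration results for the Z axis (illustrative values):
S_zz = 1.02     # measured sensitivity
B_z0g = 0.05    # measured zero-g bias

# Precompute gain and offset once...
k_zz = 1.0 / S_zz
b_zz = -B_z0g / S_zz

def correct_z(raw_z):
    # ...then each sample costs one multiply and one add, no division
    return k_zz * raw_z + b_zz

corrected = correct_z(1.07)   # (1.07 - 0.05) / 1.02, i.e. about 1.0 g
```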
1570
2013-07-10T11:07:19.197
|accelerometer|calibration|errors|
<p>I found <a href="http://www.kionix.com/sites/default/files/AN012%20Accelerometer%20Errors.pdf" rel="nofollow">a good explanation</a> of how to remove accelerometer bias (when on a flat table, only one axis should show values; the other two should be 0). I've calculated the S and B factors (page 3):</p> <blockquote> <p>Record $B_x^{0g}$, $B_y^{0g}$, $B_z^{0g}$, $S_{xx}$, $S_{yy}$, and $S_{zz}$ in EEPROM or flash memory and use these values in all subsequent calculations of acceleration to get the corrected outputs.</p> </blockquote> <p>I don't know how to incorporate these into the final calculation of accelerations. I guess the bias should be subtracted from my sensor reading. What about the sensitivities (S)?</p>
Accelerometer bias removal
<p>I did this exact same project 6 years back. I used two servos. The first was a continuous rotation servo with a spool to pull the spider up. The spool did not have an end on it, so that when the spool turned parallel to the string it all came loose and the spider fell. It was simple and effective, although the CR servo that I used was underpowered.</p> <p>I have a write-up in <a href="http://forums.parallax.com/showthread.php/107352-Oldbit-s-Unofficial-Propeller-Halloween-Contest-%28Over-300-value-in-cash-amp-priz?p=760287&amp;viewfull=1#post760287" rel="nofollow noreferrer">this thread</a> on the Parallax forums.</p> <p><img src="https://i.stack.imgur.com/LSHzj.jpg" alt="hardware picture"></p>
1571
2013-07-10T14:31:23.340
|motor|rcservo|
<p>For Halloween, I'd like to build a spider that drops from the ceiling when it detects motion and then rewinds itself back up to scare the next kids. I already have a lightweight foamish spider from Hobby Lobby that I'd like to use, but I need help adding the smarts to it to scare the kids.</p> <p>Ideally, it'd be able to detect how tall/far away the kid is to drop to a custom height each time, but I'd settle for a standard dropping height if that's too much. I even had an idea of having a motion sensor and having it shoot Silly String webs in front of people.</p> <p>I have a very technical background, but I'm a total noob when it comes to robotics, so ideas as to what components and considerations I'd need would be greatly appreciated!</p>
How to raise/drop a spider?
<p>I realize this question is 2 years old, but I have direct recent experience with this. The way I did this is with 6 rotated cube positions with 1000 points at each position, so a total of 6000 samples.</p> <p>I'm assuming Matlab/numpy nomenclature, where NxM means N rows and M columns.</p> <p>I assume an equation like Ax = B, where B is the measured values matrix (6000x4, with the right column all ones) and A is the "real" values matrix (also 6000x4, which has six 1000-value sections, with 1000 copies each of [1,0,0,1], [-1,0,0,1], [0,1,0,1], [0,-1,0,1], [0,0,1,1], [0,0,-1,1], corresponding to Xup, Xdown, Yup, Ydown, Zup, Zdown). x is a 4x4 matrix where the upper-left 3x3 components are the gain and cross-axis sensitivities, and the right 3x1 components are the offsets; the bottom row is [0, 0, 0, 1]. By using a 4x4 matrix we can do everything in linear math and avoid a separate offset computation.</p> <p>I then let numpy compute x as the least-squares solution of Ax = B; in Matlab this would be A\B. The result is a 4x4 matrix that minimizes the errors and averages out all the samples. Conceivably you could also do this with only two positions, where each axis is diagonal to gravity, then upside down from there, and you adjust the expected "ideal" values accordingly.</p> <p>The result is a 4x4 matrix that fixes scale, offset, and cross-axis errors, which also means it corrects for any rotation error you have between the sensor and the reference edge hardware.</p> <p>Here is the Python code, assuming numpy:</p> <pre><code>import numpy as np

def least_squares_fit(A, B):
    """Does a least squares fit to find x of Ax = B.

    Returns the best guess of the original distortion matrix x and its
    inverse.  A is the ideal values, B is the distorted values.
    Use the inverse to correct real data back to ideal data."""
    # Extend A and B with a column of ones so the offsets are solved too
    A = np.hstack((A, np.ones(len(A)).reshape(-1, 1)))
    B = np.hstack((B, np.ones(len(B)).reshape(-1, 1)))
    X, res, rank, s = np.linalg.lstsq(A, B)
    X = X.T  # transpose so X maps ideal column vectors to distorted ones
    Xi = np.linalg.inv(X)
    return (np.asmatrix(X), np.asmatrix(Xi), np.sum(res))
</code></pre>
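As a hypothetical end-to-end check of the approach described above (fabricated gain/cross-axis matrix and offsets, no noise), the extended-matrix least-squares step recovers the distortion exactly. Note this sketch applies the 4x4 matrix on the right instead of transposing it:

```python
import numpy as np

# Ideal gravity vectors for Xup, Xdown, Yup, Ydown, Zup, Zdown
ideals = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                   [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
A = np.repeat(ideals, 1000, axis=0)          # 6000 x 3 "real" values

# Fabricated distortion: gain/cross-axis matrix M_true and offsets b_true
M_true = np.array([[1.02, 0.01, 0.00],
                   [0.00, 0.98, 0.02],
                   [0.01, 0.00, 1.05]])
b_true = np.array([0.03, -0.01, 0.02])
B = A @ M_true + b_true                      # 6000 x 3 "measured" values

# Extend with a ones column so the offsets are solved in the same step
A4 = np.hstack([A, np.ones((len(A), 1))])
B4 = np.hstack([B, np.ones((len(B), 1))])
X, *_ = np.linalg.lstsq(A4, B4, rcond=None)  # X maps ideal -> distorted
Xi = np.linalg.inv(X)                        # correction: distorted -> ideal

corrected = (B4 @ Xi)[:, :3]                 # recovers A to machine precision
```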
1579
2013-07-10T20:44:48.130
|sensors|accelerometer|calibration|errors|
<p>I've already <a href="https://robotics.stackexchange.com/questions/1570/accelerometer-bias-removal">asked a related question (accelerometer bias removal)</a> here on Robotics and got somewhat better results for the corrected accelerometer output. To get even better results I found the <a href="http://www.vectornav.com/support/library?id=86" rel="nofollow noreferrer">calibration equations (7th &amp; 8th paragraph)</a> from Vectornav, which are a slightly enhanced version of the solution in the linked question:</p> <p><img src="https://i.stack.imgur.com/ZlbWs.png" alt="enter image description here" /></p> <p>However, six more variables are needed:</p> <blockquote> <p>Sensitivity of sensor X-axis to Y-axis inputs (<span class="math-container">$M_{xy}$</span>)</p> <p>Sensitivity of sensor X-axis to Z-axis inputs (<span class="math-container">$M_{xz}$</span>)</p> <p>Sensitivity of sensor Y-axis to X-axis inputs (<span class="math-container">$M_{yx}$</span>)</p> <p>Sensitivity of sensor Y-axis to Z-axis inputs (<span class="math-container">$M_{yz}$</span>)</p> <p>Sensitivity of sensor Z-axis to X-axis inputs (<span class="math-container">$M_{zx}$</span>)</p> <p>Sensitivity of sensor Z-axis to Y-axis inputs (<span class="math-container">$M_{zy}$</span>)</p> </blockquote> <p>Below it is also stated:</p> <blockquote> <p>IEEE-STD-1293-1998 [...] provides a detailed test procedure for determining each of these calibration parameters</p> </blockquote> <p>However, after searching through the <a href="https://docs.google.com/file/d/0B8zqB-WeBKRBTUN4OU0weEV0eVE/edit" rel="nofollow noreferrer">1293-1998 standard</a> (especially page 201 in Google Docs) I didn't find any clue on how to calculate the <span class="math-container">$M$</span> values. Also, the <span class="math-container">$B_{d}$</span> and <span class="math-container">$V_x$</span> values from the Vectornav equations are not explained anywhere. Can someone point me further?</p>
Accelerometer calibration - how to get cross-axis sensitivities
<p>Or you could just do what I have done and scavenge two DC motors plus gearboxes from used cordless drillers. Mine are from 14.4v drillers and I am operating them using 3S LIPOs (11.1v). The LIPOs are not that expensive if you buy them rated at say 2200mAh (more current capacity = more cost). You don't even need a very high "C" rating. 20 "C" would probably do fine. But of course since you have no weight limitation you could easily use the Nicd batteries that came with the cordless driller itself, plus you get a free charger too.</p> <p>You will need to remove the chuck (unscrew the centre screw within the chuck by turning in the opposite direction as it is a reverse thread, and then put an allen key in the chuck, unscrew it all the way, and hit the key hard counter clockwise and the chuck will come off) and then remove the little ball-bearings and insert (self-tap them) grub screws in the holes where the bearings were in order to block the clutch. </p> <p>Then if you wish you can just put the chuck back on and attach any shaft and wheel! You can easily get controllers for the motors and connect them to an r/c receiver.</p> <p>The point is that these motors and gearbox combos are made for a good torque and speed compromise and are quite hardy in use. They are of course brushed and in my experience won't draw much over maybe 10 Amps each and that only if stalled.</p>
1581
2013-07-11T15:19:47.430
|motor|wheeled-robot|battery|
<p>I have a project which requires a robot to move around a room with a flat surface (concrete floor).</p> <p>The robot must carry a laptop. I estimated that the total weight would be 6-7kg (including motors, battery, laptop, motor controller board and other mics items). I would like it to move at about the same speed as a Roomba moves. The robot will have two motors and a castor.</p> <p>I have tried doing the calculation to determine the type of motor to use, but I'm very confused.</p> <p>Can someone advise me on the type of motor and type of battery (Lipo/SLA) to use?</p>
Choosing motor and battery for a robot
<p>I can say positively that 900 MHz China video transmitters can affect the 2.4 GHz frequency range. I experienced this first hand and confirmed the interference on my spectrum analyzer. They can cause harmonics in the 2.4 GHz band, but when the TX and RX are close they appear to affect the noise floor too, and using those low-pass filters doesn't seem to help at close range either.</p>
1582
2013-07-11T18:31:18.143
|multi-rotor|
<p>I'm starting to attempt to fly FPV on my quadrotor. I am using a FrSky D8R-II 2.4 GHz frequency hopping diversity receiver (two antennas) for control and recently added a <a href="http://www.readymaderc.com/store/index.php?main_page=product_info&amp;cPath=11_30_31&amp;products_id=348" rel="nofollow noreferrer">no-name 910 MHz 1.5 watt analog video transmitter</a> for FPV flying:</p> <p><img src="https://i.stack.imgur.com/UFrHc.jpg" alt="enter image description here"></p> <p>When the video transmitter is powered up, my control range drops from about 1.5km to under 50m. I'm surprised that the 910 MHz video channel affects my 2.4 GHz control channel this way.</p> <p>Is this sort of interference expected? Is it because my transmitter is low quality? What changes are recommended&nbsp;— should I switch to a UHF control radio? Or a different frequency (eg 5.8 GHz?) for the video radio? Or just try moving it a few more inches (they are already about 5in apart)?</p>
Interference between 900 MHz video transmitter and 2.4 GHz control radio
<p>Is it possible to use multiple ultrasonic sensors on a single robot? Yes: <a href="http://www.societyofrobots.com/sensors_sonar.shtml" rel="nofollow noreferrer">"Using Multiple Sonar Sensors"</a>.</p> <p>As you have already figured out, one sensor often receives echoes of pings sent by another sensor. There are several ways to deal with cross-sensitivity, roughly in order of simplest first:</p> <ul> <li>Ping only one transducer at a time, ignoring all the other transducers while waiting for the "ghost echoes" from the current transducer to die down before pinging the next transducer. This is much faster than mechanically rotating a single transducer. Perhaps this will be fast enough, unless your robot is ramming into things at nearly the speed of sound.</li> <li>Use relatively narrow beam angle transmitters or receivers (or both) per sensor, and increase the angle from one sensor to the next so one sensor can't hear the echo from another (unless the stuff in front of the transducer causes some weird lateral reflections)&nbsp;— sensors angled apart roughly the same as the beam angle. Alas, this leaves "blind spots" between transducers where objects can't be seen by any transducer.</li> <li>Some combination — for example, increase the angle from one sensor to another so one sensor only hears echoes from its 2 neighbors (about half the beam angle); then alternate between pinging the even transducers (ignoring the odd transducers) and pinging the odd transducers (ignoring the even transducers).</li> <li>Each transducer operating at a different frequency. Alas, all the low-cost ultrasonic transducers, with <a href="http://www.newark.com/multicomp/mcpct-g5120-4140/ultrasonic-transducer/dp/25R0960?in_merch=Popular%20Products" rel="nofollow noreferrer">few exceptions</a>, are tuned to resonate at 40 kHz. When listening to a variety of signals, these transducers can only "hear" signals that are within a few kHz of 40 kHz. 
You'll have to balance (a) The further away from 40 kHz you use on a transducer designed for 40 kHz, the less sensitive it is, so you want a frequency "relatively close" to 40 kHz; and (b) The closer all the frequencies are together, the more difficult it is to discriminate between them, so you want a set of frequencies that are spread "relatively far apart". I don't know if there is a good compromise or not&nbsp;— if not, you're stuck with (c) use more expensive sensors tuned to other frequencies, or more expensive <a href="http://search.digikey.com/us/en/products/SPM0404UD5/423-1086-1-ND/1587388" rel="nofollow noreferrer">"wide-bandwidth" sensors</a> not tuned to any particular frequency.</li> <li>Use various transmit timings to rule out ghost echoes. Say you transmit from the left, delay 2 ms (not nearly enough to let the echoes die down), then transmit from the right,... after the echoes die down, then transmit from the left, delay 3 ms, then transmit from the right. If the right receiver gets an echo back 5 ms later both times, then you can be pretty sure it's a real echo; if the right receiver gets an echo back 5 ms later the first time, 6 ms later the second time, it's probably a ghost from the left receiver. (There are much more sophisticated "spread spectrum" techniques for separating out many transmitters all using the same frequency at the same time.)</li> <li>Combine the signals from all the receivers. If you have one central transmitter that pings in all directions (or equivalently you have transmitters pointed in every direction, and you ping them all at the same instant), and the first echo you get back hits the left receiver first (then later the right receiver hears an echo), you know that the nearest obstacle is closer to the left side than the right side.
(There are more sophisticated "phased array" techniques that combine signals from all the receivers, and even more sophisticated "beamforming" techniques for slightly adjusting the transmit times of all the transmitters.)</li> </ul> <p>P.S.: Have you seen <a href="http://www.societyofrobots.com/member_tutorials/node/71" rel="nofollow noreferrer">"Infrared vs. Ultrasonic — What You Should Know"</a> ?</p> <p>(Yes, I've said this all before, at <a href="https://electronics.stackexchange.com/questions/24835/multiple-ultrasonic-rangefinder-question">"Multiple Ultrasonic Rangefinder Question"</a>.)</p>
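The first and simplest strategy above, pinging one transducer at a time and waiting out the ghost echoes, can be sketched like this (the settle time is illustrative, not tuned for any particular sensor):

```python
import time

NUM_SENSORS = 12
ECHO_SETTLE_S = 0.05   # time for residual echoes from the last ping to fade

def scan_once(ping):
    """Fire each transducer in turn; ping(i) triggers sensor i and
    returns its range reading.  All other sensors are ignored meanwhile."""
    readings = []
    for i in range(NUM_SENSORS):
        readings.append(ping(i))
        time.sleep(ECHO_SETTLE_S)   # let ghost echoes die down
    return readings
```

A full sweep then takes NUM_SENSORS * ECHO_SETTLE_S seconds, which is still far faster than mechanically rotating a single transducer.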
1584
2013-07-11T22:16:02.167
|sonar|sensor-error|
<p>Our robot has a circular array of 12 sonar sensors that looks like this:</p> <p><img src="https://i.stack.imgur.com/Q5dBX.jpg" alt="enter image description here"></p> <p>The <a href="http://www.hobbyking.com/hobbyking/store/__31136__ultrasonic_module_hc_sr04_arduino.html" rel="noreferrer">sonar sensors themselves</a> are pretty good. We use a low-pass filter to deal with noise, and the readings seem pretty accurate. However, when the robot comes across a flat surface like a wall, something weird happens. The sonars don't show readings that would indicate a wall, instead, it appears like a curved surface. </p> <p>The plot below was made when the robot was facing a wall. See the curve in the blue lines, as compared to the straight red line. The red line was produced by using a camera to detect the wall, where the blue lines show filtered sonar readings. </p> <p><img src="https://i.stack.imgur.com/B4KWm.png" alt="enter image description here"></p> <p>We believe this error is due to crosstalk, where one sonar sensor's pulse bounces off the wall at an angle and is received by another sensor. This is a systematic error, so we can't really deal with it like we would with noise. Are there any solutions out there to correct for it?</p>
How to deal with sonar crosstalk
<p>Your intuition is correct. We can look at the difference in arc length that each wheel will roll for a given <a href="http://en.wikipedia.org/wiki/Circular_sector" rel="nofollow">sector</a> (specified by $\theta$ in degrees).</p> <p>$$d_l = \frac{\theta*2\pi{r}}{360}$$ $$d_r = \frac{\theta*2\pi(r+A)}{360}$$</p> <p>This simplifies to:</p> <p>$$ \frac{d_r}{d_l} = \frac{\frac{\theta*2\pi(r+A)}{360}}{\frac{\theta*2\pi{r}}{360}} = \frac{r+A}{r}$$</p> <p>Speed is just distance over time:</p> <p>$$ \frac{V_r}{V_l} = \frac{d_r/t}{d_l/t} = \frac{r+A}{r}$$</p>
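Numerically, the relationship above can be sketched as follows (the radius and axle-width values are illustrative):

```python
# Given the inner-wheel speed, the inner-wheel turn radius r, and the
# distance A between the wheels, the outer wheel must run (r + A) / r
# times faster to hold the arc.  Values below are illustrative.
def wheel_speeds(v_inner, r, A):
    return v_inner, v_inner * (r + A) / r

v_l, v_r = wheel_speeds(0.5, 1.0, 0.4)   # e.g. m/s and metres
ratio = v_r / v_l                         # (1.0 + 0.4) / 1.0 = 1.4
```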
1602
2013-07-16T11:26:28.637
|kinematics|
<p>I want to give my robot a differential mechanism for the system of turning and steering. Considering the case of turning a right-angled corner, the robot will achieve this by following a gradual circular arc through the intersection while maintaining a steady speed. To accomplish this end, we increase the speed of the outer wheel while slowing that of the inner. But supposing I want the turn to be within a definite radius, how do I calculate what ratio the 2 speeds have to be in? Can someone give me an insight into this?</p> <p>What I've done is this, although I have my doubts.</p> <p>If the speed of the right wheel is $V_r$ and the speed of the left wheel is $V_l$, then the ratio of their speeds while turning will be equal to the ratio of the circumferences of their corresponding quadrants.</p> <p><strong>Therefore</strong> $$V_r : V_l = \frac{r+A}{r}$$</p> <p>Is this right? I have a sinister feeling I'm missing something...</p> <p><img src="https://i.stack.imgur.com/FjWpW.jpg" alt="enter image description here"></p>
How to design a differential steering mechanism?
<p>I have worked with quite a few servos (industrial and RC) and in my experience they don't come with a limit switch that actually takes the current off the motor when they hit a limit. They are, however, limited in rotation by the PWM signal you send them and are as such confined to a safe rotation range.</p> <p>It does become a bit more complicated when you link several servos into a robot arm, as there is a chance that individual arm segments might hit each other or the ground.</p> <p>For position feedback I recommend measuring the internal voltage over the servo's reference potentiometer, as outlined in the following link:</p> <p><a href="http://forums.trossenrobotics.com/tutorials/how-to-diy-128/get-position-feedback-from-a-standard-hobby-servo-3279/" rel="nofollow">http://forums.trossenrobotics.com/tutorials/how-to-diy-128/get-position-feedback-from-a-standard-hobby-servo-3279/</a></p> <p>This analog voltage gives you an indication of the actual servo position. It can be sampled by e.g. an Arduino board or any other board with an analog/digital converter.</p> <p>I have done something similar with a hexapod robot, and it worked perfectly.</p>
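Once that potentiometer voltage is on an ADC pin, converting it to an approximate angle is a simple linear map. The sketch below is hypothetical: it assumes a 10-bit ADC and end-stop readings you would have to measure on your own servo.

```python
# Hypothetical end-stop calibration for one servo (10-bit ADC counts);
# measure these on the actual servo before trusting the result.
ADC_AT_0_DEG = 120
ADC_AT_180_DEG = 900

def adc_to_angle(adc):
    # Assumes the pot is roughly linear between the two end stops
    span = ADC_AT_180_DEG - ADC_AT_0_DEG
    return (adc - ADC_AT_0_DEG) * 180.0 / span

angle = adc_to_angle(510)   # a mid-travel reading maps to 90 degrees
```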
1607
2013-07-16T17:57:51.097
|rcservo|
<p>I'm moving from controlling a robot arm with basic time-based motors, controlled using a Raspberry Pi and Python, to a more advanced robotic arm that has servos (e.g. <a href="http://dlnmh9ip6v2uc.cloudfront.net/datasheets/Robotics/Hitec-HS-425BB-Servo-Specsheet.pdf" rel="nofollow">HS-425BB</a>). With the time-based motors I must constantly keep track of the arm's position (guesswork) and make sure it doesn't over-turn.</p> <p>Do servos automatically stop if you give them a position that is outside of their boundaries, rather than grinding the gears?</p>
Do servos stop at their limits automatically?
<p>We built a new version and it's now available for download:</p> <p><a href="http://code.google.com/p/ardu-imu/downloads/detail?name=arduimu_vD" rel="nofollow">http://code.google.com/p/ardu-imu/downloads/detail?name=arduimu_vD</a></p> <p>The problem is completely gone in this version. Read the comments in arduimu_vd.ino for more info.</p>
1614
2013-07-17T16:50:15.297
|arduino|quadcopter|noise|ardupilot|
<p>We are using <a href="http://code.google.com/p/ardu-imu/" rel="nofollow noreferrer">ArduIMU (V3)</a> as our quadrotor's inertial measurement unit (we have a separate board to control all motors, not the ArduIMU itself).</p> <p>Now we have a problem with the ArduIMU's sensor output. When we put our quadrotor steady on the ground with the motors on, instead of getting 0 degrees in roll and pitch we get a noisy output something like the image below (-6 to 6 degrees of error):</p> <p><img src="https://i.stack.imgur.com/cd62D.png" alt="enter image description here"></p> <p>delta_t = 0.2s</p> <p>We are sure that this isn't a mechanical problem, because we checked the mechanical joints and everything.</p> <p>I should mention that with the motors off everything works well. We also checked that if we vibrate the device slowly on the yaw axis or any other axis, it still shows the noisy output.</p> <p>We are using the DCM filter inside ArduIMU; we also tested a Kalman filter, but there was no difference.</p> <p>We also tested an FIR low-pass filter; the results are good, but there is about 3 seconds of delay in the output.</p> <p>We also checked that if we separate the ArduIMU's power from our circuit, there is still no difference.</p> <p><strong>Update:</strong> We think the problem with our PID controller is because of these noises... Is this a correct assumption? We can't tune our PID parameters (using the Ziegler–Nichols method) when we have noisy data. We tested the Ziegler–Nichols method when we had low-rate noise and successfully tuned our PID, but when the noise appears we are unable to tune the PIDs. Is there any way for us to tune our PID in such a situation? Is this problem because of the noise, or can the PID itself get rid of it?</p>
ArduIMU noisy output in Quadrotor
<p>If the getting started guide claims you can run it off a computer's USB port, it will work at about 0.5 A. A normal computer limits the current to a USB port to 0.5 A. Doing some research on the BeagleBone Black's site <a href="http://beagleboard.org/Support/FAQ" rel="nofollow">http://beagleboard.org/Support/FAQ</a>, I found this:</p> <p><em>When powered up over USB, the regulators are somewhat limited in what they can supply the system. Power over USB is sufficient as long as the software and system running perform some management to keep it under the USB current limit threshold.</em></p> <p>and</p> <p><em>The recommended supply current is at least 1.2A (or 6W), but at least 2A (or 10W) is recommended if you are going to connect up anything over the USB. The actual power consumption will vary greatly with changes on the USB load.</em> </p> <p>So it seems like the 0.5 A from the computer's USB port is about as low as it gets.</p>
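As a sanity check, those current figures can be converted to watts and compared against the FAQ's 6 W recommendation. This is a rough illustrative calculation, assuming a nominal 5 V USB rail:

```python
# Rough power budget check for a 5 V USB-powered board.
USB_VOLTAGE = 5.0  # nominal USB rail, volts (assumed)

def power_watts(amps, volts=USB_VOLTAGE):
    """Power delivered at the given current and voltage."""
    return amps * volts

def meets_recommendation(amps, recommended_watts=6.0):
    """True if the supply can deliver at least the recommended power."""
    return power_watts(amps) >= recommended_watts
```

Note that 0.5 A at 5 V is only 2.5 W, and 0.3 A only 1.5 W, both well under the 6 W recommendation, which is why USB power only works with software current management.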
1618
2013-07-18T02:34:04.530
|microcontroller|
<p>What is the minimum amount of power that a beaglebone needs to start up? This would be with no peripherals attached besides host usb. The getting started guide claims that it can run off of a computer's usb power, but makes no mention of how many amps are actually needed. I saw a mention of older kernels limiting current draw to .5 amps when working off of usb, although that was all I could find. </p> <p>Could one start a BeagleBone Black off of .3 amps? If not, how many?</p>
Beaglebone Black power draw
<p>I'll chime in.</p> <ol> <li><p>You could use piezoelectric inchworm motors. The approach is outlined here: <a href="https://www.techbriefs.com/component/content/article/tb/techbriefs/mechanics-and-machinery/2273" rel="nofollow noreferrer">https://www.techbriefs.com/component/content/article/tb/techbriefs/mechanics-and-machinery/2273</a></p></li> <li><p>you could use a ball robot motor system, upside down with a spherical bearing rather than actuating with a toroidal motor. The ball robot is well explained here: <a href="https://www.youtube.com/watch?v=bI06lujiD7E" rel="nofollow noreferrer">https://www.youtube.com/watch?v=bI06lujiD7E</a> Bearings are available here: <a href="https://www.acornbearings.co.uk/bearings/spherical-roller-bearings" rel="nofollow noreferrer">https://www.acornbearings.co.uk/bearings/spherical-roller-bearings</a>. an omniwheel appraoch is here: <a href="https://www.semanticscholar.org/paper/Concept-of-a-three-D.O.F-spherical-joint-gripper-Weyrich-Abdullah/10699c6f1c39a8700c903d407a110910bef53f3c" rel="nofollow noreferrer">https://www.semanticscholar.org/paper/Concept-of-a-three-D.O.F-spherical-joint-gripper-Weyrich-Abdullah/10699c6f1c39a8700c903d407a110910bef53f3c</a></p></li> <li><p>You could make a rare earth magnet ball rotor with an electromagnetic. See these two links for a better idea. <a href="https://i.stack.imgur.com/HRgZk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HRgZk.png" alt="enter image description here"></a> and <a href="https://i.stack.imgur.com/0eQf4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0eQf4.png" alt="enter image description here"></a></p></li> <li><p>or a delta robot through a SR joint. </p></li> <li><p>I'm guessing you are already familiar with stewart plaforms </p></li> </ol> <p>Hope this helps. I can sketch stuff if this isn't clear, but I'm not fast enough at CAD to draw this all out. I didn't post pictures because it seemed terribly slow.</p>
1622
2013-07-18T02:20:07.367
|design|mechanism|
<p>How would you motorize the joints in an Iron Man suit? You need something fairly shallow, I would think, probably dual servos sitting on either side of an elbow or knee joint or either side of your hips, but how do you get motorized action there without dramatically adding to the thickness of the joint? </p> <p>Bicycle-style chain drives wouldn't work, I would think, since the length of the chain would need to vary depending on what position you're in for at least a lot of joints.</p> <p>How would you motorize the joints?</p>
How do I motorize the elbow socket and other joints in a powered exo-skeleton?
<p>For the smallest (and cheapest) of multicopters, DC brushed motors are well in use: <a href="http://www.rcgroups.com/forums/showthread.php?t=2009496" rel="noreferrer">http://www.rcgroups.com/forums/showthread.php?t=2009496</a> They are coreless motors<sup><strong>1</strong></sup> with very little inertia; they can accelerate very quickly and are perfectly suited for high-frequency PWM (we drive them with up to 32kHz). You can achieve more than a 3:1 power:weight ratio on a sub-50g AUW airframe. Driving these motors is a lot simpler than driving a brushless motor; you basically just need a transistor and a capacitor.</p> <p>That applies to micro copters; brushless technology hasn't caught up in that weight class (yet). It's probably just a question of time. The first micro brushless systems are available, but they cost many times more than coreless motors and don't offer more performance (yet).</p> <p>For larger quads: no question, brushless is king. But for the micro and nano-quads, it's awesome to have these cheap motors to crash with confidence ;)</p> <hr> <p><sup><strong>1</strong> - <a href="http://www.pacificelectronicscorp.com/dccoreless.html" rel="nofollow">http://www.pacificelectronicscorp.com/dccoreless.html</a> </sup></p>
1627
2013-07-18T21:37:55.480
|quadcopter|brushless-motor|
<p>I've been thinking about starting a quadcopter project, maybe building it from scratch. One of the main barriers-to-entry for me is the motors: it seems like most quadcopters use brushless motors. I have some experience with DC motors and using PWM signals to regulate speed, but no experience with brushless motors. As I understand it, brushless motors are more expensive than the typical DC motor I would use on a land robot, and they also require Electronic Speed Controllers (ESCs), which seem to make them (from my perspective) even more expensive and more complicated to use.</p> <p>So, my question: what is it about brushless motors that make them useful in a quadcopter? Is it more torque, less weight, something to do with efficiency? And would it be significantly harder (or even possible) to achieve lift using DC motors instead?</p>
Why do quadcopters use brushless motors?
<p>To control the motor with logic gates, you could make a circuit with a monostable multivibrator (or “one-shot”), an R-S flipflop (or R-S latch), and an AND gate or an OR gate to combine the one-shot and latch outputs. You would use an AND if you want the motor to run when both the latch and the one-shot are active, <em>vs</em> an OR to make the motor run when either of them is active. Whether to use AND or OR depends on what error conditions you want to detect or handle.</p> <p>An <a href="http://en.wikibooks.org/wiki/Electronics/Flip_Flops#RS_Flip_Flop" rel="nofollow">R-S flipflop</a> is easily made using two NAND gates, <em>ie</em> half of the gates in the quad NAND 74LS00. A one-shot also can be made <a href="http://www.electronics-tutorials.ws/sequential/seq_3.html" rel="nofollow">using two NANDs</a> (see “Simple NAND Gate Monostable Circuit” at about 1/3 thru page), or you can use a 74121, 74221, etc. The 555 also is often used as a <a href="http://www.circuitstoday.com/555-timer-as-monostable-multivibrator" rel="nofollow">monostable multivibrator</a>, and can also be <a href="http://www.edaboard.com/thread142724.html" rel="nofollow">used as an R-S flipflop</a>.</p> <p>So, you could use two 7400 packages for the circuit, or could use one 7400 and one 556 (which is a dual 555). Circuits with NAND gates can be simulated by hand or in <a href="http://www.neuroproductions.be/logic-lab/" rel="nofollow">online simulators</a>. <a href="https://www.circuitlab.com/circuit/e38756/555-timer-as-astable-multivibrator-oscillator/" rel="nofollow">Some simulators</a> also simulate 555's.</p>
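The NAND-based R-S latch behavior is easy to sanity-check in software before wiring it up. This is a hedged sketch simulating an active-low latch built from two cross-coupled NAND gates (a logic simulation only; it does not model the analog one-shot timing):

```python
# Cross-coupled NAND R-S latch. Inputs are active-low: 0 = asserted.
def nand(a, b):
    """Two-input NAND gate on 0/1 values."""
    return 0 if (a and b) else 1

def rs_latch(set_n, reset_n, q_prev):
    """Iterate the cross-coupled NANDs until the outputs settle.
    q_prev is the latch's previous output (its stored state)."""
    q = q_prev
    q_bar = nand(reset_n, q)
    for _ in range(4):  # a few passes are enough to reach a stable state
        q = nand(set_n, q_bar)
        q_bar = nand(reset_n, q)
    return q
```

Usage: pulsing set_n low (the bottom lever switch) latches q = 1 to run the motor; pulsing reset_n low (the top lever switch) clears it again.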
1630
2013-07-19T19:14:17.173
|motor|raspberry-pi|
<p>I have some crude time-based motors taken from a robot arm that we upgraded to proper servos. I want to be able to power a conveyor belt with one of them, and I was wondering how I would go about the following setup:</p> <p>A ball drops through a hole onto the conveyor belt, hitting a lever switch on its way through. This switch triggers the motor to start. When the ball gets to the top of the belt and falls off, it hits another lever switch that turns the motor off.</p> <p>I could handle this logic by hooking it up to my Raspberry Pi and using Python to start and stop the motor depending on which GPIO pin received input (top or bottom lever). Or I could use a single lever and set a constant time interval to stop the motor. I would prefer to use both to handle any change in scale/construction.</p> <p>I was wondering, however, if this could be done with the breadboard alone, using logic gates or similar?</p>
Controlling a conveyor belt with a time based motor
<p>It turns out VirtualBox was causing everything to not work properly for some reason. Now I'm using VMware and everything's running fine. What confused me was that even after trying every fix I could find online, some of the errors would still be there. The next step is getting XYZ coordinates of a moving robot; I'll start looking for tutorials right away (I'm still new to ROS).</p>
1633
2013-07-19T22:01:54.720
|ros|kinect|
<p>Whenever I try using <code>openni_launch</code>, it works normally, however, when I try viewing an image using the kinect's rgb or depth camera, or even recording a simple <code>bagfile</code> with data from the kinect, I am unable to see any picture and <code>rosbag</code> does not record any data, and after a few seconds of running <code>image_view</code> or <code>rosbag record</code>, I got this error:</p> <pre><code>terminate called after throwing an instance of 'openni_wrapper::OpenNIException' what(): virtual void openni_wrapper::OpenNIDevice::startImageStream() @ /tmp/buildd/ros-groovy-openni-camera-1.8.8-0precise-20130418-2203/src/openni_device.cpp @ 224 : starting image stream failed. Reason: Xiron OS got an event timeout! [camera_nodelet_manager-2] process has died [pid 3788, exit code -6, cmd /opt/ros/groovy/lib/nodelet/nodelet manager __name:=camera_nodelet_manager __log:=/home/rosbotics/.ros/log/16b63744-e043-11e2-ac16-080027486aa8/camera_nodelet_manager-2.log]. log file: /home/rosbotics/.ros/log/16b63744-e043-11e2-ac16-080027486aa8/camera_nodelet_manager-2*.log </code></pre> <p>After searching around and trying various fixes, I figured it might be a problem with openni and started using freenect, however I encountered the same problems, I could not record any data using bagfiles or see any images from the kinect (using <code>rviz</code> or <code>image_view</code>)</p> <p>Then someone asked me to use something completely unrelated, <code>freenect-glview</code>, however that too gave me a black screen.</p> <p><code>lsusb</code> shows that all 3 parts of the kinect are connected and I've been able to control the kinect's motor through ubuntu so I know that there is at least a connection established between both.</p> <p>Additional Info:</p> <ul> <li><p>I run ROS on Ubuntu using VirtualBox V.4.2.14 and Windows 7 with USB 2 ports</p></li> <li><p>I am using ubuntu 12.04 and ROS-Groovy (all up to date)</p></li> <li><p>I've had the exact same errors on my 
Mac OS X Lion</p></li> <li><p>When I try using rviz with the Kinect, VirtualBox crashes altogether</p></li> </ul> <p>I would appreciate anyone's help on the matter.</p>
How can I get data from my kinect?
<p>Two days? Eeek!</p> <p>I wouldn't try to simulate the thing at a circuit level. If you're an EE and you're familiar with circuit simulation packages then using SPICE (or whatever) to simulate the <em>system</em> may let you slide by with an acceptable performance.</p> <p>Personally, if I were designing one of these for money, I'd probably simulate it in Scilab, possibly using Xcos -- but that's because I'm familiar with that tool.</p> <p>Given your limited time I wouldn't try to do more than make something that follows straight lines and gentle curves at some fixed speed.</p>
1637
2013-07-21T17:33:49.660
|mobile-robot|
<p>I am a graduate student trying to make my own line follower robot for my minor assessment. I have all the <strong>hardware</strong> parts and all the <strong>data sheets</strong> with me, I've attended a robotics workshop, and I have studied a lot about <strong>line follower robots</strong>. I have good knowledge of C programming and embedded systems, but the problem is that I have a very <strong>limited amount</strong> of time (2 days).</p> <p>Please help me plan the paperwork for my project, a line follower robot. Where should I start? I am confused about whether I should start from <strong>programming</strong> or first do circuit <strong>simulations</strong>, as I know it is not a good approach to go directly to hardware.</p> <p>Please suggest a good <strong>plan</strong> or some links/videos so that I can finish my robotics project quickly. Any help would be really appreciated. Thanks.</p>
Paper work before I do make my own Line follower Robot
<p>I'm pretty sure that people at the place that manufactured your stepper motors did, in fact, write up a datasheet including operating torque specifications and lots of other information.</p> <p>It is unfortunate that all too often a person ends up with some electrical part in his hand, and somehow the datasheet was lost along the way. I've discovered that it's well worth the money to buy stuff from people who actually do supply good datasheets, rather than try to save a few nickles buying allegedly "the same" part from other people who can't be bothered to forward the datasheets they got from the manufacturer.</p> <p>A few links to datasheets that list operating torque:</p> <ul> <li><a href="http://www.kelinginc.net/KL23H251-24-8BT.pdf" rel="nofollow">Torque curve for the KL23H251-24-8B</a></li> <li><a href="http://www.shinano.com/motors/docs/SKC_stepper_operation.pdf" rel="nofollow">"Stepper Motor Operation and Theory": "Fig. 7-1 Speed - Torque Curve"</a> gives a brief explanation of the speed-torque curve.</li> <li><a href="http://www.linengineering.com/line/contents/stepmotors/pdf/LinEngineering_Catalog_2011-2012_HR.pdf" rel="nofollow">"Lin Engineering Catalog": "Reading a speed and torque curve"</a> on p. 11 gives a brief explanation of the speed-torque curve. The rest of the catalog has many speed-torque curves for particular motors.</li> <li><a href="http://www.alltronics.com/mas_assets/acrobat/28M016.pdf" rel="nofollow">torque curves</a> for several Lin Engineering steppers</li> <li><a href="http://www.alephobjects.com/hardware/motors/SY42STH47-1504A_060047067.pdf" rel="nofollow">Pull out torque curve for the SY42STH47-1504A</a></li> <li><a href="http://www.trinamic.com/tmc/media/Downloads/QMot_motors/QSH4218/QSH4218_manual.pdf" rel="nofollow">Torque figures for Trinamic QSH4218 series</a>(p. 
7-8)</li> <li><a href="http://www.wantmotor.com/ProductsView.asp?id=155&amp;pid=75&amp;sid=80" rel="nofollow">Pulse-torque characteristics of the 42BYGHW</a> (several graphs of speed vs. torque, each one of the <em>same</em> motor connected to a different kind of motor driver).</li> <li><a href="http://catalog.orientalmotor.com/plp/itemdetail.aspx?cid=1002&amp;categoryname=all-categories&amp;productname=pk-series-stepping-motors&amp;itemname=pke245da-l&amp;cid=1002&amp;plpver=11&amp;categid=100&amp;prodid=3001048&amp;itemid=49362&amp;origin=groupdetail&amp;by=prod&amp;filter=0&amp;grpid=25391&amp;backtoname=Compatible%20Motors%20%28sold%20separately%29&amp;isUOM=1#" rel="nofollow">Speed-Torque Characteristics of the Oriental Motor PKE245DA-L</a></li> </ul> <p>As you can see from the above graphs, the torque at the maximum-mechanical-power part of the torque curve is very roughly half the "holding torque" (the torque when moving very slowly).</p>
1638
2013-07-21T18:33:28.620
|stepper-motor|cnc|
<p>I have been looking online for a while, and I cannot seem to find any steppers with torque ratings. (<strong>Operating torque, not holding torque.</strong>) I even looked on hobby sites that usually have all the ratings, including Adafruit and Sparkfun. I have only found one that listed the operating torque; however, it didn't seem that reputable, and it didn't have a holding torque rating, so it's likely a mistake. I might contact them and ask.</p> <p><strong>Am I missing something? Can I calculate what it will run at with certain factors? (How long in between each step, etc.)</strong></p> <p>The reason I say that is I found a tutorial saying how much <em>torque</em> (it didn't specify which kind, but I assume it isn't holding) you need for a CNC machine (what I'm building).</p> <hr> <p><strong>Equation (From <a href="http://buildyourcnc.com/torquemotion.aspx">this</a> site):</strong></p> <pre><code>Torque = ((weight)(inches/revolution))/2(pi) </code></pre> <p>Also on the page:</p> <blockquote> <p>By the way, we are talking about torque during a continual turning motion, not at a holding position.</p> </blockquote> <p>That seems like operating torque, but what makes it the most confusing is that they sell steppers and only list the holding torque.</p> <hr> <p><strong>What am I missing?</strong></p>
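The quoted equation is easy to evaluate numerically. This sketch reads it as dividing by 2π (which seems to be the intent of the linked page); the units follow whatever you feed in, e.g. weight in ounces-force and travel in inches gives torque in oz·in:

```python
import math

def required_torque(weight, inches_per_rev):
    """Continuous torque needed to move `weight` with a drive that
    advances `inches_per_rev` per motor revolution:
    Torque = (weight * inches/revolution) / (2 * pi)."""
    return (weight * inches_per_rev) / (2.0 * math.pi)
```

For example, 90 oz moved through a 0.5 in/rev leadscrew needs about 7.2 oz·in of running torque; that figure should be compared against the motor's speed-torque curve, not its holding torque.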
WHY are there no operating torque specifications on steppers?
<p>To answer your first question: if you really want to find the true kinematic equations for differential drive, I wouldn't start approximating by assuming that each wheel has moved in a straight line. Instead, find the turning radius, calculate the center point of the arc, and then calculate the robot's next point. The turning radius would be infinite if the robot is moving straight, but in the straight case the math is simple. </p> <p>So, just imagine that over each time step, or each time you calculate the change in the incremental sensors, the robot travels from point A to point B on an arc like this: <img src="https://i.stack.imgur.com/l0fEQ.png" alt="enter image description here"> Here's some sample code with the math simplified:</p> <pre><code>// leftDelta and rightDelta = distance that the left and right wheel have moved along // the ground if (fabs(leftDelta - rightDelta) &lt; 1.0e-6) { // basically going straight new_x = x + leftDelta * cos(heading); new_y = y + rightDelta * sin(heading); new_heading = heading; } else { float R = unitsAxisWidth * (leftDelta + rightDelta) / (2 * (rightDelta - leftDelta)), wd = (rightDelta - leftDelta) / unitsAxisWidth; new_x = x + R * sin(wd + heading) - R * sin(heading); new_y = y - R * cos(wd + heading) + R * cos(heading); new_heading = boundAngle(heading + wd); } </code></pre> <p>I used similar math in a simulator to demonstrate different ways of steering: <a href="http://www.cs.utexas.edu/~rjnevels/RobotSimulator4/demos/SteeringDemo/" rel="noreferrer">http://www.cs.utexas.edu/~rjnevels/RobotSimulator4/demos/SteeringDemo/</a></p>
1653
2013-07-25T07:54:49.897
|mobile-robot|kinematics|motion|two-wheeled|forward-kinematics|
<p><strong>How do you calculate or update the position of a differential drive robot with incremental sensors?</strong></p> <p>There is one incremental sensor attatched to each of the two differential wheels. Both sensors determine the distance $\Delta left$ resp. $\Delta right$ their wheel has rolled during a known time $\Delta t$.</p> <p>First, let's assume the center between both wheels marks the position of the robot. In this case, one could calculate the position as:</p> <p>$$ x = \frac{x_{left}+x_{right}}{2} \\ y = \frac{y_{left}+y_{right}}{2} $$</p> <p>"Deriving" those equations under the assumption that both wheels rolled in a straight line (which should be approximately correct for small distances) I get:</p> <p>$$ \frac{\Delta x}{\Delta t} = \frac{1}{2}\left( \frac{\Delta left}{\Delta t} + \frac{\Delta right}{\Delta t}\right)cos(\theta) \\ \frac{\Delta y}{\Delta t} = \frac{1}{2}\left( \frac{\Delta left}{\Delta t} + \frac{\Delta right}{\Delta t}\right)sin(\theta) $$</p> <p>Where $\theta$ is the angle of orientation of the robot. For the change of this angle I found the equation</p> <p>$$ \frac{\Delta \theta}{\Delta t} = \frac{1}{w} \left( \frac{\Delta left}{\Delta t} - \frac{\Delta right}{\Delta t}\right) $$</p> <p>Where $w$ is the distance between both wheels.</p> <p>Because $\Delta x$ and $\Delta y$ depend on $\theta$, I wonder whether I should first calculate the new $\theta$ by adding $\Delta \theta$ or if I should rather use the "old" $\theta$ ? <strong>Is there any reason to use one over the other?</strong></p> <p>Then, let's now assume the center between both wheels does <em>not</em> mark the position of the robot. Instead I want to use a point which marks the geometric center of the robot's bounding box. 
Then $x$ and $y$ change to:</p> <p>$$ x = \frac{x_{left}+x_{right}}{2} + l\, cos(\theta)\\ y = \frac{y_{left}+y_{right}}{2} + l\, sin(\theta) $$</p> <p>"Deriving" the first gives:</p> <p>$$ \frac{\Delta x}{\Delta t} = \frac{1}{2}\left( \frac{\Delta left}{\Delta t} + \frac{\Delta right}{\Delta t}\right)cos(\theta) - l\,sin(\theta)\,\frac{\Delta \theta}{\Delta t} $$</p> <p>Now there is a dependance on $\Delta \theta$. <strong>Is this a reason to use the "new" $\theta$ ?</strong></p> <p><strong>Is there any better method to do simulatenous update of position and orientation?</strong> May be using complex numbers (same approach as with quaternions in 3D?) or homogeneous coordinates?</p>
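On the "old vs. new $\theta$" question, a common answer is to use neither alone but the midpoint $\theta + \Delta\theta/2$, which gives a second-order-accurate update. This is a hedged sketch (not from the question) comparing the three choices on an exact circular arc, where the true endpoint is known from the arc geometry:

```python
import math

def step(x, y, theta, d_center, d_theta, mode):
    """One odometry update; mode picks which theta is used for the move:
    'old' (start of step), 'new' (end of step), or 'mid' (midpoint)."""
    if mode == "old":
        a = theta
    elif mode == "new":
        a = theta + d_theta
    else:  # midpoint rule
        a = theta + 0.5 * d_theta
    return x + d_center * math.cos(a), y + d_center * math.sin(a), theta + d_theta

def integrate(n_steps, d_center, d_theta, mode):
    """Integrate n identical steps starting from the origin, heading 0."""
    x = y = theta = 0.0
    for _ in range(n_steps):
        x, y, theta = step(x, y, theta, d_center, d_theta, mode)
    return x, y
```

For a quarter circle of radius 1 (exact endpoint (1, 1)), the midpoint rule lands orders of magnitude closer than either one-sided choice, which is why it is worth the extra half-addition.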
Calculate position of differential drive robot
<p>A machine is a device designed to perform a particular task, whereas a robot can execute any task given to it through programming.</p>
1654
2013-07-25T16:30:23.483
|industrial-robot|
<p>What is the difference between a Robot and a Machine? At what point does a machine begin to be called a robot?</p> <p>Is it at a certain level of complexity? Is it when it has software etc?.</p> <p>For instance: A desktop printer has mechanics, electronics and firmware but it is not considered a robot (or is it). A Roomba has the same stuff but we call it a robot. So what is the difference.</p> <p>I have always believed that a robot is a robot when it takes input from it's environment and uses it to make decisions on how to affect it's environment; i.e. a robot has a feedback loop.</p>
What is the difference between a Robot and a Machine?
<p>If you are concerned about the milk coming in contact with the air, I'd suggest using a plastic bag which the air from the first solenoid pours into. On expansion, the plastic bag (filled with air) will force the milk in the container out through the second solenoid. This way the air doesn't come in contact with the milk. The only problem I see, which would compromise the milk, is any leak in the bag itself. Another problem would be that you'll have to find a way to drain the air from the plastic bag once all the milk is out.</p>
1660
2013-07-26T18:41:28.633
|arduino|
<p>I am working on a homemade vending machine project that serves milk and cookies, using an Arduino and some basic servos.</p> <p>The problem is: I really have no clue how to keep the milk fresh for long, or how to even know if the milk is still OK to drink. All I really know is that air is bad for the milk (and the cookies), so here is what I came up with:</p> <p><img src="https://i.stack.imgur.com/mGUMy.png" alt="enter image description here"></p> <p>Two solenoids that activate at the same time, to allow air in and milk out. All of this would be inside a "slightly" colder place.</p> <p>I'm sure this design might sound stupid to some of you, but this is where I need your help: do you think this design can work? (Would that solenoid on top make any difference in protecting the milk?) How can I improve it to make the milk last as long as possible?</p> <p>I've heard about the big companies making machines that keep milk fresh for weeks, even months, while I'm pretty sure my milk won't last a couple of hours.</p> <p>Any idea, information, link, or clue would be greatly appreciated. Thank you.</p>
How to protect the milk in a homemade vending machine?
<p>Sugru is great for this kind of thing, but it can be a little pricey. An alternative is <a href="http://www.instructables.com/id/How-To-Make-Your-Own-Sugru-Substitute/" rel="nofollow">Oogoo</a>, a DIY mixture of silicone caulk, cornstarch, and (optional) pigment.</p>
1670
2013-07-28T15:19:02.270
|cnc|
<p>I’m trying to inject some kind of rubber around an aluminum strut to form “feet” for a robot. I’ve already milled the mold, but I’m having trouble finding an inexpensive and readily available rubber compound that will cure without exposure to air. Ideally it should cure to about the consistency of a silicone O-ring. I’ve tried silicone gasket-maker (the automotive stuff), however a week later it hasn’t cured in the mold, as there is no exposure to the air. Is there anything out there with a similar consistency to silicone, but doesn’t require air to cure? Or is there a way to get what I’m currently using to set up without waiting a millennium? There aren’t any real mechanical requirements, I’m just trying to clean up the look of the robot and prevent its legs from scratching my table. </p>
Moldable rubber for "feet"
<p>I think the question refers to this sort of device: <a href="http://en.wikipedia.org/wiki/Servo_%28radio_control%29" rel="nofollow">RC servo</a></p> <p>Those usually aren't very high performance so they're not going to be able to track a generated motion profile very well. Most commercial motor control systems use an S curve for a point to point move (see @ddevaz's answer) which do a piece-wise profile where each segment uses a different equation. Your problem is going to be that in order for your motor to track the generated profile you're probably going to have a very "slow" profile. Otherwise the profile you try and command the device to follow is going to have a large position error vs. the actual position of the device.</p> <p>Ideally you'll want some sort of feedback you can look at while you are executing the motion so you can see how well the device is tracking the command. From a more practical perspective, if you want significantly better motion you may need different motors and different motor control.</p>
1678
2013-07-31T00:04:17.147
|otherservos|kinematics|
<p>I made a small crawler robot a little while ago that had two legs with two degrees of freedom each, so 4 RC servos total. While I was programming the movement of the legs I noticed that they moved rather stiffly. It makes sense that the RC servo's internal controller would have a very quick response to position commands, but I wanted my crawler to move in a way that seems a little more smooth and life-like.</p> <p>My solution was create a cubic function of time that describes the path of the servos, and then set their position in small time increments, resulting in more smooth motion. Essentially what I did was solve for the $a_i$ coefficients in a cubic equation using the time interval, starting and ending position of the servo, and starting and ending rates the servo should move (which is just the derivative of the position):</p> <p>Solve for $a_0$, $a_1$, $a_2$, and $a_3$:</p> <p>$$ position(t) = a_0 + a_1t + a_2t^2 + a_3t^3 $$ $$ rate(t) = position'(t) = a_1 + 2a_2t + 3a_3t^2 $$</p> <p>Given: $position(0)$, $position(t_f)$, $rate(0)$, $rate(t_f)$</p> <p>I set the rate of the servo between a pair of movements to be zero if the movements were in opposite directions, and positive or negative if the movements were both in the positive or negative direction, respectively. </p> <p>This worked pretty well, but this solution is limited in a few ways. For one, it's difficult to decide what exactly the rates between movements that go in the same direction should be. I used the average of the slopes ahead and behind of a particular position between movements, but it isn't clear to me that is optimal. Second of all, cubic curves could take the servo to a position outside of the range of the positions at the beginning and end of a movement, which may be undesirable. For example, at some point during the time interval, the curve could cause the servo to go beyond the second position, or below the first position. 
Thirdly, curve generation here does not consider the maximum rate that the servo can turn, so a curve may have the servo move at a speed that is unrealistic. With that, a minor concern is that the maximum turning rate depends on the response of servo's internal controller, and may change depending on the size of the position interval. </p> <p>Neglecting that last concern, these issues may be solved by increasing the degree of the polynomial and adding constraints to solve for the coefficients, but I'm now starting to wonder...</p> <p>Is there a better way than this to make servo movement smooth and seem more life-like?</p>
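The four-coefficient solve described above has a closed form: it is the cubic Hermite interpolant. This is a sketch with the boundary conditions named as in the question (a general solver, not the exact code used on the crawler):

```python
def cubic_coeffs(p0, p1, v0, v1, tf):
    """Coefficients (a0, a1, a2, a3) of position(t) = a0 + a1*t + a2*t^2 + a3*t^3
    satisfying position(0)=p0, position(tf)=p1, rate(0)=v0, rate(tf)=v1."""
    a0 = p0
    a1 = v0
    a2 = 3.0 * (p1 - p0) / tf ** 2 - (2.0 * v0 + v1) / tf
    a3 = -2.0 * (p1 - p0) / tf ** 3 + (v0 + v1) / tf ** 2
    return a0, a1, a2, a3

def position(coeffs, t):
    """Evaluate the cubic at time t."""
    a0, a1, a2, a3 = coeffs
    return a0 + a1 * t + a2 * t * t + a3 * t * t * t
```

One note on the overshoot concern: with zero boundary rates this cubic stays within [p0, p1] (it is the smoothstep shape), so overshoot can only appear when nonzero boundary rates are requested.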
Smooth servo movement for a crawling robot
<p>If I understand your question, you are asking whether a vehicle balancing on two wheels (or two wheels and one <a href="http://en.wikipedia.org/wiki/Caster" rel="nofollow">caster</a>) will be able to move straight, or at least predictably, if both wheels were driven from the same motor and used a differential. </p> <p>The answer is yes, but only if you have a way to equalize the forces of friction affecting each wheel. For example, by applying a brake individually to each wheel, you could alter the balance of force between the left and right. It would be crude, but certainly possible to control the steering in this way. In fact, many <a href="http://en.wikipedia.org/wiki/Tractor#Pedals" rel="nofollow">tractors use independent left and right brake pedals</a> to accomplish this.</p> <p>Without actively braking (or some other method), your intuition is correct: the amount of rotation of each wheel would depend on an unpredictable set of forces, and the robot would make seemingly random turns. </p>
1682
2013-07-31T17:41:12.177
|motor|wheeled-robot|motion|wheel|
<p>I am in the concept phase of a driving robot. The two wheels on the front axle will be powered, while the rear will be dragged along. The rear is also responsible for steering, but this has nothing to do with my question.</p> <p>The robot is required to make relatively sharp turns at high speed, so I have two options to compensate for the different wheel speeds on both sides. On the one hand, a differential gear in the front axle could be used; it would then be powered by one motor. On the other hand, I could simply use two motors, each directly powering one front wheel. This way I could simulate the differential in software.</p> <p>I'd like to go for the first approach, using the hardware differential, but I have one concern with it. Would a robot vehicle with a differential gear still move straight, without explicit steering applied?</p> <p>My worry is that, with the wheels not being solidly connected, the robot would move in random curves which I'd then have to compensate for with a lot of steering. I know that for real cars differential gears are standard and do their job, but I am talking about a small robot measuring about 6 inches.</p>
Does a vehicle with a differential gear still move straight?
<p>There are a variety of GPS units available that use plastic cases to protect against the elements, and they are all plastic. </p> <p>For example, <a href="https://www.google.com/search?q=garmin+puck&amp;tbm=isch" rel="nofollow noreferrer">these Garmin units</a>.</p> <p>It sounds like you're really asking about what enclosure materials will keep the weather out but let the signal in. For this, you just need to make sure that you aren't using a conductive shield:</p> <blockquote> <p><a href="https://electronics.stackexchange.com/a/2370">the conductor used as a shield has to be thicker than the "skin depth" of the metal at the frequencies you're trying to block out. [...] 1 micron is enough in practice.</a></p> </blockquote> <p>This <a href="https://en.wikipedia.org/wiki/Electromagnetic_shielding" rel="nofollow noreferrer">shielding</a> could be in the form of a metal, a paint, or even a fabric. So, your enclosure can be anything that does not have a lining made with one of these materials. Unpainted plastic is a good option.</p>
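The quoted "skin depth" rule of thumb can be reproduced directly. This is an illustrative check (textbook material constants, copper at the GPS L1 frequency), not an enclosure design rule:

```python
import math

def skin_depth(resistivity, freq_hz, mu_r=1.0):
    """Skin depth in metres: sqrt(2 * rho / (omega * mu))."""
    mu = mu_r * 4.0e-7 * math.pi       # permeability of free space times mu_r
    omega = 2.0 * math.pi * freq_hz
    return math.sqrt(2.0 * resistivity / (omega * mu))

COPPER_RHO = 1.68e-8   # resistivity of copper, ohm-metres
GPS_L1_HZ = 1.57542e9  # GPS L1 carrier frequency
```

For copper at L1 this works out to roughly 1.6 µm, consistent with the "1 micron is enough in practice" figure above; even paint-thin conductive coatings can block GPS, while bare plastic does not.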
1687
2013-08-01T02:26:37.093
|gps|protection|coverage|
<p>I'm going to be embarking on an autonomous robot project and I was going to be using GPS to navigate to waypoints (I'm aware of the margin of error when it comes to localization with GPS, but I live in a lovely area with many open fields).</p> <p>I was going to use Adafruit's Ultimate GPS Breakout board with my Raspberry Pi, and I was wondering how I should mount the GPS to protect it from the elements. Do all GPS units need to be face up and unobstructed (e.g. by wood or plastic) in order to work? If so, how can I still protect a GPS unit from the outdoors?</p>
Can GPS modules work inside plastic enclosures?
<p>I'm not familiar with that platform. But in general, no, you do not need separate batteries for each motor.</p> <p>Also in general, if you need 2-7.5V for each motor, then you probably want one honkin' big battery that puts out 7.5V or slightly more. You could use a 12V battery and take care with your drive commands to never give the motors over 7.5V.</p> <p>Only if you need to do something odd like connecting the motors in series would you need more voltage.</p>
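The "12 V battery, never command more than 7.5 V" idea can be enforced in software by capping the PWM duty cycle. A minimal sketch (hypothetical function name; an 8-bit PWM scale like Arduino's <code>analogWrite</code> is assumed):

```python
def clamp_pwm(command, v_battery=12.0, v_motor_max=7.5, pwm_max=255):
    """Scale a -1..1 drive command to a signed PWM value that never
    exceeds the duty cycle equivalent to the motor's rated voltage."""
    duty_limit = v_motor_max / v_battery          # 7.5/12 = 0.625
    duty = max(-1.0, min(1.0, command)) * duty_limit
    return int(round(duty * pwm_max))

print(clamp_pwm(1.0))  # 159, i.e. full command is capped at ~62.5% duty
```

This keeps the motor's average voltage within its rating even though the supply is higher.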
1693
2013-08-01T22:08:53.750
|motor|battery|
<p>For the Dagu Wild Thumper 6 Wheeled platform, or any multiple motor system, do I really need 1 battery for each motor? Or should I just buy 2 for either side of the platform. In addition, for larger motors like the ones on this platform, how do I deal with the power generated from a coasting motor?</p> <p>I want to jump into the deep end with robotics, as I already hold all the programming skills, and I realize a platform of this magnitude may be a difficult endeavor.</p> <p>Recommended motor voltage is 2 – 7.5 Volts, so should one use two 22 Volt batteries for the left and right side, or six 7.5 volt batteries?</p>
Do 6 motors require 6 individually-assigned batteries?
<p>According to the link you provided, you can use 5V or 3.3V directly on the sensor (it already has a voltage regulator). Since the Arduino can supply both 3.3V and 5V, you can use either directly without any level converter. You can see in the video <a href="http://lukagabric.com/arduino-mpu6050/" rel="nofollow">here</a> that you don't need anything else.</p> <p>I have a GY-80 with the same kind of features and I use it directly on my Arduino Nano without any level converter. There is also a whole Arduino playground page on how to use this sensor.</p>
1697
2013-08-02T15:36:39.800
|arduino|sensors|quadcopter|logic-control|
<p>I bought this MPU-6050: <a href="http://www.ebay.com/itm/MPU-6050-6DOF-3-Axis-Gyroscope-Accelerometer-Module-for-Arduino-DIY-/261231955458" rel="nofollow">link</a></p> <p>According to the manufacture site, the sensor logic level is 3.3V (though the <em>ebay</em> page says <code>Power supply :3-5v</code>)</p> <p>Should I use a 4 channel Bi-Directional Logic Level Converter (<a href="http://www.dash.co.il/image/cache/data/AD-757/ID757_LRG-500x500.jpg" rel="nofollow">like this one</a>) for the <code>SDA, SCL, INT</code> channels? or can I connect it directly to my arduino nano?</p> <p>I saw some places that says I should use it with a logic level converter and some who say it's ok without it. (I guess it depends on the sensor board, so please take a look, link above)</p> <p><strong>Current Setup:</strong></p> <pre><code>SDA &lt;-&gt; LLC &lt;-&gt; A4 SCL &lt;-&gt; LLC &lt;-&gt; A5 INT &lt;-&gt; LLC &lt;-&gt; D2 VCC &lt;- LLC &lt;- 5V (arduino) GND &lt;- LLC &lt;- GND (arduino) </code></pre> <p><em>I still don't have the parts so I can't test it, and I'm probably going to use <a href="http://www.i2cdevlib.com/devices/mpu6050" rel="nofollow">Jeff Rowberg library</a></em> to communicate with the sensor (I<sup>2</sup>C)</p>
MPU-6050 + Arduino nano - Logic converter or not?
<p>I have a bot with 2 independently driven wheels.</p> <p>I chose to use a gyro to keep it heading in the desired direction; bumps, slippage, and even picking it up and turning it around are of little consequence to it, as it will just correct its heading.</p> <p>I use a single PID, which adds/subtracts a correction to the desired current speed for each of the 2 motors in accordance with the error in the current heading (direction) as determined by the gyro.</p> <p>For example, I set the speed to 50% and the heading to 20 degrees. This would normally drive both motors at 50% power, but as the heading wanders off, the PID will make adjustments, adding some power to one motor and removing some from the other, so you might end up with a 45%/55% power split between the motors as the PID corrects the heading.</p> <p>Gyros have their problems though; even with some time spent calibrating, I'm still finding my gyro has about 1 degree per minute of drift. Part of that might be because the earth rotates 15 degrees per hour; this should also be considered in the long run.</p> <p>As previously stated, encoders on the wheels are a great idea for actually knowing how far you have traveled.</p>
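The mixing scheme described above can be sketched as one proportional correction split across the two motors. This is a hedged illustration with a made-up gain and names, not the actual code from this bot (and only the P term of a full PID):

```python
def heading_mix(base_speed, heading_error_deg, kp=0.01):
    """Split a single P-style heading correction between two motors:
    a positive error speeds up the left wheel and slows the right,
    steering the bot back toward the desired heading."""
    correction = kp * heading_error_deg
    left = max(0.0, min(1.0, base_speed + correction))
    right = max(0.0, min(1.0, base_speed - correction))
    return left, right

print(heading_mix(0.50, 5.0))  # ~(0.55, 0.45) for a 5 degree error
```

With base speed 50% and a 5 degree error, the power split becomes roughly 55%/45%, matching the behavior described in the answer.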
1711
2013-08-05T21:23:04.340
|pid|differential-drive|
<p>Consider a differential drive robot that has two motorized wheels with an encoder attached to each for feedback. Suppose there is a function for each DC motor that takes a float from -1 to 1 and sets the PWM signals to provide a proportional amount of power to that motor. Unfortunately, not all motors are created equal, so sending each motor the same PWM signal makes the robot veer left or right. I'm trying to think about how to drive the robot straight using the encoders attached to each motor as input to a PID loop.</p> <p>Here's how I would do it: I would take the difference between the left and right encoders, bound the error to some range, normalize it to be from [-1, 1], and then map it to the motor powers 0 to 1. So if I and D were zero, and we got an error of 1 (so the left motor has turned much more than the right motor), then the left motor would be set to 0, and the right motor set to 1 (causing a hard left).</p> <p>Are there any issues with this? What is a better approach?</p>
Approach to using PID to get a differential robot driving straight
<p>There is no difference in between them. Motors are categorized by type (brushed or brushless), size, kv rating and the number of poles. The size and kv rating are usually what determine what application it can be used in.</p>
1712
2013-08-06T09:26:02.310
|motor|brushless-motor|
<p>I am working on a robot with focus on speed. At the moment I am looking for a suitable motor but it would help if I understood the difference between the various options.</p> <p>To provide some background, I have not worked with RC model components before, but I think this is the only place for me to find the components needed for my robot, such as the motor.</p> <p>I have already figured out how much power the motor needs to accelerate my robot as desired, taking energy conversion efficiency and tractional resistance into account. It's about 170 watts, depending on the final weight.</p> <p>To limit my search further, I need to decide on either using a RC car motor or a RC helicopter motor now, but I don't understand the difference between these options.</p> <p>Focussing on brushless motors (if that matters), what are the differences between RC car and RC helicopter motors? What needs to be taken into account when choosing between them?</p>
What is the difference between RC motors for cars and helicopters?
<p>I recommend arranging your sensors like the following:</p> <pre><code> Thickness of Line &lt;--------&gt; /\ | | / \ | * * | || | | || moving * | | * || direction | | || | * * | || | | || </code></pre> <p>This allows you to detect many conditions. Let's say the sensors receives 1 if on black area and 0 if on white. Let's look at some of the configurations that may happen and their interpretations. I will use <code>|</code>, <code>-</code>, <code>/</code> and <code>\</code> to indicate the borders of the actual line:</p> <ul> <li><p>You are completely lost:</p> <pre><code> 0 0 0 0 0 0 </code></pre></li> <li><p>moving on straight line:</p> <pre><code> | | | 1 1 | | | 0 | | 0 | | | 1 1 | | | </code></pre></li> <li><p>The line turns right:</p> <pre><code> 0 0 ------------------ 0 | 1 | | 1 1 |-------- | | </code></pre></li> <li><p>The line turns left:</p> <pre><code> 0 0 ------------------ 1 | 0 | --------| 1 1 | | | </code></pre></li> <li><p>Gap ahead:</p> <pre><code> |________| 0 0 ________ 0 | | 0 | | | 1 1 | | | </code></pre></li> <li><p>Gap behind:</p> <pre><code> | | | 1 1 | | | 0 |________| 0 0 0 ________ | | </code></pre></li> <li><p>leaning out of the line from the left:</p> <pre><code> | | 0/ 1 / | | 0 / /0 or 1 | | / 1 1 / | | </code></pre></li> <li><p>you are being teased:</p> <pre><code> 1 1 1 1 1 1 </code></pre></li> </ul> <p>There are many other variations you can think of. As one of the examples shows, this also gives you the ability to detect when you are leaning out of the line so you can come back smoothly (without having to turn back on the line sharply). Given this, following the circle shouldn't be a problem at all regardless of whether it's green or black.</p> <hr> <p>Detecting color with grey-scale sensors is possible, but probably not what you are looking for. In your specific case, a value that is far from both 0 and 1 (say from 0.2 to 0.8 or something like that) would depict your color, which is only one. 
Note, however, that if only one of the sensors shows such a value, for example 0.5, it is likely that you are just drifting off the line and that sensor is reading a half-white/half-black area.</p> <p>If you really want to detect the actual color you have to do the following:</p> <ul> <li>Use three grey-scale sensors instead of one at each location.</li> <li><p>On each of the three, mount a color filter; one green, one blue and one red. Color filters are as simple as colored transparent pieces of plastic:</p> <p><img src="https://i.stack.imgur.com/dqfTd.jpg" alt="enter image description here"></p></li> <li><p>Read the three RGB values and convert them to <a href="https://en.wikipedia.org/wiki/HSL_color_space" rel="nofollow noreferrer">HSL (Hue-Saturation-Lightness)</a>.</p></li> <li>Base your decisions on the calculated hue. I say only on the hue because the environment lighting very easily changes your observed saturation and lightness, so they are not very reliable.</li> </ul>
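The RGB-to-hue step from the bullets above can be done with Python's standard <code>colorsys</code> module. A minimal sketch, assuming the three filtered sensor readings have been normalized to the 0..1 range:

```python
import colorsys

def hue_degrees(r, g, b):
    """Convert normalized RGB sensor readings to a hue angle in degrees.
    Hue is used instead of saturation/lightness because, as noted above,
    it is less sensitive to changes in ambient lighting."""
    h, _lightness, _saturation = colorsys.rgb_to_hls(r, g, b)
    return h * 360.0

print(hue_degrees(0.1, 0.8, 0.1))  # ~120 degrees (green)
```

A green line would then be detected by checking whether the hue falls in a band around 120 degrees, regardless of how bright the scene is.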
1728
2013-08-08T11:47:18.693
|mobile-robot|sensors|automatic|line-following|
<p>I'm working on a robotics project, and I am using grayscale sensors to automatically follow a black line: turning 90 degrees, going round in a circle, and passing through gaps in the lines etc. I was wondering what is an effective way to detect the colours and move it through the lines, with five or six grayscale sensors. </p> <p>Thank you very much.</p>
What is an effective distribution of grayscale sensors on robot
<h3>Text mode vs PDU mode</h3> <p>The SIM900 module supports both text and PDU mode. Please note that some AT commands have different syntax depending on which mode is currently active.</p> <p>To check which mode is currently active (0: PDU mode, 1: Text mode):</p> <pre><code>AT+CMGF?
</code></pre> <h3>Retrieving SMS's</h3> <p>You can use the CMGL command to retrieve unread, read, stored unsent, stored sent, or all messages.</p> <p>If text mode is currently selected:</p> <pre><code>AT+CMGL=STAT,MODE
</code></pre> <p>Parameters:</p> <pre><code>STAT:
  "REC UNREAD"  Received unread messages
  "REC READ"    Received read messages
  "STO UNSENT"  Stored unsent messages
  "STO SENT"    Stored sent messages
  "ALL"         All messages
MODE: (optional)
  0  Normal
  1  Do not change the status of the specified SMS record
</code></pre> <p>In other words, the following command should print all SMS messages:</p> <pre><code>AT+CMGL="ALL"
</code></pre> <p>If PDU mode is currently selected:</p> <pre><code>AT+CMGL=STAT,MODE
</code></pre> <p>Parameters:</p> <pre><code>STAT:
  0  Received unread messages
  1  Received read messages
  2  Stored unsent messages
  3  Stored sent messages
  4  All messages
MODE: (optional)
  0  Normal
  1  Do not change the status of the specified SMS record
</code></pre> <p>In other words, the following command should print all SMS messages:</p> <pre><code>AT+CMGL=4
</code></pre> <h3>Example</h3> <p>If text mode is currently selected, and a CMGL command is issued, the following is an example of what could be expected (note there's a line break before the actual message starts):</p> <pre><code>AT+CMGL="ALL"
+CMGL: 1,"REC READ","+85291234567",,"07/05/01,08:00:15+32",145,37
It is easy to list SMS text messages.

1                       : Message index
"REC READ"              : Message status (it's been read before)
"+85291234567"          : Source number (ie the person that sent you the sms)
"07/05/01,08:00:15+32"  : Service center timestamp
145                     : Type of originating address (145 = international)
37                      : Length of message
</code></pre> <p>Refer to Section 4.2.3, page 99, of the SIM900 AT command set for more information.</p> <h3>External links</h3> <ul> <li><a href="http://www.cooking-hacks.com/skin/frontend/default/cooking/pdf/SIM900_AT_Command_Manual.pdf" rel="nofollow">SIM900A AT Command set (V1.05: 2011-10-24)</a></li> <li><a href="http://www.developershome.com/sms/checkCommandSupport.asp" rel="nofollow">Developers home — Tutorials regarding AT commands</a> (I'd strongly recommend you take some time and go through the tutorials — especially sections 18 to 26)</li> </ul>
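On the host side, one of those text-mode <code>+CMGL</code> header lines can be split into its fields. A hedged Python sketch (real modem output may vary slightly by firmware; the <code>csv</code> module handles the quoted comma inside the timestamp):

```python
import csv
import io

def parse_cmgl_header(line):
    """Parse one text-mode '+CMGL: ...' header into (index, status, number).
    Uses csv.reader so commas inside quoted fields are not split on."""
    payload = line.split(":", 1)[1].strip()
    fields = next(csv.reader(io.StringIO(payload)))
    return int(fields[0]), fields[1], fields[2]

hdr = '+CMGL: 1,"REC READ","+85291234567",,"07/05/01,08:00:15+32",145,37'
print(parse_cmgl_header(hdr))  # (1, 'REC READ', '+85291234567')
```

The third field is the originating address, i.e. the sender's number the question asks about.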
1743
2013-08-14T08:20:38.523
|arduino|microcontroller|
<p>I am using a SIM900A and want to know the number of the sender of an incoming message. I am unable to find the specific AT command, related to receiving messages, that gives me the number the latest message came from.</p> <p>I have used AT+CNMI (which corresponds to notifications about the latest received message), but I am unable to find the sender's number in it.</p> <p>I have seen that AT+CMGL=&lt;stat&gt;[,&lt;mode&gt;] will give you a string which includes the oa, i.e. the originating address, and once that is stored in a string I can easily parse it out, but I don't know the data format of that string. I need help, or any suggestion for another possible solution.</p>
AT command in SIM900A GSM/GPRS module to find out originating address of an SMS
<p>Here are some that I know of:</p> <p>Solidworks student edition: <a href="http://www.solidworks.com/sw/education/cad-faq-students.htm" rel="nofollow">http://www.solidworks.com/sw/education/cad-faq-students.htm</a> (Free for students who use a school's privately assigned password, otherwise $149)</p> <p>AutoCAD Student version: <a href="http://students.autodesk.com/" rel="nofollow">http://students.autodesk.com/</a> (Free for students)</p> <p>Google Sketchup: <a href="http://www.sketchup.com/products/sketchup-make" rel="nofollow">http://www.sketchup.com/products/sketchup-make</a> (Free)</p> <p>FreeCAD: <a href="http://www.freecadweb.org/" rel="nofollow">http://www.freecadweb.org/</a> (Free)</p>
1745
2013-08-14T11:48:18.233
|design|
<p>I am planning to build a robot.</p> <p>1) What free or low cost robot modelling tools exist?</p>
What modelling tools are available to design a robot
<p>(I know it's old but I still feel I need to add my solution)<br> I have a similar issue: multiple units sending data, and I need a way to know which unit is which. I create a random unitId and save it in EEPROM. If needed, I can read it on the serial port, or activate one unit at a time and check on the receiver side.<br> I guess I could easily write a small sketch to write a specific unitId if that was needed.<br> Code I have in setup():</p> <pre><code>long unitId;
...

void setup() {
  long notunitId;
  ..
  pinMode(RESET_PIN, INPUT_PULLUP);
  unitId = (long)EEPROM.read(0) &lt;&lt; 24 | (long)EEPROM.read(1) &lt;&lt; 16 |
           (long)EEPROM.read(2) &lt;&lt; 8 | (long)EEPROM.read(3);
  notunitId = (long)EEPROM.read(4) &lt;&lt; 24 | (long)EEPROM.read(5) &lt;&lt; 16 |
              (long)EEPROM.read(6) &lt;&lt; 8 | (long)EEPROM.read(7);
  if (digitalRead(RESET_PIN) == LOW) {
    Serial.println("Force generation of a new unitId");
    unitId = notunitId = 4711;
  }
  if (unitId == -notunitId) {
    Serial.print("unitId from eeprom: ");
    Serial.print(unitId, HEX);
  } else {
    randomSeed(analogRead(7) * analogRead(6) * analogRead(5) + micros());
    unitId = random();
    notunitId = -unitId;
    EEPROM.write(0, unitId &gt;&gt; 24 &amp; 0xFF);
    EEPROM.write(1, unitId &gt;&gt; 16 &amp; 0xFF);
    EEPROM.write(2, unitId &gt;&gt; 8 &amp; 0xFF);
    EEPROM.write(3, unitId &amp; 0xFF);
    EEPROM.write(4, notunitId &gt;&gt; 24 &amp; 0xFF);
    EEPROM.write(5, notunitId &gt;&gt; 16 &amp; 0xFF);
    EEPROM.write(6, notunitId &gt;&gt; 8 &amp; 0xFF);
    EEPROM.write(7, notunitId &amp; 0xFF);
    Serial.print("Generated new unitId: ");
    Serial.print(unitId, HEX);
  }
  Serial.println();
}
</code></pre> <p>And in loop I then just add unitId to the message wherever needed.</p>
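The shift/mask byte packing used in that sketch can be checked on a PC with an equivalent round-trip. This mirrors only the bit logic, not the Arduino EEPROM API:

```python
def pack_id(unit_id):
    """Split a 32-bit id into 4 bytes, matching the >> 24 ... & 0xFF writes."""
    return [(unit_id >> shift) & 0xFF for shift in (24, 16, 8, 0)]

def unpack_id(b):
    """Rebuild the id from 4 bytes, matching the << 24 | ... reads."""
    return (b[0] << 24) | (b[1] << 16) | (b[2] << 8) | b[3]

uid = 0x1234ABCD
assert unpack_id(pack_id(uid)) == uid   # round-trip is lossless
print(pack_id(uid))  # [18, 52, 171, 205]
```

Storing the negated id alongside it (the <code>notunitId</code> trick) then gives a cheap validity check: fresh, unwritten EEPROM will almost never satisfy <code>unitId == -notunitId</code>.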
1753
2013-08-16T08:40:44.710
|arduino|microcontroller|communication|
<p>This might be an out-of-league question and may seem very odd. I am using multiple Arduino UNO boards over a network and want to assign a GUID and serial number to each board, so that whenever they send any data to a central server, the server can tell which device it is, given that we have assigned a name to each device.</p> <ul> <li>The first way to do this is to manually, while programming, prepend the device's GUID and serial number to each message that is sent to the central server, and then burn that hex to the Arduino.</li> </ul> <p>Is there instead a way to burn a program that always gives as output a string (GUID + serial number of the device)? For example, we program the EEPROM for this and then burn our main code to the Arduino, which picks the GUID + serial ID combo from the EEPROM and writes it before every message the Arduino pushes to the central server.</p> <p>Put another way: can we program the EEPROM with one program and our main Arduino sketch separately, like 2 files running in parallel, or is that not possible?</p> <p>Is there any other way of doing this?</p>
Assigning Serial number and GUID to a microcontroller
<p>Well, I'll be darned. A serial number shows up in dmesg when you plug the thing in. I checked two boards and the numbers were different, so hopefully these are globally unique.</p> <pre><code>me@mycompy:~/arducopter/MissionPlanner$ dmesg|tail
[26581.627967] iwlwifi 0000:03:00.0: fail to flush all tx fifo queues
[26583.020822] usb 1-1.2: new full-speed USB device number 27 using ehci-pci
[26583.115672] usb 1-1.2: New USB device found, idVendor=2341, idProduct=0010
[26583.115682] usb 1-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=220
[26583.115689] usb 1-1.2: Product: Arduino Mega 2560
[26583.115696] usb 1-1.2: Manufacturer: Arduino (www.arduino.cc)
[26583.115702] usb 1-1.2: SerialNumber: 64033353830351A0B122
[26583.116485] cdc_acm 1-1.2:1.0: ttyACM0: USB ACM device
</code></pre>
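Extracting that serial number programmatically is a small regex match. A sketch assuming the <code>SerialNumber: ...</code> line format shown above:

```python
import re

def find_usb_serial(dmesg_text):
    """Return the last 'SerialNumber: XXXX' value reported by the kernel,
    or None if no such line is present."""
    matches = re.findall(r"SerialNumber:\s*([0-9A-Fa-f]+)", dmesg_text)
    return matches[-1] if matches else None

sample = "[26583.115702] usb 1-1.2: SerialNumber: 64033353830351A0B122"
print(find_usb_serial(sample))  # 64033353830351A0B122
```

Note the pattern deliberately uses the colon form, so it skips the unrelated <code>SerialNumber=220</code> string-descriptor index that also appears in dmesg.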
1755
2013-08-16T15:48:38.817
|arduino|ardupilot|
<p>I have several <a href="http://store.3drobotics.com/products/apm-2-5-kit">APM 2.5 boards</a> and need to identify them based on some globally unique hardware signature that does not change with programming.</p> <p>Arduinos and Atmel AVR chips in general <a href="http://forum.arduino.cc/index.php?topic=45104.0">do not have</a> an accessible serial number (see also <a href="http://forum.arduino.cc/index.php?topic=45060.0">this thread</a>).</p> <p>However, the Ardupilot has so many integrated sensors and other ICs that one of them must have something unique I can use (<a href="http://stuff.storediydrones.com/APM_v252_RELEASE.zip">see schematic</a>)!</p> <p>I will be checking the datasheets for the MPU-6000, HMC5883L-TR and MS5611, but in the meantime, if someone has already figured this one out, please answer.</p>
Any globally unique signature in Ardupilot hardware, or Arduino in general?
<p>Inaccuracy due to multipath bounces is negligible compared to inaccuracy due to seeing fewer satellites, as Matthew Gordon answers. So on a land vehicle, roof is best. Towbar is very poor.</p>
1765
2013-08-20T05:29:43.533
|gps|ugv|
<p>I used to think that the higher GPS antenna position the better until I read the following on <a href="http://gpsd.berlios.de/faq.html#accuracy" rel="nofollow noreferrer">GPSd FAQ</a>:</p> <blockquote> <p>One <strong>common error is to place the GPS or antenna as high as possible</strong>. This will increase multipath effects due to signal bounce from the ground or water, which can cause the GPS to mistake its position and the time signal. The <strong>correct location for a boat GPS antenna</strong> is on the gunwale rail or pushpit rail, <strong>close to the water</strong> and as far from the mast as possible (to reduce signal bounce from the mast). If you're outside or in a fixed location, put the GPS antenna as far from buildings as possible, and on the ground.</p> <p>If you're <strong>in a car</strong>, <strong>don't put the GPS antenna on the roof, put it on the towbar</strong> or some similar location. If you're driving in a heavily built up area, you're going to get signal bounce off buildings and reduced accuracy. That's just how the physics works. Note, however, that as your velocity goes up it becomes easier for the convergence filters in your GPS to spot and discard delayed signal, so multipath effects are proportionally less important in fast-moving vehicles.</p> </blockquote> <p>Does anyone has experience placing GPS antenna on a towbar of the car as suggested? Does it give reasonable effect?</p> <p>My concern is that placing antenna there will not reduce an error that much, but will expose the device (antenna) to possible mechanical damage.</p> <p>So, are there any better positions apart from roof and towbar?</p> <p>Thanks</p>
Place for GPS antenna on autonomous vehicle
<p>I think you're misunderstanding the following two sentences from the abstract: </p> <blockquote> <p>(i) the first one is a fusion module which synthesizes line segments obtained from laser rangefinder and line features extracted from monocular camera. This policy eliminates any pseudo segments that appear from any momentary pause of dynamic objects in laser data.</p> </blockquote> <p>It looks like they're first extracting lines from the point cloud generated from the laser range finder. They then cross-reference these lines with lines extracted via computer vision on the monocular image. If a line extracted from the point cloud doesn't exist in the image, then it is NOT a valid line. Therefore, you are correct, "If the dynamic object moved, [the] laser data [WILL] update and eliminate the segment" caused by dynamic objects. However, they're trying to correctly identify segments specifically when the dynamic object is briefly static.</p> <p>You can see a specific example on page 441 where a person standing still caused a segment to be extracted from the point cloud. Fortunately, this segment does not exist in the monocular image, and so it was deemed an INVALID segment. There's also a brief description of this process in section 4 on page 437.</p>
1774
2013-08-21T07:02:49.803
|sensors|slam|
<p>In this <a href="http://www.mdpi.com/1424-8220/12/1/429" rel="nofollow">paper</a>, the author says that during the SLAM process, pseudo segments that appear from any momentary pause of dynamic objects in laser data would make the map unsatisfactory.</p> <p>How is this caused?</p> <p>If the dynamic object moved, won't the laser data update and eliminate the segment of the dynamic object?</p>
In SLAM, how does a laser range finder produce pseudo-segments from dynamic objects?
<p>This design is definitely plausible, but there are some problems I can see.</p> <ul> <li>The maximum length is limited by the force the magnets can lift, since beyond some number of them, the ones lowest in the stack won't push the others up.</li> <li>The magnets will want to get as far from the force as possible, so they will most definitely not go straight up. This calls for the use of something like a flexible tube. This may mean the design won't be as space-efficient as it seems (if it is even possible).</li> <li>The stack won't keep its shape if the magnets are not energized all the time, so the current consumption may be more than readily-available parts can handle.</li> </ul> <p>It is an interesting concept, but I can't see using it to lift any significant load over a reasonable distance.</p>
1782
2013-08-26T12:47:32.267
|actuator|
<p>So I got this idea way back when I was in high school, as a kind of electromagnetic analogue to a biological muscle. It is basically a long stack of thin electromagnets connected in parallel <img src="https://lh5.googleusercontent.com/-DUdbOfkO9bI/UhtIXyefiII/AAAAAAAAB0c/vmkKU3K6rF0/w2350-h1762-no/IMG_20130826_152132.jpg" alt="(the picture)">. When current is applied, the gaps between the electromagnets shrink, thus providing contraction of the whole chain.</p> <p>I am pretty sure it can work. It can't offer great contraction range (up to 50% I would guess), but it has the potential to provide good speed and be compact, so that multiple chains can be combined to form strong and fast linear actuators. The thing is, I never heard of this type of actuator being used. So what is the catch? Is there a better alternative? Is there a design flaw? Too much heat generated, making them impractical?</p>
Actuator design. plausible?
<p>Finally Success:</p> <p>I found that the Torobot USB board could be communicated with an Arduino serial driver. Conveniently this is available through opkg:</p> <pre><code>opkg install kernel-module-cdc-acm </code></pre> <p>When the board is plugged in, it comes up as</p> <pre><code>/dev/ttyACM0 </code></pre> <p>From here you can simply echo commands to the device. </p> <pre><code>echo "#8P1500T100" &gt; /dev/ttyACM0 </code></pre> <p>This basically says "set servo 8 to position 1500 with speed 100"</p>
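Building the command string programmatically is straightforward; a sketch assuming the <code>#&lt;servo&gt;P&lt;position&gt;T&lt;speed&gt;</code> format observed above (the helper name is made up):

```python
def torobot_cmd(servo, position, speed):
    """Build a Torobot move command string, per the format observed above:
    '#<servo>P<position>T<speed>', e.g. servo 8 to position 1500 at speed 100."""
    return "#{}P{}T{}".format(servo, position, speed)

print(torobot_cmd(8, 1500, 100))  # #8P1500T100
```

The resulting string can then be written to <code>/dev/ttyACM0</code> exactly as the <code>echo</code> example does.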
1787
2013-08-28T19:13:55.500
|control|microcontroller|rcservo|usb|embedded-systems|
<p>I'm trying to get the "Torobot" USB servo controller to work with Angstrom Linux on a Beagle Board XM.</p> <p>The servo controller registers as a USB device. The device just takes simple text commands, but there is no TTY associated with it. So I'm not sure how to send commands to it.</p> <p>Can I just send data like this (assuming that 002/005 is the device):</p> <pre><code>$ cat file.txt &gt;&gt; /dev/bus/usb/002/005 </code></pre> <p>Or do I need to associate it with the generic USB device? If so, how do I do that?</p>
How do I send text to a Torobot USB device?
<p>You can control brushless motors in 2 ways:</p> <ol> <li>control with a Hall-effect sensor: <a href="http://scholar.lib.vt.edu/theses/available/etd-09152003-171904/unrestricted/T.pdf" rel="nofollow">http://scholar.lib.vt.edu/theses/available/etd-09152003-171904/unrestricted/T.pdf</a></li> <li>sensorless (back-EMF) control: <a href="http://www.pmdcorp.com/downloads/app_notes/BrushlessSensorConfig.pdf" rel="nofollow">http://www.pmdcorp.com/downloads/app_notes/BrushlessSensorConfig.pdf</a></li> </ol> <p>Or you can buy an ESC (electronic speed control). My advice: if you are not knowledgeable about electronics, buy an ESC.</p>
1791
2013-08-29T19:25:26.487
|motor|control|microcontroller|power|brushless-motor|
<p>I am considering using a brushless outrunner motor, designed for helicopters, in my driving robot. How can I control such a brushless motor with my microcontroller? Of course I'll have a separate power source.</p> <p>The robot should be able to move forwards and backwards, so I need to control the motor in a way that also determines the direction of rotation. I think this isn't related to the question, but I need to ensure high acceleration.</p> <p>Specifically, I am talking about <a href="http://www.modellhobby.de/Motoren/E-Motoren/DYMOND-Brushless-Antriebe/Aussenlaeufer/DYMOND-MASTER-HQ-2838.htm?shop=k_staufenb&amp;SessionId=&amp;a=article&amp;ProdNr=03121706&amp;t=11&amp;c=3258&amp;p=3258" rel="nofollow noreferrer">this motor</a> which is listed in a German shop.</p> <p><img src="https://i.stack.imgur.com/Q3ih6.jpg" alt="DYMOND MASTER HQ motor"></p>
How to control a brushless motor?
<p>The low distance between the top of the robot and the ceiling really restricts your options. It seems pretty much impossible to get a centralised overview of the whole room and work from there.</p> <p>I'm not sure what kind of 'room' you are talking about and how much you can instrument it, but it might be an option to place markers on the ceiling rather than the robots. Given the short distance, you'd have to pretty much completely fill the ceiling with tiny markers that can be completely observed by an upwards pointing camera on every single robot, though you might be able to position this camera lower on the robot, say between the front and back wheels on either side, to give you a wider viewing angle. But the biggest challenge would be to print enough distinct markers to instrument the whole ceiling.</p> <p>Alternatively, it might be conceivable to instrument the floor with lots of RFID tags, provided you can find readers that have a small enough range (AFAIK, RFID readers will only tell you that a certain tag is in range, not where it is). The <a href="http://www.phidgets.com/products.php?category=14&amp;product_id=1024_0" rel="nofollow">Phidgets RFID reader</a> already has a range of approx. 3 inches, so unless you localise by seeing which group of tags you can observe (if it's possible to observe multiple tags at the same time - can you tell I have no actual working experience with RFID?), you'd have to experiment with getting smaller tags and 'shielding' them to a degree from the reader, so that they can't be read other than at very close range.</p> <p>All in all it seems a tough but very interesting challenge. If it's for work, I assume you can't tell us the purpose of the project, but it sure sounds intriguing :)</p>
1795
2013-08-31T21:49:19.597
|localization|wireless|swarm|
<p>I have a 300cm x 300cm room with a 25cm high ceiling (yes twenty five centimeters). It contains 50 small wheeled robots (about 20cm x 20cm). A central computer will orchestrate the movements of the robots, using a wireless network to send position commands to them. The robots will perform their own closed loop position control to save wireless bandwidth. The robots have 32-bit ARM microcontrollers. They have wheel position sensors, and the ability to do accurate wheel control. </p> <p>Problem: The robots can't actually do this yet because they have no way to measure their position in the room.</p> <p>Question: How can the robots be given the ability measure their position and orientation to an accuracy of better than ±5mm? I am looking for an accurate and robust solution, which is not affected by occlusions, and doesn't require a high power PC on each robot. Whatever sensors are necessary for localisation can easily be added to the robots. The microcontrollers can easily be changed for more powerful ones if the localisation solution demands it.</p>
Localizing a swarm of robots
<p>It certainly sounds like the joystick is broken, but in Robotics, nothing is certain. Some things to try:</p> <ul> <li>Swap over the X and Y wires going to the Arduino. Which axis exhibits the problem now?</li> <li>Disconnect the joystick from the Arduino, and power it up on its own. Using a multimeter, measure the voltage on the X axis wire as you move the joystick. Does this make more sense? If so, then there's something wrong with the way it's connected to the Arduino, or something wrong in software.</li> <li>Disconnect the joystick completely from its power, and use the multimeter to measure the resistance between the X axis signal wire and the GND wire. If this doesn't make sense, then there's likely a fault in the joystick. Possibly a failed solder joint. This could be fixed by simply re-heating the solder joints on that pot.</li> </ul>
1797
2013-09-01T03:20:07.567
|microcontroller|
<p><em>Outline:</em></p> <p>I'm trying to work with an Arduino and an analog thumb stick to get values for a simple differential drive robot I'm working on. The <a href="http://www.plexishop.it/en/modulo-joystick-keyes-sjoys.html" rel="nofollow noreferrer">Keyes_Sjoys Arduino Joystick Module</a> I have in my possession is giving me some strange numbers.</p> <p><em>The axis data I have:</em></p> <ul> <li>X-axis range of 0 to a shaky 470-520 with a center value of 40.</li> <li>Y-axis range of a solid 4 to a solid 1023 with a center value of 605.</li> </ul> <p><em>Problem</em></p> <p>I haven't used analog sensors before, but it seems pretty obvious that my X-axis range should feel somewhat similar to the Y-axis, yet they don't. In addition, the X-axis hits zero way before even coming close to the edge of its operating range.</p> <p>Is my sensor broken (it's new), or is there some way I can recalibrate the potentiometer?</p> <p><sub>Note, I also asked this over on <a href="https://electronics.stackexchange.com/q/80870">Electrical Engineering Stack Exchange</a>.</sub></p>
How do I Calibrate Analog Thumb stick?