<p>In my personal experience, Google SketchUp is frustrating for this kind of work. I recommend that you use the CAD package SolidWorks instead; Google SketchUp is not meant for gear design. In SolidWorks, you can create gears however you like and also see them function in a three-dimensional view on your computer screen. I honestly recommend it.</p>
1806
2013-09-03T12:54:04.527
|design|mechanism|3d-printing|
<p>I'm trying to make a differential in Google SketchUp, using this tutorial <a href="http://support.ponoko.com/entries/21249896-Gears-and-Joints-with-SketchUp-Sketchy-Physics">http://support.ponoko.com/entries/21249896-Gears-and-Joints-with-SketchUp-Sketchy-Physics</a> for the gear modeling. But I have a problem: the gears don't collide with any objects (or with other gears). What's wrong, and how do I fix it? Also, how do I make bevel gears placed at 90 degrees relative to each other, and joints between conical and cylindrical gears?</p> <p>P.S. Is there something like SketchUp and SketchyPhysics for Linux?</p>
Gear modeling in Google SketchUp and SketchyPhysics
<p>With a series of planetary gear sets aligned axially, one can gear down by high ratios. For example, the picture below (a <a href="https://upload.wikimedia.org/wikipedia/commons/thumb/8/88/Rearview_Mirror_Epicyclic_Gears.jpg/640px-Rearview_Mirror_Epicyclic_Gears.jpg" rel="nofollow noreferrer">wikipedia commons</a> image used in the <a href="https://en.wikipedia.org/wiki/Planetary_gear#Gallery" rel="nofollow noreferrer">planetary gear</a> article) shows a 2.5-cm gearset with ratio -5/352, about 1:70. Stacking three of these would give a ratio of about 1:343000. Some torque and power would be lost to friction, and plastic gears would not be strong enough to deliver all of the torque possible at the output, but the concept should be clear. If multiplying motor torque by 1/3 of a million isn't enough, add a fourth gearset, giving a torque multiplication from motor to output of about 24 million. Note that rotation speed would be correspondingly divided (by about 24 million) and only about 1/2 to 3/4 of the motor power would arrive at the output, due to friction losses.<img src="https://i.stack.imgur.com/VZIUx.jpg" alt="mirror gearset"></p> <p>To stop turning the pulley if load gets too high, you could use a <em><a href="https://en.wikipedia.org/wiki/Clutch#Torque_limiter" rel="nofollow noreferrer">torque-limiter</a></em> (or, <em>safety clutch</em> or <em>slip clutch</em>) at the output. </p>
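<p>To make the stacking arithmetic explicit (a sketch; the per-stage ratio of about 1:70.4 comes from the figure above, and the per-stage efficiency $\eta \approx 0.9$ is only an assumed illustrative value):</p> <p>$$ i_{total} = 70.4^{3} \approx 3.5 \times 10^{5}, \qquad T_{out} \approx T_{motor}\, i_{total}\, \eta^{3}, \qquad \omega_{out} = \frac{\omega_{motor}}{i_{total}} $$</p> <p>Each added stage multiplies the output torque by roughly $70\,\eta$ and divides the output speed by about 70.</p>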
1808
2013-09-03T19:23:28.217
|sensors|control|actuator|
<p>I need to assemble a small (about 8cm x 5cm x 5cm maximum) actuator with as much torque as I can get at that size. It will be driving a small reel and pulley (the reel is ~1.25cm^3, 5mm center diameter), and it needs to respond to load (e.g., stop if the load increases beyond a certain threshold). Power to the actuator will be provided via a common bus line, so the space limit isn't further reduced by the size of a battery.</p> <p>My thought is to use a worm drive (for torque) and watch for changes in current/voltage (for load), but I'm not sure if that is mechanically sound. It seems like the mechanical advantage provided by the worm would make it hard to detect a change in load.</p> <p><strong>Plan B</strong></p> <p>I could add another sensor that would gauge the force being exerted. I'd prefer to avoid adding too many points of failure to the system, but if I did, what sort of sensor would I use?</p> <p>How should I approach this?</p>
Tiny high torque actuator/sensor design
<p>You can use <a href="http://www.sketchup.com/" rel="nofollow">Google SketchUp</a>. It's free and should allow you to do most of what you want. </p> <p>SolidWorks would most likely fully match your requirements above. Since it is a one-time project, you could consider evaluating SolidWorks by requesting a <a href="http://www.solidworks.com/sw/purchase/solidworks-trial.htm" rel="nofollow">free trial</a>.</p> <p>Good luck.</p>
1813
2013-09-04T21:51:27.273
|software|
<p>Which software can be used to prototype/design robot parts (mechanical parts, body, gears, etc.)?</p> <p>I have a crazy idea I would like to try (a quadrupedal walking robot, animal-like), but I'd like to design the mechanism and test it (to some degree) in some kind of simulator before I start spending money on parts/materials. What tool could I use for that? </p> <p>I'm only interested in mechanical design (chassis + servo/motor placement + cogs/gears), not in electronic design. I'm not interested in robot <em>control</em> software, because I'll probably be able to slap something like an Arduino onto it and program the behavior I want (I'm an experienced programmer).</p> <p><strong>Details (what I'd like to see)</strong>:</p> <ol> <li>Should work in 3D, i.e. the finished system should be viewable in 3D.</li> <li>I should be able to cut materials like plywood/metal, drill holes, place gears on it, etc.</li> <li>It would be nice if it had some kind of part catalog, so that to place a gear/cog I wouldn't need to design it from scratch.</li> <li>It would be nice if I could test whether parts can actually move. I don't need a full-blown simulation, just to see if gears can turn or if they'll get stuck.</li> <li>Not interested in electronic circuitry, just mechanical parts, but I should be able to place servos.</li> <li>It would be nice if it could produce blueprints.</li> <li>Cheap/inexpensive, if possible.</li> </ol> <p>Basically, I should be able to construct a robot mechanism in it (by placing/connecting parts like gears, cogs, motors, springs), or some kind of clock, and test (to some degree) whether it actually works.</p> <p>I know that I could use blender3d for that, but it wasn't exactly designed for this purpose.</p> <p>I also heard that SolidWorks could be used for designing mechanical parts, but it is too expensive, especially for a one-time project. </p> <p>Any recommendations?</p>
Software for designing mechanical systems/robotic parts
<p>OK: <strong>as drawn</strong>, ignoring mass and accelerations, the force $F_p$ will appear as a torque on your ball screw.</p> <p><strong>However</strong>, the total force on the ball screw, and hence the torque, depends on the mass of the thing you're moving with the ball screw interacting with gravity (if it's being moved in anything other than a horizontal plane), and on whether or not the whole assembly -- frame and load -- is moving at anything other than a steady velocity.</p> <p>On a bad day, your mass-spring-damper system will have an overall resonance that interacts with your control system, making oscillations happen where you never expected them.</p>
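<p>For a rough order-of-magnitude sketch (assuming an ideal power-screw model, with $l$ the screw lead and $\eta$ the 90% efficiency from the question): equating mechanical power $F_p\, v = T_p\, \omega$ with $v = l\,\omega/2\pi$ gives</p> <p>$$ T_p \approx \frac{F_p\, l}{2\pi}\,\eta $$</p> <p>for the back-driving case (the axial force generating torque; the $\eta$ moves to the denominator when the motor drives the load instead). Whatever part of $F_p$ is not converted to torque is reacted axially by the thrust bearing, and that is the path that excites the chassis.</p>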
1815
2013-09-05T12:40:22.553
|movement|torque|differential-drive|
<p>I need an equation or some hints to solve the following problem.</p> <p>Imagine a <a href="http://en.wikipedia.org/wiki/Roller_screw" rel="nofollow noreferrer">roller screw</a> drive. I apply a torque of <code>T</code> to translationally move my load mass <code>M</code>. I assume my screw has an efficiency of 90%. Now an additional axial force affects my mass in the direction opposite to the motion. Is this force completely transformed into torque (considering the efficiency, of course), or is it possible that my whole roller screw moves, because it is not fixed? I have only found papers/books/articles for movable slides/loads but fixed shafts; in my case, motor and shaft are part of an oscillating system.</p> <p>I'm not a mechanical engineer, so I'm sorry if the answer may be trivial.</p> <p>I made a little sketch now <img src="https://i.stack.imgur.com/70UmY.jpg" alt="enter image description here"></p> <p>The process force <code>Fp</code> is pushing my mass; most of the force is transformed into a load torque <code>Tp</code> which acts against my drive torque <code>TD</code>. Some of the energy is lost to friction. The question is whether there is also a partial force (labeled <code>Tp?</code> in the sketch) that acts on the bearing and therefore excites my chassis.</p>
Roller Screw drive - axial movement instead of friction
<p>Using a so-called optical flow sensor is the best way to help with holding the horizontal (i.e. in the X-Y plane) position. I don't see any reason why you couldn't do the same for vertical control, although a sonar is probably easier and cheaper to use for this (likewise, if you are indoors, you could use 2 sonars for the horizontal position as well).</p> <p>People used to hack the sensors of optical mice to achieve this:</p> <p><a href="http://makezine.com/2007/12/15/using-an-optical-mouse-for-rob/" rel="noreferrer">http://makezine.com/2007/12/15/using-an-optical-mouse-for-rob/</a> <a href="http://areciv.com/index.php?aid=18" rel="noreferrer">http://areciv.com/index.php?aid=18</a> <a href="http://home.roadrunner.com/~maccody/robotics/croms-1/croms-1.html" rel="noreferrer">http://home.roadrunner.com/~maccody/robotics/croms-1/croms-1.html</a> <a href="http://home.roadrunner.com/~maccody/robotics/mouse_hack/mouse_hack.html" rel="noreferrer">http://home.roadrunner.com/~maccody/robotics/mouse_hack/mouse_hack.html</a></p> <p>Some ready-made sensors are available: <a href="http://www.buildyourowndrone.co.uk/Optical-Flow-Sensor-p/op-fs.htm" rel="noreferrer">http://www.buildyourowndrone.co.uk/Optical-Flow-Sensor-p/op-fs.htm</a></p> <p>Some explanation on how to use such a device with an Arduino can be found here: <a href="https://code.google.com/p/arducopter/wiki/AC2_OptFlow" rel="noreferrer">https://code.google.com/p/arducopter/wiki/AC2_OptFlow</a></p> <p>Nowadays, better sensors are out there (but much more expensive) that work in low-light conditions (i.e. indoors) and are even combined with a sonar (as it makes sense to hold altitude in order to get a better/stable optical reading for XY positioning), such as this one:</p> <p><img src="https://i.stack.imgur.com/S1APo.jpg" alt="px4flow"> <a href="http://store.3drobotics.com/products/px4flow" rel="noreferrer">http://store.3drobotics.com/products/px4flow</a></p>
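<p>As a minimal sketch of how the flow feeds back into position hold (the <code>readFlowX/readFlowY</code> and <code>setAttitudeSetpoint</code> helpers are hypothetical hooks into your sensor driver and your existing stabilizer, not a real API, and the gains are assumed):</p> <pre><code>// Integrate optical flow into an XY drift estimate and run a PD
// position hold toward the point where hold mode was engaged.
float posX = 0, posY = 0;   // integrated displacement [m]

void positionHoldStep(float dt, float altitude) {
  // Flow is an angular rate [rad/s]; multiplying by the sonar
  // altitude converts it into ground-relative velocity [m/s].
  float velX = readFlowX() * altitude;
  float velY = readFlowY() * altitude;
  posX += velX * dt;
  posY += velY * dt;

  const float Kp = 0.8f, Kd = 0.3f;           // assumed gains, tune them
  float pitchCmd = -(Kp * posX + Kd * velX);  // lean against the drift
  float rollCmd  =  (Kp * posY + Kd * velY);
  setAttitudeSetpoint(rollCmd, pitchCmd);     // hypothetical hook into
                                              // your attitude controller
}
</code></pre>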
1844
2013-09-12T15:15:05.590
|sensors|quadcopter|cameras|visual-servoing|
<p>My quadcopter can balance itself in the air using data collected from the MPU6050. With the sonar sensor, it can hover at a specific height, but it moves on the horizontal plane in a random direction. If I put an object below it, it will ascend to keep the distance between the sonar sensor and the object.</p> <p>Now I want to give it the ability to hover stably. Is it possible to add a downward-facing camera to calculate the optical flow speed in order to keep it hovering over the same point in the horizontal plane? Could I use a forward-facing camera to stabilize its vertical speed?</p>
Stabilizing a quadcopter with optical flow
<p>In general, the microphone should be covered in some sort of waterproofing material (called "potting") that would prevent its materials from ever coming in contact with the pool water. So unless you are concerned that the microphone will be smashed to pieces while in your pool, it shouldn't be a problem.</p> <p>That said, there are some other transducer technologies besides piezo. Ribbon microphones won't suffice, mostly because of their fragility. Dynamic microphones might also be a bad idea, because they are generally larger and require more motion than the others. </p> <p>I'd recommend trying a cheap electrostatic (also called electret) microphone. You should be able to cover it in silicone RTV or "plumbing goop" and still get a signal out of it.</p>
1857
2013-09-16T17:59:59.720
|sensors|
<p>I'm kicking around the idea of building a small passive sonar for an autonomous submarine. I've looked through the net for parts, trying to find a good transducer for converting the sound underwater into an electrical impulse. After looking at parts I got into the piezoelectric materials used for this, such as barium titanate or lead zirconate titanate. From what I've read on the web, some of these materials are toxic. </p> <p>My question is: are there piezoelectric materials one could use to build a sensor from scratch that do not possess the toxic qualities? Something that could preferably be thrown in a pool with my kids and not give them or me any defects.</p>
Transducer for underwater applications (passive sonar)
<p>The answer is that 3-axis accelerometers <strong>don't</strong> have a left-handed coordinate system just for gravity. In static conditions (i.e. if the accelerometer is not accelerating with respect to any inertial frame) they measure the <strong>opposite</strong> of the gravity acceleration, not the gravity acceleration itself. </p> <p>In more general terms, accelerometers measure the difference between the <strong>actual</strong> acceleration of the sensor with respect to any inertial frame and the gravitational acceleration: $$ a_{accelerometer} = a_{sensorFrame} - g $$ This "acceleration" measured by the accelerometer is sometimes called <a href="https://en.wikipedia.org/wiki/Proper_acceleration" rel="noreferrer"><em>proper acceleration</em></a>.</p> <p>This can be easily verified by checking the measurement of an accelerometer in free fall: as in that case the <strong>actual</strong> acceleration of the sensor will be equal to $g$, the accelerometer's measurement will be $0$. </p>
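<p>As a quick static sanity check (a sketch, taking the $z$ axis pointing up so that $g = -9.81\,\hat{z}\ \mathrm{m/s^2}$): for a chip resting on a table, $a_{sensorFrame} = 0$, so $$ a_{accelerometer} = 0 - (-9.81\,\hat{z}) = +9.81\,\hat{z}\ \mathrm{m/s^2}, $$ i.e. it reads $+1g$ pointing <em>up</em>, opposite to gravity. Interpreting that reading as the gravity vector itself is exactly what makes the axes appear left-handed.</p>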
1858
2013-09-16T18:18:13.083
|sensors|imu|accelerometer|calibration|
<p>Careful inspection of page 35 (figure 58) of the <a href="http://www.analog.com/media/en/technical-documentation/data-sheets/ADXL345.pdf" rel="noreferrer">ADXL345 datasheet</a> shows that under gravitational loading only, the chip uses a left-handed coordinate system. My own experiments with this chip confirm this. </p> <p>I typically only use the chip to indicate the gravity vector. So when using this chip, I simply negate the values to get a right-handed coordinate system. But this doesn't seem right. I assume there is a logical and mathematical explanation for the left-handed coordinate system but I can't figure out what it might be. </p> <p><img src="https://i.stack.imgur.com/bItD6.png" alt="Image from ADXL345 Datasheet"></p>
Why do 3-axis accelerometers seemingly have a left-handed coordinate system?
<p>While searching for a different topic I found your post; I work with the Sparkfun Razor 9DOF IMU too. Honestly, it was a pain to get it all working.</p> <p>First of all, you have to do the <a href="https://github.com/ptrbrtz/razor-9dof-ahrs/wiki/Tutorial" rel="nofollow noreferrer">razor-9dof-ahrs</a> tutorial from ptrbrtz. When this is working you can do the next steps. By the way: read it carefully and you should be able to do it on your own!</p> <p>First I tried to get it working with an Arduino Uno and a SoftwareSerial. Sadly I wasn’t able to get it to work with a SoftwareSerial; with both Text and Binary output I received only rubbish data. I worked a whole day on this topic without success, and I'd say I have a lot of experience with Arduino programming. If you are able to get it working with a SoftwareSerial, please post an answer.</p> <p>With the Uno I was only able to receive data with this configuration: <a href="http://blog.mobileapes.com/2010/09/read-data-from-9dof-razor-imu-via.html" rel="nofollow noreferrer">http://blog.mobileapes.com/2010/09/read-data-from-9dof-razor-imu-via.html</a>. But be aware that TX should go to RX and RX to TX! The description in the picture (Rx to Rx and Tx to Tx) is wrong.</p> <p>If you are able to use an Arduino MEGA, which has at least 4 hardware UARTs, you could use the code I developed. I once got it to work on an Uno, but I searched for that code and can't find it anymore; with a little trial and error you should be able to do it. But be aware that if you are sending data to the PC and receiving data from the IMU on only one UART, you could negatively influence the serial communication.</p> <pre><code>/**************************** INFO ********************************/ // This code expects a message in the format: H 12.00,-345.00,678.00 /******************************************************************/ #include &lt;TextFinder.h&gt; /*** Defines the frequency at which the data is requested ***/ /*** frequency f=1/T, T=period; ie. 100ms --&gt; f=10Hz, 200ms --&gt; f=5Hz ***/ #define PERIOD 100 // [ms] /*** Vars for IMU ***/ TextFinder finder(Serial3); const int NUMBER_OF_FIELDS = 3; // how many comma separated fields we expect float rpy[NUMBER_OF_FIELDS]; // array holding values for all the fields /************************************************************/ /*** Setup /************************************************************/ void setup() { Serial.begin(57600); // init the Serial port to print the data to PC Serial3.begin(57600); // init the Serial3 port to get data from the IMU delay(500); initIMU(); } /************************************************************/ /*** Loop /************************************************************/ void loop() { // print manager timer static unsigned long timer = 0; static unsigned long currentTime = 0; /************************************************************/ /*** Request after a specific period for the data /************************************************************/ currentTime = millis(); if(currentTime - timer &gt;= PERIOD) { // Request one output frame from the IMU // #f only requests one reply, replies are still bound to the internal 20ms (50Hz) time raster. // So worst case delay that #f can add is 19.99ms. 
Serial3.write(&quot;#f&quot;); /************************************************************/ /*** Get the IMU values /************************************************************/ // the current field being received int fieldIndex = 0; // search the Serial Buffer as long as the header character is found boolean found_HeaderChar = finder.find(&quot;H&quot;); if (found_HeaderChar) { // Get all 3 values (yaw, pitch, roll) from the Serial Buffer while(fieldIndex &lt; NUMBER_OF_FIELDS) { rpy[fieldIndex++] = finder.getFloat(); } } /************************************************************/ /*** Print out the values /*** Format: yaw, pitch, roll, left_Encoder, right_Encoder /************************************************************/ if (found_HeaderChar) { // print Interval Serial.print(currentTime - timer); Serial.print(&quot;,&quot;); // print IMU values for(fieldIndex=0; fieldIndex &lt; NUMBER_OF_FIELDS; fieldIndex++) { Serial.print(rpy[fieldIndex]); Serial.print(&quot;,&quot;); } Serial.println(&quot;&quot;); } timer = millis(); } } /********************************/ /*** Initialize Functions /********************************/ void initIMU() { // Output angles in TEXT format &amp; Turn off continuous streaming output &amp; Disable error message output Serial3.write(&quot;#ot#o0#oe0&quot;); Serial3.flush(); } </code></pre> <p>Edit: Oh I forgot to say that you must edit the Output.ino from the Razor AHRS firmware. Search for the function <code>output_angles()</code> and change it to:</p> <pre><code>// Output angles: yaw, pitch, roll void output_angles() { if (output_format == OUTPUT__FORMAT_BINARY) { float ypr[3]; ypr[0] = TO_DEG(yaw); ypr[1] = TO_DEG(pitch); ypr[2] = TO_DEG(roll); Serial.write((byte*) ypr, 12); // No new-line } else if (output_format == OUTPUT__FORMAT_TEXT) { Serial.print(&quot;H &quot;); Serial.print(TO_DEG(yaw)); Serial.print(&quot;,&quot;); Serial.print(TO_DEG(pitch)); Serial.print(&quot;,&quot;); Serial.print(TO_DEG(roll)); Serial.println(); } } </code></pre>
1864
2013-09-18T00:50:06.823
|arduino|imu|
<p>I was looking into the Razor IMU from Sparkfun, and realized that the only example code on Sparkfun's website for it was meant for hooking it up to a computer (the AHRS head tracker used a serial-to-USB chip). I looked on Google and saw nothing but stories about how it did not work. </p> <p>Is there any good way to hook up the Razor IMU to an Arduino Uno (or any Arduino without hardware support for more than one serial port), and if so, does example code exist?</p>
Razor IMU Arduino interfacing
<p>It's nice, but for the most part unnecessary if you're just dealing with robotics. If you need some high-end optimization it may be helpful, but most compilers can optimize better than the average human these days anyway. Unless you really know what you're doing and understand the architecture well enough to optimize further, I'd say it's not necessary. Plus, different manufacturers have their own instruction sets, many of which behave differently on different MCUs. Another thing to consider is that assembly syntax varies between MCU families; for instance, AVR MCUs have a different syntax than MIPS architectures and TI DSPs. It all depends on what you're doing.</p>
1881
2013-09-24T15:50:54.913
|microcontroller|software|programming-languages|
<p>I ask this since assembly language is really close to the micro-controller hardware. Also, what micro-controller would you recommend?</p> <p>The bot has to search for an object that I show it beforehand and then 'lose'. Note, I do not know anything about micro-controllers.</p>
Should you learn assembly language for robotics?
<p>The native word size of an MCU directly affects both the precision and the speed of math operations on that MCU. If you're going to be doing a lot of 16/32-bit math operations, then yes, a 32-bit MCU will perform significantly better than an 8-bit MCU.</p> <p>This article may help you understand the difference: <a href="http://www.embedded.com/electronics-blogs/industry-comment/4027627/The-8-bit-MCUs-won-t-be-going-away-anytime-soon" rel="nofollow">http://www.embedded.com/electronics-blogs/industry-comment/4027627/The-8-bit-MCUs-won-t-be-going-away-anytime-soon</a>.</p> <p>The Due has an 84MHz CPU vs the Mega's 16MHz, and this isn't even taking into account that a 32-bit core allows the Due to perform operations on 4-byte-wide data within a single CPU clock cycle, whereas the Mega needs at least one clock cycle per byte.</p>
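<p>A rough way to see the difference for yourself (a benchmark sketch under the assumption that you run it on both boards and compare; no specific timings are implied): a 32-bit multiply-accumulate loop like this compiles to multi-instruction sequences on the 8-bit AVR, but to near single-cycle operations on the Due's 32-bit Cortex-M3.</p> <pre><code>void benchmarkFilter() {
  volatile int32_t acc = 0;                    // volatile: stop the
  volatile int32_t sample = 1234, coeff = 567; // optimizer removing it
  unsigned long t0 = micros();
  for (int i = 0; i &lt; 1000; i++) {
    acc += sample * coeff;                     // 32-bit multiply-accumulate
  }
  unsigned long t1 = micros();
  Serial.print(&quot;1000 MACs took [us]: &quot;);
  Serial.println(t1 - t0);
}
</code></pre>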
1895
2013-09-27T06:27:53.983
|arduino|imu|
<p>First, I'm a beginner in the MCU/robotics world (I've worked with ATMega+CVavr, but that's all), so please bear with me.</p> <p>I'm making a prototype data glove (like <a href="http://keyglove.com" rel="nofollow">KeyGlove</a>, but much simpler). It consists of:</p> <ol> <li>IMU sensors (MPU 9150 9DOF, all readings fused with the built-in DMP) -> reads hand position and orientation</li> <li>A minimum of 2 flex sensors -> reads finger flexion</li> <li>MCU (well, Arduino to be specific)</li> </ol> <p>The sensors are plugged into the Arduino and the readings will be filtered (e.g. low-pass, Kalman) on the Arduino before being transferred over serial to a PC. The PC will then translate the data into a virtual gripper to move an object (VR). </p> <p>Initially I planned to use an UNO + Pansenti’s MPU9150 library in my code, then I realised how tiny the remaining flash memory would be (i.e. the MPU9150 lib code size is ~29k, and the Uno has 32k). My project is still in a very early stage, so a lot of things are expected to be changed and added; with so little flash memory left, I can only do so much. </p> <p>I immediately looked at the Mega as a replacement (256k flash), but realised there is also the newer Due with a faster processor. </p> <p>They effectively cost the same for now.</p> <p>My main concern here is the robustness and compatibility when:</p> <ul> <li><p>Designing the HW interface to the Arduino (making circuits, adding shields)</p></li> <li><p>Code development (available libraries)</p></li> <li><p>Streaming and filtering the sensor readings <strong>(would 32-bit MCUs help, or is it overkill?)</strong></p></li> </ul> <p><em>I know this question might sound too localized, but I believe a lot of projects that similarly utilize multiple sensor readings + filtering will also benefit from this discussion</em>.</p> <p>I'll revise the question if needed. The main question is probably: <strong>Would 32-bit MCUs perform <em>significantly</em> better at multiple sensor reading and signal filtering compared to 8-bit MCUs?</strong></p> <p>Or in my case.. should I go with the Mega or the Due?</p>
Arduino for simple Data-Glove. Should I go with Mega or Due?
<p>This might be a stupid idea, but wouldn't a CVT (continuously variable transmission) be easier to control with an electronic system, from both mechanical and control points of view?</p> <p>I'm just not sure how easy it is to build one. I hope this helps! Good luck! :D</p>
1897
2013-09-27T20:06:01.870
|motor|automatic|
<p>I'm looking to make an automatically shifting bicycle for my senior design project (along with some additional features TBD). However, I come from an electrical/software background, not a mechanical one, so I need to figure out a decent way to shift gears. I was thinking of leaving the gear system in place as-is and using some sort of motor (servo, or stepper motor with worm gears) to pull and release the wire cable as needed. However, I have some concerns with this, namely the amount of torque needed to pull the wire and finding something with enough holding torque. Perhaps my best option is to keep the trigger shifters as well and actuate them with a solenoid. My other concern (namely with the worm gear) is that it'll be too slow. </p> <p>So I would like to pick your brains here for a moment. Thanks.</p>
Mechanism for changing gears on a bicycle
<p>Yes, as defined in literature, all <em>localization</em> requires a prior map. </p> <p>This is because the goal of localization is to <em>localize</em> a robot with respect to some feature. If you don't know where the feature is, you can't know where the robot is.</p> <p>If you are <em>uncertain</em> about the features, then you are doing <em>Simultaneous Localization and Mapping</em> (SLAM).</p>
1901
2013-09-28T01:43:26.767
|localization|slam|mapping|
<p>So I'm doing some reading on Monte Carlo Localization, and it sounds like the approach is based on using a predefined map, but I just need to make sure (because I haven't read anywhere that it absolutely needs a predefined map). I just want to make 100% sure that my understanding is correct:</p> <p>Does it absolutely need a predefined map?</p> <p>[maybe I need to add the below stuff as another question, but here goes nothing] And what other localization approaches are there that don't need a predefined map? So far I've only read about SLAM (which sounds to me like a general approach instead of a specific implementation).</p> <p>Thanks in advance!</p>
Does Monte Carlo Localization need a predefined map?
<p>Plywood is available in various grades (A, B, C), sanded and unsanded, veneer core or MDF core, and with surfaces of various kinds of woods. I'm not aware of any plywood that will emulate the friction coefficient of <a href="http://en.wikipedia.org/wiki/Delrin" rel="nofollow">Delrin</a> (polyoxymethylene). </p> <p>With some hard, tightly grained, or oily woods (eg ironwood, ebony, teak) or Baltic Birch plywood, it might be possible to polish the wood to a point that its smoothness approaches that of polyoxymethylene. Note that those woods might cost more than polyoxymethylene does.</p> <p>For other woods, it might be possible to use a filler and an epoxy finish to emulate some properties of Delrin, but if your project is about building your robot and not about accurately emulating properties of polyoxymethylene, I don't recommend the approach of experimenting with different finishes on plywood.</p> <p>Instead, it may be best to purchase several <a href="http://en.wikipedia.org/wiki/Polyethylene" rel="nofollow">polyethylene</a> cutting boards and cut them up for use as polyoxymethylene testing standins. Large cutting boards often are available cheap on Ebay, and small cutting boards can be found at thrift stores.</p> <p>Note that polyoxymethylene is significantly stronger, harder, and more wear-resistant than the polyethylene in cutting boards, but polyethylene's properties are so much closer to polyoxymethylene's than plywood's are, that the prototyping fit should be much closer. For a still-closer fit, you can buy “drops” (small cutoffs) of UHMWPE (<a href="http://en.wikipedia.org/wiki/Ultra-high-molecular-weight_polyethylene" rel="nofollow">ultra high molecular weight polyethylene</a>) that in some applications will work as a Delrin replacement.</p>
1903
2013-09-28T06:56:12.847
|design|
<p>I'm working on a robot with a team, and we're building our robot out of acetal <a href="http://en.wikipedia.org/wiki/Polyoxymethylene" rel="nofollow">polyoxymethylene (POM)</a> (specifically, delrin) plastic. However, we'd like to prototype the robot before we build it out of POM, as POM is somewhat expensive.</p> <p>There are several critical areas that plywood would be used in place of POM:</p> <ul> <li>Over sliding surfaces</li> <li>Around gearboxes </li> <li>Under weight stress</li> </ul> <p>We'd like to take into account the friction coefficients, smoothness, and rigidity of the materials in deciding whether plywood is a valid prototype substitute. The material will be 1/4" thick.</p> <p>What differentiates plywood from acetal POM with respect to the relevant points?</p>
Is finished plywood a comparable prototyping substitute for polyoxymethylene?
<p>No two motors will ever turn with the same angular velocity given the same voltage. If you power each of your Rover 5 motors with 12V (I don't know what they're rated for), you'll see that each motor spins at a slightly different speed. If you want to guarantee you're traveling in a straight line, you need to implement velocity control on both wheels. One method of doing this is implementing a PID controller on the drive wheels to ensure their velocities are the same, based on encoder ticks per unit time. Otherwise (let's assume you have two wheels driving the vehicle), one wheel will turn faster than the other, and you'll slowly drift in one direction.</p> <p>However, you may want to turn a vehicle with no steering control! That means you want to turn your drive wheels at different velocities (albeit this will cause your wheels to slip horizontally and thus lose some traction/power), and so you need two different encoders that feed two different velocity controllers. For a tank-like system, if the front left wheel encoder is ticking and the rear left wheel encoder is NOT ticking, then maybe your tread has fallen off! It's really very useful for creating a robust system.</p> <p>Edit: I keep editing and adding more stuff... having multiple encoders will also allow you to identify failures. If you see that one wheel has stopped moving, it could be stuck on something and/or broken! This could allow you to halt the system and tell the operator that a mechanical failure has occurred with, for example, the front left wheel. This can only be done if you have more than one encoder.</p> <p>As a side note, it's always good to have a redundant system in case one breaks!</p>
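<p>As a minimal sketch of the velocity loop described above (assumed names and gains; <code>readEncoderTicks()</code> and <code>setMotorPWM()</code> are hypothetical stand-ins for your quadrature-decoder ISR counter and your motor driver):</p> <pre><code>// One PI velocity controller per wheel: drive the measured tick rate
// toward a shared setpoint so both wheels really turn at the same speed.
const float Kp = 0.5f, Ki = 2.0f;   // assumed gains -- tune on the robot
float integral = 0;
long  lastTicks = 0;

void velocityControlStep(float targetTicksPerSec, float dt) {
  long ticks = readEncoderTicks();             // hypothetical counter
  float measured = (ticks - lastTicks) / dt;   // ticks per second
  lastTicks = ticks;

  float error = targetTicksPerSec - measured;
  integral += error * dt;
  int pwm = (int)(Kp * error + Ki * integral); // PI control law
  setMotorPWM(constrain(pwm, -255, 255));      // respect PWM limits
}
</code></pre>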
1911
2013-09-30T03:23:16.163
|mobile-robot|motor|control|
<p>I just got my Rover 5 chassis with 4 motors and 4 <a href="http://en.wikipedia.org/wiki/Rotary_encoder#Incremental_rotary_encoder" rel="nofollow">quadrature encoders</a> and I am trying to utilize the optical encoders. I know the encoders generate pulse signals which can be used to measure the speed and direction of the motor.</p> <p>I want to know how 4 separate optical encoders add value for the controller of a Rover 5-like platform. The controller normally uses PWM to control the speed of the motors. If two motors are running at the same speed then their encoder outputs will be the same. So why should the controller monitor all 4 encoders?</p>
How are the optical encoders used in platforms like Rover 5?
<p><strong>Tips for MORSE on Ubuntu 12.04</strong>:</p> <p>If you're having trouble compiling MORSE, perhaps these can help:</p> <ol> <li>Install python3-dev or python-dev or both (I'm not sure which one was needed)</li> <li>Check which version python3 points to. If it's the wrong one, you can specify the python version explicitly in cmake. The cmake command I used is: "cmake -DCMAKE_INSTALL_PREFIX=/home/oferb/opt/morse -DPYTHON_EXECUTABLE=/usr/local/bin/python3.3 .."</li> </ol> <p>If you've already managed to compile MORSE, then make sure your Blender was compiled with the same version of python (i.e. python 3.2.2) that MORSE was compiled with. I used this Blender: <a href="http://download.blender.org/release/Blender2.62" rel="nofollow">http://download.blender.org/release/Blender2.62</a>.</p> <p><strong>Tips for MORSE on Ubuntu 13.04</strong>:</p> <p>Note that "sudo apt-get install morse-simulator" gives you morse 1.0, but according to MORSE support, "morse create", which shows up in the tutorials, is something new in morse-1.1. You might want to search for an older tutorial, or install a more recent version of MORSE.</p>
1918
2013-10-01T14:28:08.767
|simulator|
<p>I've been having trouble installing MORSE. I am trying to install it on Ubuntu 12.04 and on a VirtualBox with Ubuntu 13.04 (I don't need it on a VirtualBox, I'm just trying to make <em>something</em> work). On Ubuntu 12.04 I get the following errors at the cmake stage:</p> <pre><code>$ cmake -DCMAKE_INSTALL_PREFIX=/home/oferb/opt/morse_build .. -- will install python files in /home/oferb/opt/morse_build/lib/python3/dist-packages CMake Error: The following variables are used in this project, but they are set to NOTFOUND. Please set them or make sure they are set and tested correctly in the CMake files: PYTHON_INCLUDE_DIR (ADVANCED) used as include directory in directory /home/oferb/mnt/svr_home/opt/morse used as include directory in directory /home/oferb/mnt/svr_home/opt/morse/src used as include directory in directory /home/oferb/mnt/svr_home/opt/morse/src/morse used as include directory in directory /home/oferb/mnt/svr_home/opt/morse/src/morse/builder used as include directory in directory /home/oferb/mnt/svr_home/opt/morse/src/morse/modifiers used as include directory in directory /home/oferb/mnt/svr_home/opt/morse/src/morse/sensors used as include directory in directory /home/oferb/mnt/svr_home/opt/morse/src/morse/multinode used as include directory in directory /home/oferb/mnt/svr_home/opt/morse/src/morse/middleware used as include directory in directory /home/oferb/mnt/svr_home/opt/morse/bindings used as include directory in directory /home/oferb/mnt/svr_home/opt/morse/testing used as include directory in directory /home/oferb/mnt/svr_home/opt/morse/testing/base used as include directory in directory /home/oferb/mnt/svr_home/opt/morse/testing/failures used as include directory in directory /home/oferb/mnt/svr_home/opt/morse/testing/robots/human used as include directory in directory /home/oferb/mnt/svr_home/opt/morse/testing/robots/segway used as include directory in directory /home/oferb/mnt/svr_home/opt/morse/testing/robots/pr2 used as include directory in directory /home/oferb/mnt/svr_home/opt/morse/testing/robots/pionner3dx </code></pre> <p>On a fresh VMBox with Ubuntu 13.04, after 'morse check' succeeds, I try "morse create mysim" and get:</p> <pre><code>adminuser@adminuser-VirtualBox:~$ morse create mysim usage: morse [-h] [-b BASE] [--name NAME] [-c] [--reverse-color] [-g GEOM] [-v] {check,edit,run} [scene] ... morse: error: argument mode: invalid choice: 'create' (choose from 'check', 'edit', 'run') </code></pre> <p>Any suggestions?</p> <p>UPDATE:</p> <p>I've managed to install MORSE on Ubuntu 12.04. Make sure your Blender was compiled with the same version of python (i.e python 3.2.2) that MORSE was compiled in. I used this Blender: <a href="http://download.blender.org/release/Blender2.62" rel="nofollow">http://download.blender.org/release/Blender2.62</a></p>
Installing MORSE simulator on Ubuntu 12.04
<p>The vast majority of the time, I end up cutting off whatever connector is at the end of the cable hanging out of the stepper motor, and crimping on something else compatible with the board I want to connect it to.</p> <p>Occasionally the board I want to connect it to has some sort of cage clamp terminal strips (also known as spring clamp terminal strips) or some other terminal block, so I can cram in the raw bare wires without any connector on the end.</p> <p>As <a href="https://robotics.stackexchange.com/questions/100/what-connectors-are-most-reliable/252#252">I mentioned earlier</a>, many RepRaps use 8P8C "RJ45" "Ethernet" connectors to carry power over CAT5 cable, usually in a way completely incompatible with any power-over-Ethernet standard or any Ethernet data standard. <a href="http://reprap.org/wiki/RepRap_project_FAQ#wires_and_connectors" rel="nofollow noreferrer">"Why do many RepRaps have RJ45 connectors?"</a>; <a href="http://reprap.org/wiki/CupCakeWiring#Plug_in_Extruders_.2F_Toolheads" rel="nofollow noreferrer">a</a>; <a href="http://reprap.org/wiki/Extruder_Controller_2.2#RS485_Comms_.2B_Power" rel="nofollow noreferrer">b</a>; <a href="http://reprap.org/wiki/Motherboard_1.2#RS485_Comms_.2B_Power" rel="nofollow noreferrer">c</a>; <a href="http://reprap.org/wiki/Monotronics" rel="nofollow noreferrer">d</a>; etc.</p> <p>(Quite a few people, for reasons inexplicable to me, insist on removing those 8P8C sockets and using their own favorite connector: <a href="http://reprap.org/wiki/PCB_adaptions_for_Mendel" rel="nofollow noreferrer">http://reprap.org/wiki/PCB_adaptions_for_Mendel</a>.)</p>
1919
2013-10-01T21:49:15.783
|stepper-motor|wiring|
<p>I need to actuate 3 or 4 CNC-like NEMA 23 (~1 N.m torque) stepper motors, and I would like some cable solution to connect the motor to the motor driver easily. </p> <p>I have not yet bought anything; I have searched various robotics stores and eBay, but have not yet found a triple (motor, cables, driver) which would be "plug and play". </p> <p>As stepper motors usually have 4 to 6 wires, and there are multiple motors, manually soldering everything would be too time-consuming, error-prone and messy.</p> <p>Is there a standard way to deal with cables for stepper motors?</p>
How can we manage stepper motor cables?
<p>I introduced a different frame of reference, which I called $\mathbf{R}_p$, the position of the robot on the ground plane, instead of the "body" frame that I asked about.</p> <p>$\mathbf{R}_p$ moves with the robot, but only in yaw and X and Y position. The pitch and roll (provided by the estimator) are relative to this frame, so the roll ($\phi$) and pitch ($\theta$) are relative to the frame $\mathbf{R}_p$. To transform a point from $\mathbf{R}_p$ to the local frame $\mathbf{L}$, we just rotate by $\psi$ (yaw) about the Z axis of $\mathbf{R}_p$, e.g.: </p> <p>$$ \mathbf{T}_R^L = \begin{bmatrix} \cos(\psi) &amp; -\sin(\psi) &amp; 0 &amp; 0 \\ \sin(\psi) &amp; \cos(\psi) &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 1 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 1 \end{bmatrix} $$</p> <p>$$ P_L = \mathbf{T}_R^L * \begin{bmatrix} \phi \\ \theta \\ 0 \\ 0 \end{bmatrix} $$</p> <p>This gives me the attitude with respect to the local frame that I established when I initialized my system.</p>
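<p>A quick sanity check of the transform (a sketch with an assumed yaw value): for $\psi = 90^{\circ}$, $\cos\psi = 0$ and $\sin\psi = 1$, so $P_L = (-\theta,\ \phi,\ 0)^T$. In words: once the robot has yawed $90^{\circ}$, a body pitch $\theta$ shows up as a rotation about the local $-x$ axis and a body roll $\phi$ as a rotation about the local $y$ axis, which is what you'd expect from swapping the axes.</p>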
1925
2013-10-05T00:39:23.150
|mobile-robot|kinematics|
<p>I could use some guidance regarding a coordinate system transform problem. My situation is this: my system begins at some unknown location, which I initialize the location (x y) and orientation (roll, pitch, and yaw) all to zero. I establish a frame of reference at this point, which I call the "local" coordinate frame. It is fixed in the world and does not move. At system startup, the body frame is perfectly aligned with the local frame, where body +x points forward, +y to the right and +z down. The body frame is fixed to my system, and travels with the system as it moves.</p> <p>I have an estimation routine that provides me with the x and y position, as well as the roll, pitch, and yaw of the system. Yaw is rotation about the z axis of the local frame. Pitch and roll are with respect to the body frame (I.e.,if the robot pitches up, I always get a positive value. If it rolls right, I get a positive value.)</p> <p>How can I take the known roll and pitch values and transform them to be with respect to the local (fixed) frame?</p>
Performing the proper coordinate system transformation
<p>Look at electron-beam welding technology. Businesses like <a href="http://www.sciaky.com/products_EB.html" rel="nofollow">Sciaky's Additive Manufacturing</a> have found a niche market by using 3D printing techniques in metal applications. They have a 2-minute video on <a href="https://www.youtube.com/watch?v=A10XEZvkgbY" rel="nofollow">Youtube</a>.</p> <p>Specifically with GMAW, you will want to use a wire with very little manganese and other "cleaner" elements. This wire will cause the formation of a silica slag (glass) on the top of the weld. This slag will act as an insulator and prevent the next successive arc on that spot from establishing. </p> <p>Read the literature that comes with the wire you select. Often straight CO2 gas will work more economically than an Argon gas mix. Also factor in temperature/gas expansion when you design your enclosure, and consider pre- and post-flow gas timing when estimating your output speed.</p>
1944
2013-10-12T19:30:42.633
|3d-printing|
<p>Has anybody experimented with <a href="http://en.wikipedia.org/wiki/Gas_metal_arc_welding" rel="nofollow">GMAW</a> for additive manufacturing? The thing is, welding wire is so much cheaper than ABS or PLA filaments and, well, it is steel you are printing in, not some flimsy plastic! I imagine the arc deposition printhead would be constructed similarly to one used in plastic filament printers, except there is no need for the heating element (so, even simpler). Welding often requires fast Z speed (to finely control the arc), so I think a Delta (<a href="http://www.wired.com/design/2013/02/deltamaker-3d-printer/" rel="nofollow">DeltaMaker</a>) chassis would work best. GMAW calls for some sort of inert gas to insulate the heated metal from oxygen. It would make sense to seal off most of the interior of the printer and fill it with heavier-than-air inert gas during printing. </p> <p>I would highly appreciate any pointers on existing 3D printer designs employing this deposition method, as well as flaws in the design I outlined here.</p>
Arc welder for 3d printing
<pre><code>#include &lt;AccelStepper.h&gt;

AccelStepper Xaxis(1, 12, 6); // driver interface: pin 12 = step, pin 6 = direction

void setup()
{
  Xaxis.setMaxSpeed(4000);
  pinMode(3, INPUT);          // speed-select input
}

void loop()
{
  if (digitalRead(3) == HIGH) {
    Xaxis.setSpeed(20);
  } else {
    Xaxis.setSpeed(-10);      // - sign changes direction!
  }
  Xaxis.runSpeed();           // step the motor at the currently set speed
}
</code></pre>
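<p>A minimal sketch of the distance-based speed switch being asked about below (using the question's positions and AccelStepper's position API; untested, and shown for the forward leg only):</p> <pre><code>#include &lt;AccelStepper.h&gt;

AccelStepper stepper1(AccelStepper::FULL4WIRE, 0, 1, 2, 3);

void setup()
{
  stepper1.setMaxSpeed(400.0);      // allow the higher speed
  stepper1.setAcceleration(400.0);
  stepper1.moveTo(5000);
}

void loop()
{
  // Run at 200 steps/s for the first 2500 steps, then at 400 steps/s.
  if (stepper1.currentPosition() &lt; 2500)
    stepper1.setMaxSpeed(200.0);
  else
    stepper1.setMaxSpeed(400.0);

  // Reverse at the limits, as in the original code.
  if (stepper1.distanceToGo() == 0)
    stepper1.moveTo(-stepper1.currentPosition());

  stepper1.run();
}
</code></pre>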
1947
2013-10-14T06:21:05.723
|arduino|control|stepper-motor|
<p>I have this project I'm working on where I'll need the speed of the stepper motor to change set speed at a certain distance, I just can't figure out a way to do it. I'm using arduino and a stepper motor, this is the current code.</p> <pre><code>#include &lt;AccelStepper.h&gt; AccelStepper stepper1(AccelStepper::FULL4WIRE, 0, 1, 2, 3); void setup() { stepper1.setMaxSpeed(200.0); stepper1.setAcceleration(400.0); stepper1.moveTo(5000); } void loop() { // Change direction at the limits if (stepper1.distanceToGo() == 0) stepper1.moveTo(-stepper1.currentPosition()); stepper1.run(); </code></pre> <p>What I want it to do basically is to first <code>moveTo(2500)</code> at the current speed 200 then after 2500 I want it to increase speed to 400. After it has moved 5000 it turns and moves back to position but that's implemented already.</p>
I want my stepper motor to switch speed while traveling (not acceleration wise)
<p>The short answer is "no, a sonic range sensor can't do it". </p> <p>It might "work" under very controlled conditions, but relying on only the attenuation of the returned signal to determine thickness may leave you open to incorrect results due to <a href="http://en.wikipedia.org/wiki/Multipath_propagation" rel="nofollow noreferrer">multipath propagation</a> effects. </p> <p>The more traditional way to measure thickness with sound is called profiling. The following is excerpted from <a href="http://woodshole.er.usgs.gov/operations/sfmapping/seismic.htm" rel="nofollow noreferrer">a USGS Woods Hole Science Center page on Seismic Profiling systems</a>:</p> <blockquote> <p>reflection profiling is accomplished by [emitting] acoustic energy in timed intervals [...]. The transmitted acoustic energy is reflected from boundaries between various layers with different acoustic impedances [i.e. the air and the paper]. Acoustic impedance is defined by the bulk density of the medium times the velocity of the sound within that medium. The reflected acoustic signal is received [by one or more microphones]. The receiver converts the reflected signal to an analog signal [which is digitized and heavily processed to determine the makeup of the materials]. </p> </blockquote> <p><img src="https://i.stack.imgur.com/wwlhm.png" alt="Seismic Profiling System"></p> <p>Rather than just measuring the time of the incoming pulse, you'd need to analyze both the time and frequency domain of the recovered signal to solve for the acoustic properties necessary to transform your transmitted pulse into the received pulse.</p> <p>So the long answer is that it can be done sonically, although a sonic range sensor is generally insufficient for this purpose. </p>
1954
2013-10-16T06:09:58.530
|arduino|sensors|
<p>The HC-SR04 is directly connected to an Arduino board with the receiver end (echo) connected to analog pin 2 and the transmitter (trigger) connected to digital pin 4.</p> <p>I am wondering if I can use the sensor to sense the change in attenuation when an object blocks its path. The receiver and transmitter will be positioned like this </p> <p><img src="https://i.stack.imgur.com/phIBB.jpg" alt="enter image description here"></p> <p>The line in the middle is supposed to be a paper. I'll be using it to see the difference between one sheet of paper and two sheets as they travel through the gap. </p> <p>Now I'm not sure if this is possible, but the way I see it working is kind of similar to an Arduino IR LED setup, where with one sheet of paper in the way the light gets a little bit weaker, and with two it takes a heavier hit.</p> <p>Is this possible?</p>
Is it possible to use HC-SR04 ultrasonic range sensor to indicate thickness of a material
<p>Telemetry is used to get in-flight data back to a ground station. For example, attitude (roll, pitch, yaw), altitude, GPS position, speed, battery voltages. It just gives you more feedback.</p> <p>It can also be used to alert you when battery levels are too low or when a fault occurs.</p>
1965
2013-10-21T11:11:13.247
|uav|ardupilot|
<p>I'm pretty new to the world of UAS after a ten year holiday from RC flying.</p> <p>I'm looking at Ardupilot and am wondering what purpose telemetry serves? Is it just to get in flight data back to a ground station or can it also be used to program the system in flight? Are there other capabilities that I am missing?</p>
What is telemetry used for?
<p>This is a new interpretation of Grey Walter's Tortoise.</p> <p><a href="http://en.wikipedia.org/wiki/William_Grey_Walter" rel="nofollow">http://en.wikipedia.org/wiki/William_Grey_Walter</a></p> <p>Try a BEAM-robotics approach: a very simple control system that reacts to a nearing obstacle by steering in the opposite direction. Look at a subsumption / augmented finite state machine architecture (<a href="http://people.csail.mit.edu/brooks/papers/AIM-1091.pdf" rel="nofollow">Brooks, AIM-1091</a>) to layer the higher goal of driving down the corridor onto near-term obstacle avoidance.</p> <p>This is a solved problem. But I caution you to also consider bumper switches, in case the reflection geometry misses nearby objects, to give you another layer of safety.</p>
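<p>A minimal sketch of that layering (assumed pins, threshold and gain; the lower avoidance layer subsumes the centering layer whenever either wall gets too close):</p> <pre><code>#include &lt;Servo.h&gt;

Servo steeringServo;          // attach in setup(): steeringServo.attach(9);
const int TOO_CLOSE = 200;    // assumed raw IR threshold -- tune it

void steerStep() {
  // Combine the two sensors on each side however you like; the raw
  // analogRead() of a Sharp IR rises as the wall gets closer.
  int left  = analogRead(A0);
  int right = analogRead(A1);

  int steer;  // offset from straight ahead, in servo degrees
  if (left &gt; TOO_CLOSE || right &gt; TOO_CLOSE) {
    // Avoidance layer subsumes centering: hard turn away from
    // whichever wall is nearer.
    steer = (left &gt; right) ? 30 : -30;
  } else {
    // Centering layer: steer proportionally to the side difference.
    steer = (left - right) / 8;   // assumed proportional gain
  }
  steeringServo.write(90 + constrain(steer, -30, 30));
}
</code></pre>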
1974
2013-10-24T21:20:04.303
|mobile-robot|sensors|
<p>I want to steer an RC car in a straight line. The car has 4 Sharp IR sensors, one on each corner, to help it steer down the corridor. The corridor is irregular and looks something like the picture below.</p> <p>The car needs to stay exactly in the middle (shown by the lighter line) and use the IR sensors to correct its path.</p> <p>The car has a servo on the front wheels to steer and another that controls the speed.</p> <p>I tried running it using an algorithm that summed the values on each side of the car and took the difference. The difference was then fed to a PID controller, whose output went to steer the car. The greater the value from the PID (on either side), the greater the steering angle, until the car reaches the middle.</p> <p>It works for the parts where the walls are at similar distances from the center, but even then it oscillates a lot around the center, and it fails miserably around the bumps in the corridor.</p> <p>I need to make changes to the algorithm and need some help steering me in the right direction.</p> <p>The IR sensors are also quite finicky; is there a way to filter out the noise and make the readings more stable?</p> <p>Any help regarding the changes that need to be implemented is much appreciated.</p> <p>Currently the car only uses the 4 IR sensors for guidance. I can also use 2 ultrasonic sensors.</p> <p><img src="https://i.stack.imgur.com/sXdUT.png" alt="enter image description here"></p>
Autonomous car steering using IR sensors
<p>I've worked with the same sonar in a similar setting, and your only real option (if you can't do anything to the pool itself) is to find an open water testing area. You could also contact BlueView. They're pretty responsive.</p>
1976
2013-10-25T05:58:53.117
|sonar|
<p>I belong to an AUV team at my university. We are planning to put a multibeam 2D imaging sonar (the BlueView P900) on our AUV to detect obstacles underwater.</p> <p>I have the following questions on the feasibility of testing/implementing such sonars on AUVs.</p> <ol> <li><p>Since these multibeam sonars produce multiple reflections arriving at different times from various surfaces when testing in a pool, is there any recommended way to filter this noise in the image obtained from the sonar pings?</p></li> <li><p>Are such sonars in use/test by any other team/organization that does pool testing, rather than ocean/reservoir testing where multiple reflections are almost zero except for reflections from the obstacle(s)?</p></li> <li><p>Also, I would like to know the recommended image processing algorithms that can be implemented/used to detect obstacles in the sonar images.</p></li> </ol>
Usage of Multibeam 2D Imaging Sonar for AUVs, testing them in the pool environment
<p>I just posted a similar question, trying to locate a paper that a co-worker remembered (ICRA? 2011? Not from Mudd nor the Foxe paper). I'd second @JustSomeHelp: the landmarks need to be distinguishable. So a bump sensor alone won't help, as you'll basically have (in naive form) one landmark that keeps getting different locations. That's going to end badly.</p> <p>Clearly, though, the rectilinear assumption is super helpful; there's a '99 paper that addresses this. However, I've not yet found a canonical approach to doing it:</p> <p>Z. J. Butler, A. A. Rizzi, and R. L. Hollis. Contact sensor-based coverage of rectilinear environments. In Proc. of IEEE Int'l Symposium on Intelligent Control, 1999.</p>
1985
2013-10-27T08:13:45.520
|mobile-robot|slam|
<p>First, is it possible to build a map without landmarks for a robot in 2D? Let's say we have an aisle surrounded by two walls, and the robot moves in this environment. Is it feasible to formulate such a SLAM problem, or must landmarks be available to do so?</p>
SLAM without landmarks?
<p>The SwissRanger is an RF-modulated light source with phase detectors, which is one type of time-of-flight camera. The other type is the range-gated imager. The Wikipedia page for time-of-flight cameras, <a href="http://en.wikipedia.org/wiki/Time-of-flight_camera" rel="nofollow">http://en.wikipedia.org/wiki/Time-of-flight_camera</a>, states this about range-gated imagers: </p> <p>"Range gated imagers can also be used in 2D imaging to suppress anything outside a specified distance range, such as to see through fog. A pulsed laser provides illumination, and an optical gate allows light to reach the imager only during the desired time period."</p> <p>It also states that:</p> <p>"The ZCam by 3DV Systems is a range-gated system."</p>
1991
2013-10-29T09:01:10.230
|mobile-robot|cameras|
<p>I'm looking to build an outdoor robot and I need to know if <a href="http://en.wikipedia.org/wiki/Time-of-flight_camera" rel="nofollow">time-of-flight cameras</a> like the <a href="http://www.mesa-imaging.ch/swissranger4500.php" rel="nofollow">SwissRanger™ SR4500</a> work in fog, does anybody have some experiences on that?</p>
Are time-of-flight cameras like the swissranger affected by outdoor fog?
<p>The Jacobian is of size $2\times 4$ because you have four state elements and two measurement equations.</p> <p>The <a href="https://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant" rel="noreferrer">Jacobian</a> of the measurement model is the matrix where each $i,j$ element corresponds to the partial derivative of the $i$th measurement equation with respect to the $j$th state element.</p>
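<p>Concretely, differentiating the two measurement equations with respect to each state element (a sketch of the algebra; the velocity entries are zero because neither $z_b$ nor $z_r$ depends on $v_x$ or $v_y$):</p> <p>$$ \frac{\partial z_b}{\partial x} = \frac{-y}{x^2+y^2}, \qquad \frac{\partial z_b}{\partial y} = \frac{1}{x\left(1+(y/x)^2\right)} = \frac{x}{x^2+y^2}, \qquad \frac{\partial z_r}{\partial x} = \frac{x}{\sqrt{x^2+y^2}}, \qquad \frac{\partial z_r}{\partial y} = \frac{y}{\sqrt{x^2+y^2}} $$</p> <p>Stacking the bearing row on top of the range row and appending the two zero columns for $v_x$ and $v_y$ reproduces the $2\times4$ matrix in the question.</p>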
1992
2013-10-30T04:59:42.603
|kalman-filter|
<p>The state vector is $$ \textbf{X} = \begin{bmatrix} x \\ y \\ v_{x} \\ v_{y} \end{bmatrix}$$</p> <p>The transition function is $$ \textbf{X}_{k} = f(\textbf{X}_{k-1}, \Delta t) = \begin{cases} x_{k-1} + v_{xk} \Delta t \\ y_{k-1} + v_{yk} \Delta t \end{cases} $$</p> <p>The measurements are $z_{b} = atan2(y, x)$ and $z_{r} = \sqrt{ x^{2} + y^{2}}$</p> <p>The Jacobian of the observation model is: $$ \frac{\partial h}{\partial x} = \begin{bmatrix} \frac{-y}{x^{2}+y^{2}} &amp; \frac{1}{x(1+(\frac{y}{x})^{2})} &amp; 0 &amp; 0 \\ \frac{x}{\sqrt{ x^{2} + y^{2}}} &amp; \frac{y}{\sqrt{ x^{2} + y^{2}}} &amp; 0 &amp; 0 \end{bmatrix} $$</p> <p>My question is: how was the Jacobian of the observation model obtained, and why is it $2\times4$?</p> <p>The model is from <a href="http://www.mrpt.org/Kalman_Filters" rel="nofollow">Kalman filter</a>.</p>
Jacobian of the observation model?
<p>Not by merely looking at the Jacobian, but by looking at the <a href="https://en.wikipedia.org/wiki/Singular-value_decomposition" rel="noreferrer">Singular Value Decomposition</a> of the Jacobian, one can see which degrees of freedom are lost, if any. Technically this comes down to finding the null space, but the SVD is perhaps more familiar and easier to work with.</p> <p>For example, let the Jacobian be:</p> <p>$$J = \begin{bmatrix} -50 &amp; 650 &amp; 325 &amp; 0 &amp; 0 &amp; 0\\ 1275.8 &amp; 0 &amp; 0 &amp; 50 &amp; 0 &amp; 0\\ 0 &amp; -1225.8 &amp; -662.92 &amp; 0 &amp; -100 &amp; 0\\ 0 &amp; 0 &amp; 0 &amp; 0.86603 &amp; 0 &amp; 1\\ 0 &amp; 1 &amp; 1 &amp; 0 &amp; 1 &amp; 0\\ 1 &amp; 0 &amp; 0 &amp; 0.5 &amp; 0 &amp; 0 \end{bmatrix}$$ A Singular Value Decomposition of this is given by $J=U\Sigma V^T$, where</p> <p>$$U=\left[\begin{matrix}-0.46 &amp; 0.01 &amp; -0.89 &amp; 0.0 &amp; 0.0 &amp; -0.02\\\\0.03 &amp; -1.0 &amp; -0.03 &amp; 0.0 &amp; 0.0 &amp; 0.0\\\\0.89 &amp; 0.05 &amp; -0.46 &amp; 0.0 &amp; 0.0 &amp; -0.01\\\\0.0 &amp; 0.0 &amp; 0.0 &amp; -0.97 &amp; 0.24 &amp; 0.0\\\\0.0 &amp; 0.0 &amp; 0.02 &amp; -0.02 &amp; -0.07 &amp; -1.0\\\\0.0 &amp; 0.0 &amp; 0.0 &amp; -0.24 &amp; -0.97 &amp; 0.07\end{matrix}\right]$$</p> <p>$$\Sigma=\left[\begin{matrix}1574.54 &amp; 0.0 &amp; 0.0 &amp; 0.0 &amp; 0.0 &amp; 0.0\\\\0.0 &amp; 1277.15 &amp; 0.0 &amp; 0.0 &amp; 0.0 &amp; 0.0\\\\0.0 &amp; 0.0 &amp; 50.59 &amp; 0.0 &amp; 0.0 &amp; 0.0\\\\0.0 &amp; 0.0 &amp; 0.0 &amp; 1.36 &amp; 0.0 &amp; 0.0\\\\0.0 &amp; 0.0 &amp; 0.0 &amp; 0.0 &amp; 0.34 &amp; 0.0\\\\0.0 &amp; 0.0 &amp; 0.0 &amp; 0.0 &amp; 0.0 &amp; 0.0\end{matrix}\right]$$</p> <p>$$V^T=\left[\begin{matrix}0.04 &amp; -0.88 &amp; -0.47 &amp; 0.0 &amp; -0.06 &amp; 0.0\\\\-1.0 &amp; -0.04 &amp; -0.02 &amp; -0.04 &amp; 0.0 &amp; 0.0\\\\0.0 &amp; -0.24 &amp; 0.34 &amp; -0.03 &amp; 0.91 &amp; 0.0\\\\0.03 &amp; 0.01 &amp; -0.01 &amp; -0.7 &amp; -0.02 &amp; -0.72\\\\0.03 &amp; 0.01 &amp; -0.01 &amp; -0.71 &amp; -0.02 &amp; 0.7\\\\0.0 &amp; -0.41 &amp; 0.82 &amp; 0.0 &amp; -0.41 &amp; 0.0\end{matrix}\right]$$</p> <p>Now, it can be seen that the 6th singular value of $\Sigma$ is zero. Therefore, the corresponding (6th) row of $U$ is $$\left[\begin{matrix}0.0 &amp; 0.0 &amp; 0.0 &amp; -0.24 &amp; -0.97 &amp; 0.07\end{matrix}\right]$$, which shows the direction of the singularity. It means that at this instant the end effector can't move in this direction. In other words, an angular velocity $ω$ of the end-effector about the vector $\hat{n}=-0.24\hat{i}-0.97\hat{j}+0.07\hat{k}$ is not possible at this instant.</p> <p>The corresponding (6th) column of $V^T$ is $$\left[\begin{matrix}0.0\\\\0.0\\\\0.0\\\\-0.72\\\\0.7\\\\0.0\end{matrix}\right]$$ which is the direction of the above singularity in the joint space: the singularity at the end-effector is reached when $\dot{\theta_1}=0, \dot{\theta_2}=0, \dot{\theta_3}=0, \dot{\theta_4}=-0.72$ units, $\dot{\theta_5}=0.7$ units and $\dot{\theta_6}=0$.</p>
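<p>A minimal sketch of extracting these directions numerically (using Eigen's <code>JacobiSVD</code>; the tolerance is an assumed value, and <code>J</code> must be filled with your own Jacobian):</p> <pre><code>#include &lt;Eigen/Dense&gt;
#include &lt;iostream&gt;

int main() {
  Eigen::MatrixXd J(6, 6);
  J.setZero();
  // ... fill J with the Jacobian of the current joint configuration ...

  Eigen::JacobiSVD&lt;Eigen::MatrixXd&gt; svd(
      J, Eigen::ComputeFullU | Eigen::ComputeFullV);

  const double tol = 1e-6;  // assumed threshold for a "zero" singular value
  for (int i = 0; i &lt; svd.singularValues().size(); ++i) {
    if (svd.singularValues()(i) &lt; tol) {
      // Column of U: the Cartesian twist direction that is lost.
      std::cout &lt;&lt; &quot;Lost Cartesian direction: &quot;
                &lt;&lt; svd.matrixU().col(i).transpose() &lt;&lt; std::endl;
      // Column of V: the joint-velocity direction producing no motion.
      std::cout &lt;&lt; &quot;Null-space joint direction: &quot;
                &lt;&lt; svd.matrixV().col(i).transpose() &lt;&lt; std::endl;
    }
  }
}
</code></pre>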
1997
2013-10-30T16:31:17.353
|robotic-arm|inverse-kinematics|industrial-robot|
<p>For a 6DoF robot with all revolute joints the Jacobian is given by: $$ \mathbf{J} = \begin{bmatrix} \hat{z_0} \times (\vec{o_6}-\vec{o_0}) &amp; \ldots &amp; \hat{z_5} \times (\vec{o_6}-\vec{o_5})\\ \hat{z_0} &amp; \ldots &amp; \hat{z_5} \end{bmatrix} $$ where $z_i$ is the unit z axis of joint $i+1$ (using DH parameters), $o_i$ is the origin of the coordinate frame connected to joint $i+1$, and $o_6$ is the origin of the end effector. The Jacobian matrix is the relationship between the Cartesian velocity vector and the joint velocity vector: $$ \dot{\mathbf{X}}= \begin{bmatrix} \dot{x}\\ \dot{y}\\ \dot{z}\\ \dot{r_x}\\ \dot{r_y}\\ \dot{r_z} \end{bmatrix} = \mathbf{J} \begin{bmatrix} \dot{\theta_1}\\ \dot{\theta_2}\\ \dot{\theta_3}\\ \dot{\theta_4}\\ \dot{\theta_5}\\ \dot{\theta_6}\\ \end{bmatrix} = \mathbf{J}\dot{\mathbf{\Theta}} $$</p> <p>Here is a singularity position of a Staubli TX90XL 6DoF robot:</p> <p><img src="https://i.stack.imgur.com/G9z3e.png" alt="robot with joint 4 and joint 6 aligned pointed down"></p> <p>$$ \mathbf{J} = \begin{bmatrix} -50 &amp; -425 &amp; -750 &amp; 0 &amp; -100 &amp; 0\\ 612.92 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0\\ 0 &amp; -562.92 &amp; 0 &amp; 0 &amp; 0 &amp; 0\\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0\\ 0 &amp; 1 &amp; 1 &amp; 0 &amp; 1 &amp; 0\\ 1 &amp; 0 &amp; 0 &amp; -1 &amp; 0 &amp; -1 \end{bmatrix} $$</p> <p>You can easily see that the 4th row, corresponding to $\dot{r_x}$, is all zeros, which is exactly the lost degree of freedom in this position.</p> <p>However, other cases are not so straightforward.</p> <p><img src="https://i.stack.imgur.com/riy82.png" alt="robot with joint 4 and joint 6 aligned pointed at an angle"> $$ \mathbf{J} = \begin{bmatrix} -50 &amp; -324.52 &amp; -649.52 &amp; 0 &amp; -86.603 &amp; 0\\ 987.92 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0\\ 0 &amp; -937.92 &amp; -375 &amp; 0 &amp; -50 &amp; 0\\ 0 &amp; 0 &amp; 0 &amp; 0.5 &amp; 0 &amp; 0.5\\ 0 &amp; 1 &amp; 1 &amp; 0 &amp; 1 &amp; 0\\ 1 &amp; 0 &amp; 0 &amp; -0.866 &amp; 0 &amp; -0.866 \end{bmatrix} $$</p> <p>Here you can clearly see that joint 4 and joint 6 are aligned because the 4th and 6th columns are the same. But it's not clear which Cartesian degree of freedom is lost (it should be a rotation about the end effector's x axis, in red).</p> <p>Even less straightforward are singularities at workspace limits.</p> <p><img src="https://i.stack.imgur.com/NiMCk.png" alt="robot at workspace limit with no aligned joint axes"></p> <p>$$ \mathbf{J} = \begin{bmatrix} -50 &amp; 650 &amp; 325 &amp; 0 &amp; 0 &amp; 0\\ 1275.8 &amp; 0 &amp; 0 &amp; 50 &amp; 0 &amp; 0\\ 0 &amp; -1225.8 &amp; -662.92 &amp; 0 &amp; -100 &amp; 0\\ 0 &amp; 0 &amp; 0 &amp; 0.86603 &amp; 0 &amp; 1\\ 0 &amp; 1 &amp; 1 &amp; 0 &amp; 1 &amp; 0\\ 1 &amp; 0 &amp; 0 &amp; 0.5 &amp; 0 &amp; 0 \end{bmatrix} $$</p> <p>In this case, the robot is able to rotate $\dot{-r_y}$ but not $\dot{+r_y}$. There are no rows full of zeros, no equal columns, and no clearly linearly dependent columns or rows. </p> <p>Is there a way to determine which degrees of freedom are lost by looking at the Jacobian?</p>
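<p>For reference, the column-by-column definition above maps directly to code. A minimal numpy sketch (the function and argument names are my own assumptions; the frame data would come from forward kinematics):</p> <pre><code># Sketch of the geometric Jacobian construction above; the z axes and frame
# origins (from the DH frames) are assumed inputs.
import numpy as np

def geometric_jacobian(z_axes, origins, o_end):
    """Each column i is [z_i x (o_end - o_i); z_i] for a revolute joint."""
    cols = []
    for z, o in zip(z_axes, origins):
        linear = np.cross(z, o_end - o)       # top half: linear velocity part
        cols.append(np.hstack([linear, z]))   # bottom half: angular part
    return np.column_stack(cols)              # 6 x (number of joints)

# toy usage: a single joint at the origin rotating about +z
z0, o0 = np.array([0.0, 0.0, 1.0]), np.zeros(3)
print(geometric_jacobian([z0], [o0], np.array([1.0, 0.0, 0.0])))
# column [0, 1, 0, 0, 0, 1]^T: pure y motion plus rotation about z
</code></pre>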
Is there a way to determine which degrees of freedom are lost in a robot at a singularity position by looking at the Jacobian?
<p>A few notes first.</p> <p>First, as you mentioned, you can't just pull out a submatrix and do an <em>update</em> on it. You <em>can</em> do a propagation step on a submatrix, however. </p> <p>This is because of the cross-covariance terms (which "spread" information across different parts of the state). This is why having a more accurate estimate of your heading will lead to more accurate position estimates, for example. </p> <p>However, following your edit, simply updating the cross-correlation (covariance) terms won't do it either; you need to update the whole matrix (unless you know <em>for sure</em> that some elements are independent, conditioned on the state estimate). </p> <h3>Here you go:</h3> <p>To do this, form the Jacobian matrices as before, but note that the measurement Jacobian should contain zeros in the columns of state variables that are not being measured. Then the magic of matrix inversion will spread the innovation corrections to the correct parts of the state. The Jacobian matrix <em>must</em> be of size $n\times m$ for $m$ measured values and $n$ state variables (or $m\times n$, depending on your definition of the Jacobian). None of this "update part of the covariance" junk, unless you know <em>for sure</em> that the Jacobian elements are equivalent to identity. The safest way is to use the full Jacobian.</p> <h3>Other hacks (once everything is theoretically correct)</h3> <p>However, this <em>still</em> won't ensure PD covariance matrices. I strongly <em>strongly</em> recommend you don't do the following hacks until you fix all the other mistakes. But in the end, for a field-deployable system that operates for non-trivial amounts of time, I've found it is almost always necessary to do the following things:</p> <ol> <li><p>After all updates, let the covariance $P$ be $P\gets \frac{1}{2}P + \frac{1}{2}P^{T}$, just to "even out" the off-diagonal terms -- for symmetry</p></li> <li><p>Let $P\gets P + \epsilon I_{n\times n}$, where $\epsilon$ is a small scalar to ensure you don't underflow (and wreck the <a href="https://en.wikipedia.org/wiki/Condition_number">condition number</a> of your matrix)</p></li> </ol>
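<p>Concretely, here is a hedged numpy sketch of both ideas (my own illustration; the sizes and values are placeholders matching the 12-state example, not the asker's actual filter): a full-size measurement Jacobian with zero columns for the unmeasured states, followed by the two positive-definiteness hacks.</p> <pre><code># Sketch of updating a 12-state EKF with a 2-element measurement of the first
# two states; sizes match the example in the question, values are placeholders.
import numpy as np

n, eps = 12, 1e-9
x = np.zeros(n)                    # stand-in state
P = np.eye(n)                      # stand-in covariance
z = np.array([2.0, 2.0])           # measurement of states 0 and 1 only
R = 5.0 * np.eye(2)                # measurement covariance

H = np.zeros((2, n))               # full-size Jacobian: zero columns for the
H[0, 0] = 1.0                      # unmeasured states, ones where a state is
H[1, 1] = 1.0                      # measured directly

S = H @ P @ H.T + R                # innovation covariance
K = P @ H.T @ np.linalg.inv(S)     # 12x2 gain: spreads the correction
x = x + K @ (z - H @ x)            # the whole state is corrected
P = (np.eye(n) - K @ H) @ P        # the whole covariance is updated

P = 0.5 * (P + P.T)                # hack 1: restore exact symmetry
P = P + eps * np.eye(n)            # hack 2: nudge away from singularity
</code></pre>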
2000
2013-10-30T18:46:37.280
|kalman-filter|
<p>I have an unscented Kalman filter (UKF) that tracks the state of a robot. The state vector has 12 variables. Each time I carry out a prediction step, my transfer function (naturally) acts on the entire state. However, my sensors provide measurements of different parts of the robot's state, so I may get roll, pitch, yaw and their respective velocities in one measurement, and then linear velocity in another.</p> <p>My approach to handling this so far has been to simply create sub-matrices for the covariance, carry out my standard UKF update equations, and then stick the resulting values back into the full covariance matrix. However, after a few updates, the UKF yells at me for trying to pass a matrix that isn't positive-definite into a Cholesky Decomposition function. Clearly the covariance is losing its positive-definite properties, and I'm guessing it has to do with my attempts to update subsets of the full covariance matrix. </p> <p>As an example taken from an actual log file, the following matrix (after the UKF prediction step) is positive-definite:</p> <pre><code> 1.1969 0 0 0 0 0 0.11567 0 0 0 0 0 0 1.9682 0 0 0 0 0 0.98395 0 0 0 0 0 0 1.9682 0 0 0 0 0 0.98395 0 0 0 0 0 0 1.9682 0 0 0 0 0 0.98395 0 0 0 0 0 0 1.9682 0 0 0 0 0 0.98395 0 0 0 0 0 0 1.9682 0 0 0 0 0 0.98395 0.11567 0 0 0 0 0 0.01468 0 0 0 0 0 0 0.98395 0 0 0 0 0 1 0 0 0 0 0 0 0.98395 0 0 0 0 0 1 0 0 0 0 0 0 0.98395 0 0 0 0 0 1 0 0 0 0 0 0 0.98395 0 0 0 0 0 1 0 0 0 0 0 0 0.98395 0 0 0 0 0 1 </code></pre> <p>However, after processing the correction for one variable (in this case, linear X velocity), the matrix becomes:</p> <pre><code> 1.1969 0 0 0 0 0 0.11567 0 0 0 0 0 0 1.9682 0 0 0 0 0 0.98395 0 0 0 0 0 0 1.9682 0 0 0 0 0 0.98395 0 0 0 0 0 0 1.9682 0 0 0 0 0 0.98395 0 0 0 0 0 0 1.9682 0 0 0 0 0 0.98395 0 0 0 0 0 0 1.9682 0 0 0 0 0 0.98395 0.11567 0 0 0 0 0 0.01 0 0 0 0 0 0 0.98395 0 0 0 0 0 1 0 0 0 0 0 0 0.98395 0 0 0 0 0 1 0 0 0 0 0 0 0.98395 0 0 0 0 0 1 0 0 0 0 0 0 0.98395 0 0 0 0 0 1 0 0 0 0 0 0 0.98395 0 0 0 0 0 1 </code></pre> <p>The difference between the two matrices above is </p> <pre><code> 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -0.00468 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 </code></pre> <p>As you can see, the only difference between the two is the value in the location of the variance of linear X velocity, which is the measurement I just processed. This difference is enough to "break" my covariance matrix.</p> <p>I have two questions:</p> <ol> <li><p>Updating a subset of the filter doesn't appear to be the right way to go about things. Is there a better solution?</p></li> <li><p>Alternatively, am I missing a step that would keep my covariance matrix as positive-definite?</p></li> </ol> <p>Thanks!</p> <p>EDIT:</p> <p>It looks like I'm not properly placing the values back into the original covariance matrix. Simply copying the values back isn't sufficient. I need to track the correlation coefficients for the covariance matrix, and make sure that when I update a variance value, I update all the values in its row/column to maintain the correlation coefficient value. I have to do some more testing to verify that this is my issue, but some initial analysis in Matlab suggests that it is. 
If I'm correct, I'll answer my own question.</p> <p>EDIT 2:</p> <p>Given the response below and after trying it, I can see that my original edit idea won't fly. However, I have one more question:</p> <p>As this is a UKF, I don't actually have Jacobian matrices. I think I see how I would make it work within the UKF update equations, but even in an EKF - and I ask because I have one of those as well - my state-to-measurement function $h$ is going to end up being the identity matrix, as I am directly measuring my state variables. In that case, I take it my "Jacobian" would just be an $m \times n$ matrix with ones in the $(i, i)$ location, where $i$ is the index of the measured values in the measurement vector?</p>
Maintaining positive-definite property for covariance in an unscented Kalman filter update
<p>According to your covariance matrix, it does make sense. The third column of the first row is sigma_xz, and it is nonzero, which means that, according to your covariance matrix, the random variable z is correlated with the random variable x, so changes in x will affect z. If you further analyze your covariance matrix, it seems that z is correlated with y (element (2,3) of the covariance matrix is nonzero) and, of course, with itself (element (3,3) of the covariance matrix). For the velocities, it certainly makes sense that updates in position will update your velocity estimates, because the EKF is simply capturing information regarding changes in state. Again, looking at the covariance matrix, I assume the 7th, 8th and 9th elements of your state vector are the velocities in x, y and z. In the covariance matrix, the elements (1,7), (1,8), and (1,9) are all nonzero, so the three velocities are correlated with your x position; further analysis shows that they're also correlated with your y and z positions by similar arguments.</p>
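<p>As a side note (my own snippet, with the two entries borrowed from the covariance in the question), normalizing the covariance into a correlation matrix makes these couplings explicit, since $\rho_{ij} = P_{ij}/\sqrt{P_{ii}P_{jj}}$:</p> <pre><code># Converting covariance to correlation: rho_ij = P_ij / sqrt(P_ii * P_jj).
import numpy as np

def correlation_matrix(P):
    d = np.sqrt(np.diag(P))
    return P / np.outer(d, d)

P = np.array([[0.1015, 0.0195],    # variance of x and its cross-term with
              [0.0195, 0.0100]])   # the x velocity, taken from the question
print(correlation_matrix(P))       # off-diagonal ~0.61: strongly correlated
</code></pre>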
2009
2013-11-01T15:22:32.457
|kalman-filter|
<p>This is a follow up to </p> <p><a href="https://robotics.stackexchange.com/questions/2000/maintaining-positive-definite-property-for-covariance-in-an-unscented-kalman-fil/2004#2004">Maintaining positive-definite property for covariance in an unscented Kalman filter update</a></p> <p>...but it's deserving of its own question, I think.</p> <p>I am processing measurements in my EKF for a subset of the variables in my state. My state vector is of cardinality 12. I am directly measuring my state variables, which means my state-to-measurement function $h$ is the identity. I am trying to update the first two variables in my state vector, which are the x and y position of my robot. My Kalman update matrices currently look like this:</p> <p>State $x$ (just test values): $$ \left(\begin{array}{ccc} 0.4018 &amp; 0.0760 \end{array} \right) $$</p> <p>Covariance matrix $P$ (pulled from log file): $$ \left(\begin{array}{ccc} 0.1015 &amp; -0.0137 &amp; -0.2900 &amp; 0 &amp; 0 &amp; 0 &amp; 0.0195 &amp; 0.0233 &amp; 0.1004 &amp; 0 &amp; 0 &amp; 0 \\ -0.0137 &amp; 0.5825 &amp; -0.0107 &amp; 0 &amp; 0 &amp; 0 &amp; 0.0002 &amp; -0.7626 &amp; -0.0165 &amp; 0 &amp; 0 &amp; 0 \\ -0.2900 &amp; -0.0107 &amp; 9.6257 &amp; 0 &amp; 0 &amp; 0 &amp; 0.0015 &amp; 0.0778 &amp; -2.9359 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 0.0100 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0.0100 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0.0100 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 \\ 0.0195 &amp; 0.0002 &amp; 0.0015 &amp; 0 &amp; 0 &amp; 0 &amp; 0.0100 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 \\ 0.0233 &amp; -0.7626 &amp; 0.0778 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 1.0000 &amp; 0 &amp; 0 &amp; 0 &amp; 0 \\ 0.1004 &amp; -0.0165 &amp; -2.9359 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 1.0000 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0.0100 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0.0100 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0.0100 \\ \end{array} \right) $$</p> <p>Measurement $z$ (just test values): $$ \left(\begin{array}{ccc} 2 &amp; 2 \end{array} \right) $$</p> <p>"Jacobean" $J$: $$ \left(\begin{array}{ccc} 1 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 1 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 \\ \end{array} \right) $$</p> <p>Measurement covariance $R$ (just test values): $$ \left(\begin{array}{ccc} 5 &amp; 0 \\ 0 &amp; 5 \\ \end{array} \right) $$</p> <p>Kalman gain $K = PJ^T(JPJ^T + R)^{-1}$:</p> <p>$$ \left(\begin{array}{ccc} 0.0199 &amp; -0.0024 \\ -0.0024 &amp; 0.1043 \\ -0.0569 &amp; -0.0021 \\ 0 &amp; 0 \\ 0 &amp; 0 \\ 0 &amp; 0 \\ 0.0038 &amp; 0.0000 \\ 0.0042 &amp; -0.1366 \\ 0.0197 &amp; -0.0029 \\ 0 &amp; 0 \\ 0 &amp; 0 \\ 0 &amp; 0 \\ \end{array} \right) $$</p> <p>$K$ is 12x2, meaning that my innovation - and therefore both measurement and current state - would need to be 2x1 in order to have a 12x1 result to add to the current full state:</p> <p>$x' = x + K(z - h(x_s))$</p> <p>where $x_s$ is a vector containing only the parts of the full state vector that I am measuring. 
</p> <p>Here's my question: $K(z - h(x_s))$ yields</p> <p>$$ \left(\begin{array}{ccc} 0.0272 \\ 0.1969 \\ -0.0948 \\ 0 \\ 0 \\ 0 \\ 0.0062 \\ -0.2561 \\ 0.0258 \\ 0 \\ 0 \\ 0 \\ \end{array} \right) $$</p> <p>Does it make sense that this vector, which I will add to the current state, has non-zero values in positions other than 1 and 2 (the x and y positions of my robot)? The other non-zero locations correspond to the robot's z location, and the x, y, and z velocities. It seems strange to me that a measurement of x and y should yield changes to other variables in the state vector. Am I incorrect in this assumption?</p> <p>Incidentally, the covariance update works very well with the Jacobean in this form, and maintains its positive-definite property.</p>
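<p>As a toy illustration of the formula above (my own numbers, loosely modeled on the covariance in this post, with a reduced 3-variable state of x position, z position and x velocity): the gain $K = PJ^T(JPJ^T + R)^{-1}$ inherits the off-diagonal structure of $P$, so a measurement of x alone still produces nonzero corrections for every state that is correlated with x.</p> <pre><code># Toy version of the update above: state [x, z, vx], only x measured.
# Values loosely modeled on the covariance in this post.
import numpy as np

P = np.array([[ 0.10, -0.29, 0.02],   # nonzero cross-covariances
              [-0.29,  9.63, 0.00],
              [ 0.02,  0.00, 0.01]])
J = np.array([[1.0, 0.0, 0.0]])       # "Jacobian": selects the measured state
R = np.array([[5.0]])

S = J @ P @ J.T + R                   # 1x1 innovation covariance
K = P @ J.T @ np.linalg.inv(S)        # 3x1 gain
print(K.ravel())                      # ~[0.0196, -0.0569, 0.0039]: the rows for
                                      # z and vx are nonzero because P couples
                                      # them to x
</code></pre>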
EKF partial state update question
<p>Have a look at my Java MAVLink library: <a href="https://code.google.com/p/mavlinkjava/" rel="nofollow">https://code.google.com/p/mavlinkjava/</a> It can be used on Android, Windows, Linux and the Java RT JVM from IS2T.</p> <p>Guillaume</p>
2021
2013-11-05T07:12:11.477
|quadcopter|ardupilot|
<p>I'm using the telemetry kit from <a href="http://store.3drobotics.com/products/3dr-radio" rel="nofollow">3DR robotics</a> (433MHz) to interface with Ardupilot Mega 2.6, controlling a quadcopter. The <a href="http://planner.ardupilot.com/" rel="nofollow">Mission Planner</a> (v1.2.84) by Michael Oborne works well with the telemetry kit, transmitting flight data (IMU, compass, GPS etc.) from the quadcopter to the GCS and displaying them in their GUI.</p> <p>However, I would like to see the same data in the hyperterminal (windows system). The radio receiver on the GCS connects to my PC through a USB drive. I have tried calling the remote radio station using all possible Baud rates, starting from 110 to 921600 (including 57600). I've set the <strong>data bits to 8</strong> and <strong>stop bits to 1</strong>. <strong>'None'</strong> for <strong>parity</strong> and <strong>flow control</strong>.</p> <p>However, all that I ever get on my terminal is either gibberish or no data at all. I also tried burning this <a href="http://vps.oborne.me/3drradioconfig.zip" rel="nofollow">software</a> to the radio receiver and tried using AT commands on the radio. </p> <p>It connects OK with '+++', but keeps returning error for AT1, ATT etc.</p> <p>Please give me an idea about how to get flight data at the hyperterminal.</p> <p>Thanks.</p>
Telemetry with Ardupilot 2.6
<p>There's a lot of advancement on the software side, but there are many reasons why we don't currently see it on robots.</p> <p>First, notice that there is a huge variety of robots already built or possibly built in the future. So let's look at a couple of them:</p> <ul> <li>Manipulators: A manipulator is just an arm. You define a task for it, it does it. It's dumb, it's big and it's plugged into the wall at some factory. Do you really need AI or parallel processing for a manipulator? Does the programming language even matter? Probably not.</li> <li>Small mobile robots: a variety of robots are small, or could be small. The biggest issue on those robots is power. To process heavily, you need more power. To provide more power, you need bigger batteries. To carry bigger batteries you need stronger motors. Stronger motors are big and need bigger supports. In short, small robots have to be really energy efficient, which is why they also can't be processing too much. GPGPUs and complicated AI are usually off limits for those robots.</li> <li>Humanoids: This is probably what you have been thinking of when you asked the question. So let's continue this answer assuming you are talking about humanoids, or more generally big mobile robots.</li> </ul> <blockquote> <p>What will be the implications of GPGPU, Multi-core and "programming in the large" model in the specific case of embedded software,</p> </blockquote> <p>It would be great! Besides the fact that you need proper hardware (actual computers vs. microcontrollers), it would be great. I believe this would specifically be helpful in areas where there is a need for processing huge amounts of input data, such as vision or tactile input.</p> <blockquote> <p>and how will it influence the styles and conventions in the community?</p> </blockquote> <p>Probably not much. Software is done in modules. Someone writes an odometry module and you use it. If they change the odometry implementation underneath, you won't notice it (much). Using GPGPUs is low-level enough to be part of lower-level libraries, and often the actual users (programmers) of the robot would be unaware of it. Multi-core processing is all too easy and I think it's already widely used (you just make threads!).</p> <blockquote> <p>Functional Programming has supposedly started reaching the masses. It was late night previous weekend when I briefly skimmed through an example of functional reactive programming to solve real time problems. AI people are also suggesting that we should be programming our robots in Declarative Domain Specific languages soon. It would be nice to know the implications on the robotics community.</p> </blockquote> <p>Basically all non-joke (and even some joke) programming languages we have are Turing-complete. This means that they can all get the job done. So choosing a language is really just a matter of convenience. Currently, C is the ubiquitous language for every architecture. Everything can link to C, and C gets compiled for even the remotest microcontroller on this planet. I don't think C could be put aside.</p> <p>For higher-level programming, I don't see any reason why any language couldn't be used. If the robot is controlled by a computer (vs. a microcontroller), it can probably run anything compiled or interpreted in any language.</p> <p>Regarding real-time, probably you are misusing the word. If you are referring to an actual <a href="https://en.wikipedia.org/wiki/Real-time_computing" rel="nofollow">real-time system</a>, then probably the biggest obstacles are the tools. 
Believe it or not, there are not many good ones. I program in real-time and, trust me, there's something wrong with each of them. Still, you'd probably be forced to write in C or C++ if you want to use any of those tools.</p> <blockquote> <p>There has been a tremendous growth of frameworks like ROS and Urbi too!! That should be the region to look upon.</p> </blockquote> <p>I don't know Urbi, but I know ROS. ROS is quite famous, and for everything you make, people immediately request a ROS module. It's nice, it works, but it's not perfect. Its biggest flaw is that it can't be real-time, but it can't really be blamed for that. What I mean is, it's not a region to look upon; it's a region where people already are.</p> <blockquote> <p>Most of the robotics, embedded and high performance AI codebase directly depends on C/C++, and though languages like Rust and D are bubbling up, wouldn't it take massive amount of time to adopt the new languages, if ever adaptation begins?</p> </blockquote> <p>It sure will. As for embedded systems, I doubt it would ever change away from C, and definitely not to a functional language. C is quite low-level yet very high-level. It's at the exact sweet spot for embedded programming. You can be portable yet have access to all the hardware.</p> <p>Embedded programming is in direct violation of most of what's important in a functional language. The fact that you are working with hardware means that the functions can't be pure (in Haskell, they would be IO). So what would be the benefit of functional programming if most functions are not pure and their results can't be cached? Besides, it's not even practical to load a whole interpreter of a functional language into a tiny microcontroller and hope to be able to use all sorts of dynamic caching during runtime to compensate for the lack of non-functional language structures.</p> <p>In short, for microcontrollers, functional languages simply aren't a good fit. For "high performance AI codebases", there is again no reason why they can't be used. It's just that people have to pick up one of those languages and start using it. The biggest problem, though, is that people are lazy and don't like change.</p> <blockquote> <p>Correct me, but it seems like a lot of time has passed and there are not many major production results from the AI community. I've heard about cognitive architectures of old like ACT-R and 4CAPS. They seem to be in hibernation mode!</p> </blockquote> <p>They are not in hibernation mode. The nature of the research has changed. Take a look at <a href="https://en.wikipedia.org/wiki/Artificial_intelligence#History" rel="nofollow">a very brief history of AI on Wikipedia</a>. Whereas in the 80s you would see, for example, a lot of algorithms developed for solving NP-complete problems, now you see artificial intelligence, for example, in the Kinect, which you usually don't think of as "intelligent".</p> <p>There is also a lot of far more complicated research going on, such as cognitive science and others. My personal feeling is that they are still maturing and not yet at a stage where they could be widely used. Either that, or I have just been ignorant about it myself.</p> <blockquote> <p>There seems to be a lot of work lately on otherwise intelligent systems (solved problems) like Computer vision and Data mining, but these problems cater to supercomputing and industrial crowd more. Could there be any possible shift towards low powered systems soon?</p> </blockquote> <p>Could there? Yes. Would there? Probably not. 
Like I said before, heavy processing requires power, and power is very precious on a (mobile) robot. There would, however, definitely be a middle ground used in robotics. Perhaps not a super-complete AI, but a dumbed-down, simpler one (though still more intelligent than current ones) would be placed on robots.</p>
2022
2013-11-05T11:10:07.917
|software|artificial-intelligence|programming-languages|embedded-systems|
<p>This is my first question on this site, might be a little subjective :)</p> <p>There is an ongoing process of many cool cyclonic changes of technology in the electronics and software industry.</p> <p><strong>Concurrency and Parallelism</strong></p> <p>What will be the implications of GPGPU, Multi-core and "programming in the large" model in the specific case of embedded software, and how will it influence the styles and conventions in the community?</p> <p>Single board multicore hardware like soon to be released Parallela can be an example?</p> <p><strong>Programming language research</strong></p> <p>The results have been excellent. Functional Programming has supposedly started reaching the masses. It was late night previous weekend when I briefly skimmed through an example of functional reactive programming to solve real time problems. AI people are also suggesting that we should be programming our robots in Declarative Domain Specific languages soon. It would be nice to know the implications on the robotics community. </p> <p>There has been a tremendous growth of frameworks like ROS and Urbi too!! That should be the region to look upon.. </p> <p>Most of the robotics, embedded and high performance AI codebase directly depends on C/C++ , and though languages like Rust and D are bubbling up, wouldn't it take massive amount of time to adopt the new languages, if ever adaptation begins? </p> <p><strong>AI</strong></p> <p>Correct me, but it seems like a lot of time has passed and there are not many major production results from the AI community. I've heard about cognitive architectures of old like ACT-R and 4CAPS. They seem to be in hibernation mode! </p> <p>There seems to be a lot of work lately on otherwise intelligent systems (solved problems) like Computer vision and Data mining, but these problems cater to supercomputing and industrial crowd more. Could there be any possible shift towards low powered systems soon?</p> <p>Thanks</p>
How will the currently evaluated computer technology influence robotics and embedded systems in the foreseeable future?
<p>Of course this can be done; however, the important bit isn't really the netbook here. The important bit is the type of interface you decide to use and the command language you code in to control that interface.</p> <p>Your netbook probably has USB and possibly serial, but you need a way to connect these to relays and voltage-control devices.</p> <p>X10 or NexTag produce a range of USB-compatible interfaces and provide a controller language. They offer a very simple, beginner's route to home automation and control.</p> <p>For more fun, you could build your own serial interface board, which converts the signals from the 9 serial pins to useful output, but this is more of an electronics project, and not for the faint-hearted :-)</p>
2024
2013-11-06T00:19:13.383
|raspberry-pi|python|
<p>I have an old beat-up netbook that is currently collecting dust. I've also only taken stuff apart, without having to worry about putting it back together, so please bear with my possibly stupid questions. </p> <p>a) I imagine it's possible to wire this baby up to servos, breadboards, and all that good stuff. Am I correct? </p> <p>b) I'd like to start with some simple Raspberry Pi-like projects (think automating my irrigation system, feeding the dog from work, etc). Obviously barring the energy expenditure, wouldn't a netbook be more apt than a Raspberry Pi for handling this type of thing?</p> <p>c) I have basic Python experience, but I wouldn't mind picking up more as I go. Would that be sufficient?</p> <p>Cheers!</p>
Roboticize an old netbook?
<p>I have been looking for the same thing. I messed around in the IDE and found that you can simply write the statements for the two actions one after another inside the same function; they will then execute back to back on every pass through the loop.</p> <p>Example:</p> <pre><code>void loop()
{
  Serial.print("Hello");
  Serial.print("World");
}
</code></pre> <p>Put the semicolons where I placed them in the code.</p>
2028
2013-11-06T12:28:56.787
|arduino|stepper-motor|c|
<p>I've got code where a motor runs back and forth, and buttons are connected to a scanner. When I press the buttons, the motor stops; the button handling overrides it. I would like the two tasks to run in parallel so the pieces of code don't interrupt each other.</p> <p>Here is my code:</p> <pre><code>#include &lt;AccelStepper.h&gt;

// Define some steppers and the pins they will use
AccelStepper stepper1(AccelStepper::FULL2WIRE, 2, 3);

const int buttonPin = 4;
const int button2Pin = 14;
const int pulseopto1 = 9;
const int startScan = 11;

int buttonState = 0;
long previousMillis = 0;
long interval = 5;

void setup()
{
  pinMode(buttonPin, INPUT);
  pinMode(button2Pin, INPUT);
  pinMode(pulseopto1, OUTPUT);
  pinMode(startScan, OUTPUT);

  stepper1.setMaxSpeed(40000.0);
  stepper1.setAcceleration(100000.0);
  stepper1.moveTo(25000);
}

void loop()
{
  buttonState = digitalRead(buttonPin);
  if (buttonState == LOW) {
    digitalWrite(startScan, HIGH);
  }
  else {
    digitalWrite(startScan, LOW);
  }

  buttonState = digitalRead(button2Pin);
  if (buttonState == LOW) {
    // turn LED on:
    digitalWrite(pulseopto1, HIGH);
    delay(5);
    digitalWrite(pulseopto1, LOW);
    delay(5);
  }
  else {
    // turn LED off:
    digitalWrite(pulseopto1, LOW);
  }

  // Change direction at the limits
  if (stepper1.distanceToGo() == 0) {
    stepper1.moveTo(-stepper1.currentPosition());
  }
  stepper1.run();
}
</code></pre>
Is it possible to run multiple loops at the same time? (Arduino)
<p>Well, you <em>can't</em> estimate the covariance from the state. You need the equations used to <em>find</em> the state. </p> <p>This is because the covariance (along with the Markov assumption) represents <em>how</em> the robot got to the state. A robot at location (10,5,1) does not always have the same covariance matrix, right? You need to know what actions brought it to that state.</p> <p>Barring any ROS-based solution (are you sure the SLAM / EKF nodes can't do this for you?) you'll have to do this:</p> <ol> <li>Find the kinematics model for the robot (differential drive, Ackermann, whatever)</li> <li>Read in the odometry and control command, and use the kinematics equations and control inputs as the <em>u</em> and <em>f()</em> for the EKF (using <a href="https://en.wikipedia.org/wiki/Extended_Kalman_filter#Discrete-time_predict_and_update_equations" rel="nofollow">this</a> as a reference)</li> <li>Find the Jacobian of the kinematics equations with respect to each state element</li> <li>Feed the Jacobian and control / odometry into the EKF as the <em>F</em> term (using <a href="https://en.wikipedia.org/wiki/Extended_Kalman_filter#Discrete-time_predict_and_update_equations" rel="nofollow">this</a> as a reference)</li> </ol> <p>In case you are wondering, yes this effectively reproduces the work done by the /odom node, since the output of the kinematics equations is precisely what /odom reports. </p>
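<p>As a concrete (hypothetical) sketch of steps 1-4 for a differential-drive robot with state $[x, y, \theta]$; the process-noise values are placeholders to be tuned, and the velocity inputs merely echo the sample in the question:</p> <pre><code># Hypothetical EKF predict step for a differential-drive robot with state
# [x, y, theta]. v (linear) and w (angular) would come from /odom or the
# control commands; Q is a made-up process noise to be tuned for the robot.
import numpy as np

def predict(x, P, v, w, dt, Q):
    """x' = f(x, u); P' = F P F^T + Q (steps 2-4 of the answer above)."""
    theta = x[2]
    x_new = x + np.array([v * np.cos(theta) * dt,
                          v * np.sin(theta) * dt,
                          w * dt])
    F = np.array([[1.0, 0.0, -v * np.sin(theta) * dt],   # Jacobian of f
                  [0.0, 1.0,  v * np.cos(theta) * dt],   # w.r.t. the state
                  [0.0, 0.0,  1.0]])                     # (step 3)
    return x_new, F @ P @ F.T + Q

x, P = np.zeros(3), 0.01 * np.eye(3)
Q = np.diag([1e-4, 1e-4, 1e-5])                 # placeholder noise values
x, P = predict(x, P, v=0.104, w=0.0663, dt=0.05, Q=Q)
print(P)                                        # a nonzero covariance to publish
</code></pre>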
2031
2013-11-07T06:17:38.840
|localization|ros|odometry|
<p>In <strong>ROS</strong> I've recorded a bag file from a custom robot (in the real world) that does not provide a covariance matrix, and I want to use <code>/odom</code> to feed an <strong>EKF</strong>, but the covariance matrix is all zeros. How can I calculate it?</p> <p><strong>Note</strong>: The covariance matrix is needed by the EKF to estimate the position.</p> <p>Here's a sample of <code>/odom</code>:</p> <pre><code>pose: 
  pose: 
    position: 
      x: 0.082
      y: 0.507
      z: 0.0
    orientation: 
      x: 0.0
      y: 0.0
      z: -0.789272088731
      w: 0.614043622188
  covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
twist: 
  twist: 
    linear: 
      x: 0.104
      y: 0.0
      z: 0.0
    angular: 
      x: 0.0
      y: 0.0
      z: 0.0663225115758
  covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
</code></pre>
Calculate covariance matrix from x,y,z data
<h2>scope probe</h2> <p>I can sympathize with how frustrating it is when attaching an oscilloscope probe to debug a problem makes the system act differently.</p> <p>A standard 1X probe (1:1 probe) will load a circuit with a capacitance of about 110 pF. Many people use a 10X probe or a 100X probe, giving 1/10 or 1/100 the loading on the circuit -- hopefully small enough to be insignificant.</p> <h2>ISO1050</h2> <blockquote> <p>a pull-up resistor on the TX of CAN. ... the shape of the signal suggests a pull-up resistor may be necessary, but I haven't seen the need for that in any datasheet I found.</p> </blockquote> <p>I think this is it. Page 4 of the <a href="http://www.ti.com/lit/ds/slls983h/slls983h.pdf" rel="nofollow">datasheet for the ISO1050</a> specifically says</p> <blockquote> <p>NOTE</p> <p>TXD is very weakly internally pulled up to VCC1. An external pull up resistor should be used ... An adequate external pullup resistor must be used to ensure that the TXD output of the microprocessor maintains adequate bit timing input to the input on the transceiver.</p> </blockquote> <h2>PIC18F45K80</h2> <p>I'm assuming you're using the Microchip PIC18F45K80. The <a href="http://ww1.microchip.com/downloads/en/DeviceDoc/39977f.pdf" rel="nofollow">PIC18F66K80 family datasheet</a> has a lot of useful information about the Microchip PIC18F45K80. In particular, I think you might find section 27.0 ECAN MODULE useful. Subsection 27.2.5 mentions ENDRHI: Enable Drive High bit, where</p> <ul> <li>ENDRHI = 1: CANTX pin will drive VDD when recessive </li> <li>ENDRHI = 0: CANTX pin will be tri-state when recessive</li> </ul> <p>Subsection 27.3.1 CONFIGURATION MODE implies that ENDRHI (and all other configuration bits) must be set in configuration mode, and cannot be changed while the CAN module is active.</p> <p>I suspect that <em>either</em> the pull-up resistor <em>or</em> setting ENDRHI to 1 would be adequate. However, if I were you, I would run some tests with <em>both</em> a 10 KOhm pull-up resistor <em>and</em> ENDRHI set to 1.</p>
2036
2013-11-08T17:49:30.283
|microcontroller|can|
<p>I'm dealing with a board that no matter what I do I can't seem to make CAN work over 125&nbsp;kbit/s. I'll give some detail about the board on the bottom, but I'm going to keep this question generic.</p> <p>First of all, regarding hardware. From what I've gathered, there isn't any need for a <a href="https://en.wikipedia.org/wiki/Pull-up_resistor" rel="nofollow">pull-up resistor</a> on the TX of CAN. Is that correct? It may perhaps be chip-specific, but wherever I see, it seems that the TX/RX lines are directly connected to the transceiver.</p> <p>Second, regarding bit-timing: Using different calculators, for example, <a href="http://www.kvaser.com/en/support/bit-timing-calculator.html" rel="nofollow">Kvaser</a> or <a href="http://www.intrepidcs.com/support/mbtime.htm" rel="nofollow">the one from Microchip</a>, I can see the following configuration (for 64&nbsp;kHz input clock):</p> <pre><code> SYNC PROP PHASE1 PHASE2 BRP (prescaler) 125 kbit/s 1 1 3 3 32 250 kbit/s 1 1 3 3 16 500 kbit/s 1 1 3 3 8 1000 kbit/s 1 1 3 3 4 </code></pre> <p>I've seen this from more than one source. Furthermore, the numbers fit to the formula in the datasheet of the microcontroller.</p> <p>However, only the configuration for 125&nbsp;kbit/s works for me. I'm using <a href="http://www.esd-electronics-usa.com/esd-electronics-usa/canreal.htm" rel="nofollow">CANreal</a> to monitor the messages.</p> <p>I've tried different configurations for the CAN, for example with 16 time quanta instead of 8 as well as changing my microcontroller's clock to 16&nbsp;MHz and using again different values. Regardless of all that, speeds higher than 125&nbsp;kbit/s result in only errors and warnings in CANreal (which are taken from the CAN driver). Note that the same CAN board, driver and software works with 1&nbsp;Mbit/s with some other hardware I have.</p> <p>This all is made harder since, as soon as I put a probe from my oscillator on the TX line, it becomes a continuous 0-1 alteration like the following:</p> <pre><code> __------ __------ __------ __------ __------ / | / | / | / | / | / | / | / | / | / | / | / | / | / | / | | | | | | | | | | | | |_| |_| |_| |_| | </code></pre> <p>Which is not something I would be outputting by software. In fact, as soon as I remove the probe, the messages start arriving (again, only at 125&nbsp;Mbit/s). So basically, I don't seem to be able to have any oscillator debugging available.</p> <p>Back to my "first of all, regarding hardware", the shape of the signal suggests a pull-up resistor may be necessary, but I haven't seen the need for that in any datasheet I found. Furthermore, my microcontroller configures the pin when used as CAN, so I don't have control over making it push-pull (since it looks like it's <a href="http://en.wikipedia.org/wiki/Open_collector#MOSFET" rel="nofollow">open-drain</a>). Not to mention the microcontroller doesn't even have a configuration to make the pin push-pull.</p> <p>Is there any hidden parameter somewhere that should also be set? Is a pull-up resistor necessary after all? Why would the oscillator probe cause such a behavior?</p> <hr> <p>Details from the board:</p> <ul> <li><a href="http://en.wikipedia.org/wiki/Microcontroller" rel="nofollow">MCU</a>: P18F45K80. CAN is connected to its default RB2 and RB3.</li> <li>CAN transceiver: ISO1050 </li> <li>Compiler: <a href="https://en.wikipedia.org/wiki/Mikroelektronika#Products" rel="nofollow">mikroC</a></li> </ul>
Making high CAN baud rates work
<p>Using a correct power supply for stepper motors or servos is difficult; you cannot select one based on voltage and amperage alone. The biggest criterion is the so-called over-voltage protection circuitry in most power supplies. Over-voltage protection circuitry is marketing/sales hype, and misleading: it does not sense over-voltage, it senses fast current change, which generates an internal voltage that turns off the power supply. Turning off does protect the power supply, but it causes any motor to stop, stall, or miss steps. This turnoff lasts 2-3 seconds, which is devastating for any machine operation.</p> <p>The best power supply is one without over-voltage protection circuitry. Remember, it is not really over-voltage protection; this circuitry is actually a greater cause of poor motor performance than any other factor.</p> <p>Example: connect a 24 V, 10 A power supply to a NEMA 34 stepper motor. If you set the torque high, and thus the amperage high (say 3 A total from the supply, with two windings on at the same time), and start the motor, the power supply voltage will drop within 200 ms to around 4 V, then rise again after 3 seconds, with the LED blinking. You will never see an over-voltage; the voltage never rises above 24.5 V. Yet the power supply shuts off. At Excitron, we have analyzed this situation for over 18 years. One solution is to set the brake for about 20% of the load, so that the power supply is slightly loaded at about 0.5 A. Most power supplies cannot handle a 0 to 3 A load step without shutting off, but will not shut off if loaded, say, from 0.5 A to 3 A. Note that 3 A is much less than 1/3 of the supply's rated output of 10 A, which has a surge capacity of about 13 A. It depends on the power supply, but you get the idea.</p> <p>CUI makes a robust enclosed 24 V, 72 W supply that has no over-voltage protection circuitry. This supply is amazing: it never shuts down due to rapid amperage changes.</p> <p>Therefore, almost all power supplies do not work well with high-amperage motors.</p>
2042
2013-11-10T08:53:08.697
|stepper-motor|power|stepper-driver|
<p>I am building a machine and need 2 stepper motors for it. The motors are driven by a 3.3 V ARM device.</p> <p>I have made the following selections regarding the stepper motor, stepper motor driver and the power supply.</p> <p><a href="http://www.circuitspecialists.com/12-volt-3.5-amp-power-supply.html" rel="nofollow">Power Supply: 12 Volt Power Supply - 3.5 Amp Single Output</a></p> <p><a href="http://www.pololu.com/product/1200" rel="nofollow">Stepper Motors: Stepper Motor: Unipolar/Bipolar, 200 Steps/Rev, 42×48mm, 4V, 1.2 A/Phase</a></p> <p><a href="http://www.pololu.com/product/2133" rel="nofollow">Stepper Motor Driver: DRV8825 Stepper Motor Driver Carrier, High Current</a></p> <p>I tried my best to research the compatibility and came up with these.</p> <p>Is this a good selection, considering that the power supply will be driving two of these motors?</p> <p>I will be running the motors at 1/16 step for high resolution. As far as speed is concerned, it's going to be pretty slow, but they will be running continuously for hours on end. Basically, what I am trying to do here is make a V-plotter. As far as I can tell, there will be loads of start-stop motion in the motors, though.</p>
Choosing correct power supply for Stepper Motors
<p>The weight of just the 6.6 pound laptop is already 32% over capacity, so you may need to consider lightening the load. If not then you risk lower performance, higher stresses on the robot and possibly even burning out one or more motors under stress testing.</p> <p>On the matter of battery life, according to the <a href="http://www.irobot.com/filelibrary/pdfs/hrd/create/Create%20Manual_Final.pdf" rel="nofollow">iRobot Create Manual</a>:</p> <blockquote> <p>Alkaline batteries will power the robot (on a hard floor with no payloads or attachments) for up to 1.5 hours if the robot is constantly moving.</p> </blockquote> <p>So this is less than half the run time of the APS battery:</p> <blockquote> <p>The iRobot Create robot (without a payload or other attachments) will run for up to 3.5 hours with a fully charged APS battery.</p> </blockquote>
2045
2013-11-11T01:09:54.447
|mobile-robot|irobot-create|
<p>I'm interested in getting a Create for a project I'll be working on, and wanted some information about it from somebody who already has one:</p> <ol> <li><p>How much weight can it safely carry? I talked with iRobot's tech support and they told me the maximum is 5 lb, but searching on the internet it seems like this limit is actually not as strict as it appears to be. I'm asking because I'd need to put a 3 kg laptop on top of it, which would mean ~3.5-4 kg if you also consider the Kinect and eventual supports for both. I guess I could use a netbook and send the data I need to another computer, but I wanted to avoid the additional overhead of the wireless link.</p></li> <li><p>For how long does it run on AA batteries? I'm inclined not to get the battery pack, since I'd be using the robot in Europe, so I'd also need a transformer if I went with the battery pack option.</p></li> </ol> <p>Thanks!</p>
Questions about iRobot Create
<p>The <code>update_motors()</code> method looks like it implements a constant-step controller: basically, it increases the PWM if the motor is too slow and decreases it if it is too fast. What I also see is that the calculated error involves multiplying the set point <code>req_speed</code> by 10, which exaggerates the error and can definitely cause oscillations. Also, there is no tolerance value 'close to zero' for the error, so it will always oscillate. Finally, I don't see any feed-forward logic; estimating the initial PWM when the setpoint changes from zero can make a big difference in how fast the controller converges on the setpoint. My <a href="https://github.com/Ezward/Esp32CameraRover2" rel="nofollow noreferrer">project</a> also uses a constant-step controller for speed control and implements feed-forward; it works pretty well. You will still need lateral control in order to do a good job of holding a line, as any initial speed differential between the wheels can create an angular offset that will not be corrected by simple speed control. See that project's go-to-goal behavior for an example.</p>
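<p>To make those three points concrete, here is a minimal Python sketch (mine, not from the linked project; every constant is a placeholder to be tuned) of a constant-step controller with a deadband and a feed-forward initial guess:</p> <pre><code># Sketch of a constant-step speed controller with deadband and feed-forward.
# All constants are placeholders; none of this is from the linked project.
FEED_FORWARD_GAIN = 2.0   # assumed PWM counts per unit of requested speed
DEADBAND = 5              # error tolerance: inside it, hold the PWM steady
STEP = 1                  # constant correction step per control period

def update_pwm(pwm, req_speed, cur_speed, was_stopped):
    if was_stopped and req_speed != 0:
        pwm = FEED_FORWARD_GAIN * abs(req_speed)   # feed-forward initial guess
    error = abs(req_speed) - cur_speed
    if error &gt; DEADBAND:
        pwm = min(pwm + STEP, 255)     # too slow: nudge the PWM up
    elif error &lt; -DEADBAND:
        pwm = max(pwm - STEP, 0)       # too fast: nudge the PWM down
    return pwm                         # inside the deadband: no change

print(update_pwm(0, 20, 0, was_stopped=True))    # 41.0: feed-forward plus one step
</code></pre>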
2048
2013-11-11T07:09:21.587
|arduino|motor|pwm|
<p>I am trying to get precise control over the speed of a Rover 5-based robot. It has four PWM-controlled motors and four optical quadrature encoders. I am using a <a href="https://www.sparkfun.com/products/11593" rel="nofollow">4-channel motor controller</a> with the <a href="https://www.sparkfun.com/products/10336" rel="nofollow">Rover 5 chassis</a>. I am using an Arduino Nano for control. I am able to read the encoder INT output and change the PWM based on pulse width to control the speed. But, as a result, I am getting heavy oscillations in the control output. That makes the robot move in steps, as the PWM is changing constantly. I need an algorithm that can minimize this ringing and give a smoothly moving robot. Here is my Arduino code snippet.</p> <pre><code>void setup()
{
  Serial.begin(9600);
  init_motors();
  init_encoders();

  req_speed[0] = 20;
  req_speed[1] = 20;
  req_speed[2] = 20;
  req_speed[3] = 20;
}

void loop()
{
  update_encoders();
  update_motors();
}

void update_motors()
{
  int i, err;
  unsigned long req_width;
  if(micros() - mtime &gt; 2999) {
    mtime = micros();
    for(i=0; i&lt;4; i++) {
      digitalWrite(pins_dir[i], req_speed[i]&gt;0);
      if(mtime - change_time[i] &gt; 50000ul &amp;&amp; req_speed[i] != 0) {
        cur_pwm[i] += 5;
      }
      if(req_speed[i] &gt; 0)
        cur_err[i] = req_speed[i]*10 - cur_speed[i];
      else
        cur_err[i] = (-req_speed[i]*10) - cur_speed[i];
      if(cur_err[i] &gt; 0 &amp;&amp; cur_pwm[i] &lt; 255) {
        cur_pwm[i]++;
      }
      else if(cur_err[i] &lt; 0 &amp;&amp; cur_pwm[i] &gt; 0) {
        cur_pwm[i]--;
      }
      analogWrite(pins_pwm[i], cur_pwm[i]);
    }
  }
}

void update_encoders()
{
  int i;
  unsigned long w;
  enc_new = PINC &amp; B00001111;
  unsigned long etime = micros();
  for (i=0; i&lt;4; i++) {
    if((enc_old &amp; (1 &lt;&lt; i)) &lt; (enc_new &amp; (1 &lt;&lt; i))) {
      w = (unsigned long)(((etime - change_time[i])));
      pulse_width[i] = (w + pulse_width_h1[i] + pulse_width_h2[i])/3;
      pulse_width_h2[i] = pulse_width_h1[i];
      pulse_width_h1[i] = pulse_width[i];
      change_time[i]=etime;
      pulse_count[i]++;
      cur_speed[i] = (3200000ul / pulse_width[i]);
    }
  }
  enc_old=enc_new;
}
</code></pre> <p>Here req_speed is between -100 and 100, where the sign indicates direction. Please consider all undefined variables to be globals. I experimentally measured that, when a motor is running at full speed, the pulse width is around 3200us.</p> <p>Encoders' INT outputs (XOR of A and B) are connected to A0 thru A3. Motor PWM is connected to D3, D5, D6, D9. Please suggest any improvements to this code and advise me about what I am missing here.</p>
Encoder based speed control for Rover 5
<p>A switch does not make a clean low-to-high or high-to-low transition; instead the signal looks something like this. This is called switch bounce. <img src="https://i.stack.imgur.com/DsQYX.png" alt="enter image description here"> image source: piconix.com</p> <p>Just to add to the answers here, there are two commonly used methods to avoid switch bounce:</p> <ol> <li><p>Using a capacitor with the switch, called RC debouncing. The capacitor tries to smooth out the transition, thus avoiding voltage spikes. This method works fine, but there is still a chance that some big voltage spikes get through to the Arduino, and the Arduino senses them as button presses (false button presses). So it is not a 'sure' method of avoiding bounce. Also, additional hardware is required for this method. </p></li> <li><p>Software debouncing: a better method, with which you can almost entirely remove the problem of switch bounce. In software, too, there are two majorly used methods: one is called polling and the other is the interrupt method. As Chris wrote, polling with a delay of 10-50 ms (depending on the quality of the switch you are using) can avoid switch bounce. Polling is used in simple projects where time is not a very crucial aspect of project functioning, and it is easy to implement. On the other hand, using an interrupt can be a little painstaking for beginners, but it is a better way, because the normal functioning of the code is not halted while the debouncing is done (unlike polling, where we use a delay and stop the processor from doing anything in that duration of time). There can be many ways to do it. </p></li> </ol> <p>Here is pseudo code that I often use to avoid switch bounce:</p> <pre><code>//If the button is pressed, an interrupt, say, INTERRUPT1 is triggered
//Following is the interrupt service routine for INTERRUPT1:

ISR(INTERRUPT1)
{
    //do something you wanted to do on button press
    disable INTERRUPT1      //so that further button spikes don't trigger it
    enable, say, INTERRUPT2 Timer
    INTERRUPT2 Timer is set to interrupt after, say, 10ms
}

//Following is the interrupt service routine for INTERRUPT2:

ISR(INTERRUPT2)
{
    disable INTERRUPT2 Timer
    enable INTERRUPT1       //button press can be recognized now, after 10ms have elapsed
}
</code></pre>
2063
2013-11-14T20:32:20.550
|arduino|c|serial|
<p>I am trying to count and print the number of times a push button is pressed. But every time I press the button, the counter increments by sometimes 3, sometimes 5, and sometimes the count starts above 100 and keeps going.</p> <p>I referred to <a href="http://arduino.cc/en/Tutorial/Button" rel="nofollow">this link</a> for wiring the push button to the Arduino,</p> <p>and here is my code:</p> <pre><code>const int buttonPin = 2;     // the number of the pushbutton pin
const int ledPin = 13;       // the number of the LED pin

// variables will change:
int buttonState = 0;         // variable for reading the pushbutton status
int count = 0;

void setup() {
  // initialize the LED pin as an output:
  pinMode(ledPin, OUTPUT);
  // initialize the pushbutton pin as an input:
  pinMode(buttonPin, INPUT);
  Serial.begin(9600);
  buttonState = digitalRead(buttonPin);
  digitalWrite(ledPin, LOW);
}

void loop(){
  // read the state of the pushbutton value:
  buttonState = digitalRead(buttonPin);

  // check if the pushbutton is pressed.
  // if it is, the buttonState is HIGH:
  if (buttonState == HIGH) {
    // turn LED on:
    digitalWrite(ledPin, HIGH);
    count = count + 1;
    Serial.print(count);
  }
  else {
    // turn LED off:
    digitalWrite(ledPin, LOW);
  }
}
</code></pre> <p>I don't know why the count increases so erratically and unevenly.</p>
Unable to read pushbutton press properly in Arduino
<p>I must agree with the other two answers, however the main issue is that you do not have enough voltage into your regulator (I see from your comment to Ian that you are using a Pololu D15V35F5S3 Regulator). If you refer to the <a href="http://www.pololu.com/product/2110" rel="nofollow noreferrer">Pololu D15V35F5S3 Product Description</a>, down at the bottom you will find the following graph:</p> <p><img src="https://i.stack.imgur.com/ugMfs.jpg" alt="enter image description here"></p> <p>Looking at the red line for 5V output: Note for all currents greater than zero, the dropout voltage is greater than 1V. (The minimum input voltage necessary to achieve 5V output is 5V + dropout voltage.) The more current used by your 5V loads (Pi), the greater the dropout. The problem is compounded by any voltage drop in your 6V source due to current surges (see Ian's answer).</p> <p>You either need a higher input voltage, a lower dropout regulator (this may be difficult and insufficient), a different regulator (buck-boost), or a different power source for the Pi.</p>
2068
2013-11-15T12:58:46.770
|raspberry-pi|electronics|esc|
<p>I have an <a href="http://en.wikipedia.org/wiki/Radio-controlled_model" rel="nofollow noreferrer">RC</a> car. The battery provides power to the <a href="https://en.wikipedia.org/wiki/electronic_speed_control" rel="nofollow noreferrer">ESC</a> and then the ESC provides 6 V back out to the receiver. Instead of the receiver I have a Raspberry Pi, which uses the 6 V, steps it down to 5 V and provides power to the Raspberry Pi.</p> <h3>The problem</h3> <p>Every time we go full power*, there is a lack of voltage and the Raspberry Pi seems to hard reset.</p> <p><em>* By full power we mean direct to 100% and not ranging from 0-100</em></p> <p>I am not an expert in electrical circuits, but some of the suggestions are to use a capacitor to provide the missing 5 V in the interim. How do I prevent the Raspberry Pi from dying in the event of full power?</p>
My Raspberry Pi is losing power in a surge
<p>Problem solved. It turns out that the pin assignment on my relay is different from that in the book. </p> <p>Here's the schematic for my actual relay (notice that the pin assignment is CO,NC,NO instead of NC,CO,NO as in the book)</p> <p>Schematic of my actual relay:</p> <p><img src="https://i.stack.imgur.com/u7HMx.jpg" alt="OMRON G5V-2"></p>
2074
2013-11-17T06:08:31.293
|electronics|
<p>I could swear that it was working for a while. I got back to my desk, tried it again, and it's no longer working. Could I have fried the NO pins on both sides? This is a DPDT relay. Everything works normally on the NC pins. I have never applied more than 5V. I do hear the relay click when I apply 5V to the coil. But when I measure voltage on the NO pins, I get 0V. Has anyone else seen this? I have two of these relays and I can't seem to get voltage on the NO pins with either relay. I should clarify that I'm expecting the same 5V power source to power both the coil and the common pins. If the NC pins work then I don't see why the NO pins shouldn't. In both cases the 5V is shared between the coil and any load attached to the NC/NO pins. I did try driving the entire circuit off a 9V power supply, but that did not change the results (and that does contradict my earlier statement that I've never applied more than 5V to this relay). My circuit is based on Charles Platt's "Make: Electronics", p. 59.</p> <p>Here's a pic of the schematic I am following, except that I am using a 5V relay and a 5V power supply (USB port) and I am using piezo buzzers without resistors instead of LEDs.</p> <p><img src="https://i.stack.imgur.com/nYbnL.jpg" alt="enter image description here"></p>
Omron G5V-2 relay NO pins not working
<p>It got solved automatically, and that gave me a clear hint: when I was powering the board with the 9 V, 1 A adaptor, the Arduino was getting into a kind of hung state, probably because of heat. Now that I power it over the USB cable, it works fine.</p>
2083
2013-11-20T14:42:44.193
|arduino|c|serial|communication|
<p>I am using an Arduino Uno to read a push button every time it is pressed. Earlier I was simply reading the digital I/O pin to get the count, and I ran into switch debounce, about which I had asked a <a href="https://robotics.stackexchange.com/questions/2063/unable-to-read-pushbutton-press-properly-in-arduino">question here</a>. I learned that I should use an interrupt instead of reading a digital I/O pin, but even after using an interrupt I was still facing the problem of switch debouncing.</p> <p>So I used <a href="http://www.arduino.cc/en/Tutorial/Debounce" rel="nofollow noreferrer">this link</a> and the code given there:</p> <pre><code>const int buttonPin = 2;     // the number of the pushbutton pin
const int ledPin = 13;       // the number of the LED pin

// Variables will change:
int ledState = HIGH;         // the current state of the output pin
int buttonState;             // the current reading from the input pin
int lastButtonState = LOW;   // the previous reading from the input pin

// the following variables are long's because the time, measured in miliseconds,
// will quickly become a bigger number than can be stored in an int.
long lastDebounceTime = 0;   // the last time the output pin was toggled
long debounceDelay = 50;     // the debounce time; increase if the output flickers

void setup() {
  pinMode(buttonPin, INPUT);
  pinMode(ledPin, OUTPUT);

  // set initial LED state
  digitalWrite(ledPin, ledState);
}

void loop() {
  // read the state of the switch into a local variable:
  int reading = digitalRead(buttonPin);

  // check to see if you just pressed the button
  // (i.e. the input went from LOW to HIGH), and you've waited
  // long enough since the last press to ignore any noise:

  // If the switch changed, due to noise or pressing:
  if (reading != lastButtonState) {
    // reset the debouncing timer
    lastDebounceTime = millis();
  }

  if ((millis() - lastDebounceTime) &gt; debounceDelay) {
    // whatever the reading is at, it's been there for longer
    // than the debounce delay, so take it as the actual current state:

    // if the button state has changed:
    if (reading != buttonState) {
      buttonState = reading;

      // only toggle the LED if the new button state is HIGH
      if (buttonState == HIGH) {
        ledState = !ledState;
      }
    }
  }

  // set the LED:
  digitalWrite(ledPin, ledState);

  // save the reading. Next time through the loop,
  // it'll be the lastButtonState:
  lastButtonState = reading;
}
</code></pre> <p>and changed</p> <pre><code> long debounceDelay = 50; </code></pre> <p>to 10 (meaning: accept readings with a time gap of 10 milliseconds), as the code describes. Now what happens is: the code runs on the board, and after some time the board hangs and the LED stops toggling on any press of the push button, so I have to manually reset the board. I should also add that I am using the serial port whenever the LED toggles or the switch is pressed.</p> <p>I am totally confused about why this is happening. One possibility is that this is happening because I reduced the time gap between two consecutive events from 50 to 10 milliseconds, and that might be making the AVR hang and thus require a manual reset.</p>
Arduino Uno hangs while running small switch debounce and Serial print code
<p>Before you run rviz, you need to set use_sim_time to true:</p> <pre><code>rosparam set use_sim_time true
</code></pre> <p>Now start up rviz, then run rosbag using --clock:</p> <pre><code>rosbag play bagfile_name.bag --clock
</code></pre> <p>For setting up rviz with the navigation stack, you will want to follow <a href="http://wiki.ros.org/navigation/Tutorials/Using%20rviz%20with%20the%20Navigation%20Stack" rel="nofollow">this tutorial</a>.</p>
2084
2013-11-20T16:25:12.700
|localization|ros|
<p>I have a bag file that contains a couple of topics needed for localization: odometry data, kinect data and <code>/tf</code>. What I want is to watch the robot's movement path in <code>rviz</code> after initializing the robot's position (though I don't even know how to initialize it). Any help?</p> <p>All topics:</p> <pre><code>/scan
/tf
/clock
/map
/odom
</code></pre>
Fake localization using bag file in ROS
<p>After talking to a friend, I found out what I did wrong.</p> <ol> <li>I needed to include all of the #include libraries.</li> <li>I had downloaded one of the libraries as HTML instead of the raw .h format, which was messing up the program, so I downloaded the correct version.</li> </ol> <p>The program is now working and collecting data well!</p> <p>If you find your readings to be off, change the update delay from 500ms to 20ms and the readings will drastically improve.</p>
2089
2013-11-24T06:28:48.743
|arduino|sensors|gyroscope|
<p>I recently purchased an EY-80 from Electrodragon: <a href="http://www.electrodragon.com/?product=all-in-one-9-axis-motion-sensor-gyroscope-accelerometer-magnetometer-barometer" rel="nofollow noreferrer">EY-80 All in one 9-Axis Motion Sensor (Gyro + Acceler + Magneto + Baro)</a></p> <p>I am having a hard time compiling the <a href="https://github.com/Edragon/Arduino_sketch/wiki" rel="nofollow noreferrer">example code</a> on my Arduino:</p> <p><img src="https://i.stack.imgur.com/C7xKr.png" alt="enter image description here"></p> <p>This is what is happening. So far, I am only copying and pasting the code. Any help? (I am somewhat new to programming, so I don't fully understand all of the code.)</p>
Compiling Code for EY-80
<p>The servo.write(angle) function is designed to accept angles from 0 to 180. (The value 180 is significantly larger than 100.) Could you tell me where in <a href="http://playground.arduino.cc/ComponentLib/Servo" rel="noreferrer">the Servo documentation</a> you read "<em>100 (motor at full power)</em>", so we can fix that typo?</p> <p>Please change the line</p> <pre><code>int maxspeed=100; /* wrong */
</code></pre> <p>to</p> <pre><code>int maxspeed=180;
</code></pre> <p>Also, please run <code>servo.refresh()</code> periodically to keep the servos updated -- perhaps something like:</p> <pre><code>for(pos = maxspeed; pos&gt;=minspeed; pos-=1)   // goes from 180 degrees to 0 degrees
{
  myservo.write(pos);     // set the servo position
  myservo.refresh();      // tell servo to go to the set position
  delay(delaytime);       // waits delaytime ms for the servo to reach the position
};
// ...
myservo.write(92);
for( long int delaytime = 0; delaytime &lt; 100000; delaytime+=10 ){
  myservo.refresh();      // tell servo to go to the set position
  delay(10);
};
</code></pre> <p>As mpflaga pointed out, you can get more precision by setting the pulse width directly in microseconds with</p> <pre><code>myservo.writeMicroseconds(1000);  // typical servo minimum; some go down to 600 or less
myservo.writeMicroseconds(2000);  // typical servo maximum; some go up to 2500 or more
</code></pre> <p>p.s.: Have you seen <a href="http://arduino.cc/en/Tutorial/BlinkWithoutDelay" rel="noreferrer">"blink without delay"</a>? That technique makes it much easier to read sensors and update servo positions with consistent heartbeat timing.</p>
2091
2013-11-24T23:38:07.977
|arduino|control|quadcopter|esc|servomotor|
<p>I am using an Arduino Uno to control an ESC for my (in progress) quadrocopter. I am currently using the Servo library to control the ESC, which works great.</p> <p>Except...</p> <p>A count of 100 is max speed, meaning I only have 10 speeds between 90 (stopped) and 100 (motor at full power) to run my quadrocopter. I would like to have many more speed options. Any ideas? I'm having a hard time using a PWM signal; I might not be doing it right, though.</p> <p>My current code is <a href="https://github.com/toozinger/Quad/blob/master/run_motor_as_servo.ino">here</a>:</p> <pre><code>#include &lt;Servo.h&gt;

Servo myservo;  // create servo object to control a servo
                // a maximum of eight servo objects can be created

int pos = 0;    // variable to store the servo position

void setup()
{
  myservo.attach(8);  // attaches the servo on pin 8 to the servo object
}

void loop()
{
  int maxspeed=100;
  int minspeed=0;
  int delaytime=5;
  int count;
  for(count=0; count &lt;1; count+=1)
  {
    for(pos = minspeed; pos &lt; maxspeed; pos += 1)  // goes from minspeed up to maxspeed
    {                                              // in steps of 1
      myservo.write(pos);   // tell servo to go to position in variable 'pos'
      delay(delaytime);     // waits delaytime ms for the servo to reach the position
    }
    for(pos = maxspeed; pos&gt;=minspeed; pos-=1)     // goes from maxspeed back down to minspeed
    {
      myservo.write(pos);   // tell servo to go to position in variable 'pos'
      delay(delaytime);     // waits delaytime ms for the servo to reach the position
    }
    if(count&gt;1){
      break;
    }
  }
  myservo.write(92);
  delay(100000);
  myservo.write(90);
  delay(10000000);
}
</code></pre>
How to use Arduino for ESC control?
<p>Survival rate for the case of multinomial resampling and the case of <span class="math-container">$w \geq \frac{1}{n}$</span> has been covered well by the accepted answer.</p> <p>However, I didn't find the case of <span class="math-container">$w &lt; \frac{1}{n}$</span> intuitive enough for myself, so I will share my own intuitive understanding about it, even though it might not be as formal. Forgive me for the lack of pictures.</p> <p><strong>How I visualize systematic resampling - Casino roulette:</strong> First, I visualize systematic resampling similarly to a casino roulette over which we spread our cumulative distribution, starting from 0 and draw samples from the start of each roulette field, after spinning the roulette by the randomly sampled value <span class="math-container">$r_0 \in [0,1/n]$</span>. The variable <span class="math-container">$n$</span> here is the number of particles.</p> <p>A bit more formally: we split the cumulative distribution into <span class="math-container">$n$</span> bins of size <span class="math-container">$1/n$</span> each, with the first bin starting from <span class="math-container">$0$</span>. Then we shift the position of those bins by the sampled random number <span class="math-container">$r_0$</span>, sampling a weight from the beginning of each bin.</p> <p><strong>Survival (or no survival) of a weight <span class="math-container">$w$</span>:</strong> If <span class="math-container">$0 \leq w \leq \frac{1}{n}$</span>, then the probability of <span class="math-container">$w$</span> surviving depends on the value of the random variable <span class="math-container">$r_0$</span>. In order for <span class="math-container">$w$</span> to not survive, we need to generate an <span class="math-container">$r_0$</span> so that <span class="math-container">$w$</span> falls between two of our equally spaced indices that define our bins after shifting the position of the bins by <span class="math-container">$r_0$</span>. In terms of casino roulettes, we should spin the roulette so that after the spin <span class="math-container">$w$</span> falls within a single roulette field and not between them.</p> <p>We can visualize this <span class="math-container">$r_0$</span> offset sampling procedure as having a single <span class="math-container">$1/n$</span> sized bin from which we sample the starting position of all bins after the spin (each roulette field) at the same time, with <span class="math-container">$w$</span> not surviving if we sample an offset that results in no starting position being covered by <span class="math-container">$w$</span> for any of the <span class="math-container">$n$</span> bins.</p> <p>That is equivalent to the probability of <span class="math-container">$r_0$</span> having a value in <span class="math-container">$[0, 1/n]$</span> but not falling on the part of the space covered by <span class="math-container">$w$</span> in any of the <span class="math-container">$1/n$</span> sized bins (<span class="math-container">$w$</span> can actually be present in 2 bins at most). More formally:</p> <p><span class="math-container">\begin{equation} prob(\bar{w}) = \frac{1/n - w}{1/n} = 1 - nw \end{equation}</span></p> <p>where <span class="math-container">$prob(\bar{w})$</span> is the probability of <span class="math-container">$w$</span> not surviving. Hence, the probability of surviving is equal to <span class="math-container">$nw$</span>.</p>
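<p>If you want to sanity-check the closed form numerically, here is a small Monte Carlo sketch in Python (an illustration of my own, not part of the derivation above; it assumes the remaining probability mass is spread evenly over the other particles):</p> <pre><code>import itertools
import random

def systematic_resample(weights):
    """Indices picked by systematic resampling: one offset, n evenly spaced draws."""
    n = len(weights)
    cumulative = list(itertools.accumulate(weights))
    r0 = random.uniform(0.0, 1.0 / n)
    indices, i = [], 0
    for k in range(n):
        p = r0 + k / n
        while i &lt; n - 1 and cumulative[i] &lt; p:
            i += 1
        indices.append(i)
    return indices

def survival_probability(w, n=10, trials=20000):
    """Monte Carlo estimate that the particle carrying weight w survives."""
    weights = [w] + [(1.0 - w) / (n - 1)] * (n - 1)  # rest of the mass, spread evenly
    hits = sum(0 in systematic_resample(weights) for _ in range(trials))
    return hits / trials

# For w &lt; 1/n the derivation above predicts n*w, e.g. about 0.5 here:
print(survival_probability(0.05, n=10))
</code></pre>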
2104
2013-11-30T21:58:59.943
|particle-filter|
<p>I'm trying to figure out a way that I can calculate the probability that a particle will survive the re-sampling step in the particle filter algorithm.</p> <p>For the simple case of multinomial re-sampling, I can assume that we can model the process like a Binomial distribution if we only care about one sample/particle.</p> <p>So if the particle has a weight of w that is also the probability that it will get selected in a step of the re-sampling. So we use 1 - P(k, p, n) where P is the Binomial distribution, k is 0 (we did not select the particle in all our tries), p is equal to w and n is equal to M, the number of particles.</p> <p>What is the case though in the systematic re-sampling, where the probability of a particle being selected is proportional but not equal to its weight?</p>
How to calculate probability of particle survival for particle filter?
<p>I'm not going to mark this as the answer unless others agree it is the answer, but I found an omnidirectional ultrasonic transmitter/receiver: <a href="http://www.metrolog.net/ultra40k.php?lang=en" rel="nofollow">http://www.metrolog.net/ultra40k.php?lang=en</a></p> <p>I haven't used it or tested it yet, but it seems promising.</p>
2105
2013-12-01T02:11:11.960
|sensors|
<p>Basically, I want to detect an ultrasonic beacon in a radius around the robot. The beacon would have a separate ultrasonic emitter while the robot would have the spinning receiver.</p> <p>Are there any existing ultrasonic sensors that would meet this use case or am I stuck hacking one together myself?</p> <p>Is ultrasonic even the best choice? I was hoping that the beacon would be kept in a pocket, so I figured optical sensors were out.</p> <p>Edit: The beacon and robot will both be mobile so fixed base stations are not an option.</p>
360 degree ultrasonic beacon sensor
<p>The typical solution to this is not to try to inject commands into the existing remote control encoder, but rather to replace it.</p> <p>Typically you start by connecting a simple USB-based logic analyzer to the signals feeding the RF submodule that is attached at the top of the board. You can do some initial identification of the lines (almost always an SPI bus, possibly with an additional enable, but very rarely a custom-baud-rate UART) with the analyzer's GUI interface (or a scope if available). Once you know the electrical format, you identify operations on various registers and figure out which RF chip is being used by comparison to data sheets of the likely suspects (nRF24/Beken, A7105, XN297, etc). Then you write a custom streaming decoder for the packet registers to plug into logic analyzer software like sigrok, so that you can move the sticks and see which parts of the transmitted packet change. And you figure out the channel hopping scheme from the writes to the channel registers.</p> <p>With all of that, you have what you need to make an Arduino-like board run the RF module, or often an alternately sourced one. People have also worked out how to make the nRF24L01+ talk to some similar-concept chips with a notably different on-air format, like the XN297.</p> <p>It's quite likely though that you will not have to do this entire task yourself. Typically, the R/C hobby community quickly reverse engineers all of this for any interesting inexpensive toy aircraft, and creates code to be able to operate it from a hobby-grade transmitter, either running the "Deviation" firmware or as a plug-in module which takes legacy multi-channel PPM input and uses an Arduino-like MCU to encode any one of a number of toy protocols. So first do some looking on places like rcgroups for existing work on this particular toy.</p> <p>In fact, the <a href="https://www.deviationtx.com/wiki/supported_models" rel="nofollow noreferrer">Deviation project Wiki</a> lists the WL-v262 as using an nRF24L01-compatible radio, with what has come to be called the "V202" protocol, so likely everything you need is already available between that and the more Arduino-ish PPM translator implementation at <a href="https://github.com/pascallanger/DIY-Multiprotocol-TX-Module" rel="nofollow noreferrer">https://github.com/pascallanger/DIY-Multiprotocol-TX-Module</a>.</p>
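<p>To make the "watch which packet bytes change" step concrete, here is a toy Python sketch; the capture bytes below are made-up placeholders, not a real WLtoys trace:</p> <pre><code>def varying_bytes(packets):
    """Byte positions whose value changes across equal-length packet captures."""
    return [i for i in range(len(packets[0]))
            if len({p[i] for p in packets}) &gt; 1]

# Placeholder captures taken while sweeping one stick (not a real trace):
captures = [bytes([t, 0x7F, 0x7F, 0x7F, 0x40]) for t in (0x00, 0x40, 0x80, 0xFF)]
print(varying_bytes(captures))  # [0], i.e. byte 0 tracks that stick here
</code></pre>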
2110
2013-12-02T22:23:42.537
|arduino|sensors|radio-control|wireless|
<p>I have a WL v262 quadcopter and I want to control it using an Arduino instead of the joysticks on the transmitter. I opened up the transmitter and saw that each joystick has 2 potentiometers on the PCB and that the voltage for each pot goes from 0-3.3V. I used the Arduino's PWM and a low pass filter and connected the filtered output to the potentiometer's analog pin, which is connected to the PCB (I cannot desolder and take the pots out of the PCB), but even with this $V_{out}$ going onto the analog pin, my transmitter's display gave ????</p> <p>Now I am really confused and frustrated because I don't know how else to control this transmitter other than attaching stepper motors to the joysticks and manually controlling the transmitter, but this is really my last resort. Can someone help me with this? I have spent hours and hours on trial and error but I am getting nowhere.</p> <p>Here is the PCB of the transmitter:</p> <p><img src="https://i.stack.imgur.com/Kd2Rw.jpg" alt="enter image description here"></p> <p><img src="https://i.stack.imgur.com/vxRRg.jpg" alt="enter image description here"></p>
RC Transmitter Quadcopter with Arduino
<p>Yes, there is a programmatic interface. Like many autopilot systems, the Ardupilot uses a protocol called <a href="http://qgroundcontrol.org/mavlink/start" rel="nofollow">Mavlink</a>. It allows for both sending commands and receiving telemetry data. It also allows for managing the data that is sent to the ground control system, to avoid overburdening one's communication link. There is support for Mavlink in C and Python as well as a ROS library (see <a href="https://github.com/mavlink" rel="nofollow">Mavlink Github</a>).</p> <p>Furthermore, this can be used for onboard control via a single board computer (SBC) such as a Raspberry Pi, Gumstix, or PCduino, by interfacing the preferred SBC with the Ardupilot XBee port, which is really just a TTL-level serial port.</p>
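<p>For example, a minimal pymavlink sketch along these lines (the port name and baud rate are assumptions; adjust them for your telemetry link):</p> <pre><code>from pymavlink import mavutil

# '/dev/ttyUSB0' is an assumption; use e.g. 'udp:127.0.0.1:14550' for SITL.
master = mavutil.mavlink_connection('/dev/ttyUSB0', baud=57600)
master.wait_heartbeat()  # block until the autopilot is heard from
print("Heartbeat from system", master.target_system)

msg = master.recv_match(type='ATTITUDE', blocking=True)  # one telemetry message
print(msg.roll, msg.pitch, msg.yaw)
</code></pre>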
2114
2013-12-03T17:00:28.630
|quadcopter|ardupilot|
<p>I am interested in getting an arducopter with an ardupilot (APM). I read through the documentation, and from what I understand, the ardupilot is the low-level hardware and firmware that directly controls the motors of the arducopter.</p> <p>I would like to know if there is a higher-level programmatic interface to the ardupilot. The mission planner provides a user interface to control the ardupilot, but is there a programmatic interface to control it?</p> <p>In other words, would it be possible for a user-written 'linux process' to receive and send sensory data to and from the ardupilot?</p>
sending and receiving parameters to ardupilot
<p>Follow the <a href="http://en.wikipedia.org/wiki/Denavit%E2%80%93Hartenberg_parameters" rel="nofollow">Denavit-Hartenberg</a> parameters, and you'll get easy equations for the transformation matrices between each frame.</p> <ul> <li>The $z_n$-axis is in the direction of the $(n+1)^{th}$ joint axis</li> <li>The $x_n$-axis is collinear to the common normal: $x_n = z_n × z_{n-1}$ If there is no unique common normal (parallel $z$ axes), the direction of $x_n$ is from $z_{n-1}$ to $z_n$.</li> <li>The $y$-axis follows from the $x$- and $z$-axis by choosing it to be a right-handed coordinate system.</li> </ul>
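<p>For illustration, a minimal numpy sketch of the per-joint transform under the classic DH convention (rotate theta about z, translate d along z, translate a along x, rotate alpha about x):</p> <pre><code>import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform between consecutive frames, classic DH convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])
</code></pre> <p>Chaining one such matrix per joint (e.g. <code>T = A1 @ A2 @ A3 @ A4 @ A5</code> for a 5-DOF arm) gives the end-effector pose in the base frame.</p>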
2117
2013-12-03T22:49:16.563
|localization|kinematics|robotic-arm|
<p>For example, if I have this robotic arm: <a href="http://www.youtube.com/watch?v=bKafht51Juw" rel="nofollow">Example</a>. For the base rotation (5th DOF in the clip, at 0:58), we know that the Z axis for that joint will be the same as the Z axis of the base frame {0}, but I don't know about the X and Y axes of the base rotation with respect to the base frame: should they be the same or not?</p> <p>And one more thing: defining the frames for the base rotation (at 0:58 in the clip), the vertical arm pitch (at 0:47 in the clip) and the horizontal arm pitch (at 0:46 in the clip) is pretty easy, but I don't know how to continue defining the frames for the wrist roll (at 0:12 in the clip) and the wrist pitch (at 0:23 in the clip), since the angle between the Z axes of the wrist roll and the wrist pitch is 90°. <br> Thank you very much.</p>
Defining frames for 5DOF robotics arm
<p>MOOCs are so popular nowadays that I wouldn't be surprised if you had already checked them out. Anyway, let's give it a shot.</p> <p>Please have a look at these courses:</p> <p><strong>edx.org</strong></p> <p><a href="https://www.edx.org/course-list/allschools/electronics/allcourses" rel="nofollow">https://www.edx.org/course-list/allschools/electronics/allcourses</a></p> <p><strong>www.coursera.org</strong></p> <p>You can also check out <a href="http://www.coursera.org" rel="nofollow">www.coursera.org</a>; it's similar to the one mentioned above.</p> <p><strong>MIT OpenCourseware</strong></p> <p>And last but not least, you can also try to find something here: <a href="http://ocw.mit.edu/courses/find-by-topic/#cat=engineering&amp;subcat=electricalengineering&amp;spec=electronics" rel="nofollow">MIT OpenCourseware electronics</a></p> <p><strong>UPDATE:</strong> Also, there's a site with all MOOCs in one place: <a href="https://www.class-central.com/" rel="nofollow">class-central.com</a></p>
2127
2013-12-07T05:37:07.330
|electronics|
<p>I'd like a well-put-together video series of around 30 videos, or anything really, but it needs to be thorough, in easy English, and less mundane. So far, all the resources I have found either stop at the resistor color code or are projects that tell you: do this and this and this, and tada, you've got this.</p> <p>Is there really no online resource for people to learn electronics? I want to further master analog and then move on to digital, because it's better to spend 40 cents than to spend $95 on components and get the whole thing on a tiny chip.</p> <p>Please bear with me: for about six months I have been searching for a legit source, material that is meant to teach you. I like pictures and colors.</p>
Where can I learn electronics, from intro to advanced digital?
<p><strong>SOLUTION:</strong> <em><a href="https://stackoverflow.com/questions/20445147/transform-image-using-roll-pitch-yaw-angles-image-rectification/20469709#20469709">This exact problem</a> has been solved in StackOverflow. Please read this post there for further explanation and a working solution. Thanks!</em></p>
2130
2013-12-07T18:43:21.297
|computer-vision|cameras|
<p><strong>UPDATE:</strong> <em><a href="https://stackoverflow.com/questions/20445147/transform-image-using-roll-pitch-yaw-angles-image-rectification/20469709#20469709">This exact problem</a> has been solved in StackOverflow. Please read this post there for further explanation and a working solution. Thanks!</em></p> <p>I am working on an application where I need to rectify an image taken from a mobile camera platform. The platform measures roll, pitch and yaw angles, and I want to make it look like the image is taken from directly above, by some sort of transform from this information. </p> <p>In other words, I want a perfect square lying flat on the ground, photographed from afar with some camera orientation, to be transformed, so that the square is perfectly symmetrical afterwards. </p> <p>I have been trying to do this through OpenCV(C++) and Matlab, but I seem to be missing something fundamental about how this is done.</p> <p>In Matlab, I have tried the following:</p> <pre><code>%% Transform perspective img = imread('my_favourite_image.jpg'); R = R_z(yaw_angle)*R_y(pitch_angle)*R_x(roll_angle); tform = projective2d(R); outputImage = imwarp(img,tform); figure(1), imshow(outputImage); </code></pre> <p>Where R_z/y/x are the standard rotational matrices (implemented with degrees).</p> <p>For some yaw-rotation, it all works just fine:</p> <pre><code>R = R_z(10)*R_y(0)*R_x(0); </code></pre> <p>Which gives the result:</p> <p><img src="https://i.stack.imgur.com/pyiNq.jpg" alt="Image rotated 10 degrees about the Z-image axis"></p> <p>If I try to rotate the image by the same amount about the X- or Y- axes, I get results like this:</p> <pre><code>R = R_z(10)*R_y(0)*R_x(10); </code></pre> <p><img src="https://i.stack.imgur.com/cbZoE.jpg" alt="Image rotated 10 degrees about the X-image axis"></p> <p>However, if I rotate by 10 degrees, divided by some huge number, it starts to look OK. But then again, this is a result that has no research value what so ever:</p> <pre><code>R = R_z(10)*R_y(0)*R_x(10/1000); </code></pre> <p><img src="https://i.stack.imgur.com/kssGS.jpg" alt="Image rotated 10/1000 degrees about the X-image axis"></p> <p>Can someone please help me understand why rotating about the X- or Y-axes makes the transformation go wild? Is there any way of solving this without dividing by some random number and other magic tricks? Is this maybe something that can be solved using Euler parameters of some sort? Any help will be highly appreciated!</p>
Transform Image Using Roll-Pitch-Yaw Angles (image rectification)
<p>I've found this useful:</p> <pre><code>#!/usr/bin/env python
import roslib; roslib.load_manifest('robot_mover')
import rospy
from geometry_msgs.msg import Twist

def mover():
    pub = rospy.Publisher('cmd_vel', Twist)
    rospy.init_node('robot_mover')
    twist = Twist()
    twist.linear.x = 0.1;  # move forward at 0.1 m/s

    rospy.loginfo("Moving the robot forward.")
    pub.publish(twist)
    rospy.sleep(1)

    rospy.loginfo("Moving the robot backward.")
    twist.linear.x = -0.1;  # move backward at 0.1 m/s
    pub.publish(twist)
    rospy.sleep(1);

    rospy.loginfo("Turning the robot left.");
    twist = Twist();
    twist.angular.z = 0.5
    pub.publish(twist)
    rospy.sleep(1);

    rospy.loginfo("Turning the robot right.");
    twist.angular.z = -0.5
    pub.publish(twist)
    rospy.sleep(1);

    rospy.loginfo("Stopping!")
    twist = Twist()
    pub.publish(twist)

    rospy.loginfo("Node exiting.");

if __name__ == '__main__':
    try:
        mover()
    except rospy.ROSInterruptException:
        pass
</code></pre> <p>Source: <a href="http://pharos.ece.utexas.edu/wiki/index.php/Writing_A_Simple_Node_that_Moves_the_iRobot_Create_Robot" rel="nofollow">http://pharos.ece.utexas.edu/wiki/index.php/Writing_A_Simple_Node_that_Moves_the_iRobot_Create_Robot</a></p>
2138
2013-12-10T16:10:54.943
|mobile-robot|ros|navigation|
<p>Is there a node or package that can send commands to <code>/cmd_vel</code> to move the <strong>ATRV-Jr</strong>, say, 2 meters forward, or turn it 90 degrees to the right/left? I don't want to tell the robot to move at a specified speed. For example, when I use the command <code>rostopic pub /cmd_vel geometry_msgs/Twist '[1.0,0.0,0.0]' '[0.0,0.0,0.0]'</code> the robot starts moving forward until I send another command or send a <code>break</code> command.</p>
Move ATRV robot to specific distance using ROS
<p>You can use least square approach and solve this problem using Gauss-Newton method. Following blog post sums it up very well. </p> <p><a href="http://chionophilous.wordpress.com/2011/08/26/accelerometer-calibration-iii-improving-accuracy-with-least-squares-and-the-gauss-newton-method/" rel="nofollow">http://chionophilous.wordpress.com/2011/08/26/accelerometer-calibration-iii-improving-accuracy-with-least-squares-and-the-gauss-newton-method/</a></p>
2146
2013-12-13T06:41:12.867
|arduino|accelerometer|ardupilot|
<p>I am trying to manually calibrate the on-board accelerometer of an APM 2.6 controller.</p> <p>I am using the following code (I found this somewhere, don't remember where) with Arduino 1.0.5 (in Windows environment) to fetch the accelerometer and gyro data:</p> <pre><code> #include &lt;SPI.h&gt; #include &lt;math.h&gt; #define ToD(x) (x/131) #define ToG(x) (x*9.80665/16384) #define xAxis 0 #define yAxis 1 #define zAxis 2 #define Aoffset 0.8 int time=0; int time_old=0; const int ChipSelPin1 = 53; float angle=0; float angleX=0; float angleY=0; float angleZ=0; void setup() { Serial.begin(9600); pinMode(40, OUTPUT); digitalWrite(40, HIGH); SPI.begin(); SPI.setClockDivider(SPI_CLOCK_DIV16); SPI.setBitOrder(MSBFIRST); SPI.setDataMode(SPI_MODE0); pinMode(ChipSelPin1, OUTPUT); ConfigureMPU6000(); // configure chip } void loop() { Serial.print("Acc X "); Serial.print(AcceX(ChipSelPin1)); Serial.print(" "); Serial.print("Acc Y "); Serial.print(AcceY(ChipSelPin1)); Serial.print(" "); Serial.print("Acc Z "); Serial.print(AcceZ(ChipSelPin1)); Serial.print(" Gyro X "); Serial.print(GyroX(ChipSelPin1)); Serial.print(" Gyro Y "); Serial.print(GyroY(ChipSelPin1)); Serial.print(" Gyro Z "); Serial.print(GyroZ(ChipSelPin1)); Serial.println(); } void SPIwrite(byte reg, byte data, int ChipSelPin) { uint8_t dump; digitalWrite(ChipSelPin,LOW); dump=SPI.transfer(reg); dump=SPI.transfer(data); digitalWrite(ChipSelPin,HIGH); } uint8_t SPIread(byte reg,int ChipSelPin) { uint8_t dump; uint8_t return_value; uint8_t addr=reg|0x80; digitalWrite(ChipSelPin,LOW); dump=SPI.transfer(addr); return_value=SPI.transfer(0x00); digitalWrite(ChipSelPin,HIGH); return(return_value); } int AcceX(int ChipSelPin) { uint8_t AcceX_H=SPIread(0x3B,ChipSelPin); uint8_t AcceX_L=SPIread(0x3C,ChipSelPin); int16_t AcceX=AcceX_H&lt;&lt;8|AcceX_L; return(AcceX); } int AcceY(int ChipSelPin) { uint8_t AcceY_H=SPIread(0x3D,ChipSelPin); uint8_t AcceY_L=SPIread(0x3E,ChipSelPin); int16_t AcceY=AcceY_H&lt;&lt;8|AcceY_L; return(AcceY); } int AcceZ(int ChipSelPin) { uint8_t AcceZ_H=SPIread(0x3F,ChipSelPin); uint8_t AcceZ_L=SPIread(0x40,ChipSelPin); int16_t AcceZ=AcceZ_H&lt;&lt;8|AcceZ_L; return(AcceZ); } int GyroX(int ChipSelPin) { uint8_t GyroX_H=SPIread(0x43,ChipSelPin); uint8_t GyroX_L=SPIread(0x44,ChipSelPin); int16_t GyroX=GyroX_H&lt;&lt;8|GyroX_L; return(GyroX); } int GyroY(int ChipSelPin) { uint8_t GyroY_H=SPIread(0x45,ChipSelPin); uint8_t GyroY_L=SPIread(0x46,ChipSelPin); int16_t GyroY=GyroY_H&lt;&lt;8|GyroY_L; return(GyroY); } int GyroZ(int ChipSelPin) { uint8_t GyroZ_H=SPIread(0x47,ChipSelPin); uint8_t GyroZ_L=SPIread(0x48,ChipSelPin); int16_t GyroZ=GyroZ_H&lt;&lt;8|GyroZ_L; return(GyroZ); } //--- Function to obtain angles based on accelerometer readings ---// float AcceDeg(int ChipSelPin,int AxisSelect) { float Ax=ToG(AcceX(ChipSelPin)); float Ay=ToG(AcceY(ChipSelPin)); float Az=ToG(AcceZ(ChipSelPin)); float ADegX=((atan(Ax/(sqrt((Ay*Ay)+(Az*Az)))))/PI)*180; float ADegY=((atan(Ay/(sqrt((Ax*Ax)+(Az*Az)))))/PI)*180; float ADegZ=((atan((sqrt((Ax*Ax)+(Ay*Ay)))/Az))/PI)*180; switch (AxisSelect) { case 0: return ADegX; break; case 1: return ADegY; break; case 2: return ADegZ; break; } } //--- Function to obtain angles based on gyroscope readings ---// float GyroDeg(int ChipSelPin, int AxisSelect) { time_old=time; time=millis(); float dt=time-time_old; if (dt&gt;=1000) { dt=0; } float Gx=ToD(GyroX(ChipSelPin)); if (Gx&gt;0 &amp;&amp; Gx&lt;1.4) { Gx=0; } float Gy=ToD(GyroY(ChipSelPin)); float Gz=ToD(GyroZ(ChipSelPin)); angleX+=Gx*(dt/1000); 
angleY+=Gy*(dt/1000);
angleZ+=Gz*(dt/1000);

switch (AxisSelect)
{
  case 0:
    return angleX;
    break;
  case 1:
    return angleY;
    break;
  case 2:
    return angleZ;
    break;
}
}

void ConfigureMPU6000()
{
  // DEVICE_RESET @ PWR_MGMT_1, reset device
  SPIwrite(0x6B,0x80,ChipSelPin1);
  delay(150);

  // TEMP_DIS @ PWR_MGMT_1, wake device and select GyroZ clock
  SPIwrite(0x6B,0x03,ChipSelPin1);
  delay(150);

  // I2C_IF_DIS @ USER_CTRL, disable I2C interface
  SPIwrite(0x6A,0x10,ChipSelPin1);
  delay(150);

  // SMPRT_DIV @ SMPRT_DIV, sample rate at 1000Hz
  SPIwrite(0x19,0x00,ChipSelPin1);
  delay(150);

  // DLPF_CFG @ CONFIG, digital low pass filter at 42Hz
  SPIwrite(0x1A,0x03,ChipSelPin1);
  delay(150);

  // FS_SEL @ GYRO_CONFIG, gyro scale at 250dps
  SPIwrite(0x1B,0x00,ChipSelPin1);
  delay(150);

  // AFS_SEL @ ACCEL_CONFIG, accel scale at 2g (1g=8192)
  SPIwrite(0x1C,0x00,ChipSelPin1);
  delay(150);
}
</code></pre> <p>My objective is to calibrate the accelerometers (and gyro), so that I can use them without having to depend on Mission Planner.</p> <p>I'm reading values like:</p> <pre><code>Acc X 288 Acc Y -640 Acc Z 16884 Gyro X -322 Gyro Y 26 Gyro Z 74
Acc X 292 Acc Y -622 Acc Z 16854 Gyro X -320 Gyro Y 24 Gyro Z 79
Acc X 280 Acc Y -626 Acc Z 16830 Gyro X -328 Gyro Y 23 Gyro Z 71
Acc X 258 Acc Y -652 Acc Z 16882 Gyro X -314 Gyro Y 22 Gyro Z 78
Acc X 236 Acc Y -608 Acc Z 16866 Gyro X -321 Gyro Y 17 Gyro Z 77
Acc X 238 Acc Y -642 Acc Z 16900 Gyro X -312 Gyro Y 26 Gyro Z 74
Acc X 226 Acc Y -608 Acc Z 16850 Gyro X -321 Gyro Y 26 Gyro Z 68
Acc X 242 Acc Y -608 Acc Z 16874 Gyro X -325 Gyro Y 27 Gyro Z 69
Acc X 236 Acc Y -576 Acc Z 16836 Gyro X -319 Gyro Y 19 Gyro Z 78
Acc X 232 Acc Y -546 Acc Z 16856 Gyro X -321 Gyro Y 24 Gyro Z 68
Acc X 220 Acc Y -624 Acc Z 16840 Gyro X -316 Gyro Y 30 Gyro Z 77
Acc X 252 Acc Y -594 Acc Z 16874 Gyro X -320 Gyro Y 19 Gyro Z 59
Acc X 276 Acc Y -622 Acc Z 16934 Gyro X -317 Gyro Y 34 Gyro Z 69
Acc X 180 Acc Y -564 Acc Z 16836 Gyro X -320 Gyro Y 28 Gyro Z 68
Acc X 250 Acc Y -596 Acc Z 16854 Gyro X -329 Gyro Y 33 Gyro Z 70
Acc X 220 Acc Y -666 Acc Z 16888 Gyro X -316 Gyro Y 19 Gyro Z 71
Acc X 278 Acc Y -596 Acc Z 16872 Gyro X -307 Gyro Y 26 Gyro Z 78
Acc X 270 Acc Y -642 Acc Z 16898 Gyro X -327 Gyro Y 28 Gyro Z 72
Acc X 260 Acc Y -606 Acc Z 16804 Gyro X -308 Gyro Y 31 Gyro Z 64
Acc X 242 Acc Y -650 Acc Z 16906 Gyro X -313 Gyro Y 31 Gyro Z 78
Acc X 278 Acc Y -628 Acc Z 16898 Gyro X -309 Gyro Y 22 Gyro Z 67
Acc X 250 Acc Y -608 Acc Z 16854 Gyro X -310 Gyro Y 23 Gyro Z 75
Acc X 216 Acc Y -634 Acc Z 16814 Gyro X -307 Gyro Y 27 Gyro Z 83
Acc X 228 Acc Y -604 Acc Z 16904 Gyro X -326 Gyro Y 17 Gyro Z 75
Acc X 270 Acc Y -634 Acc Z 16898 Gyro X -320 Gyro Y 31 Gyro Z 77
</code></pre> <p>From what I understand, SPIread(...,...) returns an analog voltage value from the data pins of the sensor, which happens to be proportional to the acceleration values. Right?</p> <p><strong>My question is:</strong> How do I go about calibrating the accelerometer?</p> <p><strong>What I've tried so far:</strong> I've tried the "place horizontal... place nose down... left side, right side" technique used by Mission Planner.</p> <p>Basically, when placed in the horizontal position, the sensor experiences +1g on its Z axis and 0g on the X and Y axes. Left/right side provides ±1g on the Y axis, and nose down/up provides ±1g on the X axis.</p> <p>Now, for every orientation, I've passed the raw sensor data through an LPF and then computed the mean, median and SD of this sensor data over 100 iterations.
I store this mean, median and SD value in the EEPROM for each axis (one for +1g and one for 0g).</p> <p>Now, when I use the sensor, I load the stats from the EEPROM, match the mean/median and standard deviation with the current reading of 4/5 iterations. </p> <p>Here I'm working under the assumption that the values between 0g and +1g (and anything above 1g) can be interpolated/extrapolated from the data using a linear plot. </p> <ul> <li>Is this the correct approach for calibration?</li> <li>Can you suggest a better way?</li> <li>I noticed that the maxima/minima for each axis is different. Is this the expected outcome or is there something wrong in the code?</li> <li>What do I do with the gyro? How to calibrate for angular acceleration?</li> </ul>
APM Accelerometer Calibration
<p>I think I may have found the perfect controller for this project:</p> <p><a href="http://www.robotshop.com/en/dagu-micro-magician-robot-controller.html" rel="nofollow">http://www.robotshop.com/en/dagu-micro-magician-robot-controller.html</a></p> <p>The Micro Magician will allow me to control 2 simple DC motors, detect impacts and crashes with the accelerometer AND receive move commands from the IR receiver! Pretty awesome for the size. </p> <p>The only piece left in the puzzle is sending back the data to the remote controller (accelerometer data). I'm not sure if I could attach an IR transmitter or maybe BT/Wifi? </p> <p>Edit: looks like this could do the job! <a href="http://www.dawnrobotics.co.uk/dagu-arduino-bluetooth-module/" rel="nofollow">http://www.dawnrobotics.co.uk/dagu-arduino-bluetooth-module/</a></p> <p>Now to find small motors...</p>
2158
2013-12-17T23:12:35.343
|raspberry-pi|rcservo|accelerometer|
<p>I'd like to build a robot as small as possible and with as few "delicate" parts as possible (the bots will be bashing into each other).</p> <p>I was wondering if it was possible to use a small chip that could receive bluetooth/IR/wifi commands to move the motors, and in turn, send back feedback based on sensors such as an accelerometer (to detect impact).</p> <p>I can probably achieve something like this with the <a href="http://www.piborg.org/picypack" rel="noreferrer">PiCy</a> </p> <p><a href="https://i.stack.imgur.com/vjVzf.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/vjVzfm.jpg" alt="picy"></a></p> <p>however this is slightly bigger than I'd like (due to the size of the Pi) and I'm not sure how long the Pi would last taking continuous impacts.</p> <p>I'd therefore like to try to offset the brain (the Pi) to the side of the arena and just use a small chip to receive move commands, and send back data from the accelerometer.</p> <p>Do you have any recommendations for such a chip? Wifi would be my choice but if it impacts the size I could try BT</p> <p>Edit: After further research it seems an Arduino nano with a WiFi RedBack shield might do the job along with something like this for the motors: <a href="http://www.gravitech.us/2mwfecoadfor.html" rel="noreferrer">http://www.gravitech.us/2mwfecoadfor.html</a></p>
Making a tiny robot by using a remote brain
<p>ROS Indigo Igloo will be coming out in a week or two. It will be the first release in a year (they used to have a 6-month release cycle; it has been a long wait). ROS moves at such a fast pace that it may be worth waiting until it comes out, so you don't have to set up your environment and re-learn new stuff again.</p> <p>I would recommend you start with MoveIt, even if you already have Hydro or Groovy set up: <a href="http://moveit.ros.org/documentation/tutorials/" rel="nofollow">http://moveit.ros.org/documentation/tutorials/</a></p> <p>With this tutorial first: <a href="http://docs.ros.org/hydro/api/moveit_setup_assistant/html/doc/tutorial.html" rel="nofollow">http://docs.ros.org/hydro/api/moveit_setup_assistant/html/doc/tutorial.html</a></p> <p>Then move on to integrating the Kinect: <a href="http://docs.ros.org/api/pr2_moveit_tutorials/html/planning/src/doc/perception_configuration.html" rel="nofollow">http://docs.ros.org/api/pr2_moveit_tutorials/html/planning/src/doc/perception_configuration.html</a></p>
2173
2013-12-19T23:13:08.690
|ros|kinect|
<p>I am a beginner of ROS, Kinect and Ubuntu. What I want is to visualize Kinect's data on rviz environment then run object recognition on it.</p> <p>I've tried a few tutorials but had no luck. All I got was an empty rviz world.</p> <p>Since I am a beginner I would appreciate any step-by-step instructions (preferably for hydro or groovy).</p> <p>I would also like to note that I've managed to get visual from Kinect so the device is working fine.</p>
Visualizing kinect data on rviz
<p>Rotary mowers are mechanically simpler, so cheaper to make. The rotary mower is more tolerant of getting dull, and if you run over something like small trees or rocks, the blades take less damage, and the motor is harder to jam and stall. The blades are also easier to sharpen. For this simplicity, you get a poorer cut, and more grass damage. I do agree, though, that the reel mower would be more efficient. You'd probably need more sensors though, to watch for jamming.</p>
2180
2013-12-23T12:22:29.310
|motor|mechanism|
<p>Most, if not all, lawn mower robots are rotary mowers. I presume<sup>1</sup> that a reel mower is more efficient, and it is said to leave the lawn healthier and better cut. So why does the industry go with the other option?</p> <hr> <p><sup>1 - I'm assuming the efficiency because electrical rotary mowers have at least 900W universal motors or induction motors, while a manual reel mower is capable of nearly the same cutting speed.</sup></p>
Lawn mower robot (type of cutter)
<p>(1) On several Arduino models, there's a 100 nF capacitor (at red dot in diagram) connected between DTR# on the FT232RL (or equivalent) chip and the RESET pin on the CPU. <img src="https://i.stack.imgur.com/IHRoq.jpg" alt="portion of schematic from arduino-duemilanove-schematic.pdf"> (Obtain file <a href="http://arduino.cc/en/uploads/Main/arduino-duemilanove-schematic.pdf" rel="nofollow noreferrer">arduino-duemilanove-schematic.pdf</a> to see whole schematic. Also see arduino.cc <a href="http://arduino.cc/en/Main/arduinoBoardDuemilanove" rel="nofollow noreferrer">Duemilanove</a> page.)</p> <p>DTR# normally sits high, and RESET normally is high due to an internal pullup resistor (eg 20K-50KΩ) so C13 usually has 0 volts across it. In the Arduino process for programming via USB-serial, a programming cycle starts with DTR# being pulled down for 22ms (<a href="http://playground.arduino.cc/Learning/AutoResetRetrofit" rel="nofollow noreferrer">1</a>,<a href="http://playground.arduino.cc/Main/DisablingAutoResetOnSerialConnection" rel="nofollow noreferrer">2</a>), which pulls RESET low long enough to reset the processor. </p> <p>After the reset, the bootloader initializes; stores the received program into flash memory; then runs the stored program.</p> <p>Note, auto-reset can be disabled (on some Arduino models) by cutting a jumper (eg, RESET-EN). See the “Automatic (Software) Reset” sections or similar of <a href="http://playground.arduino.cc/Learning/AutoResetRetrofit" rel="nofollow noreferrer">1</a>,<a href="http://playground.arduino.cc/Main/DisablingAutoResetOnSerialConnection" rel="nofollow noreferrer">2</a>,<a href="http://arduino.cc/en/Main/ArduinoBoardFioTips" rel="nofollow noreferrer">3</a>,<a href="http://arduino.cc/en/Main/arduinoBoardUno" rel="nofollow noreferrer">4</a>.</p> <p>(2) To <a href="http://arduino.cc/en/Hacking/Bootloader" rel="nofollow noreferrer">flash a new bootloader</a>, you attach a programmer to either the 6-pin ICSP header (marked by green dot in diagram) or to J3 (blue dot). (Using J3 may require manually pressing S1 to reset the CPU before programming; see <a href="http://playground.arduino.cc/Main/DisablingAutoResetOnSerialConnection" rel="nofollow noreferrer">2</a>.)</p> <p>A bootloader is a program that accepts data to store into flash; or, if no program begins arriving in the first half second or so after a reset (see refs 1-4 above), it starts a previously loaded program.</p> <p>Programming via <a href="http://en.wikipedia.org/wiki/Serial_Peripheral_Interface_Bus" rel="nofollow noreferrer">SPI</a> (the Serial Peripheral Interface Bus, which uses the SCK, MISO, MOSI, SS pins of ICSP header for high speed full duplex serial data transfer) by contrast is a hardware operation that does not run a program from flash. Like <a href="http://en.wikipedia.org/wiki/JTAG#Storing_firmware" rel="nofollow noreferrer">JTAG</a>, it's one of the built-in hardware-level programming protocols in typical AVR chips.</p> <p>During SPI programming, RESET is held low and the AVR is like a puppet carrying out directives issued to it by the programmer device. The AVR is not executing a user-accessible stored program during programming; it is carrying out low-level directives issued to it by the programmer device. A programming directive precedes each bit of data on the SPI bus. For an explanation of the serial programming algorithm, see (eg) section 30.8 in doc2549.pdf, ATmega manual #2549P–AVR–10/2012. Section 30.8.2 on p. 350-351 explains the method. Table 30-17 on p. 
352 shows the directives used to control programming. These include “Programming Enable”, “Chip Erase”, “Load Extended Address byte”, “Write Program Memory Page”, etc. </p>
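<p>As a host-side illustration of the auto-reset path, the pyserial sketch below (the port name is an assumption) toggles DTR by hand, which is essentially what the IDE does before an upload:</p> <pre><code>import time
import serial

ser = serial.Serial('/dev/ttyUSB0', 115200)  # port name is an assumption
ser.dtr = False  # deassert: the DTR# pin sits high, C13 settles at 0 V
time.sleep(0.1)
ser.dtr = True   # assert: DTR# falls, the edge couples through C13 and
                 # pulls RESET low, so the bootloader runs again
</code></pre>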
2186
2013-12-25T17:00:21.147
|arduino|microcontroller|
<p>This is what I came to understand while reading here and there about flashing a new bootloader, understanding what a bootloader is, etc.</p> <p>The bootloader is supposed to be the first thing that runs when I power up my Arduino Duemilanove (or microcontrollers in general). It does some setup, then runs my app. It also listens to the USB cable, so that if I upload some code it erases the old one and runs the new one. There are 2 sections in the memory, one for the bootloader (S1) and one for the app (S2). Code in S1 can write to S2 but not to S1 (or it is strongly discouraged, I don't remember).</p> <p>There are things that I don't understand though:</p> <ol> <li><p>If I upload some new code <em>while my app is running</em>, the upload works. What happened? I thought that the bootloader had handed control to my app.</p></li> <li><p>How can we flash a new bootloader? If the bootloader is the thing that runs in section 1 (S1) and can only write to S2, and if the bootloader is the only thing that listens for new code uploads, ...</p></li> </ol> <p>Can you help me correct my thoughts and answer my questions? Many thanks!</p>
Understanding Arduino bootloader
<p>Pixhawk is released very recently so there is a possibility of bugs associated with software and hardware which will be fixed as more people use it, report bugs and correct them. APM 2.6 is very much ready to go with lots of reviews and support available online.</p> <p>So if you wish to be in this field for significantly long time then I would suggest go for Pixhawk. If you are very new with system and also planning to tweak around with code then APM 2.6 is good start.</p> <p>Chinese version of APM 2.5.2 is surely cheap option to start but beware as there is no warranty on hardware you get.</p>
2189
2013-12-25T22:14:34.500
|uav|
<p>I'm a newbie in UAV stuff, so your advice would be very helpful. I want to start mapping using a fixed-wing UAV. My main choice was the APM 2.6, but after some research I found that the APM 2.6 won't be actively maintained in the future, because future releases will target the PixHawk.</p> <p>I wonder if I should choose the APM 2.6 for its stability; on the other side, I don't see the benefits of the Pixhawk apart from having long-term support. Or, being a newbie, should I start with something experimental like the APM 2.5.2 (a cheap Chinese version of the APM)?</p> <p>Thanks in advance</p>
What autopilot to purchase APM 2.6 or PixHawk?
<p>There is reference points on the robot for calibration, including one on the base.</p> <p>Page 123 of this manual lists them: <a href="http://lecture.sccsc.edu/amt/AMT%20206%20Electricity%20and%20Automation/ABB%20IRB%20140%20Robot/files/3HAC026320-001_revB_en.pdf" rel="nofollow">http://lecture.sccsc.edu/amt/AMT%20206%20Electricity%20and%20Automation/ABB%20IRB%20140%20Robot/files/3HAC026320-001_revB_en.pdf</a></p> <p>ABB does not calibrate the robot by finding the x,y,z position of the effector hand by direct measurement. They calculate it by zeroing the joints rotation sensors, and using that to 'blind reckon' x,y,z position from the 'home position'</p> <p>The problem is that the manufacturers calibration system requires the manufacturers special pendulum calibration tool. More details on the mark locations and the calibration process is this manual: <a href="http://www2.tec.ilstu.edu/students/ABB%20Robots/IRB1600_Manuals/files/3HAC16578-1_revC_en.pdf" rel="nofollow">http://www2.tec.ilstu.edu/students/ABB%20Robots/IRB1600_Manuals/files/3HAC16578-1_revC_en.pdf</a></p> <p>You may be able to rent the calibrator from ABB.</p> <p>If you don't have the calibration tool, then you may be able to reference using a machinists level, a accurate ruler, and a fair amount of patience! </p>
2195
2013-12-28T06:53:49.067
|industrial-robot|calibration|
<p>We are planning to recalibrate an ABB IRB 1410 robot and conduct a series of accuracy &amp; repeatability tests using a FaroArm.</p> <p>My questions are:</p> <p>i) Is there any physical identification marker on the robot which can be used to identify the location of the base coordinate frame?</p> <p>ii) If locating the base frame is not possible, can accuracy be measured from a fixed arbitrary point in space?</p>
Re-Calibration of an articulated industrial robot
<p>Brushless DC motors should have excellant lower speed performance. Your problem is probably not the motor but rather the electronic speed controller. Those sensorless controllers generally don't perform well in low speeds because there is very little signal for them to key on.</p> <p>It also sounds like you may be "fighting" the ESC's built in controller with your own control loop on top of it. The first thing I would check is what's the lowest speed command the ESC can reliably take before trying to close a loop with an additional sensor. What is your loop doing?</p> <p>If you want to build your own system you may want to replace the ESC with an amplifier and change your control algorithm to control the current/torque rather than speed. Since you have your own position sensor it may somehow be possible to fake out the back-emf signature the ESC is looking for but this isn't easy and is probably a waste of time.</p>
2205
2013-12-29T14:59:04.927
|motor|control|brushless-motor|
<p>I'm having a problem with controlling my BLDC motor when starting up and when running in low rpm. I have a custom board to measure rotation of the motor using an optical sensor and send servo pwm commands to an <a href="http://www.hobbyking.com/hobbyking/store/__14630__Turnigy_TrackStar_18A_1_18th_Scale_Brushless_Car_ESC_.html" rel="nofollow">esc</a>. The problem is, that I can't start the motor smoothly. When I slowly increase the control signal, it starts stuttering and then jumps directly up to about 1500rpm.</p> <p>Is there a way to improve this situation without using a sensored motor/esc combo?</p>
Low speed control of bldc motors
<p>There are too many types for anyone to describe them all; a simple search will help you. Having read the other responses, I will put down a few things to set them straight.</p> <h1>Servo-motor</h1> <p><strong>Nearly any motor can be a servo motor or not</strong>. That means a brushed motor, brushless motor, or stepper motor (which is a brushless motor) can be a servo motor or not. Servo motors have a feedback loop to measure some variables, like position or speed, or both, and correct according to the signal you apply. BUT that is done by a controller; the motor just has the sensors to provide the signals.</p> <h1>Brushed motors</h1> <p>As the name suggests, these motors have a rotating part that is energized (in most designs this is the rotor), and brushes are used to energize it. <strong>There are many types of brushed motors</strong>.</p> <ol> <li><p>They can do their commutation mechanically, at the brushes, which generates sparks, lowers efficiency, produces EMI and RFI, and increases brush wear.</p> </li> <li><p>They can do the switching externally, with the brushes running on a continuous contact ring; their only function is then to supply power, and the commutation is done by a controller. There are nearly no sparks at the brushes (although as the ring wears, this can lead to bumps of the brush on the ring, which do result in sparks).</p> </li> </ol> <p>Their stator can be made of magnets (as in the motors normally found in toys), or of coils (electromagnets), as in the motors found in power tools.</p> <p><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/f/f6/Right_Motor_Internal.JPG/180px-Right_Motor_Internal.JPG" alt="Four-brush motor, showing its brushes and contact ring, and the mechanical commutation." /><br> <sup>Four-brush motor, partially open, showing its brushes and contact ring, and the mechanical commutation. The brushes are inside the &quot;yellow&quot; casing, which holds each brush with a spring to force it against the contact ring. The brushes are made mainly from carbon, yet contain other materials; the brush wear is visible on the contact ring</sup></p> <p><a href="https://i.stack.imgur.com/9z6Pd.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/9z6Pd.jpg" alt="From left to right: brushes of a small motor, rotor with contact ring and coils, stator, and case with magnets" /></a><br /> <sub>(source: <a href="https://upload.wikimedia.org/wikipedia/commons/4/42/Brushed_dc_motor_dissembled.jpg" rel="noreferrer">wikimedia.org</a>)</sub> <br> <sup>From left to right: brushes of a small motor, rotor with contact ring and coils, stator, and case with magnets</sup></p> <h1>Brushless motors</h1> <p>Brushless motors supply the power to the rotating part of the motor (the rotor) by other means.</p> <p><strong>Induction motors</strong>, as the name suggests, supply the power by induction. These motors are the most common in industry, and they are very efficient (normally ~90%). Like other motors, they can have a mechanical brake for fast deceleration and holding braking (ex: cable elevators).</p> <p><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/5/58/Stator_and_rotor_by_Zureks.JPG/320px-Stator_and_rotor_by_Zureks.JPG" alt="Induction motor partially opened: the rotor, whose small cuts in the metal form a low-impedance coil, and the case with its coils." /><br> <sup>Induction motor partially opened: the rotor, whose small cuts in the metal form a low-impedance coil, and the case with its coils. The fins on the case help to dissipate the generated heat. Depending on the motor, it normally has a fan attached to the axle, or uses another motor for forced ventilation, for example if the motor runs slowly.</sup></p> <p><strong>DC brushless motors</strong> have magnets in their rotor, so no electrical power supply is necessary for the rotor. Normally the magnets have a high force-to-weight ratio (see neodymium magnets, for example), and so the rotor<sup><b>1</b></sup> has low mass and consequently low inertia. Because of this, it can start and stop at higher rates/accelerations than other motors, with less wear.</p> <p><strong>Stepper motors</strong> are a type of brushless motor, but with more poles, which means that by applying power to the correct phase you can <em>presume</em> the position of the rotor<sup><b>2</b></sup>. But the rotor can <em>slip</em>, called &quot;losing steps&quot;, and then you lose the position synchronism. That means that if you use a stepper motor for precise positioning, you need to be sure that the motor can handle the load and not slip. How can you be sure of the position? Put a position sensor on the axle of the motor; this, with a controller, turns it into a servo motor.</p> <p><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/c/cf/Stepper_motor.jpg/320px-Stepper_motor.jpg" alt="Stepper motor partially open. This is probably a hybrid type, with the rotor being magnetized and with a gear-like shape." /><br> <sup>Stepper motor partially open. This is probably a hybrid type, with the rotor being magnetized and with a gear-like shape. Some steppers have round rotors, but the rotor is still magnetized with a high number of poles.</sup></p> <p>As they don't have brushes, normally the only mechanical contact between rotor and stator is the bearings. That is why they have low maintenance, high efficiency, and nearly no sparks<sup><b>3</b></sup>.</p> <p><sup>1 - the rotor referred to can be the inner or outer part of the motor; the part that doesn't rotate relative to the power supply I would consider the stator.</sup></p> <p><sup>2 - That can be applied to other motors as well. A DC brushless non-stepper motor with 3 poles, for example, would give you 120° of precision, but what you do is modulate the power applied to each coil (ex: less power to one, more to the other) so that you get the positions in between. That is used on some stepper motors too, and is called micro-stepping.</sup></p> <p><sup>3 - although most of the time this means spark-free, one possible source of sparks is static energy on the rotor, which sometimes finds the bearings as the short circuit to discharge through, resulting in wear to the bearing; but that is not so common and has some countermeasures.</sup></p> <p>This is only a small list of electrical motor types.</p> <h1>Mechanical transmission</h1> <p><em>Note: you called this gearing, but gears are not the only way to do it, so I will refer to mechanical transmission.</em></p> <p>That is another topic. Motors run efficiently and have more torque at certain rotation speeds, so you add, for example, a reducing transmission and get less speed and more torque.
Suppose a motor spins at 4000 RPM and has 0.1 kg-cm of torque; assuming a reduction of 100:1 (that is, 100 turns at the input result in 1 turn at the output), you should get 40 RPM and 10 kg-cm of torque.<sup><b>1</b></sup></p> <p><sup>1 - This is a theoretical example; transmissions are not 100% efficient, so you lose some power</sup></p> <h2>Some types of transmissions that can have different ratios</h2> <p>Gears, hydraulics, chain and sprocket, belts. There are more, of course, and each of these has its own subtypes. Ex: gear reductions can be worm-gear, planetary, etc. In some applications you can use pulleys, as in cranes and elevators.</p> <p><strong>All images courtesy of Wikimedia Commons <a href="http://commons.wikimedia.org" rel="noreferrer">http://commons.wikimedia.org</a></strong></p>
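<p>A tiny sanity check of that 100:1 arithmetic (illustrative only):</p> <pre><code>def through_reduction(rpm_in, torque_in, ratio, efficiency=1.0):
    """Ideal speed and torque after a ratio:1 reduction."""
    return rpm_in / ratio, torque_in * ratio * efficiency

print(through_reduction(4000, 0.1, 100))  # (40.0, 10.0): 40 RPM, 10 kg-cm
</code></pre>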
2209
2013-12-30T06:53:20.663
|motor|
<p>I am beginning to learn about the hardware aspect of robotics, and in order for a lot of this new information to be useful to me (whether on this site or elsewhere) I will need a basic understanding of the terminology.</p> <p>One thing that comes up repeatedly is different electric motors: servo, DC motor, brushless motor, step motor, gear motor... etc</p> <p>Is there a comprehensive list? Or at least a list of the most common ones, and their descriptions / differences?</p>
What are the different types of electric motors?
<p>I can share my experience at a startup where we built what were essentially special-purpose robots. I'm sure there's a wide spectrum though. A lot depends on the complexity of your robot or machine and the target market for it. Is it something that will be manufactured in tens of units and sold for millions of dollars apiece, or something that will be manufactured in the millions and sold for hundreds of dollars? There are many other factors as well.</p> <ul> <li>We had mechanical engineers, electrical engineers, software engineers, physicists, system integrators. When you're at the "garage" stage you may have 0-1 of each but the general ideas still apply.</li> <li>Physically we had some lab spaces where the machines would be built/integrated/tested and around them was the team work area/offices.</li> <li>You start from a concept/high level system design.</li> <li>The system will typically be decomposed into separate parts/assemblies.</li> <li>Typically the design is iterated on. Some assemblies may be designed, built, tested and then changed. The system design can evolve as well.</li> <li>If you walked into our labs there'd always be some piece of the machine being run through cycle testing or some other functional tests. Things clicking and humming all over the place ;-)</li> <li>In our startup we didn't make any parts in house. We had some close machine shop contractors manufacture all the parts we needed for development. I know some startups will have a machine shop with a CNC mill/lathe for rapid turnaround of parts.</li> <li>We did do all the assembly and integration in house.</li> <li>Over time the design is finalized and various production drawings, procedures, and BOMs are made for the entire machine. Those include all the manufactured parts, the off-the-shelf parts, things like printed circuit boards etc.</li> <li>Throughout this software is being developed side by side with everything else and needs to change to match the changing system and requirements (that's my job :-) )</li> <li>There are various other business activities ongoing as well. Talking to customers, HR, finance...</li> </ul> <p>It's not as easy to prototype as it is with pure software. Arguably if you're building a small web site you can bootstrap your business with no investment. If you're building a robotics company it's probably not possible.</p> <p>I hope this gives some idea of my experience. Feel free to ask more questions.</p>
2215
2013-12-31T07:15:35.533
|manufacturing|
<p>In software engineering startups, you generally go to a room with a computer or bring your own laptop, and write code. I'm interested in how robotics startups work: Is there a separate location for designing the robots? Take for example, <a href="http://anki.com/" rel="noreferrer">Anki</a>. Do they have separate research labs for designing robots? How does a robot get from a single design to being manufactured?</p> <p>I couldn't find a better place on SE to post this (the startups business section is defunct): Please link me to another SE site if there is a better place to ask this question.</p>
How do robotics startups work?
<p>Question 1: If you are satisfied with your robot's stopping distance, then you don't need electronic braking. With a heavier or faster robot, it's quite useful. The larger the machine, the more inertia becomes an important design factor. For example, a form of electronic braking is used by open-pit mining dump trucks. Ordinary dump trucks, being much smaller, make do with ordinary brakes.</p> <p>Question 2: The stall flags are useful to detect that the wheel is <em>not</em> moving despite full power being applied. I'm presuming we're still talking about a small robot here - what would happen if the wheel got tangled in something like a long hair or thread? </p> <p>I had a two-inch long robot once that got its powered wheels jammed on a tiny piece of toothpick it encountered while crossing a carpet. Freak luck, but it does happen. If you only run your robot on clean surfaces, and it'll spin the tires when it bumps into something, then you probably don't need the flags. Personally, though, I would use them -- but I'm a belt AND suspenders kinda guy.</p>
2218
2014-01-01T02:05:32.327
|motor|h-bridge|
<p>I have a <a href="http://letsmakerobots.com/files/userpics/u1533/Descriptive_photo_600_0.jpg" rel="noreferrer">Micro Magician v2 micro controller</a>. It has a A3906 Dual FET “H” bridge motor driver built in.</p> <p>In the manual it states "Electronic braking is possible by driving both inputs high." </p> <p>My first question is, what is the purpose of these brakes? If I set the left/right motor speed to 0, the robot stops immediately anyway. What advantage is there to using these brakes, or am I taking the word "brake" too literally?</p> <p>My second question is, the driver has "motor stall flags that are normally held high by pullup resistors and will go low when a motor draws more than the 910mA current limit. Connect these to spare digital inputs so your program will know if your robot gets stuck." But when my robot hits a wall, the wheels just keep on spinning (slipping if you will), I take it these stall flags can be used on a rough surface where the wheels have more friction?</p>
What is the purpose of electronic braking in motors?
<p>It looks like your research here is a little too specific -- you're working on the practical level before being up to speed on the theory. Take a look at more of the higher-level concepts of motion planning and obstacle avoidance.</p> <p>The <em>extremely</em> simplified process is:</p> <ol> <li>Expressing a high-level objective for the robot</li> <li>Based on data describing the current environment, creating a set of planned movements that meet the objective (using path planning algorithms like <a href="http://en.wikipedia.org/wiki/A%2a_search_algorithm#Example" rel="nofollow">A* search</a>)</li> <li>As time passes, collecting new data about the environment (e.g. current position, new obstacles) and re-planning the path</li> </ol> <p>Quadtrees and grids are just 2 ways of representing the environment itself; one of the practical problems involved in robotics is the sheer volume of data that can come in, and quadtrees offer a more memory-space-efficient representation of that data (see also, <a href="http://en.wikipedia.org/wiki/Octree" rel="nofollow">octrees</a> for 3D spaces) in return for some extra CPU time and complexity. There are many such optimizations that you could make, but it sounds like you're considering them a little too early in the design process.</p>
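<p>For a flavor of step 2, here is a minimal A* sketch on a 4-connected grid in Python. The grid, start, and goal are made up, and a real robot would re-run this whenever new obstacle data arrives:</p> <pre><code>import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; grid[r][c] == 1 means obstacle."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:          # stale entry, already expanded
            continue
        came_from[node] = parent
        if node == goal:               # walk parents back to the start
            path = []
            while node:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nxt in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            nr, nc = nxt
            if 0 &lt;= nr &lt; rows and 0 &lt;= nc &lt; cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng &lt; g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt, node))
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
# [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
</code></pre>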
2236
2014-01-04T00:26:06.863
|mobile-robot|navigation|
<p>I'm programming Lua for controlling computers and robots in-game in the Minecraft mod <a href="http://computercraft.info" rel="noreferrer">ComputerCraft</a>.</p> <p>ComputerCraft has these robots called Turtles, that are able to move around in the <em>grid based</em>(?) world of Minecraft. They are also equipped with sensors making them able to detect blocks (obstacles) adjacent to them. Turtles execute Lua programs written by a player.</p> <p>As a hobby project I would like to program a <code>goto(x, y, z)</code> function for my Turtles. Some Turtles actually have equipment to remove obstacles, but I would like to make them avoid obstacles and thus prevent the destruction of the in-game environment.</p> <p>I have no prior experience in robotics, but I have a B.Sc. in Computer Science and am now a lead web developer.</p> <p>I did some research and found some basic strategies, namely <em>grid based</em> and <em>quadtree based</em>. As I have no experience in this area, these strategies might be old school.</p> <p>Note that Turtles are able to move in three dimensions (even hover in any height). I could share the obstacles as well as obstacle free coordinates in a common database as they are discovered if that would help me out, as most obstacles are stationary once they are placed.</p> <p>What are my best options in this matter? Are there any <em>easy fixes</em>? Where do I look for additional resources?</p> <p>Thank you very much in advance! :-)</p> <p><strong>EDIT</strong>: Thank you for your feedback!</p> <p>I started reading the book <em>Artificial Intelligence: A Modern Approach, 3rd Edition</em> to get up to speed on basic theory as suggested by <strong>Ian</strong>. Pointers to other educational resources are appreciated.</p> <p>Also, I started developing a basic navigation algorithm for moving in unexplored areas, similar to what <strong>Cube</strong> suggested.</p> <p>The priority for me is as few moves as possible, as it costs time and fuel cells for each additional move (approx. 0.8 seconds and 1 fuel cell per move in either direction). I plan on using the Euclidean heuristics function in a Greedy Best-First Search for computing a path that is expected to be quite optimal in reducing the number of moves to reach the goal, if enough data is available from the shared database from previous exploration.</p> <p>Each time an obstacle is reached, I plan to use the following very basic algorithm, exploiting the fact that Turtles are able to move vertically:</p> <pre><code>1. Calculate direct horizontal path to the goal. 2. Turn to the direction of the next step of the path. 3. If an obstacle is detected in front of the Turtle go to 5. If this is the 4th time that an obstacle is detected in front of the Turtle after moving up, go to 6. 4. Move forward, go to 2. 5. If no obstacle is detected above the Turtle, move up and go to 3, else go to 7. 6. Backtrack to the coordinates the Turtle was in before moving upwards. 7. Turn left, go to 3. </code></pre> <p>When using this algorithm, records are kept of the explored coordinates and uploaded to a shared database. However, there are some cases, that I did not consider:</p> <pre><code>- When should it move down? - What if the goal is not reachable from a coordinate directly above it? - If no horizontal move in any direction is possible, how long should it backtrack? 
- How to detect unreachable goals (obstacles can then be removed if requested) </code></pre> <p>Maybe if enough exploration data of the area is available, a Jump Point Search is performed to calculate an optimal path. However this assumes a 2D map. How can I take the 3rd dimension into account?</p> <p>Also, what would be a good data structure to store the exploration data?</p>
ComputerCraft (Minecraft mod) navigation: Collision avoidance and path planning/finding in 2D/3D space
<p>Define the state of the quadcopter as $\bf{x} = \left[ \begin{matrix} \mathbf{p} &amp; \mathbf{v} &amp; \mathbf{r} &amp; \mathbf{w} \end{matrix} \right]^T$ where $\mathbf{p}$, $\mathbf{v}$, $\mathbf{r}$, and $\mathbf{w}$ are the position, velocity, angular position, and angular velocity of the quadcopter respectively. A simplified transition model for a quadcopter using PD control is: \begin{align} \dot{\mathbf{p}} &amp; = \mathbf{v} \\ \dot{\mathbf{v}} &amp; = -g\mathbf{e}_3 + \text{exp}([\mathbf{r}])\mathbf{e}_3u_1/m \\ \dot{\mathbf{r}} &amp; = \mathbf{w} + \frac{1}{2}[\mathbf{r}]\mathbf{w} + (1 - \frac{||\mathbf{r}||}{2\text{tan}(||\mathbf{r}||/2)})\frac{[\mathbf{r}]^2}{||\mathbf{r}||^2}\mathbf{w} \\ \dot{\mathbf{w}} &amp; = \begin{bmatrix} k_1(u_2 - r_x) + k_2w_x \\ k_1(u_3 - r_y) + k_2w_y \\ 0 \end{bmatrix} \end{align} where $\mathbf{e}_3 = \left[ \begin{matrix} 0 &amp; 0 &amp; 1 \end{matrix} \right]^T$, $[\mathbf{r}]$ represents the skew-symmetric matrix of $\mathbf{r}$, $||\mathbf{r}||$ represents the magnitude of $\mathbf{r}$, $k_1$ and $k_2$ represent the proportional and derivative gains respectively, and the control $\mathbf{u} = \left[ \begin{matrix} u_1 &amp; u_2 &amp; u_3 \end{matrix} \right]^T$ comprises the desired total thrust $u_1$, the desired roll angle $u_2$, and the desired pitch angle $u_3$, and assumes yaw remains the same. Note that this uses a PD controller because the integration term I is generally not useful for trajectory following.</p> <p>Using this model you can, given the current state $\mathbf{x}$, calculate how the state will change. This of course does not give you the desired angular positions about which you asked. Assuming the user expects the quadcopter to hover once it has reached the specified desired position $\mathbf{p} = [X, Y, Z]^T$, we need $\mathbf{v} = \mathbf{r} = \mathbf{w} = \left[ \begin{matrix} 0 &amp; 0 &amp; 0 \end{matrix} \right]^T$ for the final state.</p> <p>However, that still does not give you the desired angles to transition between the initial state $\mathbf{x}_i$ and the final state $\mathbf{x}_f = \left[ \begin{matrix} \mathbf{p} &amp; \mathbf{v} &amp; \mathbf{r} &amp; \mathbf{w} \end{matrix} \right]^T$. To get that you need a higher-level controller to calculate a trajectory $\pi = (\mathbf{x}[], \mathbf{u}[], \tau)$ with $\mathbf{x} : [0, \tau] \rightarrow \mathcal{X}$, $\mathbf{u} : [0, \tau] \rightarrow \mathcal{U}$ for the state-space $\mathcal{X}$ and control space $\mathcal{U}$ and some travel time $\tau$. This trajectory will give you the desired angles at any given moment.</p> <p>Unfortunately there are a lot of ways to calculate this trajectory. One possibility is to look at my own work in this area. Specifically my paper titled <a href="http://arl.cs.utah.edu/research/krrtstar/" rel="nofollow">Kinodynamic RRT*: Asymptotically Optimal Motion Planning for Robots with Linear Dynamics</a>.</p>
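<p>For concreteness, here is a minimal Euler-integration sketch of this transition model in Python. The mass, gains, and time step are arbitrary example values (note $k_2$ must be negative to act as damping):</p> <pre><code>import numpy as np
from scipy.linalg import expm

def skew(r):
    """Skew-symmetric matrix [r] such that skew(r) @ v == cross(r, v)."""
    return np.array([[0, -r[2], r[1]],
                     [r[2], 0, -r[0]],
                     [-r[1], r[0], 0]])

def step(p, v, r, w, u, dt, m=1.0, g=9.81, k1=5.0, k2=-2.0):
    """One Euler step of the simplified model above (example gains)."""
    e3 = np.array([0.0, 0.0, 1.0])
    u1, u2, u3 = u
    S = skew(r)
    p_dot = v
    v_dot = -g * e3 + expm(S) @ e3 * u1 / m
    n = np.linalg.norm(r)
    if n &lt; 1e-9:
        r_dot = w  # limit of the formula as ||r|| -&gt; 0
    else:
        r_dot = w + 0.5 * (S @ w) + (1 - n / (2 * np.tan(n / 2))) * (S @ S @ w) / n**2
    w_dot = np.array([k1 * (u2 - r[0]) + k2 * w[0],
                      k1 * (u3 - r[1]) + k2 * w[1],
                      0.0])
    return p + dt*p_dot, v + dt*v_dot, r + dt*r_dot, w + dt*w_dot

p, v, r, w = np.zeros(3), np.zeros(3), np.zeros(3), np.zeros(3)
for _ in range(100):
    p, v, r, w = step(p, v, r, w, u=(9.81, 0.0, 0.0), dt=0.01)
print(p)  # stays at the origin: thrust u1 = m*g balances gravity
</code></pre>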
2244
2014-01-06T03:33:49.137
|design|pid|quadcopter|
<p>I am trying to simulate a quadcopter model on Simulink. I want to implement a PID controller for each of X,Y,Z and phi,theta, psi angles. PID gets the error, as input, which is to be minimized. For the X,Y and Z, the desired values are entered by the user and the actual values are calculated from the accelerometer data, hence, the error is the desired set value - actual value.</p> <p>For phi,theta and psi, the actual values may be obtained from the gyroscope and accelerometer (sensor fusion) but I don't actually know how to calculate the desired values for each one of them since the user is usually interested in giving the position values X,Y and Z as desired not the angle values! The absence of the desired values prevents me form calculating the angular error which is needed for the PID controller.</p>
How to know the desired orientation of a quadcopter?
<p>The noise term $\epsilon_t$ is meant to capture uncertainty in the transition model, e.g. slippage due to an imperfect friction model for wheels. In other words, components of the model that were not incorporated either because they add too much computational complexity or simply cannot be modeled, such as disturbances that cannot be known in advance. It is usually assumed to be a zero mean Gaussian distributed random variable vector of the same size as the state variable $x$ that is independent between time steps. In other words, $x, \epsilon \in \mathbb{R}^n$ where $n$ is the dimension of the state-space, $\epsilon \sim \mathcal{N}(0,\Sigma)$, $\Sigma$ is the covariance of $\epsilon$, and $\epsilon_t$ and $\epsilon_{t+1}$ are independent and identically distributed. These assumptions permit closed-form solutions to the Kalman filter, which in turn makes it relatively efficient to compute.</p> <p>The noise term $\delta_t$ is basically the same but represents uncertainty in the sensing model. Usually $\epsilon_t$ and $\delta_t$ are not known a priori and need to be learned via some system identification method, or can be learned from data using something like Expectation-Maximization.</p>
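<p>In a simulation, this just means drawing a fresh sample from $\mathcal{N}(0,\Sigma)$ at every time step. A minimal numpy sketch (the $A$, $B$, and $\Sigma$ values here are arbitrary examples, not taken from the book):</p> <pre><code>import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])           # example transition matrix
B = np.array([[0.005],
              [0.1]])                # example control matrix
Sigma = np.diag([1e-4, 1e-3])        # process noise covariance

x = np.zeros(2)
for t in range(100):
    u = np.array([1.0])              # constant example input
    eps = rng.multivariate_normal(np.zeros(2), Sigma)  # epsilon_t, i.i.d.
    x = A @ x + B @ u + eps          # noisy state transition
print(x)
</code></pre>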
2245
2014-01-06T08:49:17.403
|kalman-filter|noise|
<p>I'm reading Probabilistic Robotics by Thrun. In the Kalman filter section, they state that $$ x_{t} =A_{t}x_{t-1} + B_{t}u_{t} + \epsilon_{t} $$</p> <p>where $\epsilon_{t}$ is the state noise vector. And in $$ z_{t} = C_{t}x_{t} + \delta_{t} $$ where $\delta_{t}$ is the measurement noise. Now, I want to simulate a system in Matlab. Everything to me is straightforward except the state noise vector $\epsilon_{t}$. Unfortunately, majority of authors don't care much about the technical details. My question is what is the state noise vector? and what are the sources of it? I need to know because I want my simulation to be rather sensible. About the measurement noise, it is evident and given in the specifications sheet that is the sensor has uncertainty ${\pm} e$.</p>
Kalman Filter and the state noise vector?
<p>Page 151 of this book should help:</p> <p><a href="https://users.aalto.fi/~ssarkka/pub/cup_book_online_20131111.pdf" rel="nofollow noreferrer">https://users.aalto.fi/~ssarkka/pub/cup_book_online_20131111.pdf</a></p> <p>where u is a sample (a particle without a weight)</p>
2251
2014-01-06T22:15:12.180
|slam|particle-filter|
<p>From what I've read so far, it seems that a <em>Rao-Blackwellized</em> particle filter is just a normal particle filter used after marginalizing a variable from:</p> <p>$$p(r_t,s_t | y^t)$$</p> <p>I'm not really sure about that conclusion, so I would like to know the precise differences between these two types of filters. Thanks in advance.</p>
Difference between Rao-Blackwellized particle filters and regular ones
<p>Position measurement is all about dealing with uncertainty. Entire books have been written on how to estimate good position from uncertain data coming from several sources, so I won't try to summarize that here.</p> <p>In general though, what you need will depend a lot on how accurate you need to be. Getting GPS fixes once per second (5 times per second on some hardware) is great for something like a large boat. However, 5 fixes per second is pretty bad when you're moving at high speeds in close quarters, like a quadcopter does -- not to mention the (roughly) 20-foot circle of uncertainty in GPS positions.</p> <p>There are many position, velocity, and acceleration sensors that you can buy. Only some of them are practical for any given application, and terribly few of them are good enough to be considered real-time ground-truth by themselves (either operating in tightly controlled environments, or costing you hundreds of thousands of dollars). So, you need to pick a good mix of sensors so that each one can help keep the inaccuracies of the other sensors in check. This is entirely dependent on your robot and its application; there is no one-size-fits-all.</p> <p>Typically, you'd combine a high-accuracy-low-frequency method like GPS (or other triangulation-based scheme) with a moderate-accuracy-high-frequency method like an accelerometer or gyro. This keeps estimates from drifting too far while giving some reasonable real-time adjustments. </p>
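<p>As a toy illustration of that combination, here is a one-dimensional, complementary-style blend in Python: dead-reckon at high rate with a (fast, drifting) accelerometer and nudge the estimate toward each (slow, noisy) GPS fix. Every number here is invented:</p> <pre><code>import numpy as np

rng = np.random.default_rng(1)
dt, alpha = 0.01, 0.05            # 100 Hz IMU; GPS correction weight
x_est, v_est = 0.0, 0.0
true_acc = 0.2                    # pretend constant acceleration

for k in range(1000):
    acc = true_acc + rng.normal(0, 0.05)   # noisy accelerometer sample
    v_est += acc * dt                      # dead-reckoned velocity
    x_est += v_est * dt                    # dead-reckoned position
    if k % 100 == 0:                       # 1 Hz GPS fix, ~3 m sigma
        t = k * dt
        gps = 0.5 * true_acc * t**2 + rng.normal(0, 3.0)
        x_est += alpha * (gps - x_est)     # pull the estimate back

print(x_est, 0.5 * true_acc * (1000 * dt)**2)  # estimate vs. truth
</code></pre>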
2263
2014-01-08T09:10:29.550
|quadcopter|accelerometer|navigation|sensor-fusion|
<p>I previously thought that an accelerometer on a quadcopter is used to find the position by integrating the data got from it. After I read a lot and watched <a href="http://www.youtube.com/watch?v=C7JQ7Rpwn2k" rel="noreferrer">this Youtube video</a> (specifically at time 23:20) about Sensor Fusion on Android Devices, I seem to get its use a little correct. I realized that it's hard to filter out the considerable noise, generated from error integration, to get useful information about the position. I also realized that it is used along with the gyroscope and magnetometer to for fused information about orientation not linear translation.</p> <p>For outdoor flight, I thought of the GPS data to get the relative position, but is it so accurate in away that enables position measurement (with good precision)? How do commercial quadcopters measure positions (X,Y and Z)? Is it that GPS data are fused with the accelerometer data?</p>
Quadcopter Position Measurement (Accelerometer, GPS or Both)?
<p>I suspect that your Arduino is resetting, because the stall current of the motor per the product sheet is $800mA$, and you are using the USB power, then the Arduino regulator, to supply the motor.</p> <p>As you are using the USB/serial converter to make the connection to the Raspberry Pi, when it resets it can create a new "virtual" serial port on the Raspberry Pi, as you are seeing.</p> <p>The USB maximum supply of current is near $500mA$ in this case, but in reality there's more than just the motor (the Arduino, and so on), so it's less.</p> <p>To test it, just disconnect the motor and see if it doesn't reset or create a new "virtual" serial port.</p> <p>Whether or not this turns out to be the problem, you should power your motor from an external supply capable of supplying the required current at the voltage you want.</p> <blockquote> <p>The servo was attempting to draw more than 500mA over USB and was triggering the Arduino's USB Overcurrent Protection feature. An appropriate power supply for the Arduino fixed the issue. </p> </blockquote> <p>I still suggest you use another power supply for the motor, or use the same power supply but don't take the motor's power from the Arduino. This is because even with an external power supply the Arduino has a power regulator with a $1000mA$ capacity, and the motor could also put noise on the ATmega supply. Of course the power supply should be within the motor's voltage supply range.</p>
2273
2014-01-09T01:56:50.623
|arduino|raspberry-pi|otherservos|python|
<p>I am having difficulty sustaining a connection between my Raspberry Pi (Model B running <a href="http://en.wikipedia.org/wiki/Raspbian" rel="nofollow noreferrer">Raspbian</a>) and my Arduino (<a href="http://arduino.cc/en/Main/ArduinoBoardUno" rel="nofollow noreferrer">Uno</a>) while sending signals from the Raspberry Pi to a <a href="http://www.pololu.com/product/2149" rel="nofollow noreferrer" title="title here">continuously rotating servo</a> (PowerHD AR- 3606HB Robot Servo) via Python. I'm not sure if there is a more efficent way of sending servo instructions via Python to the Arduino to rotate the servo. I'm attempting to communicate signals from the Raspberry Pi to the Arduino via USB using what I believe is considered a "digital Serial connection". My current connection:</p> <p><code>Wireless Xbox 360 Controller -&gt; Wireless Xbox 360 Controller Receiver -&gt; Raspberry Pi -&gt; Externally Powered USB Hub -&gt; Arduino -&gt; Servo</code></p> <p><img src="https://i.stack.imgur.com/u038Q.png" alt="Enter image description here"></p> <pre><code>Servo connection to Arduino: Signal (Orange) - pin 9 Power (Red) - +5 V Ground (Black) - GND </code></pre> <p>On the Raspberry Pi I have installed the following (although not all needed for addressing this problem):</p> <ul> <li><a href="https://github.com/Grumbel/xboxdrv" rel="nofollow noreferrer">xboxdrv</a></li> <li><a href="http://pyserial.sourceforge.net/" rel="nofollow noreferrer">pyserial</a></li> <li><a href="https://github.com/thearn/Python-Arduino-Command-API" rel="nofollow noreferrer">Python-Arduino-Command-API</a></li> <li><a href="http://www.pygame.org/news.html" rel="nofollow noreferrer">PyGame</a></li> <li><a href="https://github.com/zephod/lego-pi" rel="nofollow noreferrer">lego-pi</a></li> <li><a href="http://arduino.cc/" rel="nofollow noreferrer">Arduino</a></li> </ul> <p>The sketch I've uploaded to the Arduino Uno is the <a href="https://github.com/thearn/Python-Arduino-Command-API/blob/master/sketches/prototype/prototype.ino" rel="nofollow noreferrer">corresponding sketch</a> provided with the Python-Arduino-Command-API. *Again, I'm not positive that this is the best method means of driving my servo from Python to Arduino (to the servo).</p> <p>From the Raspberry Pi, I can see the Arduino is initially correctly connected via USB:</p> <pre><code>pi@raspberrypi ~/Python-Arduino-Command-API $ dir /dev/ttyA* /dev/ttyACM0 /dev/ttyAMA0 </code></pre> <p>and</p> <pre><code>pi@raspberrypi ~/Python-Arduino-Command-API $ lsusb Bus 001 Device 002: ID 0424:9512 Standard Microsystems Corp. Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp. Bus 001 Device 004: ID 045e:0719 Microsoft Corp. Xbox 360 Wireless Adapter Bus 001 Device 005: ID 1a40:0201 Terminus Technology Inc. FE 2.1 7-port Hub Bus 001 Device 006: ID 0bda:8176 Realtek Semiconductor Corp. RTL8188CUS 802.11n WLAN Adapter Bus 001 Device 007: ID 046d:c52b Logitech, Inc. 
Unifying Receiver Bus 001 Device 008: ID 2341:0043 Arduino SA Uno R3 (CDC ACM) </code></pre> <p>From the Raspberry Pi, I'm able to rotate the servo as a test clockwise for one second, counter-clockwise for one second, then stop the servo, with the following Python script:</p> <pre><code>#!/usr/bin/env python from Arduino import Arduino import time board = Arduino(9600, port='/dev/ttyACM0') board.Servos.attach(9) # Declare servo on pin 9 board.Servos.write(9, 0) # Move servo to full speed, clockwise time.sleep(1) # Sleep for 1 second print board.Servos.read(9) # Speed check (should read "0") board.Servos.write(9, 180) time.sleep(1) print board.Servos.read(9) # (Should read "180") board.Servos.write(9, 90) print board.Servos.read(9) board.Servos.detach(9) </code></pre> <p>The output via the Raspberry Pi terminal reads:</p> <pre><code>0 180 90 </code></pre> <p>Although this only performs full-speed in both direction (as well as the calibrated "stop" speed of 90), I have successfully alternated from a full-speed to slower speeds, for example, going from 0 up to 90 in increments of 10.</p> <p>From the Raspberry Pi, I'm able to send input from my Xbox controller to drive the servo with a small custom Python script I've created along with xboxdrv (which works flawlessly with other projects I'm doing):</p> <pre><code>#!/usr/bin/python from legopi.lib import xbox_read from Arduino import Arduino # To catch Ctrl+C import signal import sys # The deadzone within which we ignore inputs, approximately 1/3 of total possible input DEADZONE = 12000 def signal_handler(signal, frame): print "Stopping Wrapper" sys.exit(0) # Capture Ctrl+C so we can shut down nicely signal.signal(signal.SIGINT, signal_handler) print "Starting Wrapper" print "Press Ctrl+C at any time to quit" board = Arduino(9600, port='/dev/ttyACM0') board.Servos.attach(9) board.Servos.write(9, 90) for event in xbox_read.event_stream(deadzone=DEADZONE): print "Xbox event: %s" % (event) # If the RB button it's being held, rotate the servo counter-clockwise at full-speed. # When the RB button is released, stop the servo. if(event.key=='RB'): if(event.value&gt;0): board.Servos.write(9, 180) print board.Servos.read(9) else: board.Servos.write(9, 90) print board.Servos.read(9) continue </code></pre> <p>This script runs, and I'm able to control the servo using the RB button on my controller. However, <strong>it eventually fails</strong> - sometimes after minutes, sometimes after seconds (rapid and intermittent input seemingly having no influence on expediting a crash). Input is no longer read by the script, the terminal comes to a halt, the servo freezes on whatever the last command given was (either spinning endlessly or stopped), and I'm forced to <kbd>Ctrl</kbd> + <kbd>C</kbd> out of the script. If I check to see if the Arduino is still connected to the Raspberry Pi, it shows that it has reconnected itself to the Raspberry Pi as "ttyACM1" (from /dev/ttyACM0 to /dev/ttyACM1):</p> <pre><code>pi@raspberrypi ~/robotarm $ dir /dev/ttyA* /dev/ttyACM1 /dev/ttyAMA0 </code></pre> <p>Why does the Arduino reconnect itself? Is there some other way I should be processing this information? Distance to the wireless Xbox receiver is not a factor as all of these pieces are adjacent to one another for testing purposes. It will prove impossible to use this servo as a wheel for my robot if I'm constantly tending to this issue.</p>
Unwanted Arduino reconnect: Servo + Arduino + Python (Raspberry Pi)
<p>Well, one thing to keep in mind is that computer vision starts at the light, lenses, and camera: you need good ones. It's not only a question of resolution; it's also a question of sensor size, signal/noise ratio, and good lenses. There's no need for high resolution if your camera has poor optics.</p> <p>So your environment is pretty challenging, and there's no certainty that it will work this way.</p> <p>640x480 is not too low a resolution. I have read a paper that proposed something like your circle, but without the spaces between the rings, so more light is reflected and less space is wasted; the circles are divided to represent a code identifying the place.</p> <p>A simple circle divided into 4 segments with 4 colors would give you 256 distinct codes.</p> <p>More or less this:<br> <img src="https://i.stack.imgur.com/ckiCO.gif" alt="robot land mark"></p> <p>The lines are just for illustration. Let's suppose you use 4 divisions with 4 colors: red, green, blue and white. This would give you $ 4^4=256 $ distinct marks. You could detect the circle with edge-detection algorithms, and then you have the coordinates to read the color marks.</p> <p>This requires that the orientation of the robot and the mark is always the same; if the robot or camera tilts, or you want to place the mark in any orientation, just add a first (reference) marker.</p> <p>Adding a <em>redundancy check</em> to it is good too, as this will help remove false-positive marks.</p>
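<p>To sketch how reading such a mark could look with OpenCV in Python: find the circle, then sample one pixel per ring at fixed fractions of the radius. All Hough parameters and radius fractions are guesses you would have to tune, there is no bounds checking, and this crude version only distinguishes the dominant B/G/R channel (white would need extra handling):</p> <pre><code>import cv2
import numpy as np

def read_marker(bgr):
    """Find one circular marker and return a 4-ring color code."""
    gray = cv2.medianBlur(cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY), 5)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=50,
                               param1=100, param2=40, minRadius=10, maxRadius=200)
    if circles is None:
        return None
    x, y, r = circles[0][0]                   # strongest circle found
    code = []
    for frac in (0.2, 0.45, 0.7, 0.95):       # one sample point per ring
        px = bgr[int(y), int(x + frac * r)]   # pixel (B, G, R) at that radius
        code.append(int(np.argmax(px)))       # 0=blue, 1=green, 2=red
    return code
</code></pre>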
2274
2014-01-09T05:45:28.850
|software|computer-vision|artificial-intelligence|
<p>I've spent quite some time researching this, but most of my Google search results have turned up academic research papers that are interesting but not very practical.</p> <p>I'm working on a target/pattern recognition project** where a robot with a small camera attached to it will attempt to locate targets using a small wireless camera as it moves around a room. The targets will ideally be as small as possible (something like the size of a business card or smaller), but could be (less ideally) as large as 8x10 inches. The targets will be in the form of something easily printable.</p> <p>The pattern recognition software needs to be able to recognize if a target (only one at a time) is in the field of vision, and needs to be able to accurately differentiate between at least 12 different target patterns, hopefully from maybe a 50x50 pixel portion of a 640x480 image.</p> <p>Before playing with the camera, I had envisioned using somewhat small printed barcodes and the excellent <a href="https://code.google.com/p/zxing/" rel="nofollow noreferrer">zxing</a> library to recognize the barcodes.</p> <p>As it turns out, the camera's resolution is terrible - 640x480, and grainy and not well-focused. <a href="https://i.stack.imgur.com/n9pKk.png" rel="nofollow noreferrer">Here is an example still image.</a> It's not very well-suited for capturing barcodes, especially while moving. I think it could work with 8x10 barcodes, but that's really larger than I'm looking for. (I'm using this particular camera because it is tiny, light, cheap, and includes a battery and wi-fi.)</p> <p>I'm looking for two things: a suggestion or pointer to an optimal pattern that I could use for my targets, and a software library and/or algorithm that can help me identify these patterns from images. I have NO idea where to start with the right type of pattern so suggestions there would really help, especially if there is a project out there that does something resembling this. I've found <a href="http://opencv.org/" rel="nofollow noreferrer">OpenCV</a> and <a href="http://robwhess.github.io/opensift/" rel="nofollow noreferrer">OpenSIFT</a> which both seem like potential candidates for software libraries, but neither seemed to have examples of doing the type of recognition I'm talking about. I'm thinking picking the right type of pattern is the big hurdle to overcome here, so any pointers to the optimal type of pattern would be great. Being able to recognize the pattern from all different angles is a must.</p> <p>So far, my idea is to use patterns that perhaps look <a href="https://i.stack.imgur.com/HWMfF.jpg" rel="nofollow noreferrer">something like this</a>, where the three concentric color rings are simply either red, green, or blue - allowing for up to 27 unique targets, or 81 if I use 4 rings. From about 2 feet, the capture of a 3x3 inch target (from my computer screen) <a href="https://i.stack.imgur.com/7gQWJ.png" rel="nofollow noreferrer">looks like this</a> which seems like it would be suitable for analysis but I feel like there should be a better type of pattern that would be more compact and easier to recognize - maybe just a plain black and white pattern of some sort with shapes on it?</p> <p>Pointers to an optimal approach for this are greatly appreciated.</p>
Workable low-resolution object/target recognition pattern and library?
<p>You're attempting to move a robot along a predefined path without the aid of sensors, so really we just need to convert the list of points into a list of pre-scripted actions.</p> <ol> <li>Convert input points to $({\Delta}x, {\Delta}y)$ pairs</li> <li><p>Convert $({\Delta}x, {\Delta}y)$ pairs to $(\Delta\theta, d)$ pairs</p></li> <li><p>Convert $(\Delta\theta, d)$ pairs to $(V_{left}, V_{right}, {\Delta}t)$ tuples</p></li> </ol> <p>The first step is easy -- simple subtraction.</p> <p>The second step is also fairly straightforward: using the Pythagorean theorem for the distance and the <code>atan2</code> function for the angle:</p> <p><a href="https://i.stack.imgur.com/m7CLt.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/m7CLtm.jpg" alt="Angle between 2 points"></a></p> <p>(Then just keep track of the last $\theta$ so you can calculate $\Delta\theta$).</p> <p>The last step is a little tricky. You want to convert that set of angles and distances to the left and right wheel voltages, and the time to run them. This will actually give you <em>two</em> $(V_{left}, V_{right}, {\Delta}t)$ tuples for every $(\Delta\theta, d)$ pair: one to change the angle, and one to travel the distance. </p> <p>For a given width $w$ between the wheels, the change in the angle (radians) will be based on the movements of the right and left wheel: $$ \Delta \theta = \frac{1}{w} \left( {\Delta}d_{left} - {\Delta}d_{right}\right) $$ You'll have to decide what voltages and $\Delta t$ will produce that change in distance for your robot.</p> <p>Next, you'll do the same calculation for $d$. Both motors will use the same voltage to (hopefully) travel the same distance.</p> <p>That should produce a list of times and voltages that approximate the line you drew.</p>
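<p>Steps 1 and 2 as a small Python sketch; it assumes the robot starts at the first point, already facing along the x-axis:</p> <pre><code>import math

def path_to_turn_drive(points):
    """Convert [(x, y), ...] into (delta_theta, distance) pairs."""
    cmds, (px, py), heading = [], points[0], 0.0
    for x, y in points[1:]:
        dx, dy = x - px, y - py
        d = math.hypot(dx, dy)      # Pythagorean distance
        theta = math.atan2(dy, dx)  # bearing of this segment
        # wrap the heading change into [-pi, pi)
        dtheta = (theta - heading + math.pi) % (2 * math.pi) - math.pi
        cmds.append((dtheta, d))
        heading, (px, py) = theta, (x, y)
    return cmds

print(path_to_turn_drive([(0, 0), (1, 0), (1, 1)]))
# [(0.0, 1.0), (1.5707..., 1.0)]: drive 1 m, turn 90 degrees, drive 1 m
</code></pre>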
2279
2014-01-10T00:23:22.653
|wheeled-robot|wifi|two-wheeled|
<p>At the moment I am creating an Android program that will steer my simple, 3-wheel (2 motors, 1 for balance) robot to follow, online, the path drawn by the user on his screen. The robot is operated over WiFi and has 2 motors that will react to any input signals.</p> <p>Imagine the user drawing a path for this robot on a smartphone screen. It has acquired all the points on the XY axes, every time beginning with (0,0). Still, I have no idea how to somehow "convert" mere points into voltage input to both motors. Signals will be sent over an approx. 60 Hz connection, so quite fast. Maybe not every single axis point will be taken into consideration, there will surely be some skips, but that is irrelevant, since this path does not have to be followed perfectly by the robot, just within a reasonable error margin.</p> <p>Do you have any idea how to make the robot follow defined axis points that together create a path?</p> <p>Edit 10.01:</p> <p>The voltage will be computed by the robot, so the input on both is between -255 and 255 and the velocity should increase or decrease linearly within those bounds. Additionally, I would like to solve it as if there were perfect conditions; I don't need any crazy feedback models. Let's assume that all the data is true, no sensors and additional devices. Just the XY axis path and the required input (omit wheel slip too).</p>
2D path following robot, converting XY axis path to input on wheels
<ol> <li>It looks like your proportional gain is too high.</li> <li>You seem to be constantly increasing RPM on one motor while holding the other one fixed to make the system rotate. This isn't a good control strategy, as eventually both are going to saturate and you will lose control. Also, as time increases your ability to command the system decreases. So you need a better model of the system.</li> <li>If you deal with #1 and #2 you'll have a more stable system, but you may not be happy with the control bandwidth. To deal with that you need to make your system stiffer, which includes getting rid of any lag on both the sensor side and the control side.</li> </ol>
2297
2014-01-13T12:14:10.597
|arduino|pid|quadcopter|stability|
<p>In continuation of the question I asked here: <a href="https://robotics.stackexchange.com/questions/2167/quadcopter-instability-with-simple-takeoff-in-autonomous-mode">Quadcopter instability with simple takeoff in autonomous mode</a> ...I'd like to ask a few questions about implementing a basic PID for a quadrotor controlled by an APM 2.6 module. (I'm using a frame from 3DRobotics)</p> <p>I've stripped down the entire control system to just two PID blocks, one for controlling roll and another for controlling pitch (yaw and everything else... I'd think about them later).</p> <p>I'm testing this setup on a rig which consists of a freely rotating beam, wherein I've tied down two of the arms of the quadrotor. The other two are free to move. So, I'm actually testing one degree of freedom (roll or pitch) at a time.</p> <p>Check the image below: here A and B mark the freely rotating beam on which the setup is mounted. <img src="https://i.stack.imgur.com/CbgYO.png" alt="enter image description here"></p> <p>With careful tuning of P and D parameters, I've managed to attain a sustained flight of about 30 seconds. </p> <p>But by 'sustained', I simply mean a test where the drone ain't toppling over to one side. Rock-steady flight is still nowhere in sight, and more than 30 secs of flight also looks quite difficult. It wobbles from the beginning. By the time it reaches 20 - 25 seconds, it starts tilting to one side. Within 30 secs, it has tilted to one side by an unacceptable margin. Soon enough, I find it resting upside down.</p> <p>As for the PID code itself, I'm calculating the proportional error from a 'complementary filter' of gyro + accelerometer data. The integral term is set to zero. The P term comes to about 0.39 and the D term is at 0.0012. (I'm not using the Arduino PID library on purpose, just want to get one of my own PIDs implemented here.)</p> <p>Check this video, if you want to see how it works.</p> <p><a href="http://www.youtube.com/watch?v=LpsNBL8ydBA&amp;feature=youtu.be" rel="noreferrer">http://www.youtube.com/watch?v=LpsNBL8ydBA&amp;feature=youtu.be</a> [Yeh, the setup is pretty ancient! I agree. :)]</p> <p>Please let me know what I could possibly do to improve stability at this stage.</p> <p>@Ian: Of the many tests I did with my setup, I did plot graphs for some of the tests using the readings from the serial monitor. Here is a sample reading of Roll vs 'Motor1 &amp; Motor2 - PWM input' (the two motors controlling the roll):</p> <p><img src="https://i.stack.imgur.com/rnyIi.png" alt="Roll vs Motor PWM input"></p> <p>As for the input/output:</p> <p><strong>Input:</strong> Roll and pitch values (in degrees), as obtained by a combination of accelerometer + gyro</p> <p><strong>Output:</strong> PWM values for the motors, delivered using the Servo library's motor.write() function</p> <hr> <p><strong>Resolution</strong></p> <p>I resolved the problem. Here's how:</p> <ol> <li><p>The crux of the issue lay in the way I implemented the Arduino program. I was using the write() function to update the servo angles, which happens to accept only integer steps in the argument (or somehow responds only to integer input; 100 and 100.2 produce the same result). I changed it to writeMicroseconds() and that made the copter considerably steadier.</p></li> <li><p>I was adding up RPM on one motor while keeping the other at a steady value. I changed this to increase RPM on one motor while decreasing the opposing motor. 
That kinda keeps the total horizontal thrust unchanged, which might help me when I'm trying to get vertical altitude hold on this thing.</p></li> <li><p>I was pushing the RPM up to the max limit, which is why the quadcopter kept losing control at full throttle. There was no room for the RPM to increase when it sensed a tilt.</p></li> <li><p>I observed that one of the motors was inherently weaker than the other one; I do not know why. I hardcoded an offset into that motor's PWM input.</p></li> </ol> <p>Thanks for all the support.</p> <hr> <p><strong>Source Code:</strong></p> <p>If you're interested, here's the source code of my bare-bones PID implementation: <a href="https://github.com/agnivsen/NoobCopter" rel="noreferrer">PID Source Code</a></p> <p>Please feel free to test it on your hardware. Any contributions to the project would be welcome.</p>
Quadcopter PID tuning
<p>This is an old question but I see it repeated without a real answer. Sticking with a kinematic model only, here's what I would do:</p> <p><a href="https://i.stack.imgur.com/aSJzfm.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aSJzfm.jpg" alt="4 wheel model"></a></p> <p>The linear velocity of the robot is $\upsilon$ and the angular velocity of the robot is $\omega$.</p> <p>The distance to the ICR (not shown in the diagram) is $$ \frac{\upsilon}{\omega} $$ The velocity of the wheel about the ICR is $$ \omega \sqrt{ d^2+s^2} $$ The angle between our wheel and the ICR is $$ \alpha = \arctan\frac{s}{d} $$ We assume our wheels are the only thing creating motion about the ICR. $\upsilon_r$ is the linear velocity of the wheel. $$ \upsilon_r=\frac{\omega \sqrt{ d^2+s^2}}{\cos \alpha} $$</p> <p>When the Instantaneous Center of Rotation (ICR) is far from the vehicle, the model is the same as the differential drive model. When the ICR is near the center of the vehicle this model is more accurate.</p> <p>Just like the kinematic model for differential drives, this works well when friction is high and equal at all wheels. Rubber on concrete or hard floor for instance. </p>
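<p>If you just need something that drives the robot, the two-wheel formulas from the question still apply, with the same command sent to the front and rear wheel of each side. A minimal Python sketch (the track width is a made-up value; on a real skid-steer base the effective width is somewhat larger than the geometric one because the wheels slip):</p> <pre><code>def cmd_to_wheels(v, omega, track_width=0.4):
    """Map (linear m/s, angular rad/s) to per-side wheel speeds.

    The same speed goes to the front and rear wheels of each side.
    """
    half = track_width / 2.0
    v_left = v - omega * half
    v_right = v + omega * half
    return v_left, v_right

print(cmd_to_wheels(0.5, 1.0))  # (0.3, 0.7)
</code></pre>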
2324
2014-01-17T17:33:12.067
|wheeled-robot|inverse-kinematics|wheel|
<p>I have a 4 wheeled differential drive robot, like the Pioneer 3-AT. There are only two motors, one for left wheels and one for right wheels.</p> <p>I want to send velocity commands to the robot, I'm using ROS and the standard commands are: [linear_velocity, angular_velocity].</p> <p>I need to convert them into left and right velocities, from literature if I had 2 wheels I should do this:</p> <p>$v_l = linear_v - \omega * |r|$</p> <p>$v_r = linear_v + \omega * |r|$</p> <p>where |r| is the absolute value of the distance from the wheels to the robot "center".</p> <p>How should I take into account that I have 4 wheels?</p>
Kinematics of a 4 wheeled differential drive robots
<p>I found the problem by switching the following</p> <pre><code> p = sqrt(x^2 + y^2 + z^2); Phi = acos(z/p); &lt;---- Phi not Theta Theta = atan2(y,x); &lt;---- Theta not Phi </code></pre>
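<p>For completeness, here is the corrected roundtrip as a small Python sketch, using the question's convention ($\phi$ measured from the z-axis, $\theta$ the azimuth about it); it assumes $p &gt; 0$:</p> <pre><code>import math

def to_spherical(x, y, z):
    p = math.sqrt(x*x + y*y + z*z)
    phi = math.acos(z / p)     # polar angle, measured from the z-axis
    theta = math.atan2(y, x)   # azimuth, measured about the z-axis
    return p, theta, phi

def to_cartesian(p, theta, phi):
    x = p * math.cos(theta) * math.sin(phi)
    y = p * math.sin(theta) * math.sin(phi)
    z = p * math.cos(phi)
    return x, y, z

print(to_cartesian(*to_spherical(1.0, 2.0, 3.0)))  # ~(1.0, 2.0, 3.0)
</code></pre>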
2326
2014-01-18T08:25:38.050
|sensors|sensor-error|
<p>I'm simulating a sensor in 3D. The sensor should determine ($p, \theta, \phi$) from the origin where $\theta$ is the rotation about z-axis and $\phi$ is the rotation about x-axis. The sensor is given position of a point($x, y, z$). This is what I did</p> <pre><code> p = sqrt(x^2 + y^2 + z^2); theta = acos(z/p); &lt;---- I'm guessing the problem here phi = atan2(y,x); </code></pre> <p>Now I need to get the Cartesian coordinates ($x',y',z'$). This is what I did</p> <pre><code> [p theta phi] = getmeasurement(x, y, z); x' = p*cos(theta)*sin(phi); y' = p*sin(theta)*sin(phi); z' = p*cos(phi); </code></pre> <p>The sensor is working fine at the beginning but at a particular point it behaves strangely. I have the state vector to compare it with the measurement. I'm guessing that $\theta$ might be the problem. </p> <hr> <p>Edit: </p> <p>I'm sorry for this mistake. The aforementioned calculations based on the following picture</p> <p><img src="https://i.stack.imgur.com/BYMzR.png" alt="spherical Coordinates"></p> <p>So, the point will rotate first about z-axis ($\theta$) and then rotate about x-axis ($\phi$)</p>
problem with simulated sensor in Matlab?
<p>Has anyone tried hooking a soloshot camera base to a quadcopter? That could give you accurate mapping of an area without requiring a GPS. Just half a minute of scanning with a camera and the base can track anyone with a tag. Finding some way to take that data and map flight paths would be amazing.</p>
2330
2014-01-20T02:46:22.127
|quadcopter|
<p>I have a quadcopter equipped with PX4FMU board. You may download its datasheet from <a href="http://pixhawk.org/_media/modules/px4fmu-manual-v1.6.pdf" rel="nofollow">HERE</a>. I wonder whether it is possible to program the quadcopter to autonomously follow a path like circular motion without any human interference. Are the built-in sensors enough for this task? I also wonder how accurate the built-in GPS is? I read that it gives coordinates with a radius of 5m as error.</p>
Is it possible to achieve fully autonomous route following using PX4FMU module?
<p>A DC or AC gearmotor is all you need. If you're asking about a stepper because of the change in direction, you can use a crank mechanism to convert motion from rotary to reciprocating linear, like the one in the image below:</p> <p><img src="https://i.stack.imgur.com/GKuEh.gif" alt="a crank mechanism"></p>
2348
2014-01-22T11:42:33.083
|motor|stepper-motor|mechanism|
<p>I want to make a copy of this machine <a href="https://www.youtube.com/watch?v=16x-_9pcWXU" rel="nofollow">Fisher Price Soothing Motions™ Glider</a> and I'm wondering what motor to use? Simple DC motor with appropriate gearbox (slow rpm) or stepper motor?</p> <p><a href="http://www.youtube.com/watch?feature=player_embedded&amp;v=KvlVKIuXGuI" rel="nofollow">Here</a> is another instance of this idea.</p>
What motor to use for reciprocating (reversive) movement
<p>According to <a href="https://www.ic.gc.ca/eic/site/smt-gst.nsf/vwapj/smse-005-11-bell-apndix3.pdf/$FILE/smse-005-11-bell-apndix3.pdf" rel="nofollow noreferrer">this report</a>, there is a fairly predictable measurement of penetration loss in buildings as a function of radio frequency: <a href="https://i.stack.imgur.com/uRV10.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uRV10.png" alt="Measured building penetration loss versus frequency for residential buildings"></a></p> <p>Based on that, here's a chart I made that lists the wireless technologies found in <a href="http://books.google.com/books?id=lRa9AQAAQBAJ" rel="nofollow noreferrer">Integration of ICT in Smart Organizations</a> (specifically, the chart on page 207):</p> <p><a href="https://i.stack.imgur.com/53nIU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/53nIU.png" alt="Chart of wireless penetration and data rates"></a></p> <p>You will probably want a technology toward the upper-right of the chart, but the decision to favor greater speed or greater signal penetration will be yours to make.</p> <p>Another useful resource might be the <a href="http://en.wikipedia.org/wiki/Spectral_efficiency#Comparison_table" rel="nofollow noreferrer">spectral efficiency comparison table on wikipedia</a>, which gives a more basic value of "kbps per MHz". </p>
2357
2014-01-23T14:28:21.837
|wireless|
<p>Robotics enthusiasts!</p> <p>I'm a member of a team which has to develop a mobile rescue robot to cooperate with firemen (e.g. on earthquake sites).</p> <p>The problem we have is the connection between a commander post and the robot. The robot has to enter buildings, so it is desirable that the connection can go through several decimeters of walls and have a reach of 50-100 meters. On the other hand, we need to send a lot of data (camera images, point clouds, maps) which can easily eat 10 Mbps or more.</p> <p>At this time we use 2.4 GHz WiFi for this connection. As for the speed, with direct visibility it seems to be sufficient, but only when a single robot is operating (we can use up to 3 non-overlapping channels, so in theory 3 robots can work together; but usually the environment is messed up with home routers). We need at least 5 robots to be operating simultaneously.</p> <p>We have tried 5 GHz WiFi, but it has problems with penetrating walls, so it can only be used for UAVs.</p> <p>My idea was to use some mobile connection technology like LTE. I found that LTE can be run on 800 MHz, which could be great for the wall penetration performance. I also found that LTE's theoretical upload speeds (for clients) are 70 Mbps, but nobody says whether that is on 2.6 GHz and how it would change when running LTE on 800 MHz.</p> <p>Moreover, we cannot rely on some provider's coverage. I have found that you can build your own LTE transmitter for about €2000, which seems interesting to us. Maybe it is possible to build it even cheaper. But we think both 2.6 GHz and 800 MHz are regulated frequencies. However, the cooperation with firefighters could persuade local regulators to give us an exception to set up our own small LTE base station.</p> <p>And now to the question: do you think such a setup would give better results than using WiFi? Or do you know of any other technologies that would help us to either increase the bandwidth or the wall penetration performance? What are their cons and pros?</p>
Cons and pros of wireless technologies for rescue robot
<p>Well, since the manipulator is removable, so you can attach other things to the valve shaft, you have many options.</p> <h1>Mechanical coupling</h1> <p>You could couple a rotary actuator to the shaft by means of a shaft coupler. There are many types; this is a <a href="https://c10645061.ssl.cf2.rackcdn.com/product/icongo/icg_39276_motorshaftcoupler.jpg" rel="nofollow noreferrer">simple one</a>, but you probably would not get the actuator axle fully aligned with the valve shaft, so a coupler that permits some misalignment is desirable, like these:</p> <p><a href="http://www.abssac.co.uk/uploads/products/cat_pp651_coupling%20jaw%202.jpg" rel="nofollow noreferrer">rubber shaft coupler http://www.abssac.co.uk/uploads/products/cat_pp651_coupling%20jaw%202.jpg</a></p> <p>Every coupler has its own specifications and types of misalignment it will support.</p> <p>You can use a belt or chain transmission too. Then you need to attach a pulley or sprocket to the valve axle.</p> <p><strong>Chain and sprocket transmission</strong> <br><img src="https://www.excelcalcs.com/images/repository/Chain%20Sprocket%20to%20Sprocket%20Center%20Distance.png" alt="chain and sprocket transmission" /></p> <p><strong>Synchronous belt transmission</strong> <br><img src="https://1.bp.blogspot.com/-wR4ZIpSVgik/ThOS7lOlUpI/AAAAAAAAAC0/1GU_5Tc5qsU/s1600/7-5-2011%2B4-26-04%2BPM.png" alt="Synchronous belt transmission" /></p> <p>A possible problem with these two options is the lateral force the transmission could put on the valve shaft; as it was not made to support this, it will probably wear. <sup>One way to overcome this is to use a shaft coupler to a bearing that carries the transmission, so the lateral force is supported by the bearing.</sup></p> <h1>Electro-mechanical actuator</h1> <p>For actuating the valve, you have a couple of options.</p> <ul> <li>A stepper motor.</li> <li>A DC brushed motor with a reduction gearbox like this</li> </ul> <p><img src="https://i.stack.imgur.com/4xbRB.jpg" alt="DC brushed-motor with gearbox show" /> <br><sup>The gears are shown for illustration purposes; they are normally enclosed.</sup></p> <p>There are many types of reduction gearbox too; I will not go into them in this answer.</p> <p>You probably need some way of knowing the position of the valve shaft (feedback). It can be a multi-turn potentiometer for a simple analog output.</p> <h1>RC servo motor</h1> <p>If the valve needs less than 1 turn you can try to use an <strong>RC servo motor</strong>; they have a DC brushed motor, a reduction gearbox, a potentiometer for feedback, and a control circuit. You then feed it the angular position in a signal modulated by a technique known as PWM. Besides making sure the valve will operate at less than a 360° angle, you need to be sure the torque of the <strong>RC servo motor</strong> will be greater than needed. Also, metal gears and a metal shaft will provide a more robust system, considering the torque.</p> <h1>Assembly</h1> <p><img src="https://i.stack.imgur.com/TSwRl.gif" alt="assembly" /> <br><sup>@DiegoCNascimento the image can be used with credits</sup></p> <p>Here's a final assembly possibility. 
You should be sure the pipe can support the forces involved with a safety margin; if not, use the <em>wall mount</em> supports.</p> <hr /> <p>Note: some valves' main axles move up or down when rotated; if this is the case, the mechanical coupler should accommodate it.</p> <p><strong>I do not assume any responsibility for any damage or injury from mounting this; the user should pay full attention, as he/she is working with high-temperature and high-pressure parts that can pose danger.</strong></p>
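<p>If the RC servo route fits your valve, driving it from a small board is straightforward. A hedged Python sketch for a Raspberry Pi using RPi.GPIO; the pin number and the 2.5-12.5% duty-cycle range are assumptions, so check your servo's datasheet:</p> <pre><code>import time
import RPi.GPIO as GPIO

SERVO_PIN = 18                  # assumed BCM pin; change to match your wiring

GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)
pwm = GPIO.PWM(SERVO_PIN, 50)   # standard 50 Hz servo signal
pwm.start(7.5)                  # roughly mid travel on many servos

def set_angle(deg):
    """Map 0-180 degrees to an assumed 2.5-12.5% duty cycle."""
    pwm.ChangeDutyCycle(2.5 + (deg / 180.0) * 10.0)
    time.sleep(0.5)             # give the servo time to travel

set_angle(0)    # e.g. valve closed (depends on your linkage)
set_angle(90)   # e.g. half open
GPIO.cleanup()
</code></pre>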
2362
2014-01-24T03:57:14.450
|actuator|valve|
<p>I have a steam radiator at home and it has a valve similar to the picture below.</p> <p><img src="https://i.stack.imgur.com/Ukvhd.jpg" alt="enter image description here"></p> <p>Please note that the valve doesn't have grooves on top to attach things to.</p> <p>I want to build something to turn it on and off depending on the temperature at certain points in the room.</p> <p>I have that taken care of but cannot find a way to attach a actuator(actuator is the right word in the context I guess?) to turn the valve in both directions.</p> <p>Also It is a rented apartment so I would like to avoid making any modifications to the radiator itself.</p>
Actuator to control steam valve
<p>Here is one toy case where off-diagonal elements are non-zero.</p> <p>Consider a state vector that includes the position of both the left and right wheels instead of just a single position for the robot. Now if the left wheel has a position of 100m then you know the right wheel will also have a position of roughly 100m (depending on the axle length). As the left wheel increases position so will the right wheel, in general. It's not an exact 1:1 correlation, e.g. it doesn't hold exactly when the robot is turning, but overall it holds.</p> <p>So here the off-diagonal entry between left wheel x-position and right wheel x-position would be close to 1.</p>
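<p>A quick numpy illustration of that toy case (the step sizes and noise levels are arbitrary):</p> <pre><code>import numpy as np

rng = np.random.default_rng(0)
x_robot = np.cumsum(rng.normal(0.1, 0.02, size=1000))  # robot x over time
left  = x_robot + rng.normal(0, 0.01, size=1000)       # left wheel x
right = x_robot + rng.normal(0, 0.01, size=1000)       # right wheel x

print(np.cov(left, right))       # large off-diagonal entries
print(np.corrcoef(left, right))  # correlation very close to 1
</code></pre>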
2365
2014-01-24T23:01:56.657
|kalman-filter|noise|
<p>I'm struggling with the concept of covariance matrix. $$ \Sigma = \begin{bmatrix} \sigma_{xx} &amp; \sigma_{xy} &amp; \sigma_{x \theta} \\ \sigma_{yx} &amp; \sigma_{yy} &amp; \sigma_{y \theta} \\ \sigma_{\theta x} &amp; \sigma_{\theta y} &amp; \sigma_{\theta \theta} \\ \end{bmatrix} $$ Now, my understanding for $\sigma_{xx}$, $\sigma_{yy}$, and $\sigma_{\theta \theta}$ that they describe the uncertainty. For example, for $\sigma_{xx}$, it describes the uncertainty of the value of x. Now, my question about the rest of sigmas, what do they represent? What does it mean if they are zeros? I can interpret that if $\sigma_{xx}$ is zero, it means I don't have uncertainty about the value of x. </p> <hr> <p>Note, I'm reading <a href="http://mitpress.mit.edu/books/principles-robot-motion">Principles of Robot Motion - Theory, Algorithms, and Implementations</a> by Howie Choset et. al., which states that</p> <blockquote> <p>By this definition $\sigma_{ii}$ is the same as $\sigma_{i}^{2}$ the variance of $X_{i}$. For $i ≠ j$, if $\sigma_{ij} = 0$, then $X_{i}$ and $X_{j}$ are independent of each other.</p> </blockquote> <p>This may answer my question if the rest of sigmas are zeros however, I'm still confused about the relationship between these variables for example $x$ and $y$. When does this happen? I mean the correlation between them. Or in other words, can I assume them to be zeros?</p> <p>Another book namely <a href="http://www.amazon.ca/FastSLAM-Scalable-Simultaneous-Localization-Robotics/dp/3540463992">FastSLAM: A Scalable Method ...</a> by Michael and Sebastian which states </p> <blockquote> <p>The off-diagonal elements of the covariance matrix of this multivariate Gaussian encode the correlations between pairs of state variables.</p> </blockquote> <p>They don't mention when the correlation might happen and what does it mean?</p>
covariance matrix in EKF?
<p>Don't think roll and pitch. Think of the level flight case where the vertical axis of the 'copter is vertical. Now, if you want to move in any direction at all, create a plane that contains your vertical axis (so this is a vertical plane) and also contains your desired new direction. Two lines now define a plane. You need to tip the 'copter's vertical axis while keeping it on that plane.</p> <p>Looking down on this plane from high up on your z-axis, the plane looks like a line drawn on the x,y plane pointing to your desired new heading. The line has some angle which equals the change in heading you want; call it theta.</p> <p>The rotation will have a pitch and a roll component, and you must distribute the total amount of tilt between pitch and roll: sin(theta) for roll and cos(theta) for pitch. Does this make sense? Try it: what if you want to turn 90 degrees? Then sin(90)=1 and cos(90)=0, which means to go "due left" you roll. To move forward is pitch only, because sin(0)=0 and cos(0)=1. At a 45 degree heading, sin(45)=cos(45), so equal amounts of pitch and roll.</p> <p>Now you can go in any direction: take the total rotation and proportion it between pitch and roll by multiplying the total rotation by sin(theta) for roll and by cos(theta) for pitch.</p>
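<p>A tiny Python sketch of this proportioning. The signs depend on your roll/pitch conventions, and the optional <code>yaw</code> argument just re-expresses a world-frame heading in the body frame, which is what the original question asks for:</p> <pre><code>import math

def tilt_to_roll_pitch(tilt, heading, yaw=0.0):
    """Split a total lean angle into roll and pitch toward `heading`.

    tilt:    total lean angle (rad)
    heading: desired travel direction in the world frame (rad)
    yaw:     current yaw, so the split happens in the body frame
    """
    theta = heading - yaw
    roll = tilt * math.sin(theta)
    pitch = tilt * math.cos(theta)
    return roll, pitch

print(tilt_to_roll_pitch(math.radians(10), math.radians(90)))
# roll of ~10 degrees, pitch of ~0: "due left" is pure roll, as above
</code></pre>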
2371
2014-01-25T16:06:12.857
|quadcopter|gyroscope|movement|dynamics|
<p>I'm trying to make a quadcopter move laterally at a certain angle. I've been able to find the proper roll and pitch angles for this (that work with a yaw of 0°); how would I adjust these values to compensate for a different yaw?</p>
Compensating for Yaw in Lateral Quadcopter Movement
<p>You can use all versions of the Kinect for Windows SDK, even with an Xbox version. Some parts are limited and you require a Kinect for Windows in commercial scenarios (more information on <a href="http://www.kinectingforwindows.com/2012/09/07/what-is-the-difference-between-kinect-for-windows-kinect-for-xbox360/" rel="nofollow">my blog</a>).</p> <p>In your scenario you should be able to use the official <strong>Kinect for Windows SDK v1.8</strong> for the speech/sound scenario. If you are going to do speech/voice recognition, in <a href="http://www.kinectingforwindows.com/2013/12/01/kinecting-ar-drone-pt-i-basic-ui-kinect-camera-voice-recognition/" rel="nofollow">this post</a> I set up a basic version to control my drone, which should be the same.</p> <p>You don't need to use Visual Studio, but C#/C++/VB.net require a compiler to run the code.</p> <p><strong>CONCLUSION -</strong> The official SDK will fit your needs, incl. support for speech/voice, but if you want to deploy the application in public you'll need the Kinect for Windows sensor for the public license.</p> <p><em>More questions? --> hello[at]kinectingforwindows[dot]com</em></p>
2376
2014-01-26T14:06:19.433
|kinect|
<p>My application is basically about sound source localization and visual servoing. I selected the Kinect as the main hardware.</p> <p>I already know the basic differences between Kinect for <em>Windows</em> and Kinect for <em>Xbox</em>. I cannot get the Windows version in my country (no reseller here in Turkey), but the Xbox version is available in stores. I am not sure about the problem-specific software selection.</p> <p>I found out that the latest Kinect SDK supports sound source localization (and beamforming) using the built-in microphone array. <strong>Can I use that SDK with the Xbox version? Or is there another SDK for the Xbox that has the same support?</strong> I am not sure, because I also read that OpenNI does not provide the best audio API.</p> <p>I will also apply some processing to the image &amp; depth outputs, so I will be using OpenCV. I also want to use Qt for threading, GUI etc. So, another question: <strong>Is it possible to use the official Microsoft Kinect SDK within another IDE, not Visual Studio?</strong></p>
Kinect for Xbox: SDK selection
<p>The simplest sensor you can build is a weight on a string: lower it down until the tension of the string drops. But I assume you want sonar, so just go buy one; they are dead easy to interface to.</p> <p>You can buy these at marine hardware stores. They are not cheap. Look for a depth finder that has "NMEA" output. You may have heard of NMEA as the data format that GPS receivers use to send location, but it is a common format also used by wind, depth and speed sensors.</p> <p>The NMEA format is just 4800 baud serial ASCII data with line endings, and the data is typically sent once per second. You can read the data on a PC if it has a serial port and a terminal program.</p> <p>The sensor itself looks like a plumbing fitting. You drill a 1 or 2 inch hole in the boat and install the sensor facing down; a wire then runs to the main unit. In theory you could build electronics that "ping" the sensor and wait for a return, but it's easier to buy a complete unit with NMEA output.</p> <p>If you try to roll your own, you have to deal with thermoclines, soft bottoms, rocks and even fish, and do a lot of signal processing to find a reliable depth. This is <strong>not</strong> like sonar in air, where there is a <strong>huge</strong> difference between a solid wall and air: soft mud is acoustically not that different from water. An NMEA unit is very easy to use by comparison; you pay the big price for the DSP.</p> <p>My favorite store is West Marine. Google it and I bet there is one near you if you live near the ocean. I recommend them because of their high level of customer support. They will point you at the best product and answer questions like "what's NMEA?"</p>
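<p>Reading the NMEA stream from such a unit is only a few lines of code. A minimal sketch using pyserial (the port name is an assumption for a typical USB-serial adapter; depth sounders usually emit the standard <code>DBT</code> "depth below transducer" sentence, though some use <code>DPT</code> instead):</p> <pre><code>import serial  # pyserial

# Port name is an assumption; use whatever your adapter enumerates as.
ser = serial.Serial('/dev/ttyUSB0', baudrate=4800, timeout=2)

def parse_dbt(sentence):
    """Return depth in meters from a $--DBT sentence, or None.

    Format: $SDDBT,&lt;feet&gt;,f,&lt;meters&gt;,M,&lt;fathoms&gt;,F*&lt;checksum&gt;
    """
    if not sentence.startswith('$') or 'DBT' not in sentence[:7]:
        return None
    fields = sentence.split('*')[0].split(',')
    try:
        return float(fields[3])   # the meters field
    except (IndexError, ValueError):
        return None               # field is empty when there is no bottom lock

while True:
    raw = ser.readline().decode('ascii', errors='replace').strip()
    depth = parse_dbt(raw)
    if depth is not None:
        print('Depth: %.1f m' % depth)
</code></pre>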
2379
2014-01-27T15:26:32.803
|arduino|microcontroller|underwater|
<p>Just like a fish finder finds the depth of the water directly beneath it, I'm trying to find a sensor for the Arduino that does the same. I'd like it to measure down to at least 20 ft, with good accuracy (+/- 10 or 15 cm). All the threads and info I've been finding are about water level sensors, not water depth. So does anyone know of a sensor like this and where I can find one?</p>
Water depth Arduino sonar sensor
<p>You won't be able to do this with a Proto X, since (a) you don't have access to the board firmware, and (b) the hardware cannot be modified.</p> <p>If you wish to experiment with this, a good first step will be to get a <a href="http://witespyquad.gostorego.com/flip-mwc-flight-controller.html" rel="nofollow">Flip 1.5 board</a> and bluetooth module. For $24, this will give you a complete open source system that includes PC or Android bluetooth or USB communication.</p>
2387
2014-01-28T06:53:04.840
|quadcopter|programming-languages|
<p>I have bought a really small <a href="http://www.ebay.com/itm/like/231129032058?lpid=82" rel="nofollow noreferrer">Proto X quad</a> (it comes with a joystick transmitter for flying it) and I am looking for a way to send a signal to it from my computer.</p> <p><img src="https://i.stack.imgur.com/381Yl.jpg" alt="enter image description here"></p> <p>Can anyone point me to how I can turn on one of the propellers of this quad using my laptop? (I have decent knowledge of Python/MATLAB/C#, but hardware is a completely new world to me.)</p>
How can I start programming a Proto X quad?
<p>LIDARs are expensive because they normally rely on the time-of-flight of light to measure the distance of each point. They are <em>simple</em> in theory; in practice they require very good electronics and sensor design.</p> <p>I don't think LIDAR is the next step, but stereo vision could be. This is not really about OpenCV: most optical vision system designers forget the basics, which is a good camera with good optics. That is your source of information; if you have a bad one, you will have a hard time getting good results. This means at least ~$200 for the camera and lenses, with RAW image output, and it goes on from there.</p>
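<p>To make the "good electronics" point concrete, a time-of-flight unit computes range from the round-trip time of a light pulse:</p> $$ d = \frac{c\,\Delta t}{2} $$ <p>With $c \approx 3 \times 10^8$ m/s, resolving range to 1 cm means resolving $\Delta t \approx 67$ picoseconds. Timing electronics at that scale, multiplied across every measured point and (in scanning units like the Velodyne) across many laser channels, is a large part of where the cost goes.</p>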
2394
2014-01-28T17:46:14.847
|arduino|computer-vision|lidar|
<p>I am surprised by the price range of LIDAR units, considering the apparent simplicity of the design. I am trying to make a simple project that requires LIDAR for object recognition etc. I would rather not use visual recognition such as OpenCV.</p> <p>Regardless of that, I am trying to understand why LIDAR solutions are so expensive. For example, this small LIDAR sensor goes for $20,000: <a href="http://velodynelidar.com/lidar/hdlproducts/hdl32e.aspx" rel="nofollow">http://velodynelidar.com/lidar/hdlproducts/hdl32e.aspx</a></p> <p>I strongly believe that LIDAR is the next step in robotic applications, but I am not sure why it is so exclusive.</p> <p>I have seen a few projects that go for around $200, but their performance is very poor.</p> <p>I hope you can answer what makes a LIDAR so expensive and what cheap systems a hobbyist can afford.</p>
LIDAR solutions
<p>To add to Demetris' answer: if it has tank-like treads or even 4 fixed wheels, then you can also call it a <em>skid steer</em> vehicle.</p>
2396
2014-01-28T19:12:55.640
|mobile-robot|differential-drive|
<p>Is there a generic name for the category of robots that move using two opposing wheels or tank-like treads?</p>
Generic name for two-motor wheeled/tracked robots?
<ol> <li><p>Place a disk on the large shaft.</p></li> <li><p>Depends how 'professional' you want to be.</p></li> </ol> <p>A 1mm hole is not unreasonably hard to drill using a small hobby drill press, and you can do it with many battery drills with due care; the chuck's minimum drill size is the main limitation. Down to about 0.8mm is achievable by mere mortals.</p> <p>Substitute imperial units below if the mood seizes you.</p> <p>(2a) You can push the 1mm shaft into a material that it will penetrate and glue it in place with either epoxy or a good superglue. Go from there.</p> <p>(2b) If you get a piece of, say, 4mm diameter brass rod you can:<br> File one end flat; repeat until good enough.<br> Mark the centre; there are many guides on how to do this on the net.<br> Centre-punch the centre.<br> Drill a 1mm hole in the end. A drill held in a horizontal clamp may help. Or not.<br> Drill and tap a 2mm or 2.5mm hole at right angles to the long axis.<br> Add a grub screw or screw, 2mm or 2.5mm to suit.<br> You now have a shaft extension and half a coupler. Use the other end as seems good.</p> <p>(3) Mark a dark line on the shaft. Use an optical reflective detector to produce a tach signal.</p>
2402
2014-01-24T22:09:59.440
|motor|
<p>I have a couple of these DC motors:</p> <p><a href="http://www.pololu.com/product/2202" rel="nofollow">http://www.pololu.com/product/2202</a></p> <p>They have an extended motor shaft that sticks out the back and is 1mm in diameter.</p> <p>I'm having trouble thinking of the best way to attach an encoder disk to this shaft.</p> <p>I thought of getting a custom wheel 3D printed and making the opening 0.9mm so it would be a tight fit, but I don't know if that is just too small.</p> <p>I also thought of taking the encoder disk from a PC mouse and drilling a 1mm / 0.9mm hole, but that has the same problem, with the added difficulty of drilling a small hole in a small part.</p> <p>So I wondered if anyone knows a better way, or of a ready-made disk to attach, as I just can't find anything for a 1mm shaft.</p>
How to: Attach wheel encoder to motor?
<p>Look at the bottom of the page on your first link: <a href="http://www.hobbytronics.co.uk/baby-orangutan-328" rel="nofollow">http://www.hobbytronics.co.uk/baby-orangutan-328</a>. under "Documents relating to the Baby Orangutan B-328".</p> <p>There you will find links to the <a href="http://www.hobbytronics.co.uk/datasheets/baby_orangutan_b.pdf" rel="nofollow">Baby Orangutan B Users Guide</a> as well as other useful links.</p> <p>The user's guide contains the pin descriptions you need on page 5.</p>
2409
2014-01-30T17:41:21.873
|microcontroller|
<p>I am new to robotics and planning my first purchase. I'm looking at the Baby Orangutan B-328. Here is information about the microcontroller: <a href="http://www.hobbytronics.co.uk/baby-orangutan-328" rel="nofollow noreferrer">http://www.hobbytronics.co.uk/baby-orangutan-328</a>.</p> <p>The pin headers come unmounted, so you have to do the soldering yourself. My problem is that I don't know what the pin connections are for. Here is a picture of the board:</p> <p><a href="http://www.ca.diigiit.com/image/cache/data/pololu/1220/Pololu-Baby-orangutan-b-328-6-500x500.png" rel="nofollow noreferrer">http://www.ca.diigiit.com/image/cache/data/pololu/1220/Pololu-Baby-orangutan-b-328-6-500x500.png</a>.</p> <p>Could someone briefly tell me what the different connections are for, or link a website that does?</p>
Connections on a Baby Orangutan B-328 board
<p>There are two good search terms that come to mind. The first one is a "candy machine mechanism", or "dispenser mechanism" (e.g. <a href="http://www.woodworkingformeremortals.com/2012/10/make-candy-dispenser.html" rel="nofollow noreferrer">http://www.woodworkingformeremortals.com/2012/10/make-candy-dispenser.html</a>). There are several models of off-the-shelf pet food dispensers and cereal dispensers that use this method.</p> <p><a href="https://i.stack.imgur.com/Ab4YP.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ab4YPm.jpg" alt="cereal dispenser"></a></p> <p>The second term would be a <a href="http://en.wikipedia.org/wiki/Screw_conveyor" rel="nofollow noreferrer">screw conveyor</a> (a.k.a. "auger dispenser"/"food auger"/"grain auger"), which describes a more continuous-flow version of what you are describing.</p> <p><img src="https://upload.wikimedia.org/wikipedia/commons/2/22/Archimedes-screw_one-screw-threads_with-ball_3D-view_animated_small.gif" alt="screw conveyor"></p> <p>This type of dispenser controls the volume that is delivered by rotating the auger by a fixed amount, not by dumping a fixed-size container.</p> <p>Beyond those, there are some homemade concepts that use some variation on the idea, like this one: <a href="http://omahamakergroup.org/2011/06/24/evening-project-automatic-cat-food-dispenser/" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UsIeTm.jpg" alt="sliding cat food dispenser"></a></p>
2411
2014-01-30T19:03:33.947
|design|
<p>I'm a complete newbie trying to build a simple robot that dispenses candy (M&amp;M's, Skittles, etc.). However, since I'm not familiar with the field, I'm having a hard time googling, because I don't know the correct terms to search for. I'm looking for a part with which to build a robotic 'trap door' of sorts that will open for a specified amount of time to release candy. What parts can I use, and what are they called? I've tried "robotic lever", "robotic door", etc. with no luck.</p>
Robotic part to dispense candy
<p>The short answer is that learning robotics is easier <em>with</em> the math than without. </p> <p>Here are a few areas of robotics and the field of mathematics that make them manageable:</p> <ul> <li>Measuring data from sensors, with some level of uncertainty: <strong>statistics</strong></li> <li>Solving problems through reasoning: <strong>formal logic</strong></li> <li>Kinematics: <strong>linear algebra</strong></li> <li>Control theory: <strong>calculus</strong></li> <li>Algorithms: <strong>theory of computation, combinatorics</strong></li> <li>Mechanical design: <strong>physics</strong></li> <li>Sonar design: <strong>acoustics</strong></li> <li>Cooperative robotics, path planning: <strong>graph theory</strong></li> </ul> <p>This is a very incomplete list.</p> <p>Being able to predict or explain why a system works the way it does is very difficult without the mathematical language to describe it. The ideas are complicated, and you'd be hard pressed to find a more understandable way to communicate all but the simplest ideas in robotics without a mathematical language to rely on. (Not to mention that it's a more objective process to simplify mathematical equations than an english description of the same system.)</p> <p>You may find that studying robotics will make it easier to learn the math, because it will give you a practical application for what's ordinarily just a bunch of theory. </p>
2413
2014-01-30T21:41:35.710
|software|
<p>I am studying informatics and I am interested in doing a Masters in Robotics. While checking out some universities and their courses, I saw that robotics involves analysis and a lot of math.</p> <p>Why is that?</p>
Why is analysis required to study robotics?
<p>In robotics the <em>configuration space</em> is <strong>exactly</strong> the <em>joint space</em> of the manipulator.</p> <p>By contrast, the space into which the forward kinematics maps the joint configuration is called the <em>task space</em> or, equivalently, the <em>operational space</em>.</p>
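<p>In symbols: if $q \in \mathcal{C}$ is the vector of joint variables (an element of the configuration/joint space), the forward kinematics is the map</p> $$ f : \mathcal{C} \rightarrow \mathrm{SE}(3), \qquad x = f(q), $$ <p>where $x$, the end-effector pose, lives in the task (operational) space. For an $n$-joint arm, $\mathcal{C}$ is parameterized by the $n$ joint variables, while $x$ is the position and orientation of the end effector.</p>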
2422
2014-02-01T14:01:58.290
|kinematics|motion-planning|forward-kinematics|
<p>For a robotic manipulator like the one in the picture:</p> <p><img src="https://www.learnartificialneuralnetworks.com/images/rcfig1.jpg" alt="robot arm"></p> <p>are the configuration space and the joint space equivalent? I am trying to understand the difference between the two...</p>
Is the configuration space the same as the joint space?
<p>So, the calibration code was essentially correct. The problem was in my Makefile dependency handling.</p> <p>For previous tests I used a range from 0.5 to 2.5ms, and that was clearly too much for this ESC. I tried to decrease the range to 0.8ms - 2.2ms, but my Makefile didn't notice the change and kept uploading the old version ... After a clean rebuild the ESC calibrates without any problems.</p> <p>As for the oscilloscope check... I mixed them up :-)</p>
2426
2014-02-02T00:37:02.163
|c|calibration|esc|avr|
<p>I have a <a href="https://hobbyking.com/hobbyking/store/__14630__Turnigy_TrackStar_18A_1_18th_Scale_Brushless_Car_ESC_.html" rel="nofollow">Turnigy ESC</a> and I am controlling it from an AVR. Now I need to calibrate it to set the range of the input. With a servo tester I managed to calibrate it without any problems, more or less by following the user guide, but when I try to do the same procedure from code, the ESC starts beeping in some confused pattern and then enters programming mode.</p> <p>My code looks like this:</p> <pre><code>void calibrate_turningy_esc()
{
    // Hold maximum for 10 s, then minimum for 10 s, then neutral.
    // _delay_ms() wants a compile-time constant, hence the 1 ms loops.
    servo_set16(SERVO_RANGE_TICKS);
    for (uint16_t i = 0; i &lt; 10000; ++i)
        _delay_ms(1);

    servo_set16(-SERVO_RANGE_TICKS);
    for (uint16_t i = 0; i &lt; 10000; ++i)
        _delay_ms(1);

    servo_set16(0);
    for (uint16_t i = 0; i &lt; 10000; ++i)
        _delay_ms(1);
}
</code></pre> <p>where +SERVO_RANGE_TICKS is a 2.2ms pulse length, -SERVO_RANGE_TICKS is a 0.8ms pulse length and 0 is 1.5ms. The timeouts of 10s were measured during the manual calibration with a stopwatch.</p> <p><strike>I have checked with an oscilloscope that the output servo signal looks the way I would expect it -- 10 seconds of 2.2ms pulses, 10 seconds of 0.8ms pulses and then 1.5ms pulses.</strike> Edit: I made a mistake here, see my answer.</p> <p>Do you have any idea what to change to calibrate the ESC?</p>
How to programmatically calibrate a Turnigy ESC?
<p>Could you perhaps elaborate on your performance requirements?</p> <p>I'm quite confident you will be able to get Tomcat running, <a href="http://www.youtube.com/watch?v=aUCa8GQvsjo" rel="nofollow noreferrer">here's a video of someone getting Tomcat 5.5 running</a> on a beagleboard c4 (note the video is dated Feb 2011). Also remember that you can run a normal <a href="https://wiki.ubuntu.com/ARM" rel="nofollow noreferrer">Ubuntu distro</a> on the beaglebone, so installing Tomcat should be as easy as simply going 'sudo apt-get install tomcat7'.</p> <p>The big issue is performance. If you're planning to use the beaglebone as a server handling thousands of requests a second you're probably going to have a problem.</p> <p>The following table indicates the performance difference between embedded application processors and a desktop PC (the benchmark calculates pi to 500 digits, <a href="http://upon2020.com/blog/2013/07/comparing-raspberry-pi-beaglebone-and-mirabox/" rel="nofollow noreferrer">see Johannes Ernst's blog for more info</a>).</p> <p><img src="https://i.stack.imgur.com/Dpe8k.png" alt="enter image description here"></p> <p>The above benchmark obviously does not characterize other performance areas important in running a server (e.g. disk or network I/O), but let's assume you're looking at roughly a tenfold increase in processing time.</p> <p>For instance, if a single Tomcat transaction usually took 10 ms to complete, it could now take 100 ms. If you're only using the beaglebone to serve a small number of consumers (infrequently), and responsiveness isn't as critical, it should work fine.</p>
2435
2014-01-16T13:13:49.980
|beagle-bone|
<p>I am working on a project that uses a BeagleBone, and we need a JSP container to run on it. I was thinking of Tomcat, but wanted to know if Tomcat is suitable for embedded systems.</p> <p>Is it too resource-heavy? If so, are there lighter JSP containers? I only know of Tomcat and Jetty.</p>
JSP container for embedded systems
<p>One is internal, one is external.</p> <p>Self-interference refers to instances where something like a robot arm (with many degrees of freedom) may attempt to move in a path that crosses part of its own body. Your <code>CollisionDetector</code> is likely keeping track of the joint angles in a planned arm movement and seeing if any of them result in such a condition. The robot doesn't need any information about its environment to do this, just a reliable measurement of its own position (and possibly the shape of whatever it may be gripping). </p> <p>"Collision Avoidance" usually refers to the detection of transient objects with externally-facing sensors. For example, a robot has planned a path based on a map of a room but other robots (or people) might be moving around in that same space; the robot detects them with some sort of sensor, and adjusts its planned path (or just stops) until the risk of collision has gone away. This is usually more difficult than detecting self-interference, because modeling the external environment is a bit more complicated than modeling a robot's internal state.</p>
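<p>A toy sketch of the self-interference side, just to make the distinction concrete (a planar arm with links modeled as sampled line segments; real systems use proper mesh or capsule collision checks, but the point is that the test consumes only the robot's own joint state):</p> <pre><code>import numpy as np

def link_points(joint_angles, link_len=1.0, samples=8):
    """Planar forward kinematics: sampled points along each link."""
    pts, pos, ang = [], np.zeros(2), 0.0
    for q in joint_angles:
        ang += q
        end = pos + link_len * np.array([np.cos(ang), np.sin(ang)])
        ts = np.linspace(0.0, 1.0, samples)[:, None]
        pts.append(pos + ts * (end - pos))
        pos = end
    return pts

def self_interferes(joint_angles, clearance=0.1):
    """True if any two non-adjacent links come within `clearance`.
    Note: no external sensing involved, only the joint state."""
    links = link_points(joint_angles)
    for i in range(len(links)):
        for j in range(i + 2, len(links)):   # skip adjacent links
            d = np.linalg.norm(links[i][:, None, :] - links[j][None, :, :], axis=2)
            if d.min() &lt; clearance:
                return True
    return False

print(self_interferes([0.0, 2.8, 2.8, 0.0]))   # folded back on itself: True
print(self_interferes([0.0, 0.2, 0.2, 0.2]))   # gentle arc: False
</code></pre> <p>Collision avoidance, by contrast, would replace the known link positions with obstacle positions estimated from sensors, which brings in all the uncertainty of perception.</p>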
2456
2014-02-07T13:57:54.403
|movement|
<p>While explaining a controller module named <code>CollisionDetector</code>, someone told me that it only checks self-<code>interference</code> and moves accordingly, without detecting <code>collision</code>. To me both sound the same. How are they different?</p>
How are Interference Avoidance and Collision Avoidance different?
<p>It turns out that the statement in the Roomba SCI spec sheet about the baud rate is either outdated or incorrect. The Roomba 5XX series (and 7XX series) communicates at 115200 baud, and that's what was wrong.</p> <p>Wiring: I wired the TXD of the cable to the RXD of the Roomba (so the orange wire goes to pin 3 on the Roomba), and vice versa. I also wired the ground, and that was all that was needed.</p>
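<p>For anyone hitting the same wall, the only change needed to the snippet in the question is the baud rate (the port name is still whatever your FTDI cable enumerates as). A minimal Python 2 sketch, matching the question's code:</p> <pre><code>import serial

# 500/700-series Roombas talk at 115200 baud, not the 57600 in the old SCI spec
ser = serial.Serial('/dev/ttyUSB0', baudrate=115200)

ser.write(chr(128))  # START - begin communication
ser.write(chr(131))  # SAFE  - enable safe mode
ser.write(chr(134))  # SPOT  - start a spot clean
</code></pre>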
2462
2014-02-09T23:32:02.023
|mobile-robot|software|wheeled-robot|
<p>I have a Raspberry Pi with <a href="https://www.sparkfun.com/products/9718" rel="nofollow">this</a> FTDI cable and a Roomba 560. The Roomba has an SCI port to allow for control of the Roomba via serial. I installed the PySerial library on the Pi and send valid commands to the Roomba, but the Roomba doesn't respond. I have the TXD of the cable attached to the TXD of the Roomba, the RXD on the cable wired to the RXD on the Roomba, and a ground on the cable wired to the ground on the Roomba (everything in its respective port). I do not have power going from the cable to the Roomba or vice versa.</p> <p>What I can't figure out is why the commands aren't working. There's no error message upon running the Python code. <a href="http://www.irobot.com/images/consumer/hacker/Roomba_SCI_Spec_Manual.pdf" rel="nofollow">This</a> is the information sheet for the Roomba's SCI port.</p> <p>Code:</p> <pre><code>import serial

ser = serial.Serial('/dev/ttyUSB0')

# this is the default Roomba baud rate
ser.baudrate = 57600

# Start SCI - puts into safe mode
ser.write(chr(128))

# Enable full mode
ser.write(chr(131))

# Spot clean
ser.write(chr(134))

print 'Done'
</code></pre>
Connecting a Raspberry Pi to a Roomba via an FTDI cable
<p>The short answer is no.</p> <p>By amplifying a GPS signal, you are creating a multipath situation. Instead of the proper signal path from the satellite directly to the receiver, you would be creating a <a href="http://en.wikipedia.org/wiki/Triangle_inequality" rel="nofollow noreferrer">triangle inequality</a> (unless your amplifiers just happened to be exactly in line between each satellite and the receiver).</p> <p><a href="https://i.stack.imgur.com/2XFCp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2XFCp.png" alt="enter image description here"></a></p> <p>You might decide that having an imperfect signal inside the building is better than no signal at all. However, the bigger issue is that you'd be magnifying the multipath effects for any GPS receivers <em>outside</em> the building.</p> <p>You should look at an alternate GPS-like technology, such as radio-frequency <strong>TDOA</strong> (time-difference of arrival). </p>
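<p>In symbols: for satellite $S$, amplifier $A$, and receiver $R$, the triangle inequality gives</p> $$ d(S,A) + d(A,R) \;\ge\; d(S,R), $$ <p>with equality only when the amplifier lies exactly on the line between the two. The receiver times the signal over the left-hand path but computes its position as if the signal had travelled the right-hand one, so each pseudorange comes out biased long, and by a different amount for each satellite, which is what corrupts the position fix.</p>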
2470
2014-02-11T12:38:16.860
|wireless|gps|
<p>I'm a multimedia developer who is searching for a way to get a GPS signal inside buildings/structures. Is amplification a reliable way to fix this GPS signal issue?</p> <p>Will a "GPS amplifier" work as well as using GPS outside?</p>
GPS Amplifier - Is this reliable?
<p>Four fixed-pitch propellers is one of the simplest configurations. The motors are mounted rigidly, and the control of the vehicle is 100% electronic, through controlling the speeds of the propellers ( <a href="http://en.wikipedia.org/wiki/Quadcopter#Flight_control" rel="nofollow">http://en.wikipedia.org/wiki/Quadcopter#Flight_control</a> ).</p> <p>With the classical two-rotor helicopter configuration you need pitch control for the main rotor. If the main rotor has two blades, you need two additional actuators, and you end up with a more complex and expensive design. I think you could get by without pitch control of the tail rotor, but I'm not sure. ( <a href="http://en.wikipedia.org/wiki/Helicopter_flight_controls#Flight_conditions" rel="nofollow">http://en.wikipedia.org/wiki/Helicopter_flight_controls#Flight_conditions</a> )</p> <p>A tricopter would typically need tilt control of the rear motor. ( <a href="http://en.wikipedia.org/wiki/Multirotor" rel="nofollow">http://en.wikipedia.org/wiki/Multirotor</a> )</p> <p>The basic physics of the situation is that in order to move freely around in 3D space you require certain degrees of freedom. A quadcopter is arguably the simplest mechanical design that provides those degrees of freedom. It's a trade-off.</p> <p>As for additional propellers, the trade-off is often the inertia of a single propeller. Higher inertia means it's harder to change the speed of the propeller, so the vehicle is less agile and less controllable. Therefore, to generate more lift, more propellers are often a better choice than larger propellers (if you stick with the premise of fixed pitch and fixed mounts). Since adding a single extra prop doesn't really make much sense, you mostly see an even number of propellers.</p>
2476
2014-02-14T21:24:40.963
|quadcopter|
<p>I am planning on creating a quadcopter with the Arduino that I have. I have created a few land robots before but no aerial vehicles, so this is all new to me. I was looking on the Internet at different models, and I see that most have 4 propellers. I have also seen a few hexacopters and octocopters, but that many propellers can get a bit out of hand. Does having 4 propellers give the best and most efficient thrust-to-weight ratio, or will 3 propellers/arms work better?</p>
Benefits of the Number of Propellers
<p>I work with the ASUS Xtion as well. The best library that I have found so far, and the one I use daily, is the <strong>Point Cloud Library</strong> (<strong>PCL</strong>). You can find, download, and install it from <a href="http://pointclouds.org/" rel="nofollow">here</a>.</p> <p>After installing it and getting it to work, you can go to the tutorial section of the website, where there is a <a href="http://pointclouds.org/documentation/tutorials/ground_based_rgbd_people_detection.php#ground-based-rgbd-people-detection" rel="nofollow">very good tutorial</a> close to what you want, built around the <strong>pcl_people module</strong>.</p> <p>That tutorial explains how to detect people in <strong>RGB-D</strong> data with the <strong>pcl_people module</strong>. With the proposed method, people standing/walking on a planar ground plane can be detected in <strong>real time</strong> with standard CPU computation.</p>
2481
2014-02-18T11:03:31.773
|openni|
<p>I would like to write a simple program which processes the depth feed from an ASUS Xtion depth sensor using OpenNI. The sensor will be fixed like a CCTV camera and will count multiple targets moving around.</p> <p>The processing would involve some simple functions on each frame (background subtraction, level sets, a connected-components filter), and some multi-target tracking across frames. I have searched the web, but it is hard to see how best to get started (and I'm also quite new to programming in C).</p> <p>Can anyone recommend any existing code that can help me get started, or any libraries which would be suitable for this real-time application? Or perhaps there is some open-source code which already does such a thing? I would really appreciate any pointers from anyone with experience. Thanks!</p>
writing a simple program for processing RGBD video with OpenNI
<p>I'm assuming this is with respect to a Kalman filter?</p> <p>Mathematically, yes, it can be zero. The effect is that the model is assumed to be perfect, and estimation uncertainty is due 100% to the uncertainty in the initial state. In the extreme case, if you assume zero initial uncertainty you will never have any model uncertainty, and all your measurements will be ignored.</p> <p>Even assuming non-zero initial uncertainty, the behavior will probably not be what you expect. Consider a simple position+velocity state vector. Over time, the position uncertainty will grow a little due to the initial velocity uncertainty, but the velocity uncertainty will stay constant. No matter how long you wait until the first measurement arrives, the velocity estimate will have low uncertainty, so not much weight will be given to the measurement.</p> <p>So, all usage scenarios that I've seen have non-zero dynamic noise.</p>
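<p>A toy numerical sketch of that effect (a 1D constant-velocity filter with position-only measurements; the specific numbers are arbitrary):</p> <pre><code>import numpy as np

def kf_position_gain(Q, steps=50, dt=1.0, r=1.0):
    """Run a 1D constant-velocity Kalman filter and return the
    position component of the Kalman gain at the final step."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # measure position only
    P = np.eye(2)                           # non-zero initial uncertainty
    K = None
    for _ in range(steps):
        P = F @ P @ F.T + Q                 # predict
        S = H @ P @ H.T + r                 # innovation covariance
        K = P @ H.T / S                     # Kalman gain
        P = (np.eye(2) - K @ H) @ P         # update
    return float(K[0, 0])

no_noise = np.zeros((2, 2))
some_noise = np.diag([0.0, 0.01])           # a little velocity process noise

print(kf_position_gain(no_noise))    # gain decays toward 0: new data ignored
print(kf_position_gain(some_noise))  # gain settles at a non-zero steady state
</code></pre> <p>With $Q = 0$ the state covariance shrinks toward zero and the filter stops listening to measurements; with even a small amount of process noise the gain reaches a non-zero steady state.</p>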
2488
2014-02-19T20:29:09.470
|kalman-filter|noise|ekf|
<p>Can I assume the noise of the motion model to be zero? If so, what are the consequences of doing so?</p>
Can motion model noise be zero?
<p>A proportional controller may yield steady-state error; it's better to use PI control. As for the arm falling when the motors are not powered: that happens because no backdrive protection is included in the motor or the mechanism. You may want an actual brake to stop the backdriving.</p>
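<p>A minimal sketch of such a PI loop, written here in Python so it runs standalone (the encoder/motor stubs are hypothetical stand-ins for ROBOTC's <code>MotorEncoder[]</code> and <code>motor[]</code>; they simulate an arm drooping under a constant gravity load, and the gains would need tuning on the real arm):</p> <pre><code># --- Hypothetical hardware stubs: replace with real encoder/motor I/O. ---
_pos = 0.0
def read_encoder():
    return _pos
def set_motor(power):
    global _pos
    _pos += 0.5 * power - 2.0    # motor effect minus a gravity droop

KP, KI = 0.5, 0.05    # gains: must be tuned on the real arm
TARGET = 1000         # encoder ticks for the desired arm angle

integral = 0.0
for _ in range(500):                     # on the robot: an endless loop
    error = TARGET - read_encoder()
    u = KP * error + KI * integral
    power = max(-80, min(80, u))
    if power == u:                       # crude anti-windup: freeze the
        integral += error                # integral while saturated
    set_motor(power)

print(read_encoder())   # settles near 1000; P-only control would sag below
</code></pre> <p>The integral term is what holds the arm up against gravity at zero error, which a pure proportional term cannot do; translating the loop body into ROBOTC is mechanical.</p>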
2489
2014-02-19T22:00:45.633
|robotic-arm|
<p>So my team made a Vex robot for the <a href="http://www.vexrobotics.com/wiki/Toss_Up" rel="nofollow">toss-up competition</a>, but we need the arm up during the autonomous. The problem is that it's too heavy to stay up on its own. I was going to use encoders to count what angle the arm is at. I was going to use this code, but I'm not sure if there's a better way.</p> <pre><code> while(MotorEncoder[rightMotor] &lt; 1000) { motor[rightMotor] = 80; motor[leftMotor] = 80; } </code></pre> <p>Would anyone recommend a better solution or is this the best way? This is untested by the way.</p>
VeX - Keeping arm at an angle