Columns: Answer | Id | CreationDate | Tags | Body | Title
<p>I think you are confusing 2 issues. <em>Inverse dynamics</em> is the process of mapping end-effector position, velocity, and acceleration to joint torques, as described in this book, page 298: <a href="http://books.google.com/books?id=jPCAFmE-logC&amp;lpg=PR2&amp;pg=PA298#v=onepage&amp;q=inverse%20dynamics&amp;f=false" rel="nofollow">http://books.google.com/books?id=jPCAFmE-logC&amp;lpg=PR2&amp;pg=PA298#v=onepage&amp;q=inverse%20dynamics&amp;f=false</a></p> <p>But the paper you posted is simply modeling and calibrating their robot's non-geometric parameters.</p> <p>So I think there can be multiple solutions to the inverse dynamics problem as I defined it above, because when given only the end-effector parameters, the arm can potentially be in different configurations that realize them. A simple example is a 2-link planar arm where the elbow can be on either side, as seen in figure 2.31, page 93: <a href="http://books.google.com/books?id=jPCAFmE-logC&amp;lpg=PR2&amp;pg=PA93#v=onepage&amp;q=two-link%20planar%20arm&amp;f=false" rel="nofollow">http://books.google.com/books?id=jPCAFmE-logC&amp;lpg=PR2&amp;pg=PA93#v=onepage&amp;q=two-link%20planar%20arm&amp;f=false</a></p> <p>But I still think the problem as you describe it, mapping joint position, velocity, and acceleration to joint torques, is a low-level control problem and probably has a unique solution. However, factoring in nonlinearities like friction can make the answer non-unique. For example, imagine a joint with lots of friction: a range of joint torques will be sufficient to hold the joint at a given angle.</p>
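<p>To see why the joint-space version should be unique, here is a minimal sketch of the standard rigid-body model (friction and actuator dynamics omitted):</p> <p>$$u = M(q)\ddot{q} + C(q, \dot{q})\dot{q} + g(q)$$</p> <p>For a fixed triple $(q, \dot{q}, \ddot{q})$ the right-hand side is a single algebraic expression, so $u$ comes out unique. Coulomb-type friction breaks this precisely because it is set-valued at $\dot{q} = 0$.</p>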
2499
2014-02-23T18:49:51.070
|dynamics|inverse-dynamics|
<p>I know that inverse kinematics ($p \rightarrow q$, p: desired pose of the end-effector, q: joint angles) is not a function because there might be multiple joint angle vectors q that result in the same pose p.</p> <p>By inverse dynamics control I mean the mapping $(q, \dot{q}, \ddot{q}) \rightarrow u$ (u: required torques). I am not very experienced with this kind of problem. Is the mapping a function, i.e. for each triple $(q, \dot{q}, \ddot{q})$ is there a <em>unique</em> solution u? My intuition says it is. But I am not sure. If there is not, would it always be possible to obtain a solution by averaging two or more solutions?</p>
Can inverse dynamics control be regarded as a function?
<p>Alright, this is the first question I am answering, so I'll try to shed as much light as possible. I guess I'll open the door to many more answers.</p> <p>Anyways, I am a 2nd-year Robotics Engineering student from Bristol, UK. I started out with an Arduino because when I entered university I had no previous experience with electronics or even coding (zero knowledge, in fact). My university had us start out with Arduino to get some coding experience with C. We did some coding on an Altera development board as well, to get us familiar with bit-wise operations for controlling hardware at the register level. Arduino is easier because you don't go into the territory of registers: you have pre-written functions for many operations like reading from sensors, writing to the serial port, displaying strings, et cetera.</p> <p>My first robot, which my team named Zeus, was a helper robot; its tasks were to follow a wall, detect and avoid obstacles, and identify and stop at the goal. The last task requires some algorithms to do efficiently, but come on, we had just started out, so all we had to do was search the entire area, which was a fairly small one. We just had to connect ready-made sensors picked from the LEGO kit and then program them in Arduino using C. It was easy; the only thing that was difficult for us was that we didn't have enough knowledge of C when coding. It seems easy now that I think back.</p> <p>Now I am working on a PIC microcontroller, which is a level above: here you have to deal with registers, reading datasheets, configuring ports, and yaada yaada yaada. I'd say start from scratch: learn how to use the Arduino first, and when you have successfully interfaced sensors with your motors on the Arduino platform (which again is easy), step up to things like sending data wirelessly back and forth between a PC and the Arduino over Bluetooth. Then step up to some small projects which will serve as a foundation for the big stuff, like a line-follower robot. It's not as easy as the word sounds. It could be fairly easy on an Arduino, but once you get into the discrete platforms like Atmel and PIC, you will deal with much deeper levels of coding, so to keep you strong for that, work on the Arduino a fair bit and then step up to the more complex MCUs like Atmel, PIC, Altera and so on.</p> <p>I hope that answers your question at least a little. I might have missed something; let me know if you need more clarity on a specific issue.</p>
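<p>As a taste of how much the Arduino's pre-written functions hide, here is a minimal sketch (the pin choice and baud rate are arbitrary examples) that reads an analog sensor and prints it to the serial port, with no register-level setup at all:</p> <pre><code>// Minimal Arduino sketch: read an analog sensor, print it over serial.
const int sensorPin = A0;   // hypothetical sensor on analog pin 0

void setup() {
  Serial.begin(9600);       // open the serial port
}

void loop() {
  int reading = analogRead(sensorPin);  // 0..1023 on the 10-bit ADC
  Serial.println(reading);              // one reading per line
  delay(100);                           // roughly ten readings per second
}
</code></pre>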
2507
2014-02-26T09:40:25.650
|arduino|microcontroller|
<p>I want to get started in robotics. As a beginner, which microcontroller would be convenient: Arduino or PIC? What types of robots can be built with an Arduino or a PIC? Should I start with just a line-following vehicle?</p>
Robotics as a beginner
<p><em>Some useful resources</em></p> <p><em>Video lecture</em>: <a href="http://videolectures.net/bmvc2012_davison_scene_perception/" rel="nofollow">Monocular SLAM and Real-Time Scene Perception</a></p> <p><em>Lecture slides from TU Dresden</em>: <a href="http://www.wv.inf.tu-dresden.de/~wiki/Robotics/Tutorials/SLAM-Summer-2006/Lectures/Lacroix/lacroix-visionSLAM.pdf" rel="nofollow">Vision-based SLAM lecture</a></p> <p><em>Lecture from Bristol University (some links dead)</em>: <a href="http://www.cs.bris.ac.uk/Research/Vision/Resources/bmvctutorial/" rel="nofollow">Visual SLAM tutorial</a></p>
2510
2014-02-26T23:03:30.713
|kalman-filter|slam|ekf|
<p>I would like to know if there is a good source that combines the SLAM problem with vision. From a mathematical perspective there are numerous resources that handle SLAM; however, I didn't find a good source that focuses on SLAM and vision together.</p>
SLAM and Vision (good resources)?
<p>Because air is a compressible fluid, this is a rather complicated issue. It might be better suited to physics.stackexchange.com, but maybe I can point you in the right direction...</p> <p>You can think of each rotor blade as a wing moving in a circle, and then you can view it in terms of aerodynamics.</p> <p>Things we need to consider:</p> <ol> <li>The speed of the blade will be greater at the tip than near the hub.</li> <li>The angle of attack affects drag and lift, which translate to airflow. This can be maximized in favor of airflow but is interdependent with RPM.</li> <li>Blades are often shaped along their length to take advantage of #1 and #2.</li> <li>Blade count is somehow related to drag and ambient air velocity or temperature or something.</li> <li>The stator and enclosure shape can induce turbulent air currents, which will increase drag.</li> <li>At high speed you need to take laminar flow into account.</li> <li>Thanks to thermodynamics, all the power you put in has to go somewhere. So the total of heat, airflow, acceleration, friction, and static force (every action has an opposite reaction) is going to give you your net power requirement.</li> </ol> <p>It might be best to experiment, since fluid dynamics can be tricky.</p> <p>See also:</p> <ul> <li><a href="https://physics.stackexchange.com/questions/9643/shape-of-fan-blades">Shape of Fan Blades</a> - approaches to simulation</li> <li><a href="https://physics.stackexchange.com/questions/9030/design-of-a-common-table-fan">Design of a common table fan</a></li> <li><a href="https://physics.stackexchange.com/questions/69915/what-part-of-the-fan-blade-actually-does-the-noise-generation">What part of the fan blade actually does the noise generation?</a> - has an excellent answer on blade/enclosure turbulence.</li> </ul>
2524
2014-02-28T13:57:24.480
|motor|power|
<p>I need to specify a fan motor combination and wondered if there are formulas that can work this out? The fan we are using is a crossflow fan:</p> <p><img src="https://i.stack.imgur.com/zenu6.jpg" alt="cross flow fan"></p> <p>So I'm assuming the power required to drive it is derived from the number of blades, dimensions of blades (including angle of attack), dimension of barrel/wheel and the speed in RPM.</p> <p>Is this possible or does it need to be worked out practically with experimental measurements etc?</p> <p>Hopefully this is the correct stack for this question, if not then mods please feel free to edit/close. </p>
How to calculate the power required to drive a fan
<p>The spec says hex 8000 as in hexadecimal (which I usually write in C/Python notation as <code>0x8000</code>). Each byte is exactly two hexadecimal digits, so you can break <code>0x8000</code> into <code>0x80</code> and <code>0x00</code> and send them separately; these correspond to the decimal numbers <code>128</code> and <code>0</code>.</p> <p>You also have to know which to send first (referred to as little-endian vs. big-endian). The spec says [radius high byte] [radius low byte], so the high byte comes first (big-endian).</p> <p>Therefore send the sequence <code>[128, 0]</code> in order to transmit hex 8000.</p>
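<p>If you would rather compute the two bytes than read them off by hand, the split is just a shift and a mask; a small C-style sketch:</p> <pre><code>#include &lt;cstdint&gt;
#include &lt;cstdio&gt;

int main() {
    uint16_t radius = 0x8000;                // the "drive straight" special value
    uint8_t  high   = (radius &gt;&gt; 8) &amp; 0xFF;  // 0x80 = 128, sent first (big-endian)
    uint8_t  low    =  radius       &amp; 0xFF;  // 0x00 = 0, sent second
    std::printf("[%u, %u]\n", (unsigned)high, (unsigned)low);  // prints [128, 0]
    return 0;
}
</code></pre>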
2530
2014-02-28T23:55:33.550
|mobile-robot|raspberry-pi|serial|irobot-create|roomba|
<p>I have a Raspberry Pi hooked up to a Roomba 560's serial port. While going over <a href="http://www.irobot.com/images/consumer/hacker/Roomba_SCI_Spec_Manual.pdf" rel="nofollow">the spec</a>, I noticed movement controls weren't as simple as I expected. I can't send bytes larger than 255, but, according to the spec, to go straight I have to send 8000. How does this work?</p> <p>EDIT: My solution was the following three functions:</p> <pre><code>import serial
import time

def start():
    # Open serial connection
    global ser
    ser = serial.Serial('/dev/ttyUSB0', 115200)  # this is the default Roomba baud rate
    # Start SCI - puts into safe mode
    ser.write(chr(128))
    # Enable the safe mode
    ser.write(chr(131))
    # this is required or the command may fail
    time.sleep(1)

# This makes the serial command fit the big-endian notation
def make4(num):
    num = num[2:]
    z2a = 4 - len(num)
    for q in range(z2a):
        num = '0' + num
    return '\\x' + num

def move(vel, rad):
    # Init move command
    ser.write(chr(137))
    # velocity
    vhex = hex(vel)
    vhex = make4(vhex)
    # radius
    vrad = hex(rad)
    vrad = make4(vrad)
    # send to roomba
    ser.write(vhex)
    ser.write(vrad)
</code></pre>
Help sending serial command to Roomba
<p>It's called a slip ring.</p> <p><a href="http://en.wikipedia.org/wiki/Slip_ring" rel="nofollow">http://en.wikipedia.org/wiki/Slip_ring</a></p> <p>The Wikipedia page has several alternate names for it.</p> <p>Be careful when using these devices, a cheap or damaged one with poor brushes will destroy a high-speed digital signal (and even worse, it will get damaged after you build a working prototype.)</p>
2542
2014-03-01T10:21:53.887
|motor|motion|joint|
<p>In most situations, range of motion is limited by the fact that we need to carry power or information past a joint. So, past a certain point there are either cables in the way, or the cables would stretch so much that they would either prevent further movement or break.</p> <p>However, if we situate the conductors in concentric rings around or within the rotor shaft, we can have a joint that can rotate forever while keeping in contact with any modules on the other side.</p> <p>What do we call this mechanism? Does it even have a name?</p>
What do we call a coupling with infinite range of rotation?
<p>Since the screen of an iPad is your desired workspace, and all the arm will have to lift is a stylus, this will not have to be a large arm. There are many "hobby" arms built with hobby airplane servos that will probably work for you. I won't list any makes/models because there are many out there.</p> <p>I am not sure what your comment about lag is about, but if you want a fast arm, you should consider a delta robot arm configuration, because the "standard" serial arms mentioned above are not very fast. These are harder to come by; the only hobby-level delta arm I have seen was the <a href="https://www.kickstarter.com/projects/1984252088/robot-army-starter-kit" rel="nofollow">Robot Army Starter Kit</a> on Kickstarter.</p>
2553
2014-03-02T22:25:04.560
|robotic-arm|
<p>I'd like to buy a capacitive touch input robot in order to remotely access my iPad, but I'm having trouble describing the right kind of robot.</p> <p>I would like to keep the added lag under 60 ms so that it is still a high-quality interface.</p> <p>I would like to have a robotic arm equipped with a capacitive pen that moves to places on the iPad screen based on the mouse, or an array of capacitive pens that emulate the touch of a user.</p> <p>I guess I'd use Squires software reflect and the mirror function, but I'm open to using an SHD camera with the robotic arm and a pixel sensor array with the array of capacitive pens.</p> <p>Does this make sense? How could I improve the design? What materials would I need to build it myself, assuming a ready-built arm? How could I build an array of capacitive touch micro pens?</p>
Capacitive touch input robot to remote access iPad
<p>A very intuitive explanation of covariance and its decomposition, with a geometric interpretation:</p> <p><a href="http://www.visiondummy.com/2014/04/geometric-interpretation-covariance-matrix/" rel="nofollow noreferrer">http://www.visiondummy.com/2014/04/geometric-interpretation-covariance-matrix/</a></p>
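<p>A sketch of the standard result, for the specific question asked: covariance transforms under a linear map $y = Rx$ as $C' = R C R^T$. With the measurement ordered $(x, y, z, roll, pitch, yaw)$ and the same rotation applied to the position and (small-angle) orientation parts, build the block-diagonal $R_{6\times6} = \mathrm{blkdiag}(R, R)$, which rotates all four 3x3 blocks at once:</p> <p>$$C' = R_{6\times6} \, C \, R_{6\times6}^T$$</p> <p>More generally, for a nonlinear change of frame, $C' = J C J^T$ with $J$ the Jacobian of the transformation; Euler-angle covariance only transforms in the simple rotated form approximately.</p>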
2556
2014-03-03T16:51:18.867
|kalman-filter|
<p>I am working on an EKF and have a question regarding coordinate frame conversion for covariance matrices. Let's say I get some measurement $(x, y, z, roll, pitch, yaw)$ with corresponding 6x6 covariance matrix $C$. This measurement and $C$ are given in some coordinate frame $G_1$. I need to transform the measurement to another coordinate frame, $G_2$. Transforming the measurement itself is trivial, but I would also need to transform its covariance, correct? The translation between $G_1$ and $G_2$ should be irrelevant, but I would still need to rotate it. If I am correct, how would I do this? For the covariances between $x$, $y$, and $z$, my first thought was to simply apply a 3D rotation matrix, but that only works for a 3x3 submatrix within the full 6x6 covariance matrix. Do I need to apply the same rotation to all four blocks? </p>
How to rotate covariance?
<blockquote> <p>Is this torque rating at the horn of the servo motor or the torque rating of the actual motor before any gear reduction is done</p> </blockquote> <p>For reputable manufacturers and vendors, the ratings will include all internal gear reduction, or external in the case of gear-head motors.</p> <blockquote> <p>Is this motor sufficiently strong for my application?</p> </blockquote> <p>That depends more on the details of your application. 200 oz-in of torque means the servo <em>should</em> be able to lift <strong><em>up to</em></strong> 200 oz within one inch of its center shaft. That means it can work on <strong><em>up to</em></strong> [ 200 oz / 16 oz/lb ] = 12.5 pounds within that inch.</p> <p>To get 4 - 5 pounds out of it, your rotational radius must be within:</p> <p><em>12.5 lb-in / 4.5 lb = 2.78 inches.</em></p> <p>However, operating the motor continuously under a full load will most likely cause it to fail fairly quickly. It will also slow down the rotational speed.</p> <hr> <p>Speaking of speed, the servo is rated at 0.23 seconds / 60 degrees, meaning it will rotate 60 degrees in 0.23 seconds. A full circle is 360 degrees, so 6 * 0.23 = 1.38 seconds for a full rotation.</p> <p><strong>However</strong> (and correct me if I am wrong), nowhere in the listing does it mention continuous rotation. A traditional RC servo motor cannot rotate all the way around unless it specifically states it can or is manually altered (voiding any warranties). It will typically only rotate a maximum of 140 to 180 degrees.</p>
2570
2014-03-06T01:04:22.797
|motor|servomotor|
<p>I am looking at buying a servo motor for an application that must be able to lift 4-5 lb at a rotational speed of approximately 1 rpm. The servo motor listed here <a href="http://www.robotshop.com/ca/en/hitec-hs755mg-servo.html">http://www.robotshop.com/ca/en/hitec-hs755mg-servo.html</a> states a stall torque of 200 oz-in. Is this torque rating at the horn of the servo motor, or the torque rating of the actual motor before any gear reduction is done?</p> <p>Is this motor sufficiently strong for my application?</p>
Do servo motor specifications take into account the gear ratio inside?
<blockquote> <p>A spherical joint can be represented by three consecutive rotary joints with intersecting rotation axes.</p> </blockquote> <p>This statement is true; your restrictions, I'm afraid, are not. The position of the spherical wrist can be determined based on what you said, but don't forget that there is another transformation from the spherical wrist to the end-effector of the robot, and that is the point of the last $d_i$. For example, consider figure 4.2.A from the link you provided:</p> <p><img src="https://i.stack.imgur.com/VCce2.png" alt="enter image description here"></p> <p>If you wanted to end the kinematic chain at the red point, you could go with $d=0$, but to calculate the position of the green point, how would you continue?</p> <p>In <a href="http://robotics.usc.edu/~aatrash/cs545/Lecture8.pdf" rel="nofollow noreferrer">this</a> set of slides there are some example DH parameters for spherical joints. Furthermore, you can have a look at the <a href="http://www.dis.uniroma1.it/~deluca/rob1_en/09_DirectKinematics.pdf" rel="nofollow noreferrer">slides</a> of the same professor whose exam you mentioned (they can give you an idea of why he has answered the question that way).</p> <p>Finally, you should remember that there are the following ambiguities in the Denavit-Hartenberg convention, i.e. two DH representations of the same robot can be different, yet will result in the same forward kinematics. DH ambiguities:</p> <ol> <li>Frame$_0$: origin and $x_0$ axis are arbitrary.</li> <li>Frame$_n$: $z_n$ axis is not specified (but $x_n$ <em>must</em> be orthogonal to, and intersect, $z_{n-1}$).</li> <li>When $z_{i-1}$ and $z_i$ are parallel, the common normal is not uniquely defined ($O_i$ can be chosen arbitrarily along $z_i$).</li> <li>When $z_{i-1}$ and $z_i$ are incident, the positive direction of $x_i$ can be chosen at will (however, we often take $x_i = z_{i-1} \times z_i$).</li> </ol>
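<p>For concreteness, one common assignment for the last three joints (a sketch following Spong's convention; the signs of the $\alpha_i$ vary between textbooks) is:</p> <pre><code>link   a_i    alpha_i   d_i   theta_i
n-2     0     -90deg    d4     q4
n-1     0     +90deg    0      q5
n       0       0       d6     q6
</code></pre> <p>Here $d_6$ is the tool offset from the wrist centre (the red point) to the end-effector frame (the green point); it can be non-zero, as in the KUKA example, without breaking the spherical-wrist property, because the three rotation axes still intersect at the wrist centre.</p>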
2591
2014-03-09T08:57:37.203
|kinematics|dh-parameters|joint|
<p>What are valid values for the Denavit-Hartenberg parameters $d$ and $a$ (sometimes called $r$) of the last 3 links of a robot with a spherical wrist?</p> <p>From <a href="http://www.eng.utah.edu/~cs5310/chapter4.pdf" rel="nofollow">this reference</a>, "A spherical joint can be represented by three consecutive rotary joints with intersecting rotation axes."</p> <p>So the restrictions should be:</p> <p>$L_{n-2}$ ($d$ arbitrary, $a=0$)</p> <p>$L_{n-1}$ ($d = 0$, $a=0$)</p> <p>$L_{n}$ ($d = 0$, $a=0$)</p> <p>But in <a href="http://www.dis.uniroma1.it/~deluca/rob1_en/WrittenExamsRob1/Robotics1_10.07.07.pdf" rel="nofollow">this exam</a> I found on the internet, it says that the KUKA robot has a spherical wrist, and $d$ of the last joint is different from $0$. Would $d\neq0$ in the last link still yield a spherical wrist?</p>
Denavit–Hartenberg parameters of a robot with spherical wrist
<p>If you already have two separate 12V batteries, then it makes no sense to convert 24V down to the 12V which you already have. As your two batteries are coupled together to form 24V, just draw a wire from the point where the two batteries meet and you'll have 12V there (with respect to ground).</p> <p>If, for some strange reason, you really want to convert those 24V to 12V, take a look at some switch-mode power supplies (like the LM2576, which can output 3A, but there are better ones; you should have no problem finding them). They are more efficient than linear voltage regulators (like the 7805) and can give higher output current.</p> <p>Regards</p>
2594
2014-03-09T22:05:27.197
|bec|
<p>I have built an R/C car that runs on two 30Ah 12V DC deep-cycle batteries. The motors are 24V motors that will each draw around 15A at full power. My motor controller can handle this, as well as reclaiming braking energy.</p> <p>This is my way of saying that I have a 24V power system. Now my issue is that I want to run a 12V device on this 24V system. I do not want the hassle of another battery to maintain, so I would like to power it off the main batteries. All the BECs and other converters that I have found only supply around 1 amp, while the device I am looking at powering will take around 4-5A at 12V DC. Does anyone know of a device that will do this?</p>
24V DC to 12V DC converter
<p><a href="http://www.tldp.org/HOWTO/Lex-YACC-HOWTO.html" rel="nofollow">YACC and Lex</a> to the rescue.</p> <p>They are exactly what you ask for: "a general purpose datagram parsing library". Bytes go in one end, and structured data comes out the other end. You are responsible for doing 2 things:</p> <h2>Tokenizing</h2> <p><a href="http://www.cs.rug.nl/~jjan/vb/lextut.pdf" rel="nofollow">Lex</a> is used for tokenizing &ndash; deciding how the input stream of bytes should be grouped. For example, to tokenize the text of a book, you'd want the tokens to be the words (i.e. all the letters and the apostrophe), and the punctuation. In the GPS message you showed, the tokens would be the literal string <code>$GPGLL</code> (and separately, any other GPS commands that start with <code>$</code>), commas, groups of digits, <code>N</code>, <code>W</code>, and a few others. This is essentially regular expression matching.</p> <h2>Parsing</h2> <p><a href="http://www.cs.rug.nl/~jjan/vb/yacctut.pdf" rel="nofollow">YACC</a> is used for parsing &ndash; deciding how the sequence of tokens should form structure. To use the example of text in a book, parsing would allow you to tell whether certain text was a quotation (or parenthetical text) by keeping track of the opening and closing marks. <em>As an aside: try as you might, there is no regular expression that can tell you whether quotes or parenthesis are evenly matched; hence the need for parsing.</em> In the GPS example, the parse rules would be something like "a GPGLL command is the <code>$GPGLL</code> token followed by <code>,</code>, followed by a float (which itself is an <code>integer</code> token followed by a <code>.</code> optionally followed by another <code>integer</code>), followed by an <code>N</code> or <code>S</code> token"... and so on.</p> <p>Writing a parser is essentially writing a <a href="https://people.cs.umass.edu/~mccallum/courses/inlp2007/lect5-cfg.pdf" rel="nofollow">context-free grammar</a>.</p> <p>Although these may sound scary and difficult (they certainly sounded so to me...), I can say from personal experience that learning how to use these tools is much <em>much</em> <strong>much</strong> easier than trying to write your own parser from scratch and then maintain it. There are many examples and tutorials online that should help you get started. </p>
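<p>Whichever tool generates the parser, the outer datagram validation stays mechanical: for NMEA, the checksum is the XOR of every byte between <code>$</code> and <code>*</code>. A minimal C++ sketch of that layer:</p> <pre><code>#include &lt;cstdio&gt;
#include &lt;cstdlib&gt;
#include &lt;string&gt;

// Validate a sentence of the form $...*hh (a sketch: CR/LF handling omitted).
bool nmea_checksum_ok(const std::string&amp; s) {
    std::size_t star = s.rfind('*');
    if (s.empty() || s[0] != '$' || star == std::string::npos) return false;
    unsigned char sum = 0;
    for (std::size_t i = 1; i &lt; star; ++i)
        sum ^= static_cast&lt;unsigned char&gt;(s[i]);          // XOR the payload bytes
    unsigned long expected = std::strtoul(s.c_str() + star + 1, nullptr, 16);
    return sum == expected;
}

int main() {
    // Check the question's example sentence.
    std::printf("%d\n", nmea_checksum_ok("$GPGLL,4533.21,N,17739.11,W,113215.22,A*31"));
    return 0;
}
</code></pre>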
2597
2014-03-10T03:53:52.493
|sensors|software|driver|
<p>What software libraries are there for assisting the general problem of parsing a stream of sensor data?</p> <p>We use various sensors like LIDARs and GPSINS units that provide messages in proprietary binary formats, and have to write drivers for each one. Even though there's a lot of similar concepts used in each sensor (like a general purpose datagram for all messages, consisting e.g. of start/end sentinels, length specifications and a checksum, and then a variety of well-defined message formats for the payload), it ends up being a lot of tedious work to develop a driver each time.</p> <p>I'd love a solution where I can write out packet/message specifications in some format, and have a library that finds &amp; extracts valid messages from a stream, and provides them in a simple data structure format.</p> <p>I'm not too fussed about what language, but basically want a general purpose datagram parsing library. There's a lot of customisation with sensors, maybe some odd format parsing, and probably some initial configuration to start the data stream, so this is really something I want as a library for processing the data in real-time that can be used as part of a driver/application.</p> <p>Everything I find is either too basic (the low level tools for interpreting individual elements, so still lots of time spent extracting individual elements explicitly), or too specific (i.e. parsers written specifically for one particular protocol).</p> <p>As a concrete example, consider NMEA messages:</p> <ul> <li>There's a basic outer datagram (starts with <code>$</code> followed by message name, then comma separated data, and ends with <code>*</code>, checksum and line terminating character)</li> <li>Data is in ASCII so needs to be parsed to binary for computational use</li> <li>Outer datagram allows for validation and removal of incomplete/corrupted messages</li> <li>Message name &amp; content would be further parsed for consumption</li> <li>Field names can be specified for ease of use</li> </ul> <p>A 'GPGLL' message might be turned from <code>$GPGLL,4533.21,N,17739.11,W,113215.22,A*31</code> into a programmatic data structure containing latitude, longitude, UTC timestamp and its validity.</p>
Software libraries for parsing sensor data
<p><strong>Pure pursuit</strong> is the standard method for following a trajectory with a differential-drive (or Ackermann-steered) robot. It is a very simple technique. You should be able to search for it and find some (very old) papers describing it.</p>
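<p>The core of pure pursuit fits in a few lines: pick the point on the path one look-ahead distance $L$ ahead, express it in the robot frame, and command the curvature of the circular arc through it. A sketch (finding the look-ahead point on the path is assumed done):</p> <pre><code>struct Twist { double v; double omega; };

// One pure-pursuit step. (gx, gy): look-ahead point in the robot frame
// (x forward, y to the left); L: look-ahead distance; v: cruise speed.
Twist purePursuitStep(double gx, double gy, double L, double v) {
    (void)gx;                                // only the lateral offset matters
    double curvature = 2.0 * gy / (L * L);   // arc through the goal point
    return { v, v * curvature };             // omega = v * kappa
}
</code></pre> <p>Tuning then reduces to the look-ahead distance: shorter tracks the path tightly but oscillates; longer cuts corners but is smooth.</p>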
2604
2014-03-11T15:44:13.980
|control|pid|navigation|differential-drive|
<p>I have a robot platform with a differential drive which knows its position and orientation. Let's say that the space through which the robot moves is known and has only static obstacles. The task is to move the robot from point A and heading alpha (where it currently stands) to point B and heading beta on the map.</p> <p>Let's also say that I can obtain a reasonable trajectory (in relation to the turning abilities of the robot). As both the robot and the sensors are inert, what are some general approaches for controlling such a robot to follow the path? It should of course be kept in mind that the final task is to reach point B without colliding with the obstacles, not perfect trajectory following.</p> <p>I hope the question is not too general.</p>
Differential drive trajectory following control
<p>I feel that people who say "this is off-topic" should <a href="http://communitywiki.org/en/OnAndOffTopic" rel="nofollow noreferrer">tell you about some other place where that is on-topic</a>.</p> <ul> <li>I see that you and I are not the only people who think the electronics stackexchange is a good place to ask questions about the <a href="https://electronics.stackexchange.com/search?q=LPC1343">NXP LPC1343</a> and other <a href="https://electronics.stackexchange.com/search?q=Cortex%20M3">ARM Cortex M3</a> processors.</li> <li>The Arduino Due uses an ARM Cortex-M3 CPU -- perhaps the <a href="http://forum.arduino.cc/" rel="nofollow noreferrer">Arduino Forum</a> or the <a href="https://arduino.stackexchange.com/">Arduino Stackexchange</a> or the <a href="http://playground.arduino.cc/" rel="nofollow noreferrer">Arduino Playground</a> might be a good place to ask about setting up a toolchain and debugger for the M3 and other things that are not hyper-specific to the LPC1343.</li> <li>The <a href="http://www.inf.u-szeged.hu/gcc-arm/" rel="nofollow noreferrer">GCC ARM Improvement Project</a> might be a good place to ask about setting up a toolchain and debugger that is not specific to the M3.</li> <li>The <a href="http://www.lpcware.com/forums/microprocessor-forums/lpc13xx" rel="nofollow noreferrer">lpc13xx Microprocessor Forum</a> may be a good place to ask about things that are specific to the LPC13xx series (including the LPC1343).</li> <li>The LPCXpresso now uses the LPC1347 chip -- perhaps you might pick up some good tips at the <a href="http://www.lpcware.com/forums/lpcxpresso/lpcxpresso-forum" rel="nofollow noreferrer">LPCXpresso Forum</a>.</li> <li>Perhaps you might pick up some good tips in the "Getting started with the <a href="https://www.olimex.com/Products/ARM/NXP/LPC-P1343/" rel="nofollow noreferrer">LPC-P1343</a>" document.</li> <li>I hear the <a href="http://dangerousprototypes.com/forum/search.php?keywords=LPC1343&amp;submit=Search" rel="nofollow noreferrer">Dangerous Prototypes forum sometimes discusses the LPC1343</a>.</li> </ul> <p>Good luck.</p>
2606
2014-03-11T23:00:17.080
|microcontroller|embedded-systems|arm-cpu|
<p>I am a beginner to robotics and embedded systems. Consequently I have a lot of questions related to the toolchain and how things fit together, like how to debug or how to connect a Bluetooth module.</p> <p>I already tried <a href="https://electronics.stackexchange.com/">https://electronics.stackexchange.com/</a> and it did not work out for me.</p> <p>Any ideas where I can get help with my LPC1343-related questions?</p>
Where to ask NXP LPC1343 / ARM Cortex M3 related questions
<p>The microcontroller you are using has a certain clock frequency. Every operation you make it compute takes a specific (but constant) number of clock cycles to perform. What your code does is enter the loop function, perform all the tasks within it, and then jump back to its beginning and do everything again.</p> <p>As you are limited by the number of <em>operations</em> you can perform per second, not the number of <em>loop iterations</em>, you need to either decrease the number of operations you are performing per loop iteration or find a way to increase the number of operations you can do per second.</p> <p>In order to increase the number of operations you can do per second, you need to increase the clock frequency, which might mean buying a new chip if the one you are using is already at its maximum.</p>
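<p>To put numbers on it, assuming a 16 MHz AVR (the usual Arduino clock; the board isn't stated, so this is an assumption), the observed loop rate implies a per-iteration cycle budget:</p> <pre><code>#include &lt;cstdio&gt;

int main() {
    const unsigned long clock_hz = 16000000UL;    // assumed 16 MHz AVR clock
    const unsigned long loops_per_second = 38000; // rate observed in the question
    // Cycles available per iteration (micros() call, increment, compare, ...):
    std::printf("~%lu cycles per iteration\n", clock_hz / loops_per_second);
    // About 420 cycles. A heavy floating-point calculation can easily cost
    // thousands of cycles, which is why the rate drops to ~150 per second.
    return 0;
}
</code></pre>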
2609
2014-03-12T05:01:29.220
|arduino|
<p>Just wanted to clarify a pretty basic Arduino concept:</p> <p>If I put this code into an Arduino board:</p> <pre><code>double start, endTime;
long int counter = 0;

void setup()
{
  Serial.begin(19200);
  start = micros();
}

void loop()
{
  endTime = micros();
  counter++;
  //Point A
  if((endTime - start) &gt; 1000000)
  {
    Serial.println(counter);
    counter = 0;
    start = micros();
  }
}
</code></pre> <p>...I see a value of over 38000 in my serial monitor (for the <code>counter</code> variable).</p> <p>However, the moment I put some heavy calculation in at 'Point A', the value drops to 150 - 170. This is expected, I guess.</p> <p>My question is: does the only way to push up the operational frequency lie in optimising the code/calculation? Or is there some other way I can get faster execution?</p>
Arduino Operational Frequency
<p>The easiest way to build a 4-wheeled robot with motors on all 4 wheels is to buy a <a href="http://rads.stackoverflow.com/amzn/click/B003ML36HI" rel="nofollow">rock crawler</a> off the internet and remove its body and radio receiver.</p> <p>Then connect its motors' and steering motor's wires to the output of a microcontroller and you have your 4WD robot, with the differential and steering system, suspension and all.</p> <p>If you are building a differential-drive robot, the best way is to power two diagonally opposite wheels and keep the other two wheels freely rotating. This means your robot's platform should be circular.</p>
2613
2014-03-12T17:51:39.443
|wheeled-robot|kinematics|
<p>I found a model for 2-wheeled robots here:</p> <p><a href="https://robotics.stackexchange.com/questions/106/what-is-a-suitable-model-for-two-wheeled-robots/134#134">What is a suitable model for two-wheeled robots?</a></p> <p>How should I adapt it to a 4-wheeled setting?</p>
What is a suitable model for four-wheeled differential drive rigid-body robots?
<p>2pietjuh2's Mom:</p> <p>Are you familiar with <em>dog whistles</em>?</p> <p><img src="https://i.stack.imgur.com/nWn9B.jpg" alt="Ultrasonic dog whistle (wikipedia)"></p> <p>Dog whistles make a sound at a frequency that dogs can hear but humans cannot: the sound has a higher frequency than the maximum frequency humans can hear.</p> <p>If we try to draw a picture of what <strong>human</strong> hearing is like, it looks something like this: <img src="https://i.stack.imgur.com/xTAZz.jpg" alt="Audiogram (wikipedia)"></p> <p>You may have seen this already if you've ever taken a hearing test. The red circles dip where our hearing gets worse. In this example we have a person who can hear pretty well between 250Hz and 8000Hz (Hz is a unit named after <a href="http://en.wikipedia.org/wiki/Heinrich_Hertz" rel="nofollow noreferrer">Heinrich Hertz</a> and means cycles per second) but less well outside this range, or <strong>band</strong>. We can think of the area from the first frequency to the last frequency as a "band". Humans with good hearing can still hear something up to 20000Hz, but we are not as sensitive to those higher-frequency sounds. At some point we simply hear nothing at all, no matter how loud/strong the sound is.</p> <p><strong>Not</strong> this kind of band:</p> <p><img src="https://i.stack.imgur.com/uH1fo.jpg" alt="enter image description here"></p> <p>But more like a band you'd wear on your wrist. We measure the <strong>width</strong> of this band, and that's our <strong>band width</strong>: <strong>bandwidth</strong>. For our human example it's 8,000 minus 250 Hertz, or about <strong>7,750</strong> Hertz (ignoring the little dip around 4,000). If someone asked me to design the hearing system for this human, they would specify a bandwidth of 7,750Hz. We usually need to decide what's large enough to count as being inside the band in order to determine this. The "standard" human hearing range is often considered to be 20Hz to 20,000Hz, or a bandwidth of about 19,980Hz.</p> <p>Meanwhile, our best friend:</p> <p><img src="https://i.stack.imgur.com/zo1ib.jpg" alt="enter image description here"></p> <p>has a hearing range of about 40 Hz to 60 kHz (kHz is kilohertz, or one thousand Hertz). So we can say his hearing bandwidth is about <strong>59,960Hz</strong> (60,000-40).</p> <p>We could explain this mathematically and more rigorously, but that would take more time than most people's interest allows (a few hours should yield an expert-level understanding of this particular concept). I think it's important to stress the origin of the term as the width of the band in a graph of frequency vs. response (e.g. in radio, light, etc.)</p> <p>(p.s. All images from Wikipedia)</p>
2620
2014-03-13T10:02:34.107
|control|
<p>I am working on a system which is measuring a force. The specification is to have a 500Hz bandwidth on the measurement.</p> <p>Now I was trying to explain this 500Hz bandwidth to my mom and I could not really explain it easily.</p> <p>What is the easiest way to explain the term bandwidth of a measurement to someone without a control engineering background?</p>
How to explain bandwidth of a measurement to a noob?
<p>EDIT: Sorry, I didn't see that this was such an old question; I'm used to Stack Overflow, where the first page of questions spans several minutes instead of several months!</p> <p>The DJI Flamewheel F450 is a popular kit; I am building one. You can get it at a bunch of different places online, such as <a href="http://heli-heli.com/items/quad-copters/dji-innovations/dji-flame-wheel-f450-arf-kit-includes-motors-esc-propellers-dji-fw450-detail.htm" rel="nofollow">here</a>. It includes the airframe, motors, ESCs, and props, and because they are from the same kit they are pretty much guaranteed to be compatible.</p> <p>You will have to source the remaining parts yourself, i.e. the battery, Tx/Rx, and flight controller. Your battery and Tx look fine for the Flamewheel F450 kit, and I can't comment on the HobbyKing FC unit because I have no experience with it (I use the DJI Naza-M Lite).</p> <p>Also, don't forget the LiPo battery charger!</p> <p>Hope this helps, good luck with your build!</p>
2631
2014-03-14T08:16:30.340
|motor|sensors|quadcopter|multi-rotor|
<p>I am completely brand new to quadcopter building. I am currently about to start building a quad. I have done a little bit of research and was thinking of buying the following parts:</p> <ul> <li>KK2.1 Hobbyking Flight Controller</li> <li>Turnigy H.A.L Quadcopter Frame</li> <li>4 x NTM Prop Drive 35-30 1100kv / 380w</li> <li>Turnigy 9X 9CH</li> <li>Turnigy Plush 40A ESC</li> <li>Slow Fly Prop Left</li> <li>Slow Fly Prop Right</li> <li>Quad Power Dist Board</li> <li>Turnigy 5Ah 3S25C LiPo</li> </ul> <p>What do you think of these parts? Do you have any complete builds with instructions that you would recommend instead?</p> <p>Thanks</p>
Quadrocopter build - Do these parts look fine?
<p>The total current that can be drawn from a battery depends on the discharge capacity (C) and ampere-hour (Ah) rating of the battery. The maximum continuous current that can be drawn is given as follows:</p> <blockquote> <p>Maximum Current (A) = discharge capacity (C) x ampere-hours (Ah)</p> </blockquote> <p>For example, in your case, if you are using a 5000mAh, 25C battery, then you can draw a maximum of 125A continuously.</p> <p>Generally the discharge capacity of a battery is specified as continuous discharge and burst discharge. The continuous discharge rating can be used to calculate the maximum allowed continuous current. The burst discharge rating can be used to calculate the maximum burst current that can be handled by the battery. Usually burst current can only be drawn for a few seconds (less than 10 seconds). For example, if you are using a 5000mAh, 25C-50C battery, then you can draw 125A continuous and 250A burst current.</p> <p>So for application purposes, one should always use the continuous discharge rating for calculations and keep the burst rating in mind only as a fail-safe.</p> <p>If you are using batteries in parallel, then make sure all have the same discharge rating. The discharge rating and voltage of the resultant battery will be the same as the individual batteries, and only the Ah ratings will add up. So in your case, the resultant battery should be at least 9600mAh at 25C; only then will it be able to supply 240A (8x30A) continuously. If you are using 4 batteries in parallel, then each battery should be at least 2400mAh.</p>
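<p>The same arithmetic in code, using the question's numbers as example inputs:</p> <pre><code>#include &lt;cstdio&gt;

int main() {
    // Example: 5000 mAh pack with a 25C continuous rating.
    double capacity_ah  = 5.0;
    double c_continuous = 25.0;
    double pack_limit_a = c_continuous * capacity_ah;  // 125 A per pack
    double motor_draw_a = 8 * 30.0;                    // eight motors at 30 A
    double per_pack_a   = motor_draw_a / 4.0;          // four packs share the load
    std::printf("limit %.0f A, needed %.0f A per pack\n", pack_limit_a, per_pack_a);
    return 0;                                          // 125 A vs 60 A: within rating
}
</code></pre>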
2635
2014-03-15T11:28:39.080
|battery|
<p>I am using 8 brushless motors for an octocopter. Each motor can draw a maximum of 30A. I use 4 batteries in parallel. How high a C rating is needed?</p> <p>$$\frac{30*8}{4*2} = 30C$$</p> <p>When running the motors at 100% load, they will draw 30C from each battery. Can a 25C (max 50C) battery be used, or will it run hot?</p> <p>Additionally, how many ampere-hours can be drawn from a 5000mAh battery before it's empty? Many 12V car batteries can only supply 60% of their stated capacity before they need to be charged.</p>
How much can realistically be drawn from a 25C max 50C battery?
<p>You can use this code in MATLAB to generate a smooth, S-shaped speed profile (<code>dsigmf</code>, a difference of two sigmoids, comes from the Fuzzy Logic Toolbox):</p> <pre><code>x = 0:0.1:20;
y = dsigmf(x,[1 4 1 16]);
plot(x,y)
</code></pre> <p>Or</p> <pre><code>clc
clear all;
speed=600;
step=300;
for i=0:1:step
    y1(i+1)=1/(1+exp(-(0.05)*(i-100)));
    y2(i+1)=1+(-1/(1+exp(-(0.05)*(i-200))));
end
y=[y1,y2];
plot(y)
xlabel('time(Step)')
ylabel('Speed')
xlim([-0.05 600])
</code></pre>
2637
2014-03-16T01:10:35.437
|control|
<p>When researching robots, micromice, etc., I've often come across people talking about generating "speed profiles" and how to calculate them, as well as profiles for acceleration, deceleration, turning, etc., and "trapezoidal profiles".</p> <p>But I can't seem to find exactly what is meant by this, nor the how or why.</p> <p>So what is a "profile" in this sense, and why would you need one?</p>
What is meant by a speed profile?
<p>A Raspberry Pi should be sufficient for the control you intend to do with it.</p> <p>In designing a controller under a full multitasking operating system, like the Linux distributions available for the Raspberry Pi, you have to be careful about the real-time requirements and whether the time-share chunk of processor made available to your software will be enough to keep up with the real-time nature of cyber-physical interactions.</p> <p>In short: if you don't run much beyond the controller code on the RPi, it should suffice; but if you want other pieces of code running, like some higher-level supervisory control or even a Java virtual machine for some reason, you will have to assess the response time, and even take the risk of some runaway thread ending up blocking your controller for some time.</p> <p>The solution to that, and if you take it you will most likely have no headaches in the process, is to have a non-preemptive multitasking OS or, even better, a real-time OS running on the Raspberry Pi.</p> <p>And you can have both (not at the same time, of course!). The Raspberry Pi has support for the <a href="http://www.raspberrypi.org/risc-os-for-raspberry-pi/" rel="nofollow">RISC OS</a> operating system, which employs cooperative multitasking, where you can give absolute priority to the control loop and only run other pieces of code once the control loop is done for the cycle.</p> <p>And, my personal favorite, you have <a href="http://www.xenomai.org/" rel="nofollow">Xenomai</a> running on it. Xenomai is a hard real-time supervisor that has Linux running under it, so you can have a full deployment of a Linux machine, with all the benefits of the pre-packaged software and other amenities, and you also have a real-time layer where you can run your control loop without any risk of being interrupted out of time. You can find instructions on how to install Xenomai on the RPi <a href="http://www.raspberrypi.org/forums/viewtopic.php?f=28&amp;t=50184" rel="nofollow">here</a>. There used to be a ready SD image available, but it seems to have gone offline at this time.</p> <p><strong>EDIT:</strong> A ready-to-install SD image of Xenomai for the Raspberry Pi can be downloaded from <a href="http://diy.powet.eu/2012/07/25/raspberry-pi-xenomai/" rel="nofollow">this link</a>.</p>
2639
2014-03-16T17:01:40.787
|arduino|control|raspberry-pi|real-time|
<p>I want to create a two-wheel remote-controlled robot. I have worked out a lot of the logic with regards to balancing. I have started to read up on motor control and Arduino vs. BeagleBone Black vs. Raspberry Pi.</p> <p>Is the multitasking nature of a full Linux OS a problem I need to be concerned with for this application?</p> <p>I expect that I have to adjust the motors at least 20 times per second, but I don't think a slight variation in the update loop interval will be a problem. Possibly I will face problems if I need to do PWM myself?</p> <p>Basically, the way I plan to make the robot work is by using an accelerometer to have a reference to where down is. The robot will autonomously work to keep the direction of the accelerometer down. The remote control will simply adjust the readings from the accelerometer, and the balancing loop will then react as if the robot is falling and accelerate the wheels.</p>
Raspberry Pi for two wheel robot?
<p>If you're using a microcontroller (Arduino-type) with a loop, you can expect a response time of microseconds or less. If you're using a more advanced controller (Raspberry Pi or BeagleBone Black), the latency will be less predictable, but still very fast. The reason for the less predictable latency is multitasking on the systems running Linux, while the Arduino isn't running anything that will interfere.</p> <p>Given that you plan to use sound as the signal, there will be physical factors that create greater variation in reaction speed; wind direction comes to mind as a significant source of latency.</p> <p>Of course, if you're trying to trigger it with the camera click sound, I guess you could do quite well with a simple Arduino.</p>
2641
2014-03-16T22:01:46.667
|arduino|
<p>I want to make a circuit that powers a transistor when a sound above a set threshold is reached. (Trigger a flash for high speed photography.)</p> <p>How long will the response time be?</p>
What is the response time of an Arduino Nano?
<p>Since your frame is about 530g (plus the other parts' weight), you need to make sure that your 4 motors give you a combined thrust of at least 100g more than the total weight, just to get off the ground and maneuver. Anything greater than that is good. (<a href="http://aeroquad.com/showthread.php?1048-Calculating-motor-thrust" rel="nofollow">Thrust Calculator</a>) A quick way to run this check is sketched after this answer.</p> <p>Your battery's capacity (Ah) multiplied by its continuous C rating should be greater than the combined current draw of the four motors.</p> <p>The propellers look OK. Make sure their inner diameter matches the motor shaft diameter; otherwise, buy adapters.</p> <p>The motor controller board is also good.</p> <p>Your radio receiver output should control ALE, THR, ELE, RUD (aileron, throttle, elevator, rudder), which means the radio receiver output (the CHx output pins) should be the input (the 4 above) of the controller board.</p>
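<p>The promised feasibility check; every number here is a placeholder to swap for your real parts list and the motor/prop thrust tables:</p> <pre><code>#include &lt;cstdio&gt;

int main() {
    // Placeholder masses -- substitute your actual build.
    double frame_g            = 530;  // frame weight from the parts list
    double other_parts_g      = 470;  // assumed motors, ESCs, battery, FC, ...
    double weight_g           = frame_g + other_parts_g;
    double thrust_per_motor_g = 450;  // assumed, from the motor's thrust table
    double total_thrust_g     = 4 * thrust_per_motor_g;
    std::printf("margin: %.0f g (want &gt; 100 g)\n", total_thrust_g - weight_g);
    return 0;
}
</code></pre>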
2652
2014-03-18T02:38:24.990
|quadcopter|microcontroller|radio-control|
<p>I'm planning to order parts for my first quadcopter build and I had a few questions. <a href="https://docs.google.com/document/d/1jJsP1noISRmEyQZirJVCQK5455tRXtAwk2ZPOVDElxk/edit?usp=sharing" rel="nofollow">Here</a> is my parts list. I'm crossing my fingers that they are all compatible, and I'm pretty sure they are. I have two questions:</p> <ol> <li>Do I need a Power Distribution Board and if so, what does it do?</li> <li>Where on my flight controller do I attach my radio receiver? </li> </ol>
A few questions about my first quadcopter build
<p>Yes, of course it 'can be crushed' (imagine I'm standing in front of a ginormous hydraulic press, with an evil grin on my face).</p> <p>The sensor deforms under pressure. When it deforms far enough it will reach its elastic limit and fail.</p> <p>The solution is to arrange to transfer the load from the sensor to a mechanical stop when the load exceeds the range (and before it exceeds the elastic limit and cracks). For a piezo sensor, this could be as simple as a pin that stops the anvil motion before it destroys the sensor.</p> <p>If the transient is applied and removed rapidly, but all other changes in force are slow, you could simply put a shock absorber in parallel with the sensor. The shock would take the load during the overload pulse.</p>
2657
2014-03-18T04:05:40.643
|sensors|
<p>I have found some load sensors (piezoelectric) that measure relatively small weights (on the order of ~grams). That's what I need!</p> <p>However... around my robot there will occasionally be bursts of extremely high pressure. These bursts do not need to be measured... they just wash over. The pressure appears, to the sensor, to be a load of ~2,000+ kg.</p> <p>Question: Are these sensors likely to break or fatigue? I realize piezos do not measure via deformation, but still... that's a big load!</p> <p>Maybe I should just order a few and try...</p>
Could piezoelectric sensors be crushed?
<p>First, be careful when using the term "observable" with respect to Kalman filters. It has a precise mathematical meaning that basically determines whether or not the filter is even possible.</p> <p>With respect to your question, you need to select a subset of the observation and measurement noise covariance matrices depending on which measurements are available. Assume you have the system $$ \begin{align} x_{t+1} &amp;= A_t x_t + B_t u_t + G_t w_t \\ y_t &amp;= C_t x_t + v_t \end{align} $$ where $w$ and $v$, with covariance $W$ and $V$, are the dynamic and measurement noises respectively.</p> <p>Now partition your measurements into row blocks as $$ \begin{align} y = \begin{bmatrix} y_1 \\ y_2\end{bmatrix},\quad C = \begin{bmatrix} C_{1} \\ C_{2} \end{bmatrix},\quad V = \begin{bmatrix} V_{11} &amp; V_{12} \\ V_{21} &amp; V_{22} \end{bmatrix} \end{align} $$</p> <p>If you only have measurement 1 available, you would use the observation equation $$ \begin{align} y_1 &amp;= C_{1} x + v_1 \end{align} $$ and apply the Kalman filter measurement update accordingly.</p> <p>The difference in time between measurements is not a problem since you can just keep running the time update equation ($x_{t+1} = \dots$) until you get a measurement. i.e. You might run the time update equation 5 times before running a single measurement update.</p>
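<p>A sketch of the corresponding measurement update with Eigen (names and sizes are illustrative; $P$ is the state covariance, and only sensor 1's rows of $C$ and block of $V$ are used):</p> <pre><code>#include &lt;Eigen/Dense&gt;
using Eigen::MatrixXd;
using Eigen::VectorXd;

// Kalman measurement update restricted to the sensor that actually reported.
// C1 holds sensor 1's rows of C; V11 is the matching block of V.
void partialUpdate(VectorXd&amp; x, MatrixXd&amp; P,
                   const VectorXd&amp; y1, const MatrixXd&amp; C1, const MatrixXd&amp; V11) {
    MatrixXd S = C1 * P * C1.transpose() + V11;      // innovation covariance
    MatrixXd K = P * C1.transpose() * S.inverse();   // Kalman gain
    x += K * (y1 - C1 * x);                          // correct the state
    P = (MatrixXd::Identity(P.rows(), P.cols()) - K * C1) * P;
}
</code></pre> <p>Between measurements you just keep applying the time update, however many steps it takes for the next sensor to wake up.</p>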
2661
2014-03-18T16:35:08.657
|kalman-filter|
<p>I have a system that I can make a strong kinematic model for, but my sensors send readings at unpredictable times. When I say unpredictable, I am not just talking about the order in which the readings will arrive; I also mean that sensors are able to sleep when they do not see a significant change. When an input arrives for any given sensor, that information can be used to infer the states of many other sensors based on my model.</p> <p>At first, it seemed like a Kalman filter was exactly what I needed, because I could make a prediction of all of the states of the system, then update those states when one piece of information comes in, and repeat this process until a good estimate of the system as a whole was determined. However, after reading over Kalman filters, it looks like they assume that every state will be updated on a regular basis. Is there a way the Kalman filter can be modified for when you are unsure what input will come in next and how much time will elapse before the next input arrives? Please note that in my case, once the information arrives, I will know the source of the input as well as the time that has elapsed since the last update; I just won't be able to predict these two things beforehand.</p>
Kalman Filter when states are not observable at the same time?
<p>You might want to check out this grape vine pruning robot: <a href="https://www.youtube.com/watch?v=9GaGO9LIDEA" rel="nofollow">https://www.youtube.com/watch?v=9GaGO9LIDEA</a>.</p>
2667
2014-03-20T17:35:28.547
|mobile-robot|robotic-arm|
<p>My friend has acquired a (small) vineyard and is discovering how much work it is to tend and harvest the grapes.</p> <p>Now we are musing about enlisting robotic help. The vine stocks are usually connected by a stiff wire, so some dangling machine might be feasible.</p> <p>We are looking for references/inspiration for an agricultural robotic vine assistant. Any ideas?</p>
Vineyard robotics?
<p>The drift is being caused either by your position data being consistently wrong, or by your roll/pitch PIDs not correcting the drift error with their integral term. </p> <p>In any case, it is the integral term of a PID that (when working properly) prevents drift.</p>
2668
2014-03-20T22:28:34.050
|arduino|quadcopter|
<p>I wrote my own quadcopter firmware, based on some older code. This code is meant to keep the copter in equilibrium. The model behaves relatively nicely, and I can control it with my laptop. However, I noticed that the copter drifts to the side when not manually controlled, likely because of wind, imperfect balancing, or turbulence.</p> <p>My idea was to fuse GPS and accelerometer data to implement a function to hold the position. But this will likely only work if I have an altitude-hold function, because changes in pitch or roll change the thrust slightly and therefore the height. This is why I recently added a routine which is intended to hold the altitude.</p> <p>Does someone have experience with this? I mean with avoiding side drift of the model in software? The problem, in my opinion, is that I don't know whether a position change is wanted (by remote control) or not. Additionally, it is hard to localize the correct position and calculate the drift distance from it (just with GPS this is not precise).</p> <pre><code>void hold_altitude(int_fast16_t &amp;iFL,
                   int_fast16_t &amp;iBL,
                   int_fast16_t &amp;iFR,
                   int_fast16_t &amp;iBR,
                   const int_fast32_t rcalt_m)
{
  // Enhance the performance:
  // This function is only needed for (semi-)autonomous flight mode like:
  // * Hold altitude
  // * GPS auto-navigation
  if(_RECVR.m_Waypoint.m_eMode == GPSPosition::NOTHING_F) {
    return;
  }

  // Return estimated altitude by GPS and barometer
  float fCurAlti_cm = _HAL_BOARD.get_alti_m() * 100.f;
  // Estimate current climb rate
  float fBaroClimb_cms = _HAL_BOARD.get_baro().climb_rate_ms * 100;
  float fAcclClimb_cms = _HAL_BOARD.get_accel_ms().z * 100;

  // calculate the outputs
  float fAltStabOut = _HAL_BOARD.m_rgPIDS[PID_THR_STAB].get_pid(fCurAlti_cm - (float)(rcalt_m*100), 1);
  float fBarAcclOut = _HAL_BOARD.m_rgPIDS[PID_THR_ACCL].get_pid(fAltStabOut - fBaroClimb_cms, 1);
  float fAccAcclOut = _HAL_BOARD.m_rgPIDS[PID_THR_ACCL].get_pid(fAltStabOut - fAcclClimb_cms, 1);
  int_fast16_t iAltOutput = _HAL_BOARD.m_rgPIDS[PID_THR_RATE].get_pid(fAltStabOut - (fBarAcclOut + fAccAcclOut), 1);

  // Modify the speed of the motors
  iFL += iAltOutput;
  iBL += iAltOutput;
  iFR += iAltOutput;
  iBR += iAltOutput;
}
</code></pre> <p>Copter control:</p> <pre><code>// Stabilise PIDS
float pit_stab_output = constrain_float(_HAL_BOARD.m_rgPIDS[PID_PIT_STAB].get_pid((float)rcpit - vAtti.x, 1), -250, 250);
float rol_stab_output = constrain_float(_HAL_BOARD.m_rgPIDS[PID_ROL_STAB].get_pid((float)rcrol - vAtti.y, 1), -250, 250);
float yaw_stab_output = constrain_float(_HAL_BOARD.m_rgPIDS[PID_YAW_STAB].get_pid(wrap180_f(targ_yaw - vAtti.z), 1), -360, 360);

// is pilot asking for yaw change - if so feed directly to rate pid (overwriting yaw stab output)
if(abs(rcyaw) &gt; 5.f) {
  yaw_stab_output = rcyaw;
  targ_yaw = vAtti.z; // remember this yaw for when pilot stops
}

// rate PIDS
int_fast16_t pit_output = (int_fast16_t)constrain_float(_HAL_BOARD.m_rgPIDS[PID_PIT_RATE].get_pid(pit_stab_output - vGyro.x, 1), -500, 500);
int_fast16_t rol_output = (int_fast16_t)constrain_float(_HAL_BOARD.m_rgPIDS[PID_ROL_RATE].get_pid(rol_stab_output - vGyro.y, 1), -500, 500);
int_fast16_t yaw_output = (int_fast16_t)constrain_float(_HAL_BOARD.m_rgPIDS[PID_YAW_RATE].get_pid(yaw_stab_output - vGyro.z, 1), -500, 500);

int_fast16_t iFL = rcthr + rol_output + pit_output - yaw_output;
int_fast16_t iBL = rcthr + rol_output - pit_output + yaw_output;
int_fast16_t iFR = rcthr - rol_output + pit_output + yaw_output;
int_fast16_t iBR = rcthr - rol_output - pit_output - yaw_output;

// Hold the altitude
hold_altitude(iFL, iBL, iFR, iBR, rcalt);

hal.rcout-&gt;write(MOTOR_FL, iFL);
hal.rcout-&gt;write(MOTOR_BL, iBL);
hal.rcout-&gt;write(MOTOR_FR, iFR);
hal.rcout-&gt;write(MOTOR_BR, iBR);
</code></pre>
Removing quadcopter drift to the side
<p>Answering myself: yes, it works flawlessly. You just need to take care of the different parameters of the connection; in particular, flow control needed to be set to none.</p> <p>Then just send the command using a <code>std::string</code> and the <code>&lt;&lt;</code> operator.</p>
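<p>For reference, the setup looks roughly like this. This is a sketch against the older <code>SerialStream</code> interface of libserial; the enum and setter names have changed between libserial versions, and <code>EN</code> is only an illustrative Faulhaber-style ASCII command, so check both against your manuals:</p> <pre><code>#include &lt;SerialStream.h&gt;
#include &lt;string&gt;
using namespace LibSerial;

int main() {
    SerialStream port;
    port.Open("/dev/ttyUSB0");
    // Names below follow the old 0.6.x-era API -- verify for your version.
    port.SetBaudRate(SerialStreamBuf::BAUD_9600);
    port.SetCharSize(SerialStreamBuf::CHAR_SIZE_8);
    port.SetFlowControl(SerialStreamBuf::FLOW_CONTROL_NONE);  // the crucial setting

    std::string cmd = "EN\r";   // illustrative enable command, CR-terminated
    port &lt;&lt; cmd;                // stream operator, as described above
    return 0;
}
</code></pre>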
2672
2014-03-21T05:54:35.640
|serial|communication|
<p>I'm trying to use the <a href="http://www.micromo.com/manuals/sites/en/steuerungen/mcbl_3006_s_rs.html" rel="nofollow noreferrer">MCBL Controller</a> by Faulhaber to control my motor. I'm trying to program some sort of driver on Linux using the serial connection and libserial, but it does not seem to be working so far.</p> <p>I'm using a USB to RS232 converter like this one:</p> <p><a href="https://i.stack.imgur.com/jA8vW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jA8vWm.png" alt="Cable"></a></p> <p>I'm wondering if it's well supported by libserial. I've read that it is, but does anyone have any experience with it?</p>
MCBL Controller through RS232
<p>Regarding interference of active sensors: yes, this is a problem for PrimeSense-based RGB-D cameras (like the Kinect), but it does not make these sensors completely useless. There are even some <a href="http://www.precisionmicrodrives.com/tech-blog/2012/08/28/using-vibration-motors-with-microsoft-kinect" rel="nofollow">clever hardware hacks</a> to work around this problem. A much bigger problem for these sensors is exposure to direct sunlight. This has slightly improved with the Kinect 2.0.</p> <p>About your main question: the different approaches you mentioned all have different advantages and problems/failure cases. RGB-D sensors suffer from interference (the worst being interference from sunlight). A similar alternative is relying on stereo vision, but this is computationally more expensive and may fail in textureless environments. A monocular camera + IMU requires the least space/weight but is less accurate.</p> <p>Neither option is going to become completely irrelevant in the near future; the ideal approach depends on the exact problem you need to solve.</p>
2673
2014-03-21T09:38:10.010
|kinect|sensor-fusion|odometry|
<p>I am doing research on ego-motion estimation and positioning in 6-DoF space, and I found that apparently all current systems are based on active RGB-D sensors, like the Kinect. I understand that such sensors provide greater accuracy and require fewer computational resources.</p>

<p>But if such systems are used, for example, for augmented reality or robot navigation, how are they going to solve the problem of interference between signals from different systems operating in the same space? If many people wear AR glasses with active sensors, they will interfere with each other, won't they?</p>

<p>Are there big commercial projects that use passive visual odometry with multiple camera units and IMU sensors? I found some good papers on this topic, but I have not found a commercial application of such technology. I am going to research passive odometry methods for AR, but is the interference between active depth sensors that I described above actually a problem?</p>

<p>UPD: The main question:</p>

<p>Is passive odometry, based on video flow analysis and an IMU, worth deep research, or are active sensors the future, with signal mixing not being a big deal and passive odometry being a dead end for this kind of technology? It would not be very useful to do research on a useless technology...</p>
Passive ego-motion estimation vs active
<p>I made a small routine for that, but it is not yet completely error-free. E.g. the acceleration term still contradicts the height-difference term, although each works well on its own.</p>

<pre><code>const float fBias_g = 0.1f;
const float fScaleF_g = 100.0f;

int_fast16_t iAltZOutput = 0; // Barometer &amp; Sonar
int_fast16_t iAccZOutput = 0; // Accelerometer

// Return estimated altitude by GPS and barometer
bool bOK_H, bOK_R, bOK_G;
int_fast32_t fCurAlti_cm = altitude_cm(&amp;_HAL_BOARD, bOK_H);
float fClimbRate_cms = climbrate_cms(&amp;_HAL_BOARD, bOK_R);
// Get the acceleration in g
Vector3f vAccel_g = accel_g(&amp;_HAL_BOARD, bOK_G) * fScaleF_g;

// Calculate the motor speed changes by the error from the height estimate and the current climb rates
// If the quadro is going down, because of a device error, then this code is not used
if(_RECVR.m_Waypoint.mode != GPSPosition::CONTRLD_DOWN_F) {
  float fAltZStabOut = _HAL_BOARD.m_rgPIDS[PID_THR_STAB].get_pid((float)(rcalt_cm - fCurAlti_cm), 1);
  iAltZOutput = _HAL_BOARD.m_rgPIDS[PID_THR_RATE].get_pid(fAltZStabOut - fClimbRate_cms, 1);
}

// Don't change the throttle if acceleration is below a certain bias
vAccel_g.z = sign_f(vAccel_g.z) * (abs(vAccel_g.z) - fBias_g) * fScaleF_g;
// Calculate the throttle changes if copter changes altitude
float fAccZStabOut = _HAL_BOARD.m_rgPIDS[PID_ACC_STAB].get_pid(vAccel_g.z, 1);
iAccZOutput = _HAL_BOARD.m_rgPIDS[PID_ACC_RATE].get_pid(fAccZStabOut, 1);

// Modify the speed of the motors to hold the altitude
iFL += iAltZOutput + iAccZOutput;
iBL += iAltZOutput + iAccZOutput;
iFR += iAltZOutput + iAccZOutput;
iBR += iAltZOutput + iAccZOutput;
</code></pre>
2683
2014-03-22T23:09:54.270
|sensors|quadcopter|accelerometer|ardupilot|
<p>I am currently wondering how to implement altitude control for a quadcopter. At the moment I have just a barometer/GPS and an accelerometer.</p>

<p>The barometer and GPS are relatively straightforward to implement, but they are imprecise and slow. For the accelerometer readout, I remove the constant 9.81 m/s² acceleration with a low-pass filter. Then I take this data and calculate the climb rate from it (in cm per s). I know the speed approximation obtained this way is not great; however, I don't know a better approach so far.</p>

<p>For the calculation of the motor speeds I currently use two PIDs (STAB and RATE).</p>

<p>I coded the example shown below, without much testing so far. I believe it will not work out in a smooth and nice way. E.g. instead of the speed calculated from the accelerometer I could use the climb rate of the barometer. However, for low altitudes and small changes I very likely do need the accelerometer.</p>

<p>ArduPilot seems to combine both in a somewhat different way, with a third PID for the acceleration. I believe they calculate the height difference like me. Then they maybe use the barometer climb rate for the STAB PID (not, like me, the accelerometer) and calculate another output from the acceleration data. Unfortunately I don't know exactly how, or whether there are other methods.</p>

<p>Does someone know the exact layout for implementing an altitude-hold function with a barometer and an accelerometer? I mean, I am really not sure whether my ideas are correct. Maybe I can post some options later.</p>

<p>My PIDs:</p>

<pre><code>m_rgPIDS[PID_THR_STAB].kP(1.25);  // For altitude hold
m_rgPIDS[PID_THR_RATE].kP(0.35);  // For altitude hold
m_rgPIDS[PID_THR_RATE].kI(0.15);  // For altitude hold
m_rgPIDS[PID_THR_RATE].imax(100); // For altitude hold
</code></pre>

<p>Code for altitude hold:</p>

<pre><code>// Stabilizing code done before
float fCurAlti_cm = _HAL_BOARD.get_alti_m() * 100.f;         // Barometer and GPS data
float fAcclClimb_cms = _HAL_BOARD.get_accel_mg_ms().z * 100; // Accelerometer output in cm per s (gravitational const. corrected)

// calculate the difference between current altitude and altitude wanted
float fAltStabOut = _HAL_BOARD.m_rgPIDS[PID_THR_STAB].get_pid(fCurAlti_cm - (float)(rcalt_m*100), 1);
// Rate it with climb rate (here with accelerometer)
int_fast16_t iAltOutput = _HAL_BOARD.m_rgPIDS[PID_THR_RATE].get_pid(fAltStabOut - fAcclClimb_cms, 1);

// Modify the speed of the motors
iFL += iAltOutput;
iBL += iAltOutput;
iFR += iAltOutput;
iBR += iAltOutput;
</code></pre>
Altitude hold for quadcopter with Accelerometer and Barometer
<p>As it turns out, it is a bug in the current MotionLab 2.7.2. In theory, the node ID should be able to be changed on the Drive Parameter screen. However, the field for changing the ID is disabled:</p> <p><img src="https://i.stack.imgur.com/ANe6h.png" alt="enter image description here"></p> <p>This was compounded by the fact that only documentation for the old MotionLab is available on their site, and the old procedure (right-click on drive in tree) no longer applies. The documentation for 2.7.2 is a work in progress and is not available yet. So missing documentation combined with this issue was the primary cause of confusion.</p> <p>At the time of this writing, Ingenia has both confirmed this bug (and will be issuing an update soon) and stated that updated documentation will be available soon as well.</p> <p>In the mean time, Ingenia has offered the following workaround:</p> <blockquote> <p>Node ID can be change writing the register 0x2000 , subindex 0x1.</p> <ol> <li><p>Just open with a terminal you communication port and write:</p> <pre><code>&lt;Current node Id&gt; &lt;Action&gt; &lt;Subindex and index in hexa&gt; &lt;in hexa&gt; </code></pre> <p>E.g.:</p> <pre><code>0x20 w 0x12000 &lt;Node number&gt; </code></pre> <p>I suggest you using Realterm for instance.</p></li> <li><p>[Use] the old MotionLab.</p></li> </ol> <p>After changing the Node id, remember to ‘commit’ the parameters (save in non-volatile memory).</p> </blockquote> <p>Theoretically, you could also do this via, e.g., libcanopen on Linux. 0x2000 subindex 0x01 can be used to write the 8-bit node ID (range 1-127), and parameters can be committed to non-volatile memory by writing 0x65766173 to index 0x1010 subindex 0x01 (<a href="http://www.ingeniamc.com/En/-Pluto-DC-Servo-Drive.CREF.pdf" rel="nofollow noreferrer">source</a>).</p> <p>I'm presuming the updated software and documentation will be available by the time anybody actually finds and reads this post.</p>
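<p>In the meantime, for anyone taking the raw-CAN route, here is a minimal sketch (assuming Linux SocketCAN, that the drive currently answers as node 0x20 = 32, and standard expedited SDO transfers; the SDO response handling and all error checking are omitted):</p>

<pre><code>// Sketch: change a CANopen node ID over SocketCAN, then commit it.
#include &lt;linux/can.h&gt;
#include &lt;net/if.h&gt;
#include &lt;string.h&gt;
#include &lt;sys/ioctl.h&gt;
#include &lt;sys/socket.h&gt;
#include &lt;unistd.h&gt;

int main()
{
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
    struct ifreq ifr;
    strcpy(ifr.ifr_name, "can0");
    ioctl(s, SIOCGIFINDEX, &amp;ifr);

    struct sockaddr_can addr = {};
    addr.can_family  = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    bind(s, (struct sockaddr *)&amp;addr, sizeof(addr));

    // Expedited SDO download of 1 byte (command 0x2F) to 0x2000:01.
    // COB-ID = 0x600 + current node ID; the index is little-endian.
    struct can_frame f = {};
    f.can_id  = 0x600 + 0x20;   // drive currently answers as node 0x20
    f.can_dlc = 8;
    unsigned char setId[8] = {0x2F, 0x00, 0x20, 0x01, 0x05, 0x00, 0x00, 0x00}; // new ID = 5
    memcpy(f.data, setId, 8);
    write(s, &amp;f, sizeof(f));

    // Commit to non-volatile memory: write 0x65766173 ("save") to 0x1010:01.
    unsigned char save[8] = {0x23, 0x10, 0x10, 0x01, 0x73, 0x61, 0x76, 0x65};
    memcpy(f.data, save, 8);
    write(s, &amp;f, sizeof(f));

    close(s);
    return 0;
}
</code></pre>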
2694
2014-03-25T22:45:13.137
|control|can|
<p>Does anybody know how to configure the node ID of an Ingenia <a href="http://www.ingeniamc.com/En/-Pluto-compact-dc-servo-drive-canopen" rel="nofollow">Pluto</a> DC Servo Drive?</p> <p>I've got a request out to their support team, but perhaps somebody here is already familiar with these drive boards.</p> <p>I do have Ingenia MotionLab 2.7.2, but it does not ship with documentation and the MotionLab user manual on the site is out of date (I had previously been looking through the hardware documentation, but it turns out the info was in MotionLab documentation; although the instructions for previous versions no longer seem to apply to 2.7.2).</p>
Set CANopen Node ID of Ingenia Pluto DC Servo Drive
<p>I found the name of the component: <a href="http://en.wikipedia.org/wiki/Power_management_IC" rel="nofollow">PMIC, or Power Management IC</a>.</p> <p>To quote from <a href="http://www.linear.com/products/pmic_%28dc-dc,_powerpath_and_battery_charger%29" rel="nofollow">this example</a>: </p> <blockquote> <p>PMICs seamlessly manage power flow from various power sources (wall adapters, USB and batteries) to power loads (device systems and the charging battery), while maintaining current limits where required (such as that specified for USB). To this end, PMICs typically feature built-in PowerPath control, DC/DC conversion and battery charging functions.</p> </blockquote>
2700
2014-03-26T14:20:19.360
|power|battery|
<p>I'm working on a project where mains voltage will sometimes be disconnected, and the system will have to run on battery for as long as possible before doing a (safe) shutdown. The desired behavior is exactly like a laptop battery system:</p> <ol> <li>When mains voltage is connected, charge the battery and power the system from mains</li> <li>When mains voltage is disconnected, power the system from the battery</li> <li>Prevent the battery system from supplying current when the batteries are discharged below a certain voltage (to prevent damage).</li> </ol> <p>Is there a name for this type of system, or a name for the feature(s) that I should be looking for when I look at chargers? (If it matters, this system will be 12V, so I'm looking at 14.8V Lithium battery options.)</p>
Battery system with and without mains voltage attached
<p>What you are describing is essentially a textbook case for using a Kalman filter. First you need a prediction step. Let's assume you are predicting the pose of the robot $(x,y,\theta)$, given the previous pose estimate and your high-frequency velocity measurements $(v,\omega)$, where $v$ is the linear velocity and $\omega$ is the angular velocity.</p> <p><img src="https://i.stack.imgur.com/A1D1F.png" alt="prediction step"></p> <p>$P$ is the 3x3 covariance matrix that represents the uncertainty of the robot pose. $Q$ is the covariance of your inputs (i.e., how noisy are those velocity measurements?) $F$ is the Jacobian of the motion model with respect to the state and $G$ is the Jacobian with respect to the inputs, i.e.,</p> <p><img src="https://i.stack.imgur.com/OkQHT.png" alt="Q and Jacobians"></p> <p>Now you have your less frequent correction updates, which actually measure the full state, making this quite simple, i.e.,</p> <p><img src="https://i.stack.imgur.com/GOUTa.png" alt="correction step"></p> <p>where $z_k$ is your measurement (from the camera) and $R$ is the covariance matrix associated with that measurement (probably a diagonal matrix). This measurement is compared with the predicted measurement (which in your case is just the latest pose estimate). In this simple case, the Kalman gain is the proportion of the current pose covariance compared to the sum of the pose covariance and the measurement covariance.</p> <p>To answer your question about the different rates, you can just run your motion update repeatedly until your prediction update arrives. For example, it might happen that the motion update occurs 100 times before your perform a correction.</p> <p>You also asked about how to handle three cameras. The easiest way is to just process them sequentially; just apply three corrections in a row. Another way is to stack them and perform a single update. You would need to adjust the correction update step to do it this way.</p>
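<p>To make the multi-rate bookkeeping concrete, here is a skeleton of the loop (a sketch only; <code>kf.predict()</code>, <code>kf.correct()</code> and the sensor-polling helpers are hypothetical placeholders for your own implementations):</p>

<pre><code>// Predict at the wheel-sensor rate (100-200 Hz); correct whenever one of
// the (roughly once-per-second) camera estimates arrives.
while (running) {
    VelocityMeasurement u = readWheelSensors();  // v and omega, high rate
    kf.predict(u, dt);                           // propagate x and P

    CameraPose z;
    while (popCameraMeasurement(&amp;z)) {       // zero or more per cycle
        kf.correct(z);                           // one correction per camera,
    }                                            // applied sequentially
}
</code></pre>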
2708
2014-03-27T15:21:50.917
|sensors|localization|kalman-filter|sensor-fusion|
<p>I have a system in which I have two separate subsystems for estimating robot positions. First subsystem is composed of 3 cameras which are used for detecting markers the robot is carrying and which outputs 3 estimates of the robot's position and orientation. The second subsystem is a system which is located on the robot and is measuring speed on the two points of the robot. By numerically integrating those two I can get an estimate on the robot's position and orientation (because I am tracking two points at once).</p> <p>The first system is less accurate but the second system drifts. First system gives output about once a second while the second one gives output much more frequently (100-200 times per second).</p> <p>I assume there must be a better approach than to just reset the position with the first system's estimate (as it is not 100% accurate), but to also use the accumulated position from the second sensor system and fuse that with the new data from the first system. Also, there is a question how to fuse 3 estimates of the first system? There must be a better way than pure average as it might happen that the two estimates are exactly the same and the third one is completely different (meaning that it is probably more wrong)?</p> <p>Do you have any fusion algorithms to recommend to use in such a system? I know about Kalman filter, but I am having trouble figuring out how to use it as the two systems output data at different frequencies.</p> <p>I hope the question is clear enough, what is the best approach to fuse the estimates into a more correct and accurate estimate?</p> <p>Thanks</p>
Multiple position estimates fusion
<p>It depends closely on the method you use to determine location. That is, will you use lateration or angulation? If you wish to use angulation, then you need to synchronize the emitters (or receivers, depending on which of the two is being localized), find the phase difference using correlation of the signals, and then estimate the location based on that. Here the most difficult thing is the synchronization, and that's what will affect the error the most. If you're using lateration (measuring the time of flight), you need to somehow know when the signal was generated and when it arrived at the target. For this you could use some RF transmission technology (ZigBee or WiFi or something). If your detection resolution is high enough you can get pretty good accuracy: if your sampling frequency is for example 500 kHz, you can resolve distance increments of about c/f = 343/500000 = 0.69 mm, because that's how far the sound travels between two consecutive samples. But the thing to also consider is the latency in the RF communication. That latency can probably be measured and estimated to a degree, but it will affect your precision.</p>

<p>All in all, to finally answer your actual question: if everything is calibrated well you can get an error below 1%. See for example <a href="http://link.springer.com/chapter/10.1007%2F978-1-4471-2386-6_105" rel="nofollow">http://link.springer.com/chapter/10.1007%2F978-1-4471-2386-6_105</a>; their results report a 0.1% error.</p>

<p>Regards</p>
2714
2014-03-28T00:42:24.880
|localization|ultrasonic-sensors|
<p>I'm considering building an absolute, indoor robot-positioning system based on ultrasound time of flight. The transducers will be ordinary, narrow-band, 40 kHz ones.</p>

<p>Based on your experience, what are the best accuracy and precision we can achieve with such a system?</p>

<p>I'm aware that the answer to such a question will depend on many factors, both hardware and software (if applicable), but I'm not asking about the performance of one solution or another; I'm asking about the intrinsic limitations of the ultrasound technology.</p>
Precision we can expect of an ultrasound-based localisation system
<p><strong>Max Motor Speed</strong> is used to protect the motor against being driven over its maximum speed. The units for this are RPM.</p>

<p><strong>Max Profile Velocity</strong> is given in the same units as the profile velocity and applies only to profile position mode.</p>

<p>(Source: CiA DSP 402 V2.0)</p>

<p>While the standard doesn't expand on these, the way I see it is that the first number is something you set up based on the motor's data sheet to prevent damaging the motor. The units are set up that way.</p>

<p>The second number is an application-specific limit. You want to protect yourself against moves that could be damaging in your application, and you use the same units used for position profiles.</p>

<p>Having two parameters allows for running the motor in velocity mode at higher speeds than you allow during positioning modes. While that may not seem important, I've actually worked on an application where a plate was loaded on a drum and the drum then spun up to considerable speed. In this application the loading portion used positioning mode and needed to be slower than the velocity mode used later, so this separation of limits could have come in handy. In general, having those sorts of limits built in reduces the chance of damaging the machine you're working on.</p>
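<p>As a sketch of how a drive might combine the two limits internally (a code fragment with hypothetical names; the two objects use different units, so one must be converted before comparing):</p>

<pre><code>// Hypothetical sketch of the limiting logic inside a drive.
// maxMotorSpeed (0x6080) is in rpm; maxProfileVelocity (0x607F) is in
// profile-velocity units, so convert before comparing.
uint32_t motorCap   = rpmToProfileUnits(maxMotorSpeed);  // placeholder helper
uint32_t profileCap = std::min(motorCap, maxProfileVelocity);

// Profiled motions obey the lower of the two limits...
profileVelocity = std::min(requestedVelocity, profileCap);
// ...while non-profiled motions are limited by Max Motor Speed only.
velocityDemand  = std::min(requestedVelocity, motorCap);
</code></pre>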
2726
2014-03-30T19:10:20.193
|motor|control|can|
<p>CAN301/402 provides Max Motor Speed (0x6080,0x00) and Max Profile Velocity (0x607F,0x00). In profiled motions, the maximum speed is limited to the lower of these two values. In non-profiled motions, the maximum speed is limited to Max Motor Speed.</p> <p>What is the intended purpose of Max Profile Velocity, rather than only providing Max Motor Speed and using that everywhere instead?</p>
What's the difference between CAN's Motor Max Velocity vs. Profile Max Velocity?
<p>I guess that the approach in structured-light cameras is pretty much the same.</p>

<p>However, there are efforts to miniaturize this kind of sensor. In particular:</p>

<ul>
<li><a href="http://structure.io/" rel="nofollow">http://structure.io/</a></li>
<li><a href="https://www.google.com/atap/projecttango/" rel="nofollow">https://www.google.com/atap/projecttango/</a></li>
</ul>

<p>Some people speculate that the reason the Kinect is so big is its array of microphones, which allows it to localize sources of sound.</p>

<p>I haven't heard of anything else, though if I find new information I will expand my answer.</p>
2744
2014-04-06T06:05:49.843
|sensors|kinect|cameras|lidar|
<p>I'm aware of the PrimeSense camera powering the Kinect. Are more advanced sensor types available now in the &lt; $500 range? For example, has there been any sort of game-changer in structured light techniques? Do decent flash lidar cameras exist now?</p>
What different sensing approaches are used in the current batch of indoor 3D cameras?
<p><strong><em>Linear servo actuator</em></strong>, combined with a <strong><em>load cell</em></strong> mounted inline to measure the load that is transferred between the actuator and whatever it is connected to.</p>

<p>The main advantage of a load cell is that it gives an analog output (voltage) that can be sensed directly. A load cell typically packages strain gauges in a Wheatstone bridge together with signal conditioning, so unlike bare strain gauges you don't have to build the bridge and amplifier yourself to convert the weak signals into something measurable.</p>

<p>This combination is used in advanced robots that have to be fast and precisely controlled, and especially in robots whose manipulators must be back-drivable.</p>
2754
2014-04-08T16:24:42.103
|sensors|mechanism|force-sensor|identification|
<p>I'm looking for a part that will perform a particular function. It has to be able to move along one axis and tell me the force that is being exerted on it.</p>

<p>Kind of like how a piston moves inside an engine (one axis of movement), except that something will be pushing at the top of the piston and I need to know how hard it is pushing. Another difference is that the piston won't be constantly moving back and forth; it needs to be able to receive commands like <code>move x centimeters forward</code> and then remain stationary at its new position.</p>

<p>I know that making this would involve a sensor and something that can exert force, but what is the name of the machine described?</p>

<p>Edit #1 - Response to Matthew Gordon</p>

<p>The piston would have to move between 0-6 centimeters. The form factor would be small, ideally smaller than the palm of your hand. (Smaller = better.) The forces it would have to deal with are comparable to the forces exerted on a bicycle by its chain. I'm a math/cs person, not engineering, so I don't know the technical terms for these kinds of things off the top of my head. The sensor would have to be read in real time, but the volume of data could be processed by a phone. It would have to work in conjunction with wireless communication, probably Bluetooth, but I'd have to look into the latency requirements to be sure.</p>
What is the Name of the part I'm describing
<p>Aside from obtaining the final result as a composition of matrix multiplications, which helps a lot indeed, one crucial aspect of the DH convention is the possibility of describing a roto-translation in terms of only <strong>4</strong> variables for each link (namely, the link length, twist, offset, and the joint angle), in place of the canonical <strong>6</strong> (i.e. 3 for translation and 3 for rotation).</p>

<p>To sum up, because DH prescribes how to assign the locations of the successive reference frames, it lets us compact the representation: e.g. for an anthropomorphic manipulator equipped with 7 degrees of freedom, we can deal with just 7*4=28 independent variables/parameters instead of 7*6=42.</p>
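<p>For reference, the four parameters $(a_i, \alpha_i, d_i, \theta_i)$ generate the link transform in the classic (non-modified) DH convention as:</p>

<p>$$ {}^{i-1}T_i = \begin{pmatrix} \cos\theta_i &amp; -\sin\theta_i\cos\alpha_i &amp; \sin\theta_i\sin\alpha_i &amp; a_i\cos\theta_i\\ \sin\theta_i &amp; \cos\theta_i\cos\alpha_i &amp; -\cos\theta_i\sin\alpha_i &amp; a_i\sin\theta_i\\ 0 &amp; \sin\alpha_i &amp; \cos\alpha_i &amp; d_i\\ 0 &amp; 0 &amp; 0 &amp; 1\\ \end{pmatrix} $$</p>

<p>Chaining the links of the whole kinematic chain is then just multiplying these matrices in order.</p>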
2758
2014-04-10T05:36:28.567
|kinematics|dh-parameters|
<p>When one wants to model a kinematic chain and in particular define the frames attached to each body, it is common to use the <a href="http://en.wikipedia.org/wiki/Denavit%E2%80%93Hartenberg_parameters" rel="noreferrer">Denavit-Hartenberg parameters</a>.</p> <p>What are the advantages of this representation?</p> <p>I can understand the interest of having a normalized representation but does it impact the algorithms performance? The algorithm is not trivial to implement, what gain can we expect from this instead of, for instance, just fixing reference frames by hands (i.e. arbitrarily) like this is done in many robotics formats such as <a href="http://wiki.ros.org/urdf" rel="noreferrer">URDF</a>.</p>
What are the advantages of using the Denavit-Hartenberg representation?
<p>Here is a useful thesis that might help: <a href="http://dspace.mit.edu/bitstream/handle/1721.1/69500/775672333.pdf" rel="nofollow">http://dspace.mit.edu/bitstream/handle/1721.1/69500/775672333.pdf</a>. And this paper discusses some of the control problems, including the drift-compensation problem that you're having:</p>

<p><a href="http://www.raysforexcellence.se/wp-content/uploads/2013/01/Dennis-Jin-Development-of-a-stable-control-system-for-a-segway.pdf" rel="nofollow">http://www.raysforexcellence.se/wp-content/uploads/2013/01/Dennis-Jin-Development-of-a-stable-control-system-for-a-segway.pdf</a></p>

<p>As a kind of side comment, you are better off mounting the sensor unit closer to the wheels rather than up at the top, since the linear accelerations it measures grow in proportion to its distance from the wheel axis, and this may be making your drift worse.</p>
2767
2014-04-13T03:26:55.713
|accelerometer|gyroscope|nxt|
<p>I'm attempting to build a segway robot using a gyro sensor and an accelerometer.</p>

<p>I'm having trouble getting the robot to remain standing, for some reason, and I can't identify the problem. Here's what I know:</p>

<p>The gyroscope API for the lejos NXT platform is here:</p>

<p><a href="http://www.lejos.org/nxt/nxj/api/" rel="nofollow">http://www.lejos.org/nxt/nxj/api/</a></p>

<p>By using timestamps and angular velocity, the project attempts to infer the angle of the robot. The API suggests that in order to be accurate, it must be polled 100 times per second (or every 10 ms on average).</p>

<p>The problem is that simply polling the gyro sensor takes 4 ms. Polling the accelerometer takes 10 ms.</p>

<p>The dimensions of the robot: height 28 cm; wheel circumference 13.25 cm; wheel radius, given the circumference, 2.1 cm.</p>

<p>The accelerometer is mounted on the top of the robot (at approximately 28 cm from the ground, 26 cm from the axis of rotation).</p>

<p>In order to keep the correction amount linear (as opposed to trying to correct an arbitrary angle), I translate the angle of the robot into a distance to travel along the ground to "right" the robot. This might be a bit naive, and I'm open to suggestions here. Basically it's just the horizontal distance calculated using a right-angle triangle with the angle of the robot at the top and a hypotenuse of 28 cm.</p>

<p>If that's not clear, it's essentially the horizontal distance between the top of the robot and the bottom of the robot.</p>

<p>Right now my main concern is the amount of drift the gyroscope seems to be experiencing. Given that with the NXT Java software package it's nearly impossible to poll 100 times per second, the amount of error accumulated by the gyroscope is fairly large.</p>

<p>Finally, I've implemented a PID control system. The thing I'm not clear about with respect to this system is that the integral and derivative of the error must be calculated over some set of values, say the last 20 error measurements recorded.</p>

<p>If the number of past errors recorded is a variable, and the PID constants are variable, and the speed of the wheels is a variable, it seems this problem begs for some kind of automated optimization. But how to do it? If I set the speed to 120 RPM (roughly the max of the NXT servos) and take the past 20 errors for calculating the integral and derivative of the error, will it be possible to optimize the PID constants successfully? Or must all 5 variables be tuned together?</p>

<p>Thanks in advance for any insight on the problem.</p>
NXT Segway problem. Need advice/help
<p>I see that his question is from a long time ago but, just in case it still gets attention, here are some thoughts based on running a small line follower:</p> <p>Let's address the battery problem first:</p> <p>I assume that the statement <code>uXbot_move(MAX_SPEED + steer, MAX_SPEED);</code> has parameters that map to PWM values - possibly by some scaling factor. </p> <p>A fixed PWM value will produce a different drive voltage as the battery voltage sags. In your motor driver, measure the battery voltage with an ADC input. Use that value to adjust the actual PWM duty cycle so that the average voltage stays constant. Change the <code>uxBot()</code> to accept a Voltage rather than a PWM. Now you are able to command, say, 3 Volts to the motor and be confident that is what it gets regardless (within limits) of the actual battery voltage. If you had 8 bits of PWM resolution you might end up with code fragments like these:</p> <pre><code>void setLeftMotorVolts(float volts) { int motorPWM = (255 * volts) / batteryVolts; setLeftMotorPWM(motorPWM); } void setRightMotorVolts(float volts) { // reverse the right motor int motorPWM = (-255 * volts) / batteryVolts; setRightMotorPWM(motorPWM); } </code></pre> <p>How about the performance tuning with speed:</p> <pre><code> if (steer &lt; -5) { // turn left turn = -1; uXbot_move(MAX_SPEED + steer, MAX_SPEED); } else if (steer &gt; 5) { // turn right turn = 1; uXbot_move(MAX_SPEED, MAX_SPEED - steer); } else { // go straight turn = 0; uXbot_move(MAX_SPEED, MAX_SPEED); } </code></pre> <p>Part of the problem here is that the robot forward speed will change as it turns. Any turning motion will reduce the forward speed. The effect is clearly visible in the video you linked to. While this can help navigate sharp turns it messes with the dynamics somewhat. </p> <p>If, instead, you make changes to BOTH wheel speeds together, the forward speed will remain constant. Clearly, there must be some headroom left for the outer wheel to be able to increase speed. With both wheels changing speed, the code fragment above would reduce to just:</p> <pre><code>uXbot_move(MAX_SPEED + steer, MAX_SPEED - steer); </code></pre> <p>That will make the response consistent but the controller may still have trouble adapting to different speeds. To sort that out, consider scaling the steering correction with the current speed. That is, the correction will be larger at higher speeds. </p> <p>Thinking about the controller, I have three observations. </p> <p>First, it is important that the PID function is called at regular, constant intervals. It just does not work well unless that is true. Your code does not specify that so it may already be true. Indeed, the existence of the division by PERIOD_MS indicates that you are aware of this.</p> <p>Second, for the specific line following task that you describe, you do not need an I term in the controller. In fact, because you do nothing to limit integral windup, it may be doing more harm than good.</p> <p>Finally, unless you are seriously limited for processor power, do your calculations in floating point as much as possible. Only reduce to an int when you actually calculate the PWM duty cycle for the timer register. </p>
2768
2014-04-13T06:29:53.297
|pid|differential-drive|
<p>I have a differential drive robot that works fine (good PD parameters) driving at, say, 1 m/s. Now, if it speeds up (to 1.2 m/s) it starts wobbling again. What would be a good strategy for a controller that is able to cope with the whole speed range of 0 - 4 m/s?</p>

<p>edit 14th of April:</p>

<p>The robot is a line-following robot, but I do not see how this would be related to my question, since a robot following a trajectory would have the same problem.</p>

<p>I recently talked to other developers of differential drive robots and they are facing similar issues; e.g., they told me that they need to adjust PID parameters once the battery is no longer fully charged, and hence the robot drives at a different speed.</p>

<p>I do not know if you guys are into youtube, but if you are really interested in my robot this link would be helpful: <a href="https://www.youtube.com/watch?v=vMedNPhXlEo" rel="nofollow">https://www.youtube.com/watch?v=vMedNPhXlEo</a></p>

<p>PID parameters are: P 0.31, D 0.59, I 0.00</p>

<p>PID controller programmed using C:</p>

<pre><code>// note: the inner wheel turns backwards for narrow curves
// cte is -128..128 depending on the robot's position
// relative to a trajectory / black line

/** Execute the PID controller and update motor speeds */
void PID() {
    int32_t steer;
    int32_t cte;

    cte = 128 - get_segment_center(0);

    // Compute PID equation
    steer = (int)(
        -P * (float)cte
        -D * (float)(cte - diff_cte) / (float)PERIOD_MS
        -I * (float)int_cte
    );

    if (steer &lt; -5) {
        // turn left
        turn = -1;
        uXbot_move(MAX_SPEED + steer, MAX_SPEED);
    } else if (steer &gt; 5) {
        // turn right
        turn = 1;
        uXbot_move(MAX_SPEED, MAX_SPEED - steer);
    } else {
        // go straight
        turn = 0;
        uXbot_move(MAX_SPEED, MAX_SPEED);
    }
    diff_cte = cte;
    int_cte += cte;
}
</code></pre>
differential drive PID controller
<p>Let's assume that you know the position of P=(xp,yp) in the initial state. Let's also assume that you know the transformation matrix T which transforms the initial state of the robot into the next state. Due to the relative nature of positions, you can equally say that the robot is standing still and that the point P is moving. This is exactly what you need, since you wish to view everything from the robot's own coordinate system: the robot is not moving relative to itself, but the point P is.</p>

<p>With that we can say: P' = inverse(T)*P</p>

<p>Regards</p>

<p>Edit: I'm not completely sure that the transformation matrix you wrote is perfectly correct; please check it and edit the post if it's incorrect, for future reference (it should hold the difference between the poses, not the ending values). Also, keep in mind that rotation and translation are not commutative, so don't let that give you headaches.</p>
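<p>To spell that inverse out component-wise (a worked expansion; here $\theta$ and $(x_t, y_t)$ denote the rotation and translation contained in $T$): a rigid transform inverts as $T^{-1} = \begin{pmatrix} R^\top &amp; -R^\top t\\ 0 &amp; 1\\ \end{pmatrix}$, so</p>

<p>$$ \begin{pmatrix} x_P'\\ y_P'\\ \end{pmatrix} = \begin{pmatrix} \cos\theta &amp; \sin\theta\\ -\sin\theta &amp; \cos\theta\\ \end{pmatrix} \begin{pmatrix} x_P - x_t\\ y_P - y_t\\ \end{pmatrix} $$</p>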
2771
2014-04-14T09:15:18.207
|mobile-robot|kalman-filter|tracks|
<p>How can I track a fixed point $P=(x_P, y_P)$ from a moving robot?</p>

<p>The coordinates of $P$ are relative to the state/pose of the robot (the x axis points forward from the robot and the y axis is positive to the right of the robot). Suppose that the initial robot state/pose is $S_{R}=(x_R, y_R, \theta_R)$. One frame later (namely after $\Delta t$), with the applied control $(v, \omega)$, the robot is at state $S_{R'}=(x_{R'}, y_{R'}, \theta_{R'})$,</p>

<p>where (I set up the axes as in OpenCV):</p>

<p>$x_{R'} = x_R + v \cos(\theta_R) \Delta t $ </p>

<p>$y_{R'} = y_R + v \sin(\theta_R) \Delta t $ </p>

<p>$\theta_{R'} = \theta_{R} + \omega\Delta t$</p>

<p>The question is: what are the coordinates $(x_P', y_P')$ of the same point $P$ relative to $S_{R'}$?</p>

<p><img src="https://i.stack.imgur.com/TgM1h.png" alt="enter image description here"></p>

<p>As visible in the picture, I know the transformation from the initial state to the next state of the robot, and the coordinates of P with reference to the initial state:</p>

<p>$$ t = \begin{pmatrix} \cos(\theta_{R'}) &amp; -\sin(\theta_{R'}) &amp; x_{R'}\\ \sin(\theta_{R'}) &amp; \cos(\theta_{R'}) &amp; y_{R'}\\ 0 &amp; 0 &amp; 1\\ \end{pmatrix} $$</p>

<p>Please correct me if I made some mistakes!</p>

<p>Thank you, any help is appreciated.</p>
Point tracking from a mobile robot
<p>For each motor the output should be:</p>

<ul>
<li>A value at which the power from the 4 motors roughly keeps the quadcopter airborne, for instance 1500us.</li>
<li>Plus or minus the influence of the throttle. For instance, with your throttle in the 0:1 range you could apply 600 * (throttle - 0.5), which will put the motor output in the 1200:1800us range.</li>
<li>Plus or minus the PID output. For instance, with your PIDs in the 0:1 range you could add 200 * (PID - 0.5). The individual PID contribution would be in the -100:100 range. For a +4 configuration there are 2 PIDs to add for each motor, so the output would be in the 1000:2000us range.</li>
</ul>

<p>Here is my unmodified code for this. My throttle is in the -100:100 range, and my PIDs are already in the appropriate range. I'm sure you can translate to your own specifications easily.</p>

<pre><code>uint16_t motor[4];
float base = MOTOR_IDLE + (THROTTLE_K * throttle);
motor[MOTOR_FRONT] = base + pitchRatePID.getControl() - yawRatePID.getControl();
motor[MOTOR_LEFT]  = base + rollRatePID.getControl()  + yawRatePID.getControl();
motor[MOTOR_BACK]  = base - pitchRatePID.getControl() - yawRatePID.getControl();
motor[MOTOR_RIGHT] = base - rollRatePID.getControl()  + yawRatePID.getControl();
</code></pre>

<p>The THROTTLE_K constant would be what to modify if I experienced your problem.</p>

<p>As you realise, the possible motor outputs exceed the ESC range, which is evil. Bounding the output is not an elegant way to handle it, as it will trip up your PID computations. A better way, which maintains the speed ratios between the motors, is to add or subtract a constant to/from all the motors to get all the outputs back within the bounds. Here is a sample from my own code; I hope it makes sense.</p>

<pre><code>// Fix motor mix range
int16_t motorFix = 0;
uint16_t motorMin = motor[0], motorMax = motor[0];
for(int i=1 ; i&lt;4 ; i++) {
  if(motor[i] &lt; motorMin) motorMin = motor[i];
  if(motor[i] &gt; motorMax) motorMax = motor[i];
}
if(motorMin &lt; MOTOR_MIN) motorFix = MOTOR_MIN - motorMin;
else if(motorMax &gt; MOTOR_MAX) motorFix = MOTOR_MAX - motorMax;

// Refresh motors
for(int i=0 ; i&lt;4 ; i++)
  Servo::setChannel(i, motor[i] + motorFix);
</code></pre>

<p>I hope this helps, please let me know if you need further clarifications!</p>
2772
2014-04-14T12:24:37.403
|motor|quadcopter|pid|pwm|esc|
<p>I've been writing some quad copter software and I am not sure what the best way is to map the throttle and PID inputs to ESC power.</p> <p>My throttle range is 0-1 and my PID outputs are 0-1. My ESC's have a range of 1060us to 1860us.</p> <p>I have mapped the motor speeds like this:</p> <pre><code>_motorPower[0] = map((_rcConstrainedCommands[3] + _ratePIDControllerOutputs[1] + _ratePIDControllerOutputs[2] + _ratePIDControllerOutputs[0]), 0.0, 4.0, 1060, 1860); _motorPower[1] = map((_rcConstrainedCommands[3] + _ratePIDControllerOutputs[1] - _ratePIDControllerOutputs[2] - _ratePIDControllerOutputs[0]), 0.0, 4.0, 1060, 1860); _motorPower[2] = map((_rcConstrainedCommands[3] - _ratePIDControllerOutputs[1] - _ratePIDControllerOutputs[2] + _ratePIDControllerOutputs[0]), 0.0, 4.0, 1060, 1860); _motorPower[3] = map((_rcConstrainedCommands[3] - _ratePIDControllerOutputs[1] + _ratePIDControllerOutputs[2] - _ratePIDControllerOutputs[0]), 0.0, 4.0, 1060, 1860); </code></pre> <p>This works but if my quad is perfectly level (i.e. the PID outputs are 0) and I apply full throttle (1.0) then map this to ESC power I will only get quarter power (1260us).</p> <p>How should I be doing this so that if my throttle is on max then I get max power? If my throttle is half (0.5) then I should get half power plus the PID values etc.</p> <p>Can anyone help me with this?</p> <p>Thanks Joe</p>
Quadcopter Throttle and PID mixing to Motor Speed
<p>The first step in choosing the electronics is to roughly estimate your all-up weight (AUW), i.e. the total flying weight including payload. The target total maximum thrust (T) provided by the system should be about double your AUW, which will ensure good maneuverability and control. This gives you the amount of thrust to be provided by each motor (in your case T/4, as you are using 4 rotors). Now you have a criterion for choosing a motor. For multirotor purposes, people generally use low-rpm brushless motors (&lt; 1200KV). You should also refer to <a href="https://robotics.stackexchange.com/questions/2416/low-amp-fpv-quadcopter-motors/2440#2440">this</a> link for more motor-selection criteria. Once the motor is selected, the propeller can be chosen from the manufacturer's wattage and thrust data for compatible propellers. Choose an ESC rated higher than the motor's maximum current.</p>

<p>In your case, the motor looks good, as its maximum thrust is around 1 kg. However, no propeller data is given. I would personally suggest you bench-test the motor with a range of propellers to find the optimum performance. User comments in the discussion section on the website suggest a 10x4.5 prop. The current rating of the motor is 22A, so the 30A ESC you are considering should do fine.</p>
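<p>To put rough numbers on this (hypothetical figures, purely for illustration): say your frame, electronics, battery, and a GoPro-class camera add up to an AUW of about 1.6 kg. Then T = 2 * 1.6 kg = 3.2 kg of total thrust, i.e. 3.2/4 = 0.8 kg per motor on a quad, which sits comfortably inside the roughly 1 kg maximum of the motor above and leaves headroom for control.</p>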
2781
2014-04-15T21:17:31.523
|quadcopter|brushless-motor|esc|multi-rotor|
<p>I am preparing for my first quadcopter build and need to know how to tell whether given motors/ESCs/propellers will work with each other. I would also like to know how to tell what the motors would be capable of carrying and how much thrust they produce. I would like to put a camera on this copter. I cannot find a straight answer to this question anywhere.</p>

<p>The parts I currently think I want are:</p>

<p>ESC: <a href="https://www.hobbyking.com/hobbyking/store/__25365__Turnigy_Multistar_30_Amp_Multi_rotor_Brushless_ESC_2_4S.html" rel="nofollow">https://www.hobbyking.com/hobbyking/store/__25365__Turnigy_Multistar_30_Amp_Multi_rotor_Brushless_ESC_2_4S.html</a></p>

<p>Motor: <a href="https://www.hobbyking.com/hobbyking/store/__28112__Turnigy_D3530_14_1100KV_Brushless_Outrunner_Motor_US_Warehouse_.html" rel="nofollow">https://www.hobbyking.com/hobbyking/store/__28112__Turnigy_D3530_14_1100KV_Brushless_Outrunner_Motor_US_Warehouse_.html</a></p>

<p>Propeller: 11 inch</p>

<p>This copter needs to be able to carry a camera (~GoPro).</p>

<p>TLDR: How does one match ESCs/motors/propellers, and how does one tell if they can get the job done?</p>

<p>(ESC - Electronic Speed Control)</p>
How to know what motor/ESC/propeller combination will work for a quadcopter?
<p>I believe the Create was not sold in Europe because it is not RoHS compliant.</p> <p>However, every Roomba can be hacked. You can communicate with the robot with the "Open Interface" or sometimes called the "Serial Control Interface". There is lots of information about this on the web. See here for example: <a href="http://www.irobot.com/filelibrary/pdfs/hrd/create/Create%20Open%20Interface_v2.pdf" rel="nofollow">http://www.irobot.com/filelibrary/pdfs/hrd/create/Create%20Open%20Interface_v2.pdf</a></p> <p>Note that you will need to either make or buy a cable to connect to the 7 pin mini-DIN port on the robot. Every Roomba has this. Sometimes it is under the faceplate though.</p> <p>Most libraries you can find online communicate with the Roomba outlined above and do not use the "Command Module" that was unique to the Create. </p>
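<p>To give a flavor of the Open Interface (a sketch only; the opcodes below are from the OI spec linked above, and the serial baud rate depends on the model, e.g. 115200 on 500-series Roombas, so check yours):</p>

<pre><code>// Byte sequence to send over the 7-pin mini-DIN serial port:
const unsigned char start_cmd = 128;   // START: opens the OI
const unsigned char safe_cmd  = 131;   // SAFE mode: enables actuator commands
// DRIVE (opcode 137): velocity 200 mm/s, radius 0x8000 = drive straight
const unsigned char drive_cmd[5] = {137, 0x00, 0xC8, 0x80, 0x00};
</code></pre>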
2798
2014-04-21T13:37:14.647
|mobile-robot|ros|irobot-create|
<p>I'm trying to find a good beginners platform to use ROS with, and I came across the iRobot Create. To my surprise, they do not sell in Europe. Why is that?</p>
Why does iRobot not sell the Create in Europe?
<p>Since posting the question I tried to tune the 3 PID parameters a lot, but I never reached any acceptable result. I didn't try to modify the error curve as I suggested in the question, but I found an alternative solution which works reasonably well. There might be other good solutions.</p>

<p>I was using a PID as follows:</p>

<ul>
<li>process setpoint: angle</li>
<li>process output: difference between motor speeds (used in the motor mix equation)</li>
</ul>

<p>While in theory this might very well work, I could not find good values for Kp, Ki, and Kd to obtain satisfying results. I changed my program to use 2 cascaded PIDs as follows:</p>

<p><strong>Outer PID:</strong></p>

<ul>
<li>process setpoint: angle</li>
<li>process output: angular rate</li>
</ul>

<p><strong>Inner PID:</strong></p>

<ul>
<li>process setpoint: angular rate</li>
<li>process output: difference between motor speeds (used in the motor mix equation)</li>
</ul>

<p>I first tuned the inner PID so as to get no oscillation but a slight resistance against angular change when the target angular rate is zero. I then tuned the outer PID so as to get no oscillation but a reasonably responsive attitude matching the stick inputs. Even with a very basic tuning the results are acceptable.</p>

<p>Thanks again to all the participants; your input encouraged me to learn more about PIDs and helped me avoid some errors.</p>
2800
2014-04-21T22:27:35.670
|pid|
<p>I'm developing/tuning a software PID for a quadcopter. For now I'm only trying to stabilise the pitch angle using the front and back motors, and I'm only looking at Kp. The motors have a limited control resolution: input variations need to reach a threshold to have any effect at all.</p>

<p>The process output does reach the setpoint, but not precisely enough for my requirements. There is no steady-state error (aka droop); the hunting range is centered on the setpoint, just too wide. Also, the instability is not an oscillation, but more of a random drift which needs to grow large enough before the PID attempts to correct it.</p>

<ul>
<li>With a lower Kp the output needs to diverge from the setpoint significantly before the error is big enough for the PID to attempt to correct it.</li>
<li>With a higher Kp the PID oscillates.</li>
</ul>

<p>I could not find a reasonable compromise.</p>

<p>I'm thinking about applying the cube-root function (or something similar) to the error before feeding it to the PID: that way small errors should be significant enough for the PID to attempt to correct them, and large errors would be reduced and might not trigger oscillations. I suppose someone must have been through this before: is this a good solution? Are there any better alternatives?</p>

<p>This is not a steady-state error (aka droop) or oscillation issue as far as I can tell: please don't suggest using Ki or Kd.</p>

<p><strong>EDIT</strong>: I have clarified the problem description and suggested using cube root rather than logarithm, which was indeed a bad idea.</p>

<p>Thanks, Marc.</p>
PID output does not reach setpoint precisely enough
<p>Besides the clear <a href="https://robotics.stackexchange.com/a/4505/6941">answer</a> that helps distinguish planning from control, I would stress the point that classical approaches based on pole placement (e.g. PID) and optimal control techniques (e.g. LQR) respond to different design requirements:</p>

<ul>
<li><p>The former methods are oriented more toward shaping the transient dynamics of the closed-loop system (faster/slower response), and they allow for a principled way to reject disturbances; hence, key features are: frequency analysis, transient response, steady-state errors, robustness margins, ease of gain tuning.</p></li>
<li><p>The latter methods focus much more on meeting given integral goals, that is, on complying with objectives such as a maximum level of control effort, or the final displacement attained in minimum time, which are computed over the whole trajectory; thus, key features are: bounds on the control output, state feedback, performance on the whole.</p></li>
</ul>

<p>That said, pole placement techniques are often preferable, given the difficulties of setting up proper cost functions in LQR design, as described in <a href="http://en.wikipedia.org/wiki/Linear-quadratic_regulator#General_description" rel="nofollow noreferrer">wikipedia</a>.</p>
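<p>For concreteness, the "integral goals" of the latter class are typically encoded in a quadratic cost such as</p>

<p>$$ J = \int_0^\infty \left( x^\top Q\, x + u^\top R\, u \right)\, dt $$</p>

<p>where the weights $Q$ and $R$ trade state error against control effort; choosing them well is exactly the difficulty alluded to above.</p>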
2807
2014-04-22T14:21:39.717
|control|motion-planning|
<p>For a robot, say path planning in particular, what are the pros and cons of choosing classical control theory or optimal control (LQR for example) ?</p>
What are the advantages of optimal control as opposed to control theory?
<p>You've asked more than one question, so I'll try to answer them in order.</p> <ol> <li><p>The Robotics community has not yet hit the limits of current hardware, so very little work is being done on the exotic cutting edge like neuromorphic hardware. The exception to this is software neural nets, which have come in and out of fashion for decades, and the Nv artificial neurons used successfully in BEAM robotics.</p></li> <li><p>No. Neuromorphic systems are not ideal for precision. Most probes are remote-controlled, not Artificially Intelligent. </p></li> <li><p>To some extent. A bird's brain could not be built in neuromorphic form, and then used to fly a fighter jet, as the control actuators and senses are completely different. If the algorithms for good flight are properly understood, they could be implemented on digital hardware more easily than trying to wire up a neuromorphic chip. </p></li> <li><p>Its not. The hope was that if we could throw enough neurons at the problem , and put them in a learning circuit, then we would not have to understand NLP and similar problems, because the machine would magically solve them for us. But while that might work for a demo - look up Perceptrons, for instance - it does not get us any closer to the algorithms we need.</p></li> <li><p>Not at all. Neuromorphic mimics the neural structures in living creatures. Quantum computers specialize in simultaneously parallel computing.</p></li> <li><p>Most likely. Digital systems work very well for measurement and calculation. Consider how microcontrollers are often easier to understand, debug and use than operational amplifiers, for many control systems.</p></li> </ol> <p>I am afraid that this is largely an opinion-based answer, as the questions were very general, and contained enough scope for an entire course.</p>
2812
2014-04-23T09:28:32.033
|microcontroller|computer-vision|machine-learning|research|
<p>I asked a similar kind of question some time ago (<a href="https://robotics.stackexchange.com/questions/2526/neuromorphic-engineering-and-robotics">Neuromorphic Engineering and Robotics</a>).</p>

<p>Since then, many things have become clearer. A <a href="http://journal.frontiersin.org/Journal/10.3389/fnins.2013.00118/full" rel="nofollow noreferrer">road-map for neuromorphic computing</a> was revealed recently; it proposes an analog way of computation to solve advanced computer vision problems. IBM and Qualcomm are also working on similar projects, though on the digital side. Memristor technology is slated to arrive very soon.</p>

<p>The question I am asking here is: <strong>how is the robotics community working to adopt this technology?</strong> This question opens the domain to other pressing questions which have been answered only cryptically since the 1980s.</p>

<p>Are neuromorphic computers good for <em>mission-critical, precise robots</em>, like those on Mars? Can we use neuromorphic systems in <em>avionics</em>? How is neuromorphic processing going to help us solve problems in NLP and knowledge processing? <em>Aren't quantum computers very similar to neuromorphic computers in ideology?</em> If neuromorphic robots gain traction, will digital hardware still be required?</p>

<p>It would be really nice if someone could address all these points, because the answers scattered across various, sparsely related research papers are very cryptic.</p>
Role of Neuromorphic Computing and Quantum Computing in the field of Robotics and AI
<p>Let's assume a constant Kalman filter $x_n = Gx_{n-1}+\omega$ with $\omega \sim \mathrm{N}(0, W)$ and $y_n = Fx_n+ \nu$ with $ \nu \sim \mathrm{N}(0,V)$, and initial state $x_0 \sim \mathrm{N}(m_0, C_0)$, where $C_0$ is the initial state variance. Then, the sequence of state covariance matrices $C_0, C_1, \dots, C_n$ converges to a constant covariance matrix $C &gt; 0$ as $n \to \infty$ if the system is observable, and $C$ is a function of the system and measurement covariance matrices $W$ and $V$, respectively (i.e., it depends only on the signal-to-noise ratio). The Kalman gain also converges to a constant matrix. This convergence process occurs in just a few steps.</p>

<p>In summary, the state covariance matrix $C_n$ does not decrease indefinitely with time; it decreases until it reaches a limiting constant covariance matrix $C&gt;0$. In fact, it is not even guaranteed that $C_n$ will decrease! If $C_0 &lt; C$, the sequence $C_0, C_1, C_2 \dots$ increases toward $C$.</p>

<p>See the second chapter of the following reference for a proof: <a href="http://rads.stackoverflow.com/amzn/click/0387947256" rel="nofollow">Bayesian forecasting and dynamic models, by West and Harrison</a></p>
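<p>You can watch this convergence numerically by iterating the covariance recursion on its own (it does not depend on the measured values). A minimal sketch for the water-tank model, with $x = [\text{level}, \text{rate}]$, a unit time step, and assumed noise levels:</p>

<pre><code>#include &lt;cstdio&gt;

int main()
{
    // Tank model: level' = level + rate, rate' = rate (constant fill rate).
    double C[2][2] = {{1, 0}, {0, 1}};  // initial state covariance
    const double W = 1e-5;              // assumed process noise (per state)
    const double V = 0.1;               // assumed measurement noise

    for (int n = 0; n &lt; 20; ++n) {
        // Predict: P = F C F^T + W, written out for F = [[1,1],[0,1]]
        double a = C[0][0] + C[0][1] + C[1][0] + C[1][1] + W;
        double b = C[0][1] + C[1][1];
        double c = C[1][0] + C[1][1];
        double d = C[1][1] + W;
        // Correct with H = [1 0]: K = P H^T / (H P H^T + V)
        double s  = a + V;
        double k0 = a / s, k1 = c / s;
        C[0][0] = (1 - k0) * a;  C[0][1] = (1 - k0) * b;
        C[1][0] = c - k1 * a;    C[1][1] = d - k1 * b;
        printf("n=%2d  C00=%.6f  C11=%.6f  K=(%.4f, %.4f)\n",
               n, C[0][0], C[1][1], k0, k1);
    }
    return 0;
}
</code></pre>

<p>The printed covariances and gains settle to constants within a few dozen iterations at most, which is exactly the flat behaviour observed in the question.</p>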
2826
2014-04-25T15:42:28.123
|kalman-filter|
<p>I am learning about Kalman filters, and implementing the examples from the paper <a href="https://www.cs.cornell.edu/Courses/cs4758/2012sp/materials/MI63slides.pdf" rel="noreferrer">Kalman Filter Applications - Cornell University</a>.</p> <p>I have implemented example 2, which models a simple water tank, filling at a constant rate. We only measure the tank level, and the Kalman filter is supposed to infer the fill rate.</p> <p><img src="https://i.stack.imgur.com/w9SEw.png" alt="Kalman Filter Example, filling a water tank."></p> <p>According to the model, the fill rate is a constant, so I assumed that over time, the Kalman filter would converge more and more accurately (and with less and less noise) on the correct fill rate. However, the amount of noise in the fill rate never seems to reduce after the first few iterations:</p> <p><img src="https://i.stack.imgur.com/zNd1Q.png" alt="Kalman Filter example, Filling a water tank at a constant rate."></p> <p>This graph shows how the fill rate part of the state vector changes over the course of 1000 iterations of the simulation.</p> <p>Adjusting the Measurement Variance Matrix seems to have very little effect on the fill rate noise.</p> <p>Also, the Kalman gain vector and State Variance matrix seem to be constant throughout the simulation. I assumed that the State Variance would reduce as the filter became more and more confident in its state estimate.</p> <p>Questions: - Is this graph what I should expect to see? - Should the Kalman Gain vector and State Variance matrix change over time in this situation?</p>
How much should I expect a Kalman filter to converge?
<p>You will need BOTH. Use encoder sensors on the wheels, one for each side. These will allow a PID controller to maintain wheel speed even with the variable mechanical loads of uneven terrain.</p>

<p>The heading sensor (IMU) will need gyros too, and maybe a magnetic compass to correct for drift in the gyros. You will also need a PID controller that looks at the difference between the commanded heading and the heading as measured by the IMU; this PID will adjust the target speeds of the two wheel PIDs.</p>
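<p>A sketch of how the two layers fit together (hypothetical helper and controller names):</p>

<pre><code>// Outer loop: heading error (from the IMU) -&gt; wheel-speed trim
float headingErr = wrap180(commandedHeading - imuHeading());
float trim = headingPID.update(headingErr, dt);

// Inner loops: one wheel-speed PID per side, fed by the encoders
float leftCmd  = leftWheelPID.update((baseSpeed - trim) - leftEncoderSpeed(), dt);
float rightCmd = rightWheelPID.update((baseSpeed + trim) - rightEncoderSpeed(), dt);
setMotorPWM(leftCmd, rightCmd);
</code></pre>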
2829
2014-04-26T04:39:43.460
|sensors|control|pid|differential-drive|
<p>So I am building a differential drive robot and I want it to autonomously drive in a straight line on an uneven surface. I know I need a position and velocity PID. As of now, I am deciding on which sensors to buy. Should I use optical encoders, accelerometers, or something else?</p> <p>I wanted to go with accelerometers due to the error encoders would face due to slippage, but I am not sure.</p> <p>Some enlightenment would help!</p>
Differential Drive Robot on uneven surfaces
<p>Servos should be able to accomplish both the 90 degree and the 360 degree (i.e. "continuous rotation") movements.</p>

<p>A standard servo is commanded to an absolute angle via its control pulse width, so for the 90 degree axis you simply need to keep track of the commanded angle in your code. A continuous-rotation servo, on the other hand, gives you speed control rather than position control, so for the 360 degree axis you need to keep track of an estimated position (the commanded speed integrated over time) in your code.</p>

<p>Alternatively, you could directly measure the position with a separate sensor (like a wheel encoder). This is traditionally used in cases where the drive could slip and cause the estimated position and the real position to get out of sync. This will probably not be an issue in an airship, where the only load on the motor will be the turning of the turret in free space.</p>
2841
2014-04-28T12:00:36.333
|sensors|rcservo|
<p>First, a bit of background: I am planning to make a highly maneuverable airship controlled by four thrust-vectored propellers. I don't want to rely on a rudder and forward momentum for turns, but instead want to be able to maneuver with direct prop thrust. I want to be able to point each prop anywhere within a half sphere/dome. So two axes: 360 degrees of traversal for forward/back/up/down, and 180 degrees for left/right. The nearest thing I can think of is a ball turret similar to <a href="http://en.wikipedia.org/wiki/Ball_turret" rel="nofollow">this ball turret</a>, but with a motor and propeller instead of a gun. The turret can rotate infinitely through 360 degrees, while the gun rotates through 90 degrees.</p>

<p>My first thought was a servo for each axis, but servos are limited in range and I would like the 360-degree axis to be able to rotate continuously. This would allow the turret to rotate to the desired angle using the shortest path.</p>

<p>My question is: what do I need to be able to rotate the turret and still know what angles the turret is currently pointing at?</p>
What approaches should I consider to create rotating a turret
<p>What you're looking for is generally referred to as a slip ring. Here's a cheaper slip ring sold by Adafruit along with a video demonstrating how it works: <a href="http://www.adafruit.com/products/736">http://www.adafruit.com/products/736</a>. They sell hobbyist parts. I've used slip rings before to manufacture my own rotating LiDAR using a Hokuyo.</p> <p>Here's another company that sells what seem to be industrial grade slip rings: <a href="http://www.moflon.com/">http://www.moflon.com/</a>. I found them by doing a quick Google search for "slip ring connector", which you can use to find many many more options.</p> <p>As a final note, here's the video of the rotating LiDAR I built which uses a slip ring...you will see the slip ring in the first 30 seconds of the video: <a href="https://www.youtube.com/watch?v=7PLcqhXVndI&amp;list=UU0A5iVxngtVcHPjANH75z1A">https://www.youtube.com/watch?v=7PLcqhXVndI&amp;list=UU0A5iVxngtVcHPjANH75z1A</a>.</p>
2852
2014-04-30T01:11:52.023
|sensors|wiring|
<p>I am planning to create a motor turret described <a href="https://robotics.stackexchange.com/questions/2841/what-approaches-should-i-consider-to-create-rotating-a-turret">in this question</a>. But to simplify the problem, I'm thinking of a wind turbine with a generator in the main head that can rotate freely through 360 degrees to face the wind. How would I prevent the power and sensor wires coming from the head and down the shaft from twisting? </p>
How to prevent twisting of cables
<p>As a complete answer to this question would be very, very long, I'll attempt to answer it in general, high-level lines. If you wish to know specifics, ask several more concise questions.</p>

<ol>
<li>You need to create the drones. The drones need to be controllable and stable. Quadcopters are usually OK; octocopters are more stable and easier to control. Larger ones are more stable but less maneuverable.</li>
<li>You need a localization system so each drone knows its position and orientation in a fixed coordinate system. This can be obtained in quite a large number of ways. Usually orientation is obtained using an IMU, and position can be obtained via GPS (or dGPS for higher precision, if possible), a sound-based system (using the TDOA method or similar), or a visual system, for example. Another approach would be to fully localize only one drone and then use some system to find the relative positions of the other drones with respect to that one.</li>
<li>After that, you need to produce control sequences to guide the drones along the desired paths. This can be done from the ground by some wireless control system, or it can be implemented directly on the drones.</li>
</ol>

<p>Given how broad your question is, I hope this covers at least some portion of the answer you were looking for.</p>
2854
2014-04-30T04:57:54.670
|quadcopter|
<p>How would one go about making a number of drones fly in a preset pattern or formation? For example, have them rotate around a point.</p> <p>Something like this: <a href="https://www.youtube.com/watch?v=ShGl5rQK3ew" rel="nofollow">https://www.youtube.com/watch?v=ShGl5rQK3ew</a></p>
Create a set of drones to fly in patterns
<p><strong>At the foundation of PID control, there is the assumption that the quantity you are measuring (and using to compute your error) has a direct linear relationship with the quantity you are controlling.</strong></p> <p>In practice, people frequently bend this rule without things going horribly wrong. In fact this resiliency to modeling error--when our assumptions (our "model") do not match the real system exactly--is one of the reasons PID control is such a popular and commonly used technique.</p> <p>In your case, what you are measuring is distances between your robot and the walls, and what you are controlling is the speed of your motors. So your question really boils down to <strong>"How can we describe a (hopefully) linear relationship between these quantities?"</strong></p> <p>First off, we have to deal with the fact that we have at least six different numbers floating around (four sensor measurements and at least two motor speeds), when we'd really like to have just one, to make a simple SISO (Single Input Single Output) PID controller.</p> <p>For the motors, one way we can accomplish that is by choosing to use the <em>difference</em> between the motor speeds as our control quantity. We will have to control overall speed separately, but it makes steering a lot easier this way.</p> <p>$\omega = R_\text{wheel speed}-L_\text{wheel speed}$</p> <p>Next, for combining the range sensor inputs we have a number of different options to choose from. Which one is best for a given situation will depend a lot on the geometry of the robot and where the sensors are pointing. For now, let's just assume we can boil it down to a DISTANCE_LEFT and DISTANCE_RIGHT that tell us how far away the walls are on either side. We can then take the <em>difference</em> and use that as our "error" measurement. Minimizing this error makes the distance to either wall equal and keeps us centered in the hall. So far, so good.</p> <p>error = RIGHT_DISTANCE - LEFT_DISTANCE</p> <p><img src="https://i.stack.imgur.com/ADPvY.png" alt="enter image description here"></p> <p>Now for the part where we bend the rules. When we chose to use the difference between left and right motor speeds as our control variable, what we were in effect doing is controlling the <em>angular rate</em> of our robot. The relationship between angular rate and us being centered in the hallway goes something like this:</p> <blockquote> <p>angular rate --> {<em>integration over time</em>} --> robot heading angle --> {<em>cosine of heading angle</em> * <em>forward movement speed</em>} --> movement speed toward or away from the hallway center --> {<em>integration over time</em>} --> robot position in the hallway</p> </blockquote> <p>Yikes! That does not look linear. What do we do? Well, we can do what all good engineers do when faced with a complex math problem. We approximate. ;)</p> <p>For example, if the robot is facing in roughly the direction down the hall that we would like it to go, we can make the claim that our robot's heading (for small heading angles) is (approximately) linearly proportional to its movement speed toward or away from the hallway center. Now we have:</p> <blockquote> <p>angular rate --> {<em>integration over time</em>} --> robot heading angle --> {<em>approx. linear</em>} --> movement speed toward or away from the hallway center --> {<em>integration over time</em>} --> robot position in the hallway</p> </blockquote> <p>Better, but we're not there yet. We still have those integration operations to deal with.
This is where it gets complicated. I'll see about filling out this answer more later, but for now I'll say that the reason this PID controller (might) still work has to do with (a.) the integral and derivative terms in the controller and (b.) interaction between the controller and the dynamic equations governing the motion of the robot itself.</p>
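<p>To make the single-input single-output idea above concrete, here is a minimal Python sketch of the loop (the sensor and motor interfaces are placeholders I made up, and the signs of the correction depend on your robot's geometry, so check them on hardware):</p>

<pre><code>class PID:
    """Textbook PID acting on the hallway-centering error."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def control_step(pid, left_dist, right_dist, base_speed, dt):
    # error &gt; 0 when we are closer to the left wall
    error = right_dist - left_dist
    omega = pid.update(error, dt)
    # Split the correction across the wheels around the forward speed,
    # so omega acts as the commanded wheel-speed difference
    return base_speed + 0.5 * omega, base_speed - 0.5 * omega
</code></pre>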
2868
2014-05-02T18:44:51.583
|control|pid|differential-drive|
<p>Edited: I have a differential drive robot that needs to drive down a hall and stay in the center. I have 4 ultrasonic sensors, 2 on each side. Currently I am thinking of implementing pure pursuit with a lookahead distance, etc., but I am not sure if this is the correct approach. I have all the lower-level PIDs for motor control working; I just need some help in choosing a path-planning algorithm. Will pure pursuit work for my needs, or do people have any suggestions?</p>
Differential Drive Robot Control
<p>You're right to see this as a problem of complexity. Whatever setup you decide to use, keep in mind that you will inevitably have to troubleshoot it. It's already going to be complex -- but make it work in your favor.</p> <p>In other words, let's say that you choose to make a subassembly for your sensors (like arduino + sensors, or GPIO board + sensors). Make sure that this subassembly is completely self-contained: it can read all the sensor values, and without being connected to the beaglebone it can tell you whether all sensors are being reported properly. What you <em>don't</em> want to do is make it so that the entire system has to be connected in order to check the low-level sensors.</p> <p>As you compare options, figure out which options will support a modular system design and which options won't. </p>
2875
2014-05-04T20:13:08.077
|arduino|mobile-robot|sensors|computer-vision|beagle-bone|
<p>I'm building the biggest robot I've ever built. The hardware I have so far is as follows:</p> <ul> <li>HCR-Platform from DFRobot as a base</li> <li>x2 12 V 146 rpm DC motors with two-phase Hall encoders</li> <li>x7 Sharp 2Y0A21 IR sensors</li> <li>x6 URM37 ultrasonic sensors</li> <li>x4 IR ground sensors</li> <li>Microsoft Kinect</li> </ul> <p>Right now I'm only using a RoMeo board (Arduino 328 compatible) to drive the motors and run the PIDs for the wheels + steering, and also to access all the sensors (except for the Kinect). I have a BeagleBone Black running Linux that is intended to be the main brain, connected to the RoMeo using the RS232 port, handling the Kinect processing + WiFi access.</p> <p>I started thinking about connecting the sensors to the BeagleBone directly so I don't waste time sending commands to the Arduino board to read the sensors, and that yielded the first issue: the BeagleBone works at 3.3 V instead of the 5 V used by the sensors.</p> <p>After this I thought of creating a board with voltage dividers to connect the sensors to, and then using a ribbon cable to connect this new board to the BeagleBone. I couldn't find any 2x23 IDC male connector to create the "interface cable" between the two boards, so the BeagleBone option is out, as I don't want to have tons of jumper cables all over the place.</p> <p>This morning I thought about all this again and researched GPIO boards over USB. I found three from Numato; only one works at TTL 5 V and has 8 pins, so I would need a few of them to use all the sensors - so unless I design my own board this option is out too.</p> <p>At this point I'm quite confused about what the best hardware to drive this beast would be. Right now I think I should use an Intel NUC with Linux for the Kinect, WiFi, and a USB link to a custom-made sensor board. This custom-made board would work at TTL 5 V, provide a power bus for the sensors, and interface all the "low level" sensors using USB as the link. I also thought about an FPGA for the custom board, but I'm not sure if this would help me or if it's worth the effort of learning how to use it.</p> <p>What are your thoughts on this? Any idea how these issues are solved on "complex" robots?</p>
Connect sensors to beaglebone/arduino in a complex robot
<p>Why not consider a hybrid cable that can be stiff ("frozen") like a pipe or flexible like a rope? In such a case it can push, not only pull. By a hybrid cable I mean a cable covered with many small cylindrical pulleys.</p>
2876
2014-05-05T09:04:51.260
|sensors|mechanism|
<p>According not only to <a href="http://www.hitl.washington.edu/people/tfurness/courses/inde543/READINGS-03/BERKLEY/White%20Paper%20-%20Haptic%20Devices.pdf">this link</a>, but in my humble opinion to common sense as well, cable-based haptic devices have lower inertia and are less complicated mechanically. I also believe that controlling them is not that big a deal - the inverse kinematics should be quite straightforward. Moreover, any play should be easy to compensate for - if any occurs at all, since the cables are tensioned. Cables should also be easy - just a guess on my part - to equip with strain gauges, or to become strain gauges themselves, enhancing the control capabilities of a device.</p> <p>Where am I wrong? Why is it that linkage-based systems (e.g. PHANToM or Falcon, although the latter has a cable transmission), especially with impedance control, are the only ones I seem to be able to buy? Is it because of cable elongation (creep)? Or a too-constrained workspace (especially the angles)?</p>
Why are linkage-based haptic devices much more common than cable (tension)-based ones?
<p>You should keep as a guideline that you should NOT modify the external package. If you have to, it should be in a way that can in the end be integrated upstream (i.e. be sent as a patch to the original developers and integrated there).</p> <p>First thing: what name to give to the package? Do NOT use the external/ category. This is deprecated and no new packages should be added to it - instead, find a <a href="http://rock-robotics.org/stable/documentation/packages/package_structure.html" rel="nofollow">proper category</a> to put it in. The basename (the name of the package inside the category) is probably easy to derive from the library's own name or the name of the device it drives (if it is a device driver; Rock has special rules for device driver names, devicetype_devicename, e.g. imu_xsens).</p> <p>Ideally, the external package follows standard practice: it uses a standard build system such as CMake or autotools and provides an install target. It can be <a href="http://rock-robotics.org/stable/documentation/autoproj/adding_packages.html" rel="nofollow">added to the autoproj configuration like any other package</a>. Ideally, a manifest file should also be added in the manifests/ directory of the package set.</p> <p>If the package does NOT follow these practices, then you will unfortunately have to get your hands dirty. You will have to create a special autobuild handler that handles your package manually. There are examples of these in the Rock package set:</p> <pre><code>import_package 'drivers/aria' do |pkg|
   def pkg.do_build
     # Perform install steps
     # Use run('build', Autobuild.tool(:command_name), 'argument', 'to', 'the', 'command') to run an external command
     # For instance
     #   run('build', Autobuild.tool(:make))
     # FileUtils.install to copy files
   end
   def pkg.do_install
     # Perform install steps
     # Use run('install', Autobuild.tool(:command_name), 'argument', 'to', 'the', 'command') to run an external command
     # FileUtils.install to copy files
     # Files should be installed in the prefix path returned by #prefix
   end
   def pkg.do_pkgconfig
     pkgconfig_dir = File.join(prefix, 'lib', "pkgconfig")
     FileUtils.mkdir_p pkgconfig_dir
     # Generate a proper pkg-config file that you install in
     # pkgconfig_dir
     # A good way is to save a template file in the package
     # set directly and generate the file using ERB
   end

   pkg.post_install do
     pkg.progress_start "building %s" do
       pkg.do_build
     end
     pkg.progress_start "installing %s" do
       pkg.do_install
     end
     pkg.progress_start "generating pkgconfig file for %s" do
       pkg.do_pkgconfig
     end
   end
end
</code></pre>
2882
2014-05-06T07:38:21.110
|software|rock|
<p>The <a href="http://rock-robotics.org" rel="nofollow">Rock</a> framework already includes a lot of software libraries. However, I would like to add an existing external library so that I can use it for my component development. What is the preferred way of doing that?</p>
How do I add an external library to the Rock Framework?
<p>The original T-slot frame system was developed in the 1980s by <a href="http://www.item24.de/en.html" rel="nofollow noreferrer">item GmbH</a> and was quickly developed into the <a href="http://catalog.item24us.com/en/index.php?cat=c1001009411_MB-Building-Kit-for-Mechanical-Engineering.html" rel="nofollow noreferrer"><em>MB Building Kit System</em></a>, hence I've always known it as <em>item bar</em>.</p> <p>Since then, though, there have been many other manufacturers producing similar systems. Examples include <a href="http://www13.boschrexroth-us.com/framing_shop/" rel="nofollow noreferrer">Bosch Rexroth</a> (as mentioned by <a href="https://robotics.stackexchange.com/users/1584/guy-sirton">Guy Sirton</a>), <a href="http://www.8020.net/" rel="nofollow noreferrer">80/20</a> (<a href="https://robotics.stackexchange.com/a/2886/37">Andrew Capodieci's suggestion</a>), <a href="http://uk.rs-online.com/web/b/flexlink/" rel="nofollow noreferrer">FlexLink</a> and many others, but <em>item</em> is still one of the most popular and well respected, especially in Europe.</p> <p>Also, as <a href="https://robotics.stackexchange.com/users/187/david-cary">David Cary</a> says, there are also people producing micro versions of T-slot systems, such as <a href="http://openbuildspartstore.com/v-slot/?sort=featured" rel="nofollow noreferrer">OpenRail/VSlot</a>, <a href="http://www.makerbeam.eu/" rel="nofollow noreferrer">MakerBeam</a> and <a href="http://www.openbeamusa.com/" rel="nofollow noreferrer">OpenBeam</a> etc.</p>
2885
2014-05-06T10:51:34.707
|frame|identification|
<p>What's the name of the "big meccano" used in the photo below to construct all the cabinets and racks?</p> <p>It appears to be an aluminium cut-to-length system of 4-way rails. I've seen it used many times and assume it has a well-known brand-name to those that know.</p> <p><img src="https://i.stack.imgur.com/HG4sp.jpg" alt="Inside Audi facility"></p> <p>Photo taken from theverge.com and was a feature about how Audi are building a new car.</p>
Can you identify the construction material/system used in the pic?
<p>This is something that I have looked for in standard textbooks and have not found much information on. I am not sure if it is because they are geared towards industrial arms, which are typically not backdrivable (meaning once you reach a position, you can essentially turn off your motors and the arm will continue to hold position); or if they don't care about this level of control (meaning they have low-level controllers that handle it and the arm will "know what to do"); or if it is too simple a problem to worry about.</p> <p>Nevertheless, I wrote some code to calculate these torques for a simulated robot. The output seems correct, although it was never tested. This is Python code for OpenRAVE.</p> <pre><code># Calculate the center of mass past the current joint.
# This assumes the mass is specified properly for each link.
def get_COM_past_joint(joint):
    mass = 0.0
    com = np.array([0.0, 0.0, 0.0])
    for i in range(joint.GetDOFIndex(), robot.GetDOF()):
        j = robot.GetJointFromDOFIndex(i)
        link = j.GetHierarchyChildLink()
        mass += link.GetMass()
        com += link.GetMass() * link.GetGlobalCOM()
    # todo: add mass in robot's hand
    com /= mass
    return (mass, com)

# Calculate the torque required to hold joints in current configuration.
# Procedure:
#   1. calculate the center of mass "past" the joint
#   2. find the closest perpendicular distance between the COM and joint axis
#   3. cross the joint axis with gravity vector
#   4. cross the perpendicular distance vector with gravity vector
#   5. return mass * direction * magnitude of both crosses
def get_static_joint_torque(joint):
    (mass, com) = get_COM_past_joint(joint)
    anchor = joint.GetAnchor()
    axis = joint.GetAxis()
    joint_to_com_closest = (anchor - com) - np.dot((anchor - com), axis) * axis
    axis_cross_k = np.cross(axis, np.array([0, 0, -1]))
    joint_to_com_closest_cross_k = np.cross(joint_to_com_closest, np.array([0, 0, -1]))
    direction = np.sign(np.dot(axis, joint_to_com_closest_cross_k))
    return mass*direction*np.linalg.norm(axis_cross_k)*np.linalg.norm(joint_to_com_closest_cross_k)
</code></pre> <p>EDIT: I recently found out that OpenRAVE does a more advanced form of this calculation already. (And its units are correct too.)</p> <pre><code>robot.SetDOFValues(dofvalues)
robot.SetDOFVelocities(dofvelocities)
torques = robot.ComputeInverseDynamics(dofaccelerations)
</code></pre> <p>or</p> <pre><code>torqueconfiguration, torquecoriolis, torquegravity = robot.ComputeInverseDynamics(dofaccelerations,None,returncomponents=True)
</code></pre>
2889
2014-05-07T12:31:33.813
|torque|manipulator|
<p>I have a 7 dof manipulator (Kuka LBR4+) and would like to calculate the joint torques needed to keep the arm in a static equilibrium. In most books the transposed jacobian is used to map the forces applying on the end effector to the joint torques.</p> <p>$\tau = J^T\cdot F$</p> <p>That however doesn't take the mass of the links into account. Is there a way to calculate the needed torques for a given configuration so that, assuming an ideal case, by setting these torques the arm will be in a static equilibrium?</p> <p>cheers</p> <p><strong>EDIT:</strong></p> <p>For everybody interested, i found a solution to this problem in <em>Introduction to Robotics - Third Edition</em> by John J. Craig on Page 175-176. It is done with the aid of the iterative Newton-Euler dynamics algorithm. The actual trick is, to set all velocities and accelerations to zero except for the base acceleration. The base acceleration will be $^0 \dot v_0 = G$, where G has the magnitude of the gravity vector but points in opposite direction. This is equivalent to saying that the base of the robot is accelerating upwards with 1 g and this upward acceleration causes exactly the same effect on the link as gravity would.</p>
Static equilibrium for 7 dof manipulator
<p>None of what I will say in this answer is safe, and you should not even attempt it without proper supervision, safety equipment and a class D fire extinguisher around. <a href="http://www.batteryspace.com/warningsforusingbatteries.aspx" rel="nofollow">Lithium ion batteries need to be handled carefully, in both the mechanical and electrical sense. Physical stress can cause fires, as can improperly charging or discharging.</a></p> <p>Given the danger of reverse engineering the cell management circuitry in your batteries, it would be better to disassemble them and treat them as individual cells. Any of the cells that haven't gone flat need to be grouped to minimize any mismatches in their charge levels or lifetimes. Then you can look at constructing a battery management system that can safely deal with the cells you have.</p>
2899
2014-05-09T08:09:16.167
|battery|
<p>I'm working on my first robot project. I previously used a 12 V 6 Ah sealed lead acid battery, but recently I acquired 15 ASUS Li-Ion battery packs, each of them 14.8 V and either 2200 mAh or 4400 mAh. The laptops have been discarded, and some of the battery packs seem to be dead.</p> <p>The battery packs have an 8-pin connector. Inside, I assume there's a bunch of 18650 cells and some electronics.</p> <p>My robot can handle 14.8 V directly.</p> <p>How can I use these batteries? How can I charge them without the laptops? I'm a little put off by the idea of taking the 18650 cells out of the packs and rebuilding my own battery pack and charging system, but if that's what's needed I have to do it.</p> <p>The packs are marked ASUS A41-A3 for the 2200 mAh ones, and ASUS A42-A3 for the 4400 mAh ones.</p>
Salvaging a bunch of laptop battery packs
<p>I think people are getting their ms and µs mixed up! Generally the pulse was always between 1 ms and 2 ms, but to get the full or extended range it is said to be 0.8 ms to 2.2 ms (i.e. 800 µs to 2200 µs) - not 800 to 2200 ms (I'm sure that was just a typo, but to save confusion).</p> <p>Obviously there needs to be a gap between the pulses, so at 300 Hz you would have a pulse of 2.2 ms followed by a gap of 1.13 ms. These are known as the mark and the space, and can be written as a ratio (the mark/space ratio), although here it is irrelevant, as I will explain. (300 × (0.0022 + 0.00113) = 1 second.)</p> <p>The old servos always want 50 Hz, which means that as the mark changes in duration (the desired servo position), the space must also change in duration to keep the frequency at 50 Hz (a longer mark needs a shorter space to maintain the overall length of the signal, and vice versa).</p> <p>The new servos are quite happy to receive a position signal at 50 Hz, but will happily accept 300 Hz, so the only important thing is to send the correct mark duration for the required position and make no change to the space in between. This makes basic interfacing with microprocessors and program writing a little bit easier for beginners. For instance, you could always place a fixed 2.6 ms space between the marks, giving a position update frequency between 294 Hz and 208 Hz (2.6 ms + 0.8 ms and 2.6 ms + 2.2 ms), which for many uses is more than fast enough, and which also makes the mark/space ratio irrelevant. Of course, a constant update rate of 300 Hz could be better in cases requiring more consistent performance, but then a simple calculation is required in the program to set the space accordingly.</p>
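<p>To illustrate the arithmetic (a sketch only, not servo-driver code), here is how you might compute the space needed to hold a constant update frequency for a given mark:</p>

<pre><code>def space_for_frequency(mark_us, freq_hz):
    """Return the space (gap) in microseconds so that mark + space
    repeats at exactly freq_hz."""
    period_us = 1_000_000.0 / freq_hz
    space_us = period_us - mark_us
    if space_us &lt;= 0:
        raise ValueError("mark is too long for this frequency")
    return space_us

# A 2.2 ms mark at 300 Hz leaves roughly the 1.13 ms space quoted above
print(space_for_frequency(2200, 300))  # ~1133 microseconds
</code></pre>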
2921
2014-05-14T20:16:06.100
|rcservo|pwm|servomotor|
<p>Many websites say that analog servo motors work at 50 Hz and digital servo motors work at 300 Hz. My question is: does this difference apply only to the inner control loop of the servo, or does the user of a digital servo actually provide a 300 Hz PWM signal? To rephrase: are all (most) servos, including digital ones, controlled with 50 Hz PWM, or are digital ones specifically controlled with 300 Hz PWM? Thanks</p>
Controlling digital servos
<p>P (proportional) largely determines how responsive the system is, but P is not the only thing that controls responsiveness: the clock rate, or sample time, can also make the system more responsive. So if the sample rate is high enough, the system can already be responsive when you use only I (integral), and in that case a controller using only I and D is possible.</p>
2925
2014-05-15T10:10:56.020
|control|pid|
<p>Why doesn't a controller consisting only of the I and D terms of a PID exist?</p>
Why does an ID controller not exist?
<p>If you've built this from scratch, the best person to troubleshoot your code is yourself.</p> <p>Look at the basic pieces of the code: the sensor, the PIDs, and the motor control. Write some functions to help you test each of those pieces individually, so that you can determine whether the error you're seeing is in just one component, or a system-level error based on some incorrect code in all 3 components.</p> <p>Once you know the extent of the error, you'll be able to either recognize the problem and fix it, or to ask a more focused question here.</p>
2931
2014-05-16T14:37:35.183
|quadcopter|pid|imu|
<p>I have built a quad copter completely from scratch (electronics, mechanics and software). I am now at the point where all my sensor data looks correct and when I tilt the quad copter the correct motors increase and decrease.</p> <p>I have been trying to tune the PIDs for a couple of days now, in rate mode it stays level and rotates at roughly the correct degrees per second when I give it a command.</p> <p>In stability mode a lot of the time it just spins around the axis and when I did get it stable it kept rotating from upright to upside down and then maintaining an upside down flat position. I have come to the conclusion that I am either doing something completely wrong or I have some + - signs mixed around somewhere.</p> <p>Would anyone who is knowledgeable about quad copter control code be able to take a look at what I have done and how it works as I'm really struggling to work out what needs to change and what I should try next.</p> <p>My flight control code is posted below, the other relevant classes are hardware.h and main.cpp</p> <pre><code>#include "mbed.h" #include "rtos.h" #include "hardware.h" //Declarations float Constrain(const float in, const float min, const float max); float map(float x, float in_min, float in_max, float out_min, float out_max); void GetAttitude(); void Task500Hz(void const *n); void Task10Hz(); //Variables float _gyroRate[3] ={}; // Yaw, Pitch, Roll float _ypr[3] = {0,0,0}; // Yaw, pitch, roll float _yawTarget = 0; int _notFlying = 0; float _altitude = 0; int _10HzIterator = 0; float _ratePIDControllerOutputs[3] = {0,0,0}; //Yaw, pitch, roll float _stabPIDControllerOutputs[3] = {0,0,0}; //Yaw, pitch, roll float _motorPower [4] = {0,0,0,0}; //Timers RtosTimer *_updateTimer; // A thread to monitor the serial ports void FlightControllerThread(void const *args) { //Update Timer _updateTimer = new RtosTimer(Task500Hz, osTimerPeriodic, (void *)0); int updateTime = (1.0 / UPDATE_FREQUENCY) * 1000; _updateTimer-&gt;start(updateTime); // Wait here forever Thread::wait(osWaitForever); } //Constrains value to between min and max float Constrain(const float in, const float min, const float max) { float out = in; out = out &gt; max ? max : out; out = out &lt; min ? 
min : out; return out; } //Maps input to output float map(float x, float in_min, float in_max, float out_min, float out_max) { return (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min; } //Zeros values void GetAttitude() { //Take off zero values to account for any angle inbetween the PCB level and ground _ypr[1] = _ypr[1] - _zeroValues[1]; _ypr[2] = _ypr[2] - _zeroValues[2]; //Swap pitch and roll because IMU is mounted at a right angle to the board //Gyro data does need swapping - done within freeIMU class float pitch = _ypr[2]; float roll = _ypr[1]; _ypr[1] = pitch; _ypr[2] = roll; } void Task500Hz(void const *n) { _10HzIterator++; if(_10HzIterator % 50 == 0) { Task10Hz(); } //Get IMU data _freeIMU.getYawPitchRoll(_ypr); _freeIMU.getRate(_gyroRate); GetAttitude(); //Rate mode if(_rate == true &amp;&amp; _stab == false) { //Update rate PID process value with gyro rate _yawRatePIDController-&gt;setProcessValue(_gyroRate[0]); _pitchRatePIDController-&gt;setProcessValue(_gyroRate[1]); _rollRatePIDController-&gt;setProcessValue(_gyroRate[2]); //Update rate PID set point with desired rate from RC _yawRatePIDController-&gt;setSetPoint(_rcMappedCommands[0]); _pitchRatePIDController-&gt;setSetPoint(_rcMappedCommands[1]); _rollRatePIDController-&gt;setSetPoint(_rcMappedCommands[2]); //Compute rate PID outputs _ratePIDControllerOutputs[0] = _yawRatePIDController-&gt;compute(); _ratePIDControllerOutputs[1] = _pitchRatePIDController-&gt;compute(); _ratePIDControllerOutputs[2] = _rollRatePIDController-&gt;compute(); } //Stability mode else { //Update stab PID process value with ypr _yawStabPIDController-&gt;setProcessValue(_ypr[0]); _pitchStabPIDController-&gt;setProcessValue(_ypr[1]); _rollStabPIDController-&gt;setProcessValue(_ypr[2]); //Update stab PID set point with desired angle from RC _yawStabPIDController-&gt;setSetPoint(_yawTarget); _pitchStabPIDController-&gt;setSetPoint(_rcMappedCommands[1]); _rollStabPIDController-&gt;setSetPoint(_rcMappedCommands[2]); //Compute stab PID outputs _stabPIDControllerOutputs[0] = _yawStabPIDController-&gt;compute(); _stabPIDControllerOutputs[1] = _pitchStabPIDController-&gt;compute(); _stabPIDControllerOutputs[2] = _rollStabPIDController-&gt;compute(); //If pilot commanding yaw if(abs(_rcMappedCommands[0]) &gt; 0) { _stabPIDControllerOutputs[0] = _rcMappedCommands[0]; //Feed to rate PID (overwriting stab PID output) _yawTarget = _ypr[0]; } //Update rate PID process value with gyro rate _yawRatePIDController-&gt;setProcessValue(_gyroRate[0]); _pitchRatePIDController-&gt;setProcessValue(_gyroRate[1]); _rollRatePIDController-&gt;setProcessValue(_gyroRate[2]); //Update rate PID set point with desired rate from stab PID _yawRatePIDController-&gt;setSetPoint(_stabPIDControllerOutputs[0]); _pitchRatePIDController-&gt;setSetPoint(_stabPIDControllerOutputs[1]); _rollRatePIDController-&gt;setSetPoint(_stabPIDControllerOutputs[2]); //Compute rate PID outputs _ratePIDControllerOutputs[0] = _yawRatePIDController-&gt;compute(); _ratePIDControllerOutputs[1] = _pitchRatePIDController-&gt;compute(); _ratePIDControllerOutputs[2] = _rollRatePIDController-&gt;compute(); } //Testing _ratePIDControllerOutputs[0] = 0; // yaw //_ratePIDControllerOutputs[1] = 0; // pitch _ratePIDControllerOutputs[2] = 0; // roll //Calculate motor power if flying if(_rcMappedCommands[3] &gt; 0.1 &amp;&amp; _armed == true) { //Constrain motor power to 1, this means at max throttle there is no overhead for stabilising _motorPower[0] = Constrain((_rcMappedCommands[3] + 
_ratePIDControllerOutputs[1] - _ratePIDControllerOutputs[2] + _ratePIDControllerOutputs[0]), 0.0, 1.0); _motorPower[1] = Constrain((_rcMappedCommands[3] + _ratePIDControllerOutputs[1] + _ratePIDControllerOutputs[2] - _ratePIDControllerOutputs[0]), 0.0, 1.0); _motorPower[2] = Constrain((_rcMappedCommands[3] - _ratePIDControllerOutputs[1] + _ratePIDControllerOutputs[2] + _ratePIDControllerOutputs[0]), 0.0, 1.0); _motorPower[3] = Constrain((_rcMappedCommands[3] - _ratePIDControllerOutputs[1] - _ratePIDControllerOutputs[2] - _ratePIDControllerOutputs[0]), 0.0, 1.0); //Map 0-1 value to actual pwm pulsewidth 1060 - 1860 _motorPower[0] = map(_motorPower[0], 0.0, 1.0, MOTORS_MIN, 1500); //Reduced to 1500 to limit power for testing _motorPower[1] = map(_motorPower[1], 0.0, 1.0, MOTORS_MIN, 1500); _motorPower[2] = map(_motorPower[2], 0.0, 1.0, MOTORS_MIN, 1500); _motorPower[3] = map(_motorPower[3], 0.0, 1.0, MOTORS_MIN, 1500); } //Not flying else if(_armed == true) { _yawTarget = _ypr[0]; //Set motors to armed state _motorPower[0] = MOTORS_ARMED; _motorPower[1] = MOTORS_ARMED; _motorPower[2] = MOTORS_ARMED; _motorPower[3] = MOTORS_ARMED; _notFlying ++; if(_notFlying &gt; 500) //Not flying for 1 second { //Reset iteratior _notFlying = 0; //Reset I _yawRatePIDController-&gt;reset(); _pitchRatePIDController-&gt;reset(); _rollRatePIDController-&gt;reset(); _yawStabPIDController-&gt;reset(); _pitchStabPIDController-&gt;reset(); _rollStabPIDController-&gt;reset(); } } else { //Disable Motors _motorPower[0] = MOTORS_OFF; _motorPower[1] = MOTORS_OFF; _motorPower[2] = MOTORS_OFF; _motorPower[3] = MOTORS_OFF; } //Set motor power _motor1.pulsewidth_us(_motorPower[0]); _motor2.pulsewidth_us(_motorPower[1]); _motor3.pulsewidth_us(_motorPower[2]); _motor4.pulsewidth_us(_motorPower[3]); } //Print data void Task10Hz() { int batt = 0; _wirelessSerial.printf("&lt;%1.6f:%1.6f:%1.6f:%1.6f:%1.6f:%1.6f:%1.6f:%d:%1.6f:%1.6f:%1.6f:%1.6f:%1.6f:%1.6f:%d:%d:%d:%d:%1.6f:%1.6f:%1.6f:%1.2f:%1.2f:%1.2f:%1.2f:%1.8f:%1.8f:%1.8f:%1.8f:%1.8f:%1.8f:%1.8f:%1.8f:%1.8f:%1.8f:%1.8f:%1.8f:%1.8f:%1.8f:%1.8f:%1.8f:%1.8f:%1.8f&gt;", _motorPower[0], _motorPower[1], _motorPower[2], _motorPower[3], _ypr[0], _ypr[1], _ypr[2], batt, _ratePIDControllerOutputs[0], _ratePIDControllerOutputs[1], _ratePIDControllerOutputs[2], _stabPIDControllerOutputs[0], _stabPIDControllerOutputs[1], _stabPIDControllerOutputs[2], _armed, _initialised, _rate, _stab, _gyroRate[0], _gyroRate[1], _gyroRate[2], _rcMappedCommands[0], _rcMappedCommands[1], _rcMappedCommands[2], _rcMappedCommands[3], _yawRatePIDControllerP, _yawRatePIDControllerI, _yawRatePIDControllerD, _pitchRatePIDControllerP, _pitchRatePIDControllerI, _pitchRatePIDControllerD, _rollRatePIDControllerP, _rollRatePIDControllerI, _rollRatePIDControllerD, _yawStabPIDControllerP, _yawStabPIDControllerI, _yawStabPIDControllerD, _pitchStabPIDControllerP, _pitchStabPIDControllerI, _pitchStabPIDControllerD, _rollStabPIDControllerP, _rollStabPIDControllerI, _rollStabPIDControllerD); } </code></pre> <p>The whole program can be seen on my mBed page at <a href="http://mbed.org/users/joe4465/code/QuadMK6/" rel="nofollow">http://mbed.org/users/joe4465/code/QuadMK6/</a></p> <p>If you need any more info or something explaining let me know.</p> <p>If anyone can point me in the right direction or has any idea of what I should try next it would be very much appreciated.</p> <p>Thanks Joe</p>
Quad copter attitude control
<p>Yes, it supports Java 8 Embedded.</p> <p>Which is basically Java 8 SE without any graphical library support (because the RoboRIO is headless).</p> <p>Beta teams using Java are currently using Java 8 Embedded for ARM as the JVM on the RoboRIO, and Eclipse as the IDE, with WPILib-provided plugins for project development.</p>
2933
2014-05-16T17:10:56.370
|microcontroller|
<p>My FRC team will soon upgrade from <a href="http://en.wikipedia.org/wiki/CompactRIO" rel="nofollow">CompactRIO</a> to RoboRIO.</p> <p>CompactRIO only supports up to Java 1.4. What version of Java does the RoboRIO support?</p>
Does RoboRIO support Java 8?
<p>So imagine this scenario:</p> <p>You are driving your car towards a red stop light. As a human, you would naturally adjust your speed so that you smoothly stop in front of the stop line.</p> <p>Here is how a PID controller would do it:</p> <p><strong>P:</strong> The proportional part of the controller would measure the exact distance from the car to the stop light, and drive slower and slower as it approaches the light. The P-controller will drive super slowly when getting close to the light, eventually stopping without actually reaching it.</p> <p><strong>I:</strong> The integral part of the controller would go fast towards the light, pass it, realise it had gone too far, drive back, drive back too far, then go forwards again, drive past the light again, and so on. The I-controller might, in fact, never stop <em>on</em> the line.</p> <p><strong>D:</strong> The derivative part of the controller would know exactly when to start braking, so that it does not brake too early - like the P-controller - nor drive too far - like the I-controller. Instead, it would start braking exactly when it needs to, giving a firm but smooth stop in front of the stop light.</p> <p>A properly tuned <strong>PID</strong> would combine these characteristics: the <strong>P</strong> would measure the exact distance to the light, and adjust the initial speed according to this distance. The <strong>I</strong> would be more aggressive, and enforce that no braking is done until the stop light is close, while the <strong>D</strong> would take over braking when it realises the car is going way too fast towards the stop light, and brake firmly.</p> <p>Depending on the weighting of Kp, Ki and Kd, these characteristics will be more or less visible in the final system. Of course, the example contains some simplifications. For a general tuning guide, have a look at the <a href="http://en.wikipedia.org/wiki/Ziegler%E2%80%93Nichols_method" rel="nofollow">Ziegler-Nichols method</a>; the first paragraph in this wiki pretty much explains it all. Note that this tuning method is merely a guide, and that you will most likely have to manually fine-tune the system afterwards.</p> <p>Hope that helps!</p>
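<p>Since the answer mentions Ziegler-Nichols: the classic rule turns the ultimate gain $K_u$ and oscillation period $T_u$ (measured at the stability limit) into starting gains. A quick Python sketch of that arithmetic (the example numbers are made up):</p>

<pre><code>def ziegler_nichols_pid(ku, tu):
    """Classic Ziegler-Nichols PID gains from the ultimate gain Ku
    and the oscillation period Tu at the stability limit."""
    kp = 0.6 * ku
    ki = 2.0 * kp / tu   # integral gain, from Ti = Tu / 2
    kd = kp * tu / 8.0   # derivative gain, from Td = Tu / 8
    return kp, ki, kd

print(ziegler_nichols_pid(ku=4.0, tu=0.5))
</code></pre>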
2935
2014-05-16T23:06:36.733
|control|
<p>I am trying to understand how a PID controller moves the poles and zeros of a transfer function. I've been playing a bit with it, but I am not able to see any kind of connection.</p> <p>I mean that P and I raise the overshoot, which would mean that the damping ratio gets smaller, so the poles should move away from the real axis.</p> <p>D should do the opposite, but that doesn't seem to be true with the examples I've used. Am I doing something wrong?</p> <p>I mainly just want a general understanding of how the terms affect second-order systems.</p>
How does PID affect the root locus of a closed-loop transfer function?
<p>Distributed cooperative coverage algorithms for robots are an area of active research. I suggest looking at some academic papers. Here are a few to get you started:</p> <p><a href="http://cdn.intechopen.com/pdfs-wm/12637.pdf" rel="nofollow">Multirobot Cooperative Model applied to Coverage of Unknown Regions</a></p> <p><a href="http://www.ri.cmu.edu/pub_files/pub2/butler_zack_2000_2/butler_zack_2000_2.pdf" rel="nofollow">Cooperative Coverage of Rectilinear Environments</a></p>
2936
2014-05-17T11:58:48.997
|mobile-robot|sensors|coverage|
<p>I'm currently working on a school project about simulating robots scanning an area, but it has been a struggle to find a strategy the robots should use. Here are the details:</p> <p>I am given a certain number of robots, each with a sensing range of $r$. They spawn one after another. Their task is to scan a rectangular area. They can only communicate with each other when they are within communication range.</p> <p>I am looking for the best (i.e. most time-efficient) strategy for this. Any reply or clue about the strategy will be appreciated.</p>
Best strategy for area scanning using little sensing bots
<p>What you seem to be concerned about is the well-known nonminimum-phase effect of the zero $s_0=0.05548/0.311$ in the right half of the s-plane.</p> <p>You should first ask yourself whether this lag has a physical interpretation in the context of your system. Many physical systems exhibit such dynamics, but here it might also depend on how you carried out the identification.</p> <p>Then, to properly account for such a lag - if it really exists and is not introduced by the identification process - you should make use of proper techniques. For example, look <a href="https://www.google.com/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=1&amp;cad=rja&amp;uact=8&amp;ved=0CB8QFjAA&amp;url=http%3A%2F%2Fdigitool.library.colostate.edu%2Fwebclient%2FDeliveryManager%3Fpid%3D88646&amp;ei=OzwDVPjXI4aN7Abl2IDQCA&amp;usg=AFQjCNHDwLUAFgKeq1qyaPIbjuMP9o35eQ&amp;sig2=F-5vNha3Bgg9Emlu8MryPg&amp;bvm=bv.74115972,d.ZGU" rel="nofollow">here</a>.</p>
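<p>You can see the undershoot directly from the identified model; a quick sketch, assuming the python-control package (not mentioned in the question, so this is an assumption):</p>

<pre><code>import control

# The identified plant; note the right-half-plane zero at
# s = 0.05548 / 0.311, roughly 0.178
plant = control.tf([-0.311, 0.05548], [1.0, 0.06882, 0.0007626])

t, y = control.step_response(plant)
print("dips below zero first:", min(y) &lt; 0.0)  # True for this plant
</code></pre>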
2939
2014-05-18T13:58:47.247
|pid|
<p>I identified my system and now I am trying to tune a PI regulator, since I think I do not need D.</p> <p>I came across this graph while working in MATLAB, and I do not know what it means.</p> <p>I am using pidtune() to get my P and I values. (I think the computation is all correct; I made a model in Simulink to confirm.) Anyway, see my picture - the arrow is pointing at what I do not understand. Why is my system going below zero first?</p> <p>It is supposed to be a water flow regulator.</p> <p><img src="https://i.stack.imgur.com/36usV.png" alt="enter image description here"></p> <p>Transfer function: $$ \frac{-0.311s + 0.05548}{s^2 + 0.06882s + 0.0007626}$$</p> <p>Continuous-time PI controller in parallel form: $$K_p + K_i * \frac{1}{s}$$</p> <p>With $K_p = 0.256$, $K_i = 0.000342$</p>
What does this "inverse" peak mean? (step function)
<p>With LQR you can either decrease $Q$ or increase $R$. If that's not working, you have to consider the possibility that your motor just will not work.</p> <p>It also might be worth looking at different controller design techniques. If I remember correctly, LQR minimizes a quadratic cost on state error and control effort (a kind of total energy), which usually means a large initial correction followed by a gradual slope back to zero for the control signal (as opposed to lots of oscillation). There are other techniques that can focus on the maximum peak instead of the total energy (e.g. generic optimal control or $\mathcal{H}_2$ robust-control techniques, which MATLAB has commands for), but they tend to be difficult to understand and use without a lot of study.</p> <p>Aside from changing the motors, you're left with studying the other optimal controllers or just playing around with PID, feedforward, and/or pole-zero shaping to see if you can get a better response.</p>
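<p>As a concrete starting point, here is a sketch using the python-control package (which the question does not mention, so this is an assumption; the double-integrator model and the weights are placeholders for your pendulum's matrices):</p>

<pre><code>import numpy as np
import control

# Placeholder double-integrator model; substitute your pendulum's A, B
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

Q = np.diag([1.0, 0.1])   # weight on state error
R = np.array([[10.0]])    # larger R penalizes actuator effort more

K, S, E = control.lqr(A, B, Q, R)
print("gain:", K, "closed-loop poles:", E)
</code></pre>

<p>Raising R (or scaling Q down) trades tracking performance for smaller control signals, which is the knob to turn for weak motors.</p>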
2940
2014-05-18T19:39:04.547
|control|design|
<p>I am trying to implement a controller for an inverted pendulum using LQR (with MATLAB command lqr(A,B,Q,R)). The problem is that the motors are relatively weak, so I tried to increase R, but simulations show that the effort is still very high. How can I reduce the effort?</p>
LQR design with low effort
<p>Since the robot is unlikely to visit all the locations, you have a sparse data set that you can most simply store in a hash table. You would need a good way of hashing the x and y locations together into a single key - a function hash(x, y) - and feed that into the hash table.</p>
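<p>In Python, for example, a dictionary keyed on the (x, y) tuple already does the hashing for you; a minimal sketch:</p>

<pre><code># Sparse map: only cells the robot has actually visited use memory
world = {}

def record(x, y, **attributes):
    """Store or update sensor readings for cell (x, y)."""
    world.setdefault((x, y), {}).update(attributes)

def lookup(x, y):
    """Return the cell's data, or None if the cell was never visited."""
    return world.get((x, y))

record(10, 42, temperature=21.5, light=0.8)
print(lookup(10, 42), lookup(0, 0))
</code></pre>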
2945
2014-05-19T01:48:30.073
|software|mapping|
<p>From what I understand, you can create a map using sensors and storing values in an array. For example, the array can have 0's for places not visited, and 1's for places visited, or 0's for open spaces and 1's for occupied spaces, etc. </p> <p>The map I'm thinking of making will have about 1000 x 2000 members. This means I need an array of 2 million members. That seems like a lot. If I have eight attributes about each member (like temperature, light level, sound level, etc.), then I have 16 million pieces of information to capture. </p> <p>Is this correct, or is there a better way to create a map for a robot using software?</p>
How does a robot efficiently store a map it makes?
<p>Find a project you really think is cool with a tutorial and then work towards that. For programming, a good way to learn is to beg, borrow and steal from others by combining their programs to make a hybrid of your own. This gives you something good to start with and you can see what the programmer did to make it work in the first place.</p>
2963
2014-05-22T03:04:14.977
|arduino|raspberry-pi|
<p>I am interested in learning how to build a dynamic quadcopter, and hope to be fairly proficient with Arduino/raspberry-pi. What resources and/or practices might you recommend? </p>
How do I learn about Arduino/raspberry-pi based robotics on my own?
<p>Let's try to give a comprehensive answer and stay on topic!</p> <p>As you said in your question, there is one PID per axis. This means 3 PIDs in total: roll, pitch, and yaw.</p> <p>Also, the motor input is a PWM high-level time, typically in the 1000-2000 µs range. There are 4 motor inputs: front, back, left, and right. One subtlety of the motors is that there is a value under which the motors stop completely, for instance 1100 µs.</p> <p>There is another variable not mentioned in the question: the throttle.</p> <p>The first step in my quadcopter project was to tune the throttle. My throttle values were between -100 and 100, so I applied the following formula to each motor: motor = 300 * throttle / 100 + 1550. The constant values are arbitrary and give us a motor value in the 1250-1850 range; other values of the same magnitude would work.</p> <p>Only then do we apply the PIDs using the motor mix formula. For instance, for my +4 quadcopter the formula is:</p> <ul> <li>Front = Throttle + PitchPID - YawPID</li> <li>Back = Throttle - PitchPID - YawPID</li> <li>Left = Throttle + RollPID + YawPID</li> <li>Right = Throttle - RollPID + YawPID</li> </ul> <p>The PID output has no unit. Choosing the right P, I, and D constants will give us values which can stabilise the quadcopter. One caveat is that depending on the PID values the motor input can exceed the bounds: it could go over 2000 µs or under 1100 µs (which is our example motor cut-off value). In order to avoid such a situation we can look at the highest and the lowest of our motor inputs and add a constant to get all the motors within bounds.</p> <p>For instance, if the motor mix formula gave us:</p> <ul> <li>Front = 1900</li> <li>Back = 1800</li> <li>Left = 2100</li> <li>Right = 1700</li> </ul> <p>We would remove 100 from each motor and get:</p> <ul> <li>Front = 1800</li> <li>Back = 1700</li> <li>Left = 2000</li> <li>Right = 1600</li> </ul> <p>This would give us motor input values within bounds and maintain the differences between motors.</p> <p>As a side note, you said in a comment that the thrust vs PWM input relationship is not linear. This is true, but PIDs can drive non-linear systems with more or less success. For quadcopters the consensus is that it works reasonably well :)</p> <p>If anything was unclear please let me know!</p> <p>Cheers, Marc.</p>
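<p>Putting the mixing and the bounds correction together, a minimal Python sketch for the +4 layout above (the limits are the example values from this answer, and the helper name is my own):</p>

<pre><code>MOTOR_MIN, MOTOR_MAX = 1100, 2000  # microseconds

def mix(throttle, pitch_pid, roll_pid, yaw_pid):
    motors = {
        "front": throttle + pitch_pid - yaw_pid,
        "back":  throttle - pitch_pid - yaw_pid,
        "left":  throttle + roll_pid + yaw_pid,
        "right": throttle - roll_pid + yaw_pid,
    }
    # Shift every motor by the same constant so none leaves the valid
    # range, preserving the differences the PIDs asked for. (If the
    # spread exceeds MAX - MIN, no shift can fix it.)
    over = max(motors.values()) - MOTOR_MAX
    under = MOTOR_MIN - min(motors.values())
    if over &gt; 0:
        motors = {k: v - over for k, v in motors.items()}
    elif under &gt; 0:
        motors = {k: v + under for k, v in motors.items()}
    return motors

# Reproduces the worked example above: 1900/1800/2100/1700
# becomes 1800/1700/2000/1600 after the -100 shift
print(mix(1875, pitch_pid=50, roll_pid=200, yaw_pid=25))
</code></pre>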
2964
2014-05-22T06:36:03.223
|quadcopter|pid|
<p>I'm trying to develop a control system for a quadcopter, and one of my options is to use a PID controller (which I think is the most used method).</p> <p>From what I've read, the common strategy is to run a PID algorithm for each axis. My question is: how is the PID output converted to PWM signals?</p> <p>I'm asking because the three axes and the four rotors depend on each other. I mean, if I slow down a pair of rotors opposite to each other, then the quadcopter will move down along the vertical axis. But if I speed one of them up and slow down the other, the quadcopter will rotate about a different axis.</p> <p>So we cannot isolate each axis and associate it with a single rotor or a pair of rotors. And because of that, how can a PID output (which is associated with an axis) be converted to PWM signals for the rotors?</p> <p>Is this a MIMO system?</p>
Quadcopter PID output
<p>Two comments:</p> <p>First, as many people said in answer to your other question, you need to understand that your transfer function is not the real transfer function of the motor. Real motors have nonlinear and other unmodelled effects that can cause problems in certain cases.</p> <p>Second, you need to consider the entire system, not just a single subsystem. Stability is a property of the combination of motor + controller, so a poorly designed/tuned controller connected to a stable motor can easily make the whole system unstable.</p>
2972
2014-05-23T07:52:27.187
|motor|control|
<p>I am having a hard time grasping the concept of a DC motor with a load being unstable, yet stable due to a controller.</p> <p>My confusion appears as I am trying to design a controller for one using the Z-N method, and the transfer function I've identified using MATLAB tells me that my DC motor will always be stable.</p> <p>Which makes sense, since feeding it a constant voltage will lead to a constant velocity.</p> <p>But to use the Z-N approach the system has to be able to become unstable, and since this isn't possible I am getting confused about whether the motor I am designing a controller for can become unstable at all.</p> <p>The question, put simply: how can a controller make a motor unstable, if the motor itself cannot be (according to its pole-zero plot)?</p>
Transfer function of DC motor being unstable due to a controller?
<p>Just to be clear, my answers assume you are asking about NXT, <strong>not</strong> the more recent EV3. Also I couldn't find information on how RobotC works on NXT – if it loads native code straight to memory or generates RXE files to run on the NXT VM – so I'll cover both scenarios.</p> <p>According to the <a href="http://www.legolab.daimi.au.dk/DigitalControl.dir/LEGO%20MINDSTORMS%20NXT%20Hardware%20Developer%20Kit/LEGO%20MINDSTORMS%20NXT%20Hardware%20Developer%20Kit.pdf" rel="nofollow">NXT Hardware Developer Kit</a>, the NXT brick's main processor is connected to 64KB RAM. That is the amount of memory available for <a href="http://www.cs.tau.ac.il/~stoledo/lego/nxt-native/" rel="nofollow">bare metal programming</a>, that is if you load a native program into the brick, bypassing the original firmware. Of course in this case your program will also have to include drivers to control robot sensors, motors and so on.</p> <p>If however your program runs on top of the NXT VM, the <a href="http://cache.lego.com/upload/contentTemplating/Mindstorms2SupportFilesDownloads/otherfiles/download49E7B34DE74049D6BC872D3A0FB2A1F6.pdf" rel="nofollow">Executable File Specification</a> says that <em>"[w]hen the VM runs a program, it reads the encoded .RXE file format from flash memory and initializes a 32KB pool of RAM reserved for use by user programs. The .RXE file specifies the layout and default content of this pool. (...) Non-volatile sub-components remain in the file, which the VM can refer to at any time during program execution. The code space is an example of a non-volatile sub-component. The bytecodes never change at run-time, so these bytecodes remain entirely in flash memory."</em></p> <p>In other words, RXE programs have a 32KB limit for data structures, but bytecode instructions are not included in this total – they are executed directly from Flash memory, which has a size of 256KB (minus the size of the NXT VM – unfortunately I couldn't find a number for this).</p> <p>As for processing speed, the NXT HDK says the main processor runs at 48MHz. How this translates to actual speed again depends on whether RobotC programs run natively or on top of the VM. The <a href="http://lejos-osek.sourceforge.net/" rel="nofollow">nxtOSEK</a> project <a href="http://lejos-osek.sourceforge.net/whatislejososek.htm" rel="nofollow">reports</a> their test program achieves a maximum speed of 1,864,000 loop iterations per minute.</p> <p>But at any rate, I'd bet peripheral polling, not processing speed, will be your main speed-limiting factor. <a href="http://www.cs.tau.ac.il/~stoledo/lego/nxt-native/" rel="nofollow">According to Sivan Toledo</a>, <em>"[t]he Interval Timer of the AT91SAM7S256 is used to generate an interrupt every 1 ms. This interrupt is used for time keeping and for running all the periodic tasks of the system."</em> So every 1ms your program stops while the NXT drivers talk to sensors and motors.</p> <p>So to summarize:</p> <ol> <li>RXE programs running on top of the VM are limited to 32KB RAM for data structures, whereas native programs will have 64KB to share between data and code;</li> <li>RXE programs run straight from Flash, so their code size is limited by how much flash storage is left after the firmware. For native programs see the previous answer;</li> <li>The brick's main processor runs at 48MHz, but peripheral interrupts occur once every 1ms, so even native code will probably run somewhat slower than that.</li> </ol> <p>I hope this helps.</p>
2982
2014-05-24T20:00:52.270
|nxt|robotc|
<p>I'm writing a C code generator geared toward RobotC and complex tasks for an FTC team, and was wondering about some performance and storage concerns:</p> <ol> <li>How much memory is available for my program's data? It'll be mostly pre-defined lookup tables, generally in the form of multidimensional arrays.</li> <li>How much NXT memory is available for my program itself? As in, roughly how much code can I expect to fit into a single RobotC compiled program?</li> <li>How quickly do programs execute, generally? Looking at the disassembly, most of my generated lines correspond to 2-4 opcodes.</li> </ol> <p>Based on these, I'm trying to make a decision between precomputation and runtime pathfinding.</p> <p>I'm using NXT/Tetrix. My major interest at this point with these questions is pathfinding. I plan to have a 64x64 grid and be running Dijkstra's/A* algorithm with a heuristic function that assigns a penalty to turns and is as close to consistent as possible (not sure if consistency/monotonicity is doable with the turn penalty).</p> <p>Roughly 8 paths would be cached if I decide to use the pre-cached lookup tables.</p> <p>Instead of a set, I'll probably use a boolean array for the set of nodes visited. The fact that I'm working with a square layout will allow me to use a 2D array for the map needed to reconstruct the path.</p> <p>I'd love some feedback and answers to my question if anyone has any. Thanks!</p>
Performance/memory considerations for pathfinding lookup tables on RobotC for a small set of paths
<p>Yes. I am a technician who works on stabilized camera gimbals for drones, and we use Hall effect sensors as encoders on our camera gimbals. You must mount the Hall effect sensor IC facing toward the motor poles, on the rear side of the motor, as close as possible without touching. It will detect the magnetic flux as the motor shaft rotates. This works very well for DC brushless motors; I cannot guarantee that it will work as well for any other type of motor.</p>
2987
2014-05-25T11:41:37.727
|brushless-motor|encoding|hall-sensor|
<p>I have upgraded the motors in my robotic arm to sensored, brushless RC car motors. The hope was to reuse the Hall sensors to double as a rotary encoder, by tapping 2 Hall sensors and treating the 2 bits as a quadrature signal (a crude quadrature, since 2 of the 4 states will be longer than the other 2).</p> <p>This works when none of the motor phases are powered and I just rotate the motor manually. But once the stator coils are energized, the encoder no longer counts correctly: when running at low power, the counting is correct, but when running under high power, the count is monotonic (only increases or decreases) no matter if I run in reverse or forward.</p> <p>I'm almost certain this is because of the stator coils overpowering the permanent magnets on the rotor. So is there still a way to use the Hall sensors as an encoder?</p> <p>Sorry if this is an obvious question. I'd love to research this problem more if I had more time.</p> <p><strong>Update:</strong> I've measured the waveforms with my DSO Quad and see the expected 120-degree-separated signals (the measurement for phase C gets more inaccurate over time because I only had 2 probes, so I measured phases A &amp; B first, then A &amp; C, and then merged them).</p> <p>When ESC speed is 0.1: <img src="https://i.stack.imgur.com/tXBod.png" alt="speed = 0.1"></p> <p>When ESC speed is 0.3: <img src="https://i.stack.imgur.com/4D4Na.png" alt="speed = 0.3"></p> <p>Previously, I was using a hardware quadrature counter (EQEP module on a BeagleBone). At speed=0.3, this was counting backwards no matter whether I ran forward or in reverse!</p> <p>I then implemented quadrature counting on an LPC1114FN28 microcontroller. The result was still bad at high speeds (the count didn't change at all). The logic was:</p> <pre><code>void HandleGPIOInterrupt()
{
    const uint8_t allowableTransitions[4][2] = {{1, 2}, {3, 0}, {0, 3}, {2, 1}};
    static int prevState = -1;

    int state = phaseA | (phaseB * 2);
    if (prevState != -1)
    {
        if (allowableTransitions[prevState][0] == state)
        {
            ++rotations;
        }
        else if (allowableTransitions[prevState][1] == state)
        {
            --rotations;
        }
    }
    prevState = state;
}
</code></pre> <p>Then I got the idea to change the code to not update prevState until an expected state happens (to deal with glitches):</p> <pre><code>    int state = phaseA | (phaseB * 2);
    if (prevState != -1)
    {
        if (allowableTransitions[prevState][0] == state)
        {
            ++rotations;
            prevState = state;
        }
        else if (allowableTransitions[prevState][1] == state)
        {
            --rotations;
            prevState = state;
        }
        else
        {
            // assume transition was a glitch
        }
    }
    else
        prevState = state;
</code></pre> <p>Now the counting is finally correct in both directions, even at speeds higher than 0.3! But are there really glitches causing this? I don't see any in the waveforms.</p>
Can I reuse the hall sensors in a brushless motor as an encoder?
<p>a couple of years ago I tried Julius and it worked quite well. It is documented here, hope it helps: <a href="http://achuwilson.wordpress.com/2012/01/11/speech-recogition-using-julius-in-linux/" rel="nofollow">http://achuwilson.wordpress.com/2012/01/11/speech-recogition-using-julius-in-linux/</a></p> <p>and</p> <p><a href="http://achuwilson.wordpress.com/2012/01/11/chippu-speech-recognition/" rel="nofollow">http://achuwilson.wordpress.com/2012/01/11/chippu-speech-recognition/</a></p>
2988
2014-05-26T03:39:44.167
|mobile-robot|linux|speech-processing|digital-audio|
<p>I wanted to present a voice-controlled robot in my lab's upcoming demo contest. My robot is essentially an x86 Ubuntu notebook resting on top of a two-wheeled platform, so in principle any solution available on Linux would do.</p> <p>I looked into <a href="http://julius.sourceforge.jp/en_index.php" rel="nofollow">Julius</a>, but it seems the only comprehensive acoustic model available for it is aimed at the Japanese language – which coincidentally I <em>can</em> speak a little, but apparently not clearly enough to produce anything beyond garbled text. I also tried the <a href="https://gist.github.com/alotaiba/1730160" rel="nofollow">Google Speech API</a>, which has a decent selection of languages and worked very well, but requires Internet access. Finally there is <a href="http://cmusphinx.sourceforge.net/" rel="nofollow">CMU Sphinx</a>, which I haven't yet tested, but I'm afraid it might have a problem with my accent (I'm a native Brazilian Portuguese speaker, and apparently there is no such acoustic model available for it).</p> <p>Is that all there is to it? Have I missed any additional options? As you may have guessed, my main requirement is support for my native language (Brazilian Portuguese), or failing that, good performance for English spoken with foreign accents. A C++ API is highly desirable, but I can do with a shell interface.</p>
Voice control solution for Linux robot?
<p>Yes this is correct, given two assumptions:</p> <ol> <li><p>Each measurement is independent (i.e., the (Gaussian) distribution of observation $z_i$ is uncorrelated with $z_j$). Usually this is a fair assumption (e.g., measuring the position of landmarks with a laser scanner).</p></li> <li><p>Data association is known. In other words, you "just knew" that your first observation was in fact an observation of landmark 1. Therefore you can just compute the innovation with the predicted observation generated by landmark 1. Not knowing what landmark the observation belongs to is where the double loop comes in. In that case, you need to compare the observation with the predicted observations of all* the other landmarks, and choose the one that is most likely**, using a metric such as the Mahalanobis distance.</p></li> </ol> <p>*You can probably speed this up by only comparing it to landmarks that are estimated to be in the field of view of the sensor.</p> <p>**This is just one method of data association. Others (e.g., joint compatibility) exist.</p>
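<p>For the unknown-correspondence case, the inner association loop typically looks like this (a Python sketch; the function name is my own, z_hat and S are the predicted measurement and innovation covariance from the EKF equations, and the gate value is a standard chi-square threshold):</p>

<pre><code>import numpy as np

def associate(z, predicted, gate=9.21):
    """Pick the landmark whose prediction best matches observation z.

    predicted: list of (z_hat, S) pairs, one per landmark, where S is
    that landmark's innovation covariance. gate=9.21 is the chi-square
    threshold for ~99% confidence with a 2-DOF measurement.
    """
    best, best_d2 = None, gate
    for idx, (z_hat, S) in enumerate(predicted):
        nu = z - z_hat                       # innovation
        d2 = nu.T @ np.linalg.inv(S) @ nu    # squared Mahalanobis distance
        if d2 &lt; best_d2:
            best, best_d2 = idx, d2
    return best  # None means the observation matched no landmark
</code></pre>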
2990
2014-05-26T06:33:18.090
|sensors|localization|ekf|
<p>Let's say we have a bunch of observations $z^{i}$ from a sensor, and we have a map in which we can get the predicted measurements $\hat{z}^{i}$ for landmarks. In the correction step of EKF localization, should we compare each observation $z^{i}$ with the entire set of predicted measurements $\hat{z}^{i}$ (so we have two loops)? Or do we just compare each observation with its corresponding predicted measurement (so we have one loop)? I assume the sensor can give the observations of all landmarks on every scan. The following picture depicts the scenario. Every time I execute the EKF localization I get $z^{i} = \{ z^{1}, z^{2}, z^{3}, z^{4}\}$ and I have $m$, so I can get $\hat{z}^{i} = \{ \hat{z}^{1}, \hat{z}^{2}, \hat{z}^{3}, \hat{z}^{4}\}$. To compute the innovation, this is what I did $$ Z^{1} = z^{1} - \hat{z}^{1} \\ Z^{2} = z^{2} - \hat{z}^{2} \\ Z^{3} = z^{3} - \hat{z}^{3} \\ Z^{4} = z^{4} - \hat{z}^{4} \\ $$ where $Z$ is the innovation. For each iteration I get four innovations. Is this correct? I'm using the EKF localization from the book <a href="http://rads.stackoverflow.com/amzn/click/0262201623" rel="noreferrer">Probabilistic Robotics</a>, page 204. </p> <p><img src="https://i.stack.imgur.com/Ht7Yw.jpg" alt=""> </p>
innovation step ekf localization?
<p>2 kg is not very much, so I would probably consider arms that are slightly less <em>industrial</em> than Kuka, ABB, Fanuc, Denso and the like. However, 1.6 m is pretty long, and that may be hard to find in non-industrial arms.</p> <p>Keep in mind there are many other factors to consider when choosing a robot arm: human safety, accuracy, repeatability, speed, workspace envelope. </p> <p>For example, for relatively planar tasks, low payload, and very fast speeds, a delta configuration robot is best. For larger payloads, maybe a SCARA configuration is better. For a super large workspace and planar tasks, perhaps a gantry is best. The point being, there is more out there than <em>typical</em> robot arms with spherical workspaces.</p> <p>Some of the arms listed below just won't survive the number of cycles industrial arms are typically driven to. The arms listed below are slightly more human-safe than standard industrial arms, but probably less accurate.</p> <p>All that being said, here are some more arms to consider:</p> <p>As Pikey mentioned, Universal Robots (<a href="http://www.universal-robots.com/" rel="noreferrer">http://www.universal-robots.com/</a>) has some arms that might work.</p> <p>You should also consider the WAM arm from Barrett (<a href="http://www.barrett.com/robot/products-arm.htm" rel="noreferrer">http://www.barrett.com/robot/products-arm.htm</a>). It has been around a long time, and has a longer reach than many other arms. </p> <p>A lighter-duty arm which is very human-safe is the Kinova arm: <a href="http://kinovarobotics.com/" rel="noreferrer">http://kinovarobotics.com/</a></p> <p>Another that I have seen, but never touched, is the Schunk arm: (<a href="http://mobile.schunk-microsite.com/en/produkte/produkte/dextrous-lightweight-arm-lwa-4d.html" rel="noreferrer">http://mobile.schunk-microsite.com/en/produkte/produkte/dextrous-lightweight-arm-lwa-4d.html</a>)</p> <p>The HDT arm might be overkill: (<a href="http://www.hdtglobal.com/products/robotics/Adroit_Manipulator_Arm/" rel="noreferrer">http://www.hdtglobal.com/products/robotics/Adroit_Manipulator_Arm/</a>)</p> <p>And of course Baxter for very light industrial use: <a href="http://www.rethinkrobotics.com/products/baxter/" rel="noreferrer">http://www.rethinkrobotics.com/products/baxter/</a></p>
3007
2014-05-28T11:39:24.533
|robotic-arm|mechanism|
<p>I had the opportunity to work for a factory/company that is in the domain of production, and they want to use a robotic arm for part of the production line.</p> <p>Basically, they want a robotic arm with a payload of about 2 kg or more and an arm length of more than 1600 mm.</p> <p>I have researched a few companies like Kuka.com, but I am not sure what I should be looking for when making suggestions and researching this.</p> <p>Are there any suggestions you can give me, such as a few good points to be careful about with robotic arms? Any innovative companies out there I should consider? How is an installation done, and should I find a supplier for it, etc.? Please enlighten me. </p>
Industrial Robotic Arm
<p>I would guess that the real "interference" you'd see would be magnetic interference. Motors draw quite a bit of current, and that will affect your compass reading.<br> <a href="http://onlinephys.com/magnetism.html" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PgB1N.jpg" alt="Compasses affected by electric current"></a></p> <p>Of course, the same applies to the ESCs that feed the motors -- the supply of current from the battery to the ESCs should also be kept away from the flight controller (which presumably has a compass).</p>
3013
2014-05-29T15:30:22.680
|brushless-motor|multi-rotor|esc|
<p>I have built several quadcopters, hexacopters, and octacopters. Between the flight controller (I use 3DR APM2.6 or Pixhawk) and the motors I use heavy duty power wires as well as a servo-style cable carrying a PWM control signal for the ESC. Three short heavy-duty wires then connect the motor to the ESC, one for each phase.</p> <p>Several times I've heard or read people saying that the electronic speed controllers (ESCs) should be mounted far away from the flight controller (FMU seems to be the abbreviation en vogue) and close to the motors. I think the idea is that this cuts down on interference (I'm not sure what sort) that could be emitted by the long ESC -> motor wires that would be required if you have the ESCs all at the center of the aircraft. Another consideration is that ESCs can be cooled by propellers if they are right under the rotor wash, as mine usually are.</p> <p>So, I've always mounted ESCs close to motors, but realized that design could be much simpler if ESCs are mounted centrally. So, my question is: what are the pros and cons of mounting ESCs close to the motor versus close to the FMU?</p>
Does it matter if my electronic speed controllers are close to my brushless motors?
<p>It depends on the project, but generally you should choose the language you are most comfortable with. C/C++ and Python have the most tools and libraries available. However, I'd choose Ada any day of the week. It's an elegant and simple yet very advanced language with very strong typing. The latest 2014 GPL version has recently been released to the public. There are not many Ada users left, but the support communities are great. To sum it up, try the popular languages first and then perhaps try Ada. Links to visit if you want more information on Ada or if you have questions: </p> <ol> <li><a href="http://www.reddit.com/r/ada/" rel="nofollow noreferrer">Ada on Reddit</a> </li> <li><a href="https://groups.google.com/forum/m/#!forum/comp.lang.ada" rel="nofollow noreferrer">Comp.lang.ada</a> </li> <li><a href="https://stackoverflow.com/questions/tagged/ada">Ada on StackOverflow</a></li> <li><a href="http://www.seas.gwu.edu/~mfeldman/ada-project-summary.html" rel="nofollow noreferrer">Who uses Ada?</a></li> <li><a href="http://www.pegasoft.ca/resources/boblap/book.html" rel="nofollow noreferrer">The Big Online Book of Linux Ada Programming</a></li> <li><a href="http://rads.stackoverflow.com/amzn/click/0521197163" rel="nofollow noreferrer">Building Parallel, Embedded, and Real-Time Applications with Ada</a></li> </ol> <p>Also check out the <a href="http://www.ros.org/" rel="nofollow noreferrer">Robot Operating System</a>, which has many modules and packages ready to use.</p>
3017
2014-05-30T01:06:12.457
|mobile-robot|wheeled-robot|programming-languages|
<p>In general, what is a good programming language for robotics? I am a starting robo nerd and don't know anyone who would know things like this.</p>
Programming language?
<p>I strongly suggest using a single sound card instead of multiple USB microphones or sound cards. For an application like sound localization, it is essential that the samples from the different microphones are recorded at exactly the same time. When using multiple devices, that's next to impossible: they will be slightly out of sync when the recording starts, and because their sample rates are not exactly the same, the time difference becomes substantial after a while. This can be counteracted using adaptive clock synchronization and adaptive-rate resampling (see alsaloop for an example), but you will never obtain the precision a single sound card can get you.</p> <p>So this means either using the stereo input of a standard sound card and dealing with the ambiguities this introduces, or using a (USB) sound card that has 4 (microphone) inputs. Also a tip on the microphones: get omnidirectional ones, as it makes the samples much easier to compare.</p> <p>I'm not aware of the kind of software you're looking for, but directly working with ALSA is not too difficult; ALSA comes with great examples, and audio data is relatively easy to work with compared to image data.</p>
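<p>As a minimal illustration of that last point, here is a sketch of capturing two time-aligned channels with alsa-lib. The device name, sample rate and buffer sizes are assumptions you would adapt to your own card:</p>

<pre><code>#include &lt;alsa/asoundlib.h&gt;

int main(void)
{
    snd_pcm_t *pcm;
    /* "hw:1,0" is a placeholder -- use the card that carries all microphones */
    if (snd_pcm_open(&amp;pcm, "hw:1,0", SND_PCM_STREAM_CAPTURE, 0) &lt; 0)
        return 1;
    /* 2 interleaved 16-bit channels at 48 kHz, ~100 ms internal latency */
    if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           2, 48000, 1, 100000) &lt; 0)
        return 1;

    short buf[2 * 1024];                      /* 1024 frames x 2 channels */
    for (;;) {
        snd_pcm_sframes_t n = snd_pcm_readi(pcm, buf, 1024);
        if (n &lt; 0) {
            snd_pcm_recover(pcm, n, 0);       /* recover from overruns */
            continue;
        }
        /* Sample i of channel c is buf[2*i + c]; because both channels come
           from one card they share a clock, so cross-correlating the two
           channels gives the inter-microphone delay directly. */
    }
}
</code></pre>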
3021
2014-05-30T06:25:12.813
|mobile-robot|digital-audio|linux|
<p>I want to give my Linux robot the ability to locate a sound source and drive towards it. I am reading a <a href="http://cdn.intechopen.com/pdfs-wm/10567.pdf" rel="nofollow">paper</a> on sound localization that seems to cover the theory well enough, but I'm at a loss as to how to implement it. Specifically I would like to know:</p> <ol> <li>How do I connect two microphones to a Linux PC?</li> <li>How do I record from two microphones simultaneously?</li> <li>Is there any library of sound processing algorithms (similar to how OpenCV is a library of computer vision algorithms) available for Linux?</li> </ol>
Directional hearing for Linux robot?
<p>You might want to start by checking out <a href="http://www.cs.utexas.edu/~aim/" rel="nofollow">AIM - Autonomous Intersection Management</a> by the University of Texas at Austin. It is a simple 2D simulator, but sounds like it has the features you need.</p> <p>Also, you might want to search for civil engineering traffic simulators. I'm sure there are a bunch of them. </p> <p>One more option, you can use something like <a href="http://cs.gmu.edu/~eclab/projects/mason/" rel="nofollow">MASON</a> a multi-agent simulation toolkit to code your own simulator with multiple intelligent agents.</p>
3026
2014-05-30T07:23:48.233
|simulator|gazebo|
<p>Hello, I want to simulate a busy urban road, similar to the <strong>DARPA Urban Challenge</strong>, for an autonomous <strong>self-driving car</strong>. I'm in search of simulators for that.</p> <p>I've looked at Gazebo, since its integration with ROS is easy, but editing world files, or indeed creating them, is difficult. In the <strong>TORCS</strong> simulator I have seen many world files, but not many sensors. I don't want much physics in my simulation. I want a lightweight simulator (for checking out path planning on an urban road) in which creating roads is easy. </p> <p>I've even searched for Gazebo SDF files resembling an urban city, but in vain.</p>
world files for simulating roads and tracks
<p>As is often the case in robotics, there is no universally agreed-upon approach to formation control (which is what you are trying to achieve). As explained in the introduction to <a href="https://www.researchgate.net/publication/3352651_Formation_feedback_control_for_multiple_spacecraft_via_virtual_structures" rel="nofollow noreferrer">this article</a>, there are roughly three basic approaches currently in use:</p> <ul> <li><em>Behaviour-based,</em> where each robot moves according to a set of local rules, and the formation dynamics results from the interactions between neighboring robots;</li> <li><em>Virtual structure,</em> where the robots are conceived to be parts of a single body (e.g. vertices of a polygon) and trajectories are calculated to conserve the shape of that abstract body;</li> <li><em>Leader-following,</em> where a designated &quot;leader&quot; robot follows a prescribed trajectory, and the &quot;followers&quot; track transformed versions of the leader current pose.</li> </ul> <p>Given your description of the intended scenario, I'd say leader-following would be the best approach. This could be achieved fairly easily in the following manner (see <a href="http://april.eecs.umich.edu/courses/eecs498_f09/wiki/images/6/6b/L02.LinAlgCoordinates.pdf" rel="nofollow noreferrer">this presentation</a> for references on terminology and formulas):</p> <ol> <li>Define the intended pose <span class="math-container">$p_f$</span> for the follower robot in the leader's reference frame – e.g. <span class="math-container">$p_f = [0 \quad 1 \quad 0]^T$</span> for keeping to the left, or <span class="math-container">$p_f = [0 \quad -1 \quad 0]^T$</span> for keeping to the right;</li> <li>Every once in a while calculate the follower's pose in the global reference frame <span class="math-container">$g_f = Tp_f$</span> and relay it to the follower robot;</li> <li>Have the follower robot adjust its position to comply with the received pose <span class="math-container">$g_f$</span>.</li> </ol> <p>Just for reference, the Rigid Body Transformation matrix T is given by:</p> <p><span class="math-container">$$ T = \begin{bmatrix} cos(\theta) &amp; -sin(\theta) &amp; x \\ sin(\theta) &amp; cos(\theta) &amp; y \\ 0 &amp; 0 &amp; 1 \end{bmatrix} $$</span></p> <p>Where <span class="math-container">$[x \quad y \quad \theta]^T$</span> is the leader robot's current pose.</p>
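<p>In code, step 2 amounts to just a few lines. A minimal C sketch, with illustrative names:</p>

<pre><code>#include &lt;math.h&gt;

/* g_f = T * p_f : goal pose of the follower in the global frame, given the
   leader pose (x, y, theta) and the offset (px, py) in the leader frame,
   e.g. px = 0, py = -1 to keep to the leader's right. */
void follower_goal(double x, double y, double theta, double px, double py,
                   double *gx, double *gy, double *gtheta)
{
    *gx = x + cos(theta) * px - sin(theta) * py;
    *gy = y + sin(theta) * px + cos(theta) * py;
    *gtheta = theta;   /* follower keeps the leader's orientation */
}
</code></pre>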
3035
2014-05-31T22:09:10.213
|kinematics|
<p>This is quite a basic question. I'm practising robot programming with V-REP. I have 2 K3 robots in the scene. One robot follows a predefined path. I want the second robot to move "in parallel" with the first one, so that they keep the same orientation and the same distance at all times. When there is a turn, I want the follower to slow down/accelerate a little to keep the formation parallel.</p> <p>In my implementation, I use wireless communication. The first robot periodically "tells" the second about its speed and orientation. The second uses these parameters to calculate the two speeds for its two wheels. But when I run it, it doesn't work: the orientation of the follower is wrong and the distance is not maintained. I am totally confused.</p> <p>I think this is quite a rudimentary task, and there must be some established practice to follow. Can somebody provide some ideas or references? That would be highly appreciated! </p>
How to make one robot follow the other in parallel formation
<p>If I understand correctly, you would like to understand how to control a two-wheeled differential drive robot so that you can achieve smooth/elegant driving while arriving at a desired final orientation; this is known as the parking problem. Ian presented an AI approach to the problem, which is interesting, but I would be remiss if I didn't interject with a control theoretic perspective.</p> <p>Because two-wheeled differential drive robots are a well-studied platform, we can determine a control law from a model of the dynamics. Differential drive robots can be modeled with unicycle dynamics of the form: $$\dot{z}=\left[\begin{matrix}\dot{x}\\ \dot{y} \\ \dot{\theta} \end{matrix}\right] = \left[\begin{matrix}cos(\theta)&amp;0\\sin(\theta)&amp;0\\0&amp;1\end{matrix}\right] \left[\begin{matrix}v\\\omega\end{matrix}\right],$$ where $x$ and $y$ are Cartesian coordinates of the robot, and $\theta \in (-\pi,\pi]$ is the angle between the heading and the $x$-axis. The input vector $\left[v, \omega \right]^T$ consists of linear and angular velocity inputs.</p> <p>The parking problem that you expressed interest in in your comment has been widely studied. <a href="http://www-personal.umich.edu/~jongjinp/papers/Park-icra-11.pdf" rel="nofollow">A Smooth Control Law for Graceful Motion of Differential Wheeled Mobile Robots in 2D Environment</a> presents one possible solution.</p> <p>Another solution to tracking a trajectory is to control a point, which is holonomic, some small distance $l$ away from the center of the two wheels, rather than controlling the unicycle robot directly. To do this, we can derive the following transformation matrix to transform the control law of the robot to the control law of the point: $$\dot{p}=\left[\begin{matrix}\dot{p_x}\\\dot{p_y}\end{matrix}\right]=\left[\begin{matrix}\text{cos}(\theta)&amp;-l\text{sin}(\theta)\\\text{sin}(\theta)&amp;l \text{cos}(\theta)\end{matrix}\right]\left[\begin{matrix}v\\\omega\end{matrix}\right]$$</p> <p>$\dot{p}$ is the velocity of the point being controlled, and it is decomposed into its $x$ and $y$ components. At this point, control is quite simple: simply control the point directly! Setting $$\dot{p}=u=r(t)$$ accomplishes this, where $u$ is the input, and $r(t)$ is the reference trajectory that you want; this will accomplish smooth movement along the trajectory.</p>
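<p>For completeness, here is a minimal C sketch of recovering $[v, \omega]^T$ from a desired point velocity by inverting the matrix above (names are illustrative; $l$ must be non-zero):</p>

<pre><code>#include &lt;math.h&gt;

/* Given the desired velocity (pdx, pdy) of the off-axle point p, recover the
   unicycle inputs v and w by inverting the 2x2 matrix above.  l is a small
   non-zero offset, e.g. a few centimetres ahead of the wheel axle. */
void point_to_unicycle(double theta, double l, double pdx, double pdy,
                       double *v, double *w)
{
    *v =   cos(theta) * pdx + sin(theta) * pdy;
    *w = (-sin(theta) * pdx + cos(theta) * pdy) / l;
}
</code></pre>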
3040
2014-06-01T23:34:43.803
|mobile-robot|inverse-kinematics|
<p>I have been reading about kinematic models for nonholonomic mobile robots such as <a href="http://en.wikipedia.org/wiki/Differential_wheeled_robot" rel="noreferrer">differential wheeled robots</a>. The texts I've found so far all give reasonably decent solutions for the forward kinematics problem; but when it comes to inverse kinematics, they weasel out of the question by arguing that for every possible destination pose there are either infinite solutions, or in cases such as $[0 \quad 1 \quad 0]^T$ (since the robot can't move sideways) none at all. Then they advocate a method for driving the robot based on a sequence of straight forward motions alternated with in-place turns.</p> <p>I find this solution hardly satisfactory. It seems inefficient and inelegant to cause the robot to do a full-stop at every turning point, when a smooth turning would be just as feasible. Also the assertion that some points are "unreachable" seems misleading; maybe there <em>are</em> poses a nonholonomic mobile robot can't reach by maintaining a single set of parameters for a finite time, but clearly, if we vary the parameters over time according to some procedure, and in the absence of obstacles, it <em>should</em> be able to reach any possible pose.</p> <p>So my question is: what is the inverse kinematics model for a 2-wheeled differential drive robot with shaft half-length $l$, two wheels of equal radii $r$ with adjustable velocities $v_L \ge 0$ and $v_R \ge 0$ (i.e. no in-place turns), and given that we want to minimize the number of changes to the velocities?</p>
"Smooth" inverse kinematics model for 2-wheeled differential drive robot
<p>Such "conditioning" was existing in the pre-syskit days, and it got removed as it made the state machines considerably more complex. Usually, what starts as a simple condition becomes quickly something a lot more involved.</p> <p>The best way is therefore to create a dedicated composition that would do the monitoring:</p> <pre><code>class PipelineFinder &lt; Syskit::Composition argument :expected_heading add PipelineDetector::Task, :as =&gt; 'detector' def heading_within_tolerance?(current, tolerance) # Test for tolerance end task_script do # Don't know the name of the relevant port, so I am making it up reader = detector.pipeline_state_port.reader wait detector.align_auv_event if (data = reader.read_new) &amp;&amp; heading_within_tolerance?(data.yaw, tolerance) emit :success else emit :false_positive end end end </code></pre> <p>Note that you could also attach the task script on the detector task in a separate action:</p> <pre><code>describe('runs the pipeline detector but filters against an expected direction'). required_arg('expected_heading') def detect_pipeline_with_validation(arguments = Hash.new) detector = pipeline_detector_def expected_heading = arguments[:expected_heading] detector.task_script do # same script than up there, except that expected_heading # is in this case a local variable end detector end </code></pre>
3049
2014-06-03T14:20:42.650
|rock|syskit|
<p>I am searching for a way that allows me to wait for some conditions on ports before applying a new state.</p> <p>My concrete problem: I want to make sure that my AUV aligns to the right pipeline. Therefore, before starting the pipeline tracking, I want to check the current system heading.</p> <p>My current state machine looks like this:</p> <pre><code>find_pipe_back = state target_move_def(:finish_when_reached =&gt; false, :heading =&gt; 1 ...)

pipe_detector = state pipeline_detector_def
pipe_detector.depends_on find_pipe_back, :role =&gt; "detector"

start(pipe_detector)
forward pipe_detector.align_auv_event, success_event
</code></pre> <p>Roughly, I am looking for a way to put a condition on the last forward.</p>
How to define conditions for state-machines in roby?
<p>From <a href="http://www.nethat.net/" rel="nofollow">netHAT</a>, you can find an EtherCAT shield for the Raspberry Pi:</p> <blockquote> <p>The netHAT module developed by Hilscher adds the RaspberryPi® RealTime-communication. This enables the RaspberryPi® to be used as a slave in a PROFINET network. But not only PROFINET is possible, all other common real-time Ethernet networks such as Ethernet / IP or EtherCAT could be supported. The protocol stacks are preconfigured to 32 input and output bytes, that can be exchanged with the PLC. Hilscher provides everything which is necessary for the communication with the PLC, device description files, firmware and application example. Just attach the module and start to communicate.</p> </blockquote>
3058
2014-06-05T09:11:56.587
|communication|
<p>I have a riddle about EtherCAT in mind and I'd like to have your point of view about it...</p> <p>With the rise of open platforms and hardware, and easily accessible embedded machines, it is now rather straightforward to install an RT system such as Xenomai on a Raspberry Pi, a BeagleBone Black, or whatever cheap platform you prefer...</p> <p>Now, to connect these, an RT bus would be really cool (e.g. EtherCAT...).</p> <p>Hence my question: every hobbyist faces the same problems with RT communication, so is there any good reason why no open EtherCAT shield exists for the Raspberry Pi or BeagleBone boards? It would solve so many problems...</p> <p>Any thoughts on why? Any idea?</p>
Why cannot we find EtherCAT shields?
<p>Here are a few possible points of consideration. Certainly the UKF has many counterpoints where it has an advantage too.</p> <p>The most obvious advantage is computation power. Don't forget that traditionally, these filters are implemented on embedded systems with very limited computational resources. Also, while I don't have much experience with UKFs myself, one significant advantage of EKFs is their relative ease of implementation. For many systems, the Jacobian is very easily derived analytically which makes the EKF implementation mostly straightforward.</p> <p>Another area of potential advantage is ease of tuning. I don't remember how many parameters are tunable in a UKF, but the EKF tuning parameters are already well understood because the basic Kalman Filter is ubiquitous so anybody thinking about using a UKF already knows how to tune an EKF.</p> <p>Finally, don't ignore momentum. If an existing system already has a working EKF, why go through all the work of implementing and testing a UKF?</p>
3063
2014-06-06T08:12:25.613
|mobile-robot|localization|kalman-filter|ekf|
<p>The Unscented Kalman Filter is a variant of the Extended Kalman Filter which uses a different linearization, relying on transforming a set of "sigma points" instead of a first-order Taylor series expansion.</p> <p>The UKF does not require computing Jacobians, can be used with discontinuous transformations, and is, most importantly, more accurate than the EKF for highly nonlinear transformations.</p> <p>The only disadvantage I found is that "the EKF is often slightly faster than the UKF" (Probabilistic Robotics). This seems negligible to me, and their asymptotic complexity seems to be the same.</p> <p>So why does everybody still seem to prefer the EKF over the UKF? Did I miss a big disadvantage of the UKF? </p>
Why should I still use EKF instead of UKF?
<p>You have 6 degrees of freedom: three translational and three rotational. You have linear movements along the x, y and z axes, and rotations around the x, y and z axes.</p>
3069
2014-06-06T21:49:37.827
|quadcopter|
<p>It might be kind of a stupid question, but how many degrees of freedom are there in a typical quadcopter? I see some saying 4 and some saying 6. The difference lies in translation along the other 2 axes (the horizontal ones). Strictly in terms of what you can directly tell the quadcopter to do, only 4 movements are possible, since you cannot apply a pure lateral force. But you can tilt to start a lateral movement, then level the body right after and, theoretically, let it drift along a horizontal axis. So, formally, how many degrees of freedom should I consider to exist?</p>
Quadcopter degrees of freedom
<p>The value $S^k$ (the innovation covariance) needs to be calculated for all landmarks, but the subsequent update steps (lines 17 to 19, after the argmax on line 16) are applied only once, to the whole state, using the landmark selected by that argmax. </p> <p>FYI, argmax searches over the list of landmarks for the landmark maximizing the given equation. It selects the index of the landmark which is <em>most likely</em> given the measurement value. Notice that the equation on line 16 is the multivariate Gaussian pdf evaluated at the measurement, with the predicted measurement for landmark $k$ as its mean, normalized by the measurement covariance $S^k$.</p> <p>There's a notation problem here: $\operatorname{arg\,max}$ should have a subscript denoting what the search is <em>over</em>. In this case, the search is over $k$, since $i$ is the current measurement we're trying to associate with each of the $k=1...$ possible landmarks.</p>
3073
2014-06-09T02:32:37.780
|localization|ekf|data-association|
<p>Given part of the following algorithm on page 217 of <a href="http://rads.stackoverflow.com/amzn/click/0262201623" rel="nofollow">Probabilistic Robotics</a>, this algorithm for EKF localization with unknown correspondences: </p> <p>9. for all observed features $z^{i} = [r^{i} \ \phi^{i} \ s^{i}]^{T} $</p> <p>10. &nbsp; &nbsp; for all landmarks $k$ in the map $m$ do</p> <p>11. &nbsp; &nbsp; &nbsp; &nbsp; $q = (m_{x} - \bar{\mu}_{x})^{2} + (m_{y} - \bar{\mu}_{y})^{2}$</p> <p>12. &nbsp; &nbsp; &nbsp; &nbsp; $\hat{z}^{k} = \begin{bmatrix} \sqrt{q} \\ atan2(m_{y} - \bar{\mu}_{y}, m_{x} - \bar{\mu}_{x} ) - \bar{\mu}_{\theta} \\ m_{s} \\ \end{bmatrix}$</p> <p>13. &nbsp; &nbsp; &nbsp; &nbsp; $ \hat{H}^{k} = \begin{bmatrix} h_{11} &amp; h_{12} &amp; h_{13} \\ h_{21} &amp; h_{22} &amp; h_{23} \\ h_{31} &amp; h_{32} &amp; h_{33} \\ \end{bmatrix} $</p> <p>14. &nbsp; &nbsp; &nbsp; &nbsp; $\hat{S}^{k} = H^{k} \bar{\Sigma} [H^{k}]^{T} + Q $</p> <p>15. &nbsp; &nbsp; endfor </p> <p>16. &nbsp; &nbsp; $ j(i) = \underset{k}{\operatorname{arg\,max}} \ \ det(2 \pi S^{k})^{-\frac{1}{2}} \exp\{-\frac{1}{2} (z^{i}-\hat{z}^{k})^{T}[S^{k}]^{-1} (z^{i}-\hat{z}^{k})\} $</p> <p>17. &nbsp; &nbsp; $K^{i} = \bar{\Sigma} [H^{j(i)}]^{T} [S^{j(i)}]^{-1}$</p> <p>18. &nbsp; &nbsp; $\bar{\mu} = \bar{\mu} + K^{i}(z^{i}-\hat{z}^{j(i)}) $</p> <p>19. &nbsp; &nbsp; $\bar{\Sigma} = (I - K^{i} H^{j(i)}) \bar{\Sigma} $</p> <p>20. endfor</p> <p>My question is: why does the second loop end at line 15? Shouldn't it end after line 19? I've checked the <a href="http://robots.stanford.edu/probabilistic-robotics/errata.html" rel="nofollow">errata</a> of this book, but there is nothing about this issue. </p>
Data association with ekf?
<p>Normally for a control system like this, one would have a couple of options:</p> <ul> <li>Run the code on a co-processor that feeds data to and from the PLC over serial</li> <li>Run the code on a server that sends data to and from the PLC over Ethernet or serial</li> <li>Run the code directly on the PLC, if it has a general purpose processor or an integrated FPGA, etc.</li> </ul> <p>Normally, reactors and other high-risk systems have a redundant architecture, so you will need either two (or more) co-processors or two (or more) servers running the code. But all this will be directed by the infrastructure that currently exists, how it is set up, and the computational requirements of the code. Before anything like this is used in the control loop of something as high-risk as a reactor, though, it will have to show some real improvement over the current setup, and be rigorously validated to ensure it works as expected in all situations.</p>
3079
2014-06-10T09:44:06.053
|sensors|control|
<p>How would one typically integrate a neural network into an online automation system?</p> <p>As an example, we have developed a neural network that predicts a difficult to measure variable within a reactor using multiple sensors. We then use this predicted variable to tell the automation system to, for example, increase/decrease the stirrer speed.</p> <p>How would someone implement this idea into a commercial system. Would they develop a function block that can simulate the neural network? Would they run a software on the server that reads and writes to the PLC control tags?</p>
Typical method for integrating a neural net into a PLC
<p>First, you will need to test the communication range. Make a test app to talk between the phone and arduino/bluetooth shield and then walk around to see how far away they can still communicate. If it's not far enough you can't do much else and need to find a different communication solution.</p> <p>After that, you will need to modify the source code from the open source projects you referenced as I don't believe they use the same setup as you. First, you will need to modify/replace the communication logic to use your bluetooth. Second, you won't be able to do autonomous flight with those sensors so you will need to strip out anything related to that from the code.</p> <p>Finally, you just need to define your own protocol for sending movement commands from the phone to the bluetooth shield.</p>
3080
2014-06-10T14:50:59.700
|arduino|quadcopter|
<p>I need help on how to go about building quadcopter software from scratch with the tools available to me. I don't have a transmitter radio, so the only way I can do remote control is using an Android phone with the ITead Studio Bluetooth shield that I was recently given. How can I use the existing open source software, i.e. AeroQuad or ArduCopter? The following are the parts that I have:</p> <ol> <li>Arduino Uno</li> <li>Bluetooth shield</li> <li>Four brushless motors</li> <li>Q450 frame</li> <li>Four Turnigy ESCs</li> <li>MPU6050</li> </ol>
Arduino Quadcopter using bluetooth shield and android phone
<p>You missed that autobuild can distinguish between push and pull URLs.</p> <p>See the documentation at <a href="http://rock-robotics.org/next/api/autobuild/Autobuild/Git.html" rel="nofollow">http://rock-robotics.org/next/api/autobuild/Autobuild/Git.html</a></p> <p>The Gitorious statement automatically sets the push_to URL; see here: <a href="http://rock-robotics.org/next/api/autoproj/Autoproj.html#method-c-gitorious_server_configuration" rel="nofollow">http://rock-robotics.org/next/api/autoproj/Autoproj.html#method-c-gitorious_server_configuration</a></p> <p>All you have to do is add:</p> <pre><code>- control/orogen/&lt;package&gt;:
    url: git://spacegit.dfki.uni-bremen.de/virgo/orogen-&lt;package&gt;.git
    push_to: git@spacegit.dfki.uni-bremen.de:virgo/orogen-&lt;package&gt;.git
</code></pre>
3091
2014-06-11T10:10:23.250
|rock|
<p>I want to overwrite the git source of a package in autoproj. That package is by default on gitorious and I forked it on spacegit to apply specific patches. According to the autoproj documentation (<a href="http://rock-robotics.org/stable/documentation/autoproj/customization.html" rel="nofollow">http://rock-robotics.org/stable/documentation/autoproj/customization.html</a>), I set the new repo in the overrides.yml by:</p> <pre><code> - control/orogen/&lt;package&gt;: url: git://spacegit.dfki.uni-bremen.de/virgo/orogen-&lt;package&gt;.git </code></pre> <p>But if I inspect the remotes of the newly checked out package, only the fetch url is adapted to spacegit whereas the push url still points to the default gitorious repo:</p> <pre><code>$ git remote -v autobuild git://spacegit.dfki.uni-bremen.de/&lt;project&gt;/orogen-&lt;package&gt;.git (fetch) autobuild git@gitorious.org:/rock-control/orogen-&lt;package&gt;.git (push) </code></pre> <p>How can I overwrite both the fetch and the push source of a package in the overrides.yml?</p>
How to overwrite default git source in autoproj?
<p>The Kalman framework can be adapted to deal with joint state and delay estimation: see <a href="http://www.diva-portal.org/smash/get/diva2:483306/FULLTEXT01.pdf" rel="nofollow">this paper</a> as an example.</p> <p>Don't forget to consider the <a href="http://en.wikipedia.org/wiki/Smith_predictor" rel="nofollow">Smith predictor</a>, which will help you design the linear regulator as if the system were without delays.</p>
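<p>To make the Smith predictor idea concrete, here is a rough discrete-time sketch with a first-order internal model and a plain proportional controller standing in for whatever regulator you design. The model coefficients and delay length are assumptions you would identify for your own system:</p>

<pre><code>#define DELAY 20   /* measurement delay in samples (assumed known) */

/* First-order internal model y[k+1] = a*y[k] + b*u[k]; a, b identified offline */
static double a = 0.95, b = 0.05;
static double y_model = 0.0;        /* undelayed model output           */
static double hist[DELAY];          /* circular buffer of model outputs */
static int idx = 0;

/* One control step: returns u given setpoint r and the delayed measurement z */
double smith_step(double r, double z, double kp)
{
    double y_model_delayed = hist[idx];
    /* reconstruct an (approximately) undelayed measurement */
    double y_fb = z + (y_model - y_model_delayed);
    double u = kp * (r - y_fb);     /* any linear regulator fits here */
    hist[idx] = y_model;            /* store, then advance the buffer */
    idx = (idx + 1) % DELAY;
    y_model = a * y_model + b * u;  /* propagate the internal model   */
    return u;
}
</code></pre>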
3098
2014-06-13T14:40:46.723
|sensors|control|sensor-fusion|sensor-error|
<p>Assume I have a rather simple system I want to control, but all sensor measurements exhibit considerable time delay, i.e.:</p> <p>$z_t = h(x_{(t-d)}) \neq h(x_t)$ </p> <p>With my limited knowledge about control, I could imagine the following setup:</p> <ul> <li>One observer estimates the delayed state $x_{(t-d)}$ using control input and (delayed) measurements.</li> <li>A second observer uses the delayed observer's estimate and predicts the current state $x_t$ using the last control inputs between the delayed measurement and the current time.</li> <li>The second observer's estimate is used to control the system.</li> </ul> <p>Can I do any better than that? What is the standard approach to this problem? And is there any literature or research about this topic?</p>
Controlling a system with delayed measurements
<p>There are several traps you might have stepped into, but it is difficult to tell without more information. The first issues that came to my mind:</p> <ul> <li><p>The equations you wrote down are for <em>sampling</em> from the velocity motion model. But then you write about the Kalman Gain approaching singularity, which only makes sense of you apply a Gaussian filter (EKF or UKF). There is no sampling in EKF or UKF.</p></li> <li><p>The model above is not defined for $\omega = 0$. You need to handle this special case by computing the limit for $\omega \to 0$. Hint: <a href="https://en.wikipedia.org/wiki/L%27H%C3%B4pital%27s_rule" rel="nofollow">L'Hôpital's rule</a></p></li> <li><p>The model assumes perfect accuracy (no noise) if $\omega = v = 0$. This is a rather strong assumption and may or may not lead to problems.</p></li> </ul>
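<p>As a worked example of the second point: taking the limit $\hat{w} \to 0$ (via L'Hôpital or a series expansion of the sine and cosine terms) reduces the pose update to straight-line motion, which you can switch to whenever $|\hat{w}|$ falls below some small threshold:</p>

\begin{align*} x' &amp;= x + \hat{v} \Delta{t} \cos \theta \\ y' &amp;= y + \hat{v} \Delta{t} \sin \theta \\ \theta' &amp;= \theta + \hat{\gamma} \Delta{t} \end{align*}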
3101
2014-06-14T02:32:35.297
|mobile-robot|kinematics|motion|motion-planning|noise|
<p>I want to implement the velocity motion model in Matlab. According to <a href="http://rads.stackoverflow.com/amzn/click/0262201623" rel="nofollow">Probabilistic Robotics</a>, page 124, the model is as follows </p> <p>\begin{align*} \hat{v} &amp;= v + sample(\alpha_{1} v^{2} + \alpha_{2} w^{2}) \\ \hat{w} &amp;= w + sample(\alpha_{3} v^{2} + \alpha_{4} w^{2}) \\ \hat{\gamma} &amp;= sample(\alpha_{5} v^{2} + \alpha_{6} w^{2}) \\ x' &amp;= x - \frac{\hat{v}}{\hat{w}} sin \theta + \frac{\hat{v}}{\hat{w}} sin(\theta + \hat{w} \Delta{t}) \\ y' &amp;= y + \frac{\hat{v}}{\hat{w}} cos \theta - \frac{\hat{v}}{\hat{w}} cos(\theta + \hat{w} \Delta{t}) \\ \theta' &amp;= \theta + \hat{w} \Delta{t} + \hat{\gamma} \Delta{t} \end{align*}</p> <p>where $sample(b^{2}) \Leftrightarrow \mathcal{N}(0, b^{2})$. With this kind of variance, $\alpha_{1} v^{2} + \alpha_{2} w^{2}$, the Kalman gain approaches singularity. Why? </p>
Velocity Model Motion in Matlab (Probabilistic Robotics)
<p>The motor/esc pair you selected look fine. However, you should probably use a larger battery. 2.2 Ah * 25C = 55 amp continuous current draw. Each of your motors is rated 15 amps, but the speed controllers can handle more which means that they will probably occasionally draw more. 15 amps * 4 motors = 60 amps which is 5 amps higher than your battery is rated at. While this might work, the battery probably wouldn't last very long, and I personally wouldn't want to be near it while it is on. I would strongly suggest that you either find a larger battery or one with a higher C rating. Pushing lipos to their maximum ratings is almost never a good idea. </p>
3105
2014-06-14T05:57:10.350
|quadcopter|brushless-motor|multi-rotor|esc|battery|
<p>I'm a noobie just starting out and trying to come up with what I need to build my first quadrocopter; I just wanted to run something by people with some experience before I commit to buying anything.</p> <p>Would <a href="http://www.hobbyking.com/hobbyking/store/__27775__Turnigy_Multistar_20_Amp_Multi_rotor_Brushless_ESC_2_4S_OPTO_.html" rel="nofollow">this ESC</a> be fine for running <a href="http://www.hobbyking.com/hobbyking/store/__39036__Turnigy_Multistar_2213_980Kv_14Pole_Multi_Rotor_Outrunner.html" rel="nofollow">this motor</a>? As I understand it, the ESC should be rated slightly above the max amps of the motor?</p> <p>On top of that, should <a href="http://www.hobbyking.com/hobbyking/store/__7636__ZIPPY_Flightmax_2200mAh_3S1P_25C_EU_warehouse_.html" rel="nofollow">this battery</a> be able to run all of the motors without any issue?</p>
Will a 20amp ESC run a Turnigy 2213-980?
<p>Very short answer: 2</p> <hr> <h1>Sensors</h1> <p>Regarding whether reading from sensors all in one node or each separately, you should ask yourself this question:</p> <h3>Are the sensors meaningless without the other?</h3> <p>This question asks if the sensors are tightly coupled or not. For example, say you have a sensor that is sensitive to temperature (and you need to compensate for it). You add a temperature sensor primarily to fix the value of the other sensor. In this scenario, it makes sense to read both values at the same time, since they are tightly coupled. In fact, without the readings from the temperature sensor, the readings from the original sensor is useless.</p> <p>On the other hand, if the sensors are individually useful, by all means keep them in separate nodes. This has many benefits:</p> <ul> <li>The nodes can be run on separate processors</li> <li>The nodes can be reused in future robots</li> <li>Failure in communication with one node doesn't bring the whole system down</li> <li>Restart of acquisition from a faulty sensor board can be done separately from the others</li> </ul> <p>In fact, if you <em>need</em> any of the above benefits, you would have to go with separate nodes, even if the sensors are tightly coupled, but that usually doesn't happen.</p> <h1>Actuators</h1> <p>This is analogous.</p> <h3>Are the actuators meaningless without the other?</h3> <p>For example, if you are designing a wrist with <a href="https://robotics.stackexchange.com/a/1201/158">robotic tendons</a> where for each tendon (for whatever reason) two motors are responsible to simultaneously work to move a joint in one or the other direction, then having them served in the same node makes much more sense than separate.</p> <p>On the other hand, where actuators are independent (common case), it makes sense to have one node for each actuator. In that case, each could be put in a different node. Besides the exact same benefits as with sensors, there is this added benefit:</p> <ul> <li>If an actuators is stalled (for whatever reason), the other actuators still function. If there is redundant degrees of freedom, they could even completely compensate for it.</li> </ul> <p>This has one implication. If you <em>need</em> the actuators to work in harmony, then put them in the same node. This is not just because of failure in communication, but because different nodes means different delays; on a distributed system each node is on a different part of the network and hence the difference in delays, on a centralized system different delays happen on high CPU loads due to each process's <em>luck</em> in scheduling.</p> <h1>Should There Be a Handler?</h1> <p>Even though the answer is "it depends", there is a common approach with many advantages. Let's change the name and call it "controller". The approach is "yes, there should be a controller".</p> <p>The advantages of having a controller are (among many):</p> <ul> <li>Decoupled processing: each node is responsible for one thing which means: <ul> <li>Simplicity: which implies <ul> <li>Easier development</li> <li>Easier debugging</li> <li>Fewer errors</li> <li>Less chance of failure</li> </ul></li> <li>Reusability: because the same controller can be used with different sensor nodes if they have the same functionality (i.e. message and service formats).</li> </ul></li> <li>Execution on separate hardware: each node can be moved in the network. 
For example, sensor and actuator nodes may be moved to a dedicated microcontroller (<a href="http://wiki.ros.org/rosserial_arduino" rel="noreferrer">Arduino</a> for example (not that I recommend)) and the controller on a PC.</li> <li>Avoid extreme ugliness: if the sensors wanted to directly influence the actuators, the result is simply a mess. Assuming no controller, let's look at each case: <ul> <li>One sensor node: basically this means the sensor node and the controller are put together in the same node. Not too bad, but very unnecessary.</li> <li>Many sensor nodes: this is the mess. This means the controller is <em>distributed</em> among the sensor nodes. Therefore all the sensor nodes have to talk with each other for each to know how to control its associated actuator(s). Imagine a failure in communication or various kinds of delays and you'll see how difficult it becomes. Given that this is utterly unnecessary, there is no reason for doing it!</li> </ul></li> </ul> <p>These said, there are disadvantages too. Having more nodes (any nodes, not just the controller) means:</p> <ul> <li>More wasted communication: the data have to move around in standard formats (so serialized and deserialized) through network or shared memory, ROS core has to look at them and decide who to deliver them to, etc. In short, some system resources are wasted in communication. If all nodes where in one, that cost could have been zero.</li> <li>Higher chance of failure: if for whatever reason a network link goes down, or a node dies, there is a failure in the system. If you are not prepared for it, it can take down the whole system. Now this is actually a good thing in general to be able to lose part of the system but not all of it (<a href="https://en.wikipedia.org/wiki/Graceful_degradation" rel="noreferrer">graceful degradation</a>), but there also exist applications where this should be avoided as much as possible. Cutting the communication and putting all code in one node actually helps with system stability. The down side is of course, the system either works fine or suddenly dies completely.</li> <li>Chaotic timings: each node runs on its own. The time it takes for its messages to arrive at others is non-deterministic and varies run by run. Unless your nodes timestamp each message (as a side note: you need to have synchronized clocks to a good degree, which ROS doesn't) and unless each receiving node can take the delay into account and control accordingly (which is a very difficult task on its own) then having multiple nodes means high uncertainty about the age of the data. This is actually one of the reasons (among many) that most robots move so slow; their control loop has to be slow enough to make sure all data correspond to the current period. The larger the delays, the slower the control loop.</li> </ul> <p>In all above disadvantages, the solution is to reduce the number of nodes, preferably to a single node. Wait a minute, that's not using ROS anymore! Exactly.</p> <p>To summarize:</p> <ul> <li>Use ROS for non-realtime systems where delays could sporadically get high. In that case, feel free to have as many ROS nodes as you wish. In fact, it's very good practice to have each ROS node do <em>one and only one</em> thing. That way, they become very simple, and they become highly reusable.</li> <li>On the other hand, for realtime systems, by all means avoid ROS. 
For that there is <a href="http://www.orocos.org/" rel="noreferrer">orocos</a> and technologies like <a href="https://en.wikipedia.org/wiki/EtherCAT" rel="noreferrer">EtherCAT</a> and more often than not, ad-hoc solutions.</li> </ul> <p>As a final word, in practice ROS does fine. Not great, but fine. Very often the system is not critical and the chance of failure is so small that every now and then a restart is not a big deal. This is the <a href="https://en.wikipedia.org/wiki/Ostrich_algorithm" rel="noreferrer">Ostrich algorithm</a>!</p>
3110
2014-06-16T08:39:24.630
|control|ros|
<p>I'm going to build a <em>small robot system</em>, and it seems that ROS provides a nice framework to control and program the system.</p> <p>However, I am wondering what the best practice is for managing the components of my robot.</p> <ul> <li><p>Does it make sense to put all the sensors in one node?</p></li> <li><p>Should I only put the sensors of the same type in one node, or is it better to have one node per sensor? </p></li> <li><p>Is it good practice to have some kind of handler node, which takes input from the sensors and steers the corresponding actuators, or should the actuator nodes and sensor nodes communicate directly?</p></li> </ul> <hr> <ol> <li><p>Fused sensor nodes and actuator nodes with handler <img src="https://i.stack.imgur.com/wkjDK.jpg" alt="1. Fused sensor nodes and actuator nodes with handler"></p></li> <li><p>Single sensor and actuator nodes with handler <img src="https://i.stack.imgur.com/WZkGN.jpg" alt="enter image description here"></p></li> <li><p>Direct communication <img src="https://i.stack.imgur.com/MyCez.jpg" alt="enter image description here"></p></li> </ol> <p>For me, I guess the best is to have some kind of handler that manages the communication between sensors and actuators, with one node for each element of the robot (like in figure 2), because this way the system is loosely coupled and can be extended easily; but I want to know what your opinion is.</p>
ROS: Best practices?
<p>I don't know of any devices that exist for this purpose (besides devices that are not for sale, such as <a href="http://tangible.media.mit.edu/project/transform/" rel="nofollow">the MIT Media Lab's "Transform" table</a>), but I'd imagine that the 2 most promising approaches for a homemade solution would be:</p> <ol> <li>Use a robot arm or X/Y table to move to each pin location and push it the desired distance -- one pin at a time. Reset the pins by just tilting the pins' frame back.</li> <li>Attach a <a href="http://en.wikipedia.org/wiki/Bowden_cable" rel="nofollow">bowden cable</a> to each pin and use a servo (or linear actuator) at the other end to control the position. This has the advantage of setting all the pins in parallel.</li> </ol>
3117
2014-06-17T15:18:32.427
|3d-printing|
<p>I'm looking for a device that can push out independent pin points from something similar to a Pin Point Impression Toy. I'm looking to create a 3D image from, for example, my computer. Does anybody know the name of such a device, or can point me in the right direction for making one? </p> <p>I've been looking for a while now, but I'm having some slight problems finding a good way to describe it as a search term.</p> <p>I'm sorry if this is the wrong forum.</p>
Device that can push out independent pin points?
<p>The KF (EKF), even though it is closed form, does not require you to feed it directly what your sensors sense. Preprocessing is OK if you are doing it perfectly and not losing information. </p> <p>Consider GPS measurements, which may occur in a coordinate frame which is "rotated" with respect to the robot's base or the robot's map coordinate frame. The "rotation" (pre-processing) and scaling (to find x-y from long-lat) are typically not included in the Jacobian. </p> <p>In your example, if "rotating" the velocity sensor meant that it detected $v+b$ for velocity vector $v$ and some fixed bias $b$, then yes, simply removing the bias before incorporating the data is fine. Your innovation has the same value, since $h(x)$ is already the operation you described (applying the rotation).</p> <p>But if the velocity sensor now measured $v+f(x)$, as in it measured greater velocity if the robot was turning right than left (as is the case for optical flow calculations from a camera facing a little to the right instead of directly forward), then you'd have to be much more careful. (Update the Jacobians, or have near-perfect pre-processing.)</p>
3118
2014-06-17T16:48:08.560
|kalman-filter|ekf|
<p>Let's say I have a hypothetical sensor that provides, for example, velocity estimates, and I affix that sensor at some non-zero rotational offset from the robot's base. I also have an EKF that is estimating the robot's velocity.</p> <p>Normally, the innovation calculation for an EKF looks like this:</p> <p>$$ y_k = z_k - h(x_k) $$</p> <p>In this case, $h$ would just be the rotation matrix of the rotational offset. What are the ramifications if instead, I pre-process the sensor measurement by rotating $z_k$ by the inverse rotation, which will put its coordinates in the frame of the robot? Can I then safely just make $h$ the identity matrix $I$?</p>
Observation Model Jacobian for Fixed Transforms
<p>Try AndyMark; their 10-inch mecanum wheels (or smaller) may suit you well.</p>
3121
2014-06-17T22:12:32.113
|wheel|
<h2>Where can I buy multi-directional omni wheels?</h2> <p>I'm specifically looking at something which can support in <strong>excess of 100kg/wheel, so around 400kg in total</strong>. Also, a possible mission profile would include a 300 meter excursion outdoors on asphalt path, so they should be a little durable. The only ones I can find online are small ones for experimenting.</p> <p><img src="https://4.bp.blogspot.com/-tB_edZOgyPc/TvCsA0_ebAI/AAAAAAAA5pg/cV4ggfOYUWs/s1600/Omniwheel-grabcad.JPG" alt="pic"></p>
Where can I buy heavy-duty Omni Wheels?
<p>The factors that you mention will definitely have an effect on the accuracy of your PID. Whether that inaccuracy is enough to negatively affect your outcome is something that only you can determine -- preferably after doing some tests. In most cases, these inaccuracies are safe to ignore.</p> <p>In general, if you have some nonlinear effect in your actuator, you would handle that separately from your PID. For your case, you would need to write some algorithm that converts the desired value (PID output) to the appropriate PWM signal, based on the current state of the motor. Even if you can't measure the properties of the motor directly to do this, a simple mathematical model of the motor might be enough.</p> <p>Before you start in this direction, make sure that you've actually found a problem behavior that is being caused by naive conversion of PID level to PWM level (i.e., you know that your PID is tuned properly but you still see a problem). Otherwise, you will have no way to judge whether your algorithm is helping or hurting your results.</p>
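<p>If you do get to that point, here is a minimal sketch of such a conversion, assuming the PID output is interpreted as a desired motor voltage and using a crude back-EMF model. The constants are placeholders, not measured values:</p>

<pre><code>#define V_BATT 6.0     /* supply voltage [V] -- placeholder               */
#define KE     0.004   /* back-EMF constant [V per rad/s] -- placeholder  */

/* Map a PID output (desired voltage) to a PWM duty cycle, compensating
   for the voltage that the back-EMF at the current speed will absorb. */
double pid_to_duty(double v_desired, double omega /* estimated speed, rad/s */)
{
    double v_cmd = v_desired + KE * omega;
    double duty = v_cmd / V_BATT;
    if (duty &gt; 1.0) duty = 1.0;   /* saturate -- and handle integrator windup */
    if (duty &lt; 0.0) duty = 0.0;
    return duty;
}
</code></pre>

<p>Whether such compensation actually helps is something you would verify empirically, for example by comparing step responses with and without it.</p>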
3124
2014-06-18T08:54:44.627
|motor|pid|pwm|
<p>It is "good enough" for PID output directly controls, without further modelling, the PWM duty cycle?</p> <p>Logic behind the question is, </p> <p>In case of pure resistance heater, PWM duty cycle percentage directly relates to power (on off time ratio). So, direct control is appropriate.</p> <p>However, motor has two additional effects,</p> <p>a) with considerable <strong>inductance</strong>, initial current is smaller and ramping up over time</p> <p>b) as RPM gradually ramping up, after time constant of mechanical inertia etc, increasing <strong>back EMF</strong> will reduce current</p> <p>Will it be wise to ignore the above two effects and still expect a reasonably good outcome? </p> <p>Application is 6 volts, 2 watt DC brushed motor, gear 1:50, 10000 RPM no load, PWM frequency 490Hz, driving DIY 1kg robot.</p>
PWM PID control for small 2 watts brushed DC motor
<p>This paper, <a href="http://www.nt.ntnu.no/users/skoge/prost/proceedings/ecc-2013/data/papers/0927.pdf" rel="nofollow">Full Quaternion Based Attitude Control for a Quadrotor</a> by Emil Fresk and George Nikolakopoulos, demonstrates what you are trying to achieve.</p> <blockquote> <p><em>Abstract</em>— The aim of this article is to present a novel quaternion based control scheme for the attitude control problem of a quadrotor. A quaternion is a hyper complex number of rank 4 that can be utilized to avoid the inherent geometrical singularity when representing rigid body dynamics with Euler angles or the complexity of having coupled differential equations with the Direction Cosine Matrix (DCM). In the presented approach both the quadrotor’s attitude model and the proposed non-linear Proportional squared (<em>P<sup>2</sup></em>) control algorithm have been implemented in the quaternion space, without any transformations and calculations in the Euler’s angle space or DCM. Throughout the article, the merits of the proposed novel approach are being analyzed and discussed, while the efficacy of the suggested novel quaternion based controller are being evaluated by extended simulation results.</p> </blockquote>
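<p>Independent of that particular paper, the usual building block is to form the error quaternion between the desired and measured attitude and feed its vector part to the rate loops. A hedged C sketch follows; multiplication order and sign conventions vary between sources, so check them against your IMU's convention:</p>

<pre><code>typedef struct { double w, x, y, z; } quat;

/* Hamilton product a * b */
static quat qmul(quat a, quat b)
{
    quat r;
    r.w = a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z;
    r.x = a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y;
    r.y = a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x;
    r.z = a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w;
    return r;
}

static quat qconj(quat q) { quat r = { q.w, -q.x, -q.y, -q.z }; return r; }

/* Error quaternion q_err = q_desired * conj(q_measured).  For small errors its
   vector part is roughly half the needed rotation about each body axis, and
   can feed the roll/pitch/yaw rate loops directly -- no Euler angles needed. */
void attitude_error(quat q_des, quat q_meas, double err[3])
{
    quat e = qmul(q_des, qconj(q_meas));
    if (e.w &lt; 0) { e.x = -e.x; e.y = -e.y; e.z = -e.z; } /* take shorter path */
    err[0] = e.x;  /* roll-axis error  */
    err[1] = e.y;  /* pitch-axis error */
    err[2] = e.z;  /* yaw-axis error   */
}
</code></pre>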
3137
2014-06-20T13:20:14.710
|quadcopter|pid|stability|
<p>I'm making a quadcopter. I have set up a PID loop to stabilize it to a given Euler angle (pitch and roll). The problem arises when the roll approaches 90 degrees (from about 45 degrees up): the values no longer make sense, as the orientation approaches gimbal lock. I intend to make it do complex maneuvers like looping etc., which exceed the 45 degree roll limit.</p> <p>How can I use quaternions to overcome this problem? (I get quaternions from the MPU-9150.) I have read many articles on quaternions, but they all talk about rotations in 3D software, and tweening between two rotation points. This makes little sense to me, as I do not know imaginary numbers and matrices.</p>
How to use quaternions to feed a PID quadcopter stabilization loop?
<p>For these sorts of problems, I always like to attempt to solve them myself. And they're surprisingly simple when you think about them geometrically. Parallel mechanisms are especially easy because each part of the mechanism can be considered separately, and each part is very simple.</p> <p>The base part (at the top of the image) contains three servo motors. Think about one motor. When the end effector (blue triangle) is in some position, what angle does the motor need to be?</p> <p><img src="https://i.stack.imgur.com/TCNYQ.png" alt="Parallel Mechanism"></p> <p>Imagine a sphere centered on one corner of the end effector triangle. The radius of the sphere is the length of the arm attached to that corner. Now imagine the servo arm swinging, tracing out a circle. Where does this circle intersect the sphere?</p> <p>This is a pretty simple problem to solve. Think of the servo arm sweeping out a plane. That plane intersects the sphere:</p> <p><img src="https://i.stack.imgur.com/E1h9z.png" alt="Parallel Mechanism"></p> <p>... <a href="https://en.wikipedia.org/wiki/Circle_of_a_sphere" rel="noreferrer">creating a circle</a>. Now the problem is just the intersection of two circles. Find the two intersection points, and choose the one that makes most sense (the one that puts the servo arm into its possible angle range.</p> <p>Added: For an intuitive explanation of inverse kinematics for serial manipulators, that avoids horrible Denavit-Hartenberg parameters, try my old <a href="http://freespace.virgin.net/hugo.elias/models/m_ik2.htm" rel="noreferrer">Inverse Kinematics Article</a>.</p>
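<p>To make the geometry above concrete, here is a hedged C sketch of that circle&ndash;sphere intersection for one arm, with the frame rotated so the arm swings in the YZ plane. The symbols follow a common delta-robot convention (f = base triangle side, e = effector triangle side, rf = upper/servo arm length, re = forearm rod length), and the dimensions are placeholders:</p>

<pre><code>#include &lt;math.h&gt;

/* Geometry (placeholders -- measure your own robot) */
static const double e = 30.0, f = 60.0, rf = 50.0, re = 80.0;

/* Solve one servo angle.  (x0, y0, z0) is the effector corner in a frame
   where this arm works in the YZ plane and z0 &lt; 0 below the base.
   Returns 0 and writes *theta (radians), or -1 if the point is unreachable. */
static int angle_yz(double x0, double y0, double z0, double *theta)
{
    double y1 = -0.5 * f / sqrt(3.0);   /* servo shoulder joint position */
    y0 -= 0.5 * e / sqrt(3.0);          /* shift to the effector corner  */
    /* circle (servo arm) meets sphere (forearm): the chord is z = a + b*y */
    double a = (x0*x0 + y0*y0 + z0*z0 + rf*rf - re*re - y1*y1) / (2.0 * z0);
    double b = (y1 - y0) / z0;
    double d = -(a + b*y1)*(a + b*y1) + rf*(b*b*rf + rf);  /* discriminant */
    if (d &lt; 0) return -1;
    double yj = (y1 - a*b - sqrt(d)) / (b*b + 1.0);  /* outer elbow choice */
    double zj = a + b*yj;
    *theta = atan2(-zj, y1 - yj);
    return 0;
}
</code></pre>

<p>The other two servo angles follow by rotating the target point 120° and 240° about the vertical axis and calling the same routine.</p>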
3144
2014-06-23T01:16:02.063
|kinematics|inverse-kinematics|
<p>Let me start off by saying that I am currently at university, majoring in computer engineering. I love software/hardware, I especially love robotics, and I want to apply my knowledge of software/hardware to robots. I have never taken a formal class on robotics, so I don't really know where to start or how to approach the mathematics that robots entail. </p> <p>Currently, I am interested in calculating the inverse kinematics of a delta robot. To clarify a bit more, I am trying to determine the required joint angles that will position the end-effector of the delta robot at a specific location, given some x,y,z coordinate. The delta robot that I will be basing my design off of is shown in the image below.</p> <p><img src="https://i.stack.imgur.com/mr8hD.jpg" alt="enter image description here"></p> <p>Based on some research that I have been doing for the past few days, I found that the mathematics involved usually concerns things like Denavit-Hartenberg parameters, Jacobian matrices, etc. To be honest, I have never encountered Denavit-Hartenberg parameters or Jacobian matrices, and I don't even know how to apply these to solve the kinematics equations, let alone how to find those equations in the first place. Most of the articles that I have read mainly deal with serial manipulator robots and the mathematics of finding the kinematics equations of those serial manipulators. I couldn't really find any good material on parallel manipulators, or material that was easy to understand given my current situation. </p> <p>I wanted to ask my question here in the hopes that someone in the community could direct me to where I can start learning more about obtaining the inverse kinematics equations of parallel manipulators and solving those equations.</p> <p>Any help will be much appreciated.</p> <p>Thank you.</p>
Inverse Kinematics of Parallel Manipulator (Delta Robot)