Fields per record, one per line: Answer, Id, CreationDate, Tags, Body, Title
<p>The best book I have seen on the subject of mechanisms is <a href="http://www.amazon.co.uk/Mechanisms-Mechanical-Sourcebook-Nicholas-Chironis/dp/0071361693/" rel="nofollow noreferrer">Mechanisms And Mechanical Devices by Neil Sclater and Nicholas P. Chironis</a>.</p> <p><img src="https://i.stack.imgur.com/0WZ63.jpg" alt="Mechanisms And Mechanical Devices"></p> <p>It's got loads of great mechanisms in it, from simple linear movements:</p> <p><img src="https://i.stack.imgur.com/gQroz.png" alt="enter image description here"></p> <p>... to complex packaging machines:</p> <p><img src="https://i.stack.imgur.com/arW7F.png" alt="enter image description here"></p> <p>It also covers robotics.</p>
641
2012-12-07T16:29:15.963
|design|mechanism|
<p>I am working with students (9th &amp; 10th grade) on robotics and wanted to get a good book which covers basic mechanisms. Does anyone have any recommendations? Searching Google or Amazon yields many results; however, I thought the community might have a standard book to use. </p>
Good book on mechanisms
<p>There is a project on Google Code called <a href="http://code.google.com/p/mma7361-library/" rel="nofollow">MMA7361-library</a>. The <a href="http://www.geeetech.com/wiki/index.php/MMA7361_Triple_Axis_Accelerometer_Breakout" rel="nofollow">GE Tech Wiki</a> has a simple example showing how to use this library. Copied inline below for posterity.</p> <pre><code>#include &lt;AcceleroMMA7361.h&gt;

AcceleroMMA7361 accelero;
int x;
int y;
int z;

void setup()
{
  Serial.begin(9600);
  accelero.begin(13, 12, 11, 10, A0, A1, A2);
  accelero.setARefVoltage(5);    // tells the library which AREF voltage is used (3.3 or 5)
  accelero.setSensitivity(LOW);  // sets the sensitivity to +/-6G
  accelero.calibrate();
}

void loop()
{
  x = accelero.getXAccel();
  y = accelero.getYAccel();
  z = accelero.getZAccel();
  Serial.print("\nx: ");
  Serial.print(x);
  Serial.print(" \ty: ");
  Serial.print(y);
  Serial.print(" \tz: ");
  Serial.print(z);
  Serial.print("\tG*10^-2");
  delay(500);  // make it readable
}
</code></pre>
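<p>Regarding the tilt part of the question: once the readings are calibrated, tilt can be estimated from the direction of the gravity vector. A minimal sketch building on the example above (assuming the sensor is quasi-static so gravity dominates the reading; <code>x</code>, <code>y</code>, <code>z</code> are the values returned by <code>getXAccel()</code> etc., all in the same units; in a plain C++ program you would also include <code>math.h</code>):</p> <pre><code>// Pitch and roll in degrees from a quasi-static 3-axis accelerometer reading.
// 57.2958 converts radians to degrees.
float pitchDeg(float x, float y, float z) {
  return atan2(x, sqrt(y * y + z * z)) * 57.2958;
}

float rollDeg(float y, float z) {
  return atan2(y, z) * 57.2958;
}
</code></pre>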
642
2012-12-08T05:06:56.877
|arduino|sensors|accelerometer|
<p>I recently purchased a 3-axis accelerometer from Amazon, and can't seem to find how it works. I've been looking for quite a while now, and haven't found any real clues. The x, y, and z values always seem to return the same values. They change when I tilt or move the accelerometer, but revert to about 120 for each reading. I am currently using this device with the Arduino Uno, using the following code: </p> <pre><code>int x=1,y=2,z=3;

void setup() {
  pinMode(x, INPUT);
  pinMode(y, INPUT);
  pinMode(z, INPUT);
  Serial.begin(9600);
}

void loop() {
  Serial.println();
  Serial.print(analogRead(x));
  Serial.print(", ");
  Serial.print(analogRead(y));
  Serial.print(", ");
  Serial.print(analogRead(z));
}
</code></pre> <p>Also, how would I go about converting this to tilt?</p>
MMA7361 Accelerometer Always Displays Same Values
<p>It is possibly a bias in the accelerometer. The measured non-zero results (like yours) are the bias. I can't say whether the magnitude of these biases is reasonable, i.e., you may be experiencing more error than you should expect from a bias alone. To be clear, you can subtract this bias from your estimate of acceleration. However, you should be sure that the bias doesn't change with time; if it does, you'll need more sophisticated methods. See: <a href="http://www.vectornav.com/support/library?id=76" rel="nofollow">overview of INS</a> or <a href="http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=05594974" rel="nofollow">some literature</a> and <a href="http://www.analog.com/en/content/td_accelerometer_specifications_definitions/fca.html" rel="nofollow">typical accelerometer specs</a>.</p> <p>Does your datasheet help? There's probably a section on calibration schemes. <a href="http://www.analog.com/en/mems-sensors/mems-inertial-sensors/adxl345/products/product.html" rel="nofollow">For example</a>:</p> <p>"In a no-turn or single-point calibration scheme, the part is oriented such that one axis, typically the z-axis, is in the 1 g field of gravity and the remaining axes, typically the x- and y-axis, are in a 0 g field. The output is then measured by taking the average of a series of samples. The number of samples averaged is a choice of the system designer, but a recommended starting point is 0.1 sec worth of data for data rates of 100 Hz or greater. This corresponds to 10 samples at the 100 Hz data rate. For data rates less than 100 Hz, it is recommended that at least 10 samples be averaged together. These values are stored as X0g, Y0g, and Z+1g for the 0 g measurements on the x- and y-axis and the 1 g measurement on the z-axis, respectively."</p> <p>EDIT: Also please compare this error to the specified precision of your sensor. Turns out this small error was not a problem, and not unexpected.</p>
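<p>As a concrete illustration of that single-point scheme, a minimal sketch (plain C++; the helper names are made up for this answer, and the readings are assumed to be already converted to g):</p> <pre><code>#include &lt;cstddef&gt;

// Single-point ("no-turn") offset estimate, as in the datasheet excerpt above:
// hold the part still with the z-axis in the +1 g field, average a batch of
// samples on each axis, and store the offsets. Later readings are corrected
// by subtracting the offsets.
struct Offsets { double x0g, y0g, z0g; };

Offsets estimateOffsets(const double* xs, const double* ys, const double* zs,
                        std::size_t n) {
    double sx = 0.0, sy = 0.0, sz = 0.0;
    for (std::size_t i = 0; i &lt; n; ++i) {
        sx += xs[i];
        sy += ys[i];
        sz += zs[i];
    }
    // x and y should read 0 g in this orientation, z should read +1 g.
    return Offsets{sx / n, sy / n, sz / n - 1.0};
}

double corrected(double raw, double offset) { return raw - offset; }
</code></pre>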
646
2012-12-10T11:51:14.537
|design|electronics|accelerometer|calibration|
<p>I am trying to calibrate a MEMS accelerometer. I was able to calibrate it for the axis that is currently parallel to gravity, which correctly shows 1 g. But the other two axes, which should read 0.00 g, are showing ±0.02 g instead. So, e.g., when the accelerometer's x axis is parallel to gravity, it should show (1g, 0g, 0g) and not (1g, 0.02g, -0.01g) like now.</p> <p>How could I eliminate those values, e.g., by calibrating the accelerometer further? </p> <p><strong>EDIT:</strong> The <a href="http://www.st.com/internet/com/TECHNICAL_RESOURCES/TECHNICAL_LITERATURE/DATASHEET/CD00091417.pdf" rel="nofollow">accelerometer's datasheet</a> says nothing about calibrating except that <em>The IC interface is factory calibrated for sensitivity (So) and Zero-g level (Off)</em> (page 20).</p>
MEMS accelerometer calibration
<p>I'm in agreement with <a href="https://robotics.stackexchange.com/a/652/37">@movrev</a> but wanted to expand beyond the scope of a comment. RN-42 is slick. I'm coding for it now, and I think it is an excellent BT choice.</p> <p>Low cost and multiple receivers (switches) appear to be mutually exclusive. You might consider the RN-42 as a BT receiver to preserve the smartphone interface. Then, you might consider an 802.15.4 (Zigbee) mesh-like solution to distribute a switch command from the RN-42 "master" receiver to the "slave" switches. The Microchip MRF24J40 is a reasonable 802.15.4 solution.</p>
650
2012-12-10T17:44:15.900
|sensors|circuit|
<p>I can control a relay from an Android smartphone using Arduino and Bluetooth as seen <a href="http://bellcode.wordpress.com/2012/01/02/android-and-arduino-bluetooth-communication/" rel="nofollow">here</a>.</p> <p>However, it seems too costly to use an Arduino and a Bluetooth receiver just for driving a switch. Since Bluetooth is a radio-frequency technology, is it possible to make a <strong>simple</strong> Bluetooth receiver which can output 1 or 0 to drive a relay? If yes, how tough is that going to be?</p> <p>The main factor here is the <strong>cost</strong>, which should be \$1-\$5. </p>
Can I make a simple Bluetooth receiver?
<p>You can forcibly disable the IR emitter on the Kinect for Windows (by using the property <code>ForceInfraredEmitterOff</code>), which you cannot do with the Xbox Kinect. This feature is very useful when using multiple Kinect sensors, because serious interference issues arise when their FOVs overlap.</p>
654
2012-12-11T09:58:33.730
|sensors|kinect|
<p>As I see it, there is a huge price gap between the two: \$223 vs. \$99 (at Amazon).</p> <p>My intention is to use one of them from Ubuntu Linux to perform depth sensing, navigation, etc., and naturally I prefer the cheaper one. However, I am not sure if I am missing some important point by betting on the Kinect for Xbox version. </p> <p>It seems the Windows version is <a href="http://www.pcworld.com/article/247724/kinect_for_windows_available_february_1_but_overpriced_at_249.html" rel="noreferrer">overpriced</a> because it includes the license for development. <a href="https://gaming.stackexchange.com/questions/73290/can-kinect-for-windows-work-on-an-xbox-360">Here</a> it is stated that there are internal differences, but without exact details (the minimum sensing distance seems to be better for the Windows version).</p> <p>Could anyone give a comparison chart? It would be good to know about</p> <ul> <li>Connectivity: USB, special connector, ... .</li> <li>Hardware differences: are they the same or do they really differ in weight, energy consumption, speed, sensing range, ...?</li> <li>Driver: could I use the Xbox version under Ubuntu?</li> <li>API usage: could I develop on the Xbox version, could I use the same/similar API on both, is the API for the Xbox mature enough?</li> <li>License: is it against the license of the Xbox version to develop for home/hobby/educational use?</li> </ul> <p>Thanks.</p>
What is the difference between Kinect for Windows and Kinect for XBox?
<p>You could use a simplified Linux (without the X windowing system and other graphics utilities that a robot may not need). Raspbian Lite is one such OS.</p> <p>Then there are a number of robot frameworks that will run on the Raspberry Pi. They run under the Linux OS.</p> <p>ROS (the Robot Operating System) is one of the most popular frameworks, but also the most complicated to learn. As Peter Parker learned: With Great Power Comes a Great Learning Curve. :)</p> <p>ROS is written in C++, but can be accessed in a number of languages including Java and Python.</p> <p>Another option is <a href="https://gobot.io/" rel="nofollow noreferrer">Gobot</a>, a framework written in the language Go. It has a number of sensors and boards that the framework knows about. You can put the main intelligence onto a large computer and use a smaller board with a wireless connection on the robot. Gobot has sister projects (Artoo, which uses Ruby, and Cylon.js, which uses JavaScript).</p> <p>Go is a C-like language that is fairly easy to use, created by Google. I think of it as C's BASIC (Go is to C like BASIC is to Fortran). It compiles very quickly.</p> <p>I am thinking of starting a Rust port of Gobot (Rust is yet another C-like language similar to Go, but it was designed to be as type-safe as possible; many pointer errors common to C/C++ are caught by the Rust compiler).</p>
667
2012-12-12T19:57:51.837
|raspberry-pi|operating-systems|
<p>Is there an operating system for the Raspberry Pi that is specifically made for running robotics applications? Or an operating system that is optimized just to run a few specific programs?</p> <p>I've been working with an Arduino for a while now. As far as efficiency goes, it makes sense to me to just upload a specific set of commands and have the hardware only need to handle that, and not have to worry about running a full-fledged operating system. Is something like this possible to do on a Raspberry Pi?</p>
Raspberry Pi operating system for robotics
<p>As NXC is a compiled language, I would assume calling a non-existent function or accessing a non-existent variable would produce a compile error, as it does in other compiled languages I have used.</p> <p>Conversely, in interpreted languages like PHP and Python you can often call or access things that don't exist without creating an issue until that actual call or access happens - hence the need for methods like <code>isset()</code> and <code>function_exists()</code> in PHP.</p> <p><strong>Edit:</strong> Looking at the <a href="http://bricxcc.sourceforge.net/nbc/nxcdoc/nxcapi/task.html" rel="nofollow">Tasks docs</a>, it seems that a "task" is similar to a function in that it is defined in the source code, versus being created dynamically at run-time. I expect that if you write something like <code>Precedes(non_existent_task_name);</code> ("Precedes" being a function to start tasks) and try to compile, you would trigger the same sort of compile error as you would if you did something like <code>call_of_non_existent_function();</code></p>
671
2012-12-13T04:43:47.557
|nxt|programming-languages|mindstorms|not-exactly-c|
<p>Is there a way to check if a task, function or variable exists in Not eXactly C?</p> <p>I know that in PHP you can use <code>isset()</code> to check if a variable exists and <code>function_exists()</code> to do the same for a function, but is there a way to do that in NXC?</p> <p>I am specifically interested in checking whether a task exists or it is alive.</p>
Check if task exists in Not eXactly C
<p>If I were you, I would first (1) read the <a href="http://www.onsemi.com/pub_link/Collateral/LM2576-D.PDF" rel="nofollow noreferrer">LM2576 datasheet</a>. I'm assuming you are using a circuit similar to the schematic and PCB layout on page 23 of the LM2576 datasheet. I'm guessing you've tweaked the circuit slightly, replacing the manually-operated pot shown on the schematic for R2 with some sort of microprocessor-controlled thing that frequently changes its effective resistance to make that motor spin faster or slower.</p> <p>Then I would (2) put my finger on the chip. If it feels so hot that I can't hold my finger on it, I would suspect</p> <ul> <li>thermal shutdown.</li> </ul> <p>georgebrindeiro covered this. You'll also want to read p. 18 of the datasheet, the section "Thermal Analysis and Design": "The following procedure must be performed to determine whether or not a heatsink will be required. ...". The typical solution is to add a big heat sink to the chip. Do you see the size of the heatsink on the PCB layout on page 23 of the datasheet?</p> <p>Next I would (3) take a cheap multimeter, switch it to "Amp" mode, and connect it in-line with the motor leads.</p> <p>When the output voltage drops (which I measure with my more expensive multimeter), and I hear and see the motors slow down, what does the cheap multimeter say?</p> <p>Does the cheap multimeter stay pegged at some high current above 3 A (the datasheet says the internal limit will be somewhere between 3.5 A and 7.5 A)? If so, then we almost certainly have:</p> <ul> <li>current limiting. The regulator is working as-designed.</li> </ul> <p>The typical solution is to figure out what is pulling so much current, and somehow replace it with something that pulls less current. Perhaps the robot has run into a wall or got stuck in a rut, and the motors have stalled out. Then maybe the controller needs to sense this condition, stop its futile efforts to punch through the wall, and reverse direction.</p> <p>Sometimes we really need more current than one regulator can supply. Then we must replace that regulator with multiple regulators or higher-current regulators or both. (But not so much current that it immediately melts the wires in the motor).</p> <p>On the other hand, if the output voltage <em>and</em> the output current drop, then I would (4) connect my more expensive multimeter to the power input pins of the regulator. If that is much lower than I expected, then perhaps:</p> <ul> <li>The input voltage is lower than I expected.</li> </ul> <p>When you supply the motors with a higher voltage, they drain the batteries much quicker. Partially-drained batteries have a no-load voltage <em>lower</em> than you might expect from the "nominal" voltage printed on the battery, and <em>loaded</em> batteries have an even lower voltage. I doubt this is your problem, since you imply that if you disconnect the battery and reconnect the <em>same</em> battery, it seems to start working again. The typical solution is to put more batteries in series to increase the actual working voltage.</p> <p>Next I would (5) try disconnecting the batteries, and powering everything from a high-current grid-powered "power supply" set to the appropriate output voltages. If it seems to run fine off that power supply, but not from batteries, I would suspect:</p> <ul> <li>switching-regulator latchup. 
"Latchup of Constant-Power Load With Current-Limited Source" <a href="http://www.smpstech.com/latch000.htm" rel="nofollow noreferrer">http://www.smpstech.com/latch000.htm</a></li> </ul> <p>When you supply the motors with a higher voltage, they pull a higher current. Surprisingly often, the total impedance at the input power pins of the switching regulator is so high (bad) that when the regulator turns its internal power switch on, the voltage at its input power pins plummets so low that it's not possible to pull the desired current from that battery. Milliseconds later, when the switch turns off, the battery pulls the voltage back up to some reasonable voltage -- so this problem is difficult to debug with a standard multimeter, but obvious when you have the input power pins connected to an oscilloscope.</p> <p>The only complete cure is to somehow reduce the total input impedance at the input power pins. Occasionally all you need to do is reduce the wiring impedance -- shorten the wires (reduce the resistance), or bring closer together the PWR and GND wires (reduce the inductance), or both. Occasionally all you need to do is put more capacitance across the power input pins of the regulator. As a battery drains, its effective series resistance increases. In theory, you can always cure this by putting more batteries in parallel to reduce the net ESR of all the batteries in parallel.</p> <p>Some people want a very lightweight robot, and so they can't add more batteries, so they can't completely cure the problem. Sometimes they can get adequate performance from the robot by using various "soft-start" or "load-shedding" techniques. Rather than trying to pull a lot of power from the batteries to get up to speed quickly -- more power than the battery can physically supply, and so triggering this unwanted latchup -- we pull somewhat less power from the batteries, slowly ramping up to speed, and applying various "limp mode" and "tired mode" techniques to keep the total power at any one instant low enough that the battery can supply it. (You may be interested in <a href="https://robotics.stackexchange.com/questions/416/what-is-the-best-way-to-power-a-large-number-27-servos-at-5-v">What is the best way to power a large number (27) servos at 5 V?</a> ).</p>
684
2012-12-15T13:14:25.053
|motor|electronics|power|
<p>I have an LM2576 circuit plus an adjuster to adjust the output voltage, for controlling motor speed in a line follower robot. The circuit works great when adjusted to give out low voltages, but when I adjust it to higher voltages for my motors to go faster, it works great for 1-2 minutes, then suddenly cuts down the power and the motors slow to a crawl.</p> <p>Even when I decrease or increase the output voltage, it won't respond until I turn off the power and turn it back on again. There is something mentioned in the LM2576 datasheet that if we overload the IC it will cut down the power until the load decreases, so I think it might be a problem with that.</p> <p>Since this problem has already caused us to lose the competitions with 5+ teams, I would like to solve it for our next competition, so why does our LM2576 circuit suddenly reduce the power?</p>
Why does our LM2576 circuit suddenly cut down the power?
<p>As a follow-up (albeit after 2.5 years) I would like to add some other things that I was able to work on and might possibly help you / anybody else foraying into this field.</p> <p><strong>Option 1)</strong> Use existing robot hardware and add your own sensors and actuators:</p> <p>For example: I had access to an iRobot Create (a Roomba without the vacuum cleaner). I modified it hardware-wise by adding a long "neck" made out of a plastic tube that was lying around (I had to add a tough-plastic platform on top of the Create on the screws provided, so that I could fix whatever I wanted to this plastic platform; see the image below to get an idea). </p> <p><a href="https://i.stack.imgur.com/hlrkc.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hlrkc.jpg" alt="Robot"></a></p> <p>You can then interface with the Roomba from your laptop or other PC using the several Roomba interface packages available (I used the one for Matlab since my CV code was in Matlab). I connected a Kinect v2 to my laptop, and thus I had complete control over the robot as well as the ability to acquire data from the Kinect. The wires and electronics that you see on the robot are from my previous attempt to supply power directly from the robot battery to the Kinect v1 that I was using on it earlier, to make the robot completely mobile. For the v2 I did not bother with the power conversion, as it was complicated, so I just used a tethered power supply for the Kinect v2. Since I had a laptop (a ThinkPad with the USB 3.0 that the Kinect v2 needs), interfacing with the Kinect v2 was relatively easy; I used libfreenect2 for running the Kinect v2, calling it as a system call from my Matlab image processing code.</p> <p>The advantage of this system is that it's cheap. You only have to pay for the robot base, the Kinect, and any other hardware you buy. The disadvantage is that it's a very slow-moving robot since it's top heavy. The Kinect v2 also had to be tethered. This setup cost me around \$300, since the base was pre-owned from eBay, and the Kinect costs ~$200. That's not including any additional hardware costs.</p> <p><strong>Option 2)</strong> Build your own robot hardware. This option is more time- and money-consuming. The way I did this was I purchased a 6-wheel-drive robot chassis (it's not cheap; around $300) from <a href="https://www.sparkfun.com/products/11056" rel="nofollow noreferrer">SparkFun</a>. To this I added a pan-tilt system from <a href="https://www.servocity.com/pt785-s" rel="nofollow noreferrer">ServoCity</a>. And to this PT system I added an Asus Xtion (since it's light). The motors of the robot and the PT system were controlled by a gorilla controller board, which received commands from a Jetson TK1 which was processing the Asus depth data and sending commands to the motors. Unfortunately I do not have the pictures on my machine right now, but it looks a bit like a cross between Wall-E and a 6WD military vehicle ;) . The Jetson TK1 is considerably powerful and I've run the Kinect v1, Asus Xtion, and Kinect v2 on it (it has a USB 3.0 port as well). For the Kinect v1 and Asus I used OpenNI2, and for the Kinect v2 I used libfreenect2 (for both these robot options I used Ubuntu, by the way). But since the PT system was present I stuck with the lightweight Asus sensor.</p> <p>The advantage of this system is that you can change it however you want with different hardware or electronics. 
The disadvantage is that you have to do a lot of motor controller programming on your own, which means you have to understand how to control motors using PWM signals, for example. This setup isn't cheap. It cost me around $800-900, not counting the additional hardware I had to purchase to make it work.</p> <p>In both these options I did not use any odometry data coming from the wheels or sensors (e.g. how many rotations of the wheel, data from the IR sensors on the Create, etc.). I relied entirely on vision data.</p> <p>Hopefully this helps. Let me know if you have questions.</p>
687
2012-12-15T21:31:07.840
|kinect|
<p>I want to learn robotics and am really interested in making a robot based on the Kinect sensor.</p> <p>I see so many projects like <a href="http://www2.macleans.ca/2011/11/03/the-150-robot-revolution/" rel="nofollow">this one</a>, and am just wondering how it works at a top level. I downloaded the Kinect SDK and did some basic tutorials, but I just don't think that the Microsoft SDK is the library to use for real robotics projects. Any suggestions on where to start and what library to use? Any good books in particular, or online resources?</p>
Robotics with Kinect
<p>I've seen a number of systems in this configuration and most went for an outside track solution. Part of the reason for this is control of bend radius. With an outside track, the bend is obvious at all positions and it is clear when you <em>run out of track</em>.</p> <p>If you are bothered about cabling complexity, you could put more of the electronics on the rotated stage, so instead of having motor, encoder and other cables all running down the energy chain, you would just have power and data lines, with everything else done by remote i/o.</p> <p>Taking this to the extreme, I've worked at a place where this technique was used with slip rings for a continuously revolving robot. It had two scara arms and all of the control electronics for them mounted on a revolving platform. The data slip rings were horribly noisy, so the data connection had to have more ECC than normal, but it all worked well.</p>
689
2012-12-16T01:59:01.610
|control|wiring|routing|motion|
<p>I'm building a motion control platform with 3 DoF: 1 axis of rotation (theta) and 2 Cartesian (x, y). In most applications, like wrist actuation, you have an X-Y stage with a rotating servo as the stage's payload. This configuration works well since little of the power and data wiring needs to transit to the non-linear moving portion of the platform. </p> <p>For my inverted application, the stackup is reversed. The rotating axis comes first (from the mounting plane) with the stage connected as the rotating platform's payload. Now nearly all of the wiring (power, command, sensor, and otherwise) must be routed to the non-linearly moving section.</p> <p>I can see two broad approaches: </p> <ol> <li><p>The inside track: I route the cabling through the center of rotation.</p></li> <li><p>The outside track: I route the cabling around outside the outer diameter of the rotating platform.</p></li> </ol> <p>Mathematically, I can see that (1) results in minimum cable length, but maximum torsional loading, while (2) results in maximum cable length, but minimum torsional loading on the wires.</p> <p>Having limited experience with cable routing (and the associated carriers, strategies, and products) in non-linear applications, my question is...</p> <h3>...which approach is better in practice?</h3> <p>Cost isn't really the issue here. I'm more interested in reliability, ease of construction, availability of commercial components (which says something about the popularity of the technique), etc...</p> <p>e.g. the generic concepts behind why you pick one over the other. </p> <p>...of course, if you have some part numbers for me I wouldn't be upset &lt;-- I know I'm not supposed to ask that here ;-)</p>
Cable routing in theta, x, y motion control system. Better inside or outside?
<p>When using the EKF (or standard KF) on a real robot, you will want to tell the filter how much noise there is in each measurement, and in the process.</p> <p>The purpose of this is so that the Kalman filter can decide how much it "trusts" each source of data, and therefore, the weighting to give each measurement in its final estimation.</p> <p>For real robot data, the noise is already in the measurement. I think when you say "noise matrix", you are referring to the covariance matrix. This is not the actual noise per se; rather, the noise covariance matrix describes the magnitude of the noise (that can be expected by the Kalman filter), and the correlation between different noise terms, for a normal distribution of noise. You will generally want as accurate a noise covariance as possible; however, it can simply be estimated. When working with real data, you can perform a quick experiment to estimate the noise covariance, or you can also estimate it by consulting datasheets, or select a somewhat sensible value. Where there is not much data available, the covariance will normally be a diagonal matrix (i.e., no correlation). The diagonal elements of the covariance matrix are also referred to as the <strong>variances</strong>. That means that you are telling the Kalman filter what the variances of the different noise sources are (the square of the standard deviation of the noise).</p> <p>If, on the other hand, you are wondering why the models related to the Kalman filter may have a noise term (as opposed to a noise covariance), they are only models, and those equations are not actually used in the algorithm. The equations used by the algorithm will have terms representing the noise covariance (not the actual noise, which is unknown), which it normally keeps an online estimate of.</p>
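<p>For example, the "quick experiment" can be as simple as logging the sensor while the robot sits perfectly still and taking the sample variance of each measurement channel as the corresponding diagonal entry of the measurement noise covariance. A minimal sketch (plain C++; correlations between channels are ignored here, and the function name is made up):</p> <pre><code>#include &lt;vector&gt;

// Sample variance of one measurement channel recorded while the robot is
// stationary; use it as a diagonal entry of the measurement noise covariance.
double sampleVariance(const std::vector&lt;double&gt;&amp; samples) {
    double mean = 0.0;
    for (double s : samples) mean += s;
    mean /= samples.size();

    double var = 0.0;
    for (double s : samples) var += (s - mean) * (s - mean);
    return var / (samples.size() - 1);  // unbiased estimate
}
</code></pre>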
690
2012-12-16T05:29:07.907
|slam|kalman-filter|
<p>When using an EKF for SLAM, I often see the motion and measurement models being described as having some noise term. </p> <p>This makes sense to me if you're doing a simulation, where you need to add noise to a simulated measurement to make it stochastic. But what about when using real robot data? Is the noise already in the measurement and thus does not need to be added, or does the noise matrix mean something else?</p> <p>For example, in Probabilistic Robotics (on page 319), there is a measurement model: $z_t^i = h(y,j) + Q_t$, where $Q_t$ is a noise covariance. Does $Q_t$ need to be calculated when working with real data?</p>
Noise in motion and measurement models
<p>Yes, there is a difference between the Create and an off-the-shelf Roomba. The Create doesn't have a vacuum motor or any of the cleaning brushes. And there is an empty payload bay where all of the cleaning stuff used to be. Additionally, the Create has an added microcontroller on it that you can push code onto. </p> <p>But both the Create and the Roomba let you control the robot directly over a serial interface. I think this API is the same between the Create and the 500 series Roomba. I am not sure about the 600 or 700 series Roombas, but I kind of doubt it changed.</p>
693
2012-12-18T01:47:33.573
|ros|roomba|irobot-create|
<p>Is there anything different between an iRobot Roomba and the Create? I want to start building my own TurtleBot and playing with ROS, but with the cost of all the parts I'm going to have to do it piece by piece. It's pretty easy to find cheap used Roombas. </p>
Can I use ROS with a Roomba?
<p>They aren't. The word servo refers solely to a device that uses negative feedback for control.</p> <p>Gearboxes or cheap brushed motors can be noisy. You can get very quiet systems if you are willing to pay for it.</p> <p>Cheap hobby-grade servos can sometimes chatter if they do not settle in a stable state. This is normal and is caused by poor tuning, a lack of a deadband, and backlash between the motor and the encoder (potentiometer).</p> <p>Gearbox noise is caused by the spur gear teeth hitting each other. You can use heavier grease or quieter gear geometries such as helical gears, which mesh smoothly.</p> <p><img src="https://i.stack.imgur.com/kOyp1.jpg" alt="enter image description here"></p> <p>There are also piezoelectric or memory-wire-based servos which are completely silent.</p>
709
2012-12-20T20:49:33.020
|rcservo|
<p>I was working on a project to make a bedside night light out of a stuffed butterfly or bird. I was making a mechanism to make the wings flap with a servo motor and some small gears. The <a href="http://www.vexrobotics.com/276-2162.html">servo motor</a> was very loud as it moved. And this was whether or not the servo was moving large amounts, small amounts, fast or slow. </p> <p>I've worked with small servos before and realized they usually are pretty noisy machines, but I can't really explain why.</p> <p>Why are small servo motors noisy when they move? Is it backlash in the internal gearing?</p>
Why are Servo Motors so noisy?
<p>The recently open-sourced <a href="http://www.coppeliarobotics.com" rel="nofollow">V-REP simulator</a> may suit your needs. I found it more approachable than Gazebo, and it can run on Windows, OSX, and Linux. Their tutorials are fairly straightforward. There are a ton of different ways to interface with it programmatically (including with ROS). It looks like there is even a <a href="http://www.coppeliarobotics.com/helpFiles/en/hexapodTutorial.htm" rel="nofollow">tutorial for making a hexapod</a>, which you could probably use as a starting point if they don't already have a quadruped example available. Unfortunately, I believe the simulator is tied directly to the UI rendering, which I don't think is the case with Gazebo.</p> <p>So, your program would have to use one of the many ways to interface with V-REP, and then feed the performance of a particular gait, determined from some sensor in V-REP, into a machine learning algorithm (perhaps something from OpenCV as @WildCrustacean mentioned). You'd then have to come up with a translation from the gait description used by the simulated robot to something used to command actual motors on your Arduino.</p> <p>On the other hand, you could make your own simulator using an existing physics engine, rendering it with a graphics library. Bullet and OGRE, respectively, could be used for this purpose, if you like C++. There are tons of others for other programming languages.</p> <p>I would also look into how researchers who work on gait generation do their simulations. There might be an existing open source project dedicated to it.</p>
712
2012-12-20T22:33:51.033
|mobile-robot|arduino|microcontroller|machine-learning|simulator|
<p>I'm currently building a robot with four legs (<a href="http://en.wikipedia.org/wiki/Quadrupedalism/" rel="nofollow noreferrer">quadruped</a>) and 3 DOF (degrees of freedom), and it's been suggested <a href="https://robotics.stackexchange.com/questions/327/learning-algorithms-for-walking-quadruped/">here</a> that I use a simulator to do the learning on a computer and then upload the algorithms to the robot. I'm using an <a href="http://arduino.cc" rel="nofollow noreferrer">Arduino Uno</a> for the robot; what software could I use to simulate the learning and then be able to upload it to the Arduino board?</p>
Quadruped Learning Simulator
<p><a href="http://www.pointclouds.org/" rel="noreferrer">PCL</a> has a nice C++ templated <a href="http://docs.pointclouds.org/trunk/group__sample__consensus.html" rel="noreferrer">RANSAC library</a> which can solve your problem.</p> <p>If you feel, PCL is too big of a dependency, then using <a href="http://eigen.tuxfamily.org/dox/group__Geometry__Module.html#gab3f5a82a24490b936f8694cf8fef8e60" rel="noreferrer">umeyama</a> function in <a href="http://eigen.tuxfamily.org/index.php?title=Main_Page" rel="noreferrer">Eigen's</a> geometry module is probably the easiest way towards a working solution for your problem.</p>
716
2012-12-21T10:31:13.480
|c++|ransac|
<p>Often when I need to perform model fitting I find myself looking for a decent C++ library to do this. There is the RANSAC implementation in <a href="http://reference.mrpt.org/svn/classmrpt_1_1math_1_1_model_search.html" rel="nofollow">MRPT</a>, but I was wondering if there are alternatives available.</p> <p>To give an example for the type of problems I would like to solve: For a set $A$ of (approx 500) 3D point pairs $(a, b)$ I would like to find the Isometry transform $T$, which maps the points onto each other so that $|(a - Tb)| &lt; \epsilon$. I would like to get the largest subset of $A$ for a given $\epsilon$. Alternatively I guess I could have the subset size fixed and ask for the lowest $\epsilon$.</p>
C++ Robust Model Fitting Library
<p>I tried using the convex hull method: I found the convex hulls of both regions in 3D, then found the amount of intersection between them. If it is significantly high, then one object is lying inside the other. If it is not, then the object may or may not be on top of the other. I tested that by finding the 2D convex hulls of both object regions and computing their intersection. This told me whether one object lies on top of the other.</p>
718
2012-12-21T14:47:11.653
|kinect|computer-vision|algorithm|
<p>How do we know that an object is contained inside another object or is just lying on top of it? </p> <p>Let's take the example of a cup-plate-spoon. The cup is lying on top of the plate. But the spoon is inside the cup. How do we distinguish between the 2 situations? What are the criteria to decide whether A is contained inside B or just lying above B?</p> <p>I am trying to solve it using a Kinect.</p>
How can computer vision distinguish one object being contained by another vs being on top of it?
<p>This seems to work correctly and is a much simpler solution:</p> <pre><code>function [X, FVAL, EXITFLAG, OUTPUT, GRAD] = join_maps(m1, m2)
    p = [m1(1:3);m2(1:3)];
    x1 = [p;m1(4:end)];
    x2 = [p;m2(4:end)];
    guess_0 = zeros(size(x1,1),1);
    q = @(x)x'*eye(length(x))*x;
    fit = @(x)q(x1-x)+q(x2-x);
    [X,FVAL,EXITFLAG,OUTPUT,GRAD] = fminunc(fit ,guess_0);
end
</code></pre> <p>I've changed the output to better match the description for fminunc. </p> <p>The output with map_1 and map_2 is</p> <pre><code>X =

    3.7054
    1.0577
   -1.9404
    3.7054
    1.0577
   -1.9404
    2.4353
   -1.1101
   81.0000
</code></pre> <p>In this case, there is no need to call H(X), because the first two poses are identical, so the two maps share the same frame of reference. The function H just transforms the state estimate into the frame of reference of the submap.</p>
725
2012-12-24T00:50:05.517
|slam|
<p><strong>There is a lot of background here, scroll to the bottom for the question</strong></p> <p>I am trying out the map joining algorithm described in <a href="http://services.eng.uts.edu.au/~sdhuang/Shoudong_IROS_2010.pdf">How Far is SLAM From a Linear Least Squares Problem</a>; specifically, formula (36). The code I have written seems to always take the values of the second map for landmark positions. My question is, am I understanding the text correctly or am I making some sort of error. I'll try to explain the formulas as I understand them and show how my code implements that. I'm trying to do the simple case of joining just two local maps. </p> <p>From the paper (36) says joining two local maps is finding the a state vector $X_{join,rel}$ that minimizes:</p> <p>$$ \sum_{j=1}^{k}(\hat{X_j^L} - H_{j,rel}(X_{join,rel}))^T(P_j^L)^{-1}(\hat{X_j^L} - H_{j,rel}(X_{join,rel})) $$</p> <p>Expanded for two local maps $\hat{X_1^L}$ and $\hat{X_2^L}$ I have:</p> <p>$$ (\hat{X_1^L} - H_{j,rel}(X_{join,rel}))^T(P_1^L)^{-1}(\hat{X_1^L} - H_{j,rel}(X_{join,rel})) + (\hat{X_2^L} - H_{j,rel}(X_{join,rel}))^T(P_2^L)^{-1}(\hat{X_2^L} - H_{j,rel}(X_{join,rel})) $$</p> <p>As I understand it, a submap can be viewed as an integrated observation for a global map, so $P^L_j$ is noise associated with the submap (as opposed to being the process noise in the EKF I used to make the submap, which may or may not be different). </p> <p>The vector $X_{join,rel}$ is the pose from the first map, the pose from the second map and the union of the landmarks in both maps.</p> <p>The function $H_{j,rel}$ is:</p> <p>$$ \begin{bmatrix} X_{r_{je}}^{r_{(j-1)e}}\\ \phi_{r_{je}}^{r_{(j-1)e}}\\ R(\phi_{r_{(j-1)e}}^{r_{m_{j1}e}}) (X^{r_{m_{j1}e}}_{f_{j1}} - X^{r_{m_{j1}e}}_{r_{(j-1)e}})\\.\\.\\.\\ R(\phi_{r_{(j-1)e}}^{r_{m_{jl}e}}) (X^{r_{m_{jl}e}}_{f_{jl}} - X^{r_{m_{jl}e}}_{r_{(j-1)e}})\\ X_{f_{j(l+1)}}^{r_{j-1e}}\\ .\\.\\.\\ X_{f_{jn}}^{r_{j-1e}} \end{bmatrix} $$</p> <p><strong>I'm not convinced that my assessment below is correct:</strong></p> <p>The first two elements are the robot's pose in the reference frame of the previous map. For example, for map 1, the pose will be in initial frame at $t_0$; for map 2, it will be in the frame of map 1.</p> <p>The next group of elements are those common to map 1 and map 2, which are transformed into map 1's reference frame.</p> <p>The final rows are the features unique to map 2, in the frame of the first map.</p> <p><strong>My matlab implementation is as follows:</strong></p> <pre><code>function [G, fval, output, exitflag] = join_maps(m1, m2) x = [m2(1:3);m2]; [G,fval,exitflag,output] = fminunc(@(x) fitness(x, m1, m2), x, options); end function G = fitness(X, m1, m2) m1_f = m1(6:3:end); m2_f = m2(6:3:end); common = intersect(m1_f, m2_f); P = eye(size(m1, 1)) * .002; r = X(1:2); a = X(3); X_join = (m1 - H(X, common)); Y_join = (m2 - H(X, common)); G = (X_join' * inv(P) * X_join) + (Y_join' * inv(P) * Y_join); end function H_j = H(X, com) a0 = X(3); H_j = zeros(size(X(4:end))); H_j(1:3) = X(4:6); Y = X(1:2); len = length(X(7:end)); for i = 7:3:len id = X(i + 2); if find(com == id) H_j(i:i+1) = R(a0) * (X(i:i+1) - Y); H_j(i+2) = id; else % new lmk H_j(i:i+2) = X(i:i+2); end end end function A = R(a) A = [cos(a) -sin(a); sin(a) cos(a)]; end </code></pre> <p>I am using the <a href="http://www.mathworks.com/help/optim/ug/fminunc.html">optimization toolbox</a> to find the minimum of the fitness function described above. The fitness function itself is pretty straightforward I think. 
The function H returns the vector H described above.</p> <p><strong>The result is:</strong> When I run join_maps on the two vectors</p> <pre><code>map_1 = [3.7054;1.0577;-1.9404;    % robot x, y, angle
         2.5305;-1.0739;81.0000];  % landmark x, y, id
map_2 = [3.7054;1.0577;-1.9404;
         2.3402;-1.1463;81.0000];  % note the slightly different x,y
[G,fv,output,exitflag] = join_maps(map_1, map_2)
</code></pre> <p>The output is:</p> <pre><code>Warning: Gradient must be provided for trust-region algorithm;
  using line-search algorithm instead.
&gt; In fminunc at 341
  In join_maps at 7

Local minimum found.

Optimization completed because the size of the gradient is less than
the default value of the function tolerance.

&lt;stopping criteria details&gt;

Local minimum possible.

fminunc stopped because it cannot decrease the objective function
along the current search direction.

&lt;stopping criteria details&gt;

G =
    3.7054
    1.0577
   -1.9404
    3.7054
    1.0577
   -1.9404
    2.3402
   -1.1463
   81.0000

fv =
   1.3136e+07

output =
       iterations: 1
        funcCount: 520
         stepsize: 1.0491e-16
    firstorderopt: 1.6200e+05
        algorithm: 'medium-scale: Quasi-Newton line search'
          message: [1x362 char]

exitflag =
     5
</code></pre> <p><strong>The question:</strong></p> <p>My program gives map 2 as the minimum of the map joining function. It seems like the minimum should be somewhere between map 1 and map 2. I'm pretty sure the problem is with the matrix H. What am I doing wrong?</p>
Least squares map joining
<p>Assuming the map is a point cloud and that you know the alignment between the ground truth data and the map, then calculating the <a href="http://en.wikipedia.org/wiki/Mean_squared_error">mean squared error</a> (MSE) would give you a relative understanding of the accuracy. A lower MSE would indicate they are very similar, 0 of course meaning identical, and a high MSE would indicate they are very different.</p> <p>If you do not know the alignment between the ground truth and the map, then you could use <a href="http://en.wikipedia.org/wiki/Expectation%E2%80%93maximization_algorithm">expectation maximization</a> over the alignments to find the best fit and then use MSE.</p>
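<p>A minimal sketch of the MSE computation (plain C++; it assumes the map is already aligned to the ground truth and that <code>map[i]</code> corresponds to <code>truth[i]</code>; the type and function names are made up):</p> <pre><code>#include &lt;cstddef&gt;
#include &lt;vector&gt;

struct Point3 { double x, y, z; };

// Mean squared error between an aligned map and ground truth, using the
// squared Euclidean distance between corresponding points.
double mapMse(const std::vector&lt;Point3&gt;&amp; map, const std::vector&lt;Point3&gt;&amp; truth) {
    double sum = 0.0;
    for (std::size_t i = 0; i &lt; map.size(); ++i) {
        const double dx = map[i].x - truth[i].x;
        const double dy = map[i].y - truth[i].y;
        const double dz = map[i].z - truth[i].z;
        sum += dx * dx + dy * dy + dz * dz;
    }
    return sum / map.size();
}
</code></pre>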
734
2012-12-28T16:47:56.377
|slam|mapping|
<p>When you've created a map with a SLAM implementation and you have some groundtruth data, what is the best way to determine the accuracy of that map? </p> <p>My first thought is to use the Euclidean distance between the map and groundtruth. Is there some other measure that would be better? I'm wondering if it's also possible to take into account the covariance of the map estimate in this comparison. </p>
Comparing maps to groundtruth
<p>I know I'm digging up an old thread and this is sort of off topic, but I don't think you can just get hold of ET1200 chips from Beckhoff. I emailed them a little while back and was advised that I need to join the EtherCAT group. To do this I had to demonstrate that I was going to contribute back to the group - i.e., by building and selling devices that used the EtherCAT stuff. At that (and this) point in time, I am still prototyping my device (a brushless motor controller for robot applications - currently using CAN), so I could not offer anything (I cannot give a solid time for completion - I'm still working on my undergrad). I expressed to them my disappointment. They said not to be disappointed!! Pretty funny stuff! I would <em>really</em> like to get into EtherCAT, but the ASICs seem to be untouchable for hobbyists or those without a company. Also, this is my first post, so apologies if I have angered the gods by digging up an old post!</p>
736
2012-12-29T17:06:37.283
|microcontroller|electronics|communication|robotic-arm|
<p>I'm building a hobby 6-DOF robotic arm and am wondering what the best way is to communicate between the processors (3-4 AVRs, 18 inches max separation). I'd like to have the control loop run on the computer, which sends commands to the microprocessors via an Atmega32u4 USB-to-??? bridge.</p> <p>Some ideas I'm considering:</p> <ol> <li>RS485 <ul> <li>Pros: all processors on same wire, differential signal more robust</li> <li>Cons: requires additional chips, need to write (or find?) protocol to prevent processors from transmitting at the same time</li> </ul></li> <li>UART loop (i.e., the TX of one processor is connected to the RX of the next) <ul> <li>Pros: simple firmware, processors have UART built in</li> <li>Cons: the last connection has to travel the length of the robot, each processor has to spend cycles retransmitting messages</li> </ul></li> <li>CANbus (I know very little about this)</li> </ol> <p>My main considerations are hardware and firmware complexity, performance, and price (I can't buy an expensive out-of-the-box system).</p>
Inter-processor communication for robotic arm
<p>The lag in the compass is because of a low-pass filter, to suppress high frequency noise. </p> <ul> <li>There exist more expensive magnetometers which have less noise, and therefore, less lag.</li> <li>It is also possible to use a gyroscope to improve accuracy. In fact, this is what Inertial Measurement Units (IMUs) do. This can be accomplished by using a Kalman filter. Improving accuracy helps to decrease lag, because increased accuracy reduces the dependency on a low pass filter to suppress noise. The Kalman filter fuses the data from the magnetometer, and also the gyroscope (which measures rate of change in heading).</li> </ul> <p>If you stick with your current compass, there are two possible solutions (Warning, this does get increasingly advanced, but option 1 should be accessible to most people without too much work).</p> <ol> <li><p>You can try to cancel out the filter. This can remove lag, but also increases high frequency noise. After doing this, you can try to control the robot based on the new estimate of heading. To do this, you must experiment to work out the low pass filter parameters. For example, in discrete time, you might find:</p> <p>$$\hat\theta(t)=a_0\theta(t)+a_1\theta(t-1)+\cdots+a_k\theta(t-k)$$ where $\hat\theta(t)$ is the estimated heading (compass output) at time $t$, $\theta$ is the actual heading (ground truth) at time $t$.</p> <p>You can find the parameters $a_i$ by doing an experiment where you measure the ground truth using some other external means. Given $n+k+1$ samples, you have this equation: $$\left[\matrix{\hat\theta(k)\\\vdots\\\hat\theta(k+n)}\right]=\left[\matrix{\theta(k)&amp;\theta(k-1)&amp;\cdots&amp;\theta(0)\\\vdots&amp;\vdots&amp;&amp;\vdots\\\theta(k+n)&amp;\theta(k+n-1)&amp;\cdots&amp;\theta(n)}\right]\left[\matrix{a_0\\a_1\\\vdots\\a_k}\right]$$</p> <p>And you can solve by finding: $$\left[\matrix{a_0\\a_1\\\vdots\\a_k}\right]=\left[\matrix{\theta(k)&amp;\theta(k-1)&amp;\cdots&amp;\theta(0)\\\vdots&amp;\vdots&amp;&amp;\vdots\\\theta(k+n)&amp;\theta(k+n-1)&amp;\cdots&amp;\theta(n)}\right]^{+}\left[\matrix{\hat\theta(k)\\\vdots\\\hat\theta(k+n)}\right]$$ where $M^+$ is the pseudo-inverse matrix of $M$. There is no definitive way to work out $k$, so you will probably just guess. For bonus points, this assumes that the noise is white and independent, but you can whiten it first to remove bias, and therefore improve your estimate of the parameters.</p> <p>You can convert this to a transfer function (also known as Z-transform in the discrete time domain):</p> <p>$$\frac{\hat\Theta(z)}{\Theta(z)}=a_0+a_1 z^{-1}+...+a_k z^{-k}$$</p> <p>To cancel this out, we can take the inverse (where $\bar\theta(t)$ is our new estimate of heading):</p> <p>$$\frac{\bar\Theta(z)}{\hat\Theta(z)}=\frac{1}{a_0+a_1 z^{-1}+\cdots+a_k z^{-k}}$$</p> <p>Converting back to the time domain:</p> <p>$$a_0\bar\theta(t)+a_1\bar\theta(t-1)+\cdots+a_k \bar\theta(t-k)=\hat\theta(t)$$</p> <p>$$\bar\theta(t)=\frac{\hat\theta(t)-a_1\bar\theta(t-1)-\cdots-a_k \bar\theta(t-k)}{a_0}$$</p> <p>we can then use $\bar\theta$ to control the robot.</p> <p>This will be very noisy, so you might want to still put $\bar\theta$ through a low-pass filter before use (although perhaps one with less lag).</p></li> <li><p>The above solution is still not the best way. The noisy estimate may not be very useful. If we put this into a state space equation, we can design a Kalman filter, and a full-state feedback controller using LQR (linear quadratic regulator). 
The combination of a Kalman filter and LQR controller is also known as an LQG controller (linear quadratic gaussian), and use loop-transfer recovery to get a good controller.</p> <p>To do this, come up with the (discrete-time) state-space equations:</p> <p>$\vec{x}(t)=A\vec{x}(t-1)+B\vec{u}(t-1)$, $\vec{y}(t)=C\vec{x}(t)$</p> <p>or: $$\vec{x}(t)=\left[\matrix{\theta(t)\\\theta(t-1)\\\cdots\\\theta(t-k)}\right]= \left[\matrix{ A_1&amp;A_2&amp;\cdots&amp;0&amp;0&amp;0\\ 1&amp;0&amp;\cdots&amp;0&amp;0&amp;0\\ 0&amp;1&amp;\cdots&amp;0&amp;0&amp;0\\ \vdots&amp;\vdots&amp;&amp;\vdots&amp;\vdots&amp;\vdots\\ 0&amp;0&amp;\cdots&amp;1&amp;0&amp;0\\ 0&amp;0&amp;\cdots&amp;0&amp;1&amp;0 }\right] \vec{x}(t-1) + \left[\matrix{B_0\\B_1\\0\\\vdots\\0\\0}\right]\vec{u}(t-1)$$</p> <p>$$\vec{y}(t)=\left[\matrix{\hat\theta(t)}\right]=\left[\matrix{a_0\\a_1\\\vdots\\a_k}\right]\vec{x}(t)$$</p> <p>where $\vec{u}(t-1)$ represents the power in the motors to turn the robot, and $A_0$, $A_1$, $B_0$, $B_1$ is how much it affects the heading based on position and speed (you can choose non-zero values for the other elements of the $B$ matrix, and first row of the $A$ matrix too).</p> <p>Then, you can build your observer (Kalman Filter), by choosing noise estimates $Q_o$ and $R_o$ for the process noise and measurement noise. The Kalman Filter can then find the optimal estimate of heading, given those assumptions about the noise. After choosing the noise estimates, the implementation just depends on implementing code for the Kalman Filter (equations can be found on Wikipedia, so I won't go over it here).</p> <p>After that, you can design an LQR controller, this time, choosing $Q_c$ and $R_c$ representing the weighting given to regulating the heading, and trying to limit the use of the actuators. In this case, you might choose $Q_c = \left[\matrix{1&amp;0&amp;0&amp;\cdots&amp;0\\0&amp;0&amp;0&amp;\cdots&amp;0\\\vdots&amp;\vdots&amp;\vdots&amp;&amp;\vdots\\0&amp;0&amp;0&amp;\cdots&amp;0}\right]$ and $R_c = \left[1\right]$. This is done because LQR finds the optimal controller to minimise a cost function: $J = \sum{(\vec{x}^T Q\vec{x} + \vec{u}^T R \vec{u})}$</p> <p>Then, you just put it through the discrete time algebraic Riccati equation:</p> <p>$$P = Q + A^T \left( P - P B \left( R + B^T P B \right)^{-1} B^T P \right) A$$</p> <p>and solve for a positive definite matrix $P$.</p> <p>Thus, your control law can be given by:</p> <p>$$\vec{u}(t)=-K(\vec{x}(t)-\vec{x}_{ref}(t))$$</p> <p>where $K = (R + B^T P B)^{-1}(B^T P A)$</p> <p>Finally, just doing this won't work very well, and is likely to be unstable because of the noise. Indeed, that means option 1 probably won't work unless you first put $\bar\theta$ through a low-pass filter (albeit not necessarily with such a long effective lag time). This is because while LQR is guaranteed stable, as soon as you use a Kalman filter, the guarantee is lost.</p> <p>To fix this, we use the Loop Transfer Recovery technique, where you adjust the Kalman filter, and instead choose a new $Q_o = Q_0 + q^2BVB^T$, where $Q_0$ is your original $Q$ matrix, tuned so that the Kalman filter is optimal. $V$ is any positive definite symmetric matrix, which you can just choose as the identity matrix ($V=I$). Then, just choose a scalar $q$. The resulting controller should become (more) stable as $q \rightarrow \infty$, although the $Q_o$ matrix becomes de-tuned, which means it becomes less optimal.</p> <p>Therefore, you just increase $q$ until it is stable. 
Another way you can try to make it stable is to increase $R_c$ (or decrease $Q_c$) to make the LQR controller slower.</p></li> </ol> <p>The concepts in this post do get quite advanced, but if you need to solve things like the Riccati equation, you can use MATLAB or other software to do this. There may also be libraries already implementing the Kalman filter (again, I believe MATLAB also does this).</p> <p>For an embedded application, you may need to implement the Kalman filter yourself, although there is probably a C implementation.</p>
738
2012-12-31T01:09:06.787
|sensors|compass|
<p>I've got a tread-driven robot, with low precision wheel encoders for tracking distance and an electronic compass for determining heading. The compass has significant (> 1 second) lag when the robot turns quickly, e.g. after reaching a waypoint &mdash; pivoting in place to point to its new heading. </p> <p>What are ways for dealing with the lag? I would think one could take a lot of measurements and model the compass response. However, this seems problematic since it's rate-dependent and I don't know the instantaneous rate.</p> <p>As a simple-but-slow approach, I have the robot turn until it's very roughly pointed in the right direction, then make very small incremental turns with brief measurement pauses until it's pointed the right way. Are there other ways of dealing with this? </p>
What are methods for dealing with compass lag (rate dependent hysteresis)?
<p>Yes, such a method can give you reasonable estimates of the noise. Note that it is susceptible to systematic error. For instance, if you are flying a quadrotor in the presence of a fan, this would show up in your findings, which is generally undesirable.</p> <p>With that said, you could improve your estimates by using the <a href="https://en.wikipedia.org/wiki/Forward%E2%80%93backward_algorithm" rel="nofollow noreferrer">forward-backward algorithm</a>. This algorithm gets its name from the fact that it consists of a forward pass and a backward pass. The forward pass is basically just an application of a Kalman filter, which, as you may already realize, only includes data available up until the time step for which the state is being estimated. The backward pass then improves these estimates by including data available after the time in question.</p> <p>I have only used the forward-backward algorithm with a KF (i.e. not an EKF). As such I don't know the precise implementation details when using an EKF. However, there does appear to be some <a href="https://ieeexplore.ieee.org/document/4518436" rel="nofollow noreferrer">literature</a> on the topic.</p> <p>Edit: As I think about this further it occurs to me that you can use <a href="https://en.wikipedia.org/wiki/Expectation%E2%80%93maximization_algorithm" rel="nofollow noreferrer">expectation maximization</a> (EM) and <a href="https://en.wikipedia.org/wiki/Coordinate_descent" rel="nofollow noreferrer">coordinate descent</a> (CD) to automatically determine the noise parameters. The process would treat the covariance matrices for the process model (call it P) and observation model (call it O) as EM parameters and proceeds as follows:</p> <ol> <li>Start with your best guess for one of the matrices. Say P for the sake of an example. Then use EM to identify O.</li> <li>Use the O found in step 1 with EM to improve the estimate of P.</li> <li>Repeat steps 1 and 2 until the log-likelihood of using O and P reaches some stopping criterion (e.g. stops varying, varies very little, or is minimized).</li> </ol>
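<p>For the approach in the question itself, a minimal sketch of the brute-force tuning loop (plain C++; a scalar random-walk Kalman filter stands in for the EKF here, and the function names are made up, but the same outer loop applies with the real filter and data):</p> <pre><code>#include &lt;cstddef&gt;
#include &lt;limits&gt;
#include &lt;vector&gt;

// Run a scalar random-walk Kalman filter over recorded measurements z with
// candidate noise values q (process) and r (measurement), and score it by
// mean squared error against the recorded ground truth.
double filterMse(const std::vector&lt;double&gt;&amp; z, const std::vector&lt;double&gt;&amp; truth,
                 double q, double r) {
    double x = z[0];   // state estimate
    double p = 1.0;    // estimate variance
    double sse = 0.0;
    for (std::size_t k = 0; k &lt; z.size(); ++k) {
        p += q;                      // predict: state stays, variance grows
        double gain = p / (p + r);   // Kalman gain
        x += gain * (z[k] - x);      // update with the measurement
        p *= (1.0 - gain);
        sse += (x - truth[k]) * (x - truth[k]);
    }
    return sse / z.size();
}

// Coarse grid search over q and r; keeps the pair with the lowest MSE.
void tuneNoise(const std::vector&lt;double&gt;&amp; z, const std::vector&lt;double&gt;&amp; truth,
               double&amp; bestQ, double&amp; bestR) {
    double bestMse = std::numeric_limits&lt;double&gt;::infinity();
    for (double q = 1e-4; q &lt;= 1.0; q *= 10.0) {
        for (double r = 1e-4; r &lt;= 1.0; r *= 10.0) {
            double mse = filterMse(z, truth, q, r);
            if (mse &lt; bestMse) { bestMse = mse; bestQ = q; bestR = r; }
        }
    }
}
</code></pre>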
741
2013-01-01T22:39:17.693
|noise|ekf|
<p>I've seen <a href="https://robotics.stackexchange.com/questions/18/what-are-good-methods-for-tuning-the-process-noise-on-kalman-filters">this question</a>, which asks about determining the process noise for an EKF. I don't see anything there about pre-recorded data sets. </p> <p>My thought on how to determine the noise parameters, assuming ground truth is available, would be to run the data several times with the EKF and minimize the mean square error, while varying the noise parameters.</p> <p>Is this an acceptable way to determine noise for a pre-recorded data set? Are there better (or just other) ways of determining the optimal noise values based just on the data set?</p>
How do you determine EKF process noise for pre-recorded data sets?
<p>Forgive me; I seem to have gotten caught up a few times already in people looking for answers to a question they didn't ask. So, given this question: </p> <p>"How do I determine how many amps will be needed from the power supply?"</p> <p>You can roughly guess it: in your case you say 4 servos drawing an amp each, so 4 amps total.</p> <p>Pick any BEC that will supply greater than 4 amps; there are plenty available from the other answer I already gave.</p> <p>Then apply a logging circuit to validate your estimate: make your own that logs both current and voltage, or buy a simple already-built device that is made just for that purpose, such as an <a href="http://www.eagletreesystems.com/MicroPower/micro.htm" rel="nofollow">Eagle Tree Logger</a>.</p> <p>There is no magic formula to find the correct amperage. Only real-world usage and observation will give you a more precise estimate. Any such formula would not account for your mechanical design, friction, drag, gravity, etc.</p>
748
2013-01-05T02:37:24.810
|design|power|rcservo|bec|
<p>I'm trying to power 7-12 <a href="http://en.wikipedia.org/wiki/Servo_%28radio_control%29" rel="nofollow">servos</a>, and I was under the impression that each one would need about an amp, but in looking around for an appropriate <a href="http://en.wikipedia.org/wiki/Battery_eliminator_circuit" rel="nofollow">BEC</a> to supply them, I notice that most seem to output around 1-3.5 amps.</p> <p>They won't all be running at once, but often, say 4 will be drawing enough juice to move.</p> <p>Obviously, I'm missing some link in my understanding. How do I determine how many amps will be needed from the power supply?</p>
How many amps do I want my Switching BEC to provide?
<p>I think I see where you're confused, and you're correct in noticing that the motor variables are a little different than regular variables.</p> <p>The <code>#pragma config( )</code> is actually doing a <em>lot</em> of heavy lifting for you because it is a "<a href="http://www.cplusplus.com/doc/tutorial/preprocessor/" rel="nofollow noreferrer">preprocessor directive</a>". In other words, there is a hidden step &mdash; preprocessing &mdash; between the code you wrote and the code that the compiler sees. The preprocessor is why you have access to the <code>motor[ ]</code> array (which you didn't need to declare before using), and why assigning a value to a <code>motor</code> causes your real-world motors to move (which does not happen when assigning to a "normal" variable). The code generated by the preprocessor is saving you from writing a lot of setup code yourself, but it is also making some normal-looking variables do some unexpected things.</p> <p>In your case, this is the motor config line that you wrote:</p> <pre><code>#pragma config(Motor, port7, light_blue, tmotorVex393, openLoop)
</code></pre> <p>This tells the preprocessor to generate some code that does the following:</p> <ol> <li>Declare a variable of type <code>tMotor</code> and store it in <code>motor[light_blue]</code></li> <li>Set the port of <code>motor[light_blue]</code> to <code>port7</code></li> <li>Add a function that gets called when you assign a value to <code>motor[light_blue]</code>, which converts that value directly to a power-level signal understood by Vex 393 (and outputs this signal on <code>port7</code>). In other words, use <em>open loop</em> control instead of PID control.</li> <li>(Other things not relevant to this question)</li> </ol> <p>So, <code>light_blue</code> is not an "alternate name" for <code>motor[port7]</code>, and in fact <strong>neither of those names are correct</strong>. The correct way to refer to this motor in code is <code>motor[light_blue]</code>. In other words, <code>light_blue</code> is an index into the <code>motor[ ]</code> array.</p> <p>The <a href="https://robotics.stackexchange.com/a/753/350">code that Spiked3 posted</a> would be the proper way to set up your function:</p> <pre><code>#pragma config(Motor, motorA, light_blue, tmotorNXT, PIDControl, encoder)

void testThing (tMotor motor_name)
{
    motor[motor_name] = 20; // set power to 20
}

task main()
{
    testThing(light_blue);
}
</code></pre>
751
2013-01-05T09:00:41.440
|robotc|
<p>I'm trying to program advanced functions in <a href="http://www.robotc.net/wiki/VEX2_Functions_Motors_and_Servos#motorType" rel="nofollow">RobotC</a> but I'm not too sure I'm doing it right. I want to specify the motor port I'm using, but I assigned names to all the motors. Funny thing though, they don't exactly work the same as regular variables.</p> <p>For instance, motor[port7]'s alternate name is light_blue.</p> <pre><code>#pragma config(Motor, port7, light_blue, tmotorVex393, openLoop) </code></pre> <p>I'm not really sure if these are new variables, or just specifications. Anyway, here is the variable's signature:</p> <pre><code>int motor[tMotor motor] </code></pre> <p>My code plans on doing something similar to this:</p> <pre><code>void testThing (Motor motorName) { motorName = someValue; } testThing(light_blue); </code></pre> <p>But with the int/motor hybrid variable/unidentified I'm not sure how well that would work out. Or at all.</p>
Confused about the variables in RobotC?
<p>I test positive for Asperger's syndrome on highly validated tests, and I can read facial expressions sufficiently well to notice certain patterns in my daily interactions. One of the most noticeable "common" patterns is the disgust/revulsion pattern. The sequencing is pretty simple: I start talking; within 500ms of hearing me talk, the face of the respondent contorts in a most hideous way, and then they immediately start wanting to leave/get away by whatever means possible. </p> <p>Researching the uncanny valley, as part of my studies into all things related to Asperger's syndrome, I couldn't help but notice the similarity between the reactions I get and the reactions described on Wikipedia.</p> <p>Knowing of the evolutionary significance of autism spectrum disorders (mainly, fewer of them work, they earn less money on average and, anecdotally, they tend to have fewer friends/girlfriends than average... and money has its evolutionary underpinnings...), it would seem to me there's probably an evolutionary connection between autism and the uncanny valley concept.</p> <p>Now, according to the tests, I'm a "light sufferer" (I feel like I suffer plenty, though, BELIEVE me), so I can read facial expressions in a way that more severely affected people apparently can't(?). So, I appear to be right between the extremes of blends-in-and-reads-people-well/doesn't-blend-in-and-can't-read-people-well, getting the worst of both worlds.</p>
758
2013-01-07T21:09:56.563
|research|hri|uncanny-valley|
<p>I'm familiar with the idea of the <a href="http://en.wikipedia.org/wiki/Uncanny_valley" rel="nofollow">uncanny valley</a> theory in human-robot interaction, where robots with <em>almost</em> human appearance are perceived as creepy. I also know that there have been research studies done to support this theory using MRI scans. </p> <p>The effect is an important consideration when designing robotic systems that can successfully interact with people. In order to avoid the uncanny valley, designers often create robots that are very far from humanlike. For example, many therapeutic robots (Paro, Keepon) are designed to look like animals or be "cute" and non-threatening.</p> <p>Other therapeutic robots, like <a href="http://kaspar.herts.ac.uk/" rel="nofollow">Kaspar</a>, look very humanlike. Kaspar is an excellent example of the uncanny valley, since when I look at Kaspar it creeps me out. However, people on the autism spectrum may not experience Kaspar the same way that I do. And according to Shahbaz's comment, children with autism have responded well to Kaspar.</p> <p>In the application of therapeutic robots for people on the autism spectrum, some of the basic principles of human-robot interaction (like the uncanny valley) may not be valid. I can find some anecdotal evidence (with Google) that people on the autism spectrum don't experience the uncanny valley, but so far I haven't seen any real studies in that area.</p> <p>Does anyone know of active research in human-robot interaction for people on the autism spectrum? In particular, how does the uncanny valley apply (or doesn't it apply) when people on the autism spectrum interact with a humanlike robot?</p>
In HRI, how is the "uncanny valley" experienced by people on the autism spectrum?
<p>You are correct in that there is no kinematic difference. </p> <p><a href="http://en.wikipedia.org/wiki/Kinematics" rel="nofollow">Kinematics</a> do not consider why things happen - i.e., dynamic stability.</p> <p>There are obvious physical differences, but when the math is worked out for kinematics, it should be the same. This of course implies a certain realistic cap on the level of kinematics. For example, it has been pointed out that a bicycle leans to steer, but that only occurs once a certain velocity is reached. Until then the kinematics are different. And once that velocity is reached, <a href="http://en.wikipedia.org/wiki/Precession" rel="nofollow">gyroscopic precession</a> also becomes involved. One has to choose where reasonableness is satisfied. If you think all the physics can be modeled, I have some contacts at Yamaha motorcycle racing I'd like to introduce you to.</p> <p>I found a <a href="https://www.google.com/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=1&amp;cad=rja&amp;sqi=2&amp;ved=0CDIQFjAA&amp;url=http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.41.7363&amp;rep=rep1&amp;type=pdf&amp;ei=JgHuUP_-DqOSiALdtIDACw&amp;usg=AFQjCNEj20VNHnRRCJVEvP4DIGIsZA3xnw&amp;sig2=Nl3MEaEfABs3E9KYsQMjGQ&amp;bvm=bv.1357700187,d.cGE" rel="nofollow">PDF</a> that describes the math in detail. The kinematic math for Ackermann steering is known as the bicycle model. Unless that's a crude joke, I would say it implies the correct answer to your question.</p>
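<p>For reference, the planar kinematic bicycle model that the linked paper works through fits in a few lines. This is a generic sketch; the wheelbase, speed, and steering angle values are made up purely for illustration:</p>

<pre><code>// Kinematic bicycle model: rear-axle reference point, wheelbase L,
// forward speed v, steering angle delta. An Ackermann vehicle reduces to
// this same model once the two front wheels are lumped into one virtual wheel.
#include &lt;cmath&gt;
#include &lt;cstdio&gt;

int main() {
    double x = 0.0, y = 0.0, theta = 0.0;   // pose
    const double L = 0.3;                   // wheelbase [m] (illustrative)
    const double v = 0.5;                   // forward speed [m/s]
    const double delta = 0.1;               // steering angle [rad]
    const double dt = 0.01;                 // integration step [s]

    for (int i = 0; i &lt; 500; ++i) {         // simulate 5 seconds
        x     += v * std::cos(theta) * dt;
        y     += v * std::sin(theta) * dt;
        theta += (v / L) * std::tan(delta) * dt;
    }
    std::printf("pose after 5 s: x=%.2f y=%.2f theta=%.2f\n", x, y, theta);
    return 0;
}
</code></pre>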
763
2013-01-09T00:41:35.487
|mobile-robot|design|kinematics|theory|
<p>I got the following homework question:</p> <blockquote> <p>What are the general differences between robots with Ackermann steering and standard bicycles or tricycles concerning the kinematics?</p> </blockquote> <p>But, I don't see what differences there should be, because a car-like robot (with 2 fixed rear wheels and 2 <strong>dependent</strong> adjustable front wheels) can be seen as a tricycle-like robot (with a single adjustable front wheel in the middle).</p> <p>Then, if you let the distance between the two rear wheels approach zero, you get the bicycle.</p> <p>So, I can't see any difference between those three mobile robots. Is there something I am missing?</p>
Differences between Ackermann steering and standard bi/tricycles concerning kinematics?
<p>As it says in the description of the file format, it is for graph-based SLAM approaches. These work on minimizing the error of a pose constraint network. You can think of it this way: there are a number of reference frames (your vertices), and then you have knowledge of the transformation between these frames. These transformations are associated with an uncertainty. <a href="http://ais.informatik.uni-freiburg.de/publications/papers/kuemmerle11icra.pdf" rel="nofollow">Pose graph optimization frameworks</a> like e.g. <a href="http://openslam.org/toro.html" rel="nofollow">TORO</a>, <a href="http://openslam.org/hog-man.html" rel="nofollow">HogMan</a>, <a href="http://openslam.org/g2o.html" rel="nofollow">G2O</a> and so on will then give you the maximum-likelihood estimate of your vertex positions, given the constraints.</p> <p>In practical robot terms, this usually means:</p> <ul> <li>Each robot pose $p_k$ at time $k$ has its own reference frame and hence vertex</li> <li>Depending on your approach, you can also add landmarks as vertices. You don't have to, however. </li> <li>Whenever you get new information on the relation between two poses, you add that to the constraint graph. E.g. your odometry will give you a transform between $p_k$ and $p_{k+1}$. </li> <li>If your approach is landmark-based, you add transformations to your landmarks. If you only know the position of your landmark, you set a high uncertainty on the rotation information of your transformation. </li> <li>If your approach does not know about landmarks, e.g. you have large point clouds that you match with ICP, you can add the ICP results to your constraint graph.</li> </ul> <p>The pose constraints are usually stored as sparse matrices of size $n \times n$ where $n$ is the number of vertices (again robot poses and landmarks) in your graph. </p> <p>The file format itself provides initial guesses for the position of the vertices with the <code>VERTEX2</code> (for 2D models) and <code>VERTEX3</code> (for 3D models) entries. You can't mix the two. Constraints are added so that you specify the transform between the reference frames (vertices) given by <code>from_id</code> and <code>to_id</code>. The transform is given by either <code>EDGE2</code> or <code>EDGE3</code> as translation and rotation in Euler angles, as well as the information matrix of the uncertainty. In this case the information matrix is the inverse of the covariance matrix for the transform vector $[x\, y \, z\, \text{roll}\, \text{pitch}\, \text{yaw}]$. </p> <p>Depending on your framework, usually one of the vertices is grounded in a global reference frame. </p> <p>Graph-based pose graph optimizers are considered SLAM back-ends. How you generate the constraints, e.g. from your range data, is a front-end problem. There is a nice overview in these <a href="http://ais.informatik.uni-freiburg.de/teaching/ws11/robotics2/pdfs/rob2-13-frontends.pdf" rel="nofollow">lecture notes</a>.</p>
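<p>As a concrete illustration of the file format (not a SLAM back-end), here is a minimal sketch that reads <code>VERTEX2</code> and <code>EDGE2</code> lines from a 2D .graph file. The field order follows the description in the question; the file name and struct names are just placeholders:</p>

<pre><code>// Minimal reader for 2D .graph files: collects VERTEX2 initial guesses and
// EDGE2 constraints. A real back-end (TORO, g2o, ...) would then optimize
// the vertex poses against these constraints.
#include &lt;fstream&gt;
#include &lt;sstream&gt;
#include &lt;string&gt;
#include &lt;vector&gt;
#include &lt;cstdio&gt;

struct Vertex { int id; double x, y, th; };
struct Edge   { int observed, observing;                 // vertex ids
                double forward, sideward, rotate;        // relative transform
                double i_ff, i_fs, i_ss, i_rr, i_fr, i_sr; }; // information matrix

int main() {
    std::ifstream in("example.graph");                   // placeholder file name
    std::vector&lt;Vertex&gt; vertices;
    std::vector&lt;Edge&gt; edges;
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ss(line);
        std::string tag;
        ss &gt;&gt; tag;
        if (tag == "VERTEX2") {
            Vertex v;
            ss &gt;&gt; v.id &gt;&gt; v.x &gt;&gt; v.y &gt;&gt; v.th;
            vertices.push_back(v);
        } else if (tag == "EDGE2") {
            Edge e;
            ss &gt;&gt; e.observed &gt;&gt; e.observing
               &gt;&gt; e.forward &gt;&gt; e.sideward &gt;&gt; e.rotate
               &gt;&gt; e.i_ff &gt;&gt; e.i_fs &gt;&gt; e.i_ss &gt;&gt; e.i_rr &gt;&gt; e.i_fr &gt;&gt; e.i_sr;
            edges.push_back(e);
        }
    }
    std::printf("read %zu vertices and %zu edge constraints\n",
                vertices.size(), edges.size());
    return 0;
}
</code></pre>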
764
2013-01-09T03:56:58.867
|slam|mapping|
<p>I am most familiar with SLAM maps that are point clouds, usually in the form of a vector like $&lt;x,y,\theta,f_{1x},f_{1y},...,f_{nx},f_{ny}&gt;$. I also understand how to create a map like this using an EKF.</p> <p>Today I came across a <a href="http://www.mrpt.org/Graph-SLAM_maps">.graph file format</a>, which as you would expect consists of vertices and edges in the format:</p> <p><code>VERTEX2 id x y orientation</code></p> <p><code>EDGE2 observed_vertex_id observing_vertex_id forward sideward rotate inf_ff inf_fs inf_ss inf_rr inf_fr inf_sr</code></p> <p>I know that there's a connection between matrices and graphs (an adjacency matrix for example). But it's not clear to me how this graph format of a map is equivalent to a point cloud map that I'm familiar with. </p> <p>What is the relationship? Are the vertices both poses and landmarks? Are they in a global reference frame? How is this created from say velocity information and a range/bearing sensor? Is there a transformation between a graph map and a point cloud? </p>
The relationship between point cloud maps and graph maps
<p>A little Googling will show that several of the existing controller boards are open source. For example, I made a small change to <a href="http://www.hobbyking.com/hobbyking/store/__24723__Hobbyking_KK2_0_Multi_rotor_LCD_Flight_Control_Board.html" rel="noreferrer">Hobby King's KK board</a> last week, compiled it, and it was fine (look for firmware 1.5 in the files tab). I think <a href="http://www.rcgroups.com/forums/showthread.php?t=1568554" rel="noreferrer">openAero</a> is also readily available.</p> <p>I sure would not try to start from scratch, but if you do, it is multiple PID loops, one per axis, mixed with the control signals from the receiver. Roll and pitch vary by changing motor speed, as you guessed. Yaw is changed by speeding up one counter-rotating pair of motors relative to the other. Altitude is controlled by changing all motors at once. The code I have looked at leads me to believe you should probably be comfortable with <a href="http://en.wikipedia.org/wiki/Euler_angles" rel="noreferrer">Euler angles</a> in order to fuse sensor data from gyros and accelerometers (which may or may not be <a href="http://en.wikipedia.org/wiki/I%C2%B2C" rel="noreferrer">I2C</a>, so learn that too if you are not familiar with it).</p>
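<p>To make the "PID loops mixed into motor commands" idea more concrete, here is a rough mixing sketch for a plus-configuration quad. The P-only controller is a stand-in for a real PID, and all the signs depend on your prop rotation directions and how the sensors are mounted, so treat every number and sign here as an assumption to adapt:</p>

<pre><code>// Rough motor-mixing sketch for a "+"-configuration quad: one controller per
// axis, outputs summed into the four motor commands. The P-only "PID"s here
// are placeholders for real PID loops; signs depend on prop rotation
// directions and IMU mounting, so check them against your own hardware.
#include &lt;cstdio&gt;

float pAxis(float setpoint, float measured, float kp) {
    return kp * (setpoint - measured);      // stand-in for a full PID
}

void mixMotors(float throttle, float roll, float pitch, float yaw, float out[4]) {
    out[0] = throttle + pitch + yaw;        // front motor
    out[1] = throttle - pitch + yaw;        // rear motor
    out[2] = throttle + roll  - yaw;        // left motor
    out[3] = throttle - roll  - yaw;        // right motor
    // Clamp to the ESC command range (e.g. 1000-2000 us) before output.
}

int main() {
    // Example: level setpoints, small measured roll/pitch errors from the IMU.
    float roll  = pAxis(0.0f,  2.0f, 0.05f);
    float pitch = pAxis(0.0f, -1.0f, 0.05f);
    float yaw   = pAxis(0.0f,  0.0f, 0.05f);
    float motors[4];
    mixMotors(0.5f, roll, pitch, yaw, motors);
    std::printf("front %.3f  rear %.3f  left %.3f  right %.3f\n",
                motors[0], motors[1], motors[2], motors[3]);
    return 0;
}
</code></pre>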
768
2013-01-10T05:53:51.920
|balance|quadcopter|
<p>I'm using my own code to create a quadcopter robot. The hardware part is done but I need to balance the copter. </p> <p><sup>The original video demonstrating the problem was shared via Dropbox and is no longer available. </sup></p> <p>I have tried playing with the speed of each motor to get it balanced, but it didn't work. I actually have a gyro and accelerometer onboard. But how shall I adjust the motor speed based on these values? What are the rules that I should be aware of?</p> <p>Is there any better solution other than trial and error? Where shall I begin? Any tips? </p>
How to balance a flying quadcopter?
<p>This page gives a nice overview of joint torque requirements for robot arms, and even provides a simple calculator: <a href="http://www.societyofrobots.com/robot_arm_tutorial.shtml" rel="nofollow">http://www.societyofrobots.com/robot_arm_tutorial.shtml</a></p>
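<p>As a rough back-of-the-envelope version of what that calculator does (using the 2.5 lb at 4 in figures from the question, and made-up guesses for the gripper and arm weight), the static holding torque at the joint is just the sum of each weight times its horizontal distance from the joint:</p>

<pre><code>// Static holding torque at the servo joint: sum of (weight x horizontal
// distance to the joint). Gripper and arm weights/positions are guesses.
#include &lt;cstdio&gt;

int main() {
    const double load_lb    = 2.5, load_in    = 4.0;  // payload at the fingertips
    const double gripper_lb = 0.5, gripper_in = 3.5;  // guess: gripper mass near the end
    const double arm_lb     = 0.3, arm_in     = 2.0;  // guess: arm mass acts at its midpoint

    double torque_lb_in = load_lb * load_in
                        + gripper_lb * gripper_in
                        + arm_lb * arm_in;
    double torque_oz_in = torque_lb_in * 16.0;         // hobby servos are rated in oz-in
    double torque_Nm    = torque_lb_in * 0.113;        // 1 lb-in is about 0.113 N-m

    std::printf("holding torque: %.1f lb-in = %.0f oz-in = %.2f N-m\n",
                torque_lb_in, torque_oz_in, torque_Nm);
    std::printf("add a healthy margin (2x or more) for acceleration and stall.\n");
    return 0;
}
</code></pre>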
775
2013-01-10T21:41:04.937
|design|otherservos|robotic-arm|
<p>I would like to design a robotic arm to hold a weight X at length Y (in my case I want to hold X = 2.5 lbs at Y = 4 inches). Starting out simply, I would like to try building an arm with a gripper plus one servo joint. </p> <p>[Servo Joint] ----- Y ------ [Gripper] </p> <p>When designing an arm, would I want to say that the gripper has to have enough torque to hold the desired weight (e.g. 2.5 lbs) at a minimal distance (however long the fingers are), and then design the servo joint to bear the weight of the gripper + the load?</p> <p>I would like to be able to hold the object at full extension.</p>
Getting started with robotic arm design
<p>The industry standard for solving this problem is with <a href="https://www.google.com/search?q=moving+faders&amp;hl=en&amp;tbo=u&amp;tbm=isch&amp;source=univ&amp;sa=X&amp;ei=Pg4KUfqlI-r7iwKmpYDYCQ&amp;ved=0CH4QsAQ&amp;biw=1710&amp;bih=1145" rel="nofollow noreferrer">moving faders</a>.</p> <p><img src="https://i.stack.imgur.com/ktnSQ.jpg" alt="enter image description here"></p> <p>They are readily available, inexpensive components that do exactly what you ask for, except for the "being round" part. </p>
777
2013-01-11T02:30:04.570
|motor|otherservos|
<p>I am trying to build a semi-analog timer. Something like those old egg timers that you rotate the face of. I want a knob that I can turn that can be read by a microcontroller, and I also want the microcontroller to be able to position the knob. I'd like to implement "stops" by letting the microcontroller push the knob towards certain positions. As it runs down, the knob should turn. This is my first project of this kind; I've built small robots in the past, but it's been many years.</p> <p>I've considered <a href="http://forums.trossenrobotics.com/tutorials/how-to-diy-128/get-position-feedback-from-a-standard-hobby-servo-3279/">hacking a servo motor</a> to read its position, but the small hobby servos I've tried are too hard to turn, very noisy, and pick up too much momentum when turned. They don't act like a good knob.</p> <p>I'm now considering a rotary encoder connected to a motor, but after hunting at several sites (SparkFun, ServoCity, DigiKey, Trossen, and some others), I haven't been able to find anything that seemed appropriate. I'm not certain how to find a motor that's going to have the right kind of low torque.</p> <p>This seems like it shouldn't be a really uncommon problem. Is there a fairly normal approach to creating a knob that can be adjusted both by the user and a microcontroller?</p>
Building a controllable "knob"
<p>Your first 2 reasons for having a robot are still wrong today, that is, 2 years after you posted.</p> <ol> <li><p>There are no AI algorithms so far. What currently exists are somewhat smart action-reaction scenarios. I worked on crane automation in a cement plant between 1997 and 2000. Various sensors sent notifications that new material was needed, so a new task was created and scheduled. Absolutely no magic in that. In the end 5 crane drivers lost their jobs because some software with a lot of sensors did the same job.</p></li> <li><p>For my needs there are still no usable sensors. I need a robot that cleans my apartment, especially the bathroom and kitchen. Where is the sensor that decides if a towel is dirty? If the window or floor needs cleaning? Where is the robot to wash my dishes and put them into the cabinet afterwards?</p></li> </ol> <p>People are still waiting for software that passes the Turing test. When that is successfully done, the first step toward AI software will have been made.</p>
780
2013-01-11T15:07:34.650
|mobile-robot|
<p>The fact is that the more I search, the less I find autonomous (real) robots in use. The companion robots are all toys with limited, useless functionality. Whenever there is a natural disaster you don’t see operational search and rescue robots in the news. Even military robots in service are all remotely controlled machines. They are not intelligent machines. Industrial robotic arms are deterministic machines. The only robots with some level of autonomous functionality are cleaning bots, warehouse operations bots and farming robots.</p> <p>On the other hand, today:</p> <ul> <li>the artificial intelligence algorithms are very good at making decisions </li> <li>the sensing technologies are very sophisticated</li> <li>the communication technologies are very fast</li> <li>we can manufacture cheap parts</li> <li>people are extremely gadget savvy</li> </ul> <p>So, why is there no real robot in our day-to-day life? No investment in the domain? No market yet? Not enough knowledge in the domain? A missing technology? Any idea?</p>
What are the reasons for not having autonomous robots in our daily activities?
<p>The branch of science that most closely resembles "nano bots" is <a href="http://en.wikipedia.org/wiki/Capsule_endoscopy" rel="nofollow noreferrer">Capsule Endoscopy</a>. </p> <p><img src="https://i.stack.imgur.com/S6htU.jpg" alt="Camera Pill"></p> <p>The first generation of these were just "edible" capsules with video cameras inside, which passively recorded one's digestive tract. The most recent round of development is focusing on making them smaller and more maneuverable.</p> <p>They are still not small enough to pass through blood vessels (at least, not safely).</p>
788
2013-01-13T05:01:51.300
|mobile-robot|
<p>I was wondering, do we have real nano bots, like the ones in the movies? </p> <p>I think we have bots which can move through the blood vessels, am I right?</p>
Do "nano bots" (that can fit inside the human body) actually exist?
<p>In general, toy robots probably face fewer challenges, except in human safety. I'd imagine that they don't usually advance the knowledge base themselves.</p> <p>HOWEVER, I think it's beyond question that our crop of engineers and scientists will be larger and more competent if they're exposed early and often to technologies and sciences such as, in this example, robotics. The tinkering culture is known to facilitate and produce innovators more than memorizing the most mathematical formulae or protein-folding shortcuts does.</p> <p>The FIRST Robotics Competition also doesn't "move technology forwards", but I dare people to claim that it isn't important for the overall success of our technology sector. There's little technological advancement in playing with Lego, but the creativity and persistence in problem-solving that get reinforced by playing with Lego are absolutely essential.</p> <p>So to answer your question: that statement is mostly true, in that things like what they're selling are sparks for the engine, but that particular company isn't directly pushing technology forward.</p>
790
2013-01-13T18:38:55.450
|research|
<p>Over the last month, I saw many robots that don't have any real purpose, which made me ask myself: "Does this have any value?" I saw a dancing robot at CES, advanced Lego-based robots, and also robots built for very limited purposes. I saw ten-year-old children playing with robots, and competitions for them. Someone told me that this is just for education and spreading logic. </p> <p>In other cases, there were arguments like, "this is for informing people that everything is going forwards". I know that people will buy robotic vacuum cleaners because they think that they'll save some time, but these robotic cleaners are not very reliable and I see it only as marketing. </p> <p>Do these things (children's education, dancing robots, and other instances of <a href="http://en.wikipedia.org/wiki/Pig_in_a_poke">selling a pig in a poke</a>) have any value in terms of robotics, and are they really advancing the field as manufacturers say?</p>
Do "toy" robots move technology forwards?
<p>After some conversations with Microhard's technical support, the following procedure worked for me:</p> <ol> <li>Email Microhard technical support. Explain what you're trying to do, and ask for a version of 920 series firmware that will communicate with your 910 series modem. This firmware is available for the n920 via email, but is not on their website. There is a firmware file for the Spectra 920 on their downloads page; I have not tried it.</li> <li>Connect a serial cable to the data port of the modem. (Not the diagnostics port)</li> <li>You must have hardware with an "S2" switch accessible, or with access to the !bootprog pin on the OEM 920 module. Hold the switch down or hold the !bootprog line low, and power cycle the modem. Do not release until procedure is completed.</li> <li>Use the Microhard Firmware loader (available on the Microhard downloads page) to load the provided firmware.</li> <li>Release the S2 switch (or raise the !bootprog line) and power cycle the modem. Your 920 series modem is now ready to communicate with 910 series modems.</li> </ol>
795
2013-01-14T14:40:31.087
|radio-control|
<p>Microhard Systems currently sells several types of 900MHz modems, which are mostly used in robotics and SCADA applications. One of their product lines, the 910 series (MHX910, n910, Spectra 910), is obsolete and no longer sold. However, some older equipment is built with OEM versions of the 910 series soldered in place.</p> <p>Microhard currently sells a 920 series (MHX920, n920, Spectra 920) that shares many of the specs with the 910 series, but cannot establish a radio link with a 910 series modem due to differences in encryption and hopping pattern. Therefore, if you want to make new equipment communicate with equipment using the 910 series, your options are:</p> <ol> <li>De-solder the old 910 modem and replace it with the footprint-compatible 920 modem, or</li> <li>Reconfigure a 920 series modem to communicate with the 910 series modem.</li> </ol> <p>Option 1 is undesirable, since I don't have access to the firmware on the older equipment in question. Does anyone know how to accomplish option 2?</p>
How can Microhard 920 series modems be made compatible with Microhard 910 series?
<p>Really this is a question to be answered by experiments. Will it work? It seems like it could. Two things that will be important to look at are:</p> <ol> <li><em>Training time</em> - You are still using a neural network and they take time to train. Whether this formulation would reduce the number of rounds required for training remains to be seen. It will of course change with the number of connections in the net, as your agent will need to test each one multiple times.</li> <li><em>Training method</em> - Based on your description it seems you are planning to use a <a href="http://en.wikipedia.org/wiki/Recurrent_neural_network" rel="nofollow">Recurrent Neural Network</a> (RNN) which, if memory serves, makes training more computationally intensive.</li> </ol> <p>There is a bit of literature on this topic already. For example, a quick Google search reveals a paper titled <a href="http://www.sciencedirect.com/science/article/pii/S0925231201005045" rel="nofollow">A neural network model for quadruped gait generation and transitions</a>. It may be worth looking to see what has already been tried. But then sometimes it is just fun to run the experiments.</p>
801
2013-01-16T10:41:52.210
|microcontroller|machine-learning|walk|
<p>I'm building a quadrupedal robot that will learn how to walk. From the <a href="https://robotics.stackexchange.com/questions/568/is-it-possible-to-run-a-neural-network-on-a-microcontroller">responses</a> I got from asking if it's possible to run a NN on a microcontroller, I realised I needed to think of a clever system that wouldn't take 1000 years to be effective and would still be able to demonstrate onboard learning.</p> <p>Firstly I hardcode 5-20 positions for the legs.</p> <p>I set up a (simple) neural network where each node is a different set of positions for the legs, which I will write.</p> <p>The robot moves from one node to another, and the weight of the connection is determined by how far forward the robot moves.</p> <p>Eventually there will be strong connections between the best nodes/positions and the robot will have found a pattern of moves that are most successful in walking.</p> <blockquote> <p>How effective would this be in learning to walk?</p> </blockquote> <p>Note: instead of positions I could write short gaits and the process would work out which sets work best when combined.</p>
Simple Neural Network with hardcoded positions for walk optimisation
<p>You seem to have 2 separate (but related) problems you are trying to solve at once. Let's break down your conundrum into smaller pieces:</p> <ol> <li>How do I <em>communicate</em> commands from a slow system (30Hz) to a fast controller (200Hz), and how do I communicate data being received at 200Hz back to my 30Hz thinktank?</li> <li>How do I <em>control</em> what is happening at 200Hz, when I can only tell the robot what to do at 30Hz?</li> </ol> <p>Item 1 can be solved in hardware, as your original question seems to suggest: you can queue up data at 200Hz and send the packets at 30Hz to your higher-level system. You can do this with TCP/IP, or possibly CAN, depending on how much data you want to send. Different hardware has different max data rates. Adding an architecture level like ROS to act as communication bridge/arbiter, as suggested in other posts, may also help. </p> <p>Item 2 is a <em>control theory</em> problem that can't be solved with hardware alone. The SLAM, position and speed determinations, and task determination that you want will need to be smarter since they send and receive information less often. You will probably want <strong>2 control loops</strong>: 1 at 200Hz and 1 at 30Hz. </p> <p>There are lots of other questions that cover feed-forward, feedback, and PID control loops, but you specifically asked about scalability: the way most giant systems scale is by <strong>layering</strong> control loops and logic so that the minimal necessary information goes across whatever hardware you end up with. For example, your top-level controller might only give goal position points and an average goal speed to the lower-level one, not change the speed 30 times a second.</p>
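<p>As a sketch of what "layering" can look like in code, here is a toy inner loop that runs at 200Hz and simply servos toward the most recent setpoint published by a 30Hz planner. The numbers, the shared variable, and the fake loop-counter timing are all placeholders; on a real system the setpoint would arrive over your chosen bus and the loop rates would come from hardware timers or an RTOS:</p>

<pre><code>// Toy illustration of a layered control scheme: a slow (30 Hz) planner
// publishes a position setpoint, a fast (200 Hz) inner loop tracks the
// latest one. Timing is faked with loop counters for the sake of the example.
#include &lt;cstdio&gt;

int main() {
    const int innerHz = 200, outerHz = 30;
    const double dt = 1.0 / innerHz;

    double setpoint = 0.0;          // written by the "30 Hz" layer
    double position = 0.0;          // state driven by the "200 Hz" layer
    const double kp = 5.0;          // simple proportional inner loop

    double plannerAccum = 0.0;
    for (int tick = 0; tick &lt; innerHz * 2; ++tick) {   // simulate 2 seconds
        // --- slow layer: runs roughly 30 times per second ---
        plannerAccum += (double)outerHz / innerHz;
        if (plannerAccum &gt;= 1.0) {
            plannerAccum -= 1.0;
            setpoint = 1.0;                            // e.g. new goal from SLAM/task logic
        }
        // --- fast layer: runs every tick ---
        double velocity = kp * (setpoint - position);  // inner control law
        position += velocity * dt;                     // stand-in for the real actuator
    }
    std::printf("position after 2 s: %.3f (setpoint %.3f)\n", position, setpoint);
    return 0;
}
</code></pre>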
807
2013-01-17T08:50:58.617
|control|design|communication|
<p>We are currently designing a mobile robot + mounted arm with multiple controlled degrees of freedom and sensors. </p> <p>I am considering an architecture in two parts:</p> <ol> <li><p>A set of realtime controllers (either Raspberry Pis running an RTOS such as Xenomai, or bare-metal microcontrollers) to control the arm motors and encoders. Let us call these machines RTx, with x=1,2,3… depending on the number of microcontrollers. This control loop will run at 200Hz.</p></li> <li><p>A powerful vanilla Linux machine running ROS to compute SLAM, mocap, and execute high-level logic (decide the robot’s task and compute the motors' desired position and speed). This control loop will run at 30Hz.</p></li> </ol> <p>I know my framework needs to be scalable to account for more motors, more sensors, more PCs (eg. for external mocap). </p> <p>My main problem is to decide how to have the different RTx communicate with PC1. I have looked at papers related to robot architecture (e.g. <a href="http://global.kawada.jp/mechatronics/hrp2.html">HRP2</a>); most often they describe the high-level control architecture, but I have yet to find information on how to have the low level communicate with the high level in a scalable way. Did I miss something?</p> <p>In order to connect the fast RT machines handling the motor control with PC1, I have considered TCP/IP, CAN and UART:</p> <ul> <li>TCP/IP: not deterministic but easy to put in place. Is non-determinism a real issue (as it will only be used at a slow speed, 30Hz, anyway)?</li> <li>CAN: slow, very reliable, targeted to cars (I have seen there are some examples using CAN with robots, but it looked exotic)</li> <li>UART: if I had only one RT machine for motor control I would have considered UART, but I guess this port does not scale well with many RTx.</li> </ul> <p>Is TCP/IP really a no-go because of its non-deterministic characteristics? It is so easy to use…</p> <p>At the moment no solution really seems obvious to me. And as I can find no serious robot example using a specific reliable and scalable solution, I do not feel confident making a choice. </p> <p>Does anyone have a clear view on this point or literature to point to? Are there typical or mainstream communication solutions used on robots?</p>
How can I control a fast (200Hz) realtime system with a slow (30Hz) system?
<p>The underlying microcontroller in the VEX Cortex is some sort of STM32F1 or STM32F2. I'm not sure which model exactly (just that it's a ST-made Cortex M3), but they all do have digital I/O capabilities on the pins used for analog inputs. Given the bare microcontroller, you could do so.</p> <p>However, the VEX Cortex is not the bare microcontroller. According to the <a href="http://www.vexforum.com/wiki/index.php/VEX_Cortex_Microcontroller" rel="nofollow">documentation</a>, the controller board has been designed to add a much higher input impedance to the analog inputs which would likely interfere with digital operation:</p> <blockquote> <p>Input Impedance: Analog - The 8 analog inputs each consist of a 470K pull-up to +5V, a series resistance of 10K and a 20K pull-down resistance to the uP.</p> </blockquote> <p>You can see the difference in the circuit connections on page 13 of the <a href="http://content.vexrobotics.com/docs/VEXnet_Cortex_UserGuide_081811.pdf" rel="nofollow">User's Guide (PDF)</a>.</p>
810
2013-01-17T23:17:15.437
|sensors|
<p>I'm running out of digital ports, and have no sensors that fit the definition 'analog'. Would it be possible to run a <a href="http://www.robotc.net/support/vex/WebHelpCortex/index.htm#page=functions_vex/sensors/VEX_2.0_Cortex_-_Sensors.htm">touch sensor, a quadrature encoder, or an ultrasonic sensor on an analog port?</a> </p> <p>I'm thinking not, but I didn't run across anything that said otherwise.</p>
Can ultrasonic and button sensors be run in a VEX analog port?
<p>I would suggest checking the HC-SR04 ultrasonic sensor, which can easily be wired and programmed into your robot vehicle. By aiming the sensor at a fixed object and measuring the echo/ping time from that object, you can judge the distance quite accurately. <a href="https://i.stack.imgur.com/aTwd5.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aTwd5.jpg" alt="enter image description here"></a> Another option would be to lay an encoder track (an alternating black-and-white zebra-striped track), so you can use an optical eye (an LDR or an LED opto-sensor) to count the contrast changes while traversing the track. <a href="https://i.stack.imgur.com/arX6r.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/arX6r.png" alt="enter image description here"></a> (I am not so good with Photoshop, HTH :) )</p>
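<p>A minimal Arduino-style reading routine for the HC-SR04 looks roughly like the sketch below. The pin numbers are arbitrary choices, and the 58 microseconds-per-centimetre figure is the commonly quoted conversion for this sensor:</p>

<pre><code>// HC-SR04 distance read: send a 10 us trigger pulse, then time the echo pulse.
// TRIG/ECHO pin choices are arbitrary examples.
const int TRIG_PIN = 9;
const int ECHO_PIN = 10;

void setup() {
  Serial.begin(9600);
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
}

void loop() {
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);                    // 10 us trigger pulse
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);

  long duration = pulseIn(ECHO_PIN, HIGH, 30000);  // echo time in us (30 ms timeout)
  float distance_cm = duration / 58.0;             // ~58 us per cm, out and back

  Serial.print("distance: ");
  Serial.print(distance_cm);
  Serial.println(" cm");
  delay(100);
}
</code></pre>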
812
2013-01-18T13:37:18.247
|mobile-robot|arduino|sensors|
<p>I have a small motorized vehicle with gears as wheels running up and down a track made of gear racks. How can this robot know when it has run half the track? And what's the best method to keep it from running off its track at the end and then returning to the start?</p> <p>The robot is carrying water, not exactly the same amount each time, so it will not weigh the same. Therefore it might not take the same number of steps on the stepper motor each time.</p> <p>Here I have some ideas that might work, though I am a beginner and don't know what the best solution is.</p> <ul> <li>GPS tracking it (overkill on such a small scale?)</li> <li>Some kind of distance measurer</li> <li>Have a knob it will hit at the middle of the track, telling the program to delay for a given time</li> <li>Track the number of steps the motor has performed (won't be as accurate?)</li> </ul>
How to find out how far a motor has taken a vehicle?
<p>Is the problem that the off-the-shelf motor shields all look like they mechanically interfere with the other shield you want to use? Perhaps you're seeing the "both shields want to be the top shield, so they can't stack" problem? It's pretty simple to control servo motors with simple wire connections.</p> <p>Standard off-the-shelf servo motors have a 3-wire cable for <a href="https://en.wikipedia.org/wiki/servo_control" rel="nofollow">servo control</a>.</p> <p>Many shields -- in addition to their "main" circuit -- throw in a "power connection" and a few servo connections. <a href="http://www.scienceprog.com/testing-arduino-motor-shield-with-servo-motor/" rel="nofollow">(a)</a>, <a href="http://arduino-info.wikispaces.com/SensorShield" rel="nofollow">(b)</a>, <a href="http://www.robotshop.com/dual-stepper-motor-driver-shield-arduino.html" rel="nofollow">(c)</a>, etc. They have the 3 servo wires connected as follows -- the GND (G) and Power (+) wires connected to the appropriate power supply, and the Arduino GND and the power supply GND connected together. Typically the signal wire (S) is connected to one of the six special <a href="http://arduino-info.wikispaces.com/Arduino-PWM-Frequency" rel="nofollow">Arduino PWM pins</a>. (This method can drive a maximum of 6 motors -- or fewer if that other shield needs some of these pins).</p> <p>If you make the same connections between the Arduino and the servo with simple wire connections, it works just as well. <a href="http://www.adafruit.com/blog/2012/12/18/tutorial-arduino-lesson-14-servo-motors/" rel="nofollow">(a)</a>, <a href="http://forum.arduino.cc/index.php?topic=111690.msg838987#msg838987" rel="nofollow">(b)</a>, <a href="http://www.instructables.com/id/Controlling-an-RC-Servo-motor-with-an-Arduino-and-/" rel="nofollow">(c)</a>, etc.</p>
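<p>If the motors in question are hobby servos as described above, the standard Arduino Servo library will drive several of them from plain digital pins with no shield at all. Here is a minimal sketch for four servos; the pin numbers are arbitrary examples, and the servos should be powered from their own supply with the grounds tied together:</p>

<pre><code>// Driving four hobby servos directly from an Arduino with the Servo library.
// Pin choices are arbitrary; servo power should come from a separate supply,
// with that supply's ground connected to the Arduino ground.
#include &lt;Servo.h&gt;

Servo servos[4];
const int servoPins[4] = {3, 5, 6, 9};

void setup() {
  for (int i = 0; i &lt; 4; i++) {
    servos[i].attach(servoPins[i]);
  }
}

void loop() {
  // Sweep all four servos back and forth as a simple test.
  for (int angle = 0; angle &lt;= 180; angle += 5) {
    for (int i = 0; i &lt; 4; i++) servos[i].write(angle);
    delay(50);
  }
  for (int angle = 180; angle &gt;= 0; angle -= 5) {
    for (int i = 0; i &lt; 4; i++) servos[i].write(angle);
    delay(50);
  }
}
</code></pre>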
815
2013-01-18T19:08:37.780
|control|arduino|microcontroller|motor|power|
<p>I'm working on a rather low-budget project, and need some way to control four or more motors using one Arduino. I've looked at motor shields a little, but I have a shield on top of it already. It does have female headers on top, though, so a motor shield may work. Any suggestions?</p>
Arduino Motor control
<p>An <a href="https://www.google.com/search?q=optical%20encoder%20disk&amp;hl=en&amp;tbo=u&amp;tbm=isch&amp;source=univ&amp;sa=X&amp;ei=hvv8UKSBEMraigLkg4GQBA&amp;ved=0CFkQsAQ&amp;biw=1524&amp;bih=1088" rel="nofollow noreferrer">optical encoder</a> is fairly simple to add to an existing wheel.</p> <p><img src="https://i.stack.imgur.com/GDZKa.png" alt="enter image description here"></p> <p>Essentially you mount a photosensor that can detect notches or patches on a disk as it rotates with the wheel. You could use through-hole light detection as well.</p> <p>The circuit design is fairly straightforward: <img src="https://i.stack.imgur.com/TIfWj.png" alt="enter image description here"></p> <p>And your Arduino program counts the number of pulses it receives. Knowing the number of ticks per revolution, and the wheel size, you can get accurate odometry.</p> <p>Another solution that may work for you is add-on encoders for DC motors. Their original intent is to make a regular motor into a servo motor, but they would work great for this task as well. The only real difference is that a servo motor uses the information in a closed loop to position the motor, as opposed to just keeping track of it.</p> <p><img src="https://i.stack.imgur.com/iPUKQ.jpg" alt="enter image description here"></p> <p>And one other solution I thought of - if your wheelchair has spokes like a bicycle, then a magnet like <a href="http://www.ebay.com/itm/Universal-Cat-Eye-Wheel-Spoke-Magnet-Speed-Sensor-CatEye-4-ANY-Bicycle-Computer-/150765504458" rel="nofollow noreferrer">this</a> would be easy to attach, and a magnetic sensor, very similar to the above light-detection circuit, could be located on the chair/frame. That would probably be the least intrusive to implement. If you needed more than once-per-revolution accuracy, just add more magnets to more spokes, as long as they are distributed evenly.</p> <p><img src="https://i.stack.imgur.com/Atj2x.jpg" alt="enter image description here"></p>
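<p>The counting side can be as simple as an interrupt that increments on each pulse; distance then follows from the ticks per revolution and the wheel circumference. The pin, tick count and wheel diameter below are placeholder values:</p>

<pre><code>// Counting encoder pulses with an external interrupt and converting the
// count to distance. Pin, ticks-per-revolution and wheel diameter are
// example values to be replaced with your own.
const int ENCODER_PIN = 2;              // must be an interrupt-capable pin
const float TICKS_PER_REV = 20.0;       // slots/marks per wheel revolution
const float WHEEL_DIAMETER_CM = 15.0;

volatile unsigned long ticks = 0;

void onTick() {
  ticks++;                              // keep the ISR as short as possible
}

void setup() {
  Serial.begin(9600);
  pinMode(ENCODER_PIN, INPUT_PULLUP);
  attachInterrupt(digitalPinToInterrupt(ENCODER_PIN), onTick, RISING);
}

void loop() {
  noInterrupts();
  unsigned long t = ticks;              // copy safely while interrupts are off
  interrupts();

  float revs = t / TICKS_PER_REV;
  float distance_cm = revs * 3.14159 * WHEEL_DIAMETER_CM;
  Serial.print("distance: ");
  Serial.print(distance_cm);
  Serial.println(" cm");
  delay(500);
}
</code></pre>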
819
2013-01-21T06:25:29.593
|arduino|microcontroller|
<p>We have an electric wheel chair, and are looking to add a rotary encoder to each wheel. We don't want to hack the motor itself, so want to add the encoder without harming the motor-to-wheel connection. We will be using an arduino to read the signal.</p> <p>Does anyone have any experience adding rotary encoders to already assembled wheel assemblies? </p>
Adding Rotary Encoders to an Electronic Wheel Chair
<p>Your question is not clear. "Is this byte array converted from hex?" If you know what the values are, you should know how you got them. 128 is not a hex number; it is the same as hex 80, also written as 0x80. It indicates a 'Direct command, reply not required' in the serial-to-NXT stream. 15 = 0x0F = LSWRITE.</p> <p>It does not make sense to decode any more, as it is your program, but you have not told us what you are trying to do and how you are trying to do it.</p> <p>Just to be clear, there is NO connection between the serial port (USB or Bluetooth) and the I2C bus. But LSWRITE/LSREAD would be the appropriate direct commands (I was wrong earlier, had to look at it some more). I would suggest starting on just the NXT using NXC or LeJOS or RobotC and getting an understanding of the I2C protocol before trying to extend that to a remote PC connection with MRDS.</p> <p>Start <a href="http://mindstorms.lego.com/en-us/support/files/default.aspx" rel="nofollow noreferrer">here</a> and download all the PDFs. They describe the commands that can be sent over the PC-to-NXT serial link.</p> <p>I2C is another step. Start with the <a href="http://en.wikipedia.org/wiki/I%C2%B2C" rel="nofollow noreferrer">Wiki</a>. But you will find that LEGO vendors often do not completely implement the protocol, or they implement it incorrectly. You pretty much need to analyse each device and adjust how you treat it.</p> <p><img src="https://i.stack.imgur.com/V5Wm6.jpg" alt="enter image description here"></p> <p>I would suggest something like the <a href="http://www.saleae.com/logic/" rel="nofollow noreferrer">Saleae Logic</a> to analyse what is actually happening.</p>
832
2013-01-25T13:51:19.353
|nxt|i2c|
<p>I have been trying to write code to connect a <a href="http://www.hitechnic.com/cgi-bin/commerce.cgi?preadd=action&amp;key=SPR2010" rel="nofollow">HiTechnic prototype board</a> to my Lego brick. Although I am using MSRDS studio, that isn't the issue; reading and writing to the serial port that the device is connected to works fine. </p> <p>Where I am lacking is that I don't understand the data that is being sent and received. It goes out and comes back in the form of a byte array. For example: [128] [15] [0] [2] [16] [2] [8]</p> <p>Is this byte array converted from hex? What is this response telling me? </p> <p>Obviously I am a total newbie at this; I can program, but I don't really understand electronics, and I am trying to make the connection between what I have read about how an I2C controller works and what is happening when I send and receive data over a serial port. </p>
How do I interpret this data, received by the I2C controller on an NXT 2 brick?
<p>To expand on <a href="https://robotics.stackexchange.com/a/842/37">thisismyrobot's answer</a>, beam width is indeed important. However, there are a number of other factors, such as the reflectivity of the environment (acoustic "brightness"), transmission frequency, etc.</p> <p>Although it is from 1988, <a href="http://www-personal.umich.edu/~ykoren/uploads/Obstacle_avoidance_w_ultrasonic_sensors_IEEE.pdf" rel="nofollow noreferrer">Obstacle Avoidance with Ultrasonic Sensors</a> covers the challenges well - the physics of echo-ranging appear to have changed little over the years :) </p> <p>The <a href="http://www.generationrobots.com/ultrasonic-sonar-sensors-for-robots,us,8,19.cfm" rel="nofollow noreferrer">Ultrasonic sonar sensors</a> article on <a href="http://www.generationrobots.com/boutique/index.cfm" rel="nofollow noreferrer">Generation Robots</a> introduces some of the more interesting issues in ultrasonic ranging: beam shapes and lobes (see the 50kHz figure).</p> <p>From there, you should visit these more detailed articles on beam characteristics and sensor selection:</p> <ul> <li><a href="http://www.sensorsmag.com/sensors/acoustic-ultrasound/choosing-ultrasonic-sensor-proximity-or-distance-measurement-838" rel="nofollow noreferrer">Choosing an Ultrasonic Sensor for Proximity or Distance Measurement Part 2: Optimizing Sensor Selection</a></li> <li><a href="http://www.ndt-ed.org/EducationResources/CommunityCollege/Ultrasonics/EquipmentTrans/beamspread.htm" rel="nofollow noreferrer">Transducer Beam Spread</a></li> <li><a href="http://airmartechnology.com/uploads/AirPDF/Intro_Overview.pdf" rel="nofollow noreferrer">Overview for Applying Ultrasonic Technology</a></li> <li><a href="https://www.google.com/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=1&amp;cad=rja&amp;sqi=2&amp;ved=0CDIQFjAA&amp;url=http://www.olympus-ims.com/.downloads/download/?file=285213495=en_US/&amp;ei=Y6IEUZntGMWl2AX3loCgCQ&amp;usg=AFQjCNE6CyPgNs60UjeUishZz5c8AbL0mA&amp;sig2=APv39o2XxBbL181TI08rqw&amp;bvm=bv.41524429,d.b2I" rel="nofollow noreferrer">Important Characteristics of Sound Fields of Ultrasonic Transducers</a> [pdf]</li> </ul>
839
2013-01-26T17:51:53.783
|mobile-robot|sensors|navigation|acoustic-rangefinder|
<p>For avoiding obstacles during 2D robot navigation what is the best position/angle to place the sonar sensors? How many should there be?</p> <p>I would like to know if there is some theory or examples for the problem of placing. I realize that it depends on the way that the robot moves and its geometry, but I am searching for general answers.</p>
Sonar for obstacle avoidance: how many sensors and where to place them?
<p>As mentioned above, 7 bits can encode 2^7 = 128 distinct values, which means the possibility of 128 slave addresses on the same I2C bus.</p> <p>This loop is run 128 times (addresses 0-127) to determine which slaves are present on the bus by asking each address to respond (via an ACK), and it prints a message whenever a device is detected. </p>
848
2013-01-29T14:06:09.987
|arduino|i2c|
<p>I am learning about I2C on the Arduino. I was looking at a sample program to scan for I2C devices and saw this:</p> <pre><code>// This sketch tests the standard 7-bit addresses
// from 0 to 127. Devices with higher bit address
// might not be seen properly.
</code></pre> <p>With the following code:</p> <pre><code>for(address = 0; address &lt;= 127; address++ )
{
    // The i2c_scanner uses the return value of
    // the Wire.endTransmission to see if
    // a device did acknowledge to the address.
    Wire.beginTransmission(address);
    error = Wire.endTransmission();

    if (error == 0)
    {
        Serial.print("I2C device found at address 0x");
        if (address&lt;16)
            Serial.print("0");
        Serial.print(address,HEX);
        Serial.println(" !");
    }
}
</code></pre> <p>As far as I understand it, a bit is just 1. So, how do 7 bits loop from 0 - 127? </p>
Why must I loop 127 times for a "7-bit" address in this example?
<p>I put together encoders for this exact chassis. Rather than reflective ones, I used slotted ones. I thought I could work off the hole in the white gear, but it turns out the plastic is pretty transparent to IR, so I ended up using some black electrical tape (high tech, I know) to make opaque regions on the gear. </p> <p>After building two encoders, I discovered there's a similar product online: <a href="http://arduino-direct.com/sunshop/index.php?l=product_detail&amp;p=202" rel="nofollow noreferrer">LightBeam OptoInterrupter Module</a></p> <p>For my first robot, I’m just using a rough encoder, with 4 counts per wheel revolution. Here's my description from <a href="http://www.mcgurrin.com/robots/?tag=wheel-encoder-electronics" rel="nofollow noreferrer">my blog</a>: </p> <p>Looking in the chassis, there’s not a lot of room. As a result, I decided I’d use a small transmissive sensor, rather than a reflective sensor. Both have an IR emitter and an IR photo detector. For reflective units, they both face the same direction, and the detector measures IR reflected back to the sensor. For a transmissive or interrupt sensor, the two units are separated by a gap, and the detector picks up IR passing through the gap. So far, so good.</p> <p>I ended up using Vishay transmissive optical sensors (model TCST1202) I purchased from DigiKey. I wired them up based on the circuit posted by Aniss1001 in the “<a href="http://www.arduino.cc/cgi-bin/yabb2/YaBB.pl?num=1257038219" rel="nofollow noreferrer">Homemade wheel encoder</a>” thread on the Arduino forum.</p> <p><img src="https://i.stack.imgur.com/ZERQr.png" alt="Encoder circuit diagram - for a different encoder, but it worked fine"></p> <p>I built the circuit on a prototyping breadboard for testing and got a surprise. The circuit worked fine, but the gear is transparent to IR! It turns out that nylon and most plastics used for inexpensive gears are pretty transparent to IR. I first tried creating an opaque section with a black marker, but while that worked on paper, it didn’t adhere well enough to the gear. I ended up using a piece of black tape. Once that worked, I cut some small circuit boards down to size and built the encoders, practicing my soldering skills.</p> <p>I just used hot glue to mount the encoders. The encoder boards stick up above the chassis base, so I’m using standoffs to raise the plastic plate with the Romeo controller and other devices. Here’s a picture of one of the encoders before mounting, and then mounted on the chassis:</p> <p><img src="https://i.stack.imgur.com/WUt0F.jpg" alt="Homemade wheel encoder"></p> <p><img src="https://i.stack.imgur.com/8Jo7F.jpg" alt="Mounted encoder"></p> <p>Hope this helps!</p>
854
2013-01-30T03:44:23.420
|arduino|two-wheeled|
<p>I have the following chassis along with an Arduino and a motor shield. <img src="https://i.stack.imgur.com/DgPKZ.jpg" alt="Robot Chassis"></p> <p>I'm in the process of developing a tracking mechanism for use with differential drive.</p> <p>Normally, a photo reflector can be placed adjacent to the wheel so that it will reflect when each <em>spoke</em> passes through, therefore allowing code to be written that will accurately measure each wheel's position.</p> <p>The problem I have is that you cannot see the wheels from inside the chassis, only small holes for the driveshaft. Placing sensors on the outside would look ridiculous and a wall crash would cause havoc.</p> <p>Would I be able to use a photo reflector on the gears (as shown) if I accurately placed it to count each spoke on the gear itself? I'm a bit hesitant though because even a small bump could misalign the sensor - again causing havoc.</p> <p>So does anyone have an idea on how to track the wheel movements?</p>
Sensors for differential drive
<p>While this can be done, it will be problematic. The issue comes from the fact that particle filters are a method for dealing with multi-modal probability distributions (PD), while Kalman filters assume your PD can be well represented with a Gaussian distribution.</p> <p>I can conceive of a few methods for doing so if you really must. The most naive would be to calculate the mean and covariance of the $[x, y, \theta]$ vectors. This neglects the weights and as such will be impacted by outliers (in this case, particles of low weight).</p> <p>It may be better to calculate the mean and covariance of the $[x, y, \theta]$ vectors using the normalized weight values as the probability of each vector. This may be what you were driving at in your question.</p> <p>Another naive approach would be to select the largest mode and calculate its mean and covariance. Doing so requires that you select which particles belong to the mode and which don't, which you could do with something like <a href="http://en.wikipedia.org/wiki/K-means_clustering" rel="nofollow">K-means</a>.</p> <p>Yet another alternative would be to use K-means to find the centers for K clusters and then calculate the mean and covariance of these centers. I suspect this will work out pretty close to the same thing as the second option I offered, but not having done the math I cannot say so for certain.</p>
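<p>For the weighted option, the computation is just a weighted mean and weighted covariance over the particle set. Note that the heading component needs circular averaging; the sketch below handles it crudely via atan2 of weight-averaged sines and cosines, and uses made-up particle values, so treat it as a starting point rather than a drop-in solution:</p>

<pre><code>// Weighted mean and covariance of a particle set [x, y, theta, weight],
// usable as an initial state and covariance for a Kalman filter.
// Heading is averaged circularly; its covariance uses wrapped residuals,
// which is a common approximation.
#include &lt;cmath&gt;
#include &lt;vector&gt;
#include &lt;cstdio&gt;

const double PI = 3.14159265358979;

struct Particle { double x, y, theta, w; };

double wrap(double a) {                       // wrap angle to [-pi, pi]
    while (a &gt;  PI) a -= 2.0 * PI;
    while (a &lt; -PI) a += 2.0 * PI;
    return a;
}

int main() {
    std::vector&lt;Particle&gt; p = {                // toy cluster
        {1.00, 2.00, 0.10, 0.4}, {1.10, 1.95, 0.15, 0.3},
        {0.95, 2.05, 0.05, 0.2}, {1.05, 2.10, 0.12, 0.1}};

    double wsum = 0, mx = 0, my = 0, s = 0, c = 0;
    for (const auto &amp;q : p) {
        wsum += q.w;
        mx += q.w * q.x;  my += q.w * q.y;
        s  += q.w * std::sin(q.theta);
        c  += q.w * std::cos(q.theta);
    }
    mx /= wsum;  my /= wsum;
    double mth = std::atan2(s, c);             // circular weighted mean

    double cov[3][3] = {{0.0}};
    for (const auto &amp;q : p) {
        double d[3] = {q.x - mx, q.y - my, wrap(q.theta - mth)};
        for (int i = 0; i &lt; 3; ++i)
            for (int j = 0; j &lt; 3; ++j)
                cov[i][j] += q.w * d[i] * d[j];
    }
    for (int i = 0; i &lt; 3; ++i)
        for (int j = 0; j &lt; 3; ++j)
            cov[i][j] /= wsum;

    std::printf("mean: [%.3f, %.3f, %.3f]\n", mx, my, mth);
    std::printf("cov diag: [%.4f, %.4f, %.4f]\n", cov[0][0], cov[1][1], cov[2][2]);
    return 0;
}
</code></pre>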
857
2013-01-30T16:03:51.357
|localization|kalman-filter|particle-filter|
<p>Is there a way of initializing a Kalman filter using a population of particles that belong to the same "cluster"? How can you determine a good estimate for the mean value (compute a weighted average?) and the covariance matrix? Each particle is represented as $[x, y, θ, weight]$.</p>
Can you seed a Kalman filter with a particle filter?
<p>As a child I had a copy of a piece of software called "<a href="http://www.mhj-tools.com/sps-visu/" rel="nofollow" title="SPS VISU">SPS Visu</a>" where you could do such simulations. (I thought it was a game and sure had a lot of fun with it :D) Maybe this is what you are looking for?</p>
863
2013-01-31T10:58:41.593
|simulator|
<p>I need to simulate a stream of vehicles, such as on an assembly line. Automatons are performing operations on the vehicles when they come within reach. The automatons do not keep track of the individual vehicles; they simply collect data.</p> <p>We need to choose a method of matching the data gathered by each automaton with the vehicle it belongs to. For example, we could guess the identity of a vehicle using its timing when arriving in the operation range (sensors) of an automaton.</p> <p>I have to check the possible problems we will face, so I would like a little (hopefully simple) video/simulation tool that I could play with.</p> <ul> <li>vehicles could be symbolized as moving black squares</li> <li>automatons/sensors could be static points or circles.</li> <li>it should be possible to change the time interval between two vehicles, and their speed, and add some random delays.</li> </ul> <p>What kind of software should I search for, or where should I look?</p> <p>Should I consider developing it from scratch? </p>
How do I simulate an assembly line?
<p>To detect if your steppers have stalled, check this question: <a href="https://robotics.stackexchange.com/questions/1270/how-to-detect-when-a-stepper-motor-has-stalled/1271?noredirect=1#comment2845_1271">how to detect when a stepper motor has stalled</a>. </p> <p>If you want to use absolute positioning, you can create a linear potentiometer using a length of nichrome wire pulled across the axis, with the "wiper" touching where the bed/mill is moving. You will need some microcontroller to report the actual position of the system; however, a 10-bit ADC should be enough for a small project.</p>
865
2013-02-01T00:06:46.373
|arduino|stepper-motor|current|cnc|
<p>I am creating a CNC machine on a budget, using old motors out of printers/scanners/etc. </p> <p>I am limited to about 650mA for the whole system, so my fear is that when the cutting bit touches the material, the stepper might be moving too quickly and won't have enough torque. This would mean it will become one rotation behind, which could really mess up a CNC project.</p> <p>Detecting when the motor "misses" a step would allow me to readjust the motor speed until it reaches a balance between working quickly and having adequate torque. How can I achieve this?</p>
How to tell a stepper motor's position, or detect slippage
<p>This looks very much like a simplification of a traditional <a href="http://en.wikipedia.org/wiki/SCARA" rel="nofollow">SCARA</a> robot design.</p> <p>It is a nice simple design in which the weight-bearing axes are all nicely horizontal, which means these axes behave similarly irrespective of the load weight. The only downside of this design is that some positions can only be accessed from a left-handed configuration, some can only be accessed from a right-handed configuration and some can be accessed from either (which can cause problems with the higher-level control).</p> <p>The normal nomenclature for these joints is that the upper arm is between the shoulder axis and the elbow, so that is what I will call these joints below.</p> <p>If you want the elbow pulley to turn the lower arm then you need to use either a fixed shaft or drive shaft:</p> <ul> <li>With a drive shaft you bolt the pulley and the lower arm to the shaft and set the shaft in bearings on the end of the upper arms. Torque is transmitted from the belt to the lower arm through the drive shaft. <ul> <li>This route is easier, since both pulley and arm are probably designed to do this.</li> </ul></li> <li>With a fixed shaft you bolt the shaft rigidly between the upper arms, mount both the lower arm and the pulley on bearings, then fix the pulley to the lower arm directly. <ul> <li>This design could allow the upper arm to be much more rigid, which might be a concern if you are worried about the strength of your upper arm.</li> </ul></li> </ul> <p>The shoulder joint has similar options, but is complicated by the fact that you not only need to transmit torque to the lower arm, but you also have to turn the upper arm too. Now you have several options:</p> <ul> <li>Use the shoulder shaft as a drive shaft, fix it to both halves of the upper arm, and use the shaft rotation to drive the upper arm, then use a fixed shaft mechanism to drive the lower arm pulley (this extra joint will be freely rotating on the upper arm drive shaft). <ul> <li>This is probably the easiest option.</li> </ul></li> <li>Use the shoulder shaft as a drive shaft, but fix it to the lower arm pulley, and use the shaft rotation to drive the lower arm, then use a fixed shaft mechanism to mount and drive the upper arm. <ul> <li>The problem with this option is that unless you add a fixed shaft shoulder mechanism for both halves of the upper arm, you may end up twisting the arm when you apply torque to one half but not the other, a problem which is even more likely if you opted for a drive shaft elbow mechanism.</li> </ul></li> <li>Fix the shoulder shaft to the base, and use a fixed shaft mechanism to drive both upper and lower arms. <ul> <li>Again, this might give you a slightly stronger robot overall, but has the same problem with regard to driving both halves of the upper arm.</li> </ul></li> </ul> <p>This extra complexity is why a single, heavier-duty upper arm might be preferable to increasing the strength of the upper arm by doubling up two lighter arms which are allowed to move (slightly) independently of each other.</p> <p>Another alternative to remove the need to transmit the elbow torque through the shoulder axis is to mount the lower arm motor <em>on the upper arm</em>. Thus you can treat upper and lower arms as mechanically independent systems, and your design decision for one won't have ramifications for the other.</p>
869
2013-02-03T21:22:36.800
|design|robotic-arm|joint|
<p>I am very new to robotic design and I need to determine what parts I will need to assemble an arm joint. The joint will contain one timing belt pulley which a remote motor will be turning, a forearm that the pulley will be rotating and an upper-arm piece that will actually be two parallel arms that will grip the pulley on top and bottom in order to brace the pulley from off axis torque from the timing belt.</p> <p>I am kind of at a lost as to how to mount all of these together. I would like to mount the forearm directly to the pulley and then the two parallel arms (comprising the upper-arm) sandwich the top of the pulley and the lower part of the forearm. This would be attached using a turn table. Any ideas on how a shaft would mount to these? Or how to attach the pulley to the arms themselves?</p> <p>Any kind of direction or links would be greatly appreciated, I don't even know the names of the parts I would be looking for.</p> <p>In this ASCII art model the dashed lines (-) are the arms. The arm on the left is the forearm and the two arms on the right are the two parallel parts of the upper arm. The stars are the belt and the bars (||) are the pulleys at the elbow |E| and shoulder |S|. </p> <pre><code> ----------------- |E|***********|S| ----------------- ----------------- </code></pre> <p>I am thinking of mounting the pulley to the left arm directly (a bushing?) and then maybe using turntables to mount the pulley to the top arm and another turn table to mount the left arm to the bottom arm.</p> <p>Here is a picture of the design to help you visualize:</p> <p><a href="https://i.stack.imgur.com/jOrvw.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/qyAQ9.png" alt="Double Joint Arm assembly"></a></p>
Building Robotic arm joint
<blockquote> <p>Am I correct in saying that this would not require a gyro, just a 3 (2?) axis accelerometer, to detect pitch and roll, then adjust the ailerons and elevator to compensate?</p> </blockquote> <p>No. The opposite is true. The accelerometer will be almost useless to detect rotations on a platform that's experiencing unknown accelerations. Your plane will be subject to two force vectors: gravity and lift+drag. Lift+drag will vary hugely as a function of the plane's pitch.</p> <p>But here's a more general way you can know this is impossible, and you can use this method in many other cases than just IMUs. A sensor, or set of sensors gives you N values. You can't interpret this into a space with more than N dimensions.</p> <p>A trivial example: You want a sensor to measure someone's position within a room. Would a single ultrasonic range finder be sufficient? No. A position in a room requires two values, (X, Y) coordinates. But an ultrasonic sensor gives you only one value, a length. There is no way to set up this sensor to solve your problem. But if you had two sensors, then it might be possible.</p> <p>Now let's look at the plane. A non-accelerating plane is subject to one force only, gravity. The direction of gravity relative to the plane is a 3D vector, but luckily (if you're on Earth) you know its magnitude. That's 1 value, leaving 2 unknowns, so you could theoretically get away with a 2-axis accelerometer to make up those 2 unknowns and calculate the vector of gravity.</p> <hr> <p>What about a plane in flight. Gravity and lift+drag are both 3D vectors, giving you 6 numbers. OK you know the magnitude of gravity, so 5 numbers. You'll need some kind of sensor that gives you at least 5 values. Therefore a 3-axis accelerometer cannot be enough.</p> <p>While neither a 3-axis gyro nor a 3-axis accelerometer will be enough on their own, the gyro would be much more useful. This is because it's directly measuring rotations, which is the very thing you're trying to control. </p> <p>Likewise, the accelerometer will be more useful for detecting and correcting deviations from travel in a straight line.</p>
872
2013-02-04T07:16:00.323
|uav|accelerometer|imu|gyroscope|
<p>I'm working on a basic airplane flight stabilization system, as the precursor to a full autopilot system. I'm using a salvaged Wii Motion Plus and Nunchuk to create a 6DOF IMU. The first goal is to keep the wings level, then mix in the users commands. Am I correct in saying that this would not require a gyro, just a 3 (2?) axis accelerometer, to detect pitch and roll, then adjust the ailerons and elevator to compensate?</p> <p>Secondly, if we extend my design goal from "keeping the wings level" to "flying in a straight line" (obviously two different things, given wind and turbulence), does the gyro become necessary, insofar as this can be accomplished without GPS guidance?</p> <p>I've tried integrating over the gyro values to get roll, pitch &amp; yaw from that, however (as evidenced by this question), I'm at a level in my knowledge on the topic where I'd prefer simpler mathematics in my code. Thanks for any help!</p>
Do I really need a gyro for an airplane flight stabilization system?
<p>A DC motor has 2 directions, <code>forwards</code> and <code>backwards</code>.</p> <p>The DC motor <strong>does not</strong> control the direction of an RC car, instead it's the <strong>steering mechanism</strong>.</p> <p>Rather than write everything here - I have found <a href="http://rc4beginners.blogspot.co.nz/p/lesson-4-steering-system_06.html" rel="nofollow">an article</a> for you that I believe will explain the principles of car steering. Hopefully it's a good starting point for you; feel free to Google for further information that you require. </p> <p>The term <a href="https://www.google.com/#hl=en&amp;sugexp=les;&amp;gs_rn=2&amp;gs_ri=hp&amp;tok=BBu9JIbGhlO8CVJL8oOmfQ&amp;cp=22&amp;gs_id=6&amp;xhr=t&amp;q=how+car+steering+works&amp;es_nrs=true&amp;pf=p&amp;tbo=d&amp;output=search&amp;sclient=psy-ab&amp;oq=how+car+steering+works&amp;gs_l=&amp;pbx=1&amp;bav=on.2,or.r_gc.r_pw.r_cp.r_qf.&amp;bvm=bv.41867550,d.dGI&amp;fp=6daf5ba8052c93de&amp;biw=1527&amp;bih=840" rel="nofollow"><code>how car steering works</code></a> returns <strong>a lot</strong> of results. Good luck!</p>
876
2013-02-02T23:41:45.237
|control|wheel|
<p>Before I start asking you for help let you know that I am newbie in electronic field.</p> <p>All I want to know is the principle of wheel rotation (left-right) from remote car gadget. I am not talking about changing the spin rotation of DC motor (up,down buttons from remote), I am asking about left and right movement of wheel.</p> <p>I know that spin change depends on polarity of DC motor, so changing polarity changes spin, but what is the principle of changing the left and right positions of front wheels.</p>
Remote car controlling
<p>What I've found 3d printing with SketchUp:</p> <ul> <li><p>Use good geometry. Free Drawing should pretty much <strong>never</strong> be used. Make sure all of your arcs &amp; angles are clean.</p></li> <li><p>You can <strong>not</strong> have unclosed areas. Some programs are very good about this. Sketchup is <strong>not</strong>. Lines that go no where should be removed, and all lines should be connected to the model.</p></li> <li><p>Pick a good ruby script for exporting as an STL. There are many free scripts available, and any of them should work if you do what is mentioned above.</p></li> </ul> <p>Good Luck!</p>
878
2013-02-05T06:36:50.580
|3d-printing|
<p>I have 3D printers at my school, but unfortunately they are not super high quality. I want to try 3D printing a model I made on google sketchup, but I would like for it to be fairly accurate.</p> <p>What measures can I take to prevent error in the model? I understand that I need to export the file as an STL; is there anything I can do to the model before hand to ensure accuracy? </p> <p>What can I do to calibrate a 3D printer for best results? </p>
Accurate 3D Printing W/Sketchup
<p>Some years ago I used the supervisor to track the position of a simulated Khepera robot in Webots 4. The main components of the C code were the following:</p> <p>Declarations:</p> <pre><code>#define STEP 64
NodeRef robot_node;
float robot_data[4]={0,0,0,0};
</code></pre> <p>Getting the node named Khepera:</p> <pre><code> robot_node=(NodeRef)supervisor_node_get_from_def("KHEPERA"); </code></pre> <p>Getting data from the node every STEP milliseconds: </p> <pre><code>supervisor_field_get(robot_node,SUPERVISOR_FIELD_TRANSLATION|SUPERVISOR_FIELD_ROTATION_ANGLE,&amp;robot_data,STEP); </code></pre> <p>All of the above was called before the main cycle of the supervisor. The main cycle contained a <code>robot_step(STEP);</code> call to let the system fill the elements of the robot data structure (x, y, z, heading) regularly.</p> <p>Again, this was Webots 4; there may be better ways in Webots 7 to accomplish the task, but it could be a starting point.</p>
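<p>If you are on a newer Webots release that ships the Python supervisor API, a minimal sketch of the same idea, which also turns the per-step displacement into a speed estimate, could look like the following. The DEF name <code>KHEPERA</code> and the API calls are assumptions based on the standard <code>Supervisor</code> interface, so adapt them to your world file and Webots version.</p> <pre><code># Sketch only: assumes a Webots version with the Python Supervisor API and a
# robot node exposed through DEF KHEPERA in the world file.
from math import sqrt
from controller import Supervisor

TIME_STEP = 64

supervisor = Supervisor()
robot_node = supervisor.getFromDef("KHEPERA")   # same idea as supervisor_node_get_from_def
translation = robot_node.getField("translation")

prev = translation.getSFVec3f()                 # [x, y, z] of the robot
while supervisor.step(TIME_STEP) != -1:
    cur = translation.getSFVec3f()
    # distance moved since the last supervisor step, and the resulting speed
    dist = sqrt(sum((c - p) ** 2 for c, p in zip(cur, prev)))
    speed = dist / (TIME_STEP / 1000.0)
    print("moved %.4f m, speed %.4f m/s" % (dist, speed))
    prev = cur
</code></pre>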
883
2013-02-06T10:36:06.213
|mobile-robot|simulator|reinforcement-learning|webots|
<p>I have been experimenting with different fitness functions for my <a href="http://www.cyberbotics.com/overview" rel="noreferrer">Webots robot simulation</a> (in short: I'm using genetic algorithm to evolve interesting behaviour).</p> <p>The idea I have now is to reward/punish Aibo based on its speed of movement. The movement is performed by setting new joint position, and currently it results in jerky random movements. I have been looking at the nodes available in Webots, but apart from GPS node (which is not available in Aibo) I couldn't find anything relevant.</p> <p>What I want to achieve is to measure the distance from previous location to current location after each movement. How can I do this?</p>
Measuring speed of movement in Webots
<p>Per the schematics of the ultrasonic sensor, P1.3/SCL is DIGIAI0 (J1.5) and P3.0/SDA is DIGIAI1 (J1.6). The developer kit manual states that it is I2C per the original Philips standard, detailing all the memory addresses of the ESC015 chip along with all the recommended interfacing circuitry. The only note I see is that they state the I2C SCL rate is 9600, which is kind of slow, but all very doable for an Arduino. Check out <a href="http://www.openelectrons.com/index.php?module=pagemaster&amp;PAGE_user_op=view_page&amp;PAGE_id=7" rel="nofollow">http://www.openelectrons.com/index.php?module=pagemaster&amp;PAGE_user_op=view_page&amp;PAGE_id=7</a> as they have a shield to connect directly and libraries for the Arduino.</p>
885
2013-02-07T03:57:22.483
|arduino|
<p>Does anyone know if this is possible? It's just an I2C device, right? I mean, you would have to cut the cable and make it so you could plug into the pins on the Arduino, but you should just be able to use the Wire library and say something like: </p> <pre><code>Wire.beginTransmission(0x10); </code></pre> <p>The NXT hardware developers kit tells you which pins are which: <a href="http://mindstorms.lego.com/en-us/support/files/default.aspx" rel="nofollow">http://mindstorms.lego.com/en-us/support/files/default.aspx</a> </p> <p>Thanks</p> <p>EDIT: Turns out this is very possible. The main problem was that HiTechnic says the address is 0x10 and it is actually 0x08, but here is a short sketch that reads and prints some info about the device, i.e. the manufacturer and version. </p> <pre><code>#include &lt;Wire.h&gt;

#define ADDRESS 0x08

void setup()
{
  Wire.begin();
  Serial.begin(9600);
}

void loop()
{
  readCharData(0, 7);
  Serial.println();
  readCharData(8, 8);
  Serial.println();
  readCharData(16, 8);
  Serial.println();
  Serial.println("-----------------------------");
  delay(1000);
}

void readCharData(int startAddress, int bytesToRead)
{
  Wire.beginTransmission(ADDRESS);
  Wire.write(startAddress);
  Wire.endTransmission();

  Wire.requestFrom(ADDRESS, bytesToRead);
  while(Wire.available())
  {
    char c = Wire.read();
    Serial.print(c);
  }
}
</code></pre>
Would is be possible to connect a HiTechnic prototype board to an Arduino?
<p>It is not possible to achieve <em>arbitrary</em> precision in camera calibration, precisely because of camera resolution and camera imperfections (e.g. lens distortion, which is only approximated in the calibration of intrinsic parameters). </p> <p>That, however, isn't usually the worst part of the deal. You can go a long way if you are able to fabricate your calibration target with a decent precision. According to Trucco (p.124):</p> <blockquote> <p>The accuracy of calibration depends on the accuracy of the measurements of the calibration pattern; that is, its construction tolerances. To be on the safe side, the calibration pattern should be built with tolerances one or two order of magnitudes smaller than the desired accuracy of calibration. For example, if the desired accuracy of calibration is 0.1mm, the calibration pattern should be built with tolerances smaller than 0.01mm.</p> </blockquote> <p>A good quote for common sense, also by Trucco (p.136):</p> <blockquote> <p>A practical guideline is: <em>the effort going into improving calibration accuracy should be commensurate to the requirements of the application</em>.</p> </blockquote> <p>[1] E. Trucco, A. Verri. <em>Introductory Techniques for 3-D Computer Vision</em>. Prentice Hall, 1998</p>
892
2013-02-08T17:27:08.697
|computer-vision|calibration|
<p>Is it possible to achieve arbitrary precision to the calibration of the extrinsic parameters of a camera or is there a minimum error wich can not be compensated (probably dictated by the camera's resolution)?</p>
Is it possible to achieve arbitrary precision in camera calibration?
<p>And for the 'I'd rather be designing the sensors and grippers' of us, consider a ready made motor shield for Arduino such as:</p> <p><a href="http://www.pololu.com/catalog/product/2504" rel="nofollow noreferrer">Polulu Zumo Shield</a> (1.2 amp motors) or the <a href="http://arduino.cc/en/Main/ArduinoMotorShieldR3" rel="nofollow noreferrer">Arduino Motor Shield</a> (2 amp motors) or the <a href="http://a.pololu-files.com/picture/0J4024.600.jpg?f1bcdd7324bf6307a61598052a9cede0" rel="nofollow noreferrer">Polulu Dual Motor Driver Shield</a> (3.0 amp motors). <img src="https://i.stack.imgur.com/aQUEm.jpg" alt="Polulu Zumo Shield"> <img src="https://i.stack.imgur.com/FHdyt.jpg" alt="Arduino Motor Shield"> <img src="https://i.stack.imgur.com/TI6SZ.jpg" alt="Polulu Dual Motor Driver Shield"></p>
897
2013-02-09T22:36:37.333
|arduino|motor|
<p>I'm trying to control a higher voltage motor than an arduino can source with a pin, with an arduino. I am trying to hook it up to a transistor. The battery pack is not supposed to be 4.8V, it's 6V, 4 D batteries.</p> <p>Here is the setup:</p> <p><img src="https://i.stack.imgur.com/m9QPz.png" alt="Circuitry setup"></p> <p>Here is the arduino code I'm trying to run to it:</p> <pre><code>int motorpin = 2; void setup() { pinMode(motorpin, OUTPUT); } void loop() { digitalWrite(motorpin, HIGH); delay(500); digitalWrite(motorpin, LOW); delay(500); } </code></pre> <p>Code gives me no errors, but no motor movement happens. What would make this work? Thanks.</p>
High voltage motor control with arduino
<p>It would be interesting if they would have these hard anodized, which would give the surface of the aluminum the same hardness as case-hardened steel. I agree though that the contact area is very small. I do not see the advantage of this over an undercut slot that captures a t-nut underneath, which would still give you the flexibility to tighten anywhere along the length, with full thread engagement, and much higher clamping force.</p>
908
2013-02-11T23:41:06.083
|mechanism|kit|
<p>I've been looking into a <a href="http://www.makeblock.cc/" rel="nofollow noreferrer">Makeblock robotics kit</a> but have found no information on the web that comes from end-users, and one of the main advertised features is not clear to me: The slot threads shown below are straight, while the screw thread that will mate with them is angled. Is there just very little contact between screw thread and rail thread vs. regular screw hole threads? Or would the screw want to rest angled somewhat- and then the head would not be flush with the rim of the rail? Or would the screw deform the aluminum rail if over-torqued?</p> <p>This is a close up picture of the slot with screws: <img src="https://www.wired.com/wiredenterprise/wp-content/uploads//2012/12/makeblock-threaded-slot.jpg" alt="makeblock closeup"></p>
How does the Makeblock threaded slot work?
<p>While the answer by <a href="https://robotics.stackexchange.com/a/914/37">freeman01in</a> references a <a href="http://www.scribd.com/doc/38698/Sizing-Electric-Motors-for-Mobile-Robotics" rel="nofollow noreferrer">useful presentation</a> (<a href="http://in.docsity.com/en-docs/Sizing+Electric+Motors+for+Mobile+Robotics-Robotics-Lecture+Slides" rel="nofollow noreferrer">alternative source</a>) on the practical application aspect of your question, it is probably worth answering the specific question in the title too, in terms of first principles.</p> <p>From the <a href="http://en.wikipedia.org/wiki/Power_%28physics%29" rel="nofollow noreferrer">Wikipedia Power</a> page:</p> <blockquote> <p>In physics, <strong>power</strong> is the rate at which energy is transferred, used, or transformed. The unit of power is the joule per second (J/s), known as the watt ...</p> <p>Energy transfer can be used to do work, so power is also the rate at which this work is performed. The same amount of work is done when carrying a load up a flight of stairs whether the person carrying it walks or runs, but more power is expended during the running because the work is done in a shorter amount of time. The output power of an electric motor is the product of the torque the motor generates and the angular velocity of its output shaft. The power expended to move a vehicle is the product of the traction force of the wheels and the velocity of the vehicle.</p> <p>The integral of power over time defines the work done. Because this integral depends on the trajectory of the point of application of the force and torque, this calculation of work is said to be &quot;path dependent.&quot;</p> </blockquote> <p>Meanwhile, from the <a href="http://en.wikipedia.org/wiki/Torque" rel="nofollow noreferrer">Wikipedia Torque</a> page:</p> <blockquote> <p>Torque, moment or moment of force (see the terminology below), is the tendency of a force to rotate an object about an axis,<a href="https://robotics.stackexchange.com/a/914/37">1</a> fulcrum, or pivot. Just as a force is a push or a pull, a torque can be thought of as a twist to an object. Mathematically, torque is defined as the cross product of the lever-arm distance and force, which tends to produce rotation.</p> <p>Loosely speaking, torque is a measure of the turning force on an object such as a bolt or a flywheel. For example, pushing or pulling the handle of a wrench connected to a nut or bolt produces a torque (turning force) that loosens or tightens the nut or bolt.</p> <p>The symbol for torque is typically <span class="math-container">$\boldsymbol{\tau}$</span>, the Greek letter tau. When it is called moment, it is commonly denoted <span class="math-container">$M$</span>.</p> <p>The magnitude of torque depends on three quantities: the force applied, the length of the lever arm connecting the axis to the point of force application, and the angle between the force vector and the lever arm. 
...</p> <p>The SI unit for torque is the newton metre (N·m).</p> </blockquote> <p>And from <a href="http://en.wikipedia.org/wiki/Torque#Relationship_between_torque.2C_power.2C_and_energy" rel="nofollow noreferrer">Relationship between torque, power, and energy</a> you get the formula:</p> <blockquote> <p>Power is the work per unit time, given by <span class="math-container">$$P = \boldsymbol{\tau} \cdot \boldsymbol{\omega}$$</span> where <span class="math-container">$P$</span> is power, <span class="math-container">$\boldsymbol{\tau}$</span> is torque, <span class="math-container">$\boldsymbol{\omega}$</span> is the angular velocity, and <span class="math-container">$\cdot$</span> represents the scalar product.</p> </blockquote>
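<p>As a back-of-the-envelope illustration of $P = \tau\omega$ using the numbers from the question (75 kg platform, four 0.30 m wheels driven by two motors at 360 W each, 3 m/s, 0.2 m/s²), and ignoring rolling resistance, slopes and drivetrain losses:</p> <pre><code># Rough check only: numbers from the question, no losses or safety margin.
import math

P_per_motor = 360.0      # W
v = 3.0                  # m/s, wanted speed
r = 0.30 / 2             # m, wheel radius
m = 75.0                 # kg, platform mass
a = 0.2                  # m/s^2, wanted acceleration

omega = v / r                          # wheel angular velocity, rad/s (20 rad/s)
rpm = omega * 60 / (2 * math.pi)       # ~191 RPM, close to the wanted 180 RPM
tau_available = P_per_motor / omega    # torque each motor can deliver at full speed (~18 N·m)

F_accel = m * a                        # force needed to accelerate the platform (15 N)
tau_needed_per_motor = F_accel * r / 2 # shared by two motors (~1.1 N·m)

print(f"omega = {omega:.1f} rad/s ({rpm:.0f} RPM)")
print(f"torque available per motor at 3 m/s: {tau_available:.1f} N·m")
print(f"torque needed per motor just to accelerate: {tau_needed_per_motor:.2f} N·m")
</code></pre> <p>Under these assumptions the torque needed for the wanted acceleration is well below what 360 W provides at full speed; hills, rolling resistance, wheel slip and gearing losses are what would eat into that margin.</p>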
913
2013-02-12T16:16:00.323
|mobile-robot|design|motor|
<p>I am designing a new platform for outdoor robotics and I need to calculate the power and/or torque that is needed to move the platform. I have calculated that I need about 720 W of total power to move it (360W per motor), but I don't know how to calculate the torque that I need. </p> <p>Is it really just about having the required power and ignoring the torque or is there a way to calculate it easily?</p> <p>Already known parameters of the platform are:</p> <ul> <li>Weight of the whole platform: 75 kg.</li> <li>Number of wheels: 4.</li> <li>Number of powered wheels: 4.</li> <li>Diameter of wheels: 30 cm.</li> <li>Number of motors: 2.</li> <li>Wanted speed: 180 RPM (3 m/s).</li> <li>Wanted acceleration: > 0.2 m/s^2</li> </ul>
Are power and torque required related in some way?
<p><strong>Can Kinect data be stored directly onto a USB drive?</strong></p> <p>Yes. In ROS that would be easily done using bag files. In Windows, however, you might have to look for something native in the Microsoft SDK/OpenNI or code it yourself. <em>Edit: In OpenNI there is something called .oni files for that purpose.</em></p> <p><strong>The implicit question: Does Kinect for Windows work in Linux?</strong></p> <p>As far as I am aware, yes. From what I researched there were <a href="https://groups.google.com/forum/#!topic/openni-dev/fFJM0DqLWq4" rel="nofollow noreferrer">compatibility issues with OpenNI initially</a> but <a href="https://groups.google.com/forum/#!topic/openni-dev/fFJM0DqLWq4" rel="nofollow noreferrer">they seem to be solved</a> as of May 2012. There are reports of users successfully using K4W in both Windows and Linux there. See <a href="https://robotics.stackexchange.com/questions/654/what-is-the-difference-between-kinect-for-windows-and-kinect-for-xbox">this previous question</a> for more info.</p> <p>I assume the changes have made their way upstream, but the only way to be sure is to test it. If they haven't, you can always use the patched version referred in the link.</p> <p><strong>Conclusion</strong></p> <p>My suggestion is get your sensor working in Linux and don't waste your time trying to figure out how to save Kinect data to a USB drive in Windows.</p>
918
2013-02-14T09:59:40.490
|kinect|ros|
<p>Does anybody know if Kinect data can be stored directly onto a USB drive? I have a Kinect for Windows that I cannot use on Linux (ROS). What I plan to do is mount the Kinect on my robot, store the captured frames on a USB drive, then unmount the USB drive, transfer it to Linux, and process the frames in ROS.</p> <p>Is this possible? Any suggestions?</p>
Storing Kinect Data on a USB Drive
<p>Yes, it is possible, but please note the following points.</p> <p>1) If you are using Bluetooth, the data transfer rate depends on the version; for Bluetooth v2.0 + EDR the nominal rate is about 3 Mbit/s.</p> <p>2) Now suppose you are using a camera like the OV7670 in QVGA mode (320 x 240 pixels per frame) at 8 bits per pixel, and you want about 30 frames per second for smooth video. That is 76,800 bytes per frame, or roughly 2.3 MB/s (about 18 Mbit/s) at 30 frames per second.</p> <p>3) Uncompressed QVGA at 30 frames per second therefore does not fit through a Bluetooth v2 link, so for live streaming you will need compression (for example JPEG on each frame), a lower resolution, or a lower frame rate.</p> <p>4) The rest depends on how good you are at programming and at making the electronic connections between the different modules.</p>
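<p>For reference, the arithmetic behind points 2 and 3 can be checked with a few lines (uncompressed frames, no protocol overhead; the 3 Mbit/s figure is the nominal Bluetooth 2.0 + EDR rate):</p> <pre><code># Quick sanity check of the uncompressed-video numbers above.
width, height = 320, 240          # QVGA
bits_per_pixel = 8
fps = 30

frame_bytes = width * height * bits_per_pixel // 8   # 76,800 bytes per frame
bytes_per_s = frame_bytes * fps                       # ~2.3 MB/s
mbit_per_s = bytes_per_s * 8 / 1e6                    # ~18.4 Mbit/s

bluetooth_v2_mbit = 3.0                               # nominal Bluetooth 2.0 + EDR rate
print(f"{frame_bytes} B/frame, {bytes_per_s / 1e6:.2f} MB/s, {mbit_per_s:.1f} Mbit/s")
print("fits uncompressed over Bluetooth v2?", mbit_per_s &lt; bluetooth_v2_mbit)
</code></pre>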
921
2013-02-14T16:56:23.807
|arduino|cameras|
<p>I'm trying to connect a camera module to my Arduino Mega, connect my Mega to my Android phone (throught BlueTooth or other), and send the live view of the camera to the mobile phone.</p> <p>I saw a <a href="http://www.youtube.com/watch?v=qEVXqOJz-GY">video online</a> that showed this for still images -- an image captured by the camera module on the Arduino was sent to Android and the output image was viewed after a couple of seconds (the time to send image by BT).</p> <p>Is this doable with live video instead of image? If yes, please guide me; if no, please suggest some workarounds.</p>
How can I send video from my Arduino camera module video to my Android screen?
<p>According to TI's website, <a href="http://www.ti.com/general/docs/wtbu/wtbuproductcontent.tsp?templateId=6123&amp;navigationId=12828&amp;contentId=50020" rel="nofollow noreferrer">some of the OMAP processors (specifically the OMAP-DM5x range) have stacked RAM</a>.</p> <p><img src="https://focus.ti.com/en/multimedia/wtbu/OMAP-DM299-500-diagram.jpg" alt="chip architecture"></p> <p>More info <a href="http://www.ti.com/lit/ml/swpt036/swpt036.pdf" rel="nofollow noreferrer">here (PDF)</a>.</p>
924
2013-02-15T08:20:41.940
|arm-cpu|
<p>Do any of the TI <a href="http://en.wikipedia.org/wiki/ARM_system-on-chip_architecture" rel="nofollow">ARM SOC</a>s, e.g. <a href="http://www.ti.com/lsds/ti/arm/sitara_arm_cortex_a_processor/sitara_arm_cortex_a8/omap3503_15_arm_cortex_a8/products.page?paramCriteria=no" rel="nofollow">OMAP</a> or <a href="http://www.ti.com/lsds/ti/dsp/video_processors/overview.page" rel="nofollow">Da Vinci</a>, have a version with stacked RAM? (e.g. DDR2 or mDDR) For miniature robots like micro drones, it would be really nice to not need to spend board area on an external RAM chip. Thanks!</p>
TI ARM with stacked RAM
<p>This guy put a spring in between his 2nd servo to protect them from any slight differences... Maybe this is what you're looking for? <a href="http://youtu.be/jqsmai2Nafk?t=1m13s" rel="nofollow">http://youtu.be/jqsmai2Nafk?t=1m13s</a></p>
939
2013-02-17T21:09:19.683
|motor|rcservo|
<p>For my robot, I am using two continuous rotation servos to spin a threaded rod. I am trying to make this project as cheap as possible. Here are the servos that I can find:</p> <ul> <li>Servo #1: This is a very cheap option and it has half of the torque I need.</li> <li>Servo #2: This has all of the torque my project requires, but it is much more expensive than two of servo #1.</li> </ul> <p>Can I hook up one servo #1 to each end of the rod and have them move synchronized? I can spare a few extra pins on the microprocessor I am using, so that isn't an issue. I know hooking two together will increase the torque, but I don't want to end up with only 75% of the torque I need. I wouldn't mind ending up at, say, 98% of my torque "goal" because of the extra weight (which probably won't happen), but I don't want to end up at 70, 80, or 90% of my target if I can avoid it.</p>
Will connecting two servo motors double the torque?
<p>The <a href="http://en.wikipedia.org/wiki/Denavit%E2%80%93Hartenberg_parameters" rel="nofollow">DH Matrix</a> section of the DH page on wikipedia has the details.</p> <p>Basically you want to use the information in your table to create a set of homogeneous transformation matrices. We do so because homogeneous transformations can be multiplied to find the relation between frames separated by one or more others. For example, $^0T_1$ represents the transformation from frame 1 to frame 0 while $^1T_2$ represents the transformation from frame 2 to frame 1. By multiplying them we get the transformation from frame 2 to frame 0, i.e. $^0T_2 = ^0T_1^1T_2$.</p> <p>An easy way to create each of the transformations is to make a homogeneous transformation or homogeneous rotation matrix for each column in the table and multiply them together. For example, the transformation from 1 to 0 (e.g. $^{i-1}T_i, i = 1$) is</p> <p>$^0T_1 = Trans(d_1)*Rot(\theta_1)*Trans(a_2)*Rot(\alpha_2)$</p> <p>where</p> <p>$Trans(d_1) = \begin{bmatrix}1 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 1 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 1 &amp; \bf{d_1 = 0} \\ 0 &amp; 0 &amp; 0 &amp; 1 \end{bmatrix},$</p> <p>$Rot(\theta_1) = \begin{bmatrix} \text{cos}(\bf{\theta_1}) &amp; - \text{sin}(\bf{\theta_1}) &amp; 0 &amp; 0 \\ \text{sin}(\bf{\theta_1}) &amp; \text{cos}(\bf{\theta_1}) &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 1 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 1 \end{bmatrix},$</p> <p>$Trans(a_2) = \begin{bmatrix} 1 &amp; 0 &amp; 0 &amp; \bf{a_2 = 0} \\ 0 &amp; 1 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 1 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 1 \end{bmatrix},$</p> <p>$Rot(\alpha_2) = \begin{bmatrix} 1 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; \text{cos}(\bf{\alpha_2 = 0}) &amp; -\text{sin}(\bf{\alpha_2 = 0}) &amp; 0 \\ 0 &amp; \text{sin}(\bf{\alpha_2 = 0}) &amp; \text{cos}(\bf{\alpha_2 = 0}) &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 1 \end{bmatrix}$.</p> <p>In this case</p> <p>$^0T_1 = Rot(\theta_1)$.</p> <p>Once you have all your transformations you multiply them together, e.g.</p> <p>$^0T_N = ^0T_1*^1T_2...^{N-1}T_N$.</p> <p>Finally you can read the displacement vector out of the homogeneous transform $^0T_N$ (i.e. $d = [^0T_{N,14}, ^0T_{N,24}, ^0T_{N,34}]^T$). Similarly you can read out the rotation matrix from $^0T_N$ to find the X-Y-Z angles.</p>
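<p>For the implementation side, here is a small numpy sketch. It is not from any particular textbook; it simply follows the $Trans(d)\,Rot(\theta)\,Trans(a)\,Rot(\alpha)$ ordering used above, and the DH values in it are placeholders rather than the table from the question, so substitute your own rows and convention.</p> <pre><code># Minimal numpy sketch: one homogeneous transform per DH row, chained together.
#   T_link = Trans_z(d) @ Rot_z(theta) @ Trans_x(a) @ Rot_x(alpha)
import numpy as np

def trans(x=0.0, y=0.0, z=0.0):
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def rot_x(alpha):
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]])

def link_transform(d, theta, a, alpha):
    return trans(z=d) @ rot_z(theta) @ trans(x=a) @ rot_x(alpha)

# (d, theta, a, alpha) per link -- placeholder values, not the table in the question
dh_rows = [(0.0, 0.3, 0.0, 0.0),
           (0.0, -0.5, 0.0, -np.pi / 2),
           (0.1, 0.2, 0.4, 0.0)]

T = np.eye(4)                                   # becomes 0_T_N after the loop
for d, theta, a, alpha in dh_rows:
    T = T @ link_transform(d, theta, a, alpha)

position = T[:3, 3]     # displacement vector d
R = T[:3, :3]           # rotation matrix, from which the X-Y-Z angles follow
print(position)
print(R)
</code></pre>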
940
2013-02-18T11:18:22.693
|kinematics|forward-kinematics|
<p>I'm doing robotics research as an undergraduate, and I understand the conceptual math for the most part; however, when it comes to actually implementing code to calculate the forward kinematics for my robot, I am stuck. I'm just not getting the way the book or websites I've found explain it.</p> <p>I would like to calculate the X-Y-Z angles given the link parameters (Denavit-Hartenberg parameters), such as the <a href="https://i.stack.imgur.com/j6Cf6.png" rel="nofollow noreferrer">following</a>:</p> <p>$$\begin{array}{ccccc} \bf{i} &amp; \bf{\alpha_{i-1}} &amp; \bf{a_{i-1}} &amp; \bf{d_i} &amp; \bf{\theta_i}\\ \\ 1 &amp; 0 &amp; 0 &amp; 0 &amp; \theta_1\\ 2 &amp; -90^{\circ} &amp; 0 &amp; 0 &amp; \theta_2\\ 3 &amp; 0 &amp; a_2 &amp; d_3 &amp; \theta_3\\ 4 &amp; -90^{\circ} &amp; a_3 &amp; d_4 &amp; \theta_4\\ 5 &amp; 90^{\circ} &amp; 0 &amp; 0 &amp; \theta_5\\ 6 &amp; -90^{\circ} &amp; 0 &amp; 0 &amp; \theta_6\\ \end{array}$$</p> <p>I don't understand how to turn this table of values into the proper transformation matrices needed to get $^0T_N$, the Cartesian position and rotation of the last link. From there, I'm hoping I can figure out the X-Y-Z angle(s) from reading my book, but any help would be appreciated.</p>
How do I convert link parameters and angles (in kinematics) into transformation matrices in programming logic?
<p>In the end, I found that the best way to do this was to employ a very simple concept: <a href="https://en.wikipedia.org/wiki/Flood_fill" rel="nofollow">Flood Fill</a>. I used a <a href="https://en.wikipedia.org/wiki/Flood_fill#Alternative_implementations" rel="nofollow">stack-based iterative approach</a> instead of the recursive option, and modified it for physical space by using an A* search to find a path from the current location to the next location in the stack (using only those grid squares that have already been visited, since I'm guaranteed to have a path between them).</p> <p>The efficiency seems fairly reasonable.</p>
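<p>For reference, a minimal sketch of that stack-based visit order is below. The <code>obstacles</code> set stands in for "try to move into the cell and see whether it fails", and a real robot would run the A* step over already-visited cells before occupying the popped cell; the names and grid size are made up for the example.</p> <pre><code># Sketch of the stack-based flood-fill visit order described above.
def coverage_order(start, width, height, obstacles):
    visited, blocked = set(), set()
    stack = [start]
    order = []                      # sequence of cells the robot actually occupies
    while stack:
        cell = stack.pop()
        if cell in visited or cell in blocked:
            continue
        # A real robot would A*-plan a path to `cell` through `visited` here.
        visited.add(cell)
        order.append(cell)
        x, y = cell
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 &lt;= nx &lt; width and 0 &lt;= ny &lt; height and (nx, ny) not in visited:
                if (nx, ny) in obstacles:
                    blocked.add((nx, ny))   # bumped into it, mark it as an obstacle
                else:
                    stack.append((nx, ny))
    return order

print(coverage_order((0, 0), 4, 4, obstacles={(1, 1), (2, 2)}))
</code></pre>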
952
2013-02-19T21:02:05.790
|algorithm|coverage|planning|
<p>I'm trying to create a map of the obstacles in a fairly coarse 2D grid space, using exploration. I detect obstacles by attempting to move from one space to an adjacent space, and if that fails then there's an obstacle in the destination space (there is no concept of a rangefinding sensor in this problem).</p> <p><a href="http://www.eriding.net/resources/general/prim_frmwrks/images/asses/asses_y3_5d_3.gif" rel="nofollow noreferrer">example grid http://www.eriding.net/resources/general/prim_frmwrks/images/asses/asses_y3_5d_3.gif</a> (for example)</p> <p>The process is complete when all the reachable squares have been visited. In other words, some spaces might be completely unreachable even if they don't have obstacles because they're surrounded. This is expected.</p> <p>In the simplest case, I could use <a href="https://stackoverflow.com/a/11556238/2063546">a DFS algorithm</a>, but I'm worried that this will take an excessively long time to complete &mdash; the robot will spend more time backtracking than exploring new territory. I expect this to be especially problematic when attempting to reach the unreachable squares, because the robot will exhaust every option.</p> <p>In the more sophisticated method, the proper thing to do seems to be <a href="http://en.wikipedia.org/wiki/Boustrophedon_cell_decomposition" rel="nofollow noreferrer">Boustrophedon cell decomposition</a>.<br> <img src="https://i.stack.imgur.com/Kagw3.gif" alt="Boustrophedon cell decomposition"></p> <p>However, I can't seem to find a good description of the Boustrophedon cell decomposition algorithm (that is, a complete description in simple terms). There are resources like <a href="http://planning.cs.uiuc.edu/node352.html" rel="nofollow noreferrer">this one</a>, <a href="http://planning.cs.uiuc.edu/node262.html" rel="nofollow noreferrer">or this more general one on vertical cell decomposition</a> but they don't offer much insight into the high-level algorithms nor the low-level data structures involved.</p> <p>How can I visit (map) this grid efficiently? If it exists, I would like an algorithm that performs better than <span class="math-container">$O(n^2)$</span> with respect to the total number of grid squares (<em>i.e.</em> better than <span class="math-container">$O(n^4)$</span> for an <span class="math-container">$n*n$</span> grid).</p>
What's an efficient way to visit every reachable space on a grid with unknown obstacles?
<p>As Plecharts said, it depends on your surface. Magnets work really well for a heavy robot, but iff you're going to drive on metal. An example can be found on this VEXLabs robot: <a href="http://www.vexforum.com/wiki/Magbot_Model_3" rel="nofollow">http://www.vexforum.com/wiki/Magbot_Model_3</a></p>
957
2013-02-20T14:54:30.407
|mobile-robot|
<p>I have come across a number of methods for developing wall-climbing robots.</p> <ul> <li>Suction</li> <li>Chemical Adhesion</li> <li>Gecko like hair adhesion</li> <li>Electroadhesion</li> </ul> <p>Which method would be the best for heavy robots (5kg+)? Are there any other methods that I have missed?</p>
Adhesion for a heavy wall-climbing robot
<p>To your first question: "Does the same apply when using the odometry motion model?", the answer is Yes.</p> <p>The the EKF is pretty much the same thing as the KF, with the addition of the linearization step. What you are linearizing here is the motion model, whatever model that is. </p> <p>For your second question: "Is it a good practise to use odometry motion model instead of velocity for mobile robot localization?": I think the answer is 'it depends.' </p> <p>If you are using a data set that has velocity information and the localization is good enough for your purposes, then the simplicity of that model is probably preferred. If you are directly controlling the robot and have access to the odometry information, then you're likely to get a better result. </p>
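<p>For concreteness, here is a small numpy sketch of the odometry motion model and its Jacobian, following the equations in the question. One detail worth checking: differentiating $\hat{\delta}\,\text{sin}(\theta + \hat{\delta}_{rot1})$ with respect to $\theta$ gives $+\hat{\delta}\,\text{cos}(\theta + \hat{\delta}_{rot1})$, so the middle entry of the last column of $G_T$ comes out positive. The function and variable names are just for illustration.</p> <pre><code># Sketch: odometry motion model g(x, u) and its Jacobian G w.r.t. the state,
# with u = (d_rot1, d_trans, d_rot2) and x = (x, y, theta), evaluated at the mean.
import numpy as np

def odometry_predict(mu, u):
    x, y, theta = mu
    d_rot1, d_trans, d_rot2 = u
    mu_bar = np.array([x + d_trans * np.cos(theta + d_rot1),
                       y + d_trans * np.sin(theta + d_rot1),
                       theta + d_rot1 + d_rot2])
    G = np.array([[1, 0, -d_trans * np.sin(theta + d_rot1)],
                  [0, 1,  d_trans * np.cos(theta + d_rot1)],
                  [0, 0, 1]])
    return mu_bar, G

# EKF propagation of the covariance then follows the usual pattern:
#   Sigma_bar = G @ Sigma @ G.T + R_t   (R_t from the odometry noise model)
mu_bar, G = odometry_predict(np.array([0.0, 0.0, 0.0]), np.array([0.1, 0.5, -0.05]))
print(mu_bar)
print(G)
</code></pre>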
964
2013-02-21T13:50:43.330
|localization|kalman-filter|
<p>In the prediction step of EKF localization, linearization must be performed and (as mentioned in <a href="http://books.google.co.uk/books/about/Probabilistic_Robotics.html?id=k_yOQgAACAAJ&amp;redir_esc=y" rel="noreferrer">Probabilistic Robotics [THRUN,BURGARD,FOX]</a> page 206) the Jacobian matrix when using velocity motion model, defined as</p> <p>$\begin{bmatrix} x \\ y \\ \theta \end{bmatrix}' = \begin{bmatrix} x \\ y \\ \theta \end{bmatrix} + \begin{bmatrix} \frac{\hat{v}_t}{\hat{\omega}_t}(-\text{sin}\theta + \text{sin}(\theta + \hat{\omega}_t{\Delta}t)) \\ \frac{\hat{v}_t}{\hat{\omega}_t}(\text{cos}\theta - \text{cos}(\theta + \hat{\omega}_t{\Delta}t)) \\ \hat{\omega}_t{\Delta}t \end{bmatrix}$</p> <p>is calculated as </p> <p>$G_{T}= \begin{bmatrix} 1 &amp; 0 &amp; \frac{υ_{t}}{ω_{t}}(-cos {μ_{t-1,θ}} + cos(μ_{t-1,θ}+ω_{t}Δ{t})) \\ 0 &amp; 1 &amp; \frac{υ_{t}}{ω_{t}}(-sin {μ_{t-1,θ}} + sin(μ_{t-1,θ}+ω_{t}Δ{t})) \\ 0 &amp; 0 &amp; 1 \end{bmatrix}$.</p> <p>Does the same apply when using the odometry motion model (described in the same book, page 133), where robot motion is approximated by a rotation $\hat{\delta}_{rot1}$, a translation $\hat{\delta}$ and a second rotation $\hat{\delta}_{rot2}$ ? The corresponding equations are:</p> <p>$\begin{bmatrix} x \\ y \\ \theta \end{bmatrix}' = \begin{bmatrix} x \\ y \\ \theta \end{bmatrix} + \begin{bmatrix} \hat{\delta}\text{cos}(\theta + \hat{\delta}_{rot1}) \\ \hat{\delta}\text{sin}(\theta + \hat{\delta}_{rot1}) \\ \hat{\delta}_{rot1} + \hat{\delta}_{rot2} \end{bmatrix}$.</p> <p>In which case the Jacobian is</p> <p>$G_{T}= \begin{bmatrix} 1 &amp; 0 &amp; -\hat{\delta} sin(θ + \hat{\delta}_{rot1}) \\ 0 &amp; 1 &amp; -\hat{\delta} cos(θ + \hat{\delta}_{rot1}) \\ 0 &amp; 0 &amp; 1 \end{bmatrix}$.</p> <p>Is it a good practise to use odometry motion model instead of velocity for mobile robot localization?</p>
Extended Kalman Filter using odometry motion model
<p>On a semi-related note, is there a possibility of permanent damage to the magnetometer and other chips if they are exposed to a magnetic field of between 1000-2000 uT from a stack of small rare earth magnets? I tried doing that to the 9-DOF IMU from Adafruit today as a test and it seems to have permanently altered the bias of the magnetometer.</p>
970
2013-02-23T07:23:28.393
|sensors|imu|
<p>I'm in the process of making a robot which requires twelve 3x10mm cylindrical magnets for the construction. They are 30mm from the center of the robot, where I plan to have the IMU.</p> <p>I was thinking about using the MPU-6050. Do the magnets affect its values? If yes, is there a solution for it? Maybe I could put a shield or something around the IMU?</p>
Do magnets affect IMU values?
<p>I would suggest that the key reason is reliability - once a probe has been launched into space, it <strong>HAS</strong> to work.</p> <p>As such, it is a lot safer to heat up a known, reliable, material that has been subject to extensive testing, than it is to "invent" a new material that cannot be fully tested on Earth.</p>
975
2013-02-24T12:18:01.820
|heat-management|cooling|
<p>I know that temperature influences the characteristics of semiconductors and other materials, but we know how and can take that into account. Furthermore, lower temperatures makes electronics more efficient, sometimes even superconducting.</p> <p>I remember reading somewhere that engineers building Curiosity even considered low temperature electronics for the motors driving the wheels but still decided against it in the end.</p> <p>Why is it, apparently, so hard to build components with operating temperatures matching those on Mars, Europa, or in space?</p> <p><strong>Edit:</strong> None of the answers address my question thus far. I know that all parts, both electronic and mechanical, and greases and so on have relatively narrow working temperatures. My question is, why don't we build special cold metals and cold greases and cold chips that have their narrow operating temperature band at -100 C or whatever?</p> <p>Valid answers could be: it's too expensive, insufficient science has been done to determine materials appropriate for such cold, such cold materials cannot be manufactured in the sweltering heat of planet Earth.</p>
Why do space probes need heating?
<p>Triumphant!</p> <p>Check this out. Under normal circumstances you would configure your sensor by giving it a port. It would look like the following.</p> <pre><code>pxbrick.Registration registration = new pxbrick.Registration(new LegoNxtConnection((LegoNxtPort)_state.SensorPort), </code></pre> <p>Well, inside the NxtBrick code there is a method called TestPortForI2CSensorHandler, and reading the code you can see that if you pass in “AnySensorPort” it will try all four ports. Genius that I am, I thought: oh great, I will just let the already-written code do the work for me; after all, at that point all I wanted to do was see if the thing worked. Well, here is the problem. When you do that it creates a type of I2CReadSensorType, and inside of the ctor it executes this line of code.</p> <pre><code>ExpectedI2CResponseSize = 16; </code></pre> <p>That doesn’t seem to work. OK - not seems, it doesn’t work. It won’t work for the sonar sensor or any other digital sensor. I assume because it’s 0 based, so 16 is actually 17 bytes? I am guessing. At any rate, I changed it to 15 and lo and behold it works. I even went back and tried it with the LEGO sonar sensor. It works to a point - that is to say it actually gets data back, but it seems like the sensor type data (which is “SuperPr” on the prototype board) is 0xFF for all the bytes. The manufacturer name is indeed set to LEGO though, so I know it read data. If you change the ExpectedI2CResponseSize back to 16 it fails.</p> <p>The other issue I had is that the NxtCommType contains the following. </p> <pre><code>[DataMember, Description("The default I2C Bus Address of the Ultrasonic or other I2C Sensor.")] public const byte DefaultI2CBusAddress = 0x02; </code></pre> <p>For my purposes at the moment I am just flipping it to 0x10 for the prototype board (which, oddly enough, when connected to my Arduino shows up as 0x08, but that is another story). I need to modify things so I can use sensors that have differing I2C addresses, but for now I am thrilled!</p> <p>I would love to see someone like Trevor Taylor comment on this as to how it ever worked with 16 as the ExpectedI2CResponseSize.</p> <p>Awesome!</p>
984
2013-02-25T21:13:25.777
|nxt|i2c|
<p>Here is the background. I am trying to write a service for the HiTechnic prototype board. </p> <p>Using the Appendix 2 from the blue tooth developers kit from Lego's site I am able to understand what is going on with this service I am trying to build however the response I get is always 221 = 0xDD = "Communication Bus Error" or 32 = 0x20 = "Pending communication transaction in progress". </p> <p>I figured out that the HiTechnic prototype board is using <a href="http://refriedgeek.blogspot.com/2013/02/connecting-to-hitechnic-prototype-board.html" rel="nofollow">i2c address 0x08</a> so I modified the brick code to use that address instead of the standard 0x02. It goes out and configures the device, I get a response and then it does an LSWrite which seems OK then I get a get an error when it does the LSGetStatus. </p> <p>I know this thing works - I can bit bang it all day long with an Arduino but I only did that to test it out - <a href="http://refriedgeek.blogspot.com/2013/02/connecting-to-hitechnic-prototype-board.html" rel="nofollow">see this link</a></p> <p>I am not sure what else to try. Here is how I am setting it up in the connect to brick handler. </p> <pre><code> pxbrick.Registration registration = new pxbrick.Registration( new LegoNxtConnection(LegoNxtPort.Sensor1), LegoDeviceType.DigitalSensor, Contract.DeviceModel, Contract.Identifier, ServiceInfo.Service, _state.Name); Debugger.Break(); // Reserve the port LogInfo("ConnectToBrickHandler"); yield return Arbiter.Choice(_legoBrickPort.ReserveDevicePort(registration), delegate(pxbrick.AttachResponse reserveResponse) { Debugger.Break(); if (reserveResponse.DeviceModel == registration.DeviceModel) { registration.Connection = reserveResponse.Connection; } }, delegate(Fault f) { Debugger.Break(); fault = f; LogError("#### Failed to reserve port"); LogError(fault); registration.Connection.Port = LegoNxtPort.NotConnected; }); </code></pre> <p>I have also tried setting AnyPort as well so that it will hit the TestPortForI2CSensorHandler that just does what I explained before - it seems to set the mode fine and then gets an error when it tries to read the device information. </p> <p>Here is the data. - this first part is the set input more - both the message and response - You can see it is totally fine. </p> <p>Send command data. </p> <pre><code>0 5 0 11 0 receive command data. (_commState.SerialPort.Read(receiveData, 0, packetSize);) 2 5 0 </code></pre> <p>Then it does an LSWrite - everything still seems fine... You can see I have modified the NxtComm code to use 0x08 instead of 0x02 which it would normally use, then the last byte is also 0x08 which is the starting address of the manufacturer. It's asking for 16 bytes which would be the manufacturer and sensor type. like I said - I know that works I can print that info out using the Arduino. </p> <pre><code>128 15 0 2 16 8 // i2c address 8 //I assume this is what address I want to read from? Got response: True Error code Success [02/25/2013 02:20:31] -- SendCommandHandler (NxtComm) [02/25/2013 02:20:31] --- RequestResponseHandler (NxtComm) [02/25/2013 02:20:31] --- CommSendImmediateHandler (NxtComm) [02/25/2013 02:20:31] Send command data. </code></pre> <p>Then it tries to get the status</p> <pre><code>0 14 0 </code></pre> <p>Here is the response... </p> <pre><code> 2 14 32 0 </code></pre> <p>It's either 32 or 221. It's making me nuts... </p> <p>If anyone has anything that might help me out I would so much appreciate it. At this point I am running out of ideas. 
I can see what is going on, I can understand the entire transaction but can't seem to figure out why it just errors out like that. </p> <p>Also - just for grins I tried 0x10 which is what they tell you on the HiTechnic website. That gets a response of 2,14,0,0 from the NXT brick - that would indicate there is no data but as I pointed out I can get data using the Arduino. How could I have two different I2C device addresses? </p>
Does anyone know what might be giving me this error coming from an i2c device
<p>What kind of motors? ESCs typically are for brushless DC motors, which are electronically commutated DC three phase motors, and you can't run them with an H-bridge. </p> <p>That said, ESCs give you a lot of bang for the buck, and they're certainly worth $12 :)</p>
985
2013-02-26T02:47:45.793
|design|electronics|wheeled-robot|
<p>I was able to find a small ESC for about $12 off of ebay. If you were designing a robot, would you see that and think? </p> <blockquote> <p>\$12 bucks for an ESC that connects to simple pulse-wave interface - sign me up!</p> </blockquote> <p>Or would you think:</p> <blockquote> <p>\$12 just to control a motor? I could throw together an H-bridge for $0.50 and be done with it.</p> </blockquote> <p>My robot in particular actually has two motors and therefor $24 to control the two of them. But the interface is really easy (plus has the added advantage of being R/C vs computer controlled with a simple change of connectors.</p> <p>Which way would you go?</p>
Which is easier/cheaper: Hbridge vs ESC for controlling a motor?
<p>You can use the INS / GPS as updates to the output of your first EKF. This is, in fact, not chaining, but simply conditioning the estimate based on the added information from the INS / GPS.</p> <p>Suppose we have the following functions: </p> <p>$x_{t+1|t}$, $P_{t+1|t}$ = EKF_PREDICT($x_t$, $P_t$, $u_t$), for inputs as state $x$, covariance $P$, and control inputs (estimated by odometry) $u_t$.</p> <p>and</p> <p>$x_{t+1|t+1}$, $P_{t+1|t+1}$ = EKF_UPDATE($x_{t+1|t}$, $P_{t+1|t}$, $\hat{x}_{t+1}$). </p> <p>The estimates from sensors are the $\hat{x}_{t+1}$. We have things like:</p> <p>$\hat{x}^{gps}_{t+1} = f(GPS)$</p> <p>$\hat{x}^{map}_{t+1} = f(map)$</p> <p>$\hat{x}^{ins}_{t+1} = f(INS)$</p> <p>etc for all other ways of estimating the state of the robot. So running the function EKF_UPDATE for all of those sensors is good enough.</p> <p>Your loop will be something like this:</p> <p>for all time $t$</p> <ul> <li><p>Let $u_t$ be the current odometry / kinematic estimate of pose, and $R_u$ be the noise on that estimate.</p></li> <li><p>$x_{t+1|t}$, $P_{t+1|t}$ = EKF_PREDICT($x_t$, $P_t$, $u_t$, $R_u$)</p></li> <li><p>for all sensors $S$,</p> <ul> <li><p>Let $\hat{x}^{S}_{t+1}$ be the estimate of the pose from that sensor, and $R_{S}$ be the noise on that estimate</p></li> <li><p>$x_{t+1|t+1}$, $P_{t+1|t+1}$ = EKF_UPDATE($x_{t+1|t}$, $P_{t+1|t}$, $\hat{x}_{t+1}, R_S$). </p></li> <li><p>end-for</p></li> </ul></li> <li>end-for</li> </ul> <p>Some caveats are: </p> <ul> <li><p>Since we're using the EKF, there is no guarantee that the estimate is independent of the ordering of the updates. That is, if you do INS then GPS, the resulting estimate might be different than if you update with GPS then INS. This is usally not a big deal, but the filter will requiring significantly more tuning. </p></li> <li><p>Please be aware your INS has a bias and drift, which might affect your long-term reliability. GPS can help you a <em>lot</em> here. Most literature simultaneously estimates the bias and drift in the INS.</p></li> </ul>
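<p>A minimal numpy sketch of that loop, for the simplified case where every sensor already reports a full pose estimate (so the measurement Jacobian is just the identity) and the motion model is a plain additive odometry increment, might look like this; all names and numbers are illustrative only:</p> <pre><code># Sketch of predict-then-update-per-sensor; each sensor reports a full pose, so H = I.
import numpy as np

def predict(x, P, u, Ru):
    # trivially linear "motion model": pose += odometry increment
    return x + u, P + Ru

def update(x, P, z, R):
    S = P + R                      # innovation covariance (H = I)
    K = P @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - x)
    P = (np.eye(len(x)) - K) @ P
    return x, P

x = np.zeros(3)                    # [x, y, heading]
P = np.eye(3) * 0.1

u, Ru = np.array([0.5, 0.0, 0.01]), np.eye(3) * 0.02             # odometry step
x, P = predict(x, P, u, Ru)
for z, R in [(np.array([0.48, 0.02, 0.00]), np.eye(3) * 0.05),   # e.g. INS/GPS pose
             (np.array([0.52, -0.01, 0.02]), np.eye(3) * 0.03)]: # e.g. another estimate
    x, P = update(x, P, z, R)
print(x)
print(P)
</code></pre> <p>Note that treating the heading as a plain number like this ignores angle wrap-around, and a real implementation (with nonlinear motion and measurement models and their Jacobians) has to handle that.</p>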
994
2013-03-01T04:05:48.827
|kalman-filter|imu|navigation|
<p>My team is building a robot to navigate autonomously in an outdoor environment. We recently got a new integrated IMU/GPS sensor which apparently does some extended Kalman filtering on-chip. It gives pitch, roll, and yaw, north, east, and down velocities, and latitude and longitude.</p> <p>However, we also have some encoders attached to our wheels, which provide linear and angular velocities. Before we got this new IMU/GPS sensor, we made our own EKF to estimate our state using the encoders and some other low-cost sensors. We want to use this new sensor's on-chip filter, but also incorporate our encoders into the mix.</p> <p>Is there any problem with chaining the filters? What I mean is, we'd use the output of the IMU/GPS sensor's on-chip EKF as an update to our own EKF, just as we use the data read from the encoders as an update to our EKF. It seems reasonable to me, but I was wondering what is usually supposed to be done in this case. </p>
Chaining Kalman filters
<p>There are two hardware issues to consider.</p> <p>a) The driver chip takes time to switch on and off, and it heats up the most while it is between the on and off states. So there is no point in using too high a switching frequency. Generally 10, 15 or 20 kHz at most, just above the human hearing range, is enough.</p> <p>See the details (frequency effects are explained with examples of some 20 chips from many companies, especially the items that state an ultrasonic frequency of 20 kHz) at<br> <a href="http://www.pololu.com/category/11/brushed-dc-motor-drivers" rel="nofollow">http://www.pololu.com/category/11/brushed-dc-motor-drivers</a></p> <p>b) The motor coil has inductance. Too high a PWM frequency reduces the current that can flow in the winding, giving low torque and low RPM.</p> <p>Apparently, 0.5 A per motor seems to be in the normal range. At 6, 7.2 or 12 volts, that is likely a few watts up to 10 watts per motor for fast robot motion.</p>
998
2013-03-02T15:57:31.173
|pwm|avr|
<p>We are making a junior soccer robot and we just got our brilliant motors from Maxon. Setting the PWM timer to low frequencies (around 39 kHz or 156 kHz) the robot acts as expected, but this produces some problems.</p> <ol> <li>It puts a heavy current draw on the batteries (around 1.5 A for 3 motors, which is far too high).</li> <li>The high current causes our motor drivers (L6203) to heat up very quickly, and even heat sinks won't help them.</li> <li>The motors make a bad sound, as if they are screaming, and this is not normal.</li> </ol> <p>In contrast, when I configure the timer to high frequencies (such as 1250 kHz or 10000 kHz), the current drops to 0.2 A, which is ideal, and the sounds quiet down. But this causes another problem: when our 3 motors are set to run at their highest speed (PWM set to 255), they don't run at the same RPM. One of them runs slower than the others, making the robot turn to one side, so our handling functions fail to work correctly.</p> <p>Someone I asked told me that the drivers don't respond the same way to different frequencies, resulting in different speeds; at low frequencies the difference is very small and I won't notice it, but at higher frequencies the difference becomes bigger and noticeable.</p> <p>So is there any workaround for this problem, or should I continue using low frequencies?</p> <p>PS: I'm using an ATmega16 as the main controller with a 10 MHz external crystal.</p>
Motors response different with high-frequency PWM
<p>Holding torque, by the stepper motor definition, is not a valid way to quantify servo performance. Stepper motor torque drops off with speed whereas in a servo it remains relatively constant. (Operating torque is never half of holding torque and is RPM dependent, your guide lied to you).</p> <p>A real servo will always have a torque (continuous/operating) rating under which it will remain controlled, and a maximum torque rating where it is just running full power.</p> <p>I cannot imagine ever buying a servo that does not list those two values, (especially considering that most servos will have a fairly substantial datasheet) but a quick survey shows that continuous torque is generally not half of the maximum torque.</p> <p>Note that hobby servo specifications are so inaccurate they might as well be made up so this question would not apply to them either.</p>
999
2013-03-02T17:54:46.930
|rcservo|stepper-motor|
<p>I am following a guide that recommends using stepper motors and it has an approximate holding and operating torque. It says that if you don't know the operating torque, it is often half of the holding torque. I am adapting this to use with a servo and I was wondering can this same formula be used with a servo. My servo has approximately <code>1.98 kg/cm</code> of torque so does that mean that I can estimate the operating torque would be <code>~1 kg/cm</code>?</p> <p><strong>A couple of things:</strong></p> <ul> <li>I know operating torque and holding torque are different. This is just a estimate-it isn't an exact science.</li> <li>I know servos are harder to find their location (75 degrees, etc.) than to use a stepper and assume that it worked. I have external means of finding the location.</li> </ul>
For servos, can it be implied that `Holding Torque = Operating Torque * 2` like with steppers?
<p>After the propagation step, we need to find the parameters of the Gaussian which describes our new estimate. These are the mean $\mu$ and the covariance $\Sigma$. You asked about the mean specifically, so here we go.</p> <p>Note that the mean of the propagated state is, by definition, the expectation of the propagated state. Taking the expectation of the Taylor series expansion, we have (note I'm using the shorter notation of a superscript minus for the prior):</p> <p>$$x = g(x^-, u) $$</p> <p>And by Taylor series expansion about the prior mean $\mu^-$,</p> <p>$$g(x^-, u) \approx g(\mu^-, u) + g^\prime(\mu^- , u) (x^- - \mu^-)$$</p> <p>So, let's use $\mathbb{E}$ as the expectation operator to find the expected value of the output (which is the mean of our propagated estimate).</p> <p>$$\mu = \mathbb{E}[x] = \mathbb{E}[g(x^-, u)]$$ $$\mu \approx \mathbb{E}[g(\mu^-, u) + g^\prime(\mu^- , u) (x^- - \mu^-)]$$</p> <p>$$\mu \approx \mathbb{E}[g(\mu^-, u)] + \mathbb{E}[g^\prime(\mu^- , u) (x^- - \mu^-)]$$</p> <p>Now substitute in $G$, the Jacobian of the function $g$, also evaluated at the mean.</p> <p>$$\mu \approx g(\mu^-, u) + \mathbb{E}[G\cdot (x^- - \mu^-)]$$ $$\mu \approx g(\mu^-, u) + G\cdot \mathbb{E}[x^-] - G\cdot \mu^- $$ And by linearity of expectation, the last two terms cancel, because $\mathbb{E}[x^-] = \mu^-$ (the mean of the prior). So the propagated mean is simply $\mu \approx g(\mu^-, u)$.** </p> <p>Now, we have to compute the covariance, defined as $\mathbb{E}\left[ (x-\mathbb{E}[x])(x-\mathbb{E}[x])^T\right]$. You can easily derive this by following the same steps. First expand the terms inside the first expectation, then substitute the linear approximation of $g$ as before. Some matrix algebra will yield the EKF covariance update. Note that you have to include the "white noise" Jacobian. </p> <p>See <a href="http://services.eng.uts.edu.au/~sdhuang/Extended%20Kalman%20Filter_Shoudong.pdf" rel="nofollow">here</a> for a possibly better derivation.</p> <hr> <p>**The real approximation the EKF makes is that the term $\mathbb{E}[g(x^-, u)]$ is the same as $g(\mu^-,u)$, i.e. that truncating the Taylor series after the linear term is good enough, which holds as long as the function $g()$ isn't too crazy.</p>
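<p>In code, the practical outcome of this derivation is just the familiar propagation step $\mu \leftarrow g(\mu, u)$, $\Sigma \leftarrow G \Sigma G^T + R$. A small numpy sketch with a toy motion model (the model and numbers are made up purely for illustration):</p> <pre><code># The end result of the derivation above, as the usual EKF propagation step.
import numpy as np

def ekf_propagate(mu, Sigma, u, g, G_fn, R):
    """mu' = g(mu, u),  Sigma' = G Sigma G^T + R, with G = dg/dx evaluated at (mu, u)."""
    G = G_fn(mu, u)
    return g(mu, u), G @ Sigma @ G.T + R

# toy example: unicycle-like model, x = [px, py, theta], u = [v*dt, dtheta]
g = lambda x, u: np.array([x[0] + u[0] * np.cos(x[2]),
                           x[1] + u[0] * np.sin(x[2]),
                           x[2] + u[1]])
G_fn = lambda x, u: np.array([[1, 0, -u[0] * np.sin(x[2])],
                              [0, 1,  u[0] * np.cos(x[2])],
                              [0, 0, 1]])

mu, Sigma = np.zeros(3), np.eye(3) * 0.01
mu, Sigma = ekf_propagate(mu, Sigma, np.array([1.0, 0.1]), g, G_fn, np.eye(3) * 0.001)
print(mu)
print(Sigma)
</code></pre>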
1006
2013-03-03T20:00:06.140
|kalman-filter|theory|ekf|
<p>In <a href="http://www.probabilistic-robotics.org/" rel="nofollow">Probablistic Robotics by S. Thrun</a>, in the first section on the Extended Kalman Filter, it talks about linearizing the process and observation models using first order Taylor expansion. </p> <p>Equation 3.51 states:</p> <p>$g(u_t,x_{t-1}) \approx g(u_t,\mu_{t-1}) + g\prime(u_t, \mu_{t-1})(x_{t-1} - \mu_{t-1})$</p> <p>I think $\mu_{t-1}$ is the state estimate from the last time step. My question is: what is $x_{t-1}$? </p> <p>Also, the EKF algorithm following this (on table 3.3) does not use the factor $(x_{t-1} - \mu_{t-1})$ anywhere, only $g\prime(u_t, \mu_{t-1})$. So after being confused about $x_{t-1}$, I'm left wondering where it went in the algorithm.</p>
Taylor Series expansion for EKF
<p>It sounds like you're planning to create a 2×2 configuration of individual cells, which is a common configuration: <a href="https://i.stack.imgur.com/q5qFK.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/q5qFK.jpg" alt="4 pack 6.4V lithium battery" /></a><br /> <sub>(source: <a href="http://img.diytrade.com/cdimg/113732/22434825/0/1310733466/lifepo4_lithium_battery_6_4v_1ah.jpg" rel="nofollow noreferrer">diytrade.com</a>)</sub></p> <p>The 6.4V charger is designed for a pair of 3.2V cells in series, so you should be able to add as many parallel stacks of these as you like. They will simply take longer to charge.</p> <p>At one of my previous jobs, we had a homemade 1.5kW battery that was made of 648 (yes, six hundred and forty eight) LiPoly 18650 cells: 24 supercells in series, each cell being 27 batteries in parallel. <strong>I do not recommend this setup.</strong> The cells within each supercell will charge evenly because they are in parallel, but the series cells do not. Here's an example of what the supercell charges looked like when we serviced the battery and each supercell was equally charged (ours were 3.7V min, 4.2V max):</p> <p><a href="https://1.bp.blogspot.com/_Hgp1nzunDXk/Ss-ltn0RyWI/AAAAAAAAAUo/9jzuvjOXeCw/s1600-h/chart_battery_recharge_456.png" rel="nofollow noreferrer"><img src="https://1.bp.blogspot.com/_Hgp1nzunDXk/Ss-ltn0RyWI/AAAAAAAAAUo/9jzuvjOXeCw/s320/chart_battery_recharge_456.png" alt="Mission 456" /></a></p> <p>As you can see, they mostly discharge and charge together -- you can see the point at which we plugged in the charger. A few testing days later, one of the cells began drifting apart:</p> <p><a href="https://4.bp.blogspot.com/_Hgp1nzunDXk/SuVxTh3O4AI/AAAAAAAAAX4/lIHVyUZ-8ZU/s1600-h/chart-batteryvoltage-546.png" rel="nofollow noreferrer"><img src="https://4.bp.blogspot.com/_Hgp1nzunDXk/SuVxTh3O4AI/AAAAAAAAAX4/lIHVyUZ-8ZU/s320/chart-batteryvoltage-546.png" alt="Mission 546" /></a></p> <p>Eventually, they look like this (what it looked like before servicing):</p> <p><a href="https://4.bp.blogspot.com/_Hgp1nzunDXk/SrOZSPKes0I/AAAAAAAAAQY/0DH79FlrRD0/s1600-h/Battery+Voltages+Recharging.png" rel="nofollow noreferrer"><img src="https://4.bp.blogspot.com/_Hgp1nzunDXk/SrOZSPKes0I/AAAAAAAAAQY/0DH79FlrRD0/s320/Battery+Voltages+Recharging.png" alt="Mission 394" /></a></p> <p>The moral of the story is that using many cells in series doesn't guarantee even charging, because each cell has a slightly different capacity. Using cells in parallel is better because it keeps their voltages equal.</p> <p>For your particular case: if you don't charge all 4 cells in parallel, make sure that you monitor the voltage of each set of cells in the series -- one parallel pair will have a higher voltage than the other parallel pair. Don't discharge them below the minimum rated voltage of the low pair, and don't charge them above the maximum rated voltage of the higher pair. The difference in voltage between the two sets of cells will impact your available operating time, so it will make sense to manually balance the voltages (using 3.2V parallel charging) every so often.</p>
1011
2013-03-04T14:46:52.133
|battery|
<p>I'm looking to use 4 of <a href="http://www.batteryspace.com/lifepo426650cell32v3300mah16.5arate10whunapproved.aspx" rel="nofollow noreferrer">these 3.2V LiFePO4 batteries</a>. I intend to have 2 pairs of 2 in series in parallel. So two 6.4V battery packs in parallel. Because my setup will be run off of this, it will also be easiest to recharge the batteries using the same setup. To accomplish this, I'm looking to charge all the batteries at once using <a href="http://www.batteryspace.com/Smart-Charger-0.7-A-for-6.4V-2-cells-LiFePO4-Battery-Pack-100-240VAC.aspx" rel="nofollow noreferrer">this 6.4V LiFePO4 smart charger</a>. From a simplistic standpoint, the resulting voltage should be correct and this should work fine. However, I know (from a <a href="https://robotics.stackexchange.com/questions/634/how-to-charge-a-lifepo4-battery">previous question</a>) that LiFePO4 battery chargers are a bit more complex than a basic voltage supply and check. Would the setup I've described work correctly? And in general, will a LiFePO4 smart charger be able to charge several batteries of the correct voltage at the same time so long as it doesn't try to charge them at too high an amperage? Or does a LiFePO4 battery also have a minimum amperage cutoff point to charge such that trying to charge more than one battery at a time will cause problems? Any other issues I didn't mention? Thank you much!</p>
Charging multiple LiFePO4 batteries at the same time?
<p>I've made a small robot that follows a virtual path drawn on a smartphone under an NSF funded research grant.</p> <p>The way you would make a robot follow a virtual path is to use the Pure Pursuit path following algorithm along with dead reckoning.</p> <p>I developed the following C++ Arduino libraries for virtual robot path following:</p> <ul> <li>Dead reckoning library - Dead reckoning for differential drive robots using encoders. The key limitation with this library is that only encoders are used to determine position.</li> <li>PreMo (Precise mover library) - Contains the implementation for pure pursuit for path following. Designed for easy implementation for students and hobbyists.</li> </ul> <p>These libraries can be modified to work on other platforms than the Arduino.</p> <p><a href="https://www.ri.cmu.edu/pub_files/pub3/coulter_r_craig_1992_1/coulter_r_craig_1992_1.pdf" rel="nofollow noreferrer">About pure pursuit algorithm</a></p> <p><a href="https://sites.google.com/stonybrook.edu/dead-reckon-lib/" rel="nofollow noreferrer">Dead reckoning library</a></p> <p><a href="https://github.com/purwar2016/PreciseMovement-library" rel="nofollow noreferrer">Precise movement library</a></p>
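<p>To give a feel for the algorithm (this is a generic sketch, <em>not</em> the API of the libraries above), the core of pure pursuit is: pick a goal point on the path one look-ahead distance away, express it in the robot frame, and steer with a curvature proportional to its lateral offset. For a differential-drive robot that might look like:</p> <pre><code>#include &lt;cmath&gt;

// Hypothetical example; the pose would come from your dead-reckoning code.
struct Pose { double x, y, heading; };

// goalX/goalY: point on the path at the look-ahead distance L (world frame).
// Computes left/right wheel speeds for a differential drive of given track width.
void purePursuitStep(const Pose&amp; p, double goalX, double goalY,
                     double L, double track, double v,
                     double&amp; vLeft, double&amp; vRight)
{
    // Transform the goal point into the robot frame (lateral offset only).
    double dx = goalX - p.x;
    double dy = goalY - p.y;
    double yLocal = -std::sin(p.heading) * dx + std::cos(p.heading) * dy;

    // Pure pursuit: curvature of the arc that reaches the goal point.
    double curvature = 2.0 * yLocal / (L * L);

    // Convert forward speed + curvature into wheel speeds.
    vLeft  = v * (1.0 - curvature * track / 2.0);
    vRight = v * (1.0 + curvature * track / 2.0);
}
</code></pre> <p>The look-ahead distance is the main tuning knob: small values track the path tightly but oscillate, large values cut corners.</p>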
1020
2013-03-07T15:52:55.750
|mobile-robot|localization|wheeled-robot|industrial-robot|line-following|
<p>I would like to build a robot which follows a virtual path (not a visible path like a 'black line on a white surface', etc.).</p> <p>I got enthusiastic after seeing some sci-fi videos which show robots carrying goods and materials in a crowded place, and they really don't follow a physical line. They sense obstacles, depth, etc.</p> <p>I would like to build one such robot which follows a specific (virtual) path from point A to B.</p> <p>I have tried a couple of things:</p> <ol> <li><p>Using a magnetic "Hall effect" sensor on the robot and a current-carrying wire (beneath the table). The problem here was that the sensing range of the Hall effect sensor is so small (&lt; 2 cm) that it is very difficult to judge whether the robot is on the line or off the line. Even using a series of magnets couldn't solve this issue, as my table is 1 inch thick. So this idea flopped :P</p></li> <li><p>Using an ultraviolet paint (on a line) and UV LEDs on the robot as sensors. This gives the robot a rather zig-zag motion, and due to the potential hazards of using a UV light source, this idea flopped too :P</p></li> </ol> <p>I finally thought of having a camera on top and using image processing algorithms to see whether the robot is on the line or diverging.</p> <p>Is there any better solution than this? Really looking for some creative and simple solutions. :)</p>
How to make an "invisible line following robot"?
<p>Servo technology has evolved a lot over the last few years.<br /> You can check out this video showcasing the metal servos that power Petoi's open-source robot dog Bittle: <a href="https://www.youtube.com/watch?v=FRE-sp05ia8" rel="nofollow noreferrer">https://www.youtube.com/watch?v=FRE-sp05ia8</a> These digital servos are coreless, with an operating travel of 270° and a stall torque of 3 kg·cm.</p>
1023
2013-03-08T08:44:20.903
|rcservo|walking-robot|
<p>I'm building a quadruped and I'm not sure of the features I should be looking for in a servo motor, e.g. digital vs analog, single vs dual bearings. Some of the ones I'm considering are <a href="http://www.servodatabase.com/compare?servos=806,1660,1664,1666" rel="nofollow">here</a></p>
Servo motor considerations for a quadruped
<p>A PID loop and a so-called PIV loop with equal gains should have the same response to a disturbance, so I'm not sure why the claim that the disturbance response is better or worse.</p> <p>As mentioned, the derivative "kick" will be less, which can be a good thing if you give the thing sharp inputs.</p> <p>In addition, there can be some benefits as the thing comes out of integrator saturation, depending on how you implement your anti-windup.</p> <p>Mostly, the so-called PIV loop is just a way of affecting the zeros of the closed-loop transfer function. It's a special case of a more general scheme where your controller output is (in Laplace notation) $$Y(s)=\frac{k_{fi}U(s) - k_{bi}X(s)}{s} + \left(k_{fp}U(s) - k_{bp}X(s)\right) + \left(k_{fd}U(s) - k_{bd}X(s)\right)s$$ where $Y$ is the controller output, $U$ is the system command and $X$ is the controlled variable, while the various $k_{xx}$ are forward and backward integral, derivative, and proportional gains. In this scheme you tune the various feedback gains ($k_{bx}$) to get the loop (and hence disturbance) response that you want, and you tune the forward gains ($k_{fx}$) to improve the response to a command change, by whatever criteria you have for "better".</p> <p>Setting all the forward and reverse gains equal gets you a plain ol' PID, while setting $k_{bp}=0$ and $k_{bd}=0$ gets you the so-called "PIV" controller.</p>
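<p>A discrete-time sketch of the general structure above might look like this (all names are mine, and the integrator would need anti-windup in practice):</p> <pre><code>// Two-degree-of-freedom controller: separate forward (command) and
// backward (feedback) gains for the I, P and D terms, as in the
// expression above. Choosing all forward gains equal to the backward
// gains collapses it to an ordinary PID acting on the error.
struct Gains { double kfi, kbi, kfp, kbp, kfd, kbd; };

class TwoDofController {
public:
    TwoDofController(const Gains&amp; g, double dt) : g_(g), dt_(dt) {}

    // u = command (setpoint), x = measured variable. Returns controller output y.
    double update(double u, double x)
    {
        integral_ += (g_.kfi * u - g_.kbi * x) * dt_;          // integral term
        double prop = g_.kfp * u - g_.kbp * x;                 // proportional term
        double deriv = (g_.kfd * (u - uPrev_) -
                        g_.kbd * (x - xPrev_)) / dt_;          // derivative term
        uPrev_ = u;
        xPrev_ = x;
        return integral_ + prop + deriv;
    }

private:
    Gains g_;
    double dt_;
    double integral_ = 0.0, uPrev_ = 0.0, xPrev_ = 0.0;
};
</code></pre> <p>Tuning then follows the split described above: pick the feedback gains for the loop/disturbance response you want, and use the forward gains to shape the response to setpoint changes.</p>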
1032
2013-03-09T15:52:12.467
|control|otherservos|pid|
<p>I'm considering experimenting with PIV control instead of PID control. Contrary to PID, PIV control has very little explanation on the internet or in the literature. There is essentially a single source of information explaining the method, which is a <a href="http://parkermotion.com/whitepages/ServoFundamentals.pdf">technical paper by Parker Motion</a>. </p> <p>What I understand from the control method diagram (which is in the Laplace domain) is that the control output boils down to the sum of:</p> <ul> <li>Kpp*(integral of position error)</li> <li>-Kiv*(integral of measured velocity)</li> <li>-Kpv*(measured velocity)</li> </ul> <p>Am I correct? Thank you. </p>
How is PIV control performed?
<p>If you are speaking about RC style servos, then the actual drive to the motor is in the form of a pulse width modulated voltage: the servo amplifier just turns the motor on fully in one direction or the other, for a length of time determined by the difference between the measured position and the target position as determined by the width of the incoming pulse.</p> <p>Life is complicated by the newer &quot;digital&quot; servos -- I can't swear to the internal workings of these, but the drive to the motor is almost certainly PWM of some sort, just at a faster rate than the old-style analog servo amplifiers. There's no reason that it couldn't be as described below, except that there is precious little room inside of an RC servo, so there's a good chance that no current measurement is done, and hence that the parameter that the microprocessor controls is the PWM duty cycle.</p> <p>How servo mechanisms in industry are controlled is a much wider answer: basically, it can happen any way that the designer thought was appropriate. Motor drive can be via a linear amplifier that is configured to drive a current or a voltage, or it can be via a switching amplifier (i.e. PWM). In the case of a switching amplifier, the amplifier itself can be either analog or digital and (through feedback) configured for either constant-current, constant-voltage, or constant duty cycle.</p> <p>Usually in a &quot;pro&quot; servo mechanism with a brushed or brushless motor there will be an inner loop that servos the motor current (and therefor torque). This is not so much to provide any big advantage in control as it is because it vastly simplifies current limiting to the motor, which is necessary if you want to keep all that expensive smoke inside the motor where it belongs. That inner current (and hence torque) loop will often be wrapped by a speed loop that either senses the speed of the motor shaft or that deduces it from the motor's back-EMF (and which provides a convenient means of speed limiting the motor, if that is necessary). Finally, that speed loop is wrapped by a position loop.</p>
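<p>As a rough sketch of the nested structure described above (position loop wrapping a speed loop wrapping a current loop) - gains, limits and the simple P-only loops are placeholder assumptions; real drives run the inner loops much faster and add PI action, feed-forward, filtering and anti-windup:</p> <pre><code>// Illustrative cascade only, not any particular drive's implementation.
double clamp(double v, double lo, double hi)
{
    return v &lt; lo ? lo : (v &gt; hi ? hi : v);
}

// Returns a PWM duty cycle in [-1, 1] for the H-bridge.
double controlStep(double posTarget,
                   double posMeas, double spdMeas, double curMeas)
{
    const double kPos = 5.0, kSpd = 0.2, kCur = 2.0;      // made-up P gains
    const double maxSpeed = 100.0, maxCurrent = 2.0;      // made-up limits

    // Outer position loop produces a speed setpoint (convenient speed limit).
    double spdCmd = clamp(kPos * (posTarget - posMeas), -maxSpeed, maxSpeed);

    // Middle speed loop produces a current (torque) setpoint (current limit
    // keeps the expensive smoke inside the motor).
    double curCmd = clamp(kSpd * (spdCmd - spdMeas), -maxCurrent, maxCurrent);

    // Inner current loop produces the PWM duty cycle.
    return clamp(kCur * (curCmd - curMeas), -1.0, 1.0);
}
</code></pre>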
1033
2013-03-09T17:01:12.603
|otherservos|pid|
<p>I'm trying to learn about servo control. I have seen that the most generic position control method for servos is PID, where the control input is position error. However, I am not sure about what is the actuated quantity. I am guessing that it is one of:</p> <ul> <li>Voltage applied to the motor</li> <li>Current applied to the motor</li> </ul> <p>I am then guessing that the actuated quantity gets turned into one of:</p> <ul> <li>Torque that the motor exerts</li> <li>Angular velocity that the motor runs at</li> </ul> <p>I haven't been able to get my hands on and explicitly control a physical servo so I cannot confirm that the actuated quantity is any of these. I know very little of the electronics that controls the motor. It might well be that the controlled quantities are different for different series servos. </p> <p>My bet is on torque control. However, assume that the servo is holding a weight at a distance (so it is acting against gravity), which means an approximately constant torque load. In this case, if the position error is zero and the servo is at rest, then each of P, I and D components are zero, which means the exerted torque is zero. This would cause the weight to sink, which is countered by the error in its position causing P,I components to increase. Wouldn't this situation cause the lifted weight to oscillate and balance at a constant position which is significantly different from the goal position? This isn't the case with the videos of servos I have seen lifting weights. Or is this the case and friction is smoothing everything out? Please help me understand. </p>
What is the actual physical actuated quantity when controlling the position of a servo?
<p>The Rock framework has such a widget (based on OpenPilot/GCS).</p> <p>package</p> <p><a href="http://rock-robotics.org/master/pkg/gui/map2d/index.html" rel="nofollow">http://rock-robotics.org/master/pkg/gui/map2d/index.html</a></p> <p>code</p> <p><a href="http://gitorious.org/rock-gui/map2d" rel="nofollow">http://gitorious.org/rock-gui/map2d</a></p>
1036
2013-03-10T07:07:13.193
|gps|visualization|
<p>Is there a web mapping tool that allows developers to plot GPS data of autonomous vehicles/robots?</p> <p><code>Google Maps</code> forbids it. See <a href="https://developers.google.com/maps/terms">10.2.C</a>. The <code>Google Earth</code> terms of use link jumps to the same page. <code>Bing</code> Maps looks similar (see <a href="http://www.microsoft.com/maps/product/terms.html">3.2.(g)</a>).</p> <p>What I want is an internet-based tool that shows satellite images and/or a map, on which plots can be overlaid using its API. <a href="https://github.com/ros-visualization/rqt_common_plugins/issues/41">I'm making a generic GPS plotter</a> on <code>ROS</code> that could be used both for slow robots and fast vehicles/cars.</p> <p>Thanks!</p>
Web mapping that can be used for autonomous vehicles/robots
<p>The Solid nodes (including Robot and Supervisor nodes) have a DEF name, like any other node and it is usually written in upper-case. These nodes also have a field "name". Both may be used by a Supervisor, it depends on which function is called. </p> <p>In my case, the function getFromDef() expects a DEF name as argument, and not the name from the "name" field. The DEF name of my e-puck is "EPUCK" - hence modifying controller to:</p> <pre><code>Node epuck = getFromDef("EPUCK"); </code></pre> <p>solved the problem.</p>
1039
2013-03-11T00:45:09.677
|mobile-robot|simulator|webots|
<p>I am writing a method (Java) that will reset the position of the e-puck in Webots. I have been following the tutorial on the <a href="https://cyberbotics.com/doc/guide/tutorial-8-the-supervisor" rel="nofollow noreferrer">Supervisor approach</a>. I have two controllers in my project:</p> <ul> <li>SupervisorController extends <a href="https://www.cyberbotics.com/doc/reference/supervisor" rel="nofollow noreferrer">Supervisor</a> - responsible for the genetic algorithm and resetting the e-puck's position</li> <li>EpuckController extends <a href="https://www.cyberbotics.com/doc/reference/robot" rel="nofollow noreferrer">Robot</a> - drives the robot</li> </ul> <p>The robots are communicating via Emitter and Receiver, and everything works fine except the position reset. This is what I'm doing in SupervisorController:</p> <pre><code>412 Node epuck = getFromDef(&quot;epuck&quot;);
413 Field fldTranslation = epuck.getField(&quot;translation&quot;);
</code></pre> <p>And as a result I get this exception:</p> <pre><code>[SupervisorController] Exception in thread &quot;main&quot; java.lang.NullPointerException
[SupervisorController] at SupervisorController.initialise(SupervisorController.java:413)
[SupervisorController] at SupervisorController.main(SupervisorController.java:497)
</code></pre> <p>The epuck variable is null. I tried calling different methods on epuck, and they all resulted in a NullPointerException. The name of the e-puck matches the world file.</p> <pre><code>DEF EPUCK DifferentialWheels {
  translation 0.134826 -0.000327529 0.107963
  rotation 0.0244439 0.999246 -0.0301538 1.95838
  children [
    (........)
  ]
  name &quot;epuck&quot;
  controller &quot;EpuckController&quot;
  axleLength 0.052
  wheelRadius 0.0205
  maxSpeed 6.28
  speedUnit 0.00628
}
</code></pre> <p>I would appreciate any advice on how to get a handle to the robot or where to look for issues in the simulation/code.</p>
Resetting position of e-puck in Webots using Supervisor node - problem with getting a handle to the robot
<p>I'm biased (since I have TA'd his course twice), but I think Prof. Marin Kobilarov's derivation of the KF/EKF is far superior to Thrun's. I have read almost every book on KF/EKFs mentioned in previous answers, and I think Marin's lecture notes are excellent for the theoretical aspects of optimal state estimation. Thrun's presentation is far from the worst, though. The beauty of Marin's approach is that he develops the KF measurement updates from an optimization standpoint: he starts with a simple least-squares approach and goes through the necessary and sufficient conditions for optimality for both linear and nonlinear measurement models. </p> <p>The course website also has some easily-digestible matlab code: <a href="https://asco.lcsr.jhu.edu/en530-603-f2017-applied-optimal-control/" rel="nofollow noreferrer">https://asco.lcsr.jhu.edu/en530-603-f2017-applied-optimal-control/</a></p> <p>Lecture 10 is specifically about optimal state estimation, namely the KF/EKF: <a href="https://asco.lcsr.jhu.edu/docs/EN530_603_F2015/lecture10.pdf" rel="nofollow noreferrer">https://asco.lcsr.jhu.edu/docs/EN530_603_F2015/lecture10.pdf</a></p> <p>PS. During my orals/quals, I was asked to derive the EKF by hand from memory. If I hadn't learned the above approach, which I find very intuitive, I think I would have been lost in mathematical minutia. As always, YMMV.</p>
1047
2013-03-12T07:59:49.377
|kalman-filter|books|
<p>Currently I am reading a book by Mr. Thrun: <a href="http://www.probabilistic-robotics.org/" rel="noreferrer">Probabilistic Robotics</a>. I find it really helpful for understanding the concept of filters, however I would like to see some code in e.g. MATLAB. Is the book "Kalman Filter for Beginners: with MATLAB Examples" worth buying, or would you suggest some other source to learn the code snippets from?</p>
Source to learn Kalman Fusion, explanatory code snippets
<p>Yes, it is possible. You need the Android Accessory API and <a href="http://developer.android.com/tools/adk/index.html" rel="nofollow">Android Accessory Development Kit</a>. It is based off Arduino, and is open. </p> <p>In general, if the phone + the ADK is cheaper than buying the components separately (I suspect it would be), then I'd buy the phone + ADK.</p> <p>Your other option is to buy a phone, and a bluetooth shield for a microcontroller, then use the bluetooth connection to send data between the phone and microcontroller.</p>
1049
2013-03-12T16:25:48.563
|arduino|usb|
<p>I want to build a robot and I need a bunch of modules to track it, like GSM/GPS, WiFi and a camera. If I try to buy each module separately, it will cost me approximately 300 dollars each in Pakistan. On the other hand, an Android-enabled phone having all of them can be purchased for just $250. I was wondering if it is possible to interface Android phones (like Huawei or Google Nexus) with 8-bit microcontrollers or an Arduino? The only port available on Android phones is USB, and the Arduino supports USB. Is it possible to somehow connect the two?</p>
Is it possible to interface android mobile as GSM and GPS module with arduino based robotic applications?
<p>That is a very good question, and depends on the design. There are in general two ranges for components which are temperature sensitive. The operational range gives the temperature at which the component can be actively used. Within the survival range the component should generally take no harm but may not be actively used. Often what is even more demanding on the components than extreme temperatures is temperature cycling. Mars, for example, usually involves a large number of cycles, whereas the Moon has the more extreme temperatures but less cycling.</p> <p>In principle there are two ways to handle low temperatures for space rovers:</p> <ul> <li><p>Heating your rover so that all the components that are temperature sensitive stay within their respective limits. You can use active heating, for example by using energy from your battery. Depending on the battery and your insulation you may not hold out very long though. Radioactive heater units (RHU) are another alternative that can be very effective.</p></li> <li><p>Increasing the temperature tolerance of your system is another option. This can be done by selection of tolerant components, or potentially extending the range of existing components. Usually the most temperature sensitive component is the battery. Insulation is also possible, but may only get you so far.</p></li> </ul> <p>In general one could say that the thermal design is one of the most critical aspects for space probes, and often decides the feasibility of a mission.</p>
1050
2013-03-12T17:18:03.467
|electronics|ugv|reliability|
<p>For example, if a rover has working temperature range of -70 to +120 Celsius, how does it survive and then restore itself if the temperature drops to -150 degrees for several months?</p>
How do space rovers survive at very low temperatures?
<p>Tripod gait is a stable gait. Moreover, it is a statically stable gait, meaning that if you lock the joints with 3 legs down and turn the power off, the robot will not topple. This implies that you don't need to dynamically balance the robot while walking. </p> <p>The reason a legged robot falls down is that its center of gravity does not fall on its support polygon (in the tripod gait case, it is a triangle with each corner on one leg that's touching the ground). So what you should do is to measure the center of gravity and guarantee that it never leaves the support polygon of your robot while walking, by adjusting your leg motion. Of course, this is all for a statically stable gait. Dynamically stable gaits allow the center of gravity to leave the support polygon in a controlled manner. But as I said, the tripod gait is statically stable. </p> <p>I'm assuming that you are using naked DC motors without encoders or gear trains. There are two issues about this:</p> <ul> <li>First, make sure that you are getting enough total torque at each leg triplet that lifts the robot. It may be the case that the DC motors cannot lift the robot on their own without a gear train. </li> <li>Second, errors tend to grow without bound when performing such a task, angular position errors being one example. This is simply due to noise and imperfect modeling of the motor. So in a sense, DC motors are not actually "controlled", you just apply an input while not knowing what the output is. You must apply closed loop control, which is done by getting feedback from an encoder. Then, you can sense what the angular position of your motor is, determine your error, and apply an input to correct that error.</li> </ul> <p>Both of these problems are addressed in a servomotor, which is basically a DC motor, a gear train to increase torque and reduce angular velocity, and an encoder. I very much suggest using servos for your leg joints. </p>
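<p>The static-stability check described above is simple geometry: project the center of gravity onto the ground plane and test whether it lies inside the triangle formed by the three stance feet. A minimal sketch (2D, same-sign cross-product test):</p> <pre><code>struct Pt { double x, y; };

static double cross(const Pt&amp; a, const Pt&amp; b, const Pt&amp; p)
{
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// True if the projected center of gravity lies inside the support triangle
// formed by the three feet currently on the ground.
bool staticallyStable(const Pt&amp; cog, const Pt&amp; f1, const Pt&amp; f2, const Pt&amp; f3)
{
    double c1 = cross(f1, f2, cog);
    double c2 = cross(f2, f3, cog);
    double c3 = cross(f3, f1, cog);
    return (c1 &gt;= 0 &amp;&amp; c2 &gt;= 0 &amp;&amp; c3 &gt;= 0) ||
           (c1 &lt;= 0 &amp;&amp; c2 &lt;= 0 &amp;&amp; c3 &lt;= 0);
}
</code></pre> <p>In practice you would also want some margin, i.e. require the center of gravity to stay a few centimetres inside the triangle rather than right on its edge.</p>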
1056
2013-03-13T10:09:37.960
|motor|wheeled-robot|gait|
<p>I need help with figuring out the following things:</p> <p>I'm developing a hexapod R-Hex type model with a tripod gait. However, the angles obtained during the robot's walking in real life are not perfectly aligned. Because of this the robot often collapses and falls even on perfectly straight terrain.</p> <p>My configuration is: </p> <ul> <li>60 rpm dc motors for each leg</li> <li>H-bridge motor drivers for each dc motor</li> <li>Atmega 8</li> </ul> <p>Should I change the gait type?</p> <p>Or is Tripod sufficiently stable?</p> <p>Are DC motors providing fine enough control or do I need servos?</p> <p>Do I need a DC motor with an Encoder? What will be its benefits?</p> <p>What could be done to improve performance of the robot?</p> <p><strong>Added: Would a Stepper motor work as well, instead of a servo?</strong></p>
Perfecting Tripod Gait - Building a R-Hex Robot
<blockquote> <p>"I'm going to need analogue control"</p> </blockquote> <p>This is almost always done by rapidly connecting and disconnecting the motor to power, using PWM or something similar. The switching is faster than the motor can respond, and often the designer picks a switching frequency faster than the 20 kHz humans can hear.</p> <h2>Generating PWM signals</h2> <p>There are 3 popular methods of generating PWM signals: ( <a href="https://electronics.stackexchange.com/questions/6676/which-sipo-chip-is-better-74hc4094-or-74hc595-or-something-else">a</a>, <a href="https://electronics.stackexchange.com/questions/11633/pov-globe-speed-questions/11664#11664">b</a> )</p> <ul> <li>Generate the PWM in software ("Bit-banging Pulse Width Modulation"). Typically this is the lowest-cost approach. With enough 74HC595 or TPIC6595 chips, one Arduino can control any number of motors. (But I doubt you'll be able to get 20 kHz PWM frequency this way).</li> <li>Generate the PWM using on-chip PWM hardware on the microcontroller. With enough Arduinos, you can control any number of motors at a high PWM frequency with 3 or 4 motors per Arduino.</li> <li>Generate the PWM using dedicated PWM peripheral chips such as the TLC5947, which the microcontroller occasionally loads with a new PWM duty cycle. With enough TLC5947 chips, one Arduino can control any number of motors and still maintain a high PWM frequency.</li> </ul> <h2>Converting PWM signals into something that can drive a motor</h2> <p>It appears your motors are rated at 85 mA start, 75 mA continuous. That is really tiny for a motor. But it's still more power than most digital logic chips can drive directly. As you can see from this list, only one of the chips mentioned above (the TPIC6595) is really designed to directly drive that amount of power:</p> <ul> <li>TPIC6595 datasheet: test conditions ... 250 mA continuous sink current capability.</li> <li>TLC5947 datasheet: test conditions ... 30 mA continuous sink current</li> <li>ATmega328P datasheet: "test conditions ... 20 mA at VCC = 5V, 10 mA at VCC = 3V... Pins are not guaranteed to source current greater than the listed test condition."</li> <li>74HC595 datasheet: test conditions ... 7.8 mA.</li> </ul> <p>We typically use some kind of "buffer" between the digital logic chips (any of the above chips) and a motor.</p> <p>For motors that we only need to turn in one direction, we typically use a flyback diode and either an nFET or an NPN transistor as the "buffer":</p> <ul> <li>ULN2803A or ULQ2803A (transistor + flyback diode array) datasheet: test conditions ... over 150 mA continuous.</li> <li>There are thousands of discrete transistors and diodes that can easily handle: 10 V, 1 A. ( <a href="https://electronics.stackexchange.com/questions/57831/how-to-determine-what-transistor-is-needed">a</a> )</li> </ul> <p>For motors that we need to turn both "forwards" and "reverse", we typically use 4 transistors arranged in an H bridge as the "buffer":</p> <ul> <li>L293D H bridge datasheet: test conditions ... 600 mA.</li> <li>There are thousands of discrete transistors and diodes that can easily handle: 10 V, 1 A.</li> </ul>
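<p>As a rough illustration of the first ("bit-banging") option, here is a minimal Arduino sketch that generates 8 independent PWM channels through a single 74HC595. The pin numbers, duty values and 8-bit resolution are arbitrary choices of mine, and as noted above, don't expect anywhere near 20 kHz from this approach:</p> <pre><code>// Software PWM on 8 channels via one 74HC595 shift register.
const int dataPin  = 2;   // 74HC595 SER
const int clockPin = 3;   // 74HC595 SRCLK
const int latchPin = 4;   // 74HC595 RCLK

byte duty[8] = {0, 32, 64, 96, 128, 160, 192, 255};  // per-channel duty (0-255)

void setup() {
  pinMode(dataPin, OUTPUT);
  pinMode(clockPin, OUTPUT);
  pinMode(latchPin, OUTPUT);
}

void loop() {
  static byte counter = 0;
  byte outputs = 0;

  // A channel is on while the counter is below its duty value.
  for (int i = 0; i &lt; 8; i++) {
    if (counter &lt; duty[i]) {
      outputs |= (1 &lt;&lt; i);
    }
  }

  digitalWrite(latchPin, LOW);
  shiftOut(dataPin, clockPin, MSBFIRST, outputs);
  digitalWrite(latchPin, HIGH);

  counter++;   // wraps at 256, giving 8-bit resolution
}
</code></pre>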
1064
2013-03-14T11:18:41.797
|arduino|motor|
<p>I want to prototype a therapeutic device that will have a lot of tiny mobile-phone type vibration motors <a href="http://www.cutedigi.com/robotics/vibration-motor.html" rel="nofollow">like this one</a> in it, and I want to be able to activate them in any configuration I want. I'm going to need analogue control, and support for logic like perlin noise functions and so on. I'm not really going to need sensor data or any other kind of feedback beyond a few buttons for control. I just need fine control over lots of little motors. Depending on what results I can get out of, say, 25 motors on the initial prototype, I may decide that I'm done, or that it needs more motors. I also don't have an enormous budget.</p> <p>So the question is, is Arduino a good fit for a project like this? Is it feasible to get that many motors working off the same controller? I know some of the Arduino boards have up to 50-something serial outputs, but from what I can tell, that may only translate to 25 or so motors, so I'd need a way to extend the board with more serial outputs if I wanted to try more.</p> <p>Additionally, if Arduino isn't a good fit, what would be better? Could I try something directly out of a serial port on my PC? I've never tried to home-cook a robotics application before, so I'm not really aware of the options.</p>
Prototyping a device with 25-100 small DC 3.0V motors, is Arduino a good fit?
<p>I have seen this behaviour using the NxtMMX with the EV3 (while writing the driver for ev3dev). I am fairly certain that this is the speed regulation PID overshooting and then slowing down. In NXC you can use <code>MMX_SetPerformanceParameters</code> to change the PID parameters and possibly get rid of the overshoot.</p>
1068
2013-03-15T01:42:18.520
|nxt|mindstorms|not-exactly-c|
<p>I am trying to run an NXT motor using the mindsensors motor multiplexer at a slow speed. When I turn it on, it tends to jump approx 20 to 40 degrees before moving at a slow speed. Has anyone seen this behavior?</p> <p>I am using NXT 1.0 with firmware downloaded from <code>lms_arm_mindsensors_129.rfw</code>. Sample code in NXC (I am using Bricx Command Center as my IDE) is as follows:</p> <pre><code>MMX_Run_Degrees(SensorPort, Addr, MMX_Motor_2, MMX_Direction_Reverse,
                MMX_Speed_Slow, 220, MMX_Completion_Wait_For,
                MMX_Next_Action_Brake);
Wait(500);
MMX_Run_Unlimited(SensorPort, Addr, MMX_Motor_2, MMX_Direction_Forward, 5); // The jump happens here.
while(Sensor(IN_2) &lt; SENSORTHRESHOLD);
</code></pre>
Mindsensor Motor Multiplexer jump on run_unlimited
<p>The gear trains in many printers and multifunction devices, and in some scanners, have a transfer gear that drives two different gears, depending on which way the drive motor is running. The pictures below show several such transfer gears in the left-side gear train of a Brother MFC-1970MC (<a href="http://reviews.cnet.com/multifunction-devices/brother-mfc-1970mc/4507-3181_7-4842717.html" rel="nofollow noreferrer">1</a>, <a href="http://www.brother-usa.com/mfc/discontinued.aspx?ProductID=MFC1970MC#.UUkkblEVlkp" rel="nofollow noreferrer">2</a>), a 1998-vintage fax/copier/printer/scanner/answering machine multifunction device. The right-side gear train only has half-a-dozen gears in it and isn't shown.</p> <p>The 1970MC has one drive motor to power four different operations: printer paper pickup, printer paper drive, scan sheet pickup, and scan sheet drive. I suspect that the solenoid (shown in first and last pictures) selects between printer and scanner operation, and that the swinging transfer gears (some of which appear in all pictures) select between paper pickup and paper drive.</p> <p><img src="https://i.stack.imgur.com/7XqCF.jpg" alt="overview, left-side gear train"> <img src="https://i.stack.imgur.com/ry4aS.jpg" alt="closeup, rear transfer gear, CCW"></p> <p>Above: When A turns CW (clockwise), D is driven by the transfer gear C. Gears A and C go CW while B and D go CCW. The turning resistance of C causes the arm to swing to the left. When C encounters D, B continues forcing C to the left so that C and D mesh strongly.</p> <p>Below: When A turns CCW, E is driven by the transfer gear C. Gears A and C go CCW while B and E turn CW.</p> <p>(Note, there is a one-way clutch or ratchet beneath the black gear E; when E is driven, the white gear beneath E does not turn. When D is driven, it drives the two gears at very top center in picture, so driving the white gear beneath E. Thus E rotates CW whenever A rotates. D rotates CCW when A goes CW. D does not rotate when A goes CCW.)</p> <p><img src="https://i.stack.imgur.com/2ZGiH.jpg" alt="closeup, rear transfer gear, CW"> <img src="https://i.stack.imgur.com/ND0Uh.jpg" alt="closeup, front transfers, solenoid, and motor"></p> <p>At the center of the picture above are two swing transfer gears, labeled P and Q. When the solenoid is out (not energized) the hook at the end of arm R prevents Q from swinging. Also, a hook on P (more visible in first photo) slides over end of R so if P is operating Q cannot. When the solenoid is in, Q can swing, and the hook on P hits the end of R, so if Q is operating P cannot engage the gear to its right.</p>
1069
2013-03-15T13:23:35.370
|motor|
<p>Is there a way to use a single dc motor output for two different loads (by using gears, relays etc) ? Please see the illustration below:</p> <p><img src="https://i.stack.imgur.com/obCuf.png" alt=""></p> <p>To clarify the illustration, I get the dc motor power at "output 1" gear which is extended over an idler gear to output 2 gear. All three are in contact (though the picture doesn't quite show it). Load 1 and Load 2 are two separate gears connected to different loads (wheels etc) and initially not in contact with the bottom gears. On switching on the relays 1, 2 - the load bearing gears, move towards the output1 and output2 and mesh with them to drive Load 1, 2.</p>
Is there a way to use a single dc motor output for two different loads?
<p>The ring should be below the propeller. It is designed to keep the three screws in place in case some of the screws come out during the flight.</p>
1074
2013-03-16T12:37:15.563
|brushless-motor|
<p>I'm building a quadcopter and I've received my motors and propellers.</p> <p>What's the right way to assemble those together?</p> <p>I'm not confident with what I've done, as I'm not sure the propeller would stay in place on a clockwise rotating motor.</p> <p>I mean, if the motor rotates clockwise, will the screw stay tightly in place, even with the prop's inertia pushing counter-clockwise?</p> <p>Here's what I've done (of course i'll tighten the screw...) :</p> <p><img src="https://i.stack.imgur.com/mN36U.jpg" alt="Motor and propeller"></p>
How to assemble brushless motors and propellers?
<p>I like @kramer65's answer but because you indicated you might like to make an encoder, I thought I'd share some information on that.</p> <p><a href="http://www.societyofrobots.com/sensors_encoder.shtml" rel="nofollow noreferrer">http://www.societyofrobots.com/sensors_encoder.shtml</a> shows a very common way of making a rotary encoder. You attach a disk with evenly spaced holes or reflective surfaces (depending on the implementation) to your shaft. You align a light sensor with the holes and count the number of holes that pass to determine how much the shaft has turned. This has the disadvantage of not being able to determine direction, but there are ways to modify the design to add that feature. </p> <p><img src="https://i.stack.imgur.com/254ew.jpg" alt="An example of a homemade encoder."></p> <p>More information can be found in these places:</p> <p><a href="http://groups.csail.mit.edu/mac/users/pmitros/encoder/" rel="nofollow noreferrer">http://groups.csail.mit.edu/mac/users/pmitros/encoder/</a></p> <p><a href="http://thedenneys.org/pub/robot/encoders/" rel="nofollow noreferrer">http://thedenneys.org/pub/robot/encoders/</a></p>
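<p>Reading such a homemade encoder from a microcontroller is straightforward: feed the sensor output into an interrupt pin and count pulses. A minimal Arduino-style sketch (the pin, pulses-per-revolution and spool circumference are made-up values for the kite-line example):</p> <pre><code>const byte encoderPin = 2;             // must be an external-interrupt pin
const float pulsesPerRev = 20.0;       // holes in the disk
const float spoolCircumference = 0.15; // metres of cord per revolution

volatile unsigned long pulseCount = 0;

void onPulse() {
  pulseCount++;
}

void setup() {
  pinMode(encoderPin, INPUT_PULLUP);
  attachInterrupt(digitalPinToInterrupt(encoderPin), onPulse, RISING);
  Serial.begin(9600);
}

void loop() {
  noInterrupts();
  unsigned long count = pulseCount;   // copy the shared counter safely
  interrupts();

  float cordOut = (count / pulsesPerRev) * spoolCircumference;
  Serial.print("Cord paid out (m): ");
  Serial.println(cordOut);
  delay(500);
}
</code></pre> <p>As noted above, a single sensor cannot tell direction; a second, offset sensor (quadrature) fixes that.</p>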
1078
2013-03-17T17:38:49.310
|wheel|usb|encoding|
<p>For a pet project, I am trying to fly a kite using my computer. I need to measure how far a cord extends from a device. I also need to somehow read out the results on my computer. So I need to connect this to my pc, preferably using something standard like USB.</p> <p>Since the budget is very small, it would be best if I could get it out of old home appliances or build it myself.</p> <p>What technology do I need to make this measurement?</p>
How do I measure the distance that a cord (string) has moved?
<p>This sounds like a classic case for a <a href="http://en.wikipedia.org/wiki/PID_controller">PID controller</a>. The "derivative" part of this controller will help prevent the arm from oscillating as you move to a new angle, and the "integral" part will help counteract the force of gravity acting on the arm.</p>
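<p>In code, the loop itself is only a few lines; the sensor read and motor command depend on your VEX environment, so here it is as a platform-agnostic sketch with placeholder gains you would tune on the robot:</p> <pre><code>// One PID step: call at a fixed rate (e.g. every 20 ms).
// target and measured are the desired and current potentiometer readings.
double pidStep(double target, double measured, double dt)
{
    static double integral = 0.0;
    static double prevError = 0.0;

    const double kP = 2.0, kI = 0.5, kD = 0.1;    // placeholder gains

    double error = target - measured;
    integral += error * dt;                        // "I" fights gravity droop
    double derivative = (error - prevError) / dt;  // "D" damps oscillation
    prevError = error;

    return kP * error + kI * integral + kD * derivative;  // motor command
}
</code></pre>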
1082
2013-03-18T01:08:24.437
|mobile-robot|control|robotic-arm|
<p>I have a 4-bar linkage arm (or similar design) for a telerobot used in the <a href="http://vexrobotics.com" rel="nofollow noreferrer">VEX Robotics Competition</a>. I want to be able to press buttons on my PS3-style controller to have the arm raise to certain angles. I have a potentiometer to measure the 4-bar's angle.</p> <p>The potentiometer measures the angle of one of the joints in the shoulder of the mechanism, which is similar to this:</p> <p><img src="https://i.stack.imgur.com/CwnAQ.jpg" alt="enter image description here"></p> <p>What type of control should I use to stabilize the arm at these angles?</p>
Stabilizing a Robot Arm at a Specified Height
<p>It is possible to do it. Take a look at this <a href="http://www.acroname.com/robotics/info/ideas/continuous/continuous.html" rel="nofollow">link</a>. Of course you will not have the encoder capability with that servo - but you don't really have it currently either. You would have to solder a wire to the middle pin of the internal potentiometer and read the resistance to determine the current position of the shaft.</p> <p>If you need positioning, I would suggest a DC motor with an encoder, or a stepper. With a stepper you can count the steps commanded, but that is only a rough estimate of the distance the robot has moved, as you have no feedback on whether the motor has actually turned or not.</p>
1086
2013-03-19T08:15:53.780
|motor|rcservo|
<p>How do I increase the rotation range of a standard servo? Most servos have a rotation range of ~ 180 degrees. I would like to access the entire 360 degree range on the servo, partially because I would be attaching the servo's shaft to the robotic wheel and would like it to be able to make full rotations. Or is that not possible? </p> <p>I would not like to however lose the 'encoder' part of the servo which allows me to clearly specify which angular position the wheel should stop at. </p> <p>If I use gears in order to transform this system range, would it lead to loss of precision?</p> <p>In addition, would such a transform allow the wheels to continuously rotate in one direction? From what I understand, this won't work.</p> <p>Would a stepper motor with an external encoder or a dc motor with an external encoder work?</p>
Increasing the rotation range of a servo motor
<p><a href="http://docs.opencv.org/modules/refman.html" rel="nofollow">OpenCV</a> is available with python bindings. It's not geared at strictly numerical applications, but it might be sufficient. </p> <p>It includes some <a href="http://docs.opencv.org/modules/core/doc/operations_on_arrays.html" rel="nofollow">matrix math</a> and <a href="http://docs.opencv.org/modules/ml/doc/ml.html" rel="nofollow">machine learning</a> algorithms.</p> <p>I can't say you won't ever experience numerical problems, but it's not based on gfortran.</p>
1092
2013-03-19T21:26:07.550
|kinematics|python|
<p>Are there any decent Python numerical packages besides numpy? Numpy relies on gfortran, which itself must be compiled correctly for your platform to avoid hidden/insidious numerical errors in numpy. </p> <p>I need a matrix algebra package to do kinematics, path planning, and machine learning in Python that isn't sensitive to <code>gfortran</code> version and compiler options.</p>
Numpy alternatives for linear algebra and kinematics in python?
<p>The KF estimates the robot pose based on all sensor inputs <em>and the sensor correlation</em>. If you do an EKF on the compass data, you'd really need the robot pose to determine how likely a given compass reading <em>is</em>. Without that, you are just low-pass filtering (not using a probabilistic filter like the KF).</p> <p>If you filter before you put everything in the same frame, then I don't know what information you'd have to do filtering <em>on</em>. Since I don't know exactly what you mean by "usable" I assume you have converted all sensor data into the coordinate frame of the robot. In that case, filtering is very easy since you can put all the sensor readings directly into one EKF. In fact, this is the "normal" way to do filtering that I'm familiar with.</p> <p><strong>Example</strong>: Let's say your IMU is attached to a robot. The IMU will be used in estimating the pose of the robot. It doesn't matter what <em>units</em> you use as long as the IMU is telling you something about how the robot is moving. Then you can use the <em>correlation</em> of the IMU to other things that measure movement, such as the compass or odometers. That is what the KF is for. The KF is <em>not</em> a sensor filter like a bandpass filter or something.</p> <p>There is a very relevant answer <a href="https://robotics.stackexchange.com/q/277/163">here</a>.</p>
1097
2013-03-22T01:46:47.187
|kalman-filter|imu|
<p>Shall I filter (kalman/lowpass) after getting the raw values from a sensor or after converting the raw values to a usable data? Does it matter? If so, why? </p> <p>Example: Filter after getting raw values from IMU or filter after converting raw values to a usable data eg. flight dynamics parameters.</p>
At which stage should filtering be applied to the sensors data?
<p>The definition of <code>robot</code> is as follows: </p> <p><em>"A machine built to carry out some complex task or group of tasks, especially one which can be programmed"</em><br> <a href="http://en.wiktionary.org/wiki/robot" rel="nofollow">wiktionary</a></p> <p>So a robot is a tool which can be programmed. The KUKA robot fits in this definition. But a hammer can not be programmed, so a hammer is not a robot.</p>
1100
2013-03-22T02:46:07.920
|industrial-robot|
<p>The definition of a robot is as follows: "A robotic paradigm can be described by the relationship between the three primitives of robotics: Sense, Plan, and Act."</p> <p>An example could be the famous "Kuka Robots". The Kuka robot is preprogrammed and mainly performs one loop over and over again. Some of them could have measurement sensors but that is all. They do not think or plan, nor do they make decisions. </p> <p>An automatic door opener used in a building is not called a robot either, but according to the robotic paradigm definition it is more of a robot than a Kuka machine. </p> <p>So why are Kuka machines called robots?</p>
Why are industrial machines called robots?
<p>A very effective and elegant approach is to resort to so-called <strong>Potential Fields</strong>.</p> <p>Essentially, each robot of the swarm generates a repulsive potential field which is tuned to be limited to the space around it (pretty much like a charged particle), while any specific target is responsible for an attractive field, which in turn can reach farther distances.</p> <p>Each moving robot then evaluates the total field at its own location by summing all the contributions, with their signs. The resulting field provides the driving force, which is integrated to yield the final motion.</p> <p>To make things work easily, the repulsive fields must be set very strong but limited to the space around robot mates. However, the designer can have fun choosing the potential shapes and playing with the tuning parameters.</p>
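<p>A minimal sketch of the idea (2D, with made-up gain values; the repulsive term is only active within the influence radius, in the spirit of Khatib's classic formulation):</p> <pre><code>#include &lt;cmath&gt;
#include &lt;vector&gt;

struct Vec2 { double x, y; };

// Velocity command for one robot given its neighbours and its target.
Vec2 potentialFieldStep(const Vec2&amp; pos, const Vec2&amp; goal,
                        const std::vector&lt;Vec2&gt;&amp; neighbours,
                        double d0 = 1.0,      // influence radius of the repulsion
                        double kAtt = 0.5,    // attractive gain
                        double kRep = 2.0)    // repulsive gain
{
    // Attractive "force" pulling towards the goal.
    Vec2 f{ kAtt * (goal.x - pos.x), kAtt * (goal.y - pos.y) };

    // Strong but short-range repulsion from every nearby robot.
    for (const Vec2&amp; n : neighbours) {
        double dx = pos.x - n.x, dy = pos.y - n.y;
        double d = std::sqrt(dx * dx + dy * dy);
        if (d &gt; 1e-6 &amp;&amp; d &lt; d0) {
            double mag = kRep * (1.0 / d - 1.0 / d0) / (d * d);
            f.x += mag * dx / d;
            f.y += mag * dy / d;
        }
    }
    return f;   // integrate this as a velocity to get the motion
}
</code></pre> <p>The usual caveat applies: pure potential fields can trap robots in local minima, so some randomisation or higher-level planning is often added on top.</p>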
1110
2013-03-27T17:43:01.053
|algorithm|movement|
<p>I am trying to implement a mechanism to make robots avoid being too close (Say in a distance less than <code>d</code>). I am not familiar with those systems and I have to implement a strategy to avoid robots being too close to each other. Could anyone recommend me some readings for such a problem or a set of keywords to search for? I don't know yet how to start.</p>
Robots minimum distance
<p>The motor they listed apparently has 1.548 kg·cm of torque at 4.8 V. (As Ian pointed out, on this website -- like too many <a href="http://robosapiensindia.com/robomart/index.php?page=shop.product_details&amp;flypage=flypage.tpl&amp;product_id=238&amp;category_id=42&amp;option=com_virtuemart&amp;Itemid=64&amp;vmcchk=1&amp;Itemid=64" rel="nofollow">other websites</a> -- "kg·cm" is often misspelled as "Kg/cm").</p> <p>It is apparently a NEMA 17 motor. From the photo, it looks about half as long as a typical NEMA 17 motor, so I'm not surprised it has about half the torque of a typical NEMA 17 motor.</p> <p>A torque of 1.548 kg·cm is more than adequate for many robots -- <a href="http://reprap.org/wiki/stepper_motor" rel="nofollow">1.4 kg·cm of torque is adequate</a> for axis motors on a RepRap.</p> <blockquote> <p>They use this unit only for the stepper motors and not for the DC motors</p> </blockquote> <p>Huh? <s>Every motor</s> <a href="http://www.nex-robotics.com/motors-and-accessories.html" rel="nofollow">Several motors listed on that site</a>, both stepper and DC motor ( <a href="http://www.nex-robotics.com/products/motors-and-accessories/75-rpm-dual-shaft-plastic-gear-motor.html" rel="nofollow">a</a> <a href="http://www.nex-robotics.com/products/motors-and-accessories/300-rpm-centre-shaft-economy-series-dc-motor.html" rel="nofollow">b</a> ), are rated using the "Kg/cm" unit, which in every case is a misspelling of "kg·cm".</p> <blockquote> <p>0.222 ... its absurdly low again.Most DC motors on the website</p> </blockquote> <p>The "0.222 kg·cm" applies when driven at 0.4 V. That's a small fraction of its rated voltage, which is 6 V where the motor gives 1.869 kg·cm.</p> <p>As far as I can tell, all the DC motors on that website include a gear box, which multiplies the torque.</p> <p>It is unfair to compare motors by comparing only the torque at the output of a torque-multiplying gear box of one motor driven at its full rated voltage, to the torque of some other motor without a gearbox and when driven at a small fraction of its rated voltage.</p> <p>On the other hand, I have to laugh when this website calls this a "High torque" motor when it is almost the lowest-torque NEMA 17 motor I've ever seen. It reminds me of the "Schnell-Bahn" (literally "fast train"), which are the slowest trains still operating in Germany.</p> <p>EDIT: I shouldn't have said "every motor"; most of the motors are rated in "Kg-cm" or "kg-cm", which is close enough to "kg·cm".</p>
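<p>To answer the conversion part directly, the unit is just the weight of 1 kg acting on a 1 cm lever arm:</p> <p>$$1\ \text{kg·cm} = 1\ \text{kgf} \times 1\ \text{cm} \approx 9.81\ \text{N} \times 0.01\ \text{m} \approx 0.0981\ \text{N·m}$$</p> <p>so the motor above gives roughly $1.548 \times 0.0981 \approx 0.152$ N·m of holding torque.</p>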
1113
2013-03-28T06:17:54.370
|stepper-motor|torque|
<p>I was looking up the motor parameters for some stepper motor where they listed the torque of the motor at different current/voltage, but the torque they listed was in kg/cm.</p> <p>How is kg/cm even a remotely acceptable unit for torque?</p> <p>How do I calculate the torque in Nm from kg/cm?</p> <p>Clarity note: It's not kg·cm, which represents [0.098 kilogram force = 1 Nm.]</p> <p><a href="http://www.nex-robotics.com/products/motors-and-accessories/high-torque-bipolar-stepper-motor.html" rel="noreferrer">Website</a> where this happens.</p>
Torque in kg/cm?
<p><strong>How to obtain your point correspondences</strong></p> <p>There are many ways to do it, which can basically be classified into two categories: feature-based and dense matching.</p> <p>Feature-based methods include using corner detectors, SIFT/SURF/ORB descriptors and other similar feature detectors that provide point-to-point correspondences. A few of those methods that are implemented in OpenCV are compared <a href="http://computer-vision-talks.com/2011/07/comparison-of-the-opencvs-feature-detection-algorithms-ii/" rel="nofollow noreferrer">here</a>.</p> <p>Dense-matching methods usually involve some kind of comparison between sliding windows in both images. The most used approach is SSD or its variants. A few of these variants are listed <a href="http://siddhantahuja.wordpress.com/tag/sum-of-squared-differences/" rel="nofollow noreferrer">here</a>.</p> <p><strong>What are disparity maps</strong></p> <p>Your intuition is right: disparity, in a rectified image pair, is defined as the difference between the x coordinates of a point in the two images. It has a neat relationship from projective geometry: disparity is proportional to 1/depth (for rectified cameras, disparity = focal length × baseline / depth). <a href="https://stackoverflow.com/questions/7337323/definition-of-a-disparity-map">This stackoverflow answer</a> has a few pointers about it. Disparity maps are basically images where each pixel value is the disparity of that point, so it gives you a sense of depth - the further away from the camera, the darker.</p> <p><strong>EDIT:</strong> Yes, in general terms your approach is correct AFAIK. Your description lacks detail, but as you mentioned you can find out how to do each of these pieces in H&amp;Z.</p>
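<p>To make the dense-matching idea concrete, here is a bare-bones SSD sketch over one rectified scanline (grayscale rows as plain arrays, no sub-pixel refinement or left-right check - purely illustrative, not how a production matcher is written):</p> <pre><code>#include &lt;vector&gt;
#include &lt;limits&gt;

// left, right: rectified grayscale rows of identical width.
// Returns, for each pixel in the left row, the disparity (0..maxDisp) that
// minimises the sum of squared differences over a small window.
std::vector&lt;int&gt; ssdDisparityRow(const std::vector&lt;int&gt;&amp; left,
                                 const std::vector&lt;int&gt;&amp; right,
                                 int maxDisp, int halfWin)
{
    int w = static_cast&lt;int&gt;(left.size());
    std::vector&lt;int&gt; disparity(w, 0);

    for (int x = halfWin; x &lt; w - halfWin; ++x) {
        long bestCost = std::numeric_limits&lt;long&gt;::max();
        for (int d = 0; d &lt;= maxDisp &amp;&amp; x - d - halfWin &gt;= 0; ++d) {
            long cost = 0;
            for (int k = -halfWin; k &lt;= halfWin; ++k) {
                long diff = left[x + k] - right[x - d + k];
                cost += diff * diff;
            }
            if (cost &lt; bestCost) {
                bestCost = cost;
                disparity[x] = d;   // larger disparity = closer to the cameras
            }
        }
    }
    return disparity;
}
</code></pre> <p>A full implementation would use 2D windows and handle occlusions and textureless regions - which is exactly what OpenCV's stereo block matchers give you out of the box.</p>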
1117
2013-03-28T09:18:05.097
|computer-vision|stereo-vision|
<p>I am currently reading into the topic of stereo vision, using the book of Hartley&amp;Zimmerman alongside some papers, as I am trying to develop an algorithm capable of creating elevation maps from two images.</p> <p>I am trying to come up with the basic steps for such an algorithm. This is what I think I have to do:</p> <p>If I have two images I somehow have to find the fundamental matrix, F, in order to find the actual elevation values at all points from triangulation later on. If the cameras are calibrated this is straightforward if not it is slightly more complex (plenty of methods for this can be found in H&amp;Z).</p> <p>It is necessary to know F in order to obtain the epipolar lines. These are lines that are used in order to find image point x in the first image back in the second image.</p> <p>Now comes the part were it gets a bit confusing for me: Now I would start taking a image point x_i in the first picture and try to find the corresponding point x_i’ in the second picture, using some matching algorithm. Using triangulation it is now possible to compute the real world point X and from that it’s elevation. This process will be repeated for every pixel in the right image.</p> <p>In the perfect world (no noise etc) triangulation will be done based on</p> <pre><code>x1=P1X x2=P2X </code></pre> <p>In the real world it is necessary to find a best fit instead.</p> <p>Doing this for all pixels will lead to the complete elevation map as desired, some pixels will however be impossible to match and therefore can't be triangulated.</p> <p>What confuses me most is that I have the feeling that Hartley&amp;Zimmerman skip the entire discussion on how to obtain your point correspondences (matching?) and that the papers I read in addition to the book talk a lot about disparity maps which aren’t mentioned in H&amp;Z at all. However I think I understood correctly that the disparity is simply the difference x1_i- x2_i?</p> <p>Is this approach correct, and if not where did I make mistakes?</p>
How to obtain stereo correspondences and what exactly is a disparity map?
<p>I've created an AI, which solves given IQ tests. You may test it on <a href="http://iqsolver.eu" rel="nofollow">http://iqsolver.eu</a></p>
1120
2013-03-28T16:42:32.603
|artificial-intelligence|
<p>And if so, what was the highest score so far?</p> <p>Some news articles suggest only parts of tests were aced.</p> <hr /> <p>Update since people censored this question and closed it. There was an AI that has taken an IQ test and scored similar to a 4 year old.</p> <p><a href="http://phys.org/news/2015-10-ai-machine-iq-score-young.html" rel="nofollow noreferrer">http://phys.org/news/2015-10-ai-machine-iq-score-young.html</a></p> <blockquote> <p>The AI system which they used is ConceptNet, an open-source project run by the MIT Common Sense Computing Initiative. Results: It scored a WPPSI-III VIQ that is average for a four-year-old child, but below average for 5 to 7 year-olds</p> </blockquote> <h3>Abstract</h3> <blockquote> <p>We administered the Verbal IQ (VIQ) part of the Wechsler Preschool and Primary Scale of Intelligence (WPPSI-III) to the ConceptNet 4 AI system. The test questions (e.g., &quot;Why do we shake hands?&quot;) were translated into ConceptNet 4 inputs using a combination of the simple natural language processing tools that come with ConceptNet together with short Python programs that we wrote. The question answering used a version of ConceptNet based on spectral methods. The ConceptNet system scored a WPPSI-III VIQ that is average for a four-year-old child, but below average for 5 to 7 year-olds. Large variations among subtests indicate potential areas of improvement. In particular, results were strongest for the Vocabulary and Similarities subtests, intermediate for the Information subtest, and lowest for the Comprehension and Word Reasoning subtests. Comprehension is the subtest most strongly associated with common sense. The large variations among subtests and ordinary common sense strongly suggest that the WPPSI-III VIQ results do not show that &quot;ConceptNet has the verbal abilities a four-year-old.&quot; Rather, children's IQ tests offer one objective metric for the evaluation and comparison of AI systems. Also, this work continues previous research on Psychometric AI.</p> </blockquote> <hr /> <p>Update. A robot has passed the Japanese college entrance test and has an 80% chance of being accepted. Since it scored more than the average, that would make the IQ &gt; 100, especially since college applicants have an IQ greater than average, and especially since Japanese are smarter than average humans. <a href="http://gizmodo.com/an-ai-program-in-japan-just-passed-a-college-entrance-e-1742758286" rel="nofollow noreferrer">http://gizmodo.com/an-ai-program-in-japan-just-passed-a-college-entrance-e-1742758286</a></p> <blockquote> <p>The Wall Street Journal reports that the program, developed by Japan’s National Institute of Informatics, took a multi-subject college entrance exam and passed with an above-average score of 511 points out of a possible 950. (The national average is 416.) With scores like that, it has an 8 out of 10 chance of being admitted to 441 private institutions in Japan, and 33 national ones.</p> </blockquote>
Has a robot ever taken a complete IQ test?
<p>The <a href="https://robotics.stackexchange.com/a/1109/478">accepted answer</a> to your previous question presents several ways of developing enough drive for the vibrating motor, in its section called “Converting PWM signals into something that can drive a motor”. </p> <p>One of the most direct ways is using one ULN2803 device per 8 motors, or one ULN2003 per 7 motors. In some packages these devices support up to 500 mA/channel when properly heatsinked; other packages allow less but should have no problem with the 85 mA start current and 75 mA run current of the vibration motor (<a href="http://www.cutedigi.com/robotics/vibration-motor.html" rel="nofollow noreferrer">1</a>, <a href="http://www.cutedigi.com/pub/Robot/310-101_datasheet.pdf" rel="nofollow noreferrer">2</a>).</p> <p>An alternative is to substitute LED drivers with higher drive capability in place of the TLC5947. For example, the 16-channel <a href="http://www.toshiba-components.com/products/DriverLSI/LEDDrivers.html" rel="nofollow noreferrer">Toshiba TC62D722</a> allows up to 90mA/channel. (I don't know what the availability of these parts is.) Like the TI TLC5947, the TC62D722 has per-channel PWM settings. Both parts use an external current-limit-setting resistor and the TC62D722 also has an 8-bit (256 level) output current gain control. </p> <p>Note that TLC5947 channels can be paralleled. On page 7 the <a href="http://www.ti.com/lit/ds/sbvs114a/sbvs114a.pdf" rel="nofollow noreferrer">data sheet</a> says multiple outputs can be tied together to increase the constant current capability. Thus you could use each TLC5947 to drive eight 90mA devices; or could use a 25mA-setting resistor and parallel 3 outputs to get 75mA. Anyhow, <em>n</em> TLC5947's, daisy-chained together, would drive <em>n</em>·24/3 = 8·<em>n</em> vibration motors. </p> <p>If you have motors on hand, you could use a 12V power supply in series with a 166 ohm resistor (or 150 or 200 ohms for other testing) to run about 60mA through a motor and see if it starts and runs satisfactorily. If so, you might be able to use two TLC5947 channels instead of three per motor. </p> <p>Drivers like ULN2003 or ULN2803 have internal clamp diodes so they can drive inductive loads without needing such diodes added on. As an LED driver, the TLC5947 doesn't include clamp diodes for high-going voltage excursions since LED's are not much of an inductive load. (The output diagram on page 8 of specs shows a diode from ground to OUT_n for OUT0 through OUT23, which would clamp negative-going voltages, but no rating is shown for the diode.) You probably would need to add one clamp diode per motor. However, it certainly is possible that you could run motor tests and find that the motor transients don't exceed the 33 volt output ratings of the TLC5947 – whenever you test it on the bench.</p>
1143
2013-04-02T00:22:47.170
|motor|power|pwm|
<p>This is a follow-up to this question: <a href="https://robotics.stackexchange.com/questions/1064/prototyping-a-device-with-25-100-small-dc-3-0v-motors-is-arduino-a-good-fit">Prototyping a device with 25-100 small DC 3.0V motors, is Arduino a good fit?</a></p> <p>I've decided based on the answer that sending the control signals through multiple TLC5947 chips, then sending the PWM signal to the motors is the best way to go. What I need to know is how to turn the PWM signals into something of the required power, since the TLC5947's won't be able to drive the motors by themselves.</p> <p>I'm guessing an amplifier is what I'll need to make, but what's the best way to boost that many signals?</p>
How do I interface a TLC5947 with small motors?
<p>There are all sorts of mechanisms for handling dry goods. If you want to do it on a small scale, look to industrial devices for inspiration, but consumer and retail devices for quick solutions. For example, there are quite a few cereal dispensers on the market like <a href="http://www.zevro.com/smartspace-dry-food-dispenser-single-canister/" rel="nofollow noreferrer">this Zevro cereal dispenser</a>:</p> <p><img src="https://i.stack.imgur.com/5tc5h.jpg" alt="Zevro cereal dispenser"></p> <p>Industrial systems often use augers to measure and move dry materials. I don't see why you couldn't build a small-scale delivery device using <a href="http://www.harborfreight.com/7-piece-auger-bit-set-68166.html" rel="nofollow noreferrer">a cheap auger bit</a> rotating inside a pipe sized to fit the bit:</p> <p><img src="https://i.stack.imgur.com/Otdae.jpg" alt="auger bit set"></p> <p>Dry goods are often measured by weight, so you might want to think about ways to weigh grains and similar materials. There are lots of cheap digital scales with a range of about 0-5 kg that are accurate down to a few grams. You could try to integrate one of those into your project directly, or cannibalize it for its load sensor, or even <a href="http://forums.parallax.com/showthread.php/108049-Make-your-own-load-cell-(force-sensor)-from-stuff-you-already-have." rel="nofollow noreferrer">build your own load sensor</a>. It'd be easy to put a small hopper on a scale to measure the material, use a simple gate to start and stop the flow of material into the hopper, and a solenoid to dump the hopper into a chute that delivers it for preparation.</p>
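<p>If it helps, here is a minimal sketch of the weigh-and-dispense loop that ties the auger and the scale together. It is plain Python with hypothetical <code>read_grams()</code> and <code>set_auger()</code> callbacks standing in for whatever load-cell and motor-driver interface you end up with, so treat it as an outline rather than working firmware:</p> <pre><code>import time

def dispense(target_grams, read_grams, set_auger,
             tolerance_grams=2.0, timeout_s=30.0):
    """Run the auger until the hopper scale reads close to target_grams."""
    start = time.monotonic()
    set_auger(True)                      # start feeding grain
    try:
        while read_grams() &lt; target_grams - tolerance_grams:
            if time.monotonic() - start &gt; timeout_s:
                raise RuntimeError("dispense timed out - hopper empty or jammed?")
            time.sleep(0.05)             # poll the scale at roughly 20 Hz
    finally:
        set_auger(False)                 # always stop the motor, even on error
    return read_grams()
</code></pre> <p>In practice you would probably also slow or pulse the auger as the weight approaches the target, since some grain is still in flight when the motor stops, and calibrate that overshoot per grain type.</p>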
1145
2013-04-02T07:50:25.177
|automatic|
<p>I'm trying to handle food grains like rice and wheat in an automated way (to cook simple dishes). For this I have to transfer grain from a larger container to a weighing scale. I know I can use solenoid valves for liquids, but all solid-handling valves (gate valves, etc.) seem to be too big and intended for larger applications. Is there a better way to do this?</p>
What would be the best way to handle food grains?
<p>Rather than trying to limit the 'x' and 'y' velocities separately, you should limit the overall velocity and calculate the component velocities from it. In other words, something like:</p> <pre><code>dx = nextX - currentX
dy = nextY - currentY
rate = min(maxRate, maxRate / sqrt(dx^2 + dy^2))
xRate = dx * rate
yRate = dy * rate
</code></pre> <p>(If dx and dy are both zero you are already at the target, so command zero velocity rather than dividing by zero.)</p> <p>So for moving only in X:</p> <pre><code>dx = nextX - currentX = 1
dy = nextY - currentY = 0
rate = min(maxRate, maxRate / sqrt(dx^2 + dy^2)) = maxRate
xRate = 1 * rate = maxRate
yRate = 0 * rate = 0
</code></pre> <p>While moving only in Y:</p> <pre><code>dx = nextX - currentX = 0
dy = nextY - currentY = 1
rate = min(maxRate, maxRate / sqrt(dx^2 + dy^2)) = maxRate
xRate = 0 * rate = 0
yRate = 1 * rate = maxRate
</code></pre> <p>And for the diagonal movement you suggested:</p> <pre><code>dx = nextX - currentX = 1
dy = nextY - currentY = 1
rate = min(maxRate, maxRate / sqrt(dx^2 + dy^2)) = maxRate / sqrt(2)
xRate = dx * rate = maxRate / sqrt(2)
yRate = dy * rate = maxRate / sqrt(2)
</code></pre>
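<p>If it helps, here is the same idea as a self-contained Python sketch (Python only for illustration; in Simulink you would put the equivalent logic in a MATLAB Function block or build it from basic blocks):</p> <pre><code>import math

def limited_velocity(current, target, max_rate):
    """Return (vx, vy) pointing from current toward target, capped at max_rate."""
    dx = target[0] - current[0]
    dy = target[1] - current[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return 0.0, 0.0                       # already at the target
    rate = min(max_rate, max_rate / dist)     # same min() trick as above
    return dx * rate, dy * rate

print(limited_velocity((0, 0), (1, 0), 1.0))  # (1.0, 0.0)    pure X move
print(limited_velocity((0, 0), (1, 1), 1.0))  # (0.707, 0.707) diagonal, overall speed 1.0
print(limited_velocity((0, 0), (3, 4), 1.0))  # (0.6, 0.8)    overall speed still 1.0
</code></pre> <p>Whatever the direction, the magnitude of the returned vector never exceeds <code>max_rate</code>, which is exactly the behaviour you were missing when limiting X and Y independently.</p>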
1158
2013-04-03T18:42:03.497
|design|
<p>Some vector math is involved here, so prepare yourself.</p> <p>I am developing a robotic arm that moves in two dimensions. It is a rotary-rotary design which looks roughly like the picture in this post: <a href="https://robotics.stackexchange.com/questions/869/building-robotic-arm-joint">Building Robotic arm joint</a></p> <p>I am now trying to limit the speed of the end-effector. I am using Simulink and believe that the best way to limit the speed is to limit the rate of change of the X and Y coordinates that I tell it to move to.</p> <p>Now, I also want the end-effector to be able to move in a straight line and believe that I can accomplish this by defining functions that calculate the maximum rate for movement in the X or Y direction based on the distance the arm is trying to travel. The equation I came up with is this:</p> <pre><code>xRate = abs(currentX - nextX) / max(abs(currentX - nextX), abs(currentY - nextY))
yRate = abs(currentY - nextY) / max(abs(currentX - nextX), abs(currentY - nextY))
</code></pre> <p>So basically, xRate = distance in X / max(distance in X, distance in Y).</p> <hr> <p>Now, for the actual problem. Because this limits the speed in both X and Y, the end-effector can travel (for instance) 1 in./sec in both directions at the same time, meaning that it is travelling at over 1 in./sec overall. If, however, it is only moving in one direction, then it will only move at that 1 in./sec speed because there is no second component. It boils down to the fact that the maximum overall speed the arm can reach is sqrt(2) and the minimum is 1.</p> <p><strong>My main question is</strong>: Given that I need to calculate a max xRate and a max yRate, how can I limit the overall speed of the end-effector?</p> <p>Secondarily, is there a way for me to implement a rate control that will limit the overall rate instead of limiting X and Y independently, using Simulink?</p>
Equation to limit rate of change of end-effector in X and Y coordinates