<p>Cameras that support syncing will probably do so via an external signal. Google "genlock" or "timecode sync" or "external trigger" to get started. For instance, <a href="https://www.ptgrey.com/KB/10250" rel="nofollow">here's a link</a> to the external trigger modes for Point Grey cameras, whose image sensors many research roboticists use.</p> <p>For the sync signal, if the sync out on the DAQ is not sufficient, google "sync generator". You should get relevant hits from the video &amp; audio recording world as well as from the electronics test world.</p>
9234
2016-02-24T00:21:17.950
|sensors|cameras|
<p>I am not sure if this is the best place to ask this question, but hopefully someone here can give me some advice. I have a device hooked up to a data acquisition system that can provide sync out signal and record sync in signals. I need to synchronize my recordings with this device to a video feed. I am having trouble finding a camera that can provide a sync signal or any other good way to accomplish this. Thanks for your help.</p>
Syncing camera with other signals
<p>After wrestling with this one for almost a week, I believe I have found the cause; and unfortunately the only solution seems to be a re-flash.</p> <p>The problem was that ROS didn't seem to be linking properly to some of the shared resources (specifically, the rospy or roscpp tutorials) and did not want to work with packages I had developed myself or downloaded from code repositories like GitHub or Bitbucket. After a few days of playing with it, and two reflashes, I believe I found the issue: <strong><em>Jetpack v2.0</em></strong> (though I do not know the exact details).</p> <p>roswtf did not find any issues, and I had already checked the exports and sources in the .bashrc file through:</p> <pre><code>gedit ~/.bashrc </code></pre> <p>I was looking right at the tutorial folders for rospy_tutorials and roscpp_tutorials, in the right folder <code>/opt/ros/indigo/share/</code>. Even the individual code files within could be opened and edited. To make sure that it wasn't the way I had configured ROS, I first did a couple of uninstalls/reinstalls of ROS itself (leaving Ubuntu in place). I tried two methods:</p> <p><strong>First</strong></p> <p>According to <a href="http://answers.ros.org/question/66194/complete-ros-uninstallation/" rel="nofollow">this</a> answers.ros.org question about removing ROS</p> <pre><code>sudo apt-get purge ros-*
sudo apt-get autoremove
</code></pre> <p><strong>Second</strong></p> <p>Following <a href="http://answers.ros.org/question/86766/uninstall-and-reinstall-ros/" rel="nofollow">a different</a> answers.ros.org question as a guide</p> <pre><code>sudo apt-get purge ros-* </code></pre> <p>Neither of these seemed to allow ROS to be installed and configured properly. So I booted up my host to re-flash, and I realized something: I had both Jetpack v1.2 and Jetpack v2.0 in my downloads folder - and I knew that I had used v2.0 for this latest flash (it provided some graphical fixes and bug fixes for OpenCV4Tegra that I thought would be helpful for what I wanted to do). The critical difference between Jetpack v1.2 and v2.0 is that v2.0 was released to add support for the new TX1 development board. The TX1 uses a new, 'more standard' ARM CPU, while the TK1 uses an ARM CPU that was designed/heavily modified by nVidia (if I am not mistaken about my understanding of these two CPUs). The two boards also use different GPUs, but I suspect this was less of an issue, since I hadn't gotten to any code that utilized the GPU.</p> <hr> <h2><strong>Solution</strong></h2> <p>I re-flashed using Jetpack v1.2</p> <p>After reflashing, I installed the Grinch Kernel following the instructions <a href="https://devtalk.nvidia.com/default/topic/906018/jetson-tk1/-customkernel-the-grinch-21-3-4-for-jetson-tk1-developed/" rel="nofollow">on the nVidia devtalk forums</a>, and then installed ROS manually. Normally I just use the scripts provided by Jetsonhacks on their GitHub, but I wanted to see everything happening this time around. So I followed the instructions on the <a href="http://wiki.ros.org/indigo/Installation/UbuntuARM" rel="nofollow">wiki.ros</a>. Now, when installing ROS on an ARM-based platform or other embedded system, the desktop version <strong><em>WILL NOT</em></strong> install. I suspect something in the desktop version is looking for an x86 CPU, and it prevents the simulations (among other things) from installing correctly. 
Instead, what you can do is</p> <pre><code>sudo apt-get install ros-indigo-robot </code></pre> <p>This will install what is the 'most complete' (and appropriate, in my opinion) version of ROS for the TK1 if you are developing onboard and not cross-compiling your code. I do not believe using the robot variant of ROS prevents you from cross-compiling if you feel like it. The robot variant does not include (most of) the GUI components of ROS; you'll have to install those manually as you find yourself in need of them. I'm guessing the GPU and the GUI components aren't really sure what to do with one another in this case. The robot variant also does not install the tutorials (the broken thing that started this whole mess) either. Those will need to be installed with</p> <pre><code>sudo apt-get install ros-indigo-ros-tutorials </code></pre> <p>Once those are installed, start <code>roscore</code> in one terminal window, and type (in a separate terminal window):</p> <pre><code>rosrun rospy_tutorials talker </code></pre> <p>This will start the roscore processes, and the Python talker program should start up just fine. All the other tutorials and packages also seem to work now too.</p> <p><strong>Jetpack v2.0</strong></p> <p>So I guess lesson learned? If you are planning on running ROS on a TK1, don't use v2.0 of the Jetpack. I'm not sure what is going on, but I am going to send what I did learn to nVidia and hope that v2.1 fixes the issues. Unfortunately, this amounts more to 'symptoms' than any actual causes. My guess is that it is a case of two different, highly specialized, experimental pieces of software just not knowing how to play nice together. There are always bound to be some issues. In the meantime, if you're like me, stick to v1.2 of the Jetpack, and you should be fine.</p>
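<p>As a quick sanity check for the original "package not found" symptom, here is a minimal Python sketch (assuming the <code>python-rospkg</code> package is installed) that asks rospkg directly whether a package is visible on the current package path:</p> <pre><code>import rospkg

rp = rospkg.RosPack()
try:
    # Prints the package directory if ROS can resolve the package
    print(rp.get_path('rospy_tutorials'))
except rospkg.ResourceNotFound:
    print('rospy_tutorials is not on the ROS package path')
</code></pre>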
9244
2016-02-24T19:30:28.667
|ros|
<p>Has anyone ever run into a case where a fresh install of ROS cannot run its tutorial packages?</p> <p>I am running ROS Indigo on an nVidia Jetson TK1, using the nVidia-supplied Ubuntu image. I just did a fresh install, Ubuntu and ROS, just to keep things clean for this project. I am building a kind of 'demo-bot' for some students I will be teaching; it will use both the demo files and some of my own code. Now, after setting things up, I try to run the talker tutorial just to check to make sure that everything is running, and rospack is pretty sure that the tutorials don't exist.</p> <p>For example, inputting this into the terminal</p> <pre><code>rosrun rospy_tutorials talker </code></pre> <p>Outputs</p> <pre><code>[rospack] Error: package 'rospy_tutorials' not found </code></pre> <p>This is the case for every tutorial file, Python and C++. Now, I am sure the tutorials are installed. I am looking right at them in the file system, installed from the latest versions on GitHub. So I think it is something on ROS' side of things.</p> <p>Has anyone ever bumped into something similar before, where a ROS package that supposedly was installed correctly isn't found by ROS itself? I would rather not have to reinstall again if I can avoid it.</p> <p><strong>EDIT #1</strong></p> <p>After playing with it some more, I discovered that multiple packages were not running. All of them - some turtlebot code, and some of my own packages - returned the same error as above. So I suspect something got messed up during the install of ROS.</p> <p><a href="http://wiki.ros.org/roswtf" rel="nofollow">roswtf</a> was able to run, but it did not detect any problems.</p> <p><strong>EDIT #2</strong></p> <p>I double checked the bashrc file. One export was missing, for the ROS directory I was trying to work within. Adding it did not solve the problem. </p> <p>I am still looking for a solution that hopefully does not involve reflashing the TK1.</p> <p><strong>EDIT #3</strong></p> <p>Alright, so I've been poking at this for a few days now and pretty much gave up trying to get ROS to work correctly, and decided a re-flash was necessary. But I think I found something when I booted up my host machine. In my downloads folder, I have the v2.0 and the v1.2 JetPack. I know I used the v2.0 for this latest install, and it has been the only time I have used it (it provides some useful updates for OpenCV and bug fixes, among other things). I'm going to re-flash using the v1.2 JetPack this time and see if things behave better with ROS under that version. It's a long shot, but it is all I have to work with at the moment, and it shouldn't lose any ROS capabilities (aside from some of the stuff I wanted to do with OpenCV). I'll update everyone if that seems to work.</p> <p><strong>EDIT #4</strong></p> <p>Ok, everything seems to be working now. The problem does seem to be an issue with Jetpack v2.0. I suspect that some change, somewhere between v1.2 and v2.0 (made to accommodate the new TX1 board), messes with running ROS Indigo on a TK1. I'm going to give a more detailed explanation in an answer to this question.</p>
ROS tutorials no longer working
<p>I feel like I make the <a href="https://robotics.stackexchange.com/a/8999/9720">same comments</a> every time you ask a question about your controller:</p> <ol> <li><a href="https://robotics.stackexchange.com/a/9080/9720">How are you tuning the gains</a>?</li> <li>I think your slack line is interfering with your results. </li> </ol> <p>Your quadcopter is not pulling on the slack line, <a href="https://robotics.stackexchange.com/questions/9077/pid-tuning-for-an-unbalanced-quadcopter-when-do-i-know-if-the-i-gain-ive-set-i#comment14983_9077">it's sitting on it</a>. This is introducing a floating inverted (unstable!) pendulum into your quadcopter system. The motors aren't making enough thrust to lift the quadcopter because it's sitting on the tether. </p> <p>That said, I think your statement,</p> <blockquote> <p>I could not tune it further due to irregular oscillations</p> </blockquote> <p>is revealing, because I <em>think</em> that those oscillations are due to your tether. It doesn't help that you're also pulling on the tether. So, the extra forces that your quadcopter <em>shouldn't</em> have to account for, but that are being added to it, are:</p> <ol> <li>Spring force in the tether material itself.</li> <li>You are reacting to motion, so you're applying a time-varying force.</li> <li>The wooden pillar wobbles (I can see it at <a href="https://www.youtube.com/watch?v=NmbldHrzp3E&amp;t=34s" rel="nofollow noreferrer">34 seconds into your video</a>), which is applying a time-varying force. </li> </ol> <p>Your gains look very, very low, and I would be surprised if they're doing much of anything. To demonstrate, consider the quick simulation I ran in Simulink:</p> <p><a href="https://i.stack.imgur.com/kFnJJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kFnJJ.png" alt="PID Loop"></a></p> <p><a href="https://i.stack.imgur.com/oCdWk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oCdWk.png" alt="PID Output"></a></p> <p>The second figure above is the output of a step response to a PID controller with your PID gains. It is far too slow of a response for you to get meaningful control of an aerial vehicle. </p> <p>So, I reiterate: How are you tuning the gains? I suggested a method <a href="https://robotics.stackexchange.com/a/8660/9720">a couple of months ago</a> that you could use to do your tuning. I would expect your gains to be probably 100x larger than they are: Kp somewhere around 10-20 maybe, Ki at 5 or 10% of that, maybe 1 or 2, and Kd maybe 25% of the Ki gain, somewhere around 0.25 or 0.5. Those gains would look more along the lines of what I would expect, at least.</p> <h2>:EDIT:</h2> <p>I forgot to add that it does <em>not</em> look like anything is saturating. Your motors never appear to hit a sustained full-speed output and your integral error never reaches a point where it is clipped, either. </p>
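<p>(To make the "far too slow" point reproducible without Simulink, here is a rough numeric sketch in Python: a discrete PID with the question's gains driving a generic unit-gain first-order plant. The plant model and its time constant are assumptions for illustration only, since the original Simulink model isn't shown, but they are enough to see how sluggish the closed loop is with gains this small.)</p> <pre><code># Discrete PID with the question's gains on an assumed first-order plant
Kp, Ki, Kd = 0.0475, 0.03, 0.00018
dt, tau = 0.01, 0.5            # time step and assumed plant time constant
setpoint = 1.0
y, integ, prev_err = 0.0, 0.0, 0.0

for _ in range(int(20.0 / dt)):        # simulate 20 seconds
    err = setpoint - y
    integ += err * dt
    deriv = (err - prev_err) / dt
    u = Kp * err + Ki * integ + Kd * deriv
    prev_err = err
    y += dt * (u - y) / tau            # plant: tau * dy/dt = u - y

print(y)   # still roughly half way to the setpoint after 20 s
</code></pre>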
9250
2016-02-25T09:20:46.543
|quadcopter|pid|stability|
<p>Good day,</p> <p>I am currently creating an autonomous quadcopter using a cascading PID controller, specifically a P-PID controller, with angles as setpoints for the outer loop and angular velocities for the inner loop. I just finished tuning the Roll PID last week, with only +-5 degrees of error; it is very stable and is able to withstand disturbances by hand. I was able to tune it quickly over two nights; however, the pitch axis is a different story.</p> <p><strong>Introduction to the Problem:</strong> The pitch is asymmetrical in weight (front heavy due to the stereo vision cameras placed in front). I have tried to move the battery backwards to compensate; however, due to the constraints of the DJI F450 frame, it is still front heavy.</p> <p>In a PID controller for an asymmetrical quadcopter, the I-gain is responsible for compensating, as it is the one able to "remember" the accumulating error.</p> <p><strong>Problem at Hand</strong> I saw that while tuning the pitch gains, I could not tune it further due to irregular oscillations, which made it hard for me to pinpoint whether this is due to too high a P, I or D gain. The quadcopter pitch PID settings are currently at Prate=0.0475 Irate=0.03 Drate=0.000180 Pstab=3, giving an error of +-10 degrees from the angle setpoint of 15 degrees. Here is the data with the corresponding video.</p> <p>RATE Kp = 0.0475, Ki = 0.03, Kd = 0.000180 STAB Kp=3 Video: <a href="https://youtu.be/NmbldHrzp3E" rel="nofollow noreferrer">https://youtu.be/NmbldHrzp3E</a></p> <p>Plot: <a href="https://i.stack.imgur.com/NcoAu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NcoAu.png" alt="enter image description here"></a></p> <p><strong>Analysis of Results</strong> It can be seen that the controller is saturating. The motor controller is currently set to limit the pwm pulse used to control the ESC throttle to only 1800ms or 180 in the code (the maximum is 2000ms or 205), with the minimum set at 155 or 1115ms (enough for the quad to lift itself up and feel weightless). I did this to make room for tuning the altitude/height PID controller while maintaining the throttle ratio of the 4 motors from their PID controllers. </p> <blockquote> <p>Is there something wrong in my implementation of limiting the maximum throttle?</p> </blockquote> <p>Here is the implementation:</p> <pre><code>//Check if PWM is Saturating - This method is used to fill then trim the outputs of the pwm that gets fed into the gpioPWM() function, to avoid exceeding the earlier set maximum throttle while maintaining the ratios of the 4 motor throttles.
float motorPWM[4] = {motorPwm1, motorPwm2, motorPwm3, motorPwm4};
float minPWM = motorPWM[0];
int i;
for(i=0; i&lt;4; i++){ // Get minimum PWM for filling
    if(motorPWM[i]&lt;minPWM){
        minPWM=motorPWM[i];
    }
}

cout &lt;&lt; " MinPWM = " &lt;&lt; minPWM &lt;&lt; endl;

if(minPWM&lt;baseThrottle){
    float fillPwm=baseThrottle-minPWM; //Get deficiency and use this to fill all 4 motors
    cout &lt;&lt; " Fill = " &lt;&lt; fillPwm &lt;&lt; endl;
    motorPwm1=motorPwm1+fillPwm;
    motorPwm2=motorPwm2+fillPwm;
    motorPwm3=motorPwm3+fillPwm;
    motorPwm4=motorPwm4+fillPwm;
}

float motorPWM2[4] = {motorPwm1, motorPwm2, motorPwm3, motorPwm4};
float maxPWM = motorPWM2[0];
for(i=0; i&lt;4; i++){ // Get max PWM for trimming
    if(motorPWM2[i]&gt;maxPWM){
        maxPWM=motorPWM2[i];
    }
}

cout &lt;&lt; " MaxPWM = " &lt;&lt; maxPWM &lt;&lt; endl;

if(maxPWM&gt;maxThrottle){
    float trimPwm=maxPWM-maxThrottle; //Get excess and use this to trim all 4 motors
    cout &lt;&lt; " Trim = " &lt;&lt; trimPwm &lt;&lt; endl;
    motorPwm1=motorPwm1-trimPwm;
    motorPwm2=motorPwm2-trimPwm;
    motorPwm3=motorPwm3-trimPwm;
    motorPwm4=motorPwm4-trimPwm;
}
</code></pre> <p><strong>Possible solution</strong> I have two possible solutions in mind:</p> <ol> <li><p>I could redesign the camera mount to be lighter by 20-30 grams, to be less front heavy.</p></li> <li><p>I could increase the maximum throttle, but that possibly leaves less room for the altitude/throttle control.</p></li> </ol> <blockquote> <p>Does anyone know the optimum solution for this problem?</p> </blockquote> <p><strong>Additional information</strong> The quadcopter weighs about 1.35kg and the motor/ESC set from DJI (E310) is rated up to 2.5kg, with the recommended thrust per motor at 350g (1.4kg total). Though a real world test <a href="http://www.rcgroups.com/forums/showthread.php?t=2409123" rel="nofollow noreferrer" title="here">here</a> showed that it is capable of 400g per motor with a setup weighing 1600g at take-off. </p> <p>How I tuned the roll PID gains</p> <p>I set the rate PID gains first, at a setpoint of zero dps:</p> <ol> <li>Set all gains to zero.</li> <li>Increase P gain until the response of the system to disturbances is in steady oscillation.</li> <li>Increase D gain to remove the oscillations.</li> <li>Increase I gain to correct long term errors or to bring oscillations to a setpoint (DC gain).</li> <li>Repeat until the desired system response is achieved.</li> </ol> <p>When I was using the single-loop PID controller, I checked the data plots during testing and made adjustments such as increasing Kd to minimize oscillations and increasing Ki to bring the oscillations to a setpoint. I do a similar process with the cascaded PID controller.</p> <p>The reason why the rate PID gains are small is that a rate Kp of 0.1, with the other gains at zero, already started to oscillate wildly (a characteristic of a too-high P gain). 
<a href="https://youtu.be/SCd0HDA0FtY" rel="nofollow noreferrer">https://youtu.be/SCd0HDA0FtY</a></p> <p>I had set the Rate pid's such that it would maintain the angle I physically placed it to (setpoint at 0 degrees per second) </p> <p>I then used only P gain at the outer loop stabilize PID to translate the angle setpoint to velocity setpoint to be used to control the rate PID controller.</p> <p>Here is the roll axis at 15 degrees set point <a href="https://youtu.be/VOAA4ctC5RU" rel="nofollow noreferrer">https://youtu.be/VOAA4ctC5RU</a> Rate Kp = 0.07, Ki = 0.035, Kd = 0.0002 and Stabilize Kp = 2 <a href="https://i.stack.imgur.com/p3RyT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/p3RyT.png" alt="enter image description here"></a></p> <p>It is very stable however the reaction time/rise time is too slow as evident in the video.</p>
Quadcopter PID: Controller is Saturating
<p>What is the problem you're having? You ask, &quot;Can anybody figure out what I'm doing wrong?&quot; but you don't state clearly what your problem is. Will the motors not spin up afterwards? Is anything happening when you do the calibration?</p> <p>As an FYI, <a href="https://electronics.stackexchange.com/a/27024/76598">here</a> is an answer on EE stack exchange explaining the basic startup modes for an electronic speed controller. Quoting:</p> <blockquote> <p><strong>Normal startup</strong> [one style of ESC]:</p> <ul> <li><p>Turn On ESC</p> </li> <li><p>minimum throttle</p> </li> <li><p>wait 2 seconds</p> </li> <li><p>maximum throttle</p> </li> <li><p>wait 2 seconds</p> </li> <li><p>minimum throttle</p> </li> <li><p>wait 1 second</p> </li> </ul> <p>OK to Go</p> <p><strong>Normal startup</strong> [another style of ESC]:</p> <ul> <li><p>Turn On ESC</p> </li> <li><p>minimum</p> </li> <li><p>wait 3 seconds</p> </li> <li><p>OK to Go</p> </li> </ul> <p><strong>Calibration</strong>:</p> <ul> <li><p>Turn on ESC</p> </li> <li><p>maximum</p> </li> <li><p>wait 2 sec</p> </li> <li><p>minimum</p> </li> <li><p>wait 1 sec</p> </li> <li><p>OK to go</p> </li> </ul> </blockquote> <p>From that post, typically there's a beep from the ESC between each of these steps (where you're instructed to wait, wait for the beep).</p>
9254
2016-02-25T19:33:56.047
|arduino|quadcopter|esc|
<p>I recently bought a set of ESCs, brushless outrunner motors and propellers. I'm trying to perform a calibration on the ESC, but I can't find how I can do that without using components other than the Arduino Uno itself. The setup I've managed to make is the one shown in the picture. The ESCs are a mystery, as there is no manual to be found. If it helps, the buy link is this: <a href="http://www.ebay.co.uk/itm/4x-A2212-1000KV-Outrunner-Motor-4x-HP-30A-ESC-4x-1045-prop-B-Quad-Rotor-/111282436897" rel="nofollow noreferrer">http://www.ebay.co.uk/itm/4x-A2212-1000KV-Outrunner-Motor-4x-HP-30A-ESC-4x-1045-prop-B-Quad-Rotor-/111282436897</a> There might also be a problem with the battery (LiPo 3.7V, 2500mAh). <a href="https://i.stack.imgur.com/vF9YJ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vF9YJ.jpg" alt="My Setup"></a></p> <p>Can anybody figure out what I'm doing wrong? The sample Arduino code I found was this:</p> <pre><code>#include &lt;Servo.h&gt;

#define MAX_SIGNAL 2000
#define MIN_SIGNAL 700
#define MOTOR_PIN 9

Servo motor;

void setup() {
  Serial.begin(9600);
  Serial.println("Program begin...");
  Serial.println("This program will calibrate the ESC.");

  motor.attach(MOTOR_PIN);

  Serial.println("Now writing maximum output.");
  Serial.println("Turn on power source, then wait 2 seconds and press any key.");
  motor.writeMicroseconds(MAX_SIGNAL);

  // Wait for input
  while (!Serial.available());
  Serial.read();

  // Send min output
  Serial.println("Sending minimum output");
  motor.writeMicroseconds(MIN_SIGNAL);
}

void loop() {
}
</code></pre>
Pure Arduino Quadcopter
<p>Two comments:</p> <p>One, you don't need a double/dual check valve; one on the inlet line and one on the outlet would work. See the image below. You haven't explained why two distinct check valves are less desirable than an all-in-one solution, so I'm suggesting it in the event it was overlooked. </p> <p>Two, I'm not sure how your measurements add up. You state, </p> <blockquote> <p>"The quantity of liquid to be dispensed would be very small (400mg)" </p> </blockquote> <p>and then later you state</p> <blockquote> <p>after every 100 dispensing operations, 1 ml syringe would be empty.</p> </blockquote> <p>Assuming the dispensing liquid is anywhere close to water, it will have a density of approximately 1g/cm3, or 1g/mL. So, a 400mg dispensing quantity means your 1mL syringe is empty after 2.5 doses. If you're going to get 100 doses of 400mg in a 1mL syringe, then that means that you've got 100*400 = 40,000mg, or 40g, in the 1mL syringe. That's 40g/mL, or about <a href="https://en.wikipedia.org/wiki/Lead" rel="nofollow noreferrer">four times the density of lead</a>. </p> <p><a href="https://i.stack.imgur.com/UMgJt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UMgJt.png" alt="Two check valves"></a></p>
9256
2016-02-26T00:13:05.127
|motor|design|
<p>I am working on a micro dispensing system, using a syringe pump. The design involves a syringe on top to be moved by a stepper motor. There would be one liquid reservoir from which the syringe would pull liquid, and it would then push to eject liquid from the other end. </p> <p>When we pull the syringe, the liquid is sucked into the syringe, while the other opening is shut. When the syringe is pushed, the liquid is ejected from the other end.</p> <p>The quantity of liquid to be dispensed would be very small (400mg), so I am using a small syringe of 1 or 2 ml. As per my measurement, after every 100 dispensing operations, the 1 ml syringe would be empty and we would need to pull liquid from the reservoir into the syringe, and do the dispensing again. </p> <p>My question is, I am unsure about the check valve here. Is there a 'Single' check valve available which would allow this kind of flow to happen?<a href="https://i.stack.imgur.com/TWKZy.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TWKZy.jpg" alt="enter image description here"></a></p>
3 way check valve
<p>It is not extremely difficult to achieve this. It is probably a great way to learn after you complete the <a href="http://wiki.ros.org/ROS/Tutorials" rel="nofollow">main ROS tutorials</a>. Using ROS will get you free joystick or keyboard teleoperation of your bot, along with potential integration with the ROS navigation stack.</p> <p>You have the choice of either interfacing with your robot hardware using the GPIO onboard your Raspberry Pi, or using a microcontroller and interfacing that with the computer using <a href="http://wiki.ros.org/rosserial" rel="nofollow">rosserial</a>. I would recommend using rosserial and an Arduino, because controlling a small robot and reading analog sensors is better documented on the Arduino platform. Either way is totally doable though. Good luck!</p>
9260
2016-02-26T09:46:36.657
|ros|raspberry-pi|electronics|
<p>I want to build a low cost robot running ROS, for educational purposes. It can be a simple line follower using a Raspberry Pi and an IR sensor. Is it overambitious as a beginner project? How difficult is it to make ROS run on custom hardware?</p> <p>P.S. I am a newbie in both robotics and programming, and I am more interested in building actual robots than running simulations. Also, I can't afford to buy ROS-compatible robots.</p>
How difficult is it to build simple robots (for example, a line follower) using a Raspberry Pi and ROS?
<p>What an interesting question. It's interesting because operational space parameters are the same if a robot is in configuration 1, or configuration 2, or even if it is not present at all. Similarly, the robot can choose any joint space configuration it desires, independent of any operational space parameters. So it would seem these two spaces are completely independent.</p> <p>It is the act of having the robot perform tasks that relates and constrains the two spaces. Once you task a robot to "move to point A (in operational space)," you have just built an implicit constraint on the robot's joint space such that it must use the $\vec x = K (\vec q)$ kinematics to move to point A. So any function of operational space tasks $f(\vec x)$ gets mapped via inverse kinematics $K^{-1}$ to constrain joint space $\vec q$. </p>
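<p>A toy illustration of the forward map $\vec x = K(\vec q)$ is the planar 2-link arm below (a minimal sketch; the link lengths are arbitrary choices). Any joint configuration maps to exactly one operational-space point, but the spaces only become coupled once a task constrains $\vec x$:</p> <pre><code>import numpy as np

def K(q, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2-link arm: joint space into operational space."""
    q1, q2 = q
    x = l1*np.cos(q1) + l2*np.cos(q1 + q2)
    y = l1*np.sin(q1) + l2*np.sin(q1 + q2)
    return np.array([x, y])

# Joint space is free to pick any q...
print(K([0.3, 0.5]))
# ...but the task "reach point A" constrains q through the inverse of K.
</code></pre>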
9263
2016-02-26T15:10:48.647
|kinematics|
<p>Some questions about this; my friends and I argued about this problem.</p> <p>Are operational space and joint space dependent on each other?</p> <p>I know that $x_e$ (the end effector's position) and $q$ (the joint variables) can be expressed by an equation with a non-linear function $k$:</p> <p>$x_e = k(q)$</p> <p>But I don't think that it tells us operational space and joint space are dependent. </p>
Are operational space and joint space dependent on each other?
<h1><strong>Solution: Is there another solution without prerotating vectors?</strong></h1> <p>I finally got a solution, and here it is.</p> <p>Python, <a href="https://github.com/ros/geometry/blob/7f1188a1ba35ff4d1b107fd05cc494b2a34f550c/tf/src/tf/transformations.py" rel="nofollow noreferrer">ROS geometry library</a>, numpy</p> <p>My actual code/maths in short: </p> <p>1) Rotate the position &amp; orientation of the lasers by roll &amp; pitch. The <code>axes='sxyz'</code> means: static axes; apply roll, pitch, yaw. </p> <p><code>quaternion_matrix</code> creates a 4x4 transformation matrix from the quaternion.</p> <pre><code>import numpy
from math import sin, cos, sqrt, atan
from tf.transformations import euler_from_quaternion, quaternion_from_euler, quaternion_matrix

# q is the attitude quaternion coming from the IMU (PixHawk)
laser = (1,1,1,0)        # laser position (homogeneous coordinates)
orientation = (1,0,0,0)  # laser orientation (direction vector, w = 0)
roll, pitch, _ = list(euler_from_quaternion(q, axes='sxyz'))
q = quaternion_from_euler(roll, pitch, 0, axes="sxyz")
laser = numpy.dot(quaternion_matrix(q), laser)
orientation = numpy.dot(quaternion_matrix(q), orientation)
</code></pre> <p>2) Algebraic solution: rotation around Z as a function of yaw</p> <p><a href="https://i.stack.imgur.com/rHaS5.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rHaS5.gif" alt="Rotation around Z"></a></p> <pre><code># t is the (unknown) yaw angle
laser = [cos(t)*laser[0] - sin(t)*laser[1],
         sin(t)*laser[0] + cos(t)*laser[1],
         laser[2]]
orientation = [cos(t)*orientation[0] - sin(t)*orientation[1],
               sin(t)*orientation[0] + cos(t)*orientation[1],
               orientation[2]]
</code></pre> <p>3) Algebraic solution: extrapolation from the measurements as a function of yaw</p> <p>Important note: since the rotation does not scale vectors, the denominator of the K factor is a constant. So we can simplify it by precomputing the length of the orientation vector. </p> <pre><code>M = 100  # measured distance
K = sqrt(M**2 / (orientation[0]**2 + orientation[1]**2 + orientation[2]**2))
PointOnWall = [K * orientation[0] + laser[0],
               K * orientation[1] + laser[1],
               K * orientation[2] + laser[2]]
</code></pre> <p>4) Algebraic solution: from this, with two lasers, get the walls. </p> <p>The two "PointOnWall" equations should give enough data to get the yaw. Knowing the wall has a (-1,0,0) normal, I can find 2 planes from the two points:</p> <p><a href="https://i.stack.imgur.com/0BqsP.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0BqsP.gif" alt="Wall equation"></a></p> <p>5) Algebraic solution: measure the yaw.</p> <p>Substituting one plane into the other (via XMaxima), we get: </p> <p><a href="https://i.stack.imgur.com/BLwIf.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BLwIf.gif" alt="Tan equation"></a></p> <pre><code>def getYaw(position1, orientation1, measure1,
           position2, orientation2, measure2):
    # length() is the Euclidean norm, e.g. numpy.linalg.norm
    length1 = length(orientation1)
    length2 = length(orientation2)
    k1 = measure1/length1
    k2 = measure2/length2
    numerator = -k2*orientation2[0] + k1*orientation1[0] + position1[0] - position2[0]
    denominator = -k2*orientation2[1] + k1*orientation1[1] + position1[1] - position2[1]
    return atan(numerator/denominator)
</code></pre> <p>As expected, roll &amp; pitch DO NOT interfere, since the positions and orientations are prerotated.</p>
9267
2016-02-27T11:51:07.267
|algorithm|geometry|
<p>I have to know where a multi-rotor is, in a rectangular room, via 6 lasers, 2 on each axis.</p> <p>The problem is like this: </p> <h1>Inputs:</h1> <ul> <li>Room: square, 10 meters by 10 meters</li> <li>6 positions of the lasers: <strong>fixed</strong> on the frame</li> <li>6 orientations of the lasers: <strong>fixed</strong> on the frame</li> <li>The 6 measurements of the lasers</li> <li>The quaternion from the IMU of my flight controller (PixHawk).</li> <li>The origin is centered on the gravity center of the multi-rotor and defined such that the walls are perpendicular to each axis (the normal of the wall in X is (-1,0,0)) </li> </ul> <h1>Output:</h1> <ul> <li>Position in 3D (X,Y,Z)</li> <li>Angular position (quaternion)</li> </ul> <p>Since I have the angular position of the multi-rotor, I rotated the laser positions and orientations via the quaternion, then extrapolated via the 6 measurements and got the 3 walls (the orientations of the walls are trivial, so only one point is enough to determine each wall's position).</p> <p>Unfortunately, I noticed that the <strong>yaw</strong> (rotation about z) measurement from the PixHawk is <strong>unreliable</strong>. So I should measure the yaw from the lasers, but I have not succeeded in doing it. Even if the 2D problem is easy, I am lost in 3D.</p> <p>Does someone know if it [<em>an algorithm to get the XYZ position and quaternion from 6 measurements</em>] exists somewhere? Or what is the right way to approach this problem? </p> <p><strong>The question:</strong> How can I get the yaw from the 2 measurements of 2 lasers for which I know the original position, orientation, and the pitch and roll? </p> <p>NOTE: Green pointers are the origin position; red pointers are the "final" position, which could be rotated around the red circle (due to yaw).</p> <p><a href="https://i.stack.imgur.com/ARr3D.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ARr3D.png" alt="Representation"></a></p>
6D localization with 6 lasers
<p>I will attempt to answer slightly more than your question in order to expose you to alternative tracking methods. Note that what you are doing is essentially trying to locate and classify a certain object in an image. To do this robustly, you generally need to have some training images and a label image that has each pixel labeled 0 or 1, where the pixels that are 1 are the object to be tracked. This is often done by hand. The training and label images are used to calculate histograms of the features of the object of interest, and those histograms are used either for simple range-based thresholding or as input to a machine learning algorithm.</p> <p>The following rely on having somehow located a region of interest that you want to describe. These ROIs may have been found by using the cv::findContours function, or even randomly, or by testing a [100x100] window on a grid at every [60x60] pixel step. </p> <p><strong>Shape descriptions:</strong> Area, perimeter, convexity, centroid, mean and standard deviation of the distance of each pixel from the centroid, skeleton length</p> <p><strong>Colour descriptions:</strong> mean, standard deviation, range, skew and kurtosis of intensities.</p> <p><strong>Texture descriptions:</strong> Gabor filter responses, standard deviation of intensities (in a local window)</p> <p>Most of these are implemented in the Python library <strong><em>scikit-image</em></strong> in the <em>measure</em> module.</p> <p>In any case you will need to do the above on a 1-channel image. That may be just a single R/G/B channel, or the channels combined into a grayscale channel, or, for instance, the (R - 0.5*(G + B)) channel if you were tracking red apples.</p> <p>Regardless of your detection method, it would be useful to track the object in pixel space with a Kalman Filter (KF). This would then give you additional spatial features, namely the difference between each contour centroid and the expected contour position that the KF estimated.</p>
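<p>Here is a minimal sketch of computing a few of the shape descriptors above with scikit-image (the dummy binary <code>mask</code> is a stand-in for whatever thresholded object you have):</p> <pre><code>import numpy as np
from skimage import measure

# Stand-in for a thresholded object: a binary mask with object pixels set to 1
mask = np.zeros((100, 100), dtype=np.uint8)
mask[30:70, 35:65] = 1

labels = measure.label(mask)
for region in measure.regionprops(labels):
    print(region.area)          # pixel count
    print(region.perimeter)     # boundary length
    print(region.solidity)      # area / convex-hull area (a convexity measure)
    print(region.centroid)      # (row, col) centroid
    print(region.eccentricity)  # elongation of the fitted ellipse
</code></pre>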
9269
2016-02-27T17:13:16.327
|quadcopter|cameras|opencv|
<blockquote> <p><strong><em>UPDATE</em></strong> I have aded 50 bounty for <a href="https://stackoverflow.com/questions/35672762/finding-features-to-track-from-ar-drone-2-camera-illumination">this</a> question on the StackOverflow</p> </blockquote> <p>I am trying to implement object tracking from the camera(just one camera, no Z info). Camera has 720*1280 resolution, but I usually rescale it to 360*640 for faster processing.</p> <p>This tracking is done from the robots camera and I want a system which would be as robust as possible. </p> <p>I will list what I did so far and what were the results.</p> <ol> <li>I tried to do <strong><em>colour tracking</em></strong>, I would convert image to hsv colour space, do thresholding, some morphological transformations and then find the object with the biggest area. This approach made a fair tracking of the object, unless there are no other object with the same colour. As I was looking for the max and if there are any other objects bigger than the one I need, robot would go towards the bigger one</li> <li>Then, I decided to track <strong><em>circled objects of the specific colour</em></strong>. However, it was difficult to find under different angles</li> <li>Then, I decided to track <strong><em>square objects of specific colour</em></strong>. I used this</li> </ol> <blockquote> <pre><code> // Approximate contour with accuracy proportional // to the contour perimeter cv::approxPolyDP( cv::Mat(contours[i]), approx, cv::arcLength(cv::Mat(contours[i]), true) * 0.02, true ); </code></pre> </blockquote> <p>and then I checked this condition</p> <blockquote> <p>if (approx.size() >= 4 &amp;&amp; approx.size() &lt;= 6)</p> </blockquote> <p>and afterwards I checked for</p> <blockquote> <p>solidity > 0.85 and aspect ratio between 0.85 and 1.15</p> </blockquote> <p>But still result is not as robust as I would expect, especially the size. If there are several squares it would not find the needed one.</p> <hr> <p>So, now I need some suggestions on what other <strong><em>features</em></strong> of the object could I use to improve tracking and how? As I mentioned above several times, one of the main problems is <strong><em>size</em></strong>. And I know the size of the object. However, I am not sure how I can make use of it, because I do not know the distance of the object from the camera and that is why I am not sure how to represent its size in pixel representation so that I can eliminate any other blobs that do not fall into that range.</p> <p><strong><em>UPDATE</em></strong></p> <p>In the third step, I described how I am going to detect squares with specific colour. Below are the examples of what I am getting. </p> <p>I used this HSV range for the red colour:</p> <blockquote> <p>Scalar(250, 129, 0), Scalar(255, 255, 255), params to OpenCV's inRange function</p> <p>HMIN = 250, HMAX = 255; SMIN = 129, SMAX = 255; VMIN = 0, VMAX = 255; (Would like to see your suggestions on tweaking this values as well)</p> </blockquote> <p>So, in this picture you can see the processing; gaussian blurring (5*5), morphological closing two times (5*5). And the image with the label "result" shows the tracked object (please look at the green square).</p> <p><a href="https://i.stack.imgur.com/XSffp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XSffp.png" alt="Thresholding and tracking based on HSV colour space 1"></a></p> <p>On the second frame, you can see that it cannot detect the "red square". 
The only main difference between these two pics is that I bent down the lid of the laptop (please look closer if you cannot see it). I suppose this happens because of the <strong><em>illumination</em></strong>, and this causes the thresholding to give undesired results. </p> <p><a href="https://i.stack.imgur.com/iJ5Vk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iJ5Vk.png" alt="Bended the lid of the laptop"></a></p> <p>The only way I can think of is doing two separate processing passes on the image. First, do thresholding based on the colour as I was doing above. Then, if I find the object, move to the next frame; if not, use <a href="https://github.com/Itseez/opencv/blob/master/samples/cpp/squares.cpp" rel="nofollow noreferrer">this OpenCV find-squares sample</a>.</p> <p>However, this method will involve too much processing of the image. </p>
Suggestions on object types (features) to track from ARDrone 2 camera
<p>If 5V blocks were your only option, then I would say "yes, this is fairly safe" with the caveats:</p> <ul> <li>If one cell discharges faster than the other, it will start to deteriorate much faster than the other -- it will last for fewer recharge cycles</li> <li>You should re-charge them separately, not in series</li> </ul> <p>However, if you're looking for an off-the-shelf 10V lithium-ion battery pack, you should consider the 10.8V batteries that are available in many models of cordless drills. </p>
9282
2016-02-29T13:27:12.623
|mobile-robot|power|battery|
<p>For my robotics project I would like to utilise readily available mobile phone 'power banks' to simplify the power system for my robot. However, such power banks output 5V, great for the logic systems but not for the motors. </p> <p>I was wondering if I could wire the outputs of two power banks in series and get 10V or is this a very bad idea? Should I wire them in parallel and use a boost converter? Is a custom solution using 'ordinary' Li-Po batteries and associated charging circuit the best answer?</p> <p>Additional Information:</p> <ul> <li>This will be a two wheeled robot.</li> <li>5V Logic</li> <li>7+V Motor driver</li> <li>Power Banks: 5V 2.1Amp 2100mAh</li> </ul>
Mobile phone power packs
<p>Got it! The problem was that there's a feedforward term in the control loop that the datasheet doesn't talk about. The wheel and motor I'm calibrating for are pretty light, so the feedforward was enough to get pretty good tracking on its own (i.e., little error, so little reliance on the PID).</p> <p>To solve it I sent in known-to-be-wrong QPPS settings to induce a small error, tuned the velocity PID per the directions in the datasheet, and then reset QPPS to where it should be.</p>
9296
2016-03-01T21:45:27.677
|control|pid|
<p>I'm seeing a behavior in my RoboClaw 2x7 that I can't explain. I've been trying to manually tune the velocity PID settings (I don't have a Windows box, so I can't use Ionmc's tuning tool) by using command 28 to set the velocity PID gains, then command 55 to verify that they're set correctly, then 35 to spin the wheel at half of its maximum speed. The problem is that no combination of PID gains seems to make any difference at all. I've set it to 0,0,0 and the motor still spins at roughly the set point.</p> <p>I must be doing something wrong, but I'm poring over <a href="http://downloads.ionmc.com/docs/roboclaw_datasheet_2x7a.pdf" rel="nofollow" title="datasheet">the datasheet</a> and I just don't see what it is. By all rights the motor shouldn't spin when I use 0,0,0! Any ideas?</p>
Why does my RoboClaw seem to be ignoring the PID gain settings?
<p><strong>Quaternions</strong></p> <p>Mathematical definition</p> <p>$\mathcal{H} := \{ h_{1} + h_{2}i + h_{3}j + h_{4}k : h_{1},h_{2},h_{3},h_{4} \in \Re$ and $ i^{2} = j^{2} = k^{2} = ijk = -1 \}$</p> <p>A pure quaternion can be defined as:</p> <p>$\mathcal{H}_{p} := \{ p \in \mathcal{H} : Re(p)=0\}$</p> <p>Pure quaternions can be associated with vectors.</p> <p><strong>Translation</strong></p> <p>Since pure quaternions are directly related to vectors, you can add two pure quaternions to form a third one. Observe the figure, where we represent a pure quaternion and the real axis is omitted: </p> <p><a href="https://i.stack.imgur.com/IbfuE.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IbfuE.jpg" alt="Translation quaternion p"></a></p> <p><strong>Rotation</strong></p> <p>A general rotation of $\phi$ around an arbitrary unit-norm rotation axis n is given by:</p> <p><a href="https://i.stack.imgur.com/y64Xh.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/y64Xh.jpg" alt="Quaternion rotation by an arbitrary axis n"></a> </p> <blockquote> <p>QUESTION 1: Given $q_{2}$ and $q_{3}$, how can I find $q_{1}$?</p> </blockquote> <p>When you apply the rotation $q_{1}$ first and $q_{2}$ second, the composed rotation quaternion is $q_{3} = q_{2}q_{1}$ (note the order: quaternion multiplication is not commutative, and this order matches the numeric $q_{3}$ in your question). Left-multiplying by the conjugate of $q_{2}$:</p> <p>$q_{2}^{*}q_{3} = q_{2}^{*}q_{2}q_{1}$</p> <p>Since $q_{2}^{*}q_{2} = 1 = q_{2}q_{2}^{*}$,</p> <p>$q_{1} = q_{2}^{*}q_{3}$</p> <p>where the * stands for the conjugate of a quaternion.</p> <blockquote> <p>QUESTION 2: How do I find the rotation angle about some other vector, given a rotation quaternion? For example, I want to find by what angle $q_{3}$ rotates about the vector 2i+j-k.</p> </blockquote> <p>If your rotation axis is represented by $n$ and you know the rotation quaternion $r$, your rotation angle will be double the half-angle that appears in $r$.</p> <p>Example:</p> <p>$n = i + 0j + 0k$</p> <p>$r$ = $q_{1} = cos(\frac{\pi}{4}) + sin(\frac{\pi}{4})i$</p> <p>From the formula presented in the section on <strong>rotation</strong>:</p> <p>$\frac{\phi}{2} = \frac{\pi}{4}$</p> <p>$\phi = \frac{\pi}{2}$</p> <p>Hope that was useful.</p> <p>Best regards</p>
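<p>Addendum: a quick numeric check of Question 1 in plain Python/NumPy (Hamilton product convention; <code>qmult</code> and <code>qconj</code> are just helper names for this sketch, not from any library):</p> <pre><code>import numpy as np

def qmult(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z)
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qconj(a):
    w, x, y, z = a
    return np.array([w, -x, -y, -z])

c = np.cos(np.pi/4)
q1 = np.array([c, c, 0.0, 0.0])   # cos(pi/4) + sin(pi/4) i
q2 = np.array([c, 0.0, 0.0, c])   # cos(pi/4) + sin(pi/4) k

q3 = qmult(q2, q1)                # apply q1 first, then q2
print(q3)                         # [0.5 0.5 0.5 0.5], the q3 from the question
print(qmult(qconj(q2), q3))       # recovers q1: [0.7071 0.7071 0. 0.]
</code></pre>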
9297
2016-03-01T21:46:16.840
|kinematics|
<p>I am trying to use quaternions for robotics and there is one thing I don't understand about them, most likely because I don't understand how to define position with quaternions and how to define rotation with quaternions, if there is any difference.<br> Please look at my "understanding steps" and correct me if I am wrong somewhere.</p> <p>Let's assume we have 2 vehicle positions described by 2 rotation quaternions: $$ q_1 = w_1 + x_1i + y_1j +z_1k = \cos(\pi/4) + \sin(\pi/4)i $$ This quaternion is normalized and represents a rotation about the $x$ axis by a $\pi/2$ angle, as I understand it. $$ q_2 = w_2 + x_2i + y_2j + z_2k = \cos(\pi/4) + \sin(\pi/4)k $$ And this one represents a rotation by the same angle $\pi/2$ about the $y$ axis.</p> <p>$q_1*q_2 = q_3$ which would be the same rotation as if we made $q_1$ first and $q_2$ second. $$q_3 = \frac{1}{2} + \frac{i}{2} +\frac{j}{2} +\frac{k}{2}$$</p> <p><strong>QUESTION 1:</strong> Given $q_2$ and $q_3$, how can I find $q_1$?</p> <p><strong>QUESTION 2:</strong> How do I find the rotation angle about some other vector, given a rotation quaternion? For example, I want to find by what angle $q_3$ rotates about the vector $2i+j-k$.</p>
Finding rotation quaternion
<p>The best solution is to use interrupts triggered by the echo pin's change of state. joan's pigpio example worked beautifully. Using multithreading did not give satisfactory results, as there were still problems with the timing, since multithreading on the Raspberry Pi operates via pipelining.</p>
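<p>For reference, a minimal sketch of the edge-triggered approach with the pigpio Python library is below (the GPIO pin numbers and the 58 us/cm conversion are assumptions for an HC-SR04; adapt them to your wiring). The callback runs on both edges of the echo pin, so the main control loop never blocks while waiting for the echo:</p> <pre><code>import pigpio

TRIG, ECHO = 23, 24            # assumed BCM pin numbers
pi = pigpio.pi()
pi.set_mode(TRIG, pigpio.OUTPUT)
pi.set_mode(ECHO, pigpio.INPUT)

_rise = None
distance_cm = None             # latest reading, consumed by the PID loop

def _edge(gpio, level, tick):
    global _rise, distance_cm
    if level == 1:                          # rising edge: echo pulse started
        _rise = tick
    elif _rise is not None:                 # falling edge: pulse ended
        width_us = pigpio.tickDiff(_rise, tick)
        distance_cm = width_us / 58.0       # HC-SR04: roughly 58 us per cm

cb = pi.callback(ECHO, pigpio.EITHER_EDGE, _edge)

def trigger_ping():
    pi.gpio_trigger(TRIG, 10, 1)            # 10 us trigger pulse, non-blocking
</code></pre>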
9303
2016-03-02T11:22:52.573
|quadcopter|pid|stability|real-time|sonar|
<p>Good day. I would like to ask how it is possible to use an ultrasonic sensor for altitude hold on a quadcopter when the sampling rate of the ultrasonic sensor (HC-SR04) is only 20Hz, which is the rate I measured before incurring any errors through polling. I have seen this sensor being implemented on other projects; however, I could not find any papers that explain the use of this sensor in more detail. I have seen possible solutions on the Raspberry Pi, one using interrupts and the other using Linux's multithreading.</p> <p>If my understanding is right, to use interrupts, I need some sort of data-ready signal from the ultrasonic sensor. However, this is not available on this particular sensor. Is it possible to use the echo pin as the positive edge trigger for the interrupt service routine (the read-sonar/calculate-distance function)? But would this not introduce inconsistent loop execution times, which is bad for a consistent PID loop?</p> <p>Another approach is to use multithreading via the wiring-pi library, which enables me to run a function, let's say a function that triggers the sonar and calculates the distance, alongside the PID control loop. How would this affect the PID control loop rate?</p> <blockquote> <p>Which is the best way to implement sonar sensor based altitude hold?</p> </blockquote>
Ultrasonic Sensor's Lag (20Hz) effect on PID control loop rate (150Hz)
<p>I assume you are working with a KRC4 or KRC2 controller, since you have mentioned OrangeEdit. </p> <p>In order to give your robot motion reference values you will need a technology package called RSI Interface. This allows you to transmit Ethernet packets to the controller and give new motion references with a 12 ms cycle time. All you have to do is generate the required message (XML format) from a PC application based on the values coming from your joystick. I think this approach is the best; pre-implemented libraries can be found here: <a href="https://sourceforge.net/projects/kct/" rel="nofollow">https://sourceforge.net/projects/kct/</a> or <a href="https://github.com/erensezener/kuka-rsi3-communicator" rel="nofollow">https://github.com/erensezener/kuka-rsi3-communicator</a></p> <p>There are also other approaches, exploiting some workarounds: <a href="https://www.mb.hs-wismar.de/cea/Kuka_KRL_Tbx/Kuka_KRL_Tbx.html" rel="nofollow">https://www.mb.hs-wismar.de/cea/Kuka_KRL_Tbx/Kuka_KRL_Tbx.html</a> or <a href="http://home.mis.u-picardie.fr/~fabio/Eng/documenti/articoli/ChScMoPr_RAM11.pdf" rel="nofollow">http://home.mis.u-picardie.fr/~fabio/Eng/documenti/articoli/ChScMoPr_RAM11.pdf</a></p> <p>EDIT: Once the src and dat files are loaded in memory they cannot be changed. You can still give new reference waypoints to the robot with the above mentioned tools/interfaces.</p> <p>If the said program is not loaded in memory, you can use FTP from your .NET application to connect to the KUKA controller and do the file operations for changing the coordinates. However, I do not think that a KUKA program can be loaded into memory and executed without any manual intervention (e.g. switching the mode selector to AUT or EXT).</p> <p>EDIT2: You mentioned a KRC4. The KRC4 supports customized HMIs, so instead of a PC connection you might get the same functionality by making an HMI screen with buttons and using the 6D mouse on the console (KCP) instead of a joystick.</p>
9312
2016-03-03T10:57:03.057
|robotic-arm|kuka|
<p>I have a chance to develop a user interface program that lets the user control a KUKA robot from a computer. I know how to program stuff with the KUKA utilities, like OrangeEdit, but I don't know how to do what I want to do. I don't even know what the "best" language to talk to the robot is.</p> <p>My idea is to control the robot with the arrow buttons, like up/down controls the Z axis and left/right controls the X/Y axes.</p> <p>Can someone help me here? I know there are a lot of libraries to control the robot, even with an Xbox controller, but if I limit the robot to 3 axes I might be able to control it with simple buttons. </p> <p>Edit: Now imagine that I have a routine that consists of going from P1 to P2 and then to P3. I know I can "touch up" the points to refresh their coordinates using the console, but can I do it in a .NET application, for example by modifying the src/dat files?</p>
KUKA delimiter .NET
<p>The second design will put more stress on the servos over time, so there are indeed real structural reasons for the first design.</p> <p>However, it also looks like the second design is more of a prototype compared to the first. And it's possible that the second design is trying to save weight, sacrificing some servo life in the interest of that goal. </p> <p>Reducing total weight and increasing the speed at which you can iterate on your design are just 2 possible factors for why you might use a component in a suboptimal way. Those tradeoffs really come down to individual preference.</p>
9323
2016-03-04T10:51:15.787
|joint|
<p>Looking at pictures of existing designs for quadropod robots, the servos in the legs seem to usually be mounted inside the chassis, with a second attachment at back of the servo as well, such as this:</p> <p><a href="https://i.stack.imgur.com/x6UxD.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/x6UxD.jpg" alt="Servo attached front and back"></a></p> <p>rather than putting what looks like an asymmetrical load, like the knees here:</p> <p><a href="https://i.stack.imgur.com/MgzPq.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MgzPq.jpg" alt="Servo attached one side only"></a></p> <p>Is this for aesthetics or are there real structural reasons to minimize the lateral load on the axle on a robot of this size?</p>
Lateral load on a servo motor
<p>Currently I am performing my research on that, and in my case I decided to tune all the PIDs. The angles of the quadrotor tend to zero, while the altitude controller tends to the target height. That is common in the literature. </p> <p>In my particular simulations on the computer, PD parameters were enough, and all the controllers used the same proportional and derivative parameters.</p>
9324
2016-03-04T14:24:20.287
|quadcopter|pid|stability|real-time|sonar|
<p>Good day, I have just finished tuning the pitch and roll PIDs. I did this by setting the throttle such that the quad is weightless. I did the tuning of each axis separately.</p> <p>I would just like to ask what the best way is to tune the PID for maintaining an altitude setpoint.</p> <blockquote> <p>Is it best to turn off the Pitch and Roll PID controllers while tuning the altitude PID, or is it best to have them already active while tuning the latter controller?</p> </blockquote> <p>I am going to use a cascaded PID controller, using the velocity along the z-axis calculated from the accelerometer output for the inner PID loop (150Hz) and the altitude measurement of the HC-SR04 ultrasonic sensor (20Hz) for the outer PID loop.</p>
Quadcopter PID Tuning for Altitude Hold/Position Hold along Z axis
<p>Jacobian does not have to be a square matrix. It is a matrix that maps joint velocities $\dot{q}$ to tool velocities $\dot{x}$ so its size is whatever fits with the two velocity vectors.</p> <p>In your case, since the tool velocity vector is $\dot{x} \in \mathbf{R}^7$, a Jacobian will be $J \in \mathbf{R}^{7 \times 6}$, when your robot has $6$ DOFs.</p>
9336
2016-03-06T01:37:43.133
|robotic-arm|inverse-kinematics|jacobian|
<p>Quaternion has four parameters. Calculating Jacobian for inverse-kinematics, 3 positions and four quaternion parameters make Jacobian $7\times7$ instead of $6\times6$. How to reduce Jacobian to $6\times6$ when using quaternion?</p>
Jacobian for Inverse Kinematics with quaternion of end effector
<p>A brief overview of some of these variants:</p> <ul> <li><strong>A*</strong> A variant of Dijkstra's algorithm that maintains a heuristic distance to the goal in order to first explore the parts of the graph that are more likely to be closer to the goal (same result as Dijkstra's algorithm, but faster).</li> <li><strong>Theta*</strong> An "any-angle" variant of A*. In other words, movements between nodes are not restricted to the immediate neighbourhood. So you get paths that appear more natural and are often the shortest possible distance between two points (e.g., in a grid environment, A* might plan a non-straight line between two points because of the constraint requiring the path to "stay in the grid"; Theta* relaxes this).</li> <li><strong>D*</strong> A variant of A* that is fast to update when edge costs in the graph change. For example, let's say the robot's path is suddenly blocked by a new obstacle. D* doesn't need to replan the whole path from scratch; rather, it amends the previous path.</li> <li><strong>Focused D*</strong> A variant of D* that uses heuristics to more efficiently choose which parts of the graph are updated when a change in the graph (e.g., an obstacle) is detected.</li> <li><strong>D* Lite</strong> Essentially the same as Focused D*, but more intuitive and easier to program. It is pretty much used in place of Focused D* in virtually all applications.</li> <li><strong>Field D*</strong> An "any-angle" version of D* Lite. You get natural paths that update rather quickly to changing obstacles. This algorithm has been used on Mars rovers. The downside is that it is much trickier to implement and understand than Theta*, for example.</li> </ul>
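<p>To make the baseline concrete, here is a compact, self-contained A* sketch on a 4-connected grid (plain Python; the grid representation and the Manhattan heuristic are illustrative choices only, not taken from any particular paper on the variants above):</p> <pre><code>import heapq

def astar(start, goal, blocked, width, height):
    """Plain A* on a 4-connected grid. `blocked` is a set of (x, y) cells."""
    def h(p):                                   # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    g_cost = {start: 0}
    parent = {start: None}
    open_set = [(h(start), start)]              # priority queue of (f, node)
    closed = set()

    while open_set:
        _, node = heapq.heappop(open_set)
        if node in closed:
            continue
        closed.add(node)
        if node == goal:                        # reconstruct the path
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 &lt;= nxt[0] &lt; width and 0 &lt;= nxt[1] &lt; height
                    and nxt not in blocked):
                new_g = g_cost[node] + 1
                if new_g &lt; g_cost.get(nxt, float('inf')):
                    g_cost[nxt] = new_g
                    parent[nxt] = node
                    heapq.heappush(open_set, (new_g + h(nxt), nxt))
    return None                                 # no path exists

# Example: path around a small wall on a 10x10 grid
print(astar((0, 0), (9, 9), {(5, y) for y in range(9)}, 10, 10))
</code></pre>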
9337
2016-03-06T04:47:56.207
|mobile-robot|motion-planning|algorithm|
<p>I am working on a non-holonomic motion planning problem of a mobile robot in a completely unknown environment. After going through some research papers, I found that D-star algorithm is widely used in such conditions. But there are many D-star variants like Focused D*, D*-Lite, Field D* etc... So which of these variants is suitable in this case? Also please suggest any other better approach for this problem?</p>
Suitable D* variant for non-holonomic motion planning of mobile robots
<p>It probably depends on the size of the area where you want to operate your robot.</p> <p>For me it was always good enough to use the <a href="https://en.wikipedia.org/wiki/Equirectangular_projection" rel="nofollow">equirectangular projection</a>, because for my assumptions ("an area at most 1 km across, at most 1000 km from where I live") the error always comes out as negligible (I don't remember the value now and I don't have time to calculate it, but my guess would be under 1 mm). You should calculate it and decide for yourself.</p> <p>If your robot will operate on a significantly larger part of the Earth and would have problems with this kind of distortion, I guess it would be worth investing in a proper 3D world model instead of fighting with projection deformations.</p>
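<p>For reference, a minimal sketch of the equirectangular (local tangent plane) conversion is below; the mean Earth radius constant is an assumption, and the reference point (lat0, lon0) would be wherever your robot's local frame is anchored:</p> <pre><code>import math

R_EARTH = 6371000.0   # mean Earth radius in metres (assumed constant)

def latlon_to_local_xy(lat, lon, lat0, lon0):
    """Equirectangular projection of (lat, lon), in degrees, into metres
    east (x) and north (y) of the reference point (lat0, lon0)."""
    x = math.radians(lon - lon0) * R_EARTH * math.cos(math.radians(lat0))
    y = math.radians(lat - lat0) * R_EARTH
    return x, y

# Example: a point roughly 111 m north and 70 m east of the reference
print(latlon_to_local_xy(52.001, 13.001, 52.0, 13.0))
</code></pre>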
9344
2016-03-07T09:23:06.443
|odometry|geometry|
<p>I would like to make a little survey regarding the (geo)spatial projection that you use when elaborating your GPS and movement data for the spatial awareness of your robots. </p> <p>Moving all GPS coordinates to a <strong>planar projection</strong> seems to be the more reasonable choice since not only distances (for which several <a href="http://www.movable-type.co.uk/scripts/latlong.html" rel="nofollow">formulas and approximations</a> exist), but bearings must be computed.</p> <p>Generally, although scales are pretty small here, avoiding <em>equirectangular approximation</em> seems a good idea in order to keep a more consistent system.</p> <p>Avoiding working in the 3D world (Haversine and other great-circle stuff) is probably a good idea too to keep computations low-cost.</p> <p>Moving the world to a 2D projection is hence what seems to be the best solution, despite reprojection of all input GPS coordinates is needed. </p> <p>I would like to get opinions and ideas on the subject (...if ever anyone is interested in doing it U_U).</p>
Which geo-projection to use for odometry
<p><a href="https://robotics.stackexchange.com/a/9363/350">Chuck's answer</a> is correct; applying motion capture to a robot will make its legs move in a walking motion, but won't guarantee that the entire setup will remain upright. </p> <p><a href="https://i.stack.imgur.com/RjNBE.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RjNBE.gif" alt="floor spinning"></a></p> <p>One simple fix for that would be to just use the legs for propulsion only, and use a set of wheels for balance.</p> <p><a href="https://www.youtube.com/watch?v=rz1xEu52UbM" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ijNUGm.png" alt="walking pants"></a></p>
9349
2016-03-08T03:11:01.897
|mobile-robot|motion-planning|otherservos|
<p>I have a Kinect Sensor, and <a href="http://ipisoft.com/" rel="nofollow">iPi software</a> I use to create motion capture data to use in film editing. I am looking at creating a small, Raspberry Pi driven bipedal robot just for fun, and I was wondering if it was possible to use the MoCap to control the robot? It will only be 20-30 cm tall, with six servos (hips, knees, ankles). Is it possible to apply the movement from these six joints on the human body to my robot, like having a string directly from my left knee joint to its left knee servo? It could either be in real-time, like following my actions, or using pre-recorded data.</p> <p>(NOTE: If needed, I can plug it directly to my Apple/Windows computer, if the Pi could not support this. Also, it will have no upper torso <em>at the moment</em>.)</p>
Applying MoCap data to real life robot
<p>It's just another force, so you would treat it pretty much the same as you would treat gravity, a propeller, or any other force. There are a few points I'll make to help clarify for you:</p> <ol> <li>The string binds the two systems together, meaning that each is now an input to the other (hopefully I'll clarify this with points below).</li> <li>A string in tension <strong>is effectively rigid</strong>. A string in compression doesn't transmit force. So you'll need case statements such that the force is only applied across the systems when the string is in tension and that there is no effect when the string is in compression. </li> <li>The location of the force is wherever the string is tied to the box (call that position_box or p_box for short). Similarly, for the balloon, the location of the force is wherever the string is tied to the balloon (p_bln). </li> <li>The force is a vector that acts along [p_bln-p_box].</li> <li>Relating to point (2) above, the force is only transmitted when norm([p_bln-p_box]) >= L_string. Technically the norm shouldn't ever exceed the length of the string, but resolving that length back to L_string could be the force that is applied to each body. </li> </ol> <p>So, this would differ a little from gravity where gravity is always a vector pointing down and always acts on the center of mass, the string is a vector force of variable direction, that always acts at the point of attachment, that may not exist (if the box is closer to the balloon than the length of the string). </p> <p>Gravity acts through the center of mass, so, provided there are no other reaction forces (not hinged/pivoting, negligible air resistance), there's no torque applied to the body. What you'll find is that, by acting on a point <em>not</em> the center of mass, the string <em>will</em> apply a torque to both bodies, in addition to pulling them. </p> <p>As I mentioned, the actual distance between the two points may exceed the length of the string; this has to deal with joint stability and how you choose to solve and resolve forces across joints. </p> <p>FYI, I'm not sure what your background is, but you're entering a pretty complex field by allowing force transmission across a flexible joint. I'm a fan of <a href="http://www.springer.com/us/book/9780387743141" rel="nofollow noreferrer">Rigid Body Dynamics Algorithms</a> by Roy Featherstone, but there are certainly a number of other algorithms that you can use to solve this dynamic system. It looks to me like Featherstone's algorithm is at the heart of Matlab's SimMechanics toolbox, but that's just an educated guess; it could of course be something else. </p> <p>If you <em>were</em> going to model this in SimMechanics, you would look for two spherical joints, one on each body, with a telescoping joint that allows for motion from 0 to L_string. At lengths &lt; 0, the objects collide, at lengths > L_string the string is taut. Please comment if you would like more guidance, but I think this is about what you were looking for.</p> <p>:EDIT:</p> <p>I found <a href="http://www.mathworks.com/matlabcentral/fileexchange/37636-simscape-multibody-3d---1d-interface-examples" rel="nofollow noreferrer">this group of examples</a> on the Matlab FileExchange to be very useful when developing models in SimMechanics. The <em>most</em> useful thing I found there was a model (screenshots below) that shows how to interface a SimMechanics block to a SimScape block. 
This allows a SimMechanics "Prismatic Joint" to be used with a SimScape "Translational Hard Stop" block. Using this setup, you can set the limits on the prismatic joint to start at L_String, the limits on the translational hard stop to be 0 to L_String, and then you can adjust the end stop spring constants and damping until you get a reasonable reaction. </p> <p>The spring/damping constants will adjust the "snap" applied to each body when the string pulls taut (or when the two collide). You can adjust the upper/lower end stop constants such that a collision (L = 0) is different than the string pulling tight (L = L_String). Again, this should be a prismatic joint inserted between your two spherical joints. </p> <p>Here's the screenshots! First, create a translational interface between SimScape and SimMechanics: (Provided in the FileExchange package I linked above). </p> <p><a href="https://i.stack.imgur.com/G7QSb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/G7QSb.png" alt="SimMechanics-SimScape Translational Interface"></a></p> <p>Then, add that interface to a prismatic joint:</p> <p><a href="https://i.stack.imgur.com/ZNyIq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZNyIq.png" alt="SimMechanics Constrained Prismatic Joint"></a></p> <p>Configure the prismatic joint to start at L=L_String. Ensure that there's no internal mechanics set and that motion is automatically computed with force provided by input (this is required to work with the translational interface). </p> <p><a href="https://i.stack.imgur.com/UecYm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UecYm.png" alt="SimMechanics Prismatic Joint Setup"></a></p> <p>Then, configure the translational hard stop such that the limits are from 0 to L_String. At this point I would accept the default stiffness and damping and run the simulation to see how you think it works. My defaults are because I work with some colossal industrial equipment that uses tensioners bigger than I am. I think the stock defaults are somewhere ~100.</p> <p><a href="https://i.stack.imgur.com/idV9F.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/idV9F.png" alt="SimMechanics Translational Hard Stop Setup"></a></p> <p>Finally, I would suggest you save that configuration as a custom library block so you can have it in the future. You can do more complex things, like mask the L_String parameters such that you can customize the default lengths, etc. from the custom library block, but a how-to for that is beyond the scope of what I'm covering here. Anyways, once you've set everything up, I would suggest you just link a block to your string and run a quick simulation. Edit the parameters as you see fit until you get the desired response or it matches your experimental data. </p> <p><a href="https://i.stack.imgur.com/w1zso.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/w1zso.png" alt="SimMechanics Demo"></a></p>
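<p>If you want to prototype the force model in plain code before (or instead of) building the SimMechanics model, a sketch of the tension-only string force described in points 2-5 might look like the following; the stiffness and damping values are placeholders you would tune to match the end-stop behaviour discussed above:</p> <pre><code>import numpy as np

def string_force_on_box(p_box, p_bln, L_string, k=500.0, c=5.0, v_rel=None):
    """Tension-only string force acting on the box at its attachment point.

    Modelled as a stiff spring-damper that only engages when the string is taut;
    returns the zero vector while the string is slack (a string cannot push).
    k and c are made-up stiffness/damping values to tune against the real setup.
    """
    d = np.asarray(p_bln, float) - np.asarray(p_box, float)
    dist = np.linalg.norm(d)
    if L_string >= dist:          # slack: no force transmitted
        return np.zeros(3)
    u = d / dist                  # unit vector from box attachment to balloon attachment
    stretch = dist - L_string
    f = k * stretch               # spring part of the taut string
    if v_rel is not None:         # optional damping on the separation speed
        f += c * np.dot(np.asarray(v_rel, float), u)
    return max(f, 0.0) * u        # always pulls the box toward the balloon

# The balloon sees the equal and opposite force, applied at its own attachment point,
# and each force also produces a torque r_attach x F about that body's centre of mass.
print(string_force_on_box([0, 0, 0], [0.2, 0.1, 1.4], L_string=1.0))
</code></pre>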
9355
2016-03-08T17:04:20.727
|mobile-robot|kinematics|dynamics|
<p>I'm having a hard time trying to understand how to obtain the dynamic model for a system similar to the image.</p> <p>The balloon is a simple helium balloon, however the box is actually an aerial differential drive platform (using rotors). Now there's basically a model for the balloon and another for the actuated box. However I'm lost to how to combine both.</p> <p>The connection between both is not rigid since it is a string. How should I do it? Is there any documentation you could point me to, in order to help me develop the dynamics model for this system?</p> <p>Since I'm so lost, any help will be useful. Thanks in advance!</p> <p><a href="https://i.stack.imgur.com/A3ANA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/A3ANA.png" alt="System: Balloon suporting a box" /></a></p>
Dynamic model of a robot lifted by a balloon (Multibody system)
<p>Underactuation leads to nonholonomic constraints (non-integrable kinematic or dynamic equations) in mechanical systems. </p>
9365
2016-03-09T18:53:49.807
|kinematics|
<p>What's the difference between an underactuated system, and a nonholonomic system? I have read that "the car is a good example of a nonholonomic vehicle: it has only two controls, but its configuration space has dimension 3.". But I thought that an underactuated system was one where the number of actuators is less than the number of degrees of freedom. So are they the same?</p>
Difference between an underactuated system, and a nonholonomic system
<ol> <li><blockquote> <p>Based on the calculations, it seems that any amount of torque can get the robot moving</p> </blockquote></li> </ol> <p>That is true only for a theoretical robot moving in a vacuum without any friction in its mechanisms. Normally, if you limit the current too much, you won't even get the motor moving without any load, not to mention a whole robot. </p> <ol start="2"> <li><blockquote> <p>the more torque, the faster the robot's acceleration</p> </blockquote></li> </ol> <p>Yes, the acceleration is directly proportional to the force you apply, $$ a=\frac{F}{m} $$ which in turn is directly proportional to the torque: $$F=\frac{T}{R}$$ Both the mass and the wheel radius are constant, so when you increase the torque, the acceleration increases as well.</p> <ol start="3"> <li><blockquote> <p>if the power source cannot supply the full stall current of the motors, will the robot not be able to start moving</p> </blockquote></li> </ol> <p>The torque is proportional to the current you supply, so in the worst-case scenario (a depleted, undersized battery and too weak a motor) this may happen. However, for your robot and motors you should be fine. Your robot will just accelerate more slowly.</p> <ol start="4"> <li><blockquote> <p>Once the robot reaches full speed and is no longer accelerating, theoretically the motors will not be providing any torque, but I do not think this is the case. Is it, and if so, how will I know how much torque they will be providing?</p> </blockquote></li> </ol> <p>Again, you are forgetting about friction. Your robot will stop accelerating when the friction force equals the driving force produced by the torque. For Pololu motors this usually happens at 70-90% of the motor's no-load speed, depending on the robot's weight and mechanics. </p> <p>Generally, in my calculations I usually choose a 50% safety margin, and I have never had any trouble. You have some 30%, but that should be fine too. </p>
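<p>To put rough numbers on points 1 and 2, here is a quick back-of-the-envelope calculation in Python (the stall torque, wheel size and mass are taken from the question; the friction figure is purely a guess for illustration):</p> <pre><code># Rough acceleration estimate for a small differential-drive robot
m = 0.9                      # robot mass in kg (~2 lb)
r = 0.016                    # wheel radius in m (32 mm diameter wheels)
stall_torque = 15 * 0.00706  # 15 oz-in per motor, converted to N*m
n_motors = 2

F_max = n_motors * stall_torque / r          # tractive force near stall
F_friction = 0.5                             # guessed rolling/gearbox losses in N
a = (F_max - F_friction) / m                 # net acceleration from F = m*a
print("max tractive force: %.2f N" % F_max)
print("net acceleration:   %.2f m/s^2" % a)
</code></pre>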
9367
2016-03-09T20:24:28.860
|mobile-robot|motor|
<p>I am working on designing and building a small (1 1/2 lbs), 2-wheeled, differential drive Arduino-controlled autonomous robot. I have most of the electronics figured out, but I am having trouble understanding how much torque the motors will actually need to move the robot. I am trying to use the calculations shown <a href="http://www.robotshop.com/blog/en/drive-motor-sizing-tutorial-3661" rel="noreferrer">here</a> and the <a href="http://www.robotshop.com/blog/en/drive-motor-sizing-tool-9698" rel="noreferrer">related calculator tool</a> to determine what speed and torque I will need. I will be using wheels 32mm in diameter and one of Pololu's <a href="https://www.pololu.com/category/60/micro-metal-gearmotors" rel="noreferrer">High-Power Micro Metal Gearmotors</a>. I performed the calculations for a robot weight of 2 lbs to be safe and found that the 50:1 HP Micro Metal Gearmotors (625 RPM, 15 oz-in) should theoretically work fine, moving the robot at 3.43 ft/s with an acceleration of around 29 ft/s^2 up a 5-degree incline. </p> <p>However, I have not found an explanation for several things that I think would be very important to know when choosing drive motors. When the robot is not moving and the motors are turned on at full power, they should need to deliver their full stall torque. Based on the calculations, it seems that any amount of torque can get the robot moving, but the more torque, the faster the robot's acceleration. Is this true? Also, if the power source cannot supply the full stall current of the motors, will the robot not be able to start moving? In my case, I am powering the robot through a 7.2V (6S) 2200mAh NiMH battery pack that can provide around 2.6A continuously, and when it does that the voltage drops to less than 1V. Will this be able to power my motors? Once the robot reaches full speed and is no longer accelerating, theoretically the motors will not be providing any torque, but I do not think this is the case. Is it, and if so, how will I know how much torque they will be providing? Will the motors I chose have enough torque to move my robot?</p>
Relationship between motor torque and acceleration
<p>@Chuck's comment is correct; you can run some sort of polling software that continuously tries to detect a heat source in the incoming image stream.</p> <p>If that does not work, it is better to add another sensor, such as a proximity sensor, to detect objects like humans and then turn on the thermal camera once a threshold is crossed. But, as @Chuck mentions, unless the objectives are known, it is difficult to help you out.</p> <p>If an image array is obtained, image processing can be done through many libraries. Some of the libraries I remember are OpenCV, MATLAB, PIL (Python Imaging Library), BoofCV, JavaCV, SimpleCV, etc., depending on your programming language.</p>
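<p>As a simplified illustration of the polling idea (the frame source, temperature scale and thresholds are placeholders for whatever your particular camera provides):</p> <pre><code>import numpy as np

HOT_THRESHOLD_C = 30.0     # temperature that counts as a heat source (placeholder)
MIN_HOT_PIXELS  = 20       # how many hot pixels before we react (placeholder)

def frame_has_heat_source(frame_celsius):
    """frame_celsius: 2-D NumPy array of per-pixel temperatures from the camera."""
    hot = frame_celsius > HOT_THRESHOLD_C
    return int(hot.sum()) >= MIN_HOT_PIXELS

def poll_camera(get_frame, on_detect):
    """Continuously poll the camera; call on_detect() when a heat source shows up."""
    while True:
        if frame_has_heat_source(get_frame()):
            on_detect()          # e.g. send a ping / wake up the rest of the robot
            break

# Example with a fake 8x8 frame standing in for a real sensor read:
fake = np.full((8, 8), 22.0)
fake[1:6, 1:6] = 36.0            # a warm blob
print(frame_has_heat_source(fake))   # True
</code></pre>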
9370
2016-03-10T02:44:06.757
|wheeled-robot|
<p>So I am planning on building a robot that turns on when it detects some kind of heat source, I am currently looking at thermal imaging cameras, but am not sure as to how to go about writing code to send a ping or some sort of message when the camera detects a heat source.</p> <p>Does anyone know of any way to do this?</p> <p>Thanks </p>
Thermal Imaging camera activation upon detection
<p>I figured out my problem - the '149 35' I was sending to ask for the current OI state (so I could confirm communication) was wrong. I should have been sending '149 1 35': opcode 149 (Query List) takes the number of packet IDs first, followed by the IDs themselves. Because of that, the robot sat waiting for 35 packet IDs and treated everything I sent afterwards as part of that list.</p> <p>Short version: make sure you follow the protocol right :(</p>
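<p>For anyone hitting the same thing, this is roughly what the working sequence looks like from Python with pyserial (the port name and delays are just what worked for me, adjust to your serial adapter):</p> <pre><code>import serial, time

ser = serial.Serial('/dev/ttyUSB0', 115200, timeout=1)  # Create 2 default baud rate

ser.write(bytes([128]))        # Start: put the OI into Passive mode
time.sleep(0.1)
ser.write(bytes([131]))        # Safe mode
time.sleep(0.1)

# Opcode 149 is "Query List": [149][number of packet IDs][ID...],
# so asking for packet 35 (current OI mode) is three bytes, not two.
ser.write(bytes([149, 1, 35]))
mode = ser.read(1)             # 1 = Passive, 2 = Safe, 3 = Full
print('OI mode byte:', list(mode))
</code></pre>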
9371
2016-03-10T07:33:01.983
|irobot-create|
<p>I've started tinkering with a Create 2, but I'm having issues reliably getting it to accept my commands. I can occasionally get it right, but sometimes, it just seems to ignore me. I'm guessing my cleanup code isn't getting the state fully reset or something. Is there a good pattern to follow for fail-safe initialization code?</p> <p>Here's what I'm doing right now:</p> <ol> <li>Pulse BRC low for 1 second</li> <li>Wait 1 second</li> <li>Send 16x 0 bytes (to make sure if it's waiting for the rest of a command, this completes it - seemed to help a bit when I added this)</li> <li>Send 7 (reset)</li> <li>Wait 10 seconds</li> <li>Send 128 (start)</li> <li>Wait 2 seconds</li> <li>Send 149 35 (ask for the current OI state)</li> <li>Wait 1 second</li> <li>Send 131 (safe mode)</li> </ol> <p>Sometimes I'm then able to issue 137 (drive) commands and have it work. Most times it doesn't. The times when it doesn't, I'm seeing a lot of data coming from the Create 2 that I'm not expecting, that looks something like this (hex bytes):</p> <pre><code>00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 3f 2a ff 73 21 09 cc 0a 88 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 3f 2a ff 73 21 09 cc 0a </code></pre> <p>There's more, but my logging cut it off. I get the same pattern a couple of times, and it seems to be at least partially repeating. I thought maybe it's the 16 0-bytes I sent followed by <code>003f 2aff 7321 09cc 0a88</code>, but I still don't know how to interpret that.</p> <p>Sometimes it will make some noise for the reset command, but usually ignores the start/safe mode commands completely (I can tell because the green light stays on).</p>
Reliably establishing communication and OI mode with Create 2
<p>If you choose to use real time object detection, consider the two following:</p> <ul> <li><p><a href="https://web.stanford.edu/class/cs231a/prev_projects_2016/deep-drone-object__2_.pdf" rel="nofollow noreferrer">Deep Drone: Object Detection and Tracking for Smart Drones on Embedded Systems</a></p></li> <li><p><a href="https://pjreddie.com/darknet/yolo/" rel="nofollow noreferrer">YOLO: Real-Time Object Detection</a></p></li> </ul> <p>The first is a project out of Stanford that is highly relevant to your project. They consider the use of a few different object detection strategies. The strategy I would recommend for your application is listed in the second bullet point. The YOLO object detector (now on version 3) is currently state of the art. Assuming you don't have powerful computing devices available to your UAV, you can use the YOLOv3-tiny. It is very quick, although at the cost of accuracy, but I expect it will be good enough for your application. You will be able to use one of these methods in real-time and in multiple environments if your training data set is robust enough.</p> <p>Additionally, you can integrate a good object detection strategy within your SLAM technique. As a result, it would not be redundant.</p> <p>On a side note, practical, real-time object detection wasn't truly first achieved until about two years ago (YOLOv1 and MultiBox SSD). As a result, there are new possibilities that haven't yet been fully explored, so I say have fun with it! </p>
9372
2016-03-10T10:13:52.167
|mobile-robot|quadcopter|slam|opencv|
<p>I am designing an indoor autonomous drone. I am currently writing an object classification program in OpenCV for this purpose. My objects of interest for classification are: ceiling fans, AC units, wall and ceiling lamps, and wall corners. I am using a BoW clustering algorithm along with an SVM classifier to achieve this (I'm still in the process of developing the code, and I might try other algorithms when testing).</p> <p>The primary task of the drone is to successfully scan (what I mean by scanning is moving or hovering over the entire ceiling space) a ceiling space of a given closed region while successfully avoiding any obstacles (like ceiling fans, AC units, ceiling and wall lamps). The drone's navigation, or the scanning process over the ceiling space, should be in an organised pattern, preferably moving in tight zig-zag paths over the entire ceiling space.</p> <p>Having said that, in order to achieve this goal, I'm trying to implement the following:</p> <ol> <li><p>On take off, fly around the given closed ceiling space and <strong>use SLAM to localise and map its environment</strong>.</p></li> <li><p>While running SLAM, <strong><em>run the object classifier algorithm to classify the objects of interest and track them in real time</em></strong>.</p></li> <li><p>Once I have obtained a detailed map of the environment and classified all objects of interest in the local environment, <strong><em>integrate both sets of data to form a unified map. Meaning, on the SLAM output, label the classified objects obtained from the classifier algorithm. Now we have a full, comprehensive map of the environment with labeled objects of interest and real-time tracking of them (localization).</em></strong></p></li> <li><p>Now pick a random corner on the map and plan a navigation pattern in order to scan the entire ceiling space. </p></li> </ol> <p>So the question here is, <strong><em>will using object classification in real time yield successful results in multiple environments (the quad should be able to achieve the above-mentioned tasks in any given environment)?</em></strong> I'm using a lot of training image sets to train my classifier and BoW dictionary, but I still feel this won't be a robust method <strong><em>since in real time it will be harder to isolate an object of interest</em></strong>. Or, in order to overcome this, should I use training images taken from real situations (currently my training images only contain isolated objects of interest)?</p> <p><strong><em>Or, in my case, is using computer vision redundant? Is my goal completely achievable only using SLAM?</em></strong> If yes, how can I classify the objects of interest (I don't want my drone to fly into a running ceiling fan mistaking it for a wall corner or edge)? <strong><em>Furthermore, are there any other methods or sensors, of any type, to detect objects in motion?</em></strong> (Using the optical-flow computer vision method here is useless because it's not robust enough in real time.)</p>
Real-time object classification for an indoor autonomous quad-rotor
<p>This is going to depend on the style of motor in the servo and the style of gearbox. If the servo can't be back-driven when unpowered, then it's likely some form of a worm gear assembly that will prevent static force transmission back to the motor. This means that you won't be able to tell weight by current draw for holding position because the holding current draw will (probably) be zero.</p> <p>You won't be able to tell anything about holding current draw if the motor is a stepper motor, either, because the stepper motor simply energizes a particular set of poles to full strength; any applied load under that holding torque is held, any applied load over causes the motor to slip. </p> <p>In both of those cases, you may still be able to glean some information about the load based on the torque required to <em>accelerate</em> the load, by $\tau = I\alpha$, but then this gives you the <em>moment of inertia</em> of the load, and not its weight. This is an important distinction because the moment of inertia depends on the <em>shape</em> of the load as well as its mass. </p> <p>If you know the shape of the load before hand, or if the load is something like a mass attached to a pulley (winch), then you can use the moment of inertia to calculate mass. If you don't know the shape, though, the moment of inertia alone doesn't get you enough information to find the mass. </p> <p>Finally, even if you have a back-drivable servo that is actuated by a continuous rotation motor (not stepper), the holding current is going to be based on the <em>torque</em> applied to the servo, where $\tau = mgL\sin{\theta}$. If you get into a situation where $\sin{\theta} = 0$, then you'll wind up with a zero applied torque to hold the load, and you won't be able to get a load mass estimation. </p> <p>In order to do what you want, you have to have a pretty specific situation setup where you have a lot of information in advance. That said, even if you <em>do</em> have all of that information, there are still other aspects that need to be taken into account: friction/damping losses in the gear box, gear box inertia, motor efficiency (input power (current@voltage) versus mechanical output power (torque@speed)), motor inertia, motor frictional and damping losses, etc.</p> <p>None of this means it <em>can't</em> be done, but it means that you have to have the system modeled well enough that you can take a "snapshot" of motor inputs and speed and be able to turn that into a useful output torque. The accuracy you can achieve is entirely dependent on your ability to model everything pertinent to the system and the system's ability to stay as-modeled (oil/grease doesn't age/cake, bearings don't wear, motor feedback stays true, etc.).</p> <p>You will probably find that the effort required to adequately model the servo system is more expensive (time and labor) than purchasing a weight sensor. </p>
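<p>If you do end up in the narrow case where this works (back-drivable gearbox, continuous-rotation motor, known geometry), the arithmetic itself is trivial. A sketch, where every constant is an assumption you would have to characterise for your own servo:</p> <pre><code>import math

# All of these must come from characterising YOUR servo/gearbox - placeholders here
TORQUE_PER_AMP = 0.8      # N*m of output torque per amp of holding current (after gearbox)
ARM_LENGTH     = 0.15     # m from the joint axis to where the load hangs
G              = 9.81     # m/s^2

def estimate_mass(holding_current_a, arm_angle_rad):
    """Invert tau = m*g*L*sin(theta); breaks down as sin(theta) approaches zero."""
    s = math.sin(arm_angle_rad)
    if 1e-3 > abs(s):
        raise ValueError("arm too close to vertical - torque carries no mass information")
    tau = TORQUE_PER_AMP * holding_current_a
    return tau / (G * ARM_LENGTH * s)

print("%.0f g" % (1000 * estimate_mass(0.35, math.radians(60))))
</code></pre>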
9374
2016-03-10T15:50:05.353
|sensors|robotic-arm|servomotor|
<p>Assuming a quality industrial servo, would it possible to calculate the weight/resistance of a load? Maybe by comparing current draw in a holding position, or by the time it takes to lift/lower an object. Could it accurately measure grams, kilograms? What kind of tolerance could be achieved?</p> <p>I'm trying to eliminate the need for a dedicated weight measurement sensor.</p>
Measure weight of an object using a servo
<p>You could look at three books.</p> <ul> <li><a href="http://www.cds.caltech.edu/~murray/books/MLS/pdf/mls94-complete.pdf" rel="nofollow">A Mathematical Introduction to Robotic Manipulation - S.Sastry, X.Li and R.Murray</a> provides a theoretical foundation for the kinematics, dynamics and control aspects of robotic manipulators. The only downside to this book is that it lacks the kind of algorithmic content available in <a href="http://www.probabilistic-robotics.org/" rel="nofollow">Probabilistic Robotics - Sebastian Thrun</a>.</li> <li><a href="http://rads.stackoverflow.com/amzn/click/0134687523" rel="nofollow">Robotic Engineering An Integrated Approach</a> deals with the A to Z of designing a robotic manipulator. It gives you a theoretical overview of the fundamental concepts. In terms of design, it takes you from requirements through implementation, and covers practical considerations when choosing actuators, computing systems, etc. The downside of this book is the same as that of the previous one.</li> <li><a href="http://www.amazon.co.uk/Mechanical-Design-Robots-Eugene-Rivin/dp/0070529922" rel="nofollow">Mechanical Design of Robots</a> covers mechanical engineering topics such as stresses, strains, structures, linkages, etc. It is a book to help you design a robotic system from a functionality standpoint. The author maintains that the design of robots should not necessarily compromise on aesthetic values by always going by the worst-case scenario.</li> </ul>
9382
2016-03-12T05:05:17.630
|mechanism|manufacturing|books|
<p>I plan to build a mechanism with multiple axes, similar to a robot. To start, I need to define some specifications such as repeatable precision, speed, acceleration, and payload. The motors and structure are then selected and designed based on these parameters. After that, I need to choose methods to manufacture these components. I would like to ask the experienced experts in this forum: are there any suggested books, textbooks, or website resources from which I can learn this? </p>
Any books or web resources for robotics mechanical design?
<p>First, determine the angle $\phi$ between the robot $&lt;\!a_{x},a_{y},\theta\!&gt;$ and the target $&lt;\!p_{x},p_{y}\!&gt;$ as follows </p> <p>$$ \phi = \tan^{-1} \left( \frac{ p_{y} - a_{y} }{ p_{x} - a_{x} } \right) - \theta $$</p> <p>See the below picture,</p> <p><a href="https://i.stack.imgur.com/BAwLD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BAwLD.png" alt="enter image description here"></a></p> <p>Based on $\phi$, you can determine the rest. </p>
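<p>A small Python sketch of this computation (using atan2 so the quadrant comes out right, and wrapping all angles to (-pi, pi], which also avoids the kind of sign flips described in the question when the tangent half-angle is added and subtracted):</p> <pre><code>import math

def wrap(angle):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(angle), math.cos(angle))

def tangent_angles(ax, ay, theta, px, py, r1):
    """Angles, in the robot frame, to the two tangent lines of a circle of radius r1."""
    dx, dy = px - ax, py - ay
    d = math.hypot(dx, dy)
    phi = wrap(math.atan2(dy, dx) - theta)     # bearing to the circle centre (aux_t)
    phi_c = math.asin(min(1.0, r1 / d))        # half-angle subtended by the circle
    return wrap(phi - phi_c), wrap(phi + phi_c)

a1, a2 = tangent_angles(0.0, 0.0, math.radians(30), 2.0, 1.0, 0.5)
print(math.degrees(a1), math.degrees(a2))
</code></pre>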
9386
2016-03-13T12:28:18.497
|mobile-robot|kinematics|matlab|geometry|
<p>I want to simulate the detection of a moving object by a unicycle type robot. The robot is modelled with position (x,y) and direction theta as the three states. The obstacle is represented as a circle of radius r1 (<code>r_1</code> in my code). I want to find the angles <code>alpha_1</code> and <code>alpha_2</code>from the robot's local coordinate frame to the circle, as shown here:</p> <p><a href="https://i.stack.imgur.com/HdqrO.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HdqrO.jpg" alt="Detection of a moving object"></a></p> <p>So what I am doing is trying to find the angle from the robot to the line joining the robot and the circle's centre (this angle is called <code>aux_t</code> in my code), then find the angle between the tangent and the same line (called <code>phi_c</code>). Finally I would find the angles I want by adding and subtracting <code>phi_c</code> from <code>aux_t</code>. The diagram I am thinking of is shown:</p> <p><a href="https://i.stack.imgur.com/xfqZj.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xfqZj.jpg" alt="Diagram #2"></a></p> <p>The problem is that I am getting trouble with my code when I try to find the alpha angles: It starts calculating the angles correctly (though in negative values, not sure if this is causing my trouble) but as both the car and the circle get closer, <code>phi_c</code> becomes larger than <code>aux_t</code> and one of the alphas suddenly change its sign. For example I am getting this:</p> <p>$$\begin{array}{c c c c} \text{aux_t} &amp; \text{phi_c} &amp; \text{alpha_1} &amp; \text{alpha_2} \\ \hline \text{-0.81} &amp; \text{+0.52} &amp; \text{-1.33} &amp; \text{-0.29} \\ \text{-0.74} &amp; \text{+0.61} &amp; \text{-1.35} &amp; \text{-0.12} \\ \text{-0.69} &amp; \text{+0.67} &amp; \text{-1.37} &amp; \text{-0.02} \\ \text{-0.64} &amp; \text{+0.74} &amp; \text{-1.38} &amp; \text{+0.1} \\ \end{array}$$</p> <p>So basically, the <code>alpha_2</code> gets wrong form here. I know I am doing something wrong but I'm not sure what, I don't know how to limit the angles from 0 to pi. Is there a better way to find the alpha angles?</p>
Angle to a circle tangent line
<p>Look for "parallel jaw gripper kit." Ignoring the pneumatic designs, you will find a typical parallel jaw gripper has a set of 1:1 gears which ensure the two jaws travel at equal speeds in opposite directions, connected to parallelogram 4-bar linkages which keep the clamping jaws parallel to each other. You can find them with or without the actuator. </p>
9388
2016-03-13T15:30:33.313
|robotic-arm|
<p>If I had a single stepper motor how could I use it to create a robotic clamp that could simply grab hold of something like a plank of wood and release it?</p> <p>Are there any standard parts that I could use for this? I'm having trouble finding out what the names of the parts would be.</p>
Building a robotic clamp
<p>Certainly, as @Hauptmech says, a brushless motor is preferred for long-term reliability testing. I would select between BLAC and BLDC based on the required smoothness of the acceleration profile. From your description it sounds like that isn't a key performance indicator, so a BLDC motor might be just fine (the difference being whether a trapezoidal or a sinusoidal profile is used to drive the motor). You could implement a stepper motor, also, which is really just a high pole count brushless motor. That would be simple to control and would provide the same reliability as any other brushless motor. </p> <p>Another mechanical aspect to consider is that of bearing spacing and the ability to withstand off-axis loads (for rotary motors), and off-axis moments (for your linear drives). The bearings are by far the most likely mechanical item to go bad during your reliability test, so I would compare the specs for those items, assuming all else is equal. </p> <p>Since you stated in your question that many technologies would seem to meet your requirements, I would also focus on how easy it would be to maintain the test system, and on how easy it will be to set up. The main issues I would look at for maintainability are fairly trivial items - going with a company that has a support person in your area, ensuring you can source any required replacement items locally (even though specs say something will last for 5M cycles, not every one will), and being able to get the ancillary components (drives, power supplies, etc) if any of them go bad. For ease of setup, I would look for a company that has quality applications engineers, and for technologies that your company has used in the past. If both a stepper and a BLDC motor meet your reliability requirements, and your company has used BLDC motors but not steppers in the past, it would make sense to go with the technology you have experience with. </p> <p>There are so many good options for you. I also suggest that you rationalize a strategy for making this decision, and then make it. Your testing won't be any better if it takes you three months to decide than it would if you came up with a solution this week :-)</p>
9400
2016-03-14T20:26:59.617
|actuator|reliability|
<p>I am designing a multi modal stent testing machine which will bend, twist, and compress stents (very thin, light, and fragile cylindrical meshes for in arteries) in a tube. The machine will operate at maximum 3.6 Hz for months at a time (> 40 million cycles). As the machine will be in a lab with people, the noise should be minimal. I am choosing actuators for my design but was overwhelmed by the range of products available.</p> <p>For rotating the stents around their axis, I will need a rotary actuator with the following specs:</p> <ul> <li>torque: negligible max angle: 20 deg</li> <li>angular velocity needed: max 70 deg/s</li> <li>hollow shafts are a plus</li> </ul> <p>For compressing the stents, I will need a linear actuator with the following specs:</p> <ul> <li>force: low (&lt;1N)</li> <li>max stroke: 20mm but if possible 70mm for allowing different stent lengths </li> <li>stroke velocity needed: max 120mm/s</li> </ul> <p>Price of these motors is not the driving factor.</p> <p>I looked into stepper motors, servo motors, and piezoelectric motors. There seems to be s huge selection that fits my requirements. If all motor types have a reliability that suits my needs, which characteristics/advantages/disadvantages should I consider that determine the selection of suitable actuators? I do know what the difference is between the motor types, but there is a lot of overlap. Concrete suggestions are welcome.</p>
Choosing motor type for high reliability for many cycles
<p>As other people have said, in a steady-state condition, a gyroscope+accelerometer sensor suite can successfully estimate the roll and pitch values of the UAV, but not its yaw angle.</p> <p>However, there have been early successful fixed-wing UAV autopilots which managed to solve the yaw value through differentiation of the GPS signal. One such example is the MatrixPilot system.</p> <p>This was possible since an airplane is aligned with the GPS track, more or less, with the roll and pitch values being independent of that solution. Naturally, this is not possible with a multirotor, which can move in any direction regardless of heading.</p> <p>But in the case of sustained turns, the additional Coriolis force dupes the accelerometer into thinking that the aircraft is inclined in a different pose than the actual one. Gradually, the solution from an AHRS filter diverges and gets tricked into thinking that the plane is banked less than the actual angle.</p> <p>When coming out of the turn, the AHRS solution is badly corrupted and takes a long time to get re-adjusted by the GPS track.</p> <p>The magnetometer can solve this problem by providing a stable reference vector throughout the turn. </p>
9409
2016-03-15T14:36:05.523
|mobile-robot|magnetometer|
<p>Why would a drone need a magnetometer? What would the drone do with this information? I think it would be to tell direction, but why would it need this if it has an accelerometer and a gyroscope?</p>
Why would a drone need a magnetometer? Are an accelerometer and a gyroscope not sufficient?
<p>Let's assume that the inner loop, whose input is desired angular rate and output is motor input, can track the desired angular rate. Some internet fellow has taken care of that for us. I'm going to be considering a step change in desired position:</p> <p>Let <span class="math-container">$e = x_{des} - x$</span>, so <span class="math-container">$ \dot e = -\dot x$</span> for a step change. This will be our outer-loop controller:</p> <p><span class="math-container">$$\dot x = k_p e + k_d \dot e$$</span></p> <p>Substituting in <span class="math-container">$\dot e$</span> gives</p> <p><span class="math-container">$$ -\dot e = k_p e + k_d \dot e$$</span></p> <p>which can be rearranged as </p> <p><span class="math-container">$$\dot e = - \frac{k_p}{k_d + 1} e$$</span></p> <p>which is a first-order system whose error decays to zero without an integral term. I've ignored the inner-loop dynamics, but that should be fine; the inner-loop dynamics should be at least an order of magnitude faster than the outer-loop dynamics. </p> <p>I am not saying that integral control is not useful, just that it is not strictly necessary in the outer loop. It is easy to imagine a situation where the inner loop is not behaving correctly, resulting in a steady-state error. </p> <p>The PX4 flight stack for multicopters approaches attitude control with PD control for angular position and PID control for body rate control. </p> <p><a href="https://pixhawk.org/users/multirotor_pid_tuning" rel="nofollow noreferrer">https://pixhawk.org/users/multirotor_pid_tuning</a> (link broken)</p>
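<p>As a quick numerical sanity check on that derivation (a toy simulation with made-up gains, under the same assumption that the inner loop tracks the commanded rate perfectly), the position error decays exponentially toward zero with no integral term in the outer loop:</p> <pre><code># Outer loop only: assume the inner rate loop tracks the commanded rate perfectly,
# so xdot equals the outer-loop output kp*e + kd*edot.
kp, kd, dt = 4.0, 0.5, 0.001
x, x_des = 0.0, 1.0          # step change in desired position

e_prev = x_des - x
for step in range(3000):     # 3 seconds of simulated time
    e = x_des - x
    edot = (e - e_prev) / dt
    e_prev = e
    xdot = kp * e + kd * edot
    x += xdot * dt

# Essentially zero: the error decays like exp(-kp/(1+kd)*t), no integral term needed.
print("error after 3 s: %.6f" % (x_des - x))
</code></pre>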
9420
2016-03-16T18:50:21.413
|quadcopter|pid|
<p>I'm implementing a set of loops to control pitch-and-roll angular positions.</p> <p>In an inner-loop, motor speeds are adjusted to achieve desired angular rates of rotation (the "inner-loop setpoints").</p> <p>An outer-loop decides these desired angular rates (the "inner-loop setpoints") based on the aircraft's angular positions.</p> <hr> <h3>Outer-loop</h3> <ul> <li>Frequency = ~400Hz</li> <li>Outer PV = input angular position (in degrees)</li> <li>Outer SP = desired angular position - input angular position (in degrees)</li> </ul> <hr> <h3>Inner-loop</h3> <ul> <li>Frequency = ~760Hz</li> <li>Inner PV = input angular rotation (in degrees-per-second)</li> <li>Inner SP = constant1 * Outer MV (in degrees-per-second)</li> <li>PWM = Inner MV / constant2 (as percentile)</li> </ul> <hr> <p>I understand what I-gain does and why this is important, but I'm not able to see any practical reason for also having I-gain specified in the outer-loop. Surely the inner-loop would compensate for any accumulated error, leaving no error to compensate for in the outer-loop, or is my thinking flawed?</p> <p>Any example gain values to elaborate would be greatly appreciated.</p>
Why do I need I-gain in my outer-loop?
<p>The general name for the arm in your picture is an <em>Articulated Arm</em> or <a href="https://en.wikipedia.org/wiki/Articulated_robot" rel="nofollow noreferrer">Articulated Robot</a> but there are many variations on the theme.</p> <p>I have mostly worked with <a href="https://en.wikipedia.org/wiki/SCARA" rel="nofollow noreferrer">SCARA arms</a> as fewer motors need to hold the weight of the load:</p> <p><a href="https://i.stack.imgur.com/0gOBM.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0gOBM.jpg" alt="enter image description here"></a></p> <p>See <a href="https://robotics.stackexchange.com/a/286/37">my answer</a> to <a href="https://robotics.stackexchange.com/q/284/37">Which type of actuator will be suitable for a very strong robot arm</a></p>
9445
2016-03-19T13:27:44.517
|robotic-arm|raspberry-pi|
<p>I've been looking for large robotic arms (with two fingers) and the arm so they are able to pick up and drop things in a space around the arm (and even spin around the 'wrist').</p> <p>I'm not sure what the terminology is for such an arm. I've seen this, <a href="http://www.robotshop.com/en/owi-535-robotic-arm-edge.html" rel="nofollow">OWI-535 Robotic Arm Edge</a>, and it looks close. Is there something larger that can be hooked up to a Raspberry Pi instead of the remote controller?</p> <p>Is there a particular term for this in a generic context? Or is there a way to build such an arm using off the shelf parts?</p>
Name of large robotic arms (two finger) with wrist, arm, hands and spinning shoulder axis
<p>Ravi - This is a great question. So many great ideas stop simply because we don't know how to kick things off. I'll try to list a few options below for each of your questions so I don't sound like I'm selling one method or product.</p> <p>Your question is at a very high level, so I'll try to point you to some great resources; if I went too deep, I'd have a novel for you - and there is no reason to reinvent the wheel. Your (compound) question also allows for a lot of subjective opinion - I'll try to keep my answer as objective as possible. As a reference for when I inject my opinion, my 3-person university senior design team built an autonomous firefighting robot for RoboGames (a worldwide competition held annually near San Francisco, CA) and took 2nd place - not bragging, as we didn't win the gold, but we were in the same boat as you when we started: "where the heck do we even start?!?"</p> <p><strong>Where to Start</strong></p> <p>Planning is integral - the more time you spend here, the less money and frustration you waste later on, so again I reiterate that you have a great question here.<br /> There are a LOT of getting-started books/resources out there - it is so helpful it can be overwhelming and may actually seem like it adds to the confusion.</p> <ul> <li><a href="https://www.edx.org" rel="nofollow noreferrer">EDX.org</a> - this is an amazing site that offers FREE full college courses on almost every course of knowledge from schools like MIT, Harvard, UC Berkeley, etc. - and it's constantly adding new classes. Search 'Robot' or 'Embedded' for a dozen classes. Here are a couple EDX classes (<a href="https://www.edx.org/course/electronic-interfaces-bridging-physical-uc-berkeleyx-ee40lx-0" rel="nofollow noreferrer">Class 1</a> / <a href="https://www.edx.org/course/embedded-systems-shape-world-utaustinx-ut-6-03x" rel="nofollow noreferrer">Class 2</a>) that are using a $50 dev board and showing you how to program the microcontroller and interface with sensors. (If you buy the board, you are already on your way just watching the videos and following along with the exercises).</li> <li><a href="http://www.robotpark.com/academy/" rel="nofollow noreferrer">RobotPark</a> - A great site - just follow the links for 'learn' and 'design.'</li> <li><a href="http://freebookspot.es/" rel="nofollow noreferrer">FreeBookSpot</a> - This site has a WEALTH of free ebook links on all topics. Search 'microcontrollers' or 'robot' to be greeted with over a hundred good resources. Look for beginner/getting-started titles. (The main options will be PIC or Arduino - more on that to come). Almost all of these books offer step-by-step getting-started demos and code samples. (This site sometimes updates their domain, so if this link is broken, search for 'freebookspot' on your favorite search engine).</li> <li>Since this is a college project - I'd hit up a professor that teaches micro-controllers/robotics.</li> </ul> <p><strong>Language</strong></p> <p>Some feel this is one of the more important decisions, others feel it is one of the least. If you pick up a class or book from the above section, it will most likely already define what dev board, language and programming IDE you should use for its code samples.</p> <p>As far as languages to use, here are some available options:</p> <ul> <li>C - C is a great language that lets you interface with one of the widest ranges of micro-controllers (assembly language is probably the widest, but has a larger learning curve and can add to development times).
It is also more respected in the engineering community than the other options I will list below. Since you are a student, <a href="http://www.microchip.com/development-tools/academic-corner" rel="nofollow noreferrer">MicroChip</a> offers you a free MPLab dev environment with C compiler for PIC programming.</li> <li>Arduino - Seen as a quicker/easier option to use when getting started, with the trade-off that you are more limited in hardware (there are many that would argue this point - or that it doesn't matter). Arduino boards (and shields) are relatively cheap and easy to pick up from Amazon/SparkFun/etc. Here is a great intro on <a href="https://learn.sparkfun.com/tutorials/what-is-an-arduino" rel="nofollow noreferrer">SparkFun's</a> site for Arduino.</li> <li>C#.NET - Many find C# extremely easy to pick up and use, and it is a great, fully functional alternative option; it is also another free option, as it uses Microsoft's free libraries/IDE. Check out Netduino's site for more information.</li> <li>Raspberry PI - a dev board with many online resources that easily allows Linux to be loaded on it (Not a language, but I list it here as an easy option for using an OS on a dev board).</li> </ul> <p><strong>RTOS</strong></p> <p>RTOS (Real Time OS) enables you to have a lot of power over processing data as it comes in without buffering. I found that in academia, real-time OS got a lot of interest from professors, but there are very few applications where I needed it for a rover. A good system of interrupts is usually more than adequate for prioritizing processor time for your different systems. I hope I'm not upsetting your professor/adviser if this is on their agenda - it is definitely a fun intellectual exercise (and has its uses); however, it can add unnecessary time and limit your hardware choices by forcing the use of an OS (real time or not).</p> <p><strong>Buying Parts</strong></p> <p>When you get started, sites like SparkFun and RobotShop offer great options and kits to make life easier. Amazon may even sell some of the same dev boards cheaper. If you need to get down to fine granularity and want the more advanced options, Mouser and DigiKey are a great option.</p> <p><strong>Wrapping it up with a subjective opinion</strong></p> <p>Take it or leave it - but I'll offer you the advice I'd give myself if I was starting over again. See if you can get in a micro-controller / embedded system class at your university if offered as early as possible - this gives you access to a professor or TA that is yours for all your questions (use the heck out of them!). Visit the EDX site and start watching the free lecture / lab videos to see if you like the hardware/software in their demos/examples - if not, look at the free book site and find a platform that interests you and run with it. Expect to run into overwhelming frustration when things that look correct don't go right. And keep up on this robotics stack exchange forum (among others) to keep getting fresh ideas and finding better ways of doing things.</p>
9447
2016-03-19T14:53:22.617
|programming-languages|c|
<p>I am part of my College team which is planning to enter a Mars Rover Challenge. In the point of view of a programmer, where should I start? I know C is the main language NASA used for their Rover and I have a basic understanding of it. Plus, how much should I look into the RTOS part for making a rover?</p> <p>Any books/links to this topic would be greatly appreciated. </p>
Programming A Rover
<p>There is no correct name for sheet-metal brackets designed for connecting hobby servos. Bent sheet metal with holes has been used forever for hobby projects (and industrial ones). It's cheap, and it's easy to have the tools to make custom ones in your garage.</p> <p>The first use of brackets like the ones in your picture with servos that I saw was <a href="https://en.wikipedia.org/wiki/Robo_One" rel="nofollow noreferrer">Robo One</a>, but I don't follow the hobby/toy market closely. </p> <p>For hobby servo robotics there was <a href="https://web.archive.org/web/19961109203419/http://lynxmotion.com/images.html" rel="nofollow noreferrer">Lynxmotion in 1996</a>, but those techniques were in common use for hobby robotics at the time in custom projects.</p> <p>For punched and bent sheet metal, <a href="http://www.girdersandgears.com/index.html" rel="nofollow noreferrer">erector sets</a> come to mind; they started being sold in 1913 and apparently had an <a href="http://www.girdersandgears.com/erector-motor.html" rel="nofollow noreferrer">electric motor</a> you could build. <a href="http://www.girdersandgears.com/meccano.html" rel="nofollow noreferrer">Meccano</a> started in 1901.</p> <p>Regarding the U shape, mechanical engineers call this a clevis when used as a structural or connecting element.</p>
9454
2016-03-20T08:27:44.097
|mechanism|
<p>I refer to these types of brackets as <em>servo brackets</em>, or <em>robot brackets</em>:</p> <p><a href="https://i.stack.imgur.com/mbjUy.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mbjUy.jpg" alt="Short U and Multi Function servo brackets"></a></p> <p>I know that the two specific brackets, shown above, are known as a short-U (some vendors refer to them as "C", en lieu of "U") and a multi-function bracket, respectively, and that there are other types available, namely:</p> <ul> <li>Long U bracket</li> <li>Oblique U bracket</li> <li>i bracket</li> <li>L bracket</li> <li>etc.</li> </ul> <p>However, I am sure that there is a correct name for these <em>types</em> of bracket (or this range of bracket, if you will), rather than just <em>servo brackets</em> - either a generic name or a brand name. I have seen the term once before, on a random web page, but the name escapes me. They are either named after their creator, or, if I recall correctly, the institution where they were developed.</p> <p>Does anyone have a definitive answer, preferably with a citation or web reference, or a little historical background?</p>
What is the correct name for "servo brackets"?
<p>The barometer carried on the Pixhawk has an altitude resolution of 10 cm. If that isn't enough, you could write a Kalman filter that uses the accelerometer data in the prediction step and the ultrasonic sensor and/or the barometer in the correction step. </p> <p>But I don't see this solving your problem. An accurate measurement of altitude at 20 Hz should be plenty if all you're trying to do is hold altitude. </p> <p>What is the time constant / natural frequency and damping on your controller? </p> <p>I guess I hadn't finished reading your question this morning (it was before my coffee). The acceleration from the IMU is the measurement of acceleration plus gravity. To get the inertial acceleration of the IMU, subtract the gravity vector from the measurement. You'll never be able to control on integrated acceleration measurements. The measurements are corrupted by noise and you have no way to correct for this. </p> <p>--- answer to control portion of the question</p> <p>Let's assume that all you are trying to do is hold an altitude and are not concerned about holding a position for now (though this approach will work for that as well). And assuming that you can command whatever thrust you want (within reason), then this becomes an easy problem.</p> <p>A first pass at the system dynamics looks like</p> <p>$\ddot z = \frac{u}{m} - g$</p> <p>where positive $z$ is up. Let's add a hover component to our throttle that deals with gravity, $u_{hover} = mg$. So</p> <p>$ u = u_{fb} + u_{hover}$</p> <p>Our new dynamics look like</p> <p>$\ddot z = \frac{u_{fb} + u_{hover}}{m} - g = \frac{u_{fb}}{m}$</p> <p>Cool! Now we design a control law so we can track a desired altitude.</p> <p>Our control system is going to be a virtual spring and damper between our quad and the desired altitude (this is a PD controller).</p> <p>$ \ddot z = k_p(z_{des} - z) + k_d(\dot z_{des} - \dot z)$</p> <p>The system should now behave like a second-order system. $k_p$ and $k_d$ can be chosen to achieve the damping ratio and natural frequency you're looking for.</p> <p>At this point I would like to reiterate that integrating accelerometer data is not an okay way to generate state estimates. If you really want to hack something together fast, feed the sonar measurements through a lowpass filter with an appropriate cutoff frequency. Your vehicle isn't going to be oscillating at 20 Hz, so controlling off just the sonar data will be fine. </p>
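<p>For illustration, here is a toy simulation of that spring-damper altitude law on the idealised dynamics above (the mass, gains and noiseless "sensor" are all made-up placeholders; a real vehicle adds sensor noise, lag and thrust limits):</p> <pre><code># ddot z = u_fb / m  with  u_fb = m * ( kp*(z_des - z) + kd*(zdot_des - zdot) )
m, g = 1.2, 9.81              # kg, m/s^2
kp, kd = 4.0, 3.5             # kd chosen near 2*sqrt(kp) for good damping
z, zdot, z_des = 0.0, 0.0, 1.0
dt = 0.005

for k in range(int(5.0 / dt)):             # 5 seconds of simulated time
    u_hover = m * g
    u_fb = m * (kp * (z_des - z) + kd * (0.0 - zdot))
    zddot = (u_fb + u_hover) / m - g        # gravity cancels the hover term
    zdot += zddot * dt
    z += zdot * dt

print("altitude after 5 s: %.3f m" % z)     # settles near 1.0 m with no integral term
</code></pre>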
9456
2016-03-20T10:28:32.567
|quadcopter|control|pid|raspberry-pi|sensor-fusion|
<p>I am currently implementing an autonomous quadcopter which I recently got flying and which was stable, but is unable to correct itself in the presence of significant external disturbances. I assume this is because of insufficiently tuned PID gains which have to be further tweaked inflight.</p> <h3>Current progress:</h3> <ul> <li>I ruled out a barometer since the scope of my research is only indoor flight and the barometer has a deviation of +-5 meters according to my colleague.</li> <li>I am currently using an ultrasonic sensor (HC-SR04) for the altitude estimation which has a resolution of 0.3cm. However I found that the ultrasonic sensor's refresh rate of 20Hz is too slow to get a fast enough response for altitude correction.</li> <li>I tried to use the accelerations on the Z axis from the accelerometer to get height data by integrating the acceleration to get velocity to be used for the rate PID in a cascaded pid controller scheme. The current implementation for the altitude PID controller is a single loop pid controller using a P controller with the position input from the ultrasonic sensor.</li> <li>I had taken into account the negative acceleration measurements due to gravity but no matter how much I compute the offset, there is still the existence of a negative acceleration (eg. -0.0034). I computed the gravitational offset by setting the quadcopter to be still on a flat surface then collecting 20,000 samples from the accelerometer z axis to be averaged to get the "offset" which is stored as a constant variable. This variable is then subtracted from the accelerometer z-axis output to remove the offset and get it to "zero" if it is not accelerating. As said in the question, there is still the existence of a negative acceleration (eg. -0.0034). My quad then proceeds to just constantly climb in altitude. With only the ultrasonic sensor P controller, my quad oscillates by 50 cm.</li> </ul> <blockquote> <p>How can this consistent negative acceleration reading be effectively dealt with?</p> </blockquote> <p><strong>Possible Solution</strong>: I am planning to do a cascading PID contoller for the altitude hold with the innerloop (PID controller) using the accelerometer and the outer loop (P controller) using the sonar sensor. My adviser said that even a single loop P controller is enough to make the quadcopter hold its altitude even with a slow sensor. Is this enough? I noticed that with only the P gain, the quadcopter would overshoot its altitude.</p> <p><a href="https://i.stack.imgur.com/rdARR.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/rdARR.jpg" alt="enter image description here"></a></p> <ul> <li><p>Leaky Integrator: I found this article explaining how he dealt with the negative accelerations using a leaky integrator however I have a bit of trouble understanding why would it work since I think the negative error would just turn to a positive error not solving the problem. I'm not quite sure. 
<a href="http://diydrones.com/forum/topics/multi-rotors-the-altitude-yoyo-effect-and-how-to-deal-with-it" rel="noreferrer">http://diydrones.com/forum/topics/multi-rotors-the-altitude-yoyo-effect-and-how-to-deal-with-it</a></p></li> <li><p>Single loop PD controller with the ultrasonic sensor only: Is this feasible using feedback from a slow sensor?</p></li> </ul> <p><strong>Sources:</strong></p> <ul> <li><p>LSM303DLHC Datasheet: <a href="http://www.st.com/web/en/resource/technical/document/datasheet/DM00027543.pdf" rel="noreferrer">http://www.st.com/web/en/resource/technical/document/datasheet/DM00027543.pdf</a></p></li> <li><p>Leaky integrator: <a href="http://diydrones.com/forum/topics/multi-rotors-the-altitude-yoyo-effect-and-how-to-deal-with-it" rel="noreferrer">http://diydrones.com/forum/topics/multi-rotors-the-altitude-yoyo-effect-and-how-to-deal-with-it</a></p></li> <li><p>ArduPilot PID Loop: <a href="http://copter.ardupilot.com/wp-content/uploads/sites/2/2012/12/Alt-Hold-PID-version-3.0.1.jpg" rel="noreferrer">http://copter.ardupilot.com/wp-content/uploads/sites/2/2012/12/Alt-Hold-PID-version-3.0.1.jpg</a></p></li> </ul>
How can we use the accelerometer for altitude estimation?
<p>Technically, you can do it either way, but consider the following scenario: You are flying with a heavy load, and there is a wind gust. Each rotor is operating at almost 100% capacity due to the heavy load. Now the wind gust has gotten you un-level. Say you need 20% of full speed to make the correction. What happens?</p> <p>Well, if you are trying to control level only by increasing motor 2, then it's already essentially at full speed, so it cannot go any faster. The craft never levels because there is no "overhead" available - the actuator (fan) is operating at 100% so there is nothing left over for it to do when you ask for more.</p> <p>If you try to control by splitting the 20% between motor 1 and motor 2, then motor 2 <em>still</em> can't do anything, but motor 1 <em>can</em> lower its speed by 10%. The vehicle will now level. Not as fast as if you weren't operating at near-capacity, again because the lack of overhead hinders performance, but it still responds. </p> <p>This is typically how the quadrotor would be programmed. Each motor contributes to 1/4 of the thrust requested for altitude control, and then the <em><a href="https://en.wikipedia.org/wiki/Orientation_(geometry)">attitude</a></em> controller adds or removes from each motor <em>in pairs</em>. </p> <p>So say you had motors:</p> <p>AB</p> <p>CD</p> <p>If you wanted roll, you could speed up AC and slow down BD - this would cause roll with little change in altitude because A+B+C+D is still the same. Similarly, you could speed up AB and slow down CD to pitch. Again, A+B+C+D is the same. Finally, you can yaw by speeding up AD and slowing down BC.</p> <p>When you split the power request from all positive on one motor (side) to half positive on one and half negative on another, you gain a lot of advantages when it comes to stability and control. I would point out that it is possible to roll, pitch, and yaw, <em>simultaneously</em>, and maintain altitude by doing this. </p>
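<p>In code, that pairing scheme is just a few additions and subtractions per motor. A hedged sketch for the AB/CD layout above (sign conventions depend on your particular frame layout and prop directions):</p> <pre><code>def mix(throttle, roll, pitch, yaw):
    """Map controller outputs to the four motor commands for the AB / CD layout.

    Throttle is shared equally; roll, pitch and yaw are added/removed in pairs so
    the total thrust A+B+C+D stays roughly constant. Signs assume A/D spin one
    way and B/C the other - flip them to match your own props.
    """
    a = throttle + roll + pitch + yaw
    b = throttle - roll + pitch - yaw
    c = throttle + roll - pitch - yaw
    d = throttle - roll - pitch + yaw
    # Clamp to the usable range so one saturated motor does not wreck the others
    return [min(max(m, 0.0), 1.0) for m in (a, b, c, d)]

print(mix(0.6, 0.05, 0.0, 0.0))   # roll request: A and C speed up, B and D slow down
</code></pre>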
9463
2016-03-21T06:01:31.090
|quadcopter|pid|ardupilot|logic-control|
<p>I am building a quadcopter for my school project. I am trying to program my own flight controller using PID algorithm.</p> <p>I'll try to make my question simple using as an example below only two motors</p> <pre><code> 1-----------2 </code></pre> <p>Let's say I am trying to stabilize my two motor system using gyro from the diagram below to one above</p> <pre><code> 1-- ----- ----2 </code></pre> <p>Using the formula Output = (gyro - 0) * Pgain</p> <p>Do I need to increase the output only on the motor 2 or would I have to do both: increase the output on the 2nd motor while decreasing the output on the first motor? Thank you</p>
Quadrocopter PID
<blockquote> <p>what kind of ICs they use for the actual RC signals</p> </blockquote> <h2>Traditional FM RC</h2> <p>The over-the-air signal to an RC aircraft is often an FM-modulated binary PPM signal in a narrow slice of the 72 MHz band.</p> <p>That band is reserved only for RC aircraft.</p> <p>The Philips NE5044 is a typical RC encoder IC.</p> <p>Often the receiver includes either a 4017 decade counter or a 4015 shift register -- that chip decodes the RC PPM signal at its CLK input into several independent "RC PWM" outputs.</p> <p><a href="http://www.opencircuits.com/servo_control" rel="nofollow">Open Circuits: servo control</a> has more details.</p> <h2>Spread spectrum RC</h2> <p>The over-the-air signal to new RC aircraft is often a spread-spectrum modulated PCM signal spread out over a very wide 2.4 GHz ISM band. Modern spread-spectrum and error-correcting techniques allow more-or-less error-free communication, even though that band is shared with microwave ovens, WiFi, ZigBee, Bluetooth, and many other transmitters.</p>
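<p>To make the baseband part concrete, here is a rough software sketch of splitting a composite PPM stream into channel pulse widths on an Arduino-class board. It is only an illustration: the pin number, channel count and the 3000 us sync-gap threshold are assumptions on my part, not NE5044 specifications.</p> <pre><code>// Hedged sketch: decode a composite PPM stream on pin 2 into per-channel
// pulse widths in microseconds. Pin, channel count and the 3000 us sync
// threshold are assumptions; typical servo channels are ~1000-2000 us.
const int PPM_PIN = 2;
const int NUM_CHANNELS = 6;
volatile unsigned long lastEdge = 0;
volatile int channel = 0;
volatile unsigned int pulses[NUM_CHANNELS];

void onRisingEdge() {
  unsigned long now = micros();
  unsigned long width = now - lastEdge;
  lastEdge = now;
  if (width &gt; 3000) {           // long gap -&gt; sync pulse, restart the frame
    channel = 0;
  } else if (channel &lt; NUM_CHANNELS) {
    pulses[channel++] = width;  // store this channel's pulse width
  }
}

void setup() {
  pinMode(PPM_PIN, INPUT);
  attachInterrupt(digitalPinToInterrupt(PPM_PIN), onRisingEdge, RISING);
}

void loop() {}                   // pulses[] holds the latest channel values
</code></pre>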
9476
2016-03-23T17:49:17.070
|microcontroller|radio-control|
<p>I am thinking about working on alternative drone controllers. I am looking into making it more easy to use and a natural feel (debating between sensor bracelets, rings, etc.).</p> <p>The main issue I have is, I've been looking over all the standard RC transmitters that are used to control RC aircraft, but I am not sure what technology is inside of them, what kind of ICs they use for the actual RC signals. </p> <p>I want more information on how to make an RC transmitter myself, mainly the protocol that's used to send messages, and what circuitry is needed to actually transmit that, what kind of components do I need and how should I implement the software?</p> <p>I was aiming at doing this as a side project (hobby), but now I have the chance to use it as a uni project as well, so I'd like to give it a shot now, but I lack the proper information before getting started. </p> <p>I'd rather not take apart my current RC controller and use an oscilloscope to decode the protocol. </p> <p>Any answers (short or long) and reading material is appreciated.</p> <p>Other questions, can the protocol be implemented in software on an embedded system (Raspberry Pi, Arduino, Intel Galileo, etc.)? I am asking this because the frequency for these are 2.4 GHz.</p> <p>This is part of a bigger project, drone related currently, and I could use alternative methods of sending the information, through other wireless means, as the first prototype, suggestions are welcomed.</p> <p>Need: aircraft RC transmitter protocol info, RC transmitter components &amp; schematics, anything else that might help with the transmission side</p>
Remaking an RC transmitter for controlling aircraft
<p>Instead of screws, how about simple magnetic force? I extracted these parts from old hard-disk drives; the really strong ones are the small parts on the top. You wouldn't believe that such a strong magnetic force could exist so close to a disk drive.</p> <p><a href="https://i.stack.imgur.com/wXgwM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wXgwM.png" alt="enter image description here"></a></p>
9482
2016-03-24T04:32:46.890
|mechanism|
<p>I'm trying to attach a small piece of sheet steel (30mm x 50mm x 1mm) to a small piece of nylon (50mm x 50mm x 4mm). Does anyone know how they could be fastened using small screws ( <p>Any thought appreciated.</p>
Fastening sheet steel on nylon
<p>I'm going to ignore your section on aircraft and attempt to answer the (vague) question, </p> <blockquote> <p>Do inherently unstable systems desire to be stable for all cases when a closed loop control is implemented on them?</p> </blockquote> <p>First, I'll say that system response, stability, etc., are all based on mathematics, and math does not have feelings. That is, a system doesn't "desire" anything - it just <em>is</em>.</p> <p>Second, there is a difference between <a href="https://en.wikipedia.org/wiki/Controllability" rel="nofollow">controllability</a> and <em>stabilizability</em>. Just because you can't force a characteristic to be what you want doesn't necessarily mean that it's unstable.</p> <p>Similarly, just because you have feedback, that doesn't mean that your controller is stable. </p> <p>Consider a <a href="https://en.wikipedia.org/wiki/Bang%E2%80%93bang_control" rel="nofollow">bang-bang controller</a> for steering in a car. You get feedback of the car's position in the lane, and you either steer full left or full right depending on which side of center you are. </p> <p>A bang-bang controller <em>is</em> a closed-loop controller. The <em>stability</em> of the car's position depends on sampling and actuation frequency with respect to the car's speed. After a particular speed the position of the car no longer converges on the center of the road.</p> <p>Another example would be a system where I want to control speed and altitude of a car, and I have speed and altitude measurements, but my only method of actuation is cruise control. Cruise control can force speed to go to whatever value I want, but it can't influence altitude. Therefore, I rely on the system describing altitude (briefly, $F=ma$) to be <em>stable</em> and, as long as the car has sufficient mass and the speed is <a href="https://en.wikipedia.org/wiki/Newton%27s_cannonball" rel="nofollow">relatively slow</a>, that is true - the car's altitude won't randomly fluctuate. That is, speed is controllable and altitude is stable, so as a whole, the speed-altitude system is said to be <em>stabilizable</em>.</p> <p>So, to summarize, a <em>controllable</em> system is one where you are able to force every state to the value you desire. A <em>stabilizable</em> system is one where you can force all of the <em>unstable</em> states to do what you want, while the rest of the states are inherently stable. </p> <p>Output feedback doesn't guarantee either controllability or stabilizability. They instead depend on how your actuators are able to uniquely influence the states you care about.</p>
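<p>As a concrete (and purely illustrative) version of the bang-bang steering example above, with a made-up sign convention for the lateral offset:</p> <pre><code>// Hedged sketch: bang-bang lane keeping. lateralOffset &gt; 0 is taken to mean
// "right of centre" (an assumed convention); the command is always full lock
// one way or the other, regardless of how small the error is.
float bangBangSteer(float lateralOffset) {
    const float FULL_LOCK = 1.0f;                 // normalized steering command
    if (lateralOffset &gt; 0.0f) return -FULL_LOCK;  // right of centre: steer left
    if (lateralOffset &lt; 0.0f) return  FULL_LOCK;  // left of centre: steer right
    return 0.0f;
}
</code></pre> <p>Whether the lane position converges with such a controller depends on how fast you sample and actuate relative to the vehicle speed, which is exactly the point made above.</p>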
9488
2016-03-25T07:07:58.790
|control|pid|microcontroller|uav|stability|
<p>As we all know fixed wing vehicles are designed to have inherent instability which is what enables all fixed wing vehicles to fly.</p> <p>However does this apply to all cases?</p> <blockquote> <p>Do inherently unstable systems desire to be stable for all cases when a closed loop control is implemented on them?</p> </blockquote>
Fixed Wing UAV: Do inherently unstable systems desire to be stable for all cases when a closed loop control is implemented on them?
<p>The Create 2 is a remanufactured 600-series Roomba. In my case it was a Roomba 650, as evidenced by pulling back the little "2" sticker over the location where 650 was marked.</p>
9495
2016-03-25T18:10:26.167
|irobot-create|
<p>I have read that certain iRobot products support or can be hacked to support something close to the open interace. There is even a book about hacking Roomba. What Robots have this capability?</p>
What iRobot products support the open interface besides the iRobot Create?
<p>For a DC motor, you can use a relay instead of a motor controller if you aren't too concerned about precision. Be careful to account for the inductive surge in current, and ensure your relay can handle the peak current. You'll also see a reverse current if you stop the motor abruptly, so flyback diodes are a good idea as @hauptmech indicates.</p>
9512
2016-03-28T07:15:19.500
|motor|driver|
<p>My project requires a DC motor for mobility, very similar to an RC car. If precision isn't critical, can I use a solid state relay instead of a motor driver? If the vehicle moves an extra inch on the ground, I don't really care.</p>
DC Motor Control
<p>If you have an unused smartphone with a camera, your best bet may be to simply use that for the robot. </p> <p>Many results came up in my brief google search for smartphone robots and smartphone-based robotics software.</p>
9518
2016-03-28T17:07:37.670
|cameras|
<p>There are tons of cameras in devices around us these days. There are used photo cameras, smartphones, tablets at my home gathering dust.</p> <p>I wonder, what the easiest way could be to get a camera module from some device, connect it to my robot project with a soldering iron and make it work from the software point of view. I am planning to use something like Arduino, an STM32 platform, or probably Intel Edison.</p> <p>May be some camera modules are easier to solder and program for a custom project? Or shouldn't I look this way and better find a camera module that is specially designed for custom projects?</p>
Choose and connect a camera to a robot
<p>Sign is a function that returns +1 whenever the operand is anything positive, -1 if it's anything negative, and 0 if it's zero. I'm not sure if it's mathematically correct for your equation as I'm not familiar with it, but it's not an uncommon function, especially in the realm of robotics. </p>
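<p>In code it is just this (illustrative):</p> <pre><code>// Sign (signum) function: +1 for positive input, -1 for negative, 0 for zero.
double sign(double x) {
    if (x &gt; 0.0) return  1.0;
    if (x &lt; 0.0) return -1.0;
    return 0.0;
}
</code></pre>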
9536
2016-03-30T22:30:08.247
|mobile-robot|control|robotic-arm|wheeled-robot|motion-planning|
<p>Following is the equation of Writhe matrix from the article <a href="https://www.google.com/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=2&amp;cad=rja&amp;uact=8&amp;ved=0ahUKEwiShMzIuOnLAhVIQyYKHS5RD-oQFggpMAE&amp;url=https%3A%2F%2Fipvs.informatik.uni-stuttgart.de%2Fmlr%2Fpapers%2F13-ivan-IJRR.pdf&amp;usg=AFQjCNHftBdmoPCF6xDv59fxlEfwGkLGvg&amp;sig2=m9a-rNtK1v6NAJxqr3rTqA&amp;bvm=bv.117868183,bs.1,d.eWE" rel="nofollow noreferrer">Topology based Representation</a>(page no. 6). <a href="https://i.stack.imgur.com/k8LQw.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k8LQw.jpg" alt="enter image description here"></a></p> <p>What is the meaning of 'sign' in the second part of this equation? I am not sure if this is some typo in that article as the other article of <a href="http://www.roboticsproceedings.org/rss08/p59.pdf" rel="nofollow noreferrer">Hierarchical Motion Planning</a>(page no. 3), compleletely neglects the term 'sign[...]' <a href="https://i.stack.imgur.com/L2LJJ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L2LJJ.jpg" alt="enter image description here"></a></p>
Meaning of 'sign' in Writhe Matrix
<p>They are modeling the probability as a normal distribution with the given mean and variance. </p>
9538
2016-03-31T06:18:40.863
|mobile-robot|control|robotic-arm|wheeled-robot|motion-planning|
<p>In the article of <a href="https://ipvs.informatik.uni-stuttgart.de/mlr/papers/13-ivan-IJRR.pdf" rel="nofollow noreferrer">Topological Based Representation</a>(Page no. 12), the equation of the Linear Gaussian system dynamics is given as </p> <p><a href="https://i.stack.imgur.com/8uSyq.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8uSyq.jpg" alt="enter image description here"></a></p> <p>In above equation what is the meaning of 'curly N'? </p>
Meaning of symbol, 'curly N' in the equation of Linear Gaussian system dynamics
<p><em>Reverse engineering</em> is the term you are looking for. Search <strong>reverse engineering ar drone</strong> or <strong>reverse engineering phantom</strong> to get started.</p> <p>Reverse engineering embedded systems can be really educational and fun, if time consuming. Most embedded processors have code security subsystems, but manufacturers often don't bother to turn them on, and if they are turned on, a little research will give you lots of info on bypassing the typically inadequate security.</p>
9547
2016-04-02T00:50:34.623
|quadcopter|control|
<p>I'm wondering if there is a way to figure out the actual controllers used in the commercial drones such as <a href="http://ardrone2.parrot.com/" rel="nofollow">AR drone</a> and <a href="http://www.dji.com/product/phantom-4" rel="nofollow">Phantom</a>. According to AR drone SDK, users are not allowed to access the actual hardware of the platform yet they are only capable of sending and receiving commands from/to the drone. </p> <hr> <p><strong>Edit:</strong> I'm hoping to to check the actual controller utilized in the software. When I fly AR drone, it seems the platform can't stabilize itself when I perform aggressive maneuvers, therefore, I can guess that they use linearized model which is applicable for using simple controllers such as PD or PID</p>
Reverse engineering commercial drone control algorithms
<p>The concept you might be missing is <a href="https://en.wikipedia.org/wiki/Homeomorphism" rel="nofollow">homeomorphism</a>. The <em>donut=coffee cup</em> visual example at that link might help. Of all the ways you can change the environment (move the obstacles) there is a subset of changes you can make that change the location of the obstacles but keeps the topology the same. Because the topology is the same, the topology based space trajectory will remain valid, even though the obstacles have moved. (Again, depending on the topology, only a subset of all possible changes will keep the same topology.)</p>
9552
2016-04-03T00:43:21.523
|mobile-robot|control|robotic-arm|motion-planning|
<p>The article of <a href="https://ipvs.informatik.uni-stuttgart.de/mlr/papers/13-ivan-IJRR.pdf" rel="nofollow">Topology-based representation</a> (page no. 13, line 5) says that, topology-based representation is invariant to certain changes in the environment. That means the trajectory generated in topology-based space will remain valid even if there are certain changes in the environment. But how is this possible? Is there any simple example to understand this concept?</p>
How is Topology-based representation invariant to certain change in environment
<p>The glaring issue I see at the moment is that you are forcing polarity on the I and D terms. In general, you are using a lot of sign checks, sign <em>assignments</em>, and conditional programming.</p> <p>None of that belongs in a PID controller. The entire controller should look like:</p> <pre><code>pError = Input - Output; iError = iError + pError*dt; dError = (pError - previousError)/dt; previousError = pError; ControlSignal = kp*pError + ki*iError + kd*dError; </code></pre> <p>That's it. No need for checks, assignments, etc. See my <a href="https://robotics.stackexchange.com/a/9466/9720">related answer</a> on splitting the control signal as opposed to control each motor pair individually (which is what I believe your goal is with the if elseif code).</p> <p>Imagine a PID controller as three people watching your vehicle, all trying to give advice based on their &quot;experience&quot; (mode). Each &quot;person&quot; will give you one of three &quot;statements&quot; So you have:</p> <ul> <li><strong>Mr. Proportional</strong> - This person looks at where the output is and compares it to what you asked for. The statements given by this person are:</li> </ul> <ol> <li>There is a large difference between what you want and what you have - <em>take a big action</em>.</li> <li>There is a small difference between what you want and what you have - <em>take a small action</em>.</li> <li>What you have is what you asked for - <em>take no action</em>.</li> </ol> <ul> <li><strong>Mr. Integral</strong> - This person looks at the same error value Mr. Proportional does, but compares it to <em>how long</em> it's been that way. The statements given by this person are:</li> </ul> <ol> <li>You have chronic/acute error (small error for a long time or big error for a small time) - <em>take a big action</em>.</li> <li>You have mild error (small error for a short time) - <em>take a small action</em>.</li> <li>Your error history is neutral (time * positive error is equal to time * negative error) - <em>take no action</em>.</li> </ol> <ul> <li><strong>Mr. Derivative</strong> - This person looks at the same error value Mr. Proportional does, but compares it to <em>how it's changing</em>. The statements given by this person are:</li> </ul> <ol> <li>Your error is getting bigger - <em>take a bigger action</em>.</li> <li>Your error is getting smaller - <em>take a <strong>negative</strong> action</em>.</li> <li>Your error is not changing - <em>take no action</em>.</li> </ol> <p>It's important to note statement 2 of Mr. Derivative up there - imagine you're driving a car behind someone who is already at a stop. As you get closer to them (error is getting smaller), not only do you want to let off the gas, but you also want to brake! Derivative action is what gives you &quot;braking&quot; - no other term (P or I) gives you <em>negative</em> action until you're <em>past</em> the setpoint. Derivative is the only term that tells you to <em>slow down because you're getting close</em>.</p> <p>Another way to help understand these terms is to understand <em>physically</em> what they mean. Say your reference is a speed. This means:</p> <ul> <li>Proportional error compares your speed to your target speed. Want to be going 60 and you're going 55? Speed up. Going 65? Slow down. This is easy.</li> <li>Integral error compares the <em>integral</em> of target speed to the <em>integral</em> of actual speed. This means that it's comparing a target <strong>position</strong> to your actual <strong>position</strong>. 
Were you supposed to be in the city already? Then you need to speed up. Were you supposed to be in the city and you're still at your house? <em>SPEED THE ENTIRE WAY</em>.</li> <li>Derivative error compares the <em>derivative</em> of the difference between target and actual speeds. Is the person in front of you pulling away? <em>Speed up!</em> Is the person in front of you pushing their brakes? <em>Slow down!</em> As I mentioned above, if your target is to be immediately behind the person in front of you, then proportional and integral will both &quot;tell&quot; you to speed up. Derivative is the only &quot;person&quot; to tell you you're going to rear-end them if you don't start braking.</li> </ul> <p><strong>SO</strong>, what happens when you force signs on the error terms?</p> <p>Let's suppose you're in a car, trying to follow a friend to a restaurant. Here the friend's speed represents the speed reference and your speed represents your speed feedback. Speed limit on the road is 35mph (55kph). Here's what happens:</p> <ol> <li>Your friend begins to move.</li> <li>You are still stationary, so the following errors happen:</li> </ol> <ul> <li>Proportional error is positive (you want to go 35 and are actually going 0).</li> <li>Integral error is a little positive (your friend is farther from you).</li> <li>Derivative error is large and positive (your friend is quickly pulling away from you).</li> <li>You force signs: force integral error to be positive (already is) and derivative error to be negative.</li> <li>This means that derivative error is &quot;telling&quot; you that the car is getting farther away from you, but you <em>invert</em> that and assume that derivative error <em>meant</em> to say that you are getting closer to your friend. <strong>This is wrong.</strong></li> <li>Actions: <strong>Proportional</strong> - press the gas a moderate amount. <strong>Integral</strong> - Press the gas pedal a little bit. <strong>Derivative</strong> - Should be press the <strong>gas</strong> pedal a lot, but you inverted it, so instead you <strong>press the brake a lot</strong>.</li> </ul> <ol start="3"> <li>Eventually your friend gets far enough away that the proportional and integral error becomes large enough that it overrides your (incorrectly inverted) derivative term. At this point:</li> </ol> <ul> <li>Proportional error is large (still going zero and want to go 35).</li> <li>Integral error is very large (your friend is very, very far in front of you).</li> <li>Derivative term is still large (friend is still actively getting farther from you), but you are still forcing it to be negative.</li> <li>Actions: <strong>Proportional</strong> - press the gas a lot. <strong>Integral</strong> - Floor the gas pedal. <strong>Derivative</strong> - Should be push the gas pedal, but you inverted it, so instead you <em>press the brake</em>.</li> </ul> <ol start="4"> <li>After some time, you get to 34.999 mph. 
Proportional error is still (slightly) positive because you want to go 35 and you're actually at 34.999, so proportional error is 0.001</li> </ol> <ul> <li>Proportional error is barely positive (still going slower than 35mph)</li> <li>Integral error is at its largest (you are farthest from your friend thus far because your friend has been going 35 the whole time)</li> <li>Derivative error is roughly zero (you're almost the same speed your friend is, so now the proportional error stabilizes)</li> <li>You force signs: Force integral error to be positive (already is) and derivative error to be negative (it's almost zero, so negligible change).</li> <li>Action: <strong>Proportional</strong> - No action because you're almost at 35mph. <strong>Integral</strong> - You are now <em>really</em> far from your friend, so you floor the gas. <strong>Derivative</strong> - No action because the proportional error is almost stable.</li> </ul> <ol start="4"> <li>Now, because you were flooring the gas, you pass 35mph and hit 35.01mph. Now, your proportional error becomes negative (want 35, are going 35.01, so error is -0.01).</li> </ol> <ul> <li>Proportional error is almost zero (going just over the speed limit)</li> <li>Integral error is very large and still very positive because you are actually very far behind your friend.</li> <li>Derivative error is almost zero (because proportional error is still almost zero).</li> <li>You force signs: Force derivative error to be positive - little change because it's almost zero anyways. The problem here comes when you force integral error to be negative - it <em>was</em> very large and very positive! Now you're forcing it to be negative. This means Mr. Integral was telling you you're very far behind your friend, but you <strong>invert</strong> that and assume integral error <em>meant</em> to say that you are very far in front of your friend.</li> <li>Action: <strong>Proportional</strong> - No action because you're going 35mph. <strong>Integral</strong> - You are very far behind your friend and should floor the gas pedal, but <em>you inverted that</em> and now think that you are very far ahead of your friend, so you <strong>stomp the brake pedal instead!</strong> <strong>Derivative</strong> - No action because proportional error is pretty stable.</li> </ul> <p>At this point you hit a loop - you slam on brakes when your speed just passes 35mph because you invert integral error, then you floor the gas when you fall below 35mph because you un-invert integral error. This should (violently!) jerk the car (aircraft) around and prevent you from ever eliminating any steady state error.</p> <p>Worse, I'm not sure how this would behave once you get to the set position, but I think the constant sign flipping might prevent you from stopping anywhere close to where you wanted (if it ever stabilizes at all).</p>
9554
2016-04-03T20:39:32.727
|quadcopter|control|pid|imu|pwm|
<p>I'm trying to implement a PID control on my quadcopter using the Tiva C series microcontroller but I have trouble making the PID stabilize the system. </p> <p>While I was testing the PID, I noticed slow or weak response from PID controller (the quad shows no response at small angles). In other words, it seems that the quad's angle range has to be relatively large (above 15 degrees) for it to show a any response. Even then, the response always over shoots no matter what I, D gains I choose for my system. At low P, I can prevent overshoot but then it becomes too weak. </p> <p>I am not sure if the PID algorithm is the problem or if its some kinda bad hardware configuration (low IMU sample rate or maybe bad PWM configurations), but I have strong doubts about my PID code as I noticed changing some of the gains did not improve the system response. </p> <p>I will appreciate If someone can point out whether i'm doing anything wrong in the PID snippet for the pitch component I posted. I also have a roll PID but it is similar to the code I posted so I will leave that one out.</p> <pre><code>void pitchPID(int16_t pitch_conversion) { float current_pitch = pitch_conversion; //d_temp_pitch is global variable //i_temp_pitch is global variable float pid_pitch=0; //pitch pid controller float P_term, I_term, D_term; float error_pitch = desired_pitch - current_pitch; //if statement checks for error pitch in negative or positive direction if ((error_pitch&gt;error_max)||(error_pitch&lt;error_min)) { if (error_pitch &gt; error_max) //negative pitch- rotor3&amp;4 speed up { P_term = pitch_kp*error_pitch; //proportional i_temp_pitch += error_pitch;//accumulate error if (i_temp_pitch &gt; iMax) { i_temp_pitch = iMax; } I_term = pitch_ki*i_temp_pitch; if(I_term &lt; 0) { I_term=-1*I_term; } D_term = pitch_kd*(d_temp_pitch-error_pitch); if(D_term&gt;0) { D_term=-1*D_term; } d_temp_pitch = error_pitch; //store current error for next iteration pid_pitch = P_term+I_term+D_term; if(pid_pitch&lt;0) { pid_pitch=(-1)*pid_pitch; } //change rotor3&amp;4 pitchPID_adjustment (pid_pitch, 'n'); //n for negative pitch } else if (error_pitch &lt; error_min) // positive pitch- rotor 1&amp;2 speed up { P_term = pitch_kp*error_pitch; //proportional i_temp_pitch += error_pitch; if (i_temp_pitch &lt; iMin) { i_temp_pitch = iMin; } I_term = pitch_ki*i_temp_pitch; if(I_term &gt; 0) { I_term=-1*I_term; } D_term = pitch_kd*(d_temp_pitch - error_pitch); if(D_term &lt; 0) { D_term=-1*D_term; } d_temp_pitch = error_pitch; pid_pitch = P_term+I_term+D_term; if(pid_pitch&lt;0) { pid_pitch=(-1)*pid_pitch; } print(pid_pitch);//pitch printString("\r\n"); //change rotor1&amp;2 pitchPID_adjustment(pid_pitch,'p'); //p for positive pitch } } } </code></pre> <p> </p> <pre><code>void pitchPID_adjustment(float pitchPIDcontrol, unsigned char pitch_attitude) { if (pitchPIDcontrol&gt;(maximum_dutyCycle-set_dutyCycle)) { pitchPIDcontrol=maximum_dutyCycle-set_dutyCycle; } switch (pitch_attitude){ //change rotor1&amp;2 case 'p': //positive status PWM0_2_CMPA_R += (pitchPIDcontrol);//(RED)//motor1 PWM0_0_CMPA_R += (pitchPIDcontrol);//(Yellow)//motor2 break; //change rotor 3&amp;4 case 'n': //negative status PWM0_1_CMPA_R += pitchPIDcontrol;//(ORANGE)//motor3 PWM1_1_CMPA_R += pitchPIDcontrol;//(green)//motor4 break; } </code></pre> <p>Also, can someone please tell me how this motor mixing works?: </p> <pre><code>Front =Throttle + PitchPID Back =Throttle - PitchPID Left =Throttle + RollPID Right =Throttle - RollPID </code></pre> <p>vs what I did in the 
function:</p> <pre><code>void pitchPID_adjustment(float pitchPIDcontrol, unsigned char pitch_attitude) </code></pre>
Quadcopter PID Algorithm
<p>I came across a site that referred to CIM as a <em>Controlled <a href="https://en.wikipedia.org/wiki/Induction_motor" rel="nofollow noreferrer">Induction Motor</a></em>. Confirmed by <a href="https://acronyms.thefreedictionary.com/CIM" rel="nofollow noreferrer">CIM at The Free Dictionary</a></p> <p>It's entirely possible that the other answer is correct, as <a href="https://andymark-weblinc.netdna-ssl.com/media/W1siZiIsIjIwMTgvMTEvMjcvMDkvMzUvMjQvZWQxMzQxZjktNjg1My00Yzc2LTk5YTAtN2QyNDlmMWFhOGUzL2FtLTAyNTUgQ0lNIE1vdG9yIExheW91dC5wZGYiXV0/am-0255%20CIM%20Motor%20Layout.pdf?sha=816b33262672ebf2" rel="nofollow noreferrer">this datasheet</a> has <em>CCL Industrial Motor Limited (CIM)</em> at the bottom.</p>
9568
2016-04-05T03:03:34.643
|motor|
<p>I'm trying to make decisions for motors on a robot build. I keep running across CIM Motors. What is a CIM Motor? Where does the designation CIM come from? What does CIM mean?</p>
What is a CIM motor?
<p>A single transducer sensor has to both send out the sound and then sense the return sound. It takes time for the vibration to stop after making the sound and that is time it can't sense a return. So close distances can't be sensed by a single transducer sensor.</p>
9580
2016-04-05T23:12:25.040
|ultrasonic-sensors|
<p>I have been looking for a cheap ultrasonic sensor that is not blind under +/-30 cm but the only sensors I could find use the following shape, which is not suitable for my project (because of the robot design that only has 1 hole, and not 2..) : <a href="https://i.stack.imgur.com/PDYXh.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PDYXh.jpg" alt="shape 1"></a></p> <p>Is there any chance to find a sensor with that other shape with a range starting around 5cm ?</p> <p><a href="https://i.stack.imgur.com/FebEN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FebEN.png" alt="shape 2"></a></p> <p>Actually I am wondering if that 2nd shape makes this constraint mandatory or if I just did not found the appropriate product.</p>
Ultrasonic sensor range and shape
<p>$s_{last}$ <em>does</em> change.</p> <p>Looking at the pseudo code, $s_{last}$ is updated upon <em>each</em> iteration of the <code>while</code> loop in <code>main()</code>, in line 31, $s_{last}=s_{start}$, <em>if</em> the condition on line 29 is met:</p> <pre><code>if any edge costs changed </code></pre> <p>Likewise, in figure 4 $s_{last}$ is updated on line 39:</p> <p><a href="https://i.stack.imgur.com/DUa7p.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DUa7p.png" alt="Pseudo Code from figure 4"></a></p>
9592
2016-04-08T07:37:42.407
|mobile-robot|control|robotic-arm|motion-planning|algorithm|
<p>In the D*Lite algorithm, described in line 21 of Figure 3, on page 4, in <a href="http://idm-lab.org/bib/abstracts/papers/aaai02b.pdf" rel="nofollow noreferrer">D* Lite</a>, the <code>main()</code> starts with defining $s_{last}=s_{start}$. But value of $s_{last}$ is never updated in the entire algorithm. </p> <p>So what is the purpose of defining this term and what does it mean?</p> <p><a href="https://i.stack.imgur.com/8qTBu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8qTBu.png" alt="Pseudo Code from figure 3"></a></p>
Meaning of s_last in D star Lite algorithm
<p>If $x, y, z$ are the coordinates of the object in the camera frame, then the camera-to-object transform will be: $$\begin{pmatrix}1 &amp; 0 &amp; 0 &amp; 0\\ 0 &amp; \cos\theta &amp; -\sin\theta &amp; 0\\ 0 &amp; \sin\theta &amp; \cos\theta &amp; 0\\ 0 &amp; 0 &amp; 0 &amp; 1\\ \end{pmatrix} \begin{pmatrix}1 &amp; 0 &amp; 0 &amp; x\\ 0 &amp; 1 &amp; 0 &amp; y\\ 0 &amp; 0 &amp; 1 &amp; z\\ 0 &amp; 0 &amp; 0 &amp; 1\\ \end{pmatrix} $$</p> <p>The tool-to-object transform is then a single rotation about the $x$ axis of the tool by $\theta$ (taking the tool frame to the camera frame), followed by the camera-to-object transform above.</p> <p>Rotation of the object is unaccounted for.</p>
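<p>If it helps, here is a small Eigen sketch of composing such a rotation about the $x$ axis with a pure translation into one homogeneous transform. The function and variable names are illustrative only, and the composition order is an assumption; adapt it to however the frames are actually defined in your setup.</p> <pre><code>#include &lt;Eigen/Dense&gt;
#include &lt;cmath&gt;

// Hedged sketch: 4x4 homogeneous transform built from a rotation of theta
// about the x axis and a translation (x, y, z). Names and the multiplication
// order are illustrative, not a definitive hand-eye calibration.
Eigen::Matrix4f composeTransform(float theta, float x, float y, float z)
{
    Eigen::Matrix4f rotX = Eigen::Matrix4f::Identity();
    rotX(1, 1) =  std::cos(theta);  rotX(1, 2) = -std::sin(theta);
    rotX(2, 1) =  std::sin(theta);  rotX(2, 2) =  std::cos(theta);

    Eigen::Matrix4f trans = Eigen::Matrix4f::Identity();
    trans(0, 3) = x;  trans(1, 3) = y;  trans(2, 3) = z;

    return rotX * trans;   // rotation composed with the translation
}
</code></pre>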
9597
2016-04-08T21:58:10.373
|robotic-arm|inverse-kinematics|rotation|
<p>I am having an issue with some hand-eye calibration. </p> <p>So i am using a simple robot which at its tool point has an stereo camera mounted on it. </p> <p>I want to perform some visual serving/tracking based stereo images extracted from the camera in the "hand". The camera provides me x,y,z coordinates of the object I want to track. </p> <p>I can at all time extract an homogenous transformation matrix from base to tool (not cam) as <code>(T_tool_base)</code>. </p> <p>Firstly... I guess i would need perform some form of robot to (vice versa) camera calibration, My idea was that would consist of something like this </p> <pre><code>T_base_world = (T_base_tool) (T_tool_cam) (T_cam_world) </code></pre> <p>Where the T_tool_cam would entail the calibration... since the camera is at the tool point, would that entail the T_tool_cam should entail information on how much the camera is displaced from the tool point, and how it is rotated according to the tool point? or is not like that?</p> <p>secondly... How do i based purely x,y,z coordinate make an homogeneous transformation matrix, which includes an rotation matrix ?</p> <p>thirdly.. Having a desired Transformation matrix which in theory this </p> <pre><code>T_base_world = (T_base_tool) (T_tool_cam) (T_cam_world) </code></pre> <p>would provide me, would an inverse kinematics solution provide me with one or multiple solution?... In theory should this only provide me one, or what?</p>
Hand-eye calibration?
<p>The first relevant line is line 6:</p> <p>$$U.Insert(s_{goal}, [h(s_{start}, s_{goal});0]);$$</p> <p>Basically speaking, this is an association of the goal to some vector with two values in it. Now $U$ looks like this:</p> <p>$$U:s_{goal}\rightarrow \left(\begin{matrix}h(s_{start}, s_{goal})\\0\end{matrix}\right)$$</p> <p>Of course, $U$ is a list of such associations, but within the context of this question there's only one element in it.</p> <blockquote> <p>Thus in the procedure <code>ComputeShotestPath()</code> (line 10-28), $u = s_{goal}$</p> </blockquote> <p>That is correct.</p> <blockquote> <p>And as, $k_{old}=k_{new}$ (because $k_m=0$)</p> </blockquote> <p>That is not necessarily correct. The relevant lines are 12</p> <p>$$k_{old}=U.TopKey();$$</p> <p>Looking at what $U$ is above, this line is equivalent to:</p> <p>$$k_{old}=\left(\begin{matrix}h(s_{start}, s_{goal})\\0\end{matrix}\right);$$</p> <p>and line 13 for $k_{new}$:</p> <p>$$k_{new}=CalculateKey(u)\color{red}{)};$$</p> <p><em>I think the extra parenthesis is a typo. I will ignore it.</em> This value actually comes from a function call and results in the following:</p> <p>$$k_{new}=\left(\begin{matrix}\min\big(g(s), rhs(s)\big) + h(s_{start}, s) +k_m\\rhs(s)\end{matrix}\right);$$</p> <p>Where $s$ is the parameter passed to the function, which is actually $u$ which in turn is actually $s_{goal}$, also as you stated $k_m=0$, thus:</p> <p>$$k_{new}=\left(\begin{matrix}\min\big(g(s_{goal}), rhs(s_{goal})\big) + h(s_{start}, s_{goal})\\rhs(s_{goal})\end{matrix}\right);$$</p> <p>I don't know what the functions $g()$ and $rhs()$ do (look them up in the paper) and what values they return, but if $rhs(s_{goal}) = 0$ and $g(s_{goal}) &gt; rhs(s_{goal})$ then yes it's true that $k_{new}=k_{old}$.</p> <blockquote> <p>condition $k_{old}\leq k_{new}$ is satisfied</p> </blockquote> <p>I think there's your problem. I for one cannot see a condition $k_{old}\leq k_{new}$ being evaluated in the code. I guess you are referring to line 14, but that line actually looks like this:</p> <p>$$if(k_{old} \color{red}{\dot\lt}k_{new})$$</p> <p>This is <strong>not</strong> a less-than-or-equal sign. That is a less-than sign with a dot above it. I don't think they are the same thing and I wonder if the following is actually a valid way to express that: $$\le\neq\dot\lt$$</p> <p>Anyway, what the heck is $\dot\lt$? I don't know. What I think it means is that it's an <strong>element wise</strong> less-than operator. In languages like Matlab or Octave it's common to add a dot to denote that an operator is applied element wise. The thing is that it doesn't make too much sense to apply a less-than operator to a vector. It's defined for scalar values, not vectors or matrices. I admit that I'm not sure about this one.</p> <p>The condition in line 14 is not satisfied. 
</p> <hr> <p>Now all of the above is under the assumption that the condition of the while loop in <code>ComputeShortestPath()</code> is true and the body of the loop is executed.</p> <p>The (first part of the) condition looks like this:</p> <p>$$U.TopKey()\dot\lt CalculateKey(s_{start})$$</p> <p>With the assumptions from above this expands to:</p> <p>$$\left(\begin{matrix}h(s_{start}, s_{goal})\\0\end{matrix}\right) \dot\lt \left(\begin{matrix}\min\big(g(s_{start}), rhs(s_{start})\big) + h(s_{start}, s_{start})\\rhs(s_{start})\end{matrix}\right)$$</p> <p>Again, I don't know how this algorithm works exactly; I had a quick look at the paper, and what I understood is that $h(s_{start}, s_{start}) = 0$ and $rhs(s_{start}) = 0$, which makes me doubt that the condition is true.</p> <p>In other words: I don't think the while loop is even running at all, and if it actually does, it's not because of that part of its condition.</p> <hr> <p><strong>tl;dr:</strong> Holy cow, stop staring at nontextual pseudocode! Get yourself an implementation (or write it yourself) and use a debugger to step through the program.</p> <p><strong>Run the actual code on an actual machine</strong>; this is what matters in the end.</p>
9602
2016-04-10T00:28:02.920
|mobile-robot|control|robotic-arm|motion-planning|algorithm|
<p>In optimized D*Lite algorithm as shown in the figure below (page 5, of the paper <a href="http://robotics.cs.tamu.edu/dshell/cs625/aaai02b.pdf" rel="nofollow noreferrer">D*Lite</a>), when the procedure ComputeShortestPath() is called for the first time in line 31, U(list of inconsistent vertices) contains only goal vertex ($s_{goal}$). Thus in the procedure ComputeShotestPath()(line 10-28), $u = s_{goal}$. And as, $k_{old}=k_{new}$ (because $k_m=0$), condition $k_{old}\leq k_{new}$ is satisfied and $u = s_{goal}$ is again inserted in U with same value of $k_{old}=k_{new}$. Thus, it seems that line(11-15) will run forever, and the algorithm will not be able to find the shortest path from goal to start.</p> <p>I know that this algorithm has been widely used and I am failing to understand it. But where am I going wrong? </p> <p><a href="https://i.stack.imgur.com/uXuNY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uXuNY.jpg" alt="enter image description here"></a></p>
ComputeShortestPath() in Dstar lite algorithm
<p>Thanks to the comment by Biscuits, in the end I realized that I had been adding the random numbers to the predicted state instead of the true state. Therefore, I was essentially accumulating my gyro errors, which led to the wrong results. I hope other people won't make the same mistake.</p>
9608
2016-04-11T08:42:22.780
|quadcopter|kalman-filter|
<p>I have been stuck on this for weeks, I really hope that someone can help me with this,thank you in advance. I am trying to write an IMU attitude estimation algorithm using quaternion kalman filter. So based on this research paper: <a href="https://hal.archives-ouvertes.fr/hal-00968663/document" rel="nofollow noreferrer">https://hal.archives-ouvertes.fr/hal-00968663/document</a>, I have developed the following pseudo code algorithm:</p> <p>Predict Stage:</p> <p>Qk+1/k = Ak * Qk; where Ak contains the gyro measurement. <a href="https://i.stack.imgur.com/CDjLC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CDjLC.png" alt="enter image description here"></a></p> <p>Pk+1/k = Ak * Pk *Ak.transpose() + Q; where Q is assumed to be zero.</p> <p>After prediction, we can use this formula to get the supposed gravity measurement of accelerometer Yg in body frame :</p> <p>Yg = R * G; // R is the rotation matrix generated from quaternion Qk+1/k and G = (0,0,0,9.81).</p> <p>This equation then translates to the following equation which allows me to get measurement model matrix H.</p> <p>H * Qk+1/k = 0; //where H stores value related to (Yg-G).</p> <p>Update Stage:</p> <p>K = P * H * (H * P * H.transpose()+R)^(-1); //R should be adaptively adjusted but right now initialized as identity matrix</p> <p>Qk+1/k+1 = (I-KH)Qk+1/k;</p> <p>Qk+1/K+1 = (Qk+1/K+1)/|Qk+1/k+1|; //Normalize quaternion</p> <p>Pk+1/K+1 = (I - KH)Pk+1/k;</p> <p>The following is the main part of my code. The complete C++ code is at here <a href="https://github.com/lyf44/fcu" rel="nofollow noreferrer">https://github.com/lyf44/fcu</a> if you want to test.</p> <pre><code>Matrix3f skew_symmetric_matrix(float a, float b, float c, float d){ Matrix3f matrix; matrix &lt;&lt; a,d*(-1),c, d,a,b*(-1), c*(-1),b,a; return (matrix); } void Akf::state_transition_matrix(float dt,float gx,float gy, float gz){ Vector3f tmp; tmp(0) = gx*PI/180; tmp(1) = gy*PI/180; tmp(2) = gz*PI/180; float magnitude = sqrt(pow((float)tmp(0),2)+pow((float)tmp(1),2)+pow((float)tmp(2),2)); /*q(k+1) = | cos(|w|*dt/2) | quaternion_multiply q(k) | w/|w|*sin(|w|*dt/2) | */ //w/|w|*sin(|w|*dt/2) tmp = tmp/magnitude*sin(magnitude*dt/2); //quaternion multiplication A(0,0) = cos(magnitude*dt/2); A.block&lt;3,1&gt;(1,0) = tmp; A.block&lt;1,3&gt;(0,1) = tmp.transpose()*(-1); Matrix3f skew_symmetric; skew_symmetric = skew_symmetric_matrix((float)A(0,0),(float)tmp(0),(float)tmp(1),(float)tmp(2)); A.block&lt;3,3&gt;(1,1) = skew_symmetric; } void Akf::observation_model_matrix(Vector3f meas){ Vector3f G; Vector3f tmp; G &lt;&lt; 0,0,9.81; /* H = | 0 -(acc-G).transpose | * | (acc-G) -(acc+G).skewsymmetric | */ tmp = meas-G; H(0,0) = 0; H.block&lt;3,1&gt;(1,0) = tmp; H.block&lt;1,3&gt;(0,1) = tmp.transpose()*(-1); tmp = tmp+G+G; Matrix3f matrix; matrix = skew_symmetric_matrix(0,(float)tmp(0),(float)tmp(1),(float)tmp(2)); H.block&lt;3,3&gt;(1,1) = matrix*(-1); //H = H*(0.5); cout&lt;&lt;"H"&lt;&lt;endl; cout&lt;&lt;H&lt;&lt;endl; cout&lt;&lt;"H*X"&lt;&lt;endl; std::cout&lt;&lt;H*X&lt;&lt;std::endl; } void Akf::setup(){ X_prev = Vector4f::Zero(4,1); X_prev(0) = 1; Q = Matrix4f::Zero(4,4); Z = Vector4f::Zero(4,1); R = Matrix4f::Identity(4,4); P_prev = Matrix4f::Identity(4,4); P_prev = P_prev*(0.1); I = Matrix4f::Identity(4,4); sum = Vector4f::Zero(4,1); noise_sum = Matrix4f::Zero(4,4); counter=1; } void Akf::predict_state(){ cout&lt;&lt;(60*counter%360)&lt;&lt;endl; X = A*X_prev; A_T = A.transpose(); P = A*P_prev*A_T+Q; } void Akf::update_state(){ Matrix4f PH_T; Matrix4f tmp; 
PH_T = P*H.transpose(); S = H*PH_T+R; if (S.determinant()!= 0 ) { tmp = S.inverse(); K = P*H*tmp; //std::cout&lt;&lt;"K"&lt;&lt;std::endl; //std::cout&lt;&lt;K&lt;&lt;std::endl; X_updated = (I-K*H)*X; X_updated = X_updated /(X_updated.norm()); P_updated = (I-K*H)*P; } else{ X_updated = X; std::cout&lt;&lt; "error-tmp not inversible!"&lt;&lt;std::endl; } X_prev = X_updated; P_prev = P_updated; } void rotation_matrix(Vector4f q,Matrix3f &amp;rot_matrix){ int i; for (i=1;i&lt;4;i++){ q(i) = q(i)*(-1); } Matrix3f matrix; matrix(0,0) = pow((float)q(0),2)+pow((float)q(1),2)-pow((float)q(2),2)-pow((float)q(3),2); matrix(0,1) = 2*(q(1)*q(2)-q(0)*q(3)); matrix(0,2) = 2*(q(0)*q(2)+q(1)*q(3)); matrix(1,0) = 2*(q(1)*q(2)+q(0)*q(3)); matrix(1,1) = pow((float)q(0),2)-pow((float)q(1),2)+pow((float)q(2),2)-pow((float)q(3),2); matrix(1,2) = 2*(q(2)*q(3)-q(0)*q(1)); matrix(2,0) = 2*(q(1)*q(3)-q(0)*q(2)); matrix(2,1) = 2*(q(0)*q(1)+q(2)*q(3)); matrix(2,2) = pow((float)q(0),2)-pow((float)q(1),2)-pow((float)q(2),2)+pow((float)q(3),2); rot_matrix = matrix; } Vector3f generate_akf_random_measurement(Vector4f state){ int i; //compute quaternion rotation matrix Matrix3f rot_matrix; rotation_matrix(state,rot_matrix); //rot_matrix*acceleration in NED = acceleration in body-fixed frame Vector3f true_value = rot_matrix*G; std::cout&lt;&lt;"true value"&lt;&lt;std::endl; std::cout&lt;&lt;true_value&lt;&lt;std::endl; for (i=0;i&lt;3;i++){ noisy_value(i) = true_value(i) + (-1) + (float)(rand()/(float)(RAND_MAX/2)); } return (noisy_value); } int main(){ float gx,gy,gz,dt; gx =60; gy=0; gz =0; //for testing, let it rotate around x axis by 60 degree myakf.state_transition_matrix(dt,gx,gy,gz); // dt is elapsed time myakf.predict_state(); Vector4f state = myakf.get_predicted_state(); Vector3f meas = generate_akf_random_measurement(state); myakf.observation_model_matrix(meas); myakf.measurement_noise(); myakf.update_state(); q = myakf.get_updated_state(); </code></pre> <p>The problem that I face is that my code does not work.The prediction stage works fine but the updated quaternion state is only correct for the first few iterations and it starts to drift away from the correct value. I have checked my code against the research paper multiple times and ensured that it is in accordance with the algorithm proposed by the research paper.</p> <p>In my test, I am rotating around x axis by 60 degree per iterations. The number below the started is the angle of rotation. state and updated state is the predicted and updated quaternion respectivly while true value, meas, result are acceleration due to gravity in body frame.As the test result indicates, everything is way off after rotating 360 degrees. 
The following is my test result: </p> <pre><code>1 started 60 state 0.866025 0.5 0 0 true value 0 8.49571 4.905 meas 0.314533 7.97407 4.98588 updated state 0.866076 0.499913 -2.36755e-005 1.56256e-005 result 0.000555564 8.49472 4.90671 1 started 120 state 0.500087 0.865975 -2.83164e-005 1.69446e-006 true value 0.000306622 8.4967 -4.90329 meas -0.532868 8.79841 -4.80453 updated state 0.485378 0.862257 -0.129439 -0.064549 result 0.140652 8.37531 -5.10594 1 started 180 state -0.0107786 0.989425 -0.0798226 -0.12062 true value -2.35843 -0.0203349 -9.52226 meas -1.39627 -0.889284 -8.74243 updated state -0.0195091 0.981985 -0.151695 -0.110965 result -2.19598 -0.0456112 -9.56095 1 started 240 state -0.507888 0.840669 -0.0758893 -0.171946 true value -3.59229 -8.12105 -4.16894 meas -4.52356 -7.73113 -4.98735 updated state -0.53758 0.811101 -0.212643 -0.0889171 result -3.65783 -8.18397 -3.98485 1 started 300 state -0.871108 0.433644 -0.139696 -0.183326 true value -3.94732 -6.909 5.73763 meas -4.36385 -6.98853 5.39759 updated state -0.86404 0.436764 -0.102296 -0.228487 result -3.69216 -6.94565 5.86192 1 started 0 state -0.966663 -0.0537713 0.0256525 -0.249024 true value 0.749243 0.894488 9.74036 meas -0.194541 0.318586 10.1868 updated state -0.78986 -0.0594022 0.0311688 -0.609607 result 1.1935 0.547764 9.72171 1 started 60 state -0.654338 -0.446374 0.331797 -0.512351 true value 8.74674 2.39526 3.74078 meas 9.36079 2.96653 3.57115 updated state -0.52697 -0.512048 0.221843 -0.64101 result 8.73351 2.50411 3.70018 </code></pre> <p>Can someone help me confirm that my understanding about the theory of this quaternion kalman filter and my pseudo code is correct? Also, if anyone has implemented attitude estimation using maybe a different version of quaternion kalman filter, I would greatly appreciate if you can provide a pseudo code and a little explanation. Thank you guys very much!</p>
Quaternion Kalman Filter Algorithm
<p>I found the solution to my problem: I was misunderstanding the $R_k$ notation.</p> <p>$r_k:$ the reference vector $r_k = (0, \psi_k)^T$</p> <p>$R_k = \begin{bmatrix} r_k, \ldots, r_{k+N}\end{bmatrix}^T$</p> <p>To compute $R_k$, I was replacing each value of $r_k$ and then doing the transpose. But instead I have to do the transpose first, and then replace the values of $r_k$. Doing this, I get $R_k$ as a (22x1) matrix and the computation works.</p> <p>I did some tests with Matlab, but didn't manage to really make it work, and as I didn't have the time to continue working on it, it's on standby. Anyway, if anyone reading this has managed, or is trying, to make a similar algorithm work, I'm interested in talking about it (but this is out of the scope of my original question).</p>
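<p>For anyone hitting the same thing, here is a small sketch (written with Eigen just for illustration, not the Matlab code I actually used; the function name and the use of a <code>std::vector</code> for the reference angles are my own) of building $R_k$ by stacking the $r_k$ vectors vertically into one column vector.</p> <pre><code>#include &lt;Eigen/Dense&gt;
#include &lt;vector&gt;

// Hedged sketch: stack the N+1 reference vectors r_k = (0, psi_k)^T
// vertically, giving a (2*(N+1)) x 1 column vector (22x1 for N = 10).
Eigen::VectorXd stackReferences(const std::vector&lt;double&gt;&amp; psi)  // psi_k ... psi_{k+N}
{
    const int count = static_cast&lt;int&gt;(psi.size());  // N + 1 entries
    Eigen::VectorXd R(2 * count);
    for (int n = 0; n &lt; count; ++n) {
        R(2 * n)     = 0.0;      // first component of r_{k+n}
        R(2 * n + 1) = psi[n];   // reference path angle psi_{k+n}
    }
    return R;
}
</code></pre>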
9620
2016-04-13T11:34:24.927
|mobile-robot|navigation|
<p>I'am trying to implement a path following algorithm based on MPC (Model Predictive Control), found in this paper : <a href="http://www2.imm.dtu.dk/pubdb/views/edoc_download.php/189/pdf/imm189.pdf" rel="noreferrer">Path Following Mobile Robot in the Presence of Velocity Constraints</a> </p> <p><strong>Principle:</strong> Using the robot model and the path, the algorithm predict the behavior of the robot over N future steps to compute a sequence of commands $(v,\omega)$ to allow the robot to follow the path without overshooting the trajectory, allowing to slow down before a sharp turn, etc.<br> $v:$ Linear velocity<br> $\omega:$ Angular velocity</p> <p><strong>The robot:</strong> I have a non-holonomic robot like this one (Image extracted from the paper above) :<br> <a href="https://i.stack.imgur.com/K6MRY.png" rel="noreferrer"><img src="https://i.stack.imgur.com/K6MRY.png" alt="Non-holonomic robot with castor wheel. (Extracted from the paper above)"></a></p> <p><strong>Here is my problem:</strong> Before implementing on the mobile robot, I'am trying to compute the needed matrices (using Matlab) to test the efficiency of this algorithm. At the end of the matrices computation some of them have dimension mismatch</p> <p><strong>What I did:</strong><br> For those interested, this calculation is from §4 (4.1, 4.2, 4.3, 4.4) p6-7 of the paper. </p> <blockquote> <h2>4.1 Model</h2> <p>$z_{k+1} = Az_k + B_\phi\phi_k + B_rr_k$ (18) with:<br> $A = \begin{bmatrix} 1 &amp; Tv \\ 0 &amp; 1 \end{bmatrix}$ $B_\phi = \begin{bmatrix} {T^2\over2}v^2\\ Tv \end{bmatrix}$ $B_r = \begin{bmatrix} 0 &amp; -Tv \\ 0 &amp; 0 \end{bmatrix}$<br> $T$: sampling period<br> $v$: linear velocity<br> $k$: sampling index (i.e. $t= kT$)<br> $z_k:$ the state vector $z_k = (d_k, \theta_k)^T$ position and angle difference to the reference path $r_k:$ the reference vector $r_k = (0, \psi_k)^T$ with $\psi_k$ is the reference angle of the path at step k </p> <h2>4.2 Criterion</h2> <p>The predictive receding horizon controller is based on a minimization of the criterion<br> $J= \Sigma^N_{n=0} (\hat{z}_{k+n} - r_{k+n})^T Q(\hat{z}_{k+n} - r_{k+n}) + \lambda\phi^2_{k+n}$, (20)<br> Subject to the inequality constraint<br> $ P\begin{bmatrix} v_n \\ v_n\phi_n \end{bmatrix} \leq q,$<br> $n=0,..., N,$<br> where $\hat{z}$ is the predicted output, $Q$ is a weight matric, $\lambda$ is a scalar weight, and $N$ is prediction horizon.</p> <h2>4.3 Predictor</h2> <p>An n-step predictor $\hat{z}_{k+n|k}$ is easily found from iterating (18). 
Stacking the predictions $\hat{z}_{k+n|k},n = n,...,N$ in the vector $\hat{Z}$ yields<br> $\hat{Z} = \begin{bmatrix} \hat{z}_{k|k} \\ \vdots \\ \hat{z}_{k+N|k}\end{bmatrix} = Fz_k + G_\phi\Phi_k + G_rR_k$ (22)<br> with<br> $\Phi_k = \begin{bmatrix} \phi_k, \ldots, \phi_{k+N}\end{bmatrix}^T$,<br> $R_k = \begin{bmatrix} r_k, \ldots, r_{k+N}\end{bmatrix}^T$,<br> and<br> $F = \begin{bmatrix}I &amp; A &amp; \ldots &amp; A^N \end{bmatrix}^T$<br> $G_i = \begin{bmatrix} 0 &amp; 0 &amp; \ldots &amp; 0 &amp; 0 \\ B_i &amp; 0 &amp; \ldots &amp; 0 &amp; 0 \\ AB_i &amp; B_i &amp; \ddots &amp; \vdots &amp; \vdots \\ \vdots &amp; \ddots &amp; \ddots &amp; 0 &amp; 0 \\ A^{N-1}B_i &amp; \ldots &amp; AB_i &amp; B_i &amp; 0 \end{bmatrix}$ </p> </blockquote> <p>where index $i$ should be substituted with either $\phi$ or $r$</p> <blockquote> <h2>4.4 Controller</h2> <p>Using the N-step predictor (22) simplifies the criterion (20) to $J_k = (\hat{Z}_k - R_k)^T I_q (\hat{Z}_k - R_k) + \lambda\Phi^T_k\Phi_k$, (23) where $I_q$ is a diagonal matrix of appropriate dimension with instances of Q in the diagonal. The unconstrained controller is found by minimizing (23) with respect to $\Phi$:<br> $\Phi_k = -L_zz_k - L_rR_k$, (24)<br> with<br> $L_z = (lambda + G^T_wI_qG_w)^{-1}G^T_wI_qF$ $L_r = (lambda + G^T_wI_qG_w)^{-1}G^T_wI_q(Gr - I)$</p> </blockquote> <p>I'am trying to compute $\Phi_k = -L_zz_k - L_rR_k$ but the dimension of $L_r$ and $R_k$ does not match for matrix multiplication.</p> <p>Parameters are : </p> <ul> <li>$T=0.1s$</li> <li>$N=10$</li> <li>$\lambda=0.0001$</li> <li>$Q=\begin{bmatrix} 1 &amp; 0 \\ 0 &amp; \delta \end{bmatrix}$ with $\delta=0.02$</li> </ul> <p>I get :<br> $R_k$ a (11x2) matrix (N+1 elements of size 2x1, transposed)<br> $G_w$ a (22x11) matrix<br> $G^T_w$ a (11x22) matrix<br> $I_q$ a (22x22) matrix<br> $F$ a (22x2) matrix<br> $G_r$ a (22x22) matrix </p> <p>so Lz computation gives (according to the matrix sizes)<br> $L_z=(scalar + (11x22)(22x22)(22x11))^{-1} (11x22)(22x22)(22x22)$<br> a (11x2) matrix.<br> as $z_k$ is (2x1) matrix, doing $L_zz_k$ from (24) is fine. </p> <p>and Lr computation gives (according to the matrix sizes) $L_r=(scalar + (11x22)(22x22)(22x11))^{-1} (11x22)(22x22)((22x22) - (22x22))$<br> a (11x22) matrix.<br> as $R_k$ is (11x2) matrix, doing $L_rR_k$ from (24) is not possible.<br> I have a (11x22) matrix multiplicated by a (11x2) matrix.</p> <p>I'm sure I'm missing something big here but unable to see what exactly. Any help appreciated.</p> <p>Thanks</p>
Mobile robot path following using Model Predictive Control (MPC)
<p>I would recommend changing the naming convention since it is a bit misleading. In robotics the world coordinate system (CS) is usually your fixed, absolute coordinate system. Let's call the transformation matrix from your camera to your object $T_{Object,Tool}$. If it cannot include any rotation, then you are right that it should have the form you specified. You are also right with your multiplications, but I would advise keeping the two things separate. I actually do not think that you need the $T_{Object,Tool}$ matrix.</p> <p>The $T_{Tool,Base}$ is a transformation matrix from your tool to your base. This is what you can use to solve the inverse kinematics problem. </p> <ol> <li>Define your coordinate systems (with the Denavit-Hartenberg convention)</li> <li>Build your transformation matrix as $T_{Tool,Base} = A1(q1) \cdot A2(q2) \cdot A3(q3)\cdot A4(q4)\cdot A5(q5)\cdot A6(q6)$ (a rough code sketch of this step is given below)</li> <li>Build the Cartesian transformation matrix as $T_{TCP} = X_{trans}\cdot Y_{trans}\cdot Z_{trans}\cdot X_{rot}\cdot Y_{rot}\cdot Z_{rot}$</li> <li>Solve the inverse kinematics problem from </li> </ol> <p>$T_{TCP} = T_{Tool,Base}$</p> <p>as presented in <a href="http://thydzik.com/academic/robotics-315/chap4.pdf">this description</a>. After this step you should have a function that, for given Cartesian coordinates of the TCP as input $X = (x, y, z, a, b, c)$, calculates the corresponding joint positions as output $Q = (q_1, q_2, q_3, q_4, q_5, q_6)$.</p> <ol start="5"> <li>Your camera will measure the relative position of the object. Let's define the vector of measurements as $X_{Object} = (x, y, z, 0, 0, 0)$. The current Cartesian position of the robot is known (after solving the IK problem). Let's call this $X_{TCP} = (x_r, y_r, z_r, a_r, b_r, c_r)$. If you only want to track positions then $X_{new} = X_{Object} + X_{TCP}$ will give you the coordinates where you want to position your TCP. Use this as input for the inverse kinematic function and get the $Q$ angles which correspond to this position. If you want to orient the TCP toward the object, calculate the new relative angles geometrically and add those as well (instead of the $0, 0, 0$). That is it!</li> </ol>
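<p>As a rough sketch of step 2 (building $T_{Tool,Base}$ from the DH link transforms), with placeholder DH parameters that you would have to replace with the actual values of the robot in question:</p> <pre><code>#include &lt;Eigen/Dense&gt;
#include &lt;array&gt;
#include &lt;cmath&gt;

// Hedged sketch: standard Denavit-Hartenberg link transform A_i(q_i) and the
// product A1*...*A6. The d, a, alpha tables below are placeholders only.
Eigen::Matrix4d dhTransform(double q, double d, double a, double alpha)
{
    Eigen::Matrix4d A;
    A &lt;&lt; std::cos(q), -std::sin(q)*std::cos(alpha),  std::sin(q)*std::sin(alpha), a*std::cos(q),
         std::sin(q),  std::cos(q)*std::cos(alpha), -std::cos(q)*std::sin(alpha), a*std::sin(q),
         0.0,          std::sin(alpha),              std::cos(alpha),             d,
         0.0,          0.0,                          0.0,                         1.0;
    return A;
}

Eigen::Matrix4d toolToBase(const std::array&lt;double, 6&gt;&amp; q)
{
    const double d[6]     = {0, 0, 0, 0, 0, 0};   // placeholder DH parameters
    const double a[6]     = {0, 0, 0, 0, 0, 0};
    const double alpha[6] = {0, 0, 0, 0, 0, 0};

    Eigen::Matrix4d T = Eigen::Matrix4d::Identity();
    for (int i = 0; i &lt; 6; ++i)
        T = T * dhTransform(q[i], d[i], a[i], alpha[i]);
    return T;
}
</code></pre>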
9622
2016-04-13T20:00:56.027
|robotic-arm|kinematics|inverse-kinematics|
<p>I am a uncertain about how to compute the right homogeneous transformation matrix to compute an inverse kinematic Q-configuration. </p> <p>Looking at robot like this <a href="https://i.stack.imgur.com/cuV2q.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cuV2q.jpg" alt="enter image description here"></a></p> <p>Where at the end of this robot I have a camera mounted on to it. </p> <p>The purpose of my application is to make the robot follow an object, so basically tracking it. The camera provide me with an X,Y,Z coordinate, which the position i want place my robot arm. </p> <p><strong><em>First question - How do i set up the desired homogenous transformation matrix?</em></strong></p> <p>The way i see it, I have 2 transformation matrices being <code>T_tool_base</code> and <code>T_world_tool</code> which become <code>T_world_base = (T_tool_base) (T_world_tool)</code></p> <p>My question is that how do i compute my desired transformation matrix. I think i know how i should setup the transformation matrix for the camera which would be like this</p> <p>T_world_tool = <code>0 0 0 x 0 0 0 y 0 0 0 z 0 0 0 1</code></p> <p>(Second question is regarding the rotation matrix, how do prescribe such that rotation in arbitrary as long the endpoint has the desired position in the world frame?)</p> <p>but what should t_tool_base entail? should it entail the transformation of its current state or the desired transformation, and if so how do i extract the desired t_tool_base transformation?...</p>
inverse kinematics for 6 jointed robots
<blockquote> <p><strong>disclaimer</strong>: This is more of a longer comment than an answer. I'm not sure what the problem is. There is a lot going on here (thankfully a lot of information is provided, too) and my attention shifted around from one thing to others.</p> </blockquote> <h2>my original thought</h2> <p>from the comments</p> <blockquote> <p>There's not much between 160 min value and 200 max. In either case (with or without the yaw controller), you are constantly hitting the limit as I see it in your diagram. The more often you limit the controller value, the more it is actually the limiter acting as a (nonlinear!) controller and not the controller itself. A limiter should kick in occasionally and not all the time. </p> </blockquote> <p>This is mostly based on how clearly the limit cuts down the PWM:</p> <p><a href="https://i.stack.imgur.com/nxKuW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nxKuW.png" alt="limited PWM"></a></p> <p>hence the following analogy came to my mind:</p> <blockquote> <p>Strap a belt tightly around your breast and run a marathon. </p> </blockquote> <p>And I concluded that:</p> <blockquote> <p>I'd try to use stronger motors so that your controller has enough air to do its job and not hit the limit so often.</p> </blockquote> <p>A stronger motor would not have run at full power and would not hit the limit so often. Hitting the limit is not bad in and of itself. Imagine your PWM were always either at the upper or the lower limit. You'd have a (nonlinear) bang-bang controller. Such a controller isn't bad in and of itself either, but it somewhat overrides the PID controller.</p> <p>It can happen that you can manipulate the PID parameters at will and nothing changes. One reason for that can be a limit that makes your changes irrelevant. New parameters might change the output from 260 to 280 or to 220, which will have no effect if they are all going to be limited to 200. This is why constantly hitting the limit can be a bad thing.</p> <p>With the yaw part, the phenomenon seems to be more pronounced, but not by much. I guess this is something to be aware of, especially when summing up controller values, but not the actual problem.</p> <p>Anyway, from the given diagrams I cannot tell why one would fly and the other would not.</p> <h2>500Hz -> 300Hz</h2> <p>Then another attempt was made and again one configuration works and the other does not.</p> <blockquote> <p>I have tried to run the control loop at a slower rate and noticed that there is a drastic change in each of the PID components such that it is not exceeding the limit too much and now its flying again.</p> </blockquote> <p>I tried to find differences between both diagrams. You are right, the limit is not exceeded as often as before in the flying operation. But what else is different?</p> <blockquote> <p>May I ask what happens to the P gain contribution to the output if the speed of the control loop is decreased (dT increased). Does it have any significant effect like on the I and D gain which is time dependent (D decreases I increases)</p> </blockquote> <p>Now I had a look at the separate controller error values. 
I combined them into the following image for comparison:</p> <p><a href="https://i.stack.imgur.com/Wk12g.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Wk12g.png" alt="error values"></a></p> <p>The first thing that I notice is that there's a steady state error and thus $I_{error}$ constantly grows aka the red is the integral of blue and blue is bigger than zero all the time. This is not a good thing.</p> <p>If there's a steady state error that the P gain cannot compensate, the integral part should grow and compensate the constant offset error.</p> <p><strong>The integral part should converge to some value.</strong> At least if it's compensating for a constant offset. I don't know what integral error from what controller this is, but an error that keeps growing is undesirable.</p> <p>But maybe there is something inherently different between fly and nofly? The diagrams are a <em>little</em> hard to read because the axis are of different scale. </p> <p>I tried to scale the images in gimp to fit and this is the result:</p> <p><a href="https://i.stack.imgur.com/sJSie.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sJSie.png" alt="flynoflynofly"></a></p> <p>The pixelated plots are those two when it did not fly. Their overall $P_{error}$ is bigger and thus the $I_{error}$ is steeper.</p> <p>But I don't think this is the problem. Look at the difference between the two $I_{error}$ of the cases when it flew. (the non-pixelated ones)</p> <p>I'd say there's no other significant difference between the two cases.</p> <h2>conclusion</h2> <p>There's a constant error in the system. The controller cannot reduce that error. It looks as if the control loop is not closed properly.</p> <p>Take a step back. Even in your first image, the height value oscillates and $I_{error}$ increases constantly. Inspect what's going on there.</p> <p>Then create the angle controller separately. Put the quadcopter onto a pivot point or pole and see if it can maintain its angle. The height controller should be turned off.</p> <p>When both controllers work on their own, try to combine both controllers in a weighted sum. $PID_{total} = 0.5 PID_{yaw} + 0.5PID_{height}$ Change the ratio as you see fit. </p>
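<p>To make the weighted-sum idea concrete, here is a minimal C++ sketch of how the mixing and a single clamping stage could look. The function and weight names are mine, not from your code; the controller outputs are assumed to be computed exactly as you already do, and the sign pattern mirrors the mixing in the question.</p>
<pre><code>#include &lt;algorithm&gt;  // std::min, std::max

// Hypothetical mixer: weights the height and yaw contributions, then clamps once.
void mixMotors(float pitchPID, float rollPID, float yawPID, float heightPID,
               float baseThrottle, float maxThrottle, float pwm[4])
{
    const float wYaw = 0.5f, wHeight = 0.5f;   // ratios to tune
    const float h = wHeight * heightPID;
    const float y = wYaw * yawPID;

    pwm[0] = baseThrottle + h + pitchPID + rollPID + y;   // front left
    pwm[1] = baseThrottle + h + pitchPID - rollPID - y;   // front right
    pwm[2] = baseThrottle + h - pitchPID + rollPID - y;   // back left
    pwm[3] = baseThrottle + h - pitchPID - rollPID + y;   // back right

    for (int i = 0; i &lt; 4; ++i)   // one clamp stage instead of fill-then-trim
        pwm[i] = std::min(maxThrottle, std::max(baseThrottle, pwm[i]));
}
</code></pre>
<p>Start with equal weights and lower them together if the sum still saturates; the point is that the ratio between the two controllers stays explicit instead of being decided by the limiter.</p>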
9629
2016-04-14T04:46:44.323
|quadcopter|control|pid|raspberry-pi|stability|
<p>Good day,</p> <p>I would like to ask why, when I add the yaw control to my PID controller for each motor, the quadcopter refuses to take off or maintain its altitude. I am currently using a cascaded PID controller for attitude hold using an accelerometer, a magnetometer and a gyroscope, and a 40Hz ultrasonic sensor for altitude hold. Since the scope is indoor use, I have done away with the barometer due to its +-12m error. </p> <p><strong>Resulting Response</strong></p> <p>Without yaw control, the plot below shows the response of the quadrotor. <a href="https://i.stack.imgur.com/4WSGy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4WSGy.png" alt="enter image description here"></a></p> <p>With yaw control, the plot below shows the response of the quadrotor. <a href="https://i.stack.imgur.com/ow2tB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ow2tB.png" alt="enter image description here"></a></p> <p><strong>Debugging</strong></p> <p>I found out that each of the PID outputs is so high that, when they are summed together, the result goes way over the PWM limit of 205 (full throttle).</p> <ol> <li><p>Without yawPID contribution: the limiter kicks in without damaging the desired response of the system, so the quad is still able to fly, albeit with oscillatory motion along the z axis (height). <a href="https://i.stack.imgur.com/6w8rv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6w8rv.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/xaSx3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xaSx3.png" alt="enter image description here"></a></p></li> <li><p>With yawPID contribution: the added yaw component pushes the sum of the PIDs way above the limit, so the limiter compensates for the excess too much, resulting in an overall lower PWM output for all motors; thus the quad never leaves the ground. <a href="https://i.stack.imgur.com/fli5D.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fli5D.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/9GvBM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9GvBM.png" alt="enter image description here"></a></p></li> </ol> <p>//Motor Front Left (1) float motorPwm1 = pitchPID + rollPID + yawPID + baseThrottle + baseCompensation;</p> <p>//Motor Front Right (2) float motorPwm2 = pitchPID - rollPID - yawPID + baseThrottle + baseCompensation; </p> <p>//Motor Back Left (3) float motorPwm3 = -pitchPID + rollPID - yawPID + baseThrottle + baseCompensation; </p> <p>//Motor Back Right (4) float motorPwm4 = -pitchPID - rollPID + yawPID + baseThrottle + baseCompensation;</p> <p><strong>Background</strong></p> <p>The PID parameters for pitch, yaw and roll were tuned individually, meaning the base throttle was set to the minimum value required for the quadcopter to be able to lift itself.</p> <p>The PID parameters for the altitude controller were tuned with the other controllers (pitch and roll) active.</p> <p><strong>Possible Problem</strong></p> <ol> <li>Limiter algorithm</li> </ol> <p>A possible problem is that the algorithm I used to limit the maximum and minimum throttle values may have caused the problem. The following code is used to maintain the ratio of the motor values instead of simply clipping them. The code works as a two-stage limiter.
In the 1st stage, if one motorPWM is less than the set baseThrottle, the algorithm increases each motor PWM value until none of them are below that. In the 2nd stage, if one motorPWM is more than the set maxThrottle, the algorithm decreases each motor PWM value until none of them are above that. </p> <pre><code>//Check if PWM is Saturating - This method is used to fill then trim the outputs of the pwm that gets fed into the gpioPWM() function to avoid exceeding the earlier set maximum throttle while maintaining the ratios of the 4 motor throttles. float motorPWM[4] = {motorPwm1, motorPwm2, motorPwm3, motorPwm4}; float minPWM = motorPWM[0]; int i; for(i=0; i&lt;4; i++){ // Get minimum PWM for filling if(motorPWM[i]&lt;minPWM){ minPWM=motorPWM[i]; } } cout &lt;&lt; " MinPWM = " &lt;&lt; minPWM &lt;&lt; endl; if(minPWM&lt;baseThrottle){ float fillPwm=baseThrottle-minPWM; //Get deficiency and use this to fill all 4 motors cout &lt;&lt; " Fill = " &lt;&lt; fillPwm &lt;&lt; endl; motorPwm1=motorPwm1+fillPwm; motorPwm2=motorPwm2+fillPwm; motorPwm3=motorPwm3+fillPwm; motorPwm4=motorPwm4+fillPwm; } float motorPWM2[4] = {motorPwm1, motorPwm2, motorPwm3, motorPwm4}; float maxPWM = motorPWM2[0]; for(i=0; i&lt;4; i++){ // Get max PWM for trimming if(motorPWM2[i]&gt;maxPWM){ maxPWM=motorPWM2[i]; } } cout &lt;&lt; " MaxPWM = " &lt;&lt; maxPWM &lt;&lt; endl; if(maxPWM&gt;maxThrottle){ float trimPwm=maxPWM-maxThrottle; //Get excess and use this to trim all 4 motors cout &lt;&lt; " Trim = " &lt;&lt; trimPwm &lt;&lt; endl; motorPwm1=motorPwm1-trimPwm; motorPwm2=motorPwm2-trimPwm; motorPwm3=motorPwm3-trimPwm; motorPwm4=motorPwm4-trimPwm; } </code></pre> <p>This was obtained from pixhawk. However the difference is that they employ only upper bound compensation limiting, while mine also performs lower bound compensation limiting which may cause more saturation once it reaches the second stage.</p> <p><a href="https://i.stack.imgur.com/Rt7rZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Rt7rZ.png" alt="enter image description here"></a> From:<a href="https://pixhawk.org/dev/mixing" rel="nofollow noreferrer">https://pixhawk.org/dev/mixing</a></p> <ol start="2"> <li>Gains are set too high.</li> </ol> <p>It is also possible that I've set my P gains too high thus exceeding the max RPM limit of the motors causing the Limiter algorithm to overcompensate.</p> <p><strong>Current PID Settings:</strong></p> <p>The minimum motor value for the quad to lift itself is 160 while the maximum limit is 200 from the PWM time high of 2000ms </p> <ol> <li>Pitch (Cascaded P-PID controller) Rate P = 0.07 Rate I = 0.03 Rate D = 0.0001 Stabilize P = 2</li> <li>Roll (Cascaded P-PID controller) Rate P = 0.09 Rate I = 0.03 Rate D = 0.0001 Stabilize P = 2</li> <li>Yaw (Cascaded P-PID controller) Rate P = 0.09 Rate I = 0.03 Rate D = 0.0001 Stabilize P = 2</li> <li>Hover (Single loop PD controller) P = 0.7 D = 35</li> </ol> <p><strong>Possible Solution</strong></p> <p>I think I have set the PID parameters particularly the P or D gain too high that the computed sum of the outputs of the controller is beyond the limit. Maybe retuning them would help.</p> <blockquote> <p>I would just like to ask if anyone has encountered this problem or if you have any suggestions. 
Thank you :) </p> </blockquote> <h2><strong>EDIT</strong></h2> <p>I have added the plots of the response when the control loop is fast (500Hz) and Slow (300Hz)</p> <blockquote> <p>500Hz: Does not fly <a href="https://i.stack.imgur.com/OzU9n.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OzU9n.png" alt="enter image description here"></a></p> <p>300Hz: Flies <a href="https://i.stack.imgur.com/5MZ6j.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5MZ6j.png" alt="enter image description here"></a></p> </blockquote>
Quadcopter refuses to fly when the Yaw PID component is added
<p>I encountered this problem myself when making an extended Kalman filter for a quadrotor.</p> <p>You have to check if the estimate goes above or below +/- pi, and then correct it if it does.</p> <p>You can do this with a simple <code>if</code> statement:</p> <pre><code>if (angle &gt; pi) angle = angle - 2*pi; else if (angle &lt; -pi) angle = angle + 2*pi; </code></pre> <p>You could also wrap a <code>while</code> loop around that, in case the value ever becomes larger than <code>2*pi</code> for some reason, due to spikes etc. This looks like so:</p> <pre><code>while (abs(angle) &gt; pi) { if (angle &gt; pi) angle = angle - 2*pi; else if (angle &lt; -pi) angle = angle + 2*pi; } </code></pre> <p>Remember to apply this check after your prediction, as well as to the difference between your measured compass angle and your prediction, which you use to calculate the corrected estimate.</p> <p>After that, also check the corrected estimate in the same way; that should fix your problem.</p>
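<p>If you prefer a branch-free version, wrapping via the arctangent of sine and cosine gives the same result in a single step. A minimal C++ sketch (the function name is mine):</p>
<pre><code>#include &lt;cmath&gt;

// Wraps any angle in radians into (-pi, pi], no loop required.
double wrapToPi(double angle)
{
    return std::atan2(std::sin(angle), std::cos(angle));
}
</code></pre>
<p>Apply it to the predicted heading and to the innovation (measured compass angle minus prediction), just like the checks above.</p>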
9636
2016-04-14T23:42:47.753
|mobile-robot|kalman-filter|compass|
<p>I have trouble estimating the heading when close to the "pivot" point of the compass, and could use some input on how to solve it. I have set up my angles to be from 0-360 degrees during the testing but will be using radians (-pi, pi) from now on.</p> <p>The setup is a differential robot with IMU, wheel encoders and a magnetic compass. </p> <p>A complementary filter is used for fusing gyroZ and odo measurements to estimate the heading, and then correct it with a Kalman filter using the magnetic compass.</p> <p>My problem occurs when the robot heading is close to -pi/pi .</p> <p>The estimated heading is useless even though the robot is not even moving.</p> <p>I am thinking this must be a very common problem and probably has a better solution than what I came up with, which was either re-initializing the integrator when crossing zero, adding 180 degrees each time the error is larger, or just ignoring the compass if the error is too large...</p> <p>It's my first Kalman filter so I may have made a poor implementation if this is not a common issue...</p> <p>Edit: trudesagen's solution solved my problem.</p>
Regarding Kalman filter and estimating heading with magnetic compass
<p>According to the <a href="https://www.parallax.com/sites/default/files/downloads/900-00008-Continuous-Rotation-Servo-Documentation-v2.2.pdf" rel="nofollow noreferrer">data sheet</a> and example code, this continuous rotation servo takes an input of 1.3ms full speed clockwise to 1.7ms full speed counter-clockwise, with 1.5ms in the middle being stopped.</p> <p>As <a href="https://robotics.stackexchange.com/a/9638/37">jwpat suggests</a> you are better off using <code>Servo.writeMicroseconds(1500)</code> to stop your servo, since this guarantees that you are sending the correct value, even if you accidentally get the <code>min</code> and <code>max</code> calibration values wrong in your <code>Servo</code> object. </p>
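<p>A minimal Arduino sketch of the idea, assuming the servo signal wire is on pin 9 (adjust to your wiring):</p>
<pre><code>#include &lt;Servo.h&gt;

Servo z;   // continuous rotation servo

void setup() {
  z.attach(9);                // signal pin is an assumption
  z.writeMicroseconds(1500);  // 1.5 ms pulse = stop for this servo
}

void loop() {
  // ~1300 = full speed one way, ~1700 = full speed the other, 1500 = stop
}
</code></pre>
<p>If the servo still creeps slowly at 1500, its centre point may need calibrating; the linked data sheet describes adjusting the servo until it stops while a 1.5 ms signal is applied.</p>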
9637
2016-04-15T00:15:49.927
|arduino|
<p>I'm using Processing to send strings to the Arduino, using functions like </p> <pre><code>else { int u=90; port.write(u+"z"); } </code></pre> <p>on the Processing side, and on the Arduino side I'm using calls like </p> <pre><code> case 'z': z.write(v); v = 0; break; case 'L': z.write(0); //v = 0; break; } </code></pre> <p>yet I can't get the servo to stop at all. How do I make it shut off?</p> <p>If it were a regular servo I wouldn't even ask, because that's easy. Here I write 0 or 90 or LOW and nothing happens; it just keeps spinning in one direction. When it meets one of the conditions in my statements it switches polarity/direction, and that's good - I want that - but I made this function to make it stop and it is not doing so. Does anyone have any ideas?</p> <p>I am using a <a href="https://www.parallax.com/sites/default/files/downloads/900-00008-Continuous-Rotation-Servo-Documentation-v2.2.pdf" rel="nofollow">Parallax Continuous Rotation Servo</a>.</p>
How do we write a STOP to a continuous Servo?
<p>Firstly, it's important to consider the worst-case current draw of your motors, since this determines how fast your battery will drain. The size of the battery is determined by its capacity and cell count. Bigger batteries, like a laptop's 4-cell 48 Wh pack, can run for up to an hour under intensive load, which brings me to another factor: how long you want the quad to run. Laptop batteries run long, but they are also big, so the battery size has to be chosen to suit the size of your quad. And since the battery adds weight, the motors will draw more current; so, in line with the first point, determine the peak current draw of the entire system. I single out the motors because they draw the most. You can find more info on how to choose a LiPo battery <a href="http://www.ultimaterc.com/forums/showthread.php?t=115264" rel="nofollow">here</a></p>
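<p>As a rough, illustrative calculation (all numbers made up): if the four motors draw about 10 A each at hover and you pick a 3S 5000 mAh pack, then using only 80% of the capacity gives roughly $$t \approx \frac{0.8 \times 5\ \mbox{Ah}}{40\ \mbox{A}} \times 60 \approx 6\ \mbox{minutes}$$ of hover, and the pack must sustain a 40 A / 5 Ah = 8C continuous discharge, so a 20C-25C rated pack leaves a comfortable margin. Swap in the actual current draw of your motors at your all-up weight to size the battery properly.</p>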
9647
2016-04-16T17:48:56.013
|quadcopter|arduino|raspberry-pi|battery|
<p>I will have this configuration: </p> <ul> <li>A2212 Brushless Motor 1000KV - 4 each</li> <li>ECS - 30A Electronic Speed Control (ESC) - 4 each </li> <li>Propeller - 1045 Propeller CW &amp; CCW Pair 10 inch * 4.5 pitch</li> <li>Arduino Mega - 2560 board</li> <li>Raspberry Pi 3 </li> <li>Open pilot CC3D flight controller </li> </ul> <p>I want to know what rating Li-Po battery should I get for this configuration. The reason behind my asking here is because a simple google search is not able to satisfy me with an explanation...</p> <p>Also, my weight will be 1.5 kg for the quadcopter, so I need a stable current discharge.</p> <p>This is my first quadcopter, I am a Computer Science guy, so I have little knowledge of electronics, I'm learning, but need help...</p>
What rating Li-Po battery should I get for this configuration?
<p>If I told you my truck has 200 horsepower, could you tell me the cargo weight capacity? Not really. There are more factors than just the motor specs that play into your assessment. You might be able to ball-park it, but be prepared to be off by a factor of 2 or more. Here's why...</p> <ol> <li><p>A motor needs current to produce torque. The available power supply current comes into play. </p></li> <li><p>Total power available depends on both current and voltage. If you run your motor at a higher voltage (without saturating the back iron) you will have more power available for a given current. </p></li> <li><p>Every motor has a torque <em>vs</em> speed inverse relationship. How fast you want the device to go affects how much torque each motor can generate. </p></li> <li><p>The gear trains (or other transmission) and frictional losses also affect the capability to operate at given torque and speed. </p></li> </ol> <p>You can use the ratings of the motors to get you in the ballpark. They can tell you the maximum power available to be transmitted to the device. However, the other factors play more than a nominal role. </p>
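<p>As an illustration of how these factors combine (the numbers are invented): suppose each gearmotor delivers 0.5 N·m at the wheel after the gearbox, the wheel radius is 0.05 m, and two wheels are driven. The available drawbar force is $$F = \frac{2 \times 0.5\ \mbox{N·m}}{0.05\ \mbox{m}} = 20\ \mbox{N},$$ so on a 10° ramp the total mass (chassis plus payload) is limited to roughly $$m \leq \frac{F}{g\sin 10^\circ} = \frac{20}{9.81 \times 0.17} \approx 12\ \mbox{kg},$$ and that is before allowing for rolling friction, acceleration, or supply current limits. This is the kind of ballpark you can do from motor specs, but expect the real figure to be noticeably lower.</p>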
9654
2016-04-18T20:39:26.660
|mobile-robot|
<p>I'm doing a mobile robot project with robotic arms, I wanted to buy a chassis for my robot that can carry enough weight, but many websites don't give definitive answers about maximum payload.</p> <p>Is there is a way to figure this out just by knowing details about the motors?</p>
How to know the payload of the chassis from its motors?
<p>The Lie Bracket is 6x1, not 6x2, so both terms should be 1x1. </p>
9657
2016-04-19T03:50:25.497
|dynamics|matlab|
<p>I'm stuck on equation 4.30 of page 176 in<br> <a href="http://www.cds.caltech.edu/~murray/books/MLS/pdf/mls94-complete.pdf" rel="nofollow">http://www.cds.caltech.edu/~murray/books/MLS/pdf/mls94-complete.pdf</a></p> <p>This equation:<br> $\frac {\partial M_{ij}} {\partial \theta_k} = \sum_{l=\max(i,j)}^n \Bigl( [A_{ki} \xi_i, \xi_k]^T A_{lk}^T {\cal M}_l' A_{lj} \xi_j + \xi_i^T A_{li}^T {\cal M}_l' A_{lk} [A_{kj} \xi_j, \xi_k] \Bigr)$ </p> <p>seems impossible to process because it requires adding a 2x1 to a 1x2 matrix. going by ROWSxCOLUMNS notation. Matrices M and A are 6x6 and $\xi$ is a 6x1, so how does this addition statement fit the rules of matrix addition? This must be my mistake, I just don't see how.</p>
Dynamic Model of a Manipulator
<p>Given a desired pose:</p> <p>$$T^{desired} = \begin{bmatrix} s_x&amp;n_x&amp;a_x&amp;P_x\\ s_y&amp;n_y&amp;a_y&amp;P_y\\ s_z&amp;n_z&amp;a_z&amp;P_z\\ 0&amp; 0&amp; 0&amp; 1 \end{bmatrix}$$</p> <p>we know that this is given as follows:</p> <p>$$T^{desired} = {}^{0}T_{1}{}^{1}T_{2}{}^{2}T_{3}{}^{3}T_{4}{}^{4}T_{5}{}^{5}T_{6}$$</p> <p>where each ${}^{j-1}T_{j}$ is the transformation of the frame attached to joint $j$ with respect to joint $j-1$.</p> <p>Now the idea is to pre-multiply the equation with the inverse transformations, one at a time, and deduce the joint angles as we go along. As an example, let's start with the first joint (i.e. $q_1$):</p> <p>$${}^{1}T_{0}T^{desired} = {}^{1}T_{2}{}^{2}T_{3}{}^{3}T_{4}{}^{4}T_{5}{}^{5}T_{6}$$</p> <p>When you do this you find the following:</p> <p>$$-\cos(q_5) = -a_x \sin(q_1) + a_y \cos(q_1)$$</p> <p>and</p> <p>$$-d_4 - d_6\cos(q_5) = -p_x\sin(q_1)+p_y\cos(q_1)$$</p> <p>By manipulating those two equations you end up with a quadratic equation in $\sin(q_1)$ (or $\cos(q_5)$) which gives two solutions; only one of them satisfies the second equation above, which gives the other parameter (i.e. $\cos(q_1)$ or $\sin(q_1)$). You then get $q_1$ as</p> <p>$$q_1 = atan2(\sin(q_1),\cos(q_1))$$</p> <p>Then you can get $q_5$ and proceed in the same manner to get the rest. If you want, I can share some documents that explain the procedure in more detail, and also some Matlab m-files that help you work through the procedure symbolically. </p> <p>Small remark: I am not using exactly the Denavit-Hartenberg parameters but slightly different ones (i.e. Khalil-Kleinfinger), which are also explained in the document.</p>
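<p>One way to see where a quadratic (and hence the two candidate solutions) arises, sketched from the two equations above (exact signs depend on your DH convention): substitute $\cos(q_5) = a_x\sin(q_1) - a_y\cos(q_1)$ from the first equation into the second to get a single equation in $q_1$,</p> <p>$$(p_x - d_6 a_x)\sin(q_1) + (d_6 a_y - p_y)\cos(q_1) = d_4,$$</p> <p>which has the form $A\sin(q_1)+B\cos(q_1)=C$. Using the half-angle substitution $u=\tan(q_1/2)$, so that $\sin(q_1)=\frac{2u}{1+u^2}$ and $\cos(q_1)=\frac{1-u^2}{1+u^2}$, this becomes the quadratic $$(B+C)\,u^2 - 2A\,u + (C-B) = 0,$$ whose two roots are the two candidate values of $q_1$ mentioned above.</p>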
9662
2016-04-19T22:40:40.373
|robotic-arm|inverse-kinematics|industrial-robot|c++|
<p>People have recommended that I implement an analytic inverse kinematics solver, so that I am not forced to take only the least-squares solution, but instead can obtain a set of solutions in a local area near the one I desire. </p> <p>I can't seem to implement it correctly. I mean, how much does it differ from the least-squares inverse kinematics which I have implemented here?</p> <pre><code>Eigen::MatrixXd jq(device_.get()-&gt;baseJend(state).e().cols(),device_.get()-&gt;baseJend(state).e().rows()); jq = device_.get()-&gt;baseJend(state).e(); //Extract J(q) directly from robot //Right pseudoinverse A = J^T (J J^T)^-1, i.e. the minimum-norm least-squares solver Eigen::MatrixXd A (6,6); A = jq.transpose()*(jq*jq.transpose()).inverse(); Eigen::VectorXd du(6); du(0) = 0.1 - t_tool_base.P().e()[0]; du(1) = 0 - t_tool_base.P().e()[1]; du(2) = 0 - t_tool_base.P().e()[2]; du(3) = 0; // Should these be set to something if I don't want the tool position to rotate? du(4) = 0; du(5) = 0; ROS_ERROR("What you want!"); Eigen::VectorXd q(6); q = A*du; cout &lt;&lt; q &lt;&lt; endl; // Least-squares solution - I want a vector of solutions. </code></pre> <p>I want a vector of solutions - how do I get that?</p> <p>The question is related to this <a href="https://robotics.stackexchange.com/questions/9672/how-do-i-construct-i-a-transformation-matrix-given-only-x-y-z-of-tool-position">https://robotics.stackexchange.com/questions/9672/how-do-i-construct-i-a-transformation-matrix-given-only-x-y-z-of-tool-position</a></p> <p>The robot being used is a UR5 - <a href="https://smartech.gatech.edu/bitstream/handle/1853/50782/ur_kin_tech_report_1.pdf" rel="nofollow noreferrer">https://smartech.gatech.edu/bitstream/handle/1853/50782/ur_kin_tech_report_1.pdf</a></p>
Implementing an analytic version of an inverse kinematics solver
<p>A few guidelines to help you search:</p> <p>Max displacement: +/- 5mm. Frequency: 60Hz = 377 rad/s.</p> <p>Now, some math:</p> <p>$$ x = 0.005 \sin{(377 t)} \\ v = (377)(0.005) \cos{(377 t)} \\ a = -(377^2)(0.005) \sin{(377 t)} \\ $$</p> <p>Then, the max values are wherever the particular trig function is equal to 1, so:</p> <p>$$ x_{\mbox{max}} = 0.005 \mbox{m}\\ v_{\mbox{max}} = 1.9 \mbox{m/s}\\ a_{\mbox{max}} = 710 \mbox{m/s}^2 \approx 72 \mbox{g}\\ $$</p> <p>Now, $F = ma$, so with an assumed moving mass of 0.3 kg the peak force you are applying is:</p> <p>$$ F = ma \\ F = (0.3)(710) \\ F = 213 \mbox{N} \\ $$</p> <p>Peak power is force times speed. $$ P = (213)(1.9) \\ P = 405 \mbox{W} \\ $$</p> <p>Per your guidelines, you want (I'm assuming) a cylinder that's 0.05m long and 0.01m in diameter. This means that the volume is (approximately)</p> <p>$$ V = Ah \\ V = \pi r^2 h \\ V = \pi (0.005)^2 (0.05) \\ V = 0.00000393 \mbox{m}^3 \\ V = 0.00393 \mbox{liters} \\ V = 3927 \mbox{mm}^3 \\ $$</p> <p>Now, you can try looking at <a href="https://en.wikipedia.org/wiki/Power_density" rel="nofollow">power density</a> and see that you want 405 W of peak power in a space of 0.00393 liters, or a power density of about $405/0.00393 \approx \boxed{103{,}000 \mbox{ W/l} \approx 103 \mbox{ kW/l}}$, which is roughly the <a href="http://www.dukeengines.com/advantages/power-density/" rel="nofollow">same power density as a Ferrari engine</a>.</p> <p>I'm not saying it's not possible, but good luck.</p> <p>If you <em>are</em> a PhD student at a university, you should consider asking your committee members to guide you to another grad student for assistance with this. </p> <p>Typically I would vote to close an open-ended design/shopping question like this, but I figured I'd go through the steps to show that what you want isn't really attainable. </p>
9669
2016-04-20T17:15:30.637
|actuator|motion|
<p>I need to find out if there is a way to get at least 60 Hz of linear motion with at least 5 mm of stroke. I intend to make a linear persistence-of-vision device (not a rotating one). It must be as small and light as possible (maybe 50 mm long and 10-15 mm in diameter, or thereabouts, and less than 500 grams). The load will be around 50 grams. There are voice coils, but they are very expensive; can I use solenoids, for instance, or what do you recommend? </p> <p>Thanks </p>
What type of actuator should I use?
<p>One could break down your problem to three areas: memory, speed and I/O. </p> <p>Memory is pretty straightforward, but you have to estimate it yourself. Things to consider: Are you doing a map? Are you saving past state? If yes how long/much? etc.</p> <p>Speed is a bit different: Assume you are building a quadcopter: If you have high latency in the controller's feedback loops the thing will not even fly. If you have a ground vehicle on the other hand you could simply drive slower. </p> <p>Consider how you want to structure the program's control flow: Do you poll for new data all the time or do it interrupt-driven? How fast should your control loop run and what do you need to calculate during it? How many floating point operations do you need? How large is your map for obstacle avoidance? </p> <p>Lastly what peripherals, sensors, debugging tools etc. do you want to connect and does the uC have enough connections to allow that? </p> <p>If this is going to be a rather serious project, I would shy away from arduino, because if you need to have more control over the program and write low-level code things are likely to get messy. The learning curve to develop with Atmel Controllers directly is not that high. </p> <p>With the meager information you provided I feel like the ATmega32 will be suitable, but if you need a fast control loop consider ARM with a floating point unit. </p>
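<p>To put a number on the speed question (the figures are illustrative): an ATmega32 at 16 MHz running a 500 Hz control loop has $$\frac{16\times10^{6}\ \mbox{cycles/s}}{500\ \mbox{loops/s}} = 32{,}000\ \mbox{cycles per loop},$$ and without a hardware FPU each single-precision multiply costs on the order of a hundred cycles in software, so a Kalman filter with a few hundred floating-point operations per iteration already consumes a large share of that budget. That back-of-the-envelope check is usually enough to decide between an 8-bit AVR and an ARM with an FPU.</p>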
9675
2016-04-21T01:15:21.443
|mobile-robot|arduino|localization|microcontroller|
<p>I am going to build an autonomous robot with Kalman-filter for localization integrated from Lidar, encoder, IMU and GPS. I will also add obstacle avoidance while moving to the required position.</p> <p>Is the ATmega32 8-bit suitable for that (or Arduino Mega) or do I have to use avr32, ARM, or PIC32 and which is better?</p>
Suitable uC for autonomous robot
<p>It's hard to tell what's going wrong just from you describing the error (it might be better to paste the message directly). It sounds like a problem with your ROS_PACKAGE_PATH environment variable. (I'm assuming you're following <a href="http://wiki.ros.org/ROS/Tutorials/WritingPublisherSubscriber%28python%29" rel="nofollow noreferrer">this tutorial</a>?) I remember having had this problem before, so some possible things to try:</p> <ul> <li>You may need to just type <code>source devel/setup.bash</code> into your terminal.</li> <li><p>Make sure your <code>.bashrc</code> file is properly set. Refer to <a href="https://stackoverflow.com/questions/27053334/ros-package-not-found-after-catkin-make">this question.</a> (If I remember correctly, if you don't want to type in <code>source devel/setup.bash</code> every time you open up a new terminal, you can add it to your <code>.bashrc</code> file.)</p></li> <li><p>Check that your CMAKE_PREFIX_PATH is right.</p></li> </ul> <p>Also, try entering <code>rospack find [package_name]</code> into the terminal to find the path to the package.</p>
9680
2016-04-21T15:49:32.687
|ros|
<p>I created a package in my catkin workspace and put a publisher.py node inside the src directory of the package, which worked fine. Then I added another node, subscriber.py, and used catkin_make to build. Now when I try to run any of the nodes or find the package, I get the above error. Am I missing any step?</p> <p>Thanks.</p>
Getting “rospack package not found error” in ROS
<p>The problem appears to be that your drive motors are <a href="https://en.wikipedia.org/wiki/Wheelspin" rel="nofollow noreferrer">providing too much torque</a>.</p> <p>In other words, there isn't enough friction between the wheels and the floor to counteract the force that the motors are exerting on the wheels. To determine whether this is actually the problem, try adding weight to the vehicle (directly above the wheels, if possible); this should balance the forces by increasing the static friction.</p> <p>Of course, adding weight to your vehicle is not a good long term solution. The proper fix would be to configure your motor controllers to <a href="https://physics.stackexchange.com/q/38707">limit the torque</a> produced by the motors. This might also be expressed as a current limit, depending on your setup.</p>
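<p>A rough way to pick that limit, using the numbers from the question (the friction coefficient is a guess): with a 25 kg robot on four wheels of 0.08 m radius and an assumed effective friction coefficient of about 0.5 for mecanum rollers on a smooth floor, the torque each wheel can transmit before slipping is roughly $$\tau_{max} \approx \mu \, \frac{m g}{4} \, r = 0.5 \times \frac{25 \times 9.81}{4} \times 0.08 \approx 2.5\ \mbox{N·m}.$$ If the motor (after gearing) can exceed that, configure the controller's torque or current limit to stay below it.</p>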
9683
2016-04-21T21:04:38.310
|design|wheel|
<p>I am part of my college robotics team, which is preparing for Robocon 2017.</p> <p>We used Mecanum wheels in the last Robocon competition, but we faced huge slip and vibration. I have looked at the kinematic and dynamic formulas and other material about Mecanum wheels, but still can't reach a conclusion about my problem.</p> <p><a href="https://drive.google.com/file/d/0BwdLwLAVvVLGdFBLS3RSdlN4dXc/view" rel="nofollow">Video of the problem</a></p> <p>The robot is around 25 kg and the Mecanum wheel diameter is about 16 cm with 15 rollers (single type). Please help me understand why this happened!</p> <p>Also, please suggest what to do now - should I design a new Mecanum wheel or buy one off the shelf? </p> <p>If I should design one, what parameters should I consider, and how should I design it in CAD software like SolidWorks? And should I then have it 3D printed?</p> <p>If I should buy one directly, where should I buy it?</p>
Choosing the right Mecanum wheel
<p>You always have both position and orientation. In your case the orientation can be chosen by you to be constant (e.g. 0, 0, 0), in which case you will not be able to track the object if it is, for example, above the camera.</p> <p>The camera measures the relative position of the object (relative to the camera). You subtract the distance on the z axis that you want to keep and you add the difference to the current position of the robot. Moreover, you can calculate the angle by which the end-effector has to rotate to face the object. E.g. if the camera measures dx = 0.5, dy = 0 and dz = 0.5, then you would add the dx to the current TCP position (since dz = 0.5, which is the distance you want to keep). After that you can calculate the change in orientation needed to face the object from</p> <p>$RotY = atan2(dx,dz) = atan2(0.5,0.5) = 45 deg$</p> <p>You add this (also calculate RotX and RotZ if they are not defined as "tool axes") to the current orientation of the end-effector and give it as the reference value (together with the changed position) for the next motion.</p>
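<p>A minimal C++ sketch of that update (the names and the Pose struct are mine, not from any particular robot library; it also assumes the measured dx, dy, dz are expressed in, or roughly aligned with, the frame in which you command the robot):</p>
<pre><code>#include &lt;cmath&gt;

struct Pose { double x, y, z, rotX, rotY, rotZ; };  // position (m) + orientation (rad)

// Camera measurement (dx, dy, dz) of the object relative to the camera,
// desired standoff along z (e.g. 0.5 m), current end-effector pose.
Pose nextTarget(const Pose &amp;current, double dx, double dy, double dz, double standoff)
{
    Pose target = current;
    target.x += dx;                       // close the lateral offsets
    target.y += dy;
    target.z += dz - standoff;            // keep the standoff distance on z
    target.rotY += std::atan2(dx, dz);    // e.g. atan2(0.5, 0.5) = 45 deg, in radians
    target.rotX -= std::atan2(dy, dz);    // sign depends on your frame convention
    return target;
}
</code></pre>
<p>The returned pose is then fed to the inverse kinematics as the reference for the next motion, as described above.</p>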
9686
2016-04-22T12:56:10.390
|robotic-arm|inverse-kinematics|stereo-vision|
<p>I think I have a simple problem, but I can't get my head around how to resolve it...</p> <p>My setup looks like this: </p> <p><a href="https://i.stack.imgur.com/fMAEU.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fMAEUm.jpg" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/BbdFh.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BbdFhm.jpg" alt="enter image description here"></a></p> <p>The grey box on the end effector is supposed to be a camera, which measures dx, dy, dz between the object and the camera. These are used to position the camera such that dz between the object and the camera is equal to 0.5, and dx = dy = 0. </p> <p>I know that, using inverse kinematics, I can determine the Q which positions it according to a given rotation and position, but what if I only provide a position?</p> <p>How do I extract all Q that make dx = dy = 0 and dz = 0.5, while keeping the object in sight at all times?</p> <p>An example: if an object were placed just above the base (see the second image), it should find all possible configurations, which in this case would consist of the arm rotating around the object while the camera keeps the object in sight...</p> <p><strong>Update</strong></p> <p>I just realized a possible solution would be to create a sphere with the object at its centre and a radius of dz, and then use this sphere to extract all pairs of rotations and positions... But how would one arrive at such a solution?</p>
Performing inverse kinematics based on a displacement of the end effector?
<p>You could use Player/Stage or Gazebo.</p>
9711
2016-04-26T05:54:46.687
|navigation|routing|
<p>I'm looking for a testbed (simulator or web-based interface that lets me to have control on a robot) for testing different routing and navigation algorithms. Is there such a system on the web?</p>
Testbed for testing navigation algorithms
<p>This is how I would go about simulating a nonlinear ODE in Matlab. As I mentioned in a (now-deleted) comment on your question, I typically work with <em>linear</em> ODE's in my line of work, which means that I usually use the (awesome) functions in the Control System Toolbox. </p> <p>Now, I'll start by saying that you haven't given any definitions of what the matrices are in your equation, so I can't give you a concrete example of how to solve your specific example. If you <em>update your question</em> to give numbers for everything then I can modify this answer to solve your problem. That said, you give:</p> <p>$$ M(q)\ddot{q}+C(q,\dot{q})\dot{q}+G(q)=Q $$</p> <p>First I would say that this can be re-written as:</p> <p>$$ \ddot{q} = \frac{1}{M(q)}\left(Q-C(q,\dot{q})\dot{q} -G(q)\right) \\ $$</p> <p>So, for an example for you, I chose an RLC circuit, which takes the form:</p> <p>$$ \begin{array} .\dot{I} = \frac{1}{L}\left(V_{\mbox{in}} - V_C - IR\right) \\ \dot{V} = \frac{1}{C}I \end{array} $$</p> <p>Typically your input signal would be a smooth function of time. Here you're looking for a bang-bang signal, which is akin to a light switch. At some time, the input signal goes from nothing <em>immediately</em> to some value, then later from that value <em>immediately</em> back to nothing. </p> <p>So, where typically you would use an <code>interpolate</code> command to get values defined sample time increments, here you <strong>don't</strong> want those commands interpolated. You don't want a ramp, you want the signal to immediately shift. </p> <p>So this question really is two parts:</p> <ol> <li>How do I pass parameters to an ODE45 (or 23 or whatever else) function in Matlab, and </li> <li>How do I define a step change in an input signal for an ODE45 function?</li> </ol> <p>The answers are (examples to follow)</p> <ol> <li>Define your ODE function as a function in its own script, such that the first line of the script is something like <code>function [dq] = AlFageraODE(t,x,parameter1,parameter2,...,parameterN)</code>. Then, when you want to solve the ODE, call Matlab's built-in ODE solver as follows: <code>[t,y] = ode45(@AlFageraODE,[t0,t1],[y0,y1],</code><strong>[ ]</strong><code>,parameter1,parameter2,...,parameterN);</code>. The square brackets in bold there need to be included because that is where you can pass specific options to the ODE solver. If you don't want to pass anything specific and are okay with the default settings, you still need to put something, so put an empty array - this is done with an empty array <code>[]</code>. After that you can put in parameters that will be passed to your custom function. </li> <li>To get a true step function, you need to split the simulation into three distinct sets - before the step, during the step, and after the step. Anything else will result in the need to interpolate the input command. The last entry in the outputs will be the initial conditions for the next segment of the simulation. </li> </ol> <p>Below, I've written the custom ODE function, and then below that is the script used to solve the custom ODE. Note that the R, L, and C values are set in the calling script, not in the custom ODE. 
This is because those values are <em>passed</em> in to the custom ODE, along with what the applied voltage should be during that particular segment.</p> <p>The examples:</p> <ol> <li>The custom ODE function (again, this simulates an RLC circuit, but you can modify it for any custom ODE needing parameters passed in.)</li> </ol> <p>.</p> <pre><code>function dx = RoboticsODE(t,x,Vin,R,L,C) dx = zeros(2,1); I = x(1); V = x(2); di_dt = (1/L)*(Vin - V - I*R); dv_dt = (1/C)*I; dx(1) = di_dt; dx(2) = dv_dt; </code></pre> <ol start="2"> <li>The script that solves that ODE for a bang-bang signal. </li> </ol> <p>.</p> <pre><code>%% Clear/close everything clear; close all; clc; %% Simulation Configuration v0 = 0; % Initial *output* voltage i0 = 0; % Initial circuit current C = 47*(1/1000); % Capacitance, F R = 0.1; % Resistance, Ohm L = 22*(1/1000); % Inductance, H appliedVoltages = [0,1,0]; % The applied voltages at t0, bangOn, bangOff tStart = 0; % t0 bangStart = 1; % bangOn time bangWidth = 1; % bangOff time = bangOn time + bangWindow endWindow = 5; % endWindow is how long to "watch" the simulation after % the bang-bang signal goes "off". %% Output Initialization outputTime = zeros(0,1); outputVoltage = zeros(0,1); outputCurrent = zeros(0,1); inputVoltage = zeros(0,1); %% Dependent Configuration currentValues = [i0;v0]; samplePoints = cumsum([tStart,bangStart,bangWidth,endWindow]); % A note on the above - I use the cumulative sum (cumsum) because I defined % each point as the number of seconds *after* the previous event. If you % defined absolute time points then you'd just use those time points % directly with no cumulative sum. nSegments = numel(samplePoints)-1; %% Simulation for currentSegment = 1:nSegments % Setup the simulation by getting the current time window and "intial" % conditions. Vt = appliedVoltages(currentSegment); t0 = samplePoints(currentSegment); t1 = samplePoints(currentSegment+1); sampleTime = [t0;t1]; % Run the simulation by solving the ODE for this particular segment. [intermediateTime,intermediateOutput] = ... ode45(@RoboticsODE,sampleTime,currentValues,[],Vt,R,L,C); % Assign outputs nOutputPoints = numel(intermediateTime); outputTime(end+1:end+nOutputPoints) = intermediateTime; outputCurrent(end+1:end+nOutputPoints) = intermediateOutput(:,1); outputVoltage(end+1:end+nOutputPoints) = intermediateOutput(:,2); inputVoltage(end+1:end+nOutputPoints) = Vt*ones(nOutputPoints,1); % Setup the next simulation by setting the "initial" conditions for % that simulation equal to the ending conditions for the current % simulation. currentValues(1) = outputCurrent(end); currentValues(2) = outputVoltage(end); end %% Output Plot plot(outputTime,inputVoltage); hold on; plot(outputTime,outputVoltage); title('RLC Circuit with Step Input to ODE45'); xlabel('Time (s)'); ylabel('Voltage (V)'); legend('Input Voltage','Output Voltage'); </code></pre> <ol start="3"> <li>The plot of the output.</li> </ol> <p><a href="https://i.stack.imgur.com/IhuDv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IhuDv.png" alt="ODE45 Example Output"></a></p> <p>Finally, as I mentioned, if you would be willing to give concrete numbers for your equation then I could tailor this to solve your particular problem. As it stands, I can't provide any solution to a symbolic nonlinear ODE so this example is the best I can give you.</p> <h2>:EDIT:</h2> <p>I've got the problem solved for you. The code is attached below. I'll say this, though: It's important, for a step input (bang-bang, etc.) 
to <em>segment</em> the ODE solver process, as I described above. This is because Matlab tries to optimize solving the ODE and may not take process time in exactly the way you would expect.</p> <p>The segmenting method is again, as described above, where you split the solving process at every discontinuity. The initial conditions for the following step is equal to the ending conditions of the current step. </p> <p>The images below are the solutions I got with the segmented method and the "all-in-one" method. The all-in-one is the way you had it setup, where the force function was determined by an <code>if</code> statement in the ODE function. Because Matlab chooses the sample time increments, the positive and negative segments aren't guaranteed to have exactly the same number of samples. I think this is the reason for the drift in the output of the all-in-one solution. </p> <p>I found several problems with your method. When I corrected them, I got a plot that looked (to me) to exactly duplicate the plots from the paper you linked. </p> <ol> <li>The biggest problem - fx and fy should be the same. </li> <li>Also a problem, the pulse width should be 1s. You used a square wave of <em>period</em> 1s, meaning that there was a 0.5s "positive" and 0.5s "negative" signal. I halved the frequency and got the proper 1s signal width. </li> <li>Your initial conditions for x and y were not zero. They are zero in the paper, so I set them to zero in the simulation to replicate the figures in the paper. </li> </ol> <p><a href="https://i.stack.imgur.com/iYba1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iYba1.png" alt="ODE Solver Comparison"></a></p> <p>Here's the code! First, the ODE script:</p> <p>.</p> <pre><code>function varDot = AlFagera(t,var,spec,F) % In general, I did a lot of cleanup with this function to make things % easier for me to read. %% Misc Declarations % varDot = zeros(8,1); varDot = zeros(12,1); % to include the input torque g = 9.80; % Acceleration of gravity (m/s^2) %% Define Forces if Undefined % If the segmentSolution is being used, then the force is supplied to the % function. If it's not being used (the "all-in-one" solution), then the % force is determined by the simulation time. if isempty(F) if t &gt;=1 &amp;&amp; t&lt;=3 fy = -1.*sign(sin(t*pi)); else fy = 0; end %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % I'll highlight this, because I think this was the key problem % % you were having. The force for fx and fy should be the same. You % % had fx = 0 for all cases. % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %fx = 0; fx = fy; F = [fx;fy;0;0]; end %% Crane Specifications mp = spec(1); mc = spec(2); mr = spec(3); L = spec(4); J = spec(5); %% Breakout the Input Variable x = var(1); y = var(2); theta = var(3); phi = var(4); xDot = var(5); yDot = var(6); thetaDot = var(7); phiDot = var(8); %% Repackage the Inputs into Useful Form q = [... x; ... y; ... theta; ... phi]; qDot = [... xDot; ... yDot; ... thetaDot; ... phiDot]; %% Simplified expressions for (extensive) use later mpL = mp*L; cosT = cos(theta); sinT = sin(theta); cosP = cos(phi); sinP = sin(phi); %% Matrix Expressions M = [... mr+mc+mp , 0 , mpL*cosT*sinP , mpL*sinT*cosP; ... 0 , mp+mc , mpL*cosT*cosP , -mpL*sinT*sinP; ... mpL*cosT*sinP , mpL*cosT*cosP , mpL*L+J , 0; ... mpL*sinT*cosP , -mpL*sinT*sinP , 0 , mp*(L*sinT)^2+J]; C = [... 0 , 0 , -mpL*sinT*sinP*thetaDot+mpL*cosT*cosP*phiDot , mpL*cosT*cosP*thetaDot-mpL*sinT*sinP*phiDot; ... 
0 , 0 , -mpL*sinT*cosP*thetaDot-mpL*cosT*sinP*phiDot , -mpL*cosT*sinP*thetaDot-mpL*sinT*cosP*phiDot; ... 0 , 0 , 0 , -mpL*L*sinT*cosT*phiDot; ... 0 , 0 , mpL*L*sinT*cosT*phiDot , mpL*L*sinT*cosT*thetaDot]; G = [... 0; ... 0; ... mpL*g*sinT; .... 0]; %% Assign Outputs qDdot = M\(-C*qDot-G+F); varDot = [... qDot; ... qDdot; ... F]; </code></pre> <p>Then, the script that solves the ODE:</p> <p>. </p> <pre><code>clear all; close all; clc; % Compare the all-in-one method of solving the problem with the segmented % method of solving the problem by setting the variable below equal to % "true" or "false". segmentSolution = true; t0 = 0;tf = 20; % Your initial conditions here for x- and y-positions were not zero, so I % set them to zero to reproduce Figure 2 and Figure 3 in the paper you % linked. % Also - you don't ever use the last 4 values of this except as a way to % output the force. This isn't used in the segmentSolution because there % the input force is supplied to the function. x0 = [0 0 0 0, 0 0 0 0,0 0 0 0]; % Initial Conditions %% Specifications Mp = [0.1 0.5 1]; % Variable mass for the payload figure plotStyle = {'b-','k','r'}; %% SegmentSolution Settings fx = [0,1,-1,0]; fy = [0,1,-1,0]; tStart = 0; tOn = 1; bangWidth = 1; tEndWindow = 17; sampleTime = cumsum([tStart,tOn,bangWidth,bangWidth,tEndWindow]); nSegments = numel(sampleTime)-1; %% Simulation for i = 1:3 mp = Mp(i); mc = 1.06; mr = 6.4; % each mass in kg L = 0.7; J = 0.005; % m, kg-m^2 respe. spec = [mp mc mr L J]; %% Call the the function initialConditions = x0; if segmentSolution t = zeros(0,1); x = zeros(0,numel(x0)); outputFx = zeros(0,1); outputFy = zeros(0,1); for currentSegment = 1:nSegments inputForce = [fx(currentSegment),fy(currentSegment),0,0].'; t0 = sampleTime(currentSegment); t1 = sampleTime(currentSegment+1); [intermediateT,intermediateX] = ode45(@AlFagera,[t0 :0.001: t1],initialConditions,[],spec,inputForce); nOutputSamples = numel(intermediateT); index1 = size(t,1)+1; index2 = size(t,1)+nOutputSamples; t(index1:index2) = intermediateT; x(index1:index2,:) = intermediateX; initialConditions = x(end,:).'; outputFx(index1:index2) = inputForce(1)*ones(nOutputSamples,1); outputFy(index1:index2) = inputForce(2)*ones(nOutputSamples,1); end tt = t; else inputForce = []; % Leave this empty for the all-in-one solver. % There's a check in the code to setup the force % when it's not defined. [t,x] = ode45(@AlFagera,[t0:0.001:tf],initialConditions,[],spec,inputForce); outputFx = diff(x(:,9))./diff(t); outputFy = diff(x(:,10))./diff(t); tt=0:(t(end)/(length(outputFx)-1)):t(end); end legendInfo{i} = ['mp=',num2str(Mp(i)),'kg']; %fx = diff(x(:,9))./diff(t); %fy = diff(x(:,10))./diff(t); %tt=0:(t(end)/(length(fx)-1)):t(end); % this time vector % to plot the cart positions in x and y direcitons subplot(1,2,1) plot(t,x(:,1),plotStyle{i}) axis([0 20 0 0.18]); grid xlabel('time (s)'); ylabel('cart position in x direction (m)'); hold on legend(legendInfo,'Location','northeast') subplot(1,2,2) plot(t,x(:,2),plotStyle{i}) axis([0 20 0 1.1]); grid xlabel('time (s)'); ylabel('cart position in y direction (m)'); hold on legend(legendInfo,'Location','northeast') end % to plot the input torque, (bang-bang signal), just one sample figure plot(tt,outputFy); grid set(gca,'XTick',[0:20]) xlabel('time (s)'); ylabel('input signal, f_y (N)'); </code></pre> <p>I took the liberty of cleaning up the code in the ODE function, I hope you don't mind. 
</p> <p>So, to summarize, the solution to your problem is:</p> <ol> <li>Pass inputs to the ODE function (such as parameters and applied forces), and</li> <li>Segment the ODE solution at each discontinuity.</li> </ol>
9724
2016-04-28T04:58:47.817
|control|robotic-arm|dynamics|matlab|input|
<p>I'm working on dynamic modeling and simulation of a mechanical system (an overhead crane), for which I obtained the equation of motion in the form: $$ M(q)\ddot{q}+C(q,\dot{q})\dot{q}+G(q)=Q $$</p> <p>All the matrices are known: the inertia $ M(q)$, the Coriolis-centrifugal matrix $ C(q,\dot{q})$, and the gravity $ G(q)$, as functions of the generalized coordinates $q$ and their derivatives $\dot{q}$.</p> <p>I want to solve for $q$ using a Matlab <em>ODE</em> solver (in an m-file). I got the response for some initial conditions and zero input, but I want to find the response for the aforementioned control signal (<strong>a bang-bang signal of amplitude 1 N and 1 s width</strong>). I'm trying to regenerate some results from the literature, and what the authors of that work said regarding the input signal is the following: "A bang-bang signal of amplitude 1 N and 1 s width is used as an input force, applied at the cart of the gantry crane. A bang-bang force has a positive (acceleration) and negative (deceleration) period allowing the cart to, initially, accelerate and then decelerate and eventually stop at a target location." I didn't grasp what they mean by a bang-bang signal. I know that in Matlab we can have a step input, an impulse, etc., but I'm not familiar with a bang-bang signal. According to <a href="https://en.wikipedia.org/wiki/Bang%E2%80%93bang_control" rel="nofollow noreferrer">this site</a> and <a href="http://www.brown.edu/Departments/Engineering/Courses/En123/Lectures/FdbkBasic.html" rel="nofollow noreferrer">this</a>, bang-bang is rather a controller.</p> <p>Could anyone suggest how to figure out this issue and implement this input signal, preferably in an m-file?</p> <p>The code I'm using is given below, in two parts:</p> <pre><code>function xdot = AlFagera(t,x,spec) % xdot = zeros(8,1); xdot = zeros(12,1); % to include the input torque % % Crane Specifications mp = spec(1); mc = spec(2); mr = spec(3); L = spec(4); J = spec(5); g = 9.80; % acceleration of gravity (m/s^2) % % matrix equations M11 = mr+mc+mp; M12 = 0; M13 = mp*L*cos(x(3))*sin(x(4)); M14 = mp*L*sin(x(3))*cos(x(4)); M21 = 0; M22 = mp+mc; M23 = mp*L*cos(x(3))*cos(x(4)); M24 = -mp*L*sin(x(3))*sin(x(4)); M31 = M13; M32 = M23; M33 = mp*L^2+J; M34 = 0; M41 = M14; M42 = M24; M43 = 0; M44 = mp*L^2*(sin(x(3)))^2+J; M = [M11 M12 M13 M14; M21 M22 M23 M24; M31 M32 M33 M34; M41 M42 M43 M44]; C11 = 0; C12 = 0; C13 = -mp*L*sin(x(3))*sin(x(4))*x(7)+mp*L*cos(x(3))*cos(x(4))*x(8); C14 = mp*L*cos(x(3))*cos(x(4))*x(7)-mp*L*sin(x(3))*sin(x(4))*x(8); C21 = 0; C22 = 0; C23 = -mp*L*sin(x(3))*cos(x(4))*x(7)-mp*L*cos(x(3))*sin(x(4))*x(8); C24 = -mp*L*cos(x(3))*sin(x(4))*x(7)-mp*L*sin(x(3))*cos(x(4))*x(8); C31 = 0; C32 = 0; C33 = 0; C34 = -mp*L^2*sin(x(3))*cos(x(3))*x(8); C41 = 0; C42 = 0; C43 = -C34; C44 = mp*L^2*sin(x(3))*cos(x(4))*x(7); C = [C11 C12 C13 C14; C21 C22 C23 C24; C31 C32 C33 C34; C41 C42 C43 C44]; Cf = C*[x(5); x(6); x(7); x(8)]; G = [0; 0; mp*g*L*sin(x(3)); 0]; fx = 0; if t &gt;=1 &amp;&amp; t&lt;=2 fy = 1.*square(t*pi*2); else fy = 0; end F =[fx; fy; 0; 0]; % input torque vector, xdot(1:4,1)= x(5:8); xdot(5:8,1)= M\(F-G-Cf); xdot(9:12,1) = F; </code></pre> <p>And:</p> <pre><code>clear all; close all; clc; t0 = 0;tf = 20; x0 = [0.12 0.5 0 0, 0 0 0 0,0 0 0 0]; % initial conditions % % specifications Mp = [0.1 0.5 1]; % variable mass for the payload figure plotStyle = {'b-','k','r'}; for i = 1:3 mp = Mp(i); mc = 1.06; mr = 6.4; % each mass in kg L = 0.7; J = 0.005; % m, kg-m^2 resp.
spec = [mp mc mr L J]; % % Call the function [t,x] = ode45(@(t,x)AlFagera(t,x,spec),[t0 :0.001: tf],x0); legendInfo{i} = ['mp=',num2str(Mp(i)),'kg']; fx = diff(x(:,9))./diff(t); fy = diff(x(:,10))./diff(t); tt=0:(t(end)/(length(fx)-1)):t(end); % this time vector % to plot the cart positions in x and y directions subplot(1,2,1) plot(t,x(:,1),plotStyle{i}) axis([0 20 0 0.18]); grid xlabel('time (s)'); ylabel('cart position in x direction (m)'); hold on legend(legendInfo,'Location','northeast') subplot(1,2,2) plot(t,x(:,2),plotStyle{i}) axis([0 20 0 1.1]); grid xlabel('time (s)'); ylabel('cart position in y direction (m)'); hold on legend(legendInfo,'Location','northeast') end % to plot the input torque (bang-bang signal), just one sample figure plot(tt,fy) grid set(gca,'XTick',[0:20]) xlabel('time (s)'); ylabel('input signal, f_y (N)'); </code></pre> <p>Furthermore, the results I'm getting and what I am supposed to get are shown below: <a href="https://i.stack.imgur.com/ha31n.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ha31n.jpg" alt="This my output, for the give code"></a> <a href="https://i.stack.imgur.com/TTJH6.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TTJH6.jpg" alt="This what I supposed to get"></a></p> <p><strong>Major difficulties: the initial conditions are not clearly stated in the paper, and it is unclear whether the input force acts only in the y direction (as it should) or in a different direction. I appreciate any help.</strong></p> <blockquote> <p>The paper I'm trying to recreate is: R. M. T. Raja Ismail, M. A. Ahmad, M. S. Ramli, and F. R. M. Rashidi, “Nonlinear Dynamic Modelling and Analysis of a 3-D Overhead Gantry Crane System with System Parameters Variation,” International Journal of Simulation–Systems, Science &amp; Technology, vol. 11, no. 2, 2010. <a href="http://ijssst.info/Vol-11/No-2/paper2.pdf" rel="nofollow noreferrer">http://ijssst.info/Vol-11/No-2/paper2.pdf</a></p> </blockquote>
How to apply a bang-bang signal of amplitude 1 N and 1 s width as an input force to reproduce certain results in Matlab?
<p>It seems that you can have 4 possible readings: (w,w), (w,b), (b,w) and (b,b). So the program has to decide what to do in each situation. To help figure out the next action, the past history can also be used; it is up to you to decide how much history is useful, or practical. I suggest some kind of state diagram, but use whatever you are already familiar with. </p> <p>Now, in your pseudocode, how much do you turn left and right? How far are you moving between each reading? Can you measure how far each wheel has gone? Is it possible to go backwards until the line is found? How straight are your moves forward and backward? These questions are just some of the considerations to think about as you refine your strategy.</p> <p>The bottom line is that the dual sensor does not provide enough data from a single reading to, for example, identify whether you've just run over the edge of a straight line, the line has bent, or (if it has bent) by how much!</p> <p>The whole point of a line-following robot competition is to get each builder to design (and test) their own strategy.</p>
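<p>As a small illustration of the "use history" idea, here is a sketch in Arduino-style C++ using the reading values from the question (0 = on line, 1 = robot right of line, 2 = robot left of line, 3 = line lost). The motion functions are placeholders for whatever motor commands your library provides:</p>
<pre><code>void goForward();  void turnLeft();  void turnRight();
void pivotLeft();  void pivotRight();          // placeholders for your motor layer

bool lineIsToTheLeft = true;  // remembered from the last single-sensor reading

void followStep(int reading)
{
    switch (reading) {
    case 0: goForward();  break;
    case 1: turnLeft();   lineIsToTheLeft = true;  break;   // drifted right of the line
    case 2: turnRight();  lineIsToTheLeft = false; break;   // drifted left of the line
    case 3:  // lost the line: probably a sharp corner, so pivot in place
        if (lineIsToTheLeft) pivotLeft(); else pivotRight();
        break;
    }
}
</code></pre>
<p>Pivoting in place toward the side the line was last seen on, instead of backing up, is one way to get around a 90° corner with only two sensors; how sharply to pivot and when to give up and reverse are exactly the strategy choices mentioned above.</p>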
9728
2016-04-28T17:23:55.337
|arduino|motor|line-following|
<p>I have the <a href="http://www.makeblock.cc/mbot/" rel="nofollow noreferrer">mBot</a> robot and I want to program it to follow the line. So far it can pass any kind of line that is >90°.</p> <p>I want it to be able to pass 90°-ish angles as well. Like this one:</p> <p><a href="https://i.stack.imgur.com/FGkQm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FGkQm.png" alt="&gt;90° angle"></a></p> <p>The problem is that my mBot robot has only 2 line following sensors (they are 5 mm apart and the line is 2 cm wide) so I can't use just the sensors.</p> <p><a href="https://i.stack.imgur.com/qHQHt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qHQHt.png" alt="enter image description here"></a></p> <p>Most of the times it just goes to the line and when it's supposed to turn it just misses the line (goes on the white) and goes back to get back on track. Once it's back on the black line it once again tries to go forward but goes on the white instead of taking a turn. This happens endlessly.</p> <p>Sometimes it passes the angle by going back and forth and accidentally turning, but that's not even a workaround, let alone a solution.</p> <p>Here's a test course of the first round of the competition. <a href="https://i.stack.imgur.com/pzf8y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pzf8y.png" alt="course"></a></p> <p>My robot can pass this without a problem, but it gets stuck on this (poorly edited, sorry) course:</p> <p><a href="https://i.stack.imgur.com/IevUP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IevUP.png" alt="I don&#39;t currently have Photoshop installed. Sorry &#39;bout that."></a></p> <p>It can't pass the 20 block if the robot enters it from a 15 or 20 block (so basically it gets stuck if it's coming from an angle and hits a 90 degree turn).</p> <p>The sensor value could be read as either 0, 1, 2 or 3 depending on what the robot currently sees:</p> <p><a href="https://i.stack.imgur.com/PNkB5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PNkB5.png" alt="sensor with all possible outcomes"></a></p> <p>0 - on the line <br> 1 - on the right of the line <br> 2 - on the left of the line <br> 3 - not on the line <br></p> <p>Pseudo code of my current program:</p> <pre><code>loop forever: if (on the right of the line): turn_left() if (on the left of the line): turn_right() if (on the line): go_forward() if (not on the line): go_backwards() </code></pre> <p>So how would I go about taking such sharp turns?</p>
How to check for a sharp angle with a line follower?
<p>It appears the industry needs to consider some standardization of these terms. In my search I have found differing definitions for the term DOF. </p> <p>From Yaskawa/Motoman: "Degrees Of Freedom : The number of independent directions or joints of the robot (R15.07), which would allow the robot to move its end effector through the required sequence of motions. For arbitrary positioning, 6 degrees of freedom are needed: 3 for position (left-right, forward-backward, and up- down) and 3 for orientation (yaw, pitch and roll)." Note: R15.07 would appear to reference the ANSI/RIA/ISO materials, but at RIA I only found reference to R15.06.</p> <p>From Robot Worx: Degrees of Freedom - The amount of values in a system possible of variation. A robotic joint is equal to one degree of freedom.</p> <p>From ATI Automation, another RIA member: "Six Degrees of Freedom - A fancy way of saying Fx, Fy, Fz, Tx, Ty and Tz."</p> <p>I would have to disagree with Robot Worx, because at a minimum both directions of rotation or travel must be considered for each plane. So Yaskawa's explanation of 3 planar linear directions and three planar rotations makes more sense, as does the definition from ATI. If this is accepted, then what of DOM? This term must inevitably be more complex and account for many things other than direction. I see no terminology so far that accounts for the actual degrees of motion of servos/actuators, for example servos designed for 180°/360° motion. There should be some way to convey this functionality in the glossary of terms, whether it be DOM or yet another term. Some method of describing all the available functions of a robotic device is needed.</p> <p>In my opinion there should also be a standard definition for a description that includes such other functions as time, acceleration, speed, grip/release, etc.</p>
9729
2016-04-28T17:32:44.097
|manipulator|theory|
<p>What is the difference between degrees of freedom (DOF) and degrees of motion (DOM)? I know that DOF is the number of independent movements a manipulator arm can make, and that a robot system can have at most 6 independent DOF but an unlimited number of DOM, yet I cannot distinguish the two from each other.</p>
Difference between Degrees of Freedom (DOF) and Degrees of Motion (DOM)
<p>Seeing how the third equation is used in the derivation would help explain <em>why</em> $\delta x$ is being transposed. But when you take $$\delta x = J \delta q$$ and transpose it, you must reverse the order of the terms on the right-hand side: $\delta x^T = \delta q^T J^T$. This makes sense if you consider the dimensions of $J$. If $x \in \Re^n$ and $q \in \Re^m$ then $J$ is $n \times m$, so for $\delta q^T$ (which is $1 \times m$) to premultiply it, $J$ must be transposed. </p> <p>The fourth equation is just a statement that the seesaw is in equilibrium - the total moment is zero. </p>
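<p>For reference, here is a generic sketch of how that transposed relation usually feeds into a virtual-work argument (I cannot see exactly how your notes use the third equation, so take this as the standard textbook route rather than your lecturer's exact steps). With an external force $F$ at the end effector and joint torques $\tau$, equating the virtual work done in task space and joint space gives</p> <p>$$ \tau^T \delta q = F^T \delta x = F^T J \delta q \quad \forall \, \delta q \quad \Rightarrow \quad \tau = J^T F, $$</p> <p>which is the usual static force relationship obtained from the principle of virtual work.</p>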
9741
2016-04-30T12:23:50.403
|kinematics|jacobian|
<p>I'm an Electronics student taking a module in Robotics. </p> <p>From the example,</p> <p><a href="https://i.stack.imgur.com/tg0sx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tg0sx.png" alt="Example from lecture notes"></a></p> <p>I understand line 1: the Jacobian is found from the time derivative of the kinematics equation, and as such relates joint angles to velocity.</p> <p>I do not understand why the transpose has been taken on line 3 and how line 4 is produced.</p>
Principle of virtual force - General help in understanding / explanation
<p>On the page you indicated, <a href="http://wiki.ros.org/jade/Installation/Ubuntu" rel="nofollow">http://wiki.ros.org/jade/Installation/Ubuntu</a>, it says that Jade only supports 14.04, 14.10 and 15.04, so I don't think it is available yet for 16.04. Either check via tab auto-completion whether any other version is available, or compile from source. </p> <p>You will get better answers on the ROS Answers site: <a href="http://answers.ros.org/question/226098/ros-on-ubuntu-xenial-1604/" rel="nofollow">http://answers.ros.org/question/226098/ros-on-ubuntu-xenial-1604/</a></p>
9752
2016-05-02T09:33:30.443
|ros|
<p>I want to install ROS on my Xubuntu 16.04, Xenial Xerus. I have followed the ROS's site instruction: <a href="http://wiki.ros.org/jade/Installation/Ubuntu" rel="nofollow">http://wiki.ros.org/jade/Installation/Ubuntu</a>, and did the following: First, setup my sources.list:</p> <p><code>sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" &gt; /etc/apt/sources.list.d/ros-latest.list'</code></p> <p>Second, set up keys:</p> <p><code>sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key 0xB01FA116</code></p> <p>Then, make sure my package is up-to-date: </p> <p><code>sudo apt-get update</code></p> <p>Last, try to install ROS jade:</p> <p><code>sudo apt-get install ros-jade-desktop-full</code></p> <p>And get this error:</p> <p><code>E: Unable to locate package ros-jade-desktop-full</code></p> <p>Where did I go wrong, and how can I get ROS (any version is ok) running on my Xubuntu 16.04?</p>
E: Unable to locate package ros-jade-desktop-full
<p>There are standard protocols, and then there are proprietary protocols. The choice of what protocol to use depends on many things. </p> <p>Without listing the precise set of flight controllers you are planning to buy, it's difficult to provide anything more than a very general answer to this question.</p>
9754
2016-05-02T18:31:59.303
|quadcopter|radio-control|research|
<p>I'm about to start a project, where I'm sniffing data between remote controls and flight controllers on RC copters and doing stuff with that information. Do all (or most) flight controllers use the same protocol to communicate with the remote controls, or does it vary based on which one you buy? I would be testing on drones (DJI phantom and the like). </p> <p>So, my real question is:</p> <p>If I want to write something to read the data, will I need to buy a different flight controller for each protocol used, or do they all use the same protocol, and I can just buy one flight controller, and the info I can get out will be the same for all types of flight controllers?</p> <p>Also, are the protocols only spoken by the ground remote control and the flight controller? Does the receiver care what protocol is being used, or is it just a middle man?</p>
Are all Flight Controllers and Remote Controls using the same protocol?
<p>I am now able to get position from the accelerometer.<a href="https://i.stack.imgur.com/oKXTa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oKXTa.png" alt="enter image description here"></a> Plot (Green - Acc, Blue - Vel, Red - Pos)</p> <p>I was able to get the proper values by disabling the leaky integrator (setting the leak factor from 0.99 to 1). </p>
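<p>For anyone else who lands here, this is a minimal sketch of what "disabling the leak" means, written in Python with illustrative variable names (not taken from my C++ code above). A leak factor below 1.0 pulls each integral back toward zero on every step; a factor of exactly 1.0 leaves a plain trapezoidal integrator, which is what fixed my plots.</p>

<pre><code>import numpy as np

def integrate_acc(acc, dt, leak=1.0):
    """Trapezoidal integration of acceleration to velocity to position.

    A leak factor below 1.0 makes both integrators "leaky" (they decay
    toward zero); a factor of 1.0 disables the decay entirely.
    """
    vel = np.zeros_like(acc)
    pos = np.zeros_like(acc)
    for k in range(1, len(acc)):
        vel[k] = leak * vel[k - 1] + 0.5 * (acc[k] + acc[k - 1]) * dt
        pos[k] = leak * pos[k - 1] + 0.5 * (vel[k] + vel[k - 1]) * dt
    return vel, pos
</code></pre>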
9755
2016-05-03T05:12:45.210
|quadcopter|sensors|localization|integration|dead-reckoning|
<p>Good day,</p> <p>I have been reading papers about position integration from accelerometer readings.</p> <p>I have consulted <a href="http://perso-etis.ensea.fr/~pierandr/cours/M1_SIC/AN3397.pdf" rel="nofollow noreferrer">this paper from freescale</a> on how that is achievable and <a href="http://diydrones.com/forum/topics/multi-rotors-the-altitude-yoyo-effect-and-how-to-deal-with-it" rel="nofollow noreferrer">this article regarding leaky integrators</a> to help in preventing accumulation of errors from integration.</p> <p>I was testing this algorithm by moving the imu by approximately 0.1 meter. The algorithm does get it right at the instant it arrives at approx 0.1 meter however when left still at that position, the integrated position goes to zero.</p> <p>It turns out the velocity readings become negative at a certain period after reaching 0.1 meters.</p> <blockquote> <p>Does anyone have any suggestions in dealing with this error?</p> </blockquote> <p><strong>Plots</strong> (Red is the position, Blue is the velocity.)</p> <p>The imu(accelerometer) was moved alternating positions 0 meters and 0.1 meters with a stop of approximately 3-5 seconds in between before moving to the next position</p> <ol> <li><p>Actual Data <a href="https://i.stack.imgur.com/bioEh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bioEh.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/iJMmi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iJMmi.png" alt="enter image description here"></a></p></li> <li><p>Desired Data output (Green - Desired position integration) <a href="https://i.stack.imgur.com/C85lq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C85lq.png" alt="enter image description here"></a></p></li> </ol> <p><strong>Code:</strong></p> <pre><code>// Get acceleration per axis float AccX = accelmagAngleArray.AccX; float AccY = accelmagAngleArray.AccY; float AccZ = accelmagAngleArray.AccZ; AccX -= dc_offsetX; AccY -= dc_offsetY; AccZ -= dc_offsetZ; //Calculate Current Velocity (m/s) float leakRateAcc = 0.99000; velCurrX = velCurrX*leakRateAcc + ( prevAccX + (AccX-prevAccX)/2 ) * deltaTime2; velCurrY = velCurrY*leakRateAcc + ( prevAccY + (AccY-prevAccY)/2 ) * deltaTime2; velCurrZ = velCurrZ*0.99000 + ( prevAccZ + (AccZ-prevAccZ)/2 ) * deltaTime2; prevAccX = AccX; prevAccY = AccY; prevAccZ = AccZ; //Discrimination window for Acceleration if ((0.12 &gt; AccX) &amp;&amp; (AccX &gt; -0.12)){ AccX = 0; } if ((0.12 &gt; AccY) &amp;&amp; (AccY &gt; -0.12)){ AccY = 0; } //Count number of times acceleration is equal to zero to drive velocity to zero when acceleration is "zero" //X-axis--------------- if (AccX == 0){ //Increment no of times AccX is = to 0 counterAccX++; } else{ //Reset counter counterAccX = 0; } if (counterAccX&gt;25){ //Drive Velocity to Zero velCurrX = 0; prevVelX = 0; counterAccX = 0; } //Y-axis-------------- if (AccY == 0){ //Increment no of times AccY is = to 0 counterAccY++; } else{ //Reset counter counterAccY = 0; } if (counterAccY&gt;25){ //Drive Velocity to Zero velCurrY = 0; prevVelY = 0; counterAccY = 0; } //Print Acceleration and Velocity cout &lt;&lt; " AccX = " &lt;&lt; AccX ;// &lt;&lt; endl; cout &lt;&lt; " AccY = " &lt;&lt; AccY ;// &lt;&lt; endl; cout &lt;&lt; " AccZ = " &lt;&lt; AccZ &lt;&lt; endl; cout &lt;&lt; " velCurrX = " &lt;&lt; velCurrX ;// &lt;&lt; endl; cout &lt;&lt; " velCurrY = " &lt;&lt; velCurrY ;// &lt;&lt; endl; cout &lt;&lt; " velCurrZ = " &lt;&lt; velCurrZ &lt;&lt; endl; 
//Calculate Current Position in Meters float leakRateVel = 0.99000; posCurrX = posCurrX*leakRateVel + ( prevVelX + (velCurrX-prevVelX)/2 ) * deltaTime2; posCurrY = posCurrY*leakRateVel + ( prevVelY + (velCurrY-prevVelY)/2 ) * deltaTime2; posCurrZ = posCurrZ*0.99000 + ( prevVelZ + (velCurrZ-prevVelZ)/2 ) * deltaTime2; prevVelX = velCurrX; prevVelY = velCurrY; prevVelZ = velCurrZ; //Print X and Y position in meters cout &lt;&lt; " posCurrX = " &lt;&lt; posCurrX ;// &lt;&lt; endl; cout &lt;&lt; " posCurrY = " &lt;&lt; posCurrY ;// &lt;&lt; endl; cout &lt;&lt; " posCurrZ = " &lt;&lt; posCurrZ &lt;&lt; endl; </code></pre>
Dead Reckoning: Obtaining Position Estimation from Accelerometer Acceleration Integration
<p>This answer is valid only if $\Delta{t} = \mathbf{t}[k] - \mathbf{t}[k-1]$ is a constant. Then you can rewrite your equation as: $$\dot{\mathbf{x}} = \dfrac{\mathbf{x}[k] - \mathbf{x}[k-1]}{\Delta{t}}$$</p> <p>Consider:</p> <p>$$ \dot{\mathbf{x}}_l = \dfrac{1}{h}\sum_{i=1}^{h}\dot{\mathbf{x}}_i = \dfrac{(\mathbf{x}[k] - \mathbf{x}[k-1])+(\mathbf{x}[k-1] - \mathbf{x}[k-2])+\dotsb+(\mathbf{x}[k-h+1] - \mathbf{x}[k-h])}{h\Delta{t}} = \bigg(\dfrac{\mathbf{x}[k] - \mathbf{x}[k-h]}{h\Delta{t}}\bigg) $$ </p> <p>$$ h\Delta{t} = \mathbf{t}[k] - \mathbf{t}[k-h] $$</p> <p>Here $\dot{\mathbf{x}}_i$ is the $i^{th}$ sample of the one-step derivative; passing these samples through a moving-average filter (which is a low-pass filter) gives you $\dot{\mathbf{x}}_l$. So $\dot{\mathbf{x}}_l$ is smooth because it is a low-pass signal. When you increase the value of $h$ you reduce the bandwidth of $\dot{\mathbf{x}}_l$, so the result gets smoother and the peaks begin to fade (peaks are high-frequency components).</p> <p>As far as I know there isn't a general way to determine $h$ that makes the result both smooth and accurate. You have to choose an appropriate $h$ by trial and error, or, if you know the transfer function of the sensor, you can use that to determine an appropriate value for $h$. </p>
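<p>To make the equivalence concrete, here is a small numerical check (a hypothetical example in Python/NumPy with made-up sample data, not your actual measurements): the $h$-step difference is exactly the $h$-sample moving average of the one-step differences.</p>

<pre><code>import numpy as np

dt = 1.0 / 120.0                              # 120 Hz sampling
t = np.arange(0.0, 2.0, dt)
x = np.sin(2 * np.pi * t) + 0.0005 * np.random.randn(t.size)  # noisy position

h = 10
v1 = np.diff(x) / dt                          # step size 1
vh = (x[h:] - x[:-h]) / (h * dt)              # step size h

# The h-step difference equals a moving average of the 1-step differences,
# i.e. a low-pass filter, which is why it looks smoother:
v1_avg = np.convolve(v1, np.ones(h) / h, mode="valid")
print(np.allclose(vh, v1_avg))                # True
</code></pre>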
9756
2016-05-03T06:54:57.413
|motion|pose|
<p>I get position information and a corresponding timestamp from a motion tracking system (for a rigid body) at 120 Hz. The position is in sub-millimeter precision, but I'm not too sure about the timestamp; I can get it as a floating-point number in seconds from the motion tracking software. To get the velocity, I use the difference between two samples divided by the $\Delta t$ of the two samples:</p> <p>$\dot{\mathbf{x}} = \dfrac{\mathbf{x}[k] - \mathbf{x}[k-1]}{t[k]-t[k-1]}$.</p> <p>The result looks fine, but a bit noisy at times. I realized that I get much smoother results when I choose the differentiation step $h$ larger, e.g. $h=10$:</p> <p>$\dot{\mathbf{x}} = \dfrac{\mathbf{x}[k] - \mathbf{x}[k-h]}{t[k]-t[k-h]}$.</p> <p>On the other hand, peaks in the velocity signal begin to fade if I choose $h$ too large. Unfortunately, I didn't figure out why I get a smoother signal with a bigger step $h$. Does someone have a hint? Is there a general rule for which differentiation step size is optimal with respect to smoothness vs. "accuracy"?</p> <p>This is a sample plot of one velocity component (blue: step size 1, red: step size 10):</p> <p><a href="https://i.stack.imgur.com/3aaOU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3aaOU.png" alt="Sample plot of step size 1 vs. step size 10."></a></p>
Step size in numerical differentiation
<p>So I'll point out what <em>might</em> be some problems, at-a-glance, but wow this is one of the longest questions I've seen here. I'm pretty swamped with real-life stuff at the moment, so I'll just point these few issues out, you try them, <strong>update the question with the results</strong> (please don't respond with new information in comments on this answer), and then if it's still broke I'll take another pass. </p> <p>For starters, some of your code uses variables that I didn't see mentioned anywhere, such as (<strong>confusingly</strong>) <code>tk</code> and <code>kt</code>. Furthermore, there are some <a href="https://en.wikipedia.org/wiki/Magic_number_(programming)#Unnamed_numerical_constants" rel="nofollow">magic numbers</a> that I can't understand, such as the assignments for <code>tk</code> and <code>kt</code> - they're both set to <code>2</code> despite the comment beside <code>tk=2;</code>, which says, <code>% Locks in the chosen sign</code>. If you manually set <code>tk=2;</code>, then it's not locking in anything chosen programmatically; it's just a hard-coded value that becomes difficult to change. </p> <p>That said, I would guess that your real problem is probably in your use of <code>atan</code>. In fact, this function has bitten me <strong>so many times</strong> that I will never use it again. Use <code>atan2</code> everywhere you ever would consider using <code>atan</code>. The problem with <code>atan</code> is that if you have (-y)/x or y/(-x), both of those expressions evaluate to -(y/x), and you lose quadrant information, giving you incorrect angles. </p> <p><code>atan2</code> fixes this by accepting two arguments, <code>y</code> and <code>x</code>, as opposed to <code>y/x</code>, and it treats the signs correctly. In your <code>ApplyRule10</code> function, under the line, <code>% AUXILIARY ANGLES, I AM NOT SURE IF THIS IS THE BEST WAY TO CALCULATE THEM:</code>, you have:</p> <pre><code>aux_angle_1 = atan(vr_t(2)/vr_t(1)) - atan(aux_1(2)/aux_1(1)) ; aux_angle_2 = atan(vr_t(2)/vr_t(1)) - atan(aux_2(2)/aux_2(1)) ; </code></pre> <p>My suggestion would be to change these atan functions to atan2 and see if that fixes anything. Again, update your question with the results and we'll go from there!</p>
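<p>To illustrate the quadrant problem (this is a quick Python demonstration of the pitfall, not your MATLAB code; MATLAB's <code>atan2(y, x)</code> behaves the same way):</p>

<pre><code>import math

# The vector (-1, -1) points into the third quadrant (-135 degrees),
# but atan of the ratio collapses it onto the first quadrant:
y, x = -1.0, -1.0
print(math.degrees(math.atan(y / x)))    # 45.0   (quadrant information lost)
print(math.degrees(math.atan2(y, x)))    # -135.0 (correct)
</code></pre>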
9770
2016-05-04T13:09:03.107
|mobile-robot|kinematics|matlab|geometry|
<p>I am working on reproducing a robotics paper, first simulating it in MATLAB in order to implement it on a real robot afterwards. The robot's model is:</p> <p>$$\dot{x}=V(t)\cos\theta $$ $$\dot{y}=V(t)\sin\theta$$ $$\dot{\theta}=u$$</p> <p>The idea is to apply an algorithm to avoid obstacles and reach a determined target. This algorithm uses a vision cone to measure the obstacle's properties. The information required to apply this system is:</p> <p>1) The minimum distance $ d(t) $ between the robot and the obstacle (this obstacle is modelled as a circle of known radius $ R $).</p> <p>2) The obstacle's speed $ v_{obs}(t) $</p> <p>3) The angles $ \alpha_{1}(t)$ and $ \alpha_{2}(t)$ that form the robot's vision cone, and</p> <p>4) the heading $ H(t) $ from the robot to the target</p> <p>First a safe distance $ d_{safe}$ between the robot and the obstacle is defined. The robot has to reach the target without being closer than $ d_{safe}$ to the obstacle.</p> <p>An extended angle $ \alpha_{0} \ge \arccos\left(\frac{R}{R+d_{safe}} \right) $ is defined, where $ 0 \le \alpha_{0} \le \pi $ </p> <p>Then the following auxiliary angles are calculated:</p> <p>$ \beta_{1}(t)=\alpha_{1}(t)-\alpha_{0}(t)$ </p> <p>$ \beta_{2}(t)=\alpha_{2}(t)+\alpha_{0}(t)$ </p> <p>Then the following vectors are defined:</p> <p>$ l_{1}=(V_{max}-V)[\cos(\beta_{1}(t)),\sin(\beta_{1}(t))]$ </p> <p>$ l_{2}=(V_{max}-V)[\cos(\beta_{2}(t)),\sin(\beta_{2}(t))]$ </p> <p>here $ V_{max}$ is the robot's maximum speed and $ V $ is a constant that fulfills $ \|v_{obs}(t)\| \le V \le V_{max} $ </p> <p>These vectors represent the boundaries of the vehicle's vision cone.</p> <p>Given the vectors $ l_{1} $ and $ l_{2}$ , the angle $ \alpha(l_1,l_2)$ is the angle between $ l_{1}$ and $ l_{2} $ measured in counterclockwise direction, with $ \alpha \in (-\pi,\pi) $ . The function $f$ is then defined in terms of this angle.</p> <p>The evasion maneuver starts at time $t_0$. For that, the robot finds the index h:</p> <p>$h = min|\alpha(v_{obs}(t_0)+l_j(t_0),v_R(t_0))|$</p> <p>where $j \in \{1,2\}$ and $v_R(t)$ is the robot's velocity vector. </p> <p>That is, from the two vectors $v_{obs}(t_0)+l_j(t_0)$ we choose the one that forms the smallest angle with the robot's velocity vector. Once h is determined, the control law is applied:</p> <p>$u(t)=-U_{max}f(v_{obs}(t)+l_h(t),v_R(t))$</p> <p>$V(t)=\|v_{obs}(t)+l_h(t)\| \quad \quad (1)$ </p> <p>This is a sliding-mode-type control law that steers the robot's velocity $v_R(t)$ towards a switching surface equal to the vector $v_{obs}(t)+l_h(t)$. Ideally the robot avoids the obstacle by moving around it at a safe distance.</p> <p>While the robot is not avoiding an obstacle it follows the control law:</p> <p>$u(t)=0$</p> <p>$V(t)=V_{max} \quad \quad (2) $ </p> <p>Hence the rules to switch between the two laws are:</p> <p><strong>R10</strong> Switching from (2) to (1) occurs when the distance to the obstacle is equal to a constant C, which means when $d(t_0)=C$ and this distance is becoming smaller in time, i.e. $\dot{d}(t)&lt;0$</p> <p><strong>R11</strong> Switching from (1) to (2) occurs when $d(t_*)&lt;1.1a_*$ and the vehicle is pointing towards the target, i.e. $\theta(t_*)=H(t_*)$</p> <p>where $a_*=\frac{R}{\cos\alpha_0}-R $</p> <p>Ideally the result should be similar to this</p> <p>But I'm getting this instead</p> <p>While I understand the theory, there's obviously a flaw in my implementation that I haven't been able to solve. 
In my opinion the robot manages to avoid the obstacle, but at a certain point (in the red circle) it turns to the wrong side, making it impossible for the condition $H(t) = \theta(t) $ to be met.</p> <p>I feel that I am not measuring properly the angle alpha between $v_{obs}(t)+l_h(t)$ and $v_{R}(t)$, because while debugging I can see that at a certain point it stops switching between negative and positive values and becomes only positive, leading the robot to the wrong side. It also seems to be related to my problem here: <a href="https://robotics.stackexchange.com/questions/9386/">Angle to a circle tangent line</a></p> <p><a href="https://i.stack.imgur.com/BDYtU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BDYtU.png" alt="angles"></a></p>
Mobile robot algorithm implementation error
<p>Off-the-shelf ExpressCard or CardBus cards for <strong>RS-422/485</strong> will do 921 kbit/s, which, depending on the protocol you use and its overhead, should get you more than 100 bytes of data in less than 2 ms.</p> <p>Considering the ease with which you can use RS-422 with a microcontroller, this is the first thing to try. You can even forgo a protocol and use raw communications during development to get up and running quickly (though for robustness you will eventually want a protocol that can handle errors and communication start and stop).</p> <p><strong>EtherCAT</strong> is the other option, which can use a standard Ethernet card (if you are lucky, your laptop will use a compatible chipset), but you will need to add an EtherCAT chip and Ethernet PHY on the microcontroller side. Additionally, EtherCAT uses a robust but complicated communication protocol, so it can take some time to learn. The benefit of doing so is that you get fast communications designed for real-time control.</p>
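<p>As a rough sanity check on the serial option (assuming the usual 10 bits on the wire per byte: start bit, 8 data bits, stop bit), a 100-byte payload at 921.6 kbit/s takes about $\frac{100 \times 10}{921600} \approx 1.1$ ms, which leaves some margin for protocol overhead inside the 2 ms budget.</p>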
9801
2016-05-08T04:26:07.590
|real-time|
<p>Lets say that I needed to send sensor readings in increments of 100 bytes from a micro controller to a laptop with sub 2 ms latencies in real time (the data needs to be processed and acted upon immediately (to control a robot)). What interfaces would one use? </p> <p>FTDI usb-serial converters aren't an option because they introduce 5-10 ms latencies both ways. PCI cards are an option though.</p>
Low latency control from a laptop
<p>I think I may have reason for your reviewer's caution. You are updating the proportional gain of a PID controller, and you are concerned with the stability of your update law. Restating your update: $$K_p(t+1)=K_p+e(t)(\mu_1(Pe(t))+\mu_4(Pe(t))).$$ You have re-stated that $e(t)$ is the difference, which means that the error is not positive semi-definite ($e(t)\geq 0$). However, the membership functions $\mu_1$ and $\mu_4$ are positive semi-definite; this implies that $E(t)=e(t)(\mu_1(Pe(t))+\mu_4(Pe(t)))$ is either positive or negative semi-definite, which means that $K_p$ grows unbounded in some region $D$. </p> <p>At this point, it is clear that the update law is not stable in some region (in the sense of Lyapunov) for $e(t)\neq 0$. We can substantiate this with discrete-time Lyapunov analysis. </p> <p>$Proof:$ Dropping the time dependence for clarity, let $$V(K_p)=\frac{1}{2}K_p^2$$ be a candidate Lyapunov function. The rate of change along solutions is given by $$\begin{align}\nabla V(K_p)&amp;=V(K_p+E)-V(K_p)\\&amp;=\frac{1}{2}(K_p+E)^2-\frac{1}{2}K_p^2\\&amp;=K_pE+\frac{1}{2}E^2\end{align}.$$ For stability of the system, we must have $\nabla V(K_p)&lt;0$. This implies $$K_pE+\frac{1}{2}E^2&lt;0$$$$\to K_p&lt;-\frac{1}{2}E.$$ We can conclude from converse Lyapunov results that the system is unstable for at least $K_p&gt;-\frac{1}{2}E$, which is an obvious reason for concern. There may be a better Lyapunov candidate function to demonstrate this result, but we can be sure that the system is locally unstable.</p>
9825
2016-05-12T13:30:16.710
|quadcopter|control|pid|
<p>A reviewer of the last paper I sent replied me that the it is very dangerous to update a PID with next kind of formula (paper is about quadrotor control):</p> <p>$$ K_p (t + 1) = K_p (t)+e(t) (μ_1 (Pe(t)) + μ_4 (Pe(t))) $$</p> <p>$Pe(t)$ is the % relationship between the desired angles and the real angles, and $e(t)$ is the difference between those angles. $μ_1$ and $μ_4$ are the membership functions of a fuzzy function. I think that the reviewer is talking about the time increment update rather than the fuzzy usage and specific formula.</p> <p>How can stability of this formula be tested, please?</p> <p>EDIT: </p> <p>membership functions are represented in following graph: <a href="https://i.stack.imgur.com/rJ3vy.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rJ3vy.jpg" alt="enter image description here"></a></p> <p>$e(t)$ is not the absolute difference between angles, just the difference. It can be negative</p>
Stability of PID values update function for quadrotor
<p>In McGeer's 1990 work on Passive Dynamic Walking, the angular momentum equation that you have posted is the angular momentum of the system <em>just</em> before the impact of the swing foot, about the impending impact point. It consists of two parts: </p> <ul> <li><p>the angular momentum due to the velocity of the center of mass about the impact point, given by $\cos(2\alpha_0) ml^2 \Omega_-$; and </p></li> <li><p>the angular momentum due to the velocity of the rotational inertia about the center of mass, given by $r^2_{gyr} ml^2 \Omega_-$.</p></li> </ul> <p>To answer question 1, the equation is already in the form $H = I*\omega + ml^2\Omega$, as you would expect. The challenge comes in because of two factors. </p> <ul> <li><p>First, the overall rotational inertia $I$ about the impact point is defined using a radius of gyration, since the system is rotationally symmetric about the center of mass: $I = r^2_{gyr} ml^2$. </p></li> <li><p>Second, the angular velocity $\Omega_-$ is about the foot of the supporting leg, some of which is lost due to the impact of the swinging foot with the ground. This means the required $\Omega$ to find the angular momentum about the impact point is actually equal to $cos(2\alpha_0) \Omega_-$, leading to the equation as written in McGeer's work.</p></li> </ul> <p>To answer question 2, both $\alpha_0$ and $r_{gyr}$ are constants for a rimless wheel (as @NBCKLY said in his answer):</p> <ul> <li><p>$\alpha_0$ is the angle between legs, given by $\alpha_0 = 2\pi/n$ (where $n$ is the number of legs). This means that in order to have $\alpha_0 = \pi/2$, there would need to be 4 legs on the wheel and the majority of the energy will be lost at every step (not a likely scenario).</p></li> <li><p>$r_{gyr}$ is the radius of gyration, a proxy for the constant rotational inertia of the model about the center of mass. Since the distribution of mass of the model does not change, this value also will not change.</p></li> </ul> <p>There has been lots of work done more recently on the concepts that you are exploring in passive dynamic walking and underactuated robotics which I would invite you to investigate using your preferred research platform.</p>
9826
2016-05-12T14:16:27.713
|wheel|walking-robot|
<p>In Tad McGeer's work, <a href="https://www.google.com.tw/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=1&amp;cad=rja&amp;uact=8&amp;ved=0ahUKEwjz9KOX5NTMAhWEj5QKHbauD9YQFgggMAA&amp;url=http%3A%2F%2Fruina.tam.cornell.edu%2Fresearch%2Fhistory%2Fmcgeer_1990_passive_dynamic_walking.pdf&amp;usg=AFQjCNG1l9gbWCJgcd5mlARdYvOcD123cg&amp;sig2=cE3ne4MxgyCi_9lnRHT3Bw" rel="noreferrer">Passive Dynamic Walking</a>, in 1990, he mentions the rimless wheel model, which is used to approximate the bipedal locomotion. I can't understand why the angular momentum is written as follows.</p> <p>$H^-=(\cos 2\alpha_0+r^2_{gyr})ml^2\Omega^-$</p> <p>I have the following questions:</p> <ol> <li><p>Isn't the angular momentum be $I*\omega$, $m^2l\Omega$ as the paper's notation?</p></li> <li><p>If $\alpha_0$ is $\frac{\pi}{2}$ and $r_{gyr}$ approaches to 0, shouldn't the angular momentum before impact, $H^-$, be negative? Then how the conservation goes?</p></li> </ol>
Angular momentum of rimless wheel in Passive Dynamic Walking
<p>Sounds like a burnt (shorted) motor.</p> <p>If you are not able to inspect the ESC with an oscilloscope, try connecting every combination of the motor leads to each other and spinning the motor by hand. If only one phase is damaged, then one combination of phase wires connected to each other will provide more resistance than the other two.</p>
9828
2016-05-13T12:56:22.223
|brushless-motor|esc|radio-control|
<p>I recently bought a RC car kit and after 10 minutes it stopped going. </p> <p>When I throttle, I can see the motor trying to spin but it will just grind and get hot quite fast.</p> <p>The motor does move if I disconnect it from the big gear, but not as fast as it did when new and it will still get very hot. Also, I can stop it with my fingers with a very slight touch.</p> <p>I don't know anything about motors or ESCs, so I'm not sure if my problem is the motor or the ESC. Did I burn it out?</p>
Brushless motor from RC car won't spin with even small resistance
<p>I'd go for a linear axis. The cheapest version could be salvaged from a printer, or you could buy a small belt and two gears. A brushless DC motor would, however, fit this scenario better. Such systems are easily faster than a human user and easy to control. </p>
9839
2016-05-14T02:54:51.380
|robotic-arm|raspberry-pi|stepper-motor|industrial-robot|automation|
<p>I'm trying to build a test-automation robotic arm which can repeatedly present an ID-tag (such as RFID or NFC card or fob) to a card reader.</p> <p>I suspect our reader fails either (a) after hundreds of presentations or due to fast presentations or (b) at a specific <em>moment</em> in the reader duty cycle.</p> <p>The tag needs to move in a well-controlled manner:</p> <ol> <li>Quickly present the card, </li> <li>Pause (mark)</li> <li>Quickly remove the card,</li> <li>Pause (space)</li> <li>Repeat at 1.</li> </ol> <p>I'm calling the present/remove sequence the mark-space ratio for simplicity.</p> <p>The tests I want to perform involve varying (a) the frequency and (b) the mark-space ratio, to (a) stress-test and (b) boundary-test the re-presentation guard times built into the reader to debounce presentations.</p> <p>The guard times are around 400ms, response around 100ms, so I need something that can move in and out of a 5-10cm range quickly and repeat within those sorts of timescales. </p> <p>The distance the card needs to move depends on the reader model, as they have different field ranges. I want to get through the edge of the field quickly to avoid any inconsistencies in testing.</p> <p>I'm able to do any programming (professional) and simple electromechanical design and build (ex-professional, now hobbyist). I only need to build one, it doesn't have to be particularly robust, but it does need to be fairly accurate with regard to the timings to do the second test.</p> <p>What I've done so far:</p> <p>I've built one version already using a Raspberry Pi, GPIO, a stepper motor with an aluminium arm screwed to a wheel. It works, but it's a bit jerky and too slow, even with a 30cm arm to amplify the motion. It will probably do for the repeat test, but it's not time-accurate enough to do the timing tests.</p> <p>My other design ideas were: </p> <ul> <li>Servo (are these also slow?)</li> <li>Solenoid (fast, but too limited range? and might cause EM?)</li> <li>Motor (too uncontrollable, and will require too much mechanical work for me)</li> <li>Rotating drum (fast, stable, but cannot control mark-space ratio)</li> </ul> <p>I'm not a electro-mechanical design expert, so I'm wondering if I'm missing an electrical device or mechanical design which can do this more easily.</p>
Which mechanical device could repeatedly present an ID tag to a card-reader
<p>Path planning is essentially obstacle avoidance.</p> <p>If there are no obstacles, chances are path planning is overkill. However, one obstacle that you always have and that constrains the configuration space is the robot itself. </p> <p>Now if you have no obstacles and the start and end configurations can be connected by a straight line in configuration space, the task of path planning indeed simplifies to knowing the line that connects them.</p> <p>Especially if the camera captures images frequently, movement will be small.</p> <blockquote> <p>a <em>simple</em> path planning algorithm</p> </blockquote> <p>There's no such thing. You need to define the configuration space (which gets very complicated in higher dimensions), perform all kinds of geometric operations on it, and finally have some algorithm navigate through it.</p> <p>I'd say it's not just over-kill, it's over-annihilation.</p>
9845
2016-05-14T16:58:48.057
|robotic-arm|
<p>I am at the moment trying to convince myself that what I need is a simple path planning algorithm, instead of linearly interpolating between a current and a desired state. </p> <p>I am working with a robot arm (UR) with a camera mounted on its TCP. The application I am trying to create is a simple ball-tracking application which tracks the movement of the ball while always keeping the object within sight. </p> <p>This means that I need some form of path planning algorithm which plans the path between my current state and the desired state. The path should be such that the ball is always kept in focus while the arm moves to the desired state.</p> <p>But then I began questioning myself whether this is overkill, and whether a simple straight-line interpolation wouldn't suffice. I am actually not sure what benefit I would gain by choosing a path planner over a simple interpolation.</p> <p>Interpolation would also generate the path I desire, so why choose a path planner at all?</p> <p>Would someone elaborate? It should be noted that obstacle avoidance is also a part of the task, which could cause trouble for a straight-line interpolation. </p>
Path planning vs. linear interpolation?
<p>The glider definitely is a nonholonomic system, but not (only) because it has no propulsion.</p> <p>As Wikipedia defines it:</p> <blockquote> <p>A nonholonomic system in physics and mathematics is a system whose state depends on the path taken in order to achieve it.</p> </blockquote> <p>So, if an aircraft can reach a pose (position and orientation) in space regardless of its prior states (path taken) then it is holonomic. To simplify it even more:</p> <p>A car is a nonholonomic system. It can occupy any position (in 2D, X and Y translations) and orientation (1 orientation around the Z axis, perpendicular to the road), but what makes it nonholonomic is its steering system, which does not allow it to move sideways. It can occupy the position just 1 mm (or <span class="math-container">$\delta y$</span> distance) to the right of its current position, but ONLY if an adequate path leads the car to that position.</p> <p>An omnidirectional robot (e.g. with mecanum wheels, <a href="https://www.google.com/search?q=omni%20directional%20robot" rel="nofollow noreferrer">Google search it</a>) is holonomic because it can occupy any pose (2 translations, 1 orientation) regardless of its previous states (path taken). It can move to the immediate right, unlike the car, with <span class="math-container">$\delta y$</span> distance.</p> <p>You might want to take a look at underactuated systems. Those are the ones that require external forces.</p>
9850
2016-05-15T10:09:37.157
|quadcopter|control|uav|
<p>Good day I would just like to ask if a fixed wing aircraft such as a glider(without thrust capability therefore needs external forces such as air flow to move constraining its movement) can be considered a non-holonomic system considering the fact that it cannot move freely compared to a quadcopter that is holonomic.</p> <p><strong>I found this information from:</strong> <a href="https://robotics.stackexchange.com/questions/9642/whats-the-difference-between-a-holonomic-and-a-nonholonomic-system">What&#39;s the difference between a holonomic and a nonholonomic system?</a></p> <p>Mathematically:</p> <p>Holonomic system are systems for which all constraints are integrable into positional constraints.</p> <p>Nonholonomic systems are systems which have constraints that are nonintegrable into positional constraints.</p> <p>Intuitively:</p> <p>Holonomic system where a robot can move in any direction in the configuration space.</p> <p>Nonholonomic systems are systems where the velocities (magnitude and or direction) and other derivatives of the position are constraint.</p>
Holonomic and Non-holonomic UAV's: Gliders vs Quadcopters
<p>Yes, you can. </p> <p>Same as for the translations: you extract the 3 angles from the matrix, so you have the starting angles and the target angles. Interpolate as you wish (sync-async, cos-spline-linear) and for each timestamp you can re-create the matrix (knowing the 3 angles and the 3 translations).</p> <p>The advantage of cosine interpolation would be the continuity of higher-order derivatives (same as in the case of the interpolation for the linear axes).</p>
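<p>Here is a minimal sketch of that idea in Python/NumPy and SciPy (the function names and the choice of an "xyz" Euler sequence are mine, not a fixed convention; also note that naively interpolating Euler angles can misbehave near the ±180° wrap-around, where a quaternion slerp would be more robust):</p>

<pre><code>import numpy as np
from scipy.spatial.transform import Rotation as R

def cosine_blend(s):
    """Cosine easing: 0 maps to 0, 1 maps to 1, with zero slope at both ends."""
    return 0.5 * (1.0 - np.cos(np.pi * s))

def interpolate_pose(T0, T1, s):
    """Interpolate two 4x4 homogeneous transforms at parameter s in [0, 1]."""
    a = cosine_blend(s)
    # Translation part: straight blend of the two position vectors.
    p = (1.0 - a) * T0[:3, 3] + a * T1[:3, 3]
    # Rotation part: extract three angles, blend them, rebuild the matrix.
    e0 = R.from_matrix(T0[:3, :3]).as_euler("xyz")
    e1 = R.from_matrix(T1[:3, :3]).as_euler("xyz")
    T = np.eye(4)
    T[:3, :3] = R.from_euler("xyz", (1.0 - a) * e0 + a * e1).as_matrix()
    T[:3, 3] = p
    return T
</code></pre>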
9856
2016-05-16T08:55:30.147
|robotic-arm|stereo-vision|
<p>Is it possible to perform cosine interpolation between two transformation matrices?</p> <p>It makes sense for the translation part, but how about the rotational part?</p>
Cosine interpolation between two transformation matrices?
<p>For your particular question about m = Inf, you should first check if vertex_x(2) = vertex_x(1). Actually, I would implement the check to see if the absolute value of vertex_x(2) - vertex_x(1) &lt; <em>some_very_small_number</em>. If that condition is true, then the equation for that (vertical) line segment is x = vertex_x(1), and you just need to see if the other line segment contains that value of x. If the condition is false, then use the check you already have.</p>
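<p>As a sketch of that branching (written in Python for brevity; the names <code>edge_line</code> and <code>EPS</code> are mine, and translating it back into your MATLAB loop is straightforward):</p>

<pre><code>EPS = 1e-9

def edge_line(p1, p2):
    """Describe the line through two vertices.

    Returns ("vertical", x0) when the edge is (nearly) vertical,
    otherwise ("slope", m, b) for the usual y = m*x + b form.
    """
    (x1, y1), (x2, y2) = p1, p2
    if abs(x2 - x1) &lt; EPS:
        return ("vertical", x1)
    m = (y2 - y1) / (x2 - x1)
    return ("slope", m, y1 - m * x1)
</code></pre>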
9863
2016-05-16T13:40:26.823
|mobile-robot|motion-planning|matlab|
<p>I need to calculate the configuration space obstacle to planning a path with a mobile robot. The idea is to divide the obstacles and the robot in triangles and test whether is there any triangle from the robot colliding with any triangle from the obstacles.</p> <p>The approach to solve this is to test this between two triangles each time so I need to look if any of the 6 edges (3 for each triangle) divide the triangles so 3 vertex from one of them lie in one side and the other 3 vertex lie on the other side of the line.</p> <p>I wrote some code to calculate the line equation (y = m*x + b) and I think it is correct, but I am having problems when the line is vertical (this means that m = -Inf) because MATLAB gives me a NaN when I calculate the equation for it. I am not sure how to handle this.</p> <p>Here you can see a snippet from the code where I test the 3 edges from the robot triangle:</p> <pre><code>for i = 1:1:3 vertex_x = P1(edge(i,:),1); vertex_y = P1(edge(i,:),2); m = (vertex_y(2) - vertex_y(1))/(vertex_x(2) - vertex_x(1)); b = -m*vertex_x(1) + vertex_y(1); for j = 1:1:6 % For each vertex... pto = listaVertices(j,:); if (m*pto(1) + b &gt; pto(2)) % Vertex lies below the edge... cont1 = cont1 + 1; elseif (m*pto(1) + b &lt; pto(2)) % Vertex lies above the edge... cont2 = cont2 + 1; else % Vertex lie inside the edge... % Do nothing end end % 3 vertex on one side and 1 on the others side means they do not % collide. Two of the vertex always lie inside the line (the two vertex % of each edge). if (cont1 == 1 &amp;&amp; cont2 == 3) || (cont1 == 3 &amp;&amp; cont2 == 1) flag_aux = false; % Do not collide... end % Reset the counters for the 3 next edges... cont1 = 0; cont2 = 0; end </code></pre> <p>Anyone could help with this issue?</p>
Configuration space obstacle - calculating collision
<p>Assuming you have solved the inverse kinematics (IK) problem already...</p> <p>I suppose you have a transformation matrix for each joint, built up from one line in the DH table (if you used DH to describe the robot). From the IK you have obtained</p> <p>$Q = [q_1, q_2, q_3, q_4, q_5, q_6]$</p> <p>Having all Q values you can now write:</p> <p>$H_{0,1} = A_1(q_1)$</p> <p>$\Rightarrow$ $H_{0,1}(1:3, 4)$ will give you the $x$, $y$ and $z$ coordinates of joint 1</p> <p>$H_{0,2} = H_{0,1}*A_2(q_2)$</p> <p>$\Rightarrow$ $H_{0,2}(1:3, 4)$ will give you the $x$, $y$ and $z$ coordinates of joint 2</p> <p>$H_{0,3} = H_{0,2}*A_3(q_3)$</p> <p>$\Rightarrow$ $H_{0,3}(1:3, 4)$ will give you the $x$, $y$ and $z$ coordinates of joint 3</p> <p>$H_{0,4} = H_{0,3}*A_4(q_4)$</p> <p>$\Rightarrow$ $H_{0,4}(1:3, 4)$ will give you the $x$, $y$ and $z$ coordinates of joint 4</p> <p>$H_{0,5} = H_{0,4}*A_5(q_5)$</p> <p>$\Rightarrow$ $H_{0,5}(1:3, 4)$ will give you the $x$, $y$ and $z$ coordinates of joint 5</p> <p>$H_{0,6} = H_{0,5}*A_6(q_6)$</p> <p>$\Rightarrow$ $H_{0,6}(1:3, 4)$ will give you the $x$, $y$ and $z$ coordinates of joint 6</p> <p>$H_{0,tool} = H_{0,6}*A_{tool}$</p> <p><strong>where</strong></p> <ul> <li>$A_i$ is the DH-transformation matrix associated with joint $i$.</li> <li>the index $_0$ represents the world frame</li> <li>the index $_{tool}$ represents the tool frame</li> <li>$H_{i,j}$ represents the transformation matrix from frame $i$ to frame $j$</li> </ul>
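<p>Here is a small sketch of the same accumulation in Python/NumPy (assuming the standard DH convention; the helper names and parameter ordering are mine, so adapt them to however your DH table is laid out). Each row of the returned array is one joint origin expressed in the world frame, ready to plot together with the object's world coordinates.</p>

<pre><code>import numpy as np

def dh_matrix(a, alpha, d, theta):
    """Homogeneous transform for one joint, standard DH convention."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,       d],
                     [0.0,     0.0,      0.0,     1.0]])

def joint_positions(dh_params, q):
    """Accumulate H_0,i = A_1 * ... * A_i and collect every joint origin."""
    H = np.eye(4)
    points = [H[:3, 3].copy()]
    for (a, alpha, d, theta_offset), qi in zip(dh_params, q):
        H = H @ dh_matrix(a, alpha, d, theta_offset + qi)
        points.append(H[:3, 3].copy())
    return np.array(points)
</code></pre>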
9871
2016-05-17T08:48:39.467
|kinematics|matlab|visualization|
<p>I am not sure how I should explain this: I am looking for a way to plot the trajectory of a robot arm. An object is seen from the toolFrame frame, but how do I plot the position of each joint such that the frames they use are the same?</p> <p>One way would be to use the world frame as reference, but how would I plot the position of the object relative to the world frame?</p>
convert toolframe coordinate to world frame coordinates?
<p>If the robot's orientation is fixed, then determining its linear displacement is sufficient to determine its location in the world. This is because one unit of "forward" on the robot is equivalent to one unit of "forward" in the world. </p> <p>However, when the robot is allowed to rotate, one unit of "forward" for the robot now corresponds to a partial unit "forward" and a partial unit "sideways". </p> <p>The clearest way to imagine this is to imagine that the robot has rotated 180 degrees, such that it is looking in the direction opposite to that which it had when it started. Now one unit of "forward" for the robot actually translate to one unit of "backward" in the world. Similarly, one unit of "left" now corresponds to one unit of "right" and vice-versa. </p> <p>In this manner, local coordinates, as established relative to the robot, are only useful when the <em>orientation</em> of the robot is known relative to the world. <em>If you don't know which way you're pointed, you can't know which way you're moving.</em></p>
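<p>In the planar case this is just the body-to-world rotation; for example (standard 2D kinematics, not specific to any particular drone):</p> <p>$$\begin{pmatrix}\Delta x_{world}\\ \Delta y_{world}\end{pmatrix} = \begin{pmatrix}\cos\theta &amp; -\sin\theta\\ \sin\theta &amp; \cos\theta\end{pmatrix} \begin{pmatrix}\Delta x_{forward}\\ \Delta y_{left}\end{pmatrix},$$</p> <p>so with $\theta = 180°$ one unit of "forward" becomes minus one unit along the world $x$ axis, which is exactly the situation described above.</p>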
9895
2016-05-20T12:36:19.817
|quadcopter|motion-planning|uav|
<p>Assuming a drone is in two dimensions, it has to predict its future position by calculating its future displacement:</p> <p><a href="https://i.stack.imgur.com/eifIG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eifIG.png" alt="enter image description here"></a></p> <p>For a real quad-rotor, <strong>why should we estimate not only the displacement of the robot in three dimensions but also its change of orientation, its linear velocity and its angular velocity?</strong></p>
Estimating the displacement of a drone in three dimensions
<ol> <li><p>You have to analytically compute all IK solutions. This is basically done with straightforward geometry. Most robotics textbooks with a section on manipulation will have a detailed explanation of this. But you might also want to check out the <a href="http://openrave.org/docs/0.8.2/openravepy/ikfast/" rel="nofollow noreferrer">ikfast</a> library.</p></li> <li><p>I am not sure if your gradient descent solution is guaranteed to have any special properties. But I think it will probably be the closest solution in joint space except in some special cases. Note that there may be reasons to choose a different IK solution. My answer to this post has a list of some metrics that you may be interested in: <a href="https://robotics.stackexchange.com/questions/9228/evaluating-the-similarity-of-two-7-degree-of-freedom-arms/">Evaluating the similarity of two 7 Degree of Freedom Arms</a>.</p></li> <li><p>Yes, I believe that there are circumstances that can cause gradient descent to fail. (But this probably depends on the degrees of freedom and workspace of your arm). See this thread for more info: <a href="https://robotics.stackexchange.com/questions/7181/is-the-geometric-inverse-problems-solution-continuous-for-a-redundant-robot">Is the geometric inverse problem&#39;s solution &quot;continuous&quot; for a redundant robot?</a>. </p></li> </ol>
9904
2016-05-21T04:55:59.443
|robotic-arm|kinematics|inverse-kinematics|jacobian|
<p>I am trying to implement my own inverse kinematics solver for a robot arm. My solution is a standard iterative one, where at each step, I compute the Jacobian and the pseudo-inverse Jacobian, then compute the Euclidean distance between the end effector and the target, and from these I then compute the next joint angles by following the gradient with respect to the end effector distance.</p> <p>This achieves a reasonable, smooth path towards the solution. However, during my reading, I have learned that typically, there are in fact multiple solutions, particularly when there are many degrees of freedom. But the gradient descent solution I have implemented only reaches one solution.</p> <p>So my questions are as follows:</p> <ol> <li><p>How can I compute all the solutions? Can I write down the full forward kinematics equation, set it equal to the desired end effector position, and then solve the linear equations? Or is there a better way?</p></li> <li><p>Is there anything of interest about the particular solution that is achieved by using my gradient descent method? For example, is it guaranteed to be the solution that can be reached the fastest by the robot?</p></li> <li><p>Are there cases when the gradient descent method will fail? For example, is it possible that it could fall into a local minimum? Or is the function convex, and hence has a single global minimum?</p></li> </ol>
Solving Inverse Kinematics with Gradient Descent
<p>A fully charged LiPo battery cell should have a voltage of 4.2 V. Assuming that you have three cells in series, that would give 12.6 V. When you apply a big load, this voltage will of course decrease instantly, but if you measured 11.98 V in open circuit that means the batteries were not fully charged. I guess your batteries were not charged at all.</p> <p>Here is the discharge characteristic of the Samsung ICR18650, taken from this <a href="http://budgetlightforum.com/node/33814" rel="nofollow noreferrer">site</a>.</p> <p><a href="https://i.stack.imgur.com/slL38.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/slL38.png" alt="enter image description here"></a></p> <p>A discharge current of 7 A will drain your battery flat in 20 minutes.</p> <p>Furthermore, a discharged battery's voltage is not the same when the load is still connected and when it is in open circuit. After disconnecting the load the battery voltage will rise; this is the OCV (open circuit voltage).</p> <p><a href="https://i.stack.imgur.com/Yrijp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Yrijp.png" alt="enter image description here"></a></p> <p>This is a figure of a LiPo battery pack, 12 cells in series. When the load is disconnected the voltage rises from 2.7 V to ~3.2 V in a short period of time (PB 6 voltage). This happens with your batteries too, but if you reconnect the motor the voltage will drop back.</p> <p>And finally, if you measure 5-6 V on your battery pack that means one cell's voltage is around 2 V and the cells are probably damaged now. The Samsung ICR18650's discharge cut-off voltage is 2.75 V, which means that you should not use it when the voltage is lower. Actually, you should stop using the cells and recharge them when the voltage drops to 3 V.</p> <p>If a cell is heating up it means that its internal resistance has risen. The dissipated heat will be:</p> <p>$$ P = R_{internal} \times I_{discharge}^2 $$</p> <p>This means that a part of the stored energy is wasted on heating the cell itself (which is very <strong>DANGEROUS</strong> as LiPo cells can easily catch fire). The effective battery capacity is lower in this case.</p> <hr> <p>Without load, your motor should draw about 800 mA and the batteries should last 2.5 hours with that. I suggest making some tests without any load attached to your motor, or with just some light load. These batteries would be capable of supporting that; just make sure that they are fully charged. But probably you will need new cells too.</p>
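<p>To put the heating term in perspective with a made-up figure (the 100 mΩ internal resistance here is purely illustrative, not a measured value for these cells): at a 7 A draw, $P = 0.1 \times 7^2 \approx 4.9$ W would be dissipated inside a single cell, which is easily enough to make it noticeably hot.</p>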
9918
2016-05-22T10:44:40.757
|motor|battery|
<p>I am trying to run this <a href="http://www.amazon.in/Johnson-geared-Specially-Robowar-competitions/dp/B010V2O6CO?ie=UTF8&amp;psc=1&amp;redirect=true&amp;ref_=oh_aui_detailpage_o01_s00" rel="nofollow">motor</a>. </p> <p>Using the batteries stated in the title. The motor requires 12 V and I am supplying 11.98V to the motor, through a motor driver. After a while, the motor keeps slowing down and the battery voltage drops down to 5-6 V, but after I remove the battery from the motor driver it again shows 11.9V.</p> <p>Is this battery capable enough to run my motors, or do I need a new one?</p>
Tring to run 12 V DC geared motor using Samsung Li Ion ICR16850 batteries
<p>You don't need to send any MAVLink message; you can simply change the parameter <strong>SR1_EXTRA_1</strong> to the desired attitude data rate and your message rate will change. Similarly, you can change the other stream-rate parameters to change the data rate of other messages. More information on which messages belong to which stream can be found at <a href="https://github.com/ArduPilot/ardupilot/blob/master/ArduCopter/GCS_Mavlink.cpp#L821" rel="nofollow">https://github.com/ArduPilot/ardupilot/blob/master/ArduCopter/GCS_Mavlink.cpp#L821</a></p>
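<p>If you do want to set that parameter from code rather than from a ground station, one way (sketched here with pymavlink rather than the C headers; the connection string is a placeholder and the 10 Hz stream rate is just an example) is to send a PARAM_SET message:</p>

<pre><code>from pymavlink import mavutil

# Placeholder connection string - use your own serial port / baud rate.
master = mavutil.mavlink_connection("/dev/ttyUSB0", baud=57600)
master.wait_heartbeat()

# Ask for the "extra1" stream (which contains ATTITUDE) at 10 Hz.
master.mav.param_set_send(
    master.target_system,
    master.target_component,
    b"SR1_EXTRA_1",
    10.0,
    mavutil.mavlink.MAV_PARAM_TYPE_REAL32,
)
</code></pre>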
9923
2016-05-22T17:41:50.193
|quadcopter|c++|ardupilot|mavlink|
<p>I am using Mavlink protocol (in c++) to communicate with the ArduPilotMega, I am able to read messages such as <a href="https://pixhawk.ethz.ch/mavlink/#ATTITUDE" rel="nofollow">ATTITUDE</a> for example.<br> I am currently getting only 2Hz (message rate) and I would like to increase it. I found out that I should use <a href="https://pixhawk.ethz.ch/mavlink/#MESSAGE_INTERVAL" rel="nofollow">MESSAGE_INTERVAL</a> in order to change it, and that I probably need to use the command <code>MAV_CMD_SET_MESSAGE_INTERVAL</code> to set it.</p> <p>So my question is, how do I send that command using mavlink in c++?</p> <p>I tried doing this with the code below but it did not work, I guess that I should use the command that I mentioned above but I don't know how.</p> <pre><code>mavlink_message_t command; mavlink_message_interval_t interval; interval.interval_us = 100000; interval.message_id = 30; mavlink_msg_message_interval_encode(255, 200, &amp;command, &amp;interval); p_sensorsPort-&gt;write_message(command); </code></pre> <p><strong>Update:</strong> I also tried this code below, maybe I am not giving it the right system id or component id.</p> <pre><code>mavlink_message_t command; mavlink_command_long_t interval; interval.param1 = MAVLINK_MSG_ID_ATTITUDE; interval.param2 = 100000; interval.command = MAV_CMD_SET_MESSAGE_INTERVAL; interval.target_system = 0; interval.target_component = 0; mavlink_msg_command_long_encode(255, 0, &amp;command, &amp;interval); p_sensorsPort-&gt;write_message(command); </code></pre> <p>Maybe I am missing something about the difference between <code>target_system</code>, <code>target_component</code> and <code>sysid</code>, <code>compid</code>. I tried few values for each but nothing worked. Is there any ack that will be able to tell me if it even got the command? </p>
Change Message Interval ArduPilot
<p>I have found out that my code works. It is just that most of the literature I have read uses lidar or sonar sensors for the histogram updates. I had assumed that in the case of a stereo vision set-up all sectors are updated simultaneously; however, only the sectors in the field of view of the camera are updated, unlike in the lidar implementations that sample a complete 360-degree sweep of the surroundings. Another thing I found out is that when deriving the X and Y coordinates from the depth map, the resulting map is a mirror image of the actual surroundings, thus the sectors must be labeled counterclockwise. I used the formula from the NI paper linked in the code. The same is also true when using the camera matrix with OpenCV to obtain the real-world X,Y coordinates from the depth map. I shall edit this question into a clearer one soon :)</p>
9925
2016-05-22T20:20:04.140
|mobile-robot|motion-planning|vfh|path-planning|
<p>Good day</p> <p>Note: I have found out that my code works. I have placed a minor explanation to be further expounded.</p> <p>I have been having trouble obtaining the right directional output from my implementation. I noticed that every time I put an obstacle at the right, it gives left, it gives the right steering direction, the problem is with the presence of a left obstacle where it still tends to steer itself towards that obstacle. I have checked the occupancy map generated using matlab and was found to be correct. I couldn't pinpoint what is exactly wrong with my code for I have been debugging this for almost a week now and was hoping if someone can see the error I cannot.</p> <p>Here is my code implementation:</p> <pre><code>//1st:: Create Occupancy Grid from Data------------------------------------------------- // &gt; Cell Size/Grid Resolution = 5meters/33 cells = 0.15meters each = 15cm each // &gt; Grid Dimension = 5meters by 5meters / 33x33 cells //Field of view of robot is 54 degrees //or 31 meters horizontal if subject is 5 meters away // &gt; Robot Width = 1meter 100cm // &gt; Because the focal length of the lens is roughly the same as the width of the sensor, // it is easy to remember the field of view: at x meters away, you can see about x meters horizontally, // assuming 4x3 stills mode. Horizontal field of view in 1080p video mode is 75% // of that (75% H x 55% V sensor crop for 1:1 pixels at 1920x1080). //Converting the Position into an Angle-------------------------------------------- //from: // A. https://decibel.ni.com/content/docs/DOC-17771 // B. "USING THE SENSOR KINECT FOR LANDMARK" by ANDRES FELIPE ECHEVERRI GUEVARA //1. Get angle // &gt; Each pixel from the image represents an angle // &gt; angle = ith pixel in row * (field of view in degrees/number of pixels in row) // &gt; field of view of Pi camera is 54 degrees horizontal //2. 
Convert Polar to Cartesian // &gt; x = z*cos(angle) // &gt; y = z*sin(angle) int arrOccupancyGrid[33][33]; float matDepthZ[33][33]; int robotPosX = 0; int robotPosY = 0; int xCoor=0; //Coordinates of Occupancy Map int yCoor=0; int xPosRobot=0; //Present cordinates of robot int yPosRobot=0; float fov = 54; // 54 degrees field of view in degrees must be converted to radians float nop = 320; //number of pixels in row int mapDimension = 33; // 33by33 array or 33*15cm = 5mby5m grid int mapResolution = 15; //cm //Limit max distance measured /* for(i=0; i&lt; nop ;i++){ if(arrDepthZ.at(i)&gt;500){ arrDepthZ.at(i) = 500; } } */ for (i=0 ; i &lt; nop; i++){ //Process data/Get coordinates for mapping //Get Angle int angle = ((float)(i-160.0f) * ((float)fov/(float)nop)); //if robot is centered at zero add -160 to i //cout &lt;&lt; "arrDepthZ " &lt;&lt; arrDepthZ.at(i) &lt;&lt; endl; //cout &lt;&lt; "i " &lt;&lt; i &lt;&lt; endl; //cout &lt;&lt; "fov " &lt;&lt; fov &lt;&lt; endl; //cout &lt;&lt; "nop " &lt;&lt; nop &lt;&lt; endl; //cout &lt;&lt; "angle " &lt;&lt; i * (fov/nop) &lt;&lt; endl; arrAngle.push_back(angle); //Get position X and Y use floor() to output nearest integer //Get X -------- xCoor = (arrDepthZ.at(i) / mapResolution) * cos(angle*PI/180.0f); //angle must be in radians because cpp //cout &lt;&lt; "xCoor " &lt;&lt; xCoor &lt;&lt; endl; arrCoorX.push_back(xCoor); //Get Y -------- yCoor = (arrDepthZ.at(i) / mapResolution) * sin(angle*PI/180.0f); //angle must be in radians because cpp //cout &lt;&lt; "yCoor " &lt;&lt; yCoor &lt;&lt; endl; arrCoorY.push_back(yCoor); //Populate Occupancy Map / Cartesian Histogram Grid if((xCoor &lt;= 33) &amp;&amp; (yCoor &lt;= 33)){ //Condition Check if obtained X and Y coordinates are within the dimesion of the grid arrOccupancyGrid[xCoor][yCoor] = 1; //[increment] equate obstacle certainty value of cell by 1 matDepthZ[xCoor][yCoor] = arrDepthZ.at(i); } //cout &lt;&lt; "arrCoorX.size()" &lt;&lt; arrCoorX.size() &lt;&lt; endl; //cout &lt;&lt; "arrCoorY.size()" &lt;&lt; arrCoorY.size() &lt;&lt; endl; } for (i=0 ; i &lt; arrCoorX.size(); i++){ file43 &lt;&lt; arrCoorX.at(i) &lt;&lt; endl; } for (i=0 ; i &lt; arrCoorY.size(); i++){ file44 &lt;&lt; arrCoorY.at(i) &lt;&lt; endl; } for (i=0 ; i &lt; arrDepthZ.size(); i++){ file45 &lt;&lt; arrDepthZ.at(i) &lt;&lt; endl; } //------------------------- End Create Occupancy Grid ------------------------- //2nd:: Create 1st/Primary Polar Histogram ------------------------------------------------------ //1. Define angular resolution alpha // &gt; n = 360degrees/alpha; // &gt; set alpha to 5 degrees resulting in 72 sectors from 360/5 = 72 ///// change 180/5 = 35 //2. Define number of sectors (k is the sector index for sector array eg kth sector) // &gt; k=INT(beta/alpha), where beta is the direction from the active cell //to the Vehicle Center Point (VCP(xPosRobot, yPosRobot)). 
Note: INT asserts k to be an integer
cout &lt;&lt; "2nd:: Create 1st/Primary Polar Histogram" &lt;&lt; endl;

//Put this at the start of the code away from the while loop ----------------
int j=0;
int sectorResolution = 5; //degrees 72 sectors, alpha
int sectorTotal = 36; // 360/5 = 72 //// change 180/5 = 36
int k=0; //sector index (kth)
int maxDistance = 500; //max distance limit in cm

//vector&lt;int&gt;arrAlpha; //already initiated
float matMagnitude[33][33]; //m(i,j)
float matDirection[33][33]; //beta(i,j)
float matAngleEnlarge[33][33]; //gamma(i,j)
int matHconst[33][33]; //h(i,j) either = 1 or 0

float robotRadius = 100; //cm
float robotSafeDist = 50; //cm
float robotSize4Sector = robotRadius + robotSafeDist;

for (i=0; i&lt;sectorTotal; i++){
    arrAlpha.push_back(i*sectorResolution);
}
//---------end initiating sectors----------

//Determine magnitude (m or matMagnitude) and direction (beta or matDirection) of each obstacle vector
//Modify m(i,j) = c(i,j)*(a-bd(i,j)) to m(i,j) = c(i,j)*(dmax-d(i,j)) from sir Lounell Gueta's work (RAL MS)
//Compute beta as is, beta(i,j) = arctan((yi-yo)/(xi-xo))
//Enlarge robot and compute the enlargement angle (gamma or matAngleEnlarge)
int wew = 0;
int firstfillPrimaryH = 0; //flag for arrPrimaryH storage

for (k=0; k&lt;sectorTotal; k++){
    for (i=0; i&lt;mapDimension; i++){
        for (j=0; j&lt;mapDimension; j++){
            //cout &lt;&lt; "i" &lt;&lt; i &lt;&lt; "j" &lt;&lt; j &lt;&lt; "k" &lt;&lt; k &lt;&lt; endl;
            //cout &lt;&lt; "mapDimension" &lt;&lt; mapDimension &lt;&lt; endl;
            //cout &lt;&lt; "sectorTotal" &lt;&lt; sectorTotal &lt;&lt; endl;

            //Compute magnitude m, direction beta, and enlargement angle gamma
            matMagnitude[i][j] = (arrOccupancyGrid[i][j])*( maxDistance-matDepthZ[i][j]); //m(i,j)
            //cout &lt;&lt; "matMagnitude[i][j]" &lt;&lt; (arrOccupancyGrid[i][j])*( maxDistance-matDepthZ[i][j]) &lt;&lt; endl;

            matDirection[i][j] = ((float)atan2f( (float)(i-yPosRobot), (float)(j-xPosRobot))*180.0f/PI); //beta(i,j)
            //cout &lt;&lt; "matDirection[i][j]" &lt;&lt; ((float)atan2f( (float)(i-yPosRobot), (float)(j-xPosRobot))*180.000/PI) &lt;&lt; endl;
            //cout &lt;&lt; "matDepthZ[i][j]" &lt;&lt; matDepthZ[i][j] &lt;&lt; endl;

            if(matDepthZ[i][j] == 0){
                //if matDepthZ[i][j] == 0; obstacle is very far thus path is free, no enlargement angle
                matAngleEnlarge[i][j] = 0; //gamma(i,j)
                //cout &lt;&lt; "matAngleEnlarge[i][j]" &lt;&lt; 0 &lt;&lt; endl;
            }
            else{
                //if matDepthZ[i][j] &gt; 0 there is an obstacle so compute enlargement angle
                matAngleEnlarge[i][j] = asin( robotSize4Sector / matDepthZ[i][j])*180/PI; //gamma(i,j)
                //cout &lt;&lt; "matAngleEnlarge[i][j]" &lt;&lt; asin( robotSize4Sector / matDepthZ[i][j])*180.0f/PI &lt;&lt; endl;
            }

            wew = k*sectorResolution; //k*alpha
            //cout &lt;&lt; "wew" &lt;&lt; k*sectorResolution &lt;&lt; endl;

            //Check if magnitude is a part of the sector
            if ( ((matDirection[i][j]-matAngleEnlarge[i][j]) &lt;= wew) &amp;&amp; (wew &lt;= (matDirection[i][j]+matAngleEnlarge[i][j])) ){
                matHconst[i][j]=1; //Part of the sector
                //cout &lt;&lt; "Part of the sector" &lt;&lt; endl;
                //cout &lt;&lt; "matHconst[i][j]=1" &lt;&lt; matHconst[i][j] &lt;&lt; endl;
            }
            else{
                matHconst[i][j]=0; //Not part of the sector
                //cout &lt;&lt; "Not part of the sector" &lt;&lt; endl;
                //cout &lt;&lt; "matHconst[i][j]=0" &lt;&lt; matHconst[i][j] &lt;&lt; endl;
            }

            //Compute primary polar histogram Hp(k)
            //cout &lt;&lt; "firstfillPrimaryH" &lt;&lt; firstfillPrimaryH &lt;&lt; endl;
            if (firstfillPrimaryH==0){ //If first fill at sector
                //cout &lt;&lt; "matMagnitude[i][j]" &lt;&lt; matMagnitude[i][j] &lt;&lt; endl;
                //cout &lt;&lt; "matHconst[i][j]" &lt;&lt; matHconst[i][j] &lt;&lt; endl;
                float temp = matMagnitude[i][j]*matHconst[i][j];
                //cout &lt;&lt; "matMagnitude[i][j]*matHconst[i][j]" &lt;&lt; temp &lt;&lt; endl;
                arrPrimaryH.push_back(temp);
                firstfillPrimaryH=1; //Trigger flag
                //cout &lt;&lt; "arrPrimaryH kth" &lt;&lt; arrPrimaryH.at(k) &lt;&lt; endl;
            }
            else{ //If sector filled previously
                arrPrimaryH.at(k) = arrPrimaryH.at(k)+(matMagnitude[i][j]*matHconst[i][j]);
                //cout &lt;&lt; "arrPrimaryH kth" &lt;&lt; arrPrimaryH.at(k) &lt;&lt; endl;
            }
        }
    }
    firstfillPrimaryH=0; //Reset flag
}
</code></pre>
VFH (Vector Field Histogram+): Obtaining the Primary Polar Histogram
<p>Because the pose has six degrees of freedom, the covariance matrix representing its uncertainty should be $6 \times 6$. To determine how the covariance matrix propagates after each new IMU measurement, you must determine the <em>error state</em> propagation equation, where the error is a minimal parameterization of the error in your state. How to do this for a rotation parameterized as a unit quaternion is explained in detail in Section 2 of <a href="http://www-users.cs.umn.edu/~trawny/Publications/Quaternions_3D.pdf" rel="nofollow">Indirect Kalman Filter for 3D Attitude Estimation</a> by Trawny. Note that this implementation explicitly estimates the gyroscope bias as part of the state vector. You would need to replace that with the position and add your accelerometer measurements. In fact, I recommend reading that document (or at least the first two sections) very carefully; I learned a lot the first time I read it.</p>
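<p>For reference, the error-state propagation itself has the familiar discrete-time form (this is a generic sketch, not notation taken from Trawny or from the question): with a $6 \times 1$ error state $\delta x$ (a $3 \times 1$ attitude error plus a $3 \times 1$ position error), each IMU step updates the covariance as $$P_{k+1} = \Phi_k P_k \Phi_k^T + Q_k,$$ where $\Phi_k$ is the Jacobian of the integrated error-state dynamics with respect to $\delta x$ and $Q_k$ is the discrete-time noise covariance built from the gyroscope and accelerometer noise parameters. Trawny derives $\Phi_k$ and $Q_k$ exactly for the attitude/bias case; the position (and, in practice, velocity) rows have to be added analogously.</p>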
9926
2016-05-23T09:01:47.133
|slam|ekf|jacobian|
<p>This question is strongly related to my other question over <a href="https://robotics.stackexchange.com/questions/9129/how-to-compute-the-error-function-in-graph-slam-for-3d-poses/9137#9137" title="here">here</a>.</p> <p>I am estimating 6-DOF poses $x_{i}$ of a trajectory using a graph-based SLAM approach. The estimation is based on 6-DOF transformation measurements $z_{ij}$ with uncertainty $\Sigma_{ij}$ which connect the poses. </p> <p>To avoid singularities, I represent both poses and transforms with a 7x1 vector consisting of a 3D vector and a unit quaternion:</p> <p>$$x_{i} = \left( \begin{matrix} t \\ q \end{matrix} \right)$$</p> <p>The optimization yields 6x1 manifold increment vectors </p> <p>$$ \Delta \tilde{x}_i = \left( \begin{matrix} t \\ log(q) \end{matrix} \right)$$</p> <p>which are applied to the pose estimates after each optimization iteration:</p> <p>$$ x_i \leftarrow x_i \boxplus \Delta \tilde{x}_i$$</p> <p>The uncertainty gets involved during the Hessian update in the optimization step:</p> <p>$$ \tilde{H}_{[ii]} += \tilde{A}_{ij}^T \Sigma_{ij}^{-1} \tilde{A}_{ij} $$</p> <p>where </p> <p>$$ \tilde{A}_{ij} \leftarrow A_{ij} M_{i} = \frac{\partial e_{ij}(x)}{\partial x_i} \frac{\partial x_i \boxplus \Delta \tilde{x}_i}{\partial \Delta x_i} |_{\Delta \tilde{x}_i = 0}$$</p> <p>and</p> <p>$$ e_{ij} = log \left( (x_{j} \ominus x_{i}) \ominus z_{ij} \right) $$</p> <p>is the error function between a measurement $z_{ij}$ and its estimate $\hat{z}_{ij} = x_j \ominus x_i$. Since $\tilde{A}_{ij}$ is a 6x6 matrix and we're optimizing for 6 DOF, $\Sigma_{ij}$ is also a 6x6 matrix.</p> <hr> <p>Based on IMU measurements of acceleration $a$ and rotational velocity $\omega$ one can build up a 6x6 sensor noise matrix</p> <p>$$ \Sigma_{sensor} = \left( \begin{matrix} \sigma_{a}^2 &amp; 0 \\ 0 &amp; \sigma_{\omega}^2 \end{matrix} \right) $$</p> <p>Further we have a process model which integrates acceleration twice and rotational velocity once to obtain a pose measurement.</p> <p>To properly model the uncertainty, both sensor noise and integration noise have to be considered (anything else?). Thus, I want to calculate the uncertainty as</p> <p>$$ \Sigma_{ij}^{t} = J_{iterate} \Sigma_{ij}^{t-1} J_{iterate}^T + J_{process} \Sigma_{sensor} J_{process}^T$$</p> <p>where $J_{iterate} = \frac{\partial x_{i}^{t}}{\partial x_{i}^{t-1}}$ and $J_{process} = \frac{\partial x_{i}^{t}}{\partial \xi_{i}^{t}}$ and the current measurement is $\xi_{i}^{t} = [a,\omega]$.</p> <p>According to this formula $\Sigma_{ij}$ is a 7x7 matrix, but I need a 6x6 matrix instead. I think I have to include a manifold projection somewhere, but how?</p> <hr> <p>For further details take a look at the following publication, especially at their algorithm 2:</p> <p>G. Grisetti, R. Kümmerle, C. Stachniss, and W. Burgard, “A tutorial on graph-based SLAM,” IEEE Intelligent Transportation Systems Magazine, vol. 2, no. 4, pp. 31–43, 2010.</p> <hr> <p>For a similar calculation of the uncertainty take a look at the end of section III A. in:</p> <p>Corominas Murtra, Andreu, and Josep M. Mirats Tur. "IMU and cable encoder data fusion for in-pipe mobile robot localization." Technologies for Practical Robot Applications (TePRA), 2013 IEEE International Conference on. IEEE, 2013.</p> <hr> <p>... or section III A. and IV A. in:</p> <p>Ila, Viorela, Josep M. Porta, and Juan Andrade-Cetto. "Information-based compact Pose SLAM." Robotics, IEEE Transactions on 26.1 (2010): 78-93.</p>
Calculate the uncertainty of a 6-dof pose for graph-based SLAM
<p>Some very general advice.</p> <p>My first impression from your question is that you are struggling with the fact that no hardware provides infinite resources. You can either slow down your control loops or look for better hardware (which may not exist).</p> <p>If your control has nested loops, then run each one as slowly as possible. Are you controlling position at 20kHz? Pull that out and run it at 500Hz (a little faster if your mechanical mechanism is designed for high performance). Is your DAQ measuring current for the controller? Try running your current control at 5kHz but commutating at 20kHz. </p> <p>All those numbers assume a meso-scale device using electro-magnetic actuators. </p> <p>Regarding interrupts, the rule of thumb is to do as little as possible, as quickly as possible, inside the interrupt and everything else outside it. However, it feels like you might be using a timer interrupt to substitute for a real-time task scheduler (nothing wrong with that), in which case you will want to do everything needed for that cycle in the interrupt. Add a time-overrun check at the end so that if your interrupt is taking longer than 50us you know about it and can adjust your strategy or optimize code.</p> <p>If you are writing interrupts then you need to go learn the details about data corruption issues. They are not scary and they are straightforward to manage on all modern processors. You can protect data in your idle loop by disabling interrupts in critical sections or using higher-level synchronization constructs if you really need to. </p> <p>A final thought: even though you are coding in C++, you should read the processor datasheet and instruction set so that you have an idea of the capabilities and limitations of the hardware, if you have not already. </p> <p>EDIT:</p> <p>From the limited code you added it looks like high-level control for a series elastic actuator. The spring will dominate the mechanical time constant of the actuator system and you probably do not need that loop to run at 20kHz in lockstep with your motor commutation.</p>
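<p>To make the overrun check and the critical-section advice above concrete, here is a bare-bones sketch. All the function names (<code>read_timer_us()</code>, <code>disable_irq()</code>, and so on) are placeholders for whatever your Zynq BSP actually provides, not real API calls; the point is the structure, not the exact code.</p> <pre><code>#include &lt;cstdint&gt;

// Placeholder declarations: substitute your BSP / driver functions here.
uint32_t read_timer_us();
void run_current_control();
void run_position_control();
void run_ethercat_service(float setpoint);
void run_daq();
void disable_irq();
void enable_irq();

static volatile bool  overrun = false;       // set in the ISR, read in the idle loop
static volatile float sharedSetpoint = 0.0f; // example of data shared between contexts
static uint32_t tick = 0;

// 20 kHz timer interrupt: only the time-critical work lives here.
extern "C" void timer_isr(void)
{
    const uint32_t tStart = read_timer_us(); // placeholder: free-running microsecond timer

    run_current_control();                   // placeholder: fast inner loop, every tick
    if (++tick % 40 == 0) {
        run_position_control();              // placeholder: slower outer loop at 500 Hz
    }

    // Overrun check: a 20 kHz period is 50 us.
    if (read_timer_us() - tStart &gt; 50) {
        overrun = true;                      // inspect/log this flag from the idle loop
    }
}

// Idle loop: copy shared data inside a short critical section before using it.
void idle_loop(void)
{
    for (;;) {
        disable_irq();                       // placeholder: mask the timer interrupt
        const float setpointCopy = sharedSetpoint;
        enable_irq();

        run_ethercat_service(setpointCopy);  // placeholder
        run_daq();                           // placeholder
    }
}
</code></pre>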
9932
2016-05-23T21:08:26.110
|c++|interrupts|
<p>I'm working on a robotics project where I have 3 services running. I have my sensor DAQ, my logic ISR (motor controller at 20kHz) and my EtherCAT slave controller.</p> <p>DAQ and EtherCAT run in the idle loop and the logic runs during an interrupt. The logic does some calculations and controls the motor. The EtherCAT service (kinda like CANbus) runs together with my DAQ in the idle loop. I can not run the DAQ in the interrupt because that leaves me with less than 100ns for the EtherCAT service to run.</p> <p>I'm not sure whether this is the right way to do this, especially considering all the scary things I've read regarding data corruption when using interrupts.</p> <p>Does anyone have some nice ideas on how to handle these services?</p> <p>I'm running all my code on a Zynq 7020 (on the ARM Cortex) and it's written in C++.</p> <p>Here is an example of my code:</p> <pre><code>/**
 * Get all sensor data
 */
void Supervisor::communication(void) {
    // Get all the sensors data
    dispatchComm.getData(&amp;motorAngle, &amp;motorVelocity, &amp;jointAngle, &amp;springAngle, &amp;tuningParameter);
}

/**
 * Run all the logic
 */
void Supervisor::logic(void) {
    dispatchLogic.calculate(tuningParameter, motorAngle, motorVelocity, jointAngle, springAngle);
    dispatchLogic.getData(&amp;angle, &amp;magnitude);

    // Dispatch values to the motor drive
    dispatchComm.setMotorDriveSetpoint(angle, magnitude);
    dispatchComm.writeToPiggyback((uint32_t) (tuningParameter), motorAngle, motorVelocity);
}
</code></pre>
How to split tasks between interrupts and the main loop on a bare metal controller?
<p>You haven't given a nominal (average or anticipated "normal") current draw for what you expect during the routine operation, so what I'm going to do will probably give you an over-specified battery pack for what you need. </p> <p>Typically you would have some "worst case" load (all your sensors powered on, microcontroller powered on, all motors at full load, etc.) - this will provide the worst-case current draw and will establish what kind of a "C" rating the battery pack needs to have (more on this later).</p> <p>Also, you typically aren't going to be operating at worst-case all the time, so you assume maybe your average duty cycle has all your electronics on and maybe your motors at 60% - again, this would be the <em>average</em> motor power during the entire time your device is powered on. This would establish what total capacity was required to provide the desired run time. </p> <p>A "C" rate is the current that would drain (or charge) 100% of battery capacity in 1 hour. So, for a 1000mAh battery, 1C is (1000mAh/1h) = 1000mA, or 1A. Different battery styles and chemistries have different charging and discharging C rates, or maximum allowable current limits. The nice thing about using a C rate instead of an actual current limit is that it scales correctly for all batteries of a particular style and chemistry, regardless of capacity. </p> <p>So, the easiest way to determine capacity is to figure your nominal total load and multiply that by your run time requirement. The best way to work with load is to multiply the current for each component by its operating voltage and get an operating <em>power</em>. Sum all of your component power requirements, then divide that by the battery pack's nominal voltage. Once you get a total current requirement <em>in terms of your battery voltage</em>, then multiply that by your desired run time to get required battery capacity. </p> <p>In your case, you don't give an <em>average</em> load current, just a <em>maximum</em> current. So, based on that, you need a $(12\mbox{V})*(9.5\mbox{A})*(4\mbox{h}) = 456\mbox{Wh}$ battery. This is a significant amount of capacity.</p> <p>If you are using a lead-acid battery, with a nominal voltage of 12V, then you need a capacity of 38Ah. If you are using a LiPO battery, with a nominal cell voltage of 3.6V, then you would need 3 (10.8V) or 4 (14.4V) in series (3S or 4S pack), and your required capacity is 42.2Ah or 31.7Ah, respectively. </p> <p>Depending on how detailed you want to be with your battery pack specifications (or how stringent your requirements are) you may want to dig a little deeper with the pack design. </p> <p>For instance, quality battery manufacturers should provide a curve that shows battery capacity versus discharge limits. As battery discharge current increases, the pack heats up and the internal resistance of the battery increases. This means that more power is wasted in the battery pack, reducing the effective output capacity of the battery.</p> <p>More important even than that is checking the battery cycles versus depth of discharge. Some batteries could give you 2000 cycles if you discharge down to 40 percent state of charge but only 500 cycles if you went to 20 percent state of charge. Then it's up to you to determine what the benefit is of driving to a lower state of charge. 
</p> <p>For instance, if you <em>need</em> to have 10Ah of capacity, you could either go to a (10/0.6) = 16.6Ah battery and only go down to a 40% state of charge during one "run", or you could use a (10/0.8) = 12.5Ah battery and push down to a 20% state of charge. Both packs get you the capacity you need, but you'll wear out the smaller pack first because you're driving it to a deeper depth of discharge. However, the benefits of the smaller pack (reduced weight, reduced volume) may outweigh its cost (literally, the cost of replacing the packs on a 4:1 basis). The smaller pack may also actually be less expensive than the larger pack, which could also help justify that decision. </p> <p>I'll add too that, while you may commonly "float" a lead-acid battery to near 100% state of charge, it's bad (read: fires happen) when you try to do the same for lithium chemistry batteries. That is, lithium batteries may typically only ever get charged to 90% state of charge. </p> <p><strong>So, in summary</strong> (a worked numeric sketch follows the list), </p> <ol> <li>Multiply your load current by the load voltage to get load power. Do this for each device you want to run. </li> <li>Add these load powers together to get the total load power required, then multiply <em>that</em> by the desired run time to get the capacity requirement. </li> <li>Choose a battery chemistry. This is generally based on specific energy (how much the batteries weigh for a given capacity) and energy density (how much space the batteries take up for a given capacity). </li> <li>Based on the chemistry selected, add <em>nominal</em> cell voltages together until the desired operating voltage is achieved. </li> <li>Using the same number of batteries selected above to achieve operating voltage, add the <em>maximum</em> cell voltages together and ensure that no maximum voltage specification is exceeded. </li> <li>Divide the total capacity requirement in Wh or mWh by the <em>nominal</em> pack voltage calculated in step 4 to get the ideal battery capacity. </li> <li>Find the allowable charge limits on the battery, based on depth of discharge limits and common charge limits. For instance, if the battery manufacturer suggests limiting discharge to 40% state of charge (a 60% depth of discharge), and suggests not charging above 90% state of charge, then the battery only really has 50% of its rated capacity available as useful output. </li> <li>Find the actual required battery capacity as (ideal capacity)/(max charge - min charge). So if you need 10Ah of useful capacity from a pack that can only run between 40% and 90%, you need (10Ah)/(0.9 - 0.4) = a 20Ah pack. </li> <li>Finally, design the pack. The number of cells in series was calculated in step 4 above. Find a particular cell to use and get that cell's capacity. Take the actual required capacity found in step 8 above and divide it by the cell capacity to get the number of cells to put in parallel. Take the ceiling of that number, which means if you find you need 2.05 cells in parallel, you need 3 cells in parallel to meet your requirements. </li> <li>The total number of batteries required is the number in series (from step 4) multiplied by the number in parallel (from step 9). </li> </ol>
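<p>As a rough illustration only (the 5Ah cell capacity and the 40%–90% charge window below are assumed values for the sketch, not recommendations), here is how those steps look when run for the 12V, 9.5A, 4h numbers from the question:</p> <pre><code>#include &lt;cmath&gt;
#include &lt;iostream&gt;

int main()
{
    // Steps 1-2: load power and energy requirement
    const double loadPowerW = 12.0 * 9.5;            // V * A
    const double runTimeH   = 4.0;
    const double energyWh   = loadPowerW * runTimeH; // 456 Wh

    // Steps 3-4: assume LiPo chemistry, 4 cells in series (4S)
    const double cellNominalV = 3.6;
    const int    cellsSeries  = 4;
    const double packNominalV = cellsSeries * cellNominalV; // 14.4 V

    // Step 6: ideal capacity
    const double idealAh = energyWh / packNominalV;  // ~31.7 Ah

    // Steps 7-8: assumed usable charge window of 40%..90%
    const double requiredAh = idealAh / (0.9 - 0.4); // ~63.3 Ah rated

    // Step 9: assumed 5 Ah single cells -&gt; number of cells in parallel
    const double cellAh = 5.0;
    const int cellsParallel = static_cast&lt;int&gt;(std::ceil(requiredAh / cellAh));

    std::cout &lt;&lt; "Pack: " &lt;&lt; cellsSeries &lt;&lt; "S" &lt;&lt; cellsParallel &lt;&lt; "P, "
              &lt;&lt; requiredAh &lt;&lt; " Ah rated capacity needed" &lt;&lt; std::endl;
    return 0;
}
</code></pre>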
9941
2016-05-25T10:33:57.157
|motor|battery|
<p>I am trying to run two 12V geared DC motors, each with a no-load current of 800 mA (max) and a load current of up to 9.5 A (max). The runtime needs to be at least 3-4 hours.<br> The motor takes about 10-12 V for operation.<br> I need a proper battery pack for these, but how can I determine the specs I should go for?</p>
Suggestion for correct battery pack
<p>Your setup as-is will certainly not work. Roll and pitch do not map to X and Y one-to-one. You will need to use linear algebra to convert the demanded X-Y motion (see <a href="https://stackoverflow.com/questions/1568568/how-to-convert-euler-angles-to-directional-vector">this Stack Overflow question on how</a>) into roll, pitch, and yaw for control; commanding only one of them will not, in general, move the vehicle along only one of the X/Y axes. </p> <p>For your general question about cascaded PID controllers, that part of your strategy is on the right track. It is possible to cascade PID controllers, and cascaded PID controllers are frequently used in cases where an inner loop or the plant cannot be modeled accurately. </p>
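<p>To make the cascade structure concrete, here is a minimal sketch (P-only controllers standing in for full PIDs, placeholder names, and sign conventions that depend entirely on your frame definitions; it shows the structure rather than being drop-in code):</p> <pre><code>#include &lt;cmath&gt;

// P-only stand-in for a full PID, just to show the cascade structure.
struct Pid {
    float kp;
    float update(float error, float /*dt*/) const { return kp * error; }
};

// Outer velocity loop: velocity error -&gt; desired tilt in the world frame,
// rotated by the current yaw into roll/pitch setpoints for the angle loop.
void velocityLoop(float vxMeas, float vyMeas, float yawRad, float dt,
                  const Pid&amp; pidVx, const Pid&amp; pidVy,
                  float&amp; rollSetpoint, float&amp; pitchSetpoint)
{
    // Setpoint is 0 m/s: hold position against drift.
    const float tiltXWorld = pidVx.update(0.0f - vxMeas, dt);
    const float tiltYWorld = pidVy.update(0.0f - vyMeas, dt);

    // Rotate the world-frame demand into the body frame using yaw, so that
    // X/Y velocity errors map onto roll and pitch correctly at any heading.
    const float c = std::cos(yawRad);
    const float s = std::sin(yawRad);
    pitchSetpoint =  c * tiltXWorld + s * tiltYWorld; // forward/backward tilt
    rollSetpoint  = -s * tiltXWorld + c * tiltYWorld; // left/right tilt
}
</code></pre>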
9946
2016-05-26T13:03:08.070
|quadcopter|control|pid|raspberry-pi|stability|
<p>Good day,</p> <p><strong>Introduction</strong></p> <p>I am currently working on an autonomous quadcopter project. I have currently implemented a cascaded PID controller consisting of two loops. The inner rate loop takes the angular velocity from the gyroscope as measurements. The outer stabilize/angle loop takes in angle measurements from the complementary filter (gyroscope + accelerometer angles). </p> <p><strong>Question:</strong></p> <p>I would like to ask if it is effective to cascade a Lateral Velocity (X and Y - axis) PID controller to the the Angle Controller (Roll and Pitch) to control drift along the X-Y plane. For the outermost PID controller, the setpoint is 0 m/s with the measured velocities obtained from integrating linear accelerations from the accelerometer. This then controls the PID controller responsible for the Pitch (if Y velocity PID) and Roll (if X velocity PID).</p>
Quadcopter: X-Y Velocity PID Controller
<p>Series elastic actuators tend to have more stable force control because the spring filters out the high-frequency motion of the mechanism. A low frequency in the system dynamics means that you can use slower control techniques, which is important when using digital controllers with naive control implementations, and sensors with significant Abbe error and latency.</p> <p>In short, springs make the implementation of the controller easier when the designer does not have the skill or the time to optimize the performance of the actuator.</p> <p>Your intuition is generally correct, except that SEA designers try to use springs without much dissipation. They are often (usually) measuring the displacement of the spring to estimate force.</p> <p>Beware of people using SEA as a buzzword they don't really understand. Once upon a time it referred to a specific thesis and patent from MIT. The Series Elastic Actuator as designed at MIT was hyped in typical MIT fashion as innovative and original, probably a bit beyond what it actually was. Over time it's come to mean the general approach of intentionally inserting extra compliance into an actuator. Meanwhile, using appropriate compliance has been good mechanism design for hundreds of years. </p> <p>The <a href="http://www.cc.gatech.edu/fac/Chris.Atkeson/legs/jh1c.pdf">original</a> <a href="https://groups.csail.mit.edu/lbr/hrg/1995/mattw_ms_thesis.pdf">papers</a> are worth reading. </p> <p>As a final note, the latency in your sensing->actuation response (including the mechanical time constants) is the key issue. If you don't specifically need mechanical filtering (maintaining smooth contact with the environment was one of the original motivations) you can filter electronically.</p>
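<p>A toy sketch of that "measure the spring to estimate force" idea (all names, gains, and the linear-spring assumption are mine, not from any particular robot): with a spring of known stiffness between the motor and the load, the force measurement becomes a deflection measurement, and the spring mechanically low-pass filters impacts before the controller ever sees them.</p> <pre><code>// Toy series-elastic force loop; all names, units, and gains are placeholders.
struct SeriesElasticActuator {
    float springStiffness; // N*m/rad, property of the physical spring
    float kp;              // proportional gain of the force loop

    // Force/torque is estimated from spring deflection instead of a load cell.
    float estimatedTorque(float motorAngle, float jointAngle) const {
        return springStiffness * (motorAngle - jointAngle);
    }

    // Simple proportional force controller around the spring-based estimate.
    float motorTorqueCommand(float desiredTorque, float motorAngle, float jointAngle) const {
        const float error = desiredTorque - estimatedTorque(motorAngle, jointAngle);
        return desiredTorque + kp * error;
    }
};
</code></pre>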
9951
2016-05-28T03:34:12.307
|robotic-arm|actuator|dynamics|torque|
<p>The other day, somebody was telling me about a robot in their lab, and they mentioned that it has "series elastic" actuators. But after doing a bit of Googling, I'm still not sure what this means, and I have been unable to find a simple explanation. It seems to have something to do with the link between the actuator and the load having a spring-like quality to it, but this is rather vague...</p> <p>In any case, what I am really interested in is the advantages and disadvantages of series elastic actuators. Specifically, I have read that one of the advantages is that it allows for "more accurate and stable force control". However, this appears counter-intuitive to me. I would have thought that if the link between the actuator and the load were more "springy", then this would lower the ability to have accurate control over the force sent to the load, because more of this force would be stored and dissipated in the spring, with less directly transferred to the load.</p> <p>So: Why do series elastic actuators have "more accurate and stable force control"?</p>
Why do series elastic actuators have more accurate and stable force control?
<p>Are you sure about your expression for $HH^T$?</p> <p>I get</p> <p>$$ HH^T = \begin{bmatrix} 1 &amp; 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0 \\ 0 &amp; 1 &amp; 0 &amp; 0 &amp; 1 &amp; 0 \\ 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; 1 \\ 1 &amp; 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0 \\ 0 &amp; 1 &amp; 0 &amp; 0 &amp; 1 &amp; 0 \\ 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; 1 \end{bmatrix} $$ which agrees with your intuition.</p>
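<p>If you want to double-check this numerically, a few lines of Eigen (assuming you have it available) reproduce the block structure:</p> <pre><code>#include &lt;iostream&gt;
#include &lt;Eigen/Dense&gt;

int main()
{
    // Two stacked 3x3 identity observation blocks, one per sensor.
    Eigen::Matrix&lt;double, 6, 3&gt; H;
    H &lt;&lt; Eigen::Matrix3d::Identity(),
         Eigen::Matrix3d::Identity();

    // Expect four identity blocks: x pairs with x, y with y, z with z.
    std::cout &lt;&lt; H * H.transpose() &lt;&lt; std::endl;
    return 0;
}
</code></pre>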
9953
2016-05-28T10:08:01.727
|kalman-filter|
<p>Suppose I have one robot with two 3D position sensors based on different physical principles and I want to run them through a Kalman filter. I construct an observation matrix to represent my two sensors by vertically concatenating two identity matrices.</p> <p>$H = \begin{bmatrix} 1&amp;0&amp;0\\0&amp;1&amp;0\\0&amp;0&amp;1\\1&amp;0&amp;0\\0&amp;1&amp;0\\0&amp;0&amp;1 \end{bmatrix}$ $\hspace{20pt}$ $\overrightarrow x = \begin{bmatrix} x\\y\\z \end{bmatrix}$</p> <p>so that </p> <p>$H \overrightarrow x = \begin{bmatrix} x\\y\\z\\x\\y\\z \end{bmatrix}$</p> <p>which represents both sensors reading the exact position of the robot. Makes sense so far. The problem comes when I compute the innovation covariance</p> <p>$S_k = R + HP_{k|k-1}H^T$</p> <p>Since </p> <p>$H H^T = \begin{bmatrix} 1 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 1 \\ 0 &amp; 1 &amp; 0 &amp; 1 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 1 &amp; 0 &amp; 1 &amp; 0 \\ 0 &amp; 1 &amp; 0 &amp; 1 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 1 &amp; 0 &amp; 1 &amp; 0 \\ 1 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 1 \\ \end{bmatrix}$ </p> <p>then, no matter what $P$ is, I'm going to wind up with $x$ innovations from the first sensor being correlated to $z$ innovations from the second, which seems intuitively wrong, if I'm interpreting this right. </p> <p>Proceeding from here, my gain matrix ($K = P_{k|k-1} H^T S_k^{-1}$) winds up doing some pretty odd stuff (swapping rows and the like) so that, when updating a static system ($A = I_3, B = [0]$) with a constant measurement $\overrightarrow z = [1,0,0]$ I wind up with a predicted state $\hat x = [0,0,1]$.</p> <p>If I separate the sensors and update the filter with each measurement separately, then $H H^T = I_3$, and I get sensible results.</p> <p>I think I am confused about some technical points in one or more of these steps. Where am I going wrong? Does it not make sense to vertically concatenate the observation matrices?</p> <p>I suppose that I could just set the off-diagonal 3x3 blocks of $S_k$ to 0, since I know that the sensors are independent, but is there anything in the theory that suggests or incorporates this step?</p>
kalman filter with redundant sensors