[Dataset columns: Answer, Id, CreationDate, Tags, Body, Title]
<p>Modern solders are lead free. Even with older solders that contained lead, the amount present in the &quot;smoke&quot; was minimal. Having said that, common sense says to thoroughly wash your hands when you are done and not to eat or drink while soldering.</p> <p>If you are concerned, there are a number of small desktop &quot;filters&quot; you can purchase. Most are not much more than a simple fan, like one for a PC power supply, with a carbon filter element over the fan inlet or outlet.</p>
7894
2015-08-19T00:13:19.007
|untagged|
<p>So, I need to know a couple of things about soldering. My primary workspace, robotics and otherwise, is a desk with a computer and only a little bit of free space (4 ft. by 6 in.). I am wondering if it is safe to solder in such a small area. Also, what level of ventilation do I need to solder safely? My desk is in a normal house room and is right next to an air vent. My house has heating and A/C. Do I need a fan or a fume extractor? I plan to only solder a little to get things to stay in my solderless breadboard (soldering header pins onto wires and such). So, basically, what are the minimum requirements for soldering safely (space and ventilation)? Also, if anyone could point me to some hobby/beginner-level soldering must-haves on Amazon, that would be great. Thanks.</p>
Beginner Soldering question
<p>Given that you have a more "constrained" goal, with a "mostly" static background, I would recommend simply doing a "<strong>background image subtraction</strong>" method. The "hard part", which has come a long way over the last decade, is how you deal with shadows, light changes, and foliage moving. </p> <p>There are tons of resources on this topic, but here is a good one I found after a quick search: <a href="http://www.pyimagesearch.com/2015/05/25/basic-motion-detection-and-tracking-with-python-and-opencv/" rel="nofollow noreferrer">http://www.pyimagesearch.com/2015/05/25/basic-motion-detection-and-tracking-with-python-and-opencv/</a></p> <p>This should get you to an 80% solution for what you want.</p> <p>If you want to go deeper, and try to identify specific animals, there are two main approaches you can potentially follow. The easy one is <strong>Template Matching</strong>; the harder one is creating a <strong>Bayes Classifier</strong>. </p> <p>In either approach, you would:</p> <ol> <li>Gather a sample set of data (most likely by using the output from above)</li> <li>Either: <ul> <li>Create templates you would match against</li> <li>Train your classifier to identify the animals you would want</li> </ul></li> </ol> <p>A couple of notes:</p> <ul> <li>Template matching out of the box is highly scale and orientation dependent. While you can start with basic template matching, you'll probably quickly want to create a Gaussian pyramid. Here is a good reference: <a href="https://stackoverflow.com/questions/22480485/image-matching-in-opencv-python">https://stackoverflow.com/questions/22480485/image-matching-in-opencv-python</a></li> <li>Doing Bayesian Classifiers well is <strong>hard</strong>, and if you just search on Google Scholar, you'll see tons of papers on the subject. However, it seems to be the "way to go" for high accuracy. Generally you would combine the base classifier with some other machine learning technique (such as a Markov Model). If you do go this route, I would recommend trying to do something "simpler" than trying to find a whole bird. Instead, I would recommend perhaps identifying a simple feature that would "mean" bird/animal, such as locating an "eye" or "beak".</li> </ul> <p>Hope this helps.</p>
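<p>To make the background-subtraction step concrete, here is a minimal Python/OpenCV sketch (not from the linked tutorial; the video path, shadow threshold, and minimum blob area are assumptions you would tune for your own footage). It only flags motion; it does not identify the animal:</p>
<pre><code>import cv2  # opencv-python, OpenCV 4.x assumed

cap = cv2.VideoCapture("landscape.mp4")  # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
min_area = 500  # assumed pixel-area threshold for a "real" moving object

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)  # foreground mask (shadow pixels marked as 127)
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]  # drop shadow pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) &gt; min_area:
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("motion", frame)
    if cv2.waitKey(30) == 27:  # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
</code></pre>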
7910
2015-08-20T15:47:39.520
|computer-vision|
<p>I'm a software developer not experienced in AI or machine learning, but I'm now interested in developing this kind of software. I want to develop software that recognizes some specific objects, specifically animals, from a video stream (or a sequence of static images).</p> <p>I saw there's a library called OpenCV which is often mentioned in this forum, but from what I have seen so far, this library is a helper for working with images; I didn't find the <em>object recognition</em> or <em>self-learning</em> part.</p> <p>Is <em>OpenCV</em> a good starting point? Is it better to go for some theory first? Or are there other already-developed libraries or frameworks aimed at object recognition?</p> <p><strong>EDIT</strong> To give some context: I will have one camera checking a landscape, mostly static, but some leaves may move with the wind or a person may step in, and I want to get an alert when some animal is in view. I can reduce the "animals" to only birds (I won't always have a nice bird/sky contrast).</p> <p>I did some work with supervised neural networks some 15 years ago and studied some AI and machine learning theory, but I guess things have improved a great deal since then; that's why I was asking for some more practical first steps.</p> <p>Thank you</p>
How can I recognize animals in a video stream or static images with openCV or other library/software?
<p>Short answer - No, it is not possible.</p> <p>None of the motor drivers will deliver 24W, see <a href="https://docs.google.com/viewer?url=http%3A%2F%2Fwww.irobotweb.com%2F~%2Fmedia%2FMainSite%2FPDFs%2FAbout%2FSTEM%2FCreate%2FBatteryPower_2015.pdf%3Fla%3Den" rel="nofollow">link</a>. In addition the power supply that provides charging power to the robot is limited to just over 24W under the best case. See the dock rating on the label (22.5V @ 1.25A). All of this power is used to charge the robot battery and power the robot electronics to control charging. Even if you waited until the robot battery was finished charging you would still need to convert the 22.5V to 12V but there is no suitable access to the 22.5V.</p> <p>You will either have to live with charging your Tango independently from the robot or use a lower power computer (Arduino, RasPi, etc.) to control Create 2. </p>
7912
2015-08-21T06:42:12.547
|irobot-create|
<p>Project Tango Development Kits come with a mini-dock (see picture below). </p> <p>I am controlling the iRobot Create 2 by the mounted Tablet using the USB cable provided plugged into the mini-dock. (see <a href="https://developers.google.com/project-tango/hardware/tablet#get_started" rel="nofollow noreferrer">docs</a>).</p> <blockquote> <p>The USB 3.0 port on the mini-dock is only functional when the tablet is docked. The port can be used to attach an external memory drive or standard peripherals to the tablet.</p> </blockquote> <p>I wish to recharge the tablet using the power from the iRobot. The mini dock comes with a port for external charging:</p> <blockquote> <p>The mini-dock accepts a power adapter for faster charging (not provided). The power adapter output must be 12V, 2A, and the connector must be a barrel plug with 5.5mm outer diameter, 2.1mm inner diameter, center positive.</p> </blockquote> <p>Ideally the charging would happen only when the iRobot is also charging, but charging all the time is acceptable. </p> <p>Is this possible? If so, how?</p> <p><a href="https://i.stack.imgur.com/595xX.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/595xX.jpg" alt="enter image description here"></a></p>
Powering a Project Tango Tablet with iRobot Create 2
<p>If you see integral, think area. Derivative, think slope. Integral error is error <em>times</em> sample time (plus previous integral error), derivative error is the change in error divided by time. </p> <p>Your derivative error term is correct, but your integral error term is not. This shouldn't be an issue because you aren't using an integral term (even though you should!)</p> <p>Where I would guess your problem lies is with your controller output - you have </p> <p><code>motor_setpoint += k_p * e + k_i * e_i + k_d * e_d;</code></p> <p>Here you have the <code>+=</code> operator, which indicates that <code>motor_setpoint</code> is <em>accumulating</em> the output of your controller. This is incorrect. The output of your PID controller alone is the input to your motor. </p> <p>I would guess you might be thinking that you are adding the control signal to your reference, but remember that your reference was already taken into account when calculating the error! </p> <p>This is, for me at least, the magical aspect of the PID controller. It doesn't care what is between the controller and the output, it just adjusts until there is zero error. </p> <p>If error is accumulating quickly (integral error is high), the controller "opens up", but then if error starts to drop quickly (derivative error is high) it will "ease off" to minimize the overshoot. </p> <p>If anything, I would suggest eliminating the derivative term and using only the PI controller - this is commonly done in industry because the derivative term is highly sensitive to noise. </p> <p>Anyways, hopefully this fixes your issue, but if not please at least make the changes and then re-generate and re-post your plots. If the base controller isn't right we can't divine anything from the plots.</p> <p>Also, check out <a href="https://en.wikipedia.org/wiki/Ziegler%E2%80%93Nichols_method" rel="nofollow">Ziegler-Nichols</a> tuning if you're looking for some professional PID tuning advice. </p>
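<p>As a concrete illustration of the fix (a minimal sketch in Python rather than your C-style loop, with placeholder gains and a 0.001 s sample time), the error terms are computed as described above and the controller output is <em>assigned</em>, not accumulated:</p>
<pre><code># Minimal outer-loop PID sketch; dt, gains, and the setpoint source are assumptions.
dt = 0.001                        # sample time [s]
k_p, k_i, k_d = 0.4, 0.0, 0.001   # placeholder gains

e_prev = 0.0
e_i = 0.0

def pid_step(joint_setpoint, joint_angle):
    global e_prev, e_i
    e = joint_setpoint - joint_angle
    e_i += e * dt               # integral: error TIMES dt (not divided by it)
    e_d = (e - e_prev) / dt     # derivative: change in error over dt
    e_prev = e
    # assign, don't accumulate: the PID output alone is the motor setpoint
    return k_p * e + k_i * e_i + k_d * e_d
</code></pre>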
7913
2015-08-21T08:05:02.987
|pid|
<p>I have a dual (sequential) loop control system controlling the angle of a rotational joint on a robot using an absolute encoder. I have tuned the inner control loop (for the motor) and am now working on tuning the outer loop (for the joint).</p> <p><em>Example of a dual loop controller</em> <a href="https://i.stack.imgur.com/IuDx0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IuDx0.png" alt="dual loop"></a></p> <p>When I disturb the system the response isn't what I would expect.</p> <p><em>Kp = 0.4</em> <a href="https://i.stack.imgur.com/fmBnB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fmBnB.png" alt="enter image description here"></a></p> <p><em>Kp = 0.1 Kd = 0.001</em> <a href="https://i.stack.imgur.com/PBz8Q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PBz8Q.png" alt="enter image description here"></a></p> <p>I didn't add a <em>Ki</em> term because I don't have any steady state error.</p> <p>I'm confused by the fact that the second overshoot in the first plot is larger than the first one. No matter how I adjust the parameters I can't seem to get rid of the oscillation in the velocity of the joint (seen in the second plot). One limitation I have is that if I increase both <em>Kp</em> and <em>Kd</em> too much, the gearbox becomes very noisy because the noise in the encoder signal creates larger adjustments in the position of the motor. I'm working on adding a filter to the output using the method described <a href="http://controlguru.com/pid-with-controller-output-co-filter/" rel="nofollow noreferrer">here</a>.</p> <p>The code I'm using for the outer loop is:</p> <pre><code>static float e_prev = 0.0; e = joint_setpoint - joint_angle; e_i += e/0.001; // dt = 0.001s e_d = (e - e_prev)/0.001; // dt = 0.001s e_prev = e; motor_setpoint += k_p * e + k_i * e_i + k_d * e_d; </code></pre> <p>I'm beginning to think that the system might not be able to be modeled by a first order equation, but would this change the implementation of the control loop at all? Any advice is appreciated!</p> <p>Ben</p>
Non linear control system?
<p>It should be possible to plug that shield into either a Due or a Mega, with the caveat that the URF01 pins on the shield may interfere with two sockets of the Power header (IOREF and the opening next to it). (Older Mega's may have a six-pin Power header rather than eight-pin.) If there's interference you might need to remove a pin or two from the URF01 group. Here is a picture of a Due overlaid (at 41% transparency) on top of a Mega: <a href="https://i.stack.imgur.com/Zbe7J.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Zbe7J.jpg" alt="Due at 41% transparency overlaid on Mega"></a></p> <p>The overlay is imperfect (due to slight mispositioning, slight scale errors, and camera distortion) but is good enough to show that the shield should plug in. The shield appears to have a power-indicator LED on it, but aside from that apparently has no electronics to cause problems.</p> <p>The photos that I overlaid came from <a href="https://www.arduino.cc/en/Main/arduinoBoardMega" rel="nofollow noreferrer">arduinoBoardMega</a> and <a href="https://www.arduino.cc/en/Main/ArduinoBoardDue" rel="nofollow noreferrer">ArduinoBoardDue</a> or related sites.</p> <p>The following is from β€œ<a href="https://blog.arduino.cc/2011/01/05/nice-drawings-of-the-arduino-uno-and-mega-2560/" rel="nofollow noreferrer">nice drawings of the arduino uno and mega 2560</a>” on arduino.cc and should let you measure your Due to see if it matches: <a href="https://i.stack.imgur.com/7RdzA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7RdzA.png" alt="mega dimensions diagram"></a></p> <p>Numbers in the picture above are in mils, that is, in thousandths of an inch. Numbers in the picture below (from <a href="http://www.hwkitchen.com/products/arduino-mega-2560/" rel="nofollow noreferrer">hwkitchen.com</a>) are in millimeters. Note, the picture above is the more-recent R3 version, with 8 pins in the power block, vs the 6 pins in that block in the picture below. <a href="https://i.stack.imgur.com/Vly49.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Vly49.jpg" alt="enter image description here"></a></p> <p>The thread <a href="http://forum.arduino.cc/index.php?topic=148640.0" rel="nofollow noreferrer">shield layout for DUE/Mega</a> in forum.arduino.cc also may be of interest. It suggests the Due and Mega have matching pin layouts. It also gives links to some CAD files for the boards.</p>
7915
2015-08-21T18:10:46.970
|arduino|
<p>Like the title says: will it work? I know about the Due's 3.3 volt limitations.</p> <p>I want to build a hexapod with 18 servos.</p> <p>The shield I am looking at:</p> <p><a href="http://yourduino.com/sunshop2/index.php?l=product_detail&amp;p=195" rel="nofollow">http://yourduino.com/sunshop2/index.php?l=product_detail&amp;p=195</a></p> <p>If it isn't compatible, is there an alternative shield which will work? I can't seem to find much for the Due.</p>
Arduino mega shield v2.0 compatibility with arduino due
<p>I don't believe the lack of 3 consecutive axes intersecting at a point rules out closed-form solutions, although I don't have a reference for this.</p> <p>You should investigate <a href="http://openrave.org/" rel="nofollow">OpenRave</a> and <a href="http://openrave.org/docs/latest_stable/openravepy/ikfast/" rel="nofollow">ikfast</a>. This will generate library-independent, optimized C++ code with closed-form solutions for a wide variety of kinematic chains and IK types. </p> <p>I found this paper that might be of interest to you:</p> <blockquote> <p>Luo, R.C.; Tsung-Wei Lin; Yun-Hsuan Tsai, "<a href="http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;arnumber=6942608&amp;isnumber=6942370" rel="nofollow">Analytical inverse kinematic solution for modularized 7-DoF redundant manipulators with offsets at shoulder and wrist</a>," in Intelligent Robots and Systems (IROS 2014), 2014 IEEE/RSJ International Conference on, pp. 516-521, 14-18 Sept. 2014</p> </blockquote> <p>While it describes a 7-DOF arm, its shoulder and wrist joints don't intersect at a point. You might gain some insight from their formulation, or perhaps some of their references.</p>
7920
2015-08-22T11:36:47.887
|inverse-kinematics|
<p>My 6-joint robot arm structure doesn't meet the requirements for a closed-form solution (no 3 consecutive axes intersecting at a point, no 3 parallel axes, ...). </p> <p>What would be the best method to adopt to get a solution in 1 ms or less, with an estimation accuracy of 1 mm? I'm assuming the computation is done on an average laptop (Intel Core i3, 1.7 GHz, 4 GB RAM).</p>
Inverse kinematics solution for 6DOF serial arm
<p>If you know the wheel radius and the speed of the robot, you will be able to calculate its location at any time relative to its initial position.</p> <pre><code>travelled_distance = Speed * time; current_position = initial_position + travelled_distance; </code></pre> <p>This is a simplified 1-dimensional equation that can help you. But keep in mind that without sensors and odometry the localization can't be very accurate.</p>
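<p>As a minimal illustration only (a differential-drive robot is assumed, and the wheel radius, track width, and command list are made-up numbers), open-loop dead reckoning in 2-D just integrates the commanded speeds over time:</p>
<pre><code># Open-loop 2-D dead reckoning from commanded wheel speeds (no sensors).
import math

r = 0.03   # wheel radius [m] (assumed)
L = 0.15   # distance between the wheels [m] (assumed)
dt = 0.01  # integration step [s]

x, y, theta = 0.0, 0.0, 0.0

# (left wheel speed, right wheel speed, duration) in rad/s and seconds - made up
commands = [(10.0, 10.0, 2.0), (5.0, -5.0, 1.0), (10.0, 10.0, 2.0)]

for wl, wr, duration in commands:
    v = r * (wl + wr) / 2.0  # commanded forward speed
    w = r * (wr - wl) / L    # commanded turn rate
    t = 0.0
    while t &lt; duration:
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += w * dt
        t += dt

print(round(x, 3), round(y, 3), round(math.degrees(theta), 1))
</code></pre>
<p>Any wheel slip or model error accumulates without bound, which is why this stays a rough estimate.</p>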
7921
2015-08-22T21:36:03.890
|localization|mapping|
<p>Is it possible to localize a robot without any sensors, odometry, or servo motors?</p> <p>Assume the robot has DC motors and there are no obstacles.</p>
Robot localization without any sensors
<p>You have to know your initial heading, let's call it $\theta_0$. So you start at some position, $p_{start}$, and you're trying to get to some end position, $p_{end}$.</p> <p>Assume starting position and ending positions are given by:</p> <p>$$ p_{start} = &lt;x_0 , y_0&gt; \\ p_{end} = &lt;x_1 , y_1&gt; \\ $$</p> <p>Those positions are absolute, but you are trying to get from one to another, so all you really care about are relative distances. Then, let:</p> <p>$$ dX = x_1 - x_0 \\ dY = y_1 - y_0 \\ $$</p> <p>Now you can find the absolute heading $\theta_{0 \rightarrow 1}$ with the arc tangent of $dX$ and $dY$, but the basic $atan{}$ function doesn't handle negative signs correctly because it can't tell if which or both of $dX$ and $dY$ were negative. What you need is <a href="https://en.wikipedia.org/wiki/Atan2" rel="nofollow">atan2</a>. </p> <p>$atan2{(dY, dX)}$ will give you the absolute heading of $p_{end}$ with respect to $p_{start}$. Now to find how much your vehicle needs to turn, find the difference between your starting heading and the required heading:</p> <p>$$ \mbox{Turn angle}= \theta_{0 \rightarrow 1} - \theta_0 \\ \mbox{Turn angle}= atan2{(dY, dX)} - \theta_0 \\ $$</p> <p>With turn angle, I like my vehicles to turn the direction that requires the least movement, so instead of limiting $\mbox{Turn angle}$ to $0&lt;= \mbox{Turn angle} &lt;= 2\pi$, I would limit the angle to $-\pi&lt;= \mbox{Turn angle} &lt;= \pi$.</p> <p>In this way you would wind up turning -10 degrees (10 degrees clockwise) instead of +350 (350 degrees CCW). Ultimately you should wind up in the same spot, but -10 looks "right" and +350 looks foolish. </p> <p>You can limit the turn angle with code like:</p> <pre><code>while turnAngle &gt; 3.1415 turnAngle = turnAngle - 2*3.1415: end while turnAngle &lt; -3.1415 turn Angle = turnAngle + 2*3.1415; end </code></pre> <p>This code runs and doesn't care what multiple of +/-2$\pi$ you're in, it spits out the same angle bounded to within +/-$\pi$.</p> <p>Finally, calculate how far you travel along your new heading with the magnitude of distance between the points:</p> <p>$$ \mbox{distance} = \sqrt{dX^2 + dY^2} \\ $$</p> <p>Remember to set the new absolute heading, $\theta_{0 \rightarrow 1}$, as your current heading then get the next point!</p> <h2>:UPDATE: - Numeric Example</h2> <p>I believe a point of confusion here may be that I didn't state that the absolute heading is relative to the +x-axis. That said, I'll work through OP's problem as shown in the most recent update. </p> <p>The initial heading $\theta_0$ is in-line with the +x-axis, so $\theta_0 = 0$. 
</p> <p>$$ p_{start} = &lt;-300,300&gt; \\ p_{end} = &lt;-300,-300&gt; \\ $$</p> <p>$$ dX = x_{end} - x_{start} = (-300) - (-300) \\ \boxed{dX = 0} \\ $$</p> <p>$$ dY = y_{end} - y_{start} = (-300) - (300) \\ \boxed{dY = -600} \\ $$</p> <p>$$ \theta_{0 \rightarrow 1} = atan2(dY,dX) \\ \theta_{0 \rightarrow 1} = atan2(-600,0) \\ \theta_{0 \rightarrow 1} = -1.5708 \mbox{rad} = -90^o $$</p> <p>Now, the change in heading:</p> <p>$$ \mbox{Turn Angle} = \theta_{0 \rightarrow 1} - \theta_0 \\ \mbox{Turn Angle} = -1.5708 - 0 \\ \mbox{Turn Angle} = -1.5708 = -90^o\\ $$</p> <p>The distance you need to travel after turning $\mbox{Turn Angle}$ is given by: $$ \mbox{distance} = \sqrt{dX^2 + dY^2} \\ \mbox{distance} = \sqrt{0^2 + (-600)^2} \\ \mbox{distance} = \sqrt{(-600)^2} \\ \mbox{distance} = 600 \\ $$</p> <p>Now the new heading $\theta_1$ is equal to the absolute heading between start and end:</p> <p>$$ \theta_1 = \theta_{0 \rightarrow 1} \\ $$</p> <p>Or, alternatively, the new heading is equal to the initial heading plus the turn angle, but the turn angle was defined by the difference between $\theta_{0 \rightarrow 1}$ and $\theta_0$, so the equation above is just a shortcut. </p> <p>Now, proceed to the next point:</p> <p>$$ p_{start} = &lt;-300,-300&gt; \\ p_{end} = &lt;0,-300&gt; \\ $$</p> <p>$$ dX = x_{end} - x_{start} = (0) - (-300) \\ \boxed{dX = 300} \\ $$</p> <p>$$ dY = y_{end} - y_{start} = (-300) - (-300) \\ \boxed{dY = 0} \\ $$</p> <p>$$ \theta_{1 \rightarrow 2} = atan2(dY,dX) \\ \theta_{1 \rightarrow 2} = atan2(0,300) \\ \theta_{1 \rightarrow 2} = 0 \mbox{rad} = 0^o $$</p> <p>Now, the change in heading:</p> <p>$$ \mbox{Turn Angle} = \theta_{1 \rightarrow 2} - \theta_1 \\ \mbox{Turn Angle} = 0 - (-1.5708) \\ \mbox{Turn Angle} = +1.5708 = +90^o \\ $$</p> <p>The distance you need to travel after turning $\mbox{Turn Angle}$ is given by: $$ \mbox{distance} = \sqrt{dX^2 + dY^2} \\ \mbox{distance} = \sqrt{(300)^2 + (0)^2} \\ \mbox{distance} = \sqrt{(300)^2} \\ \mbox{distance} = 300 \\ $$</p> <p>Now the new heading $\theta_1$ is equal to the absolute heading between start and end:</p> <p>$$ \theta_2 = \theta_{1 \rightarrow 2} \\ $$</p> <p>This process repeats as you see fit. </p> <h2>:Second Update: Code Issues</h2> <p>You need to turn (left or right) by the amount <code>turnAngle</code>. You have not described your physical robot so nobody can tell you exactly how to turn. </p> <p>You have your code in <code>void loop()</code>, which makes me think you are running the same code over and over, but you never take the following steps:</p> <ol> <li>You never declare a starting heading. </li> <li>You fail to define <code>turnAngle</code> as the <em>difference</em> between <code>atan2((yt-yc),(xt-xc))</code> and the starting heading. </li> <li>After you have turned, you fail to set your new heading. </li> <li>You fail to move forward. </li> <li>You fail to update your current position with the target position</li> </ol> <p>I would suggest modifying your code as follows:</p> <p> <h2>Last update</h2> <p>The wheel base is the distance from the center of the tire to the center of the other tire. 
Assuming that the motor speeds while turning are the same magnitude but opposite sides (<code>leftMotorSpeed = - rightMotorSpeed</code>), and the wheels are the same diameter, you can find out how quickly you are turning by using the arc length equation <code>s = L*theta</code>, so <code>theta = s/L</code>, where <code>theta</code> is the rotation, <code>s</code> is the arc length, and <code>L</code> is half the wheel base. Then, taking a derivative, $\dot{\theta} = \dot{s}/L$, where $\dot{s}$ is the linear tire speed, given by $\dot{s} = r_\mbox{tire}*v_\mbox{motor}$</p> <p>So you can take motor speed and get back to rotation rate, and then numerically integrate that and compare to the $\mbox{Turn Angle}$: </p> <p><code>thetaDot = (rTire * vMotor) / (wheelBase/2);</code> </p> <p>then </p> <p><code>theta = theta + thetaDot*dT;</code> </p> <p>where <code>dT</code> is the sample/sweep time on the controller. Finally, use a while loop to turn the motors until the desired angle is reached. Putting it all together:</p> <pre><code>//current points float xc = -300; float yc = 300; //target points float xt = -300; float yt = -300; //turning angle float turnAngle; //*************** float startingHeading = 0; float currentHeading = startingHeading; //*************** //*************** float turnRight = 1; float turnLeft = -1; float wheelBase = &lt;This is a number you get by measuring your vehicle&gt; float tireRadius = &lt;This is a number you get by measuring your vehicle&gt; float speedModifier = 0; float vRightMotor = 0; float vLeftMotor = 0; float vMotor = 0; float theta = 0; float distance = 0; //************** void setup() { // pin setup Serial.begin(9600); } void loop() { //************* destinationHeading = atan2((yt-yc), (xt-xc)); //calculate turning angle destinationHeading = destinationHeading * 180/3.1415; //convert to degrees turnAngle = destinationHeading - currentHeading; //************* if (turnAngle &gt; 180) { turnAngle = turnAngle-360; } if (turnAngle &lt; -180) { turnAngle = turnAngle+360; } //*************** if (turnAngle &lt; 0) { speedModifier = turnRight; } if (turnAngle &gt; 0) { speedModifier = turnLeft } theta = 0; while abs(abs(theta)-abs(turnAngle)) &gt; 0 { vRightMotor = speedModifier * &lt;100 percent speed - varies by your application&gt;; vLeftMotor = -speedModifier * &lt;100 percent speed - varies by your application&gt;; &lt;send the vRightMotor/vLeftMotor speed commands to the motors&gt; vMotor = vRightMotor; thetaDot = (tireRadius * vMotor) / (wheelBase/2);` theta = theta + thetaDot*dT; } &lt;send zero speed to the motors&gt; currentHeading = destinationHeading; distance = sqrt((xt - xc)^2 + (yt - yc)^2); if (distance &gt; 0) { // go forward } xc = xt; yc = yt; //**************** } </code></pre>
7928
2015-08-24T11:23:36.903
|arduino|navigation|
<p>My goal is to move robot in certain points as shown in the figure. It's initial position is (x0,y0) and move along other coordinates.</p> <p><a href="https://i.stack.imgur.com/TbgKo.png" rel="noreferrer"><img src="https://i.stack.imgur.com/TbgKo.png" alt="enter image description here"></a></p> <p>I am able to track robot position using a camera which is connected to pc and camera is located at the top of the arena. I've mounted a ir beacon on the robot, camera find this beacon and locates it's coordinate(in cm) in the arena. Using this coordinate how can I move my robot to another position, say new position (x1,y1)</p> <p>My robot has arduino mega 2560 with two DC motors, communication between pc and robot is done using bluetooth</p> <p>Update:</p> <p>Thanks @Chuck for the answer, however I still have few doubts regarding turning angle.</p> <p>My robot position setup is as shown in the image.</p> <p>(xc, yc) is the current position and (xt, yt) is the target position. <a href="https://i.stack.imgur.com/MsCA9.png" rel="noreferrer"><img src="https://i.stack.imgur.com/MsCA9.png" alt="enter image description here"></a></p> <p>If I want to align robot in the direction of the target coordinates, I've to calculate atan2 between target and current coordinates. But the angle remains same since it's current position is not changing with respect to the target point. so I assume robot simply makes 360' rotation at current position?</p> <p>Update:</p> <p>The path points is as show below in the image, is my initial heading angle assumption is correct? </p> <p>'1' is the starting point.</p> <p><a href="https://i.stack.imgur.com/buKFd.png" rel="noreferrer"><img src="https://i.stack.imgur.com/buKFd.png" alt="enter image description here"></a></p> <p>Update</p> <p>Thank you for your patience and time, I'm still struck at turning, my code goes like this</p> <pre><code>//current points float xc = -300; float yc = 300; //target points float xt = -300; float yt = -300; //turning angle float turnAngle; void setup() { // pin setup Serial.begin(9600); } void loop() { turnAngle = atan2((yt-yc), (xt-xc)); //calculate turning angle turnAngle = turnAngle * 180/3.1415; //convert to degrees if (turnAngle &gt; 180) { turnAngle = turnAngle-360; } if (turnAngle &lt; -180) { turnAngle = turnAngle+360; } if (turnAngle &lt; -10) { //turn right } if (turnAngle &gt; 10) { //turn left } } </code></pre> <p>Since angle is always -90' robot only makes right turn in loop at current point, since angle is not changing. I think I'm missing something here.</p>
Robot path planning
<p>Disclaimer: I have never actually used the B3, so trust this only 90% ;)</p> <p>The iMax B3 is specifically marketed towards 2/3-cell LiPo batteries and reviews suggest it works OK, so the B3 should be fine to use. As I mentioned in the comments, it can only supply 0.7 A for 3 cells, so a 2.65 Ah battery is going to take a very long time to charge. Can the B3 damage the battery? As long as you don't get faulty hardware (a risk with any charger), and given that you certainly won't have overcharging problems, it'll be completely safe. </p> <p>I suggest investing in the iMax B6, as you can charge at a much faster rate and those will last you a decent amount of time (I've had mine for 2 years with no problems).</p>
7930
2015-08-24T17:02:14.263
|battery|lithium-polymer|
<p>Can I charge a LiPo nano-tech battery with an iMax B3 charger? The battery is 2650 mAh, 35/70C, 3S.</p>
Can I charge a LiPo nano-tech battery with an iMax B3 charger?
<p>A friend of mine is a senior developer at Infineon Technologies working on automotive applications. He told me that the best inertial MEMS sensors for automotive applications (which is basically your price target) are made by Bosch. After a quick search, I found the SMI130, which has pretty good signal-to-noise ratios. In addition, this chip is available for about $13.</p>
7940
2015-08-26T06:12:55.920
|sensors|research|
<p>I am new to this field and am looking for some high-precision gyroscopes and accelerometers for attitude measurement. The precision requirement is around 0.2~0.5 deg/s dynamic. I have done some digging myself; no single integrated MEMS sensor can do that without costing too much, so some heavy math is needed, but that's fine. I need to make sure the perfect sensors are chosen, and the budget is less than 100 USD. Can anyone help? Thanks in advance.</p>
recommendation for really high precision attitude measurement sensors
<p>I'd guess it's because some use is made of the vocabulary (and perhaps even the tools) of the branch of mathematics called <a href="https://en.wikipedia.org/wiki/Combinatorial_topology" rel="nofollow">"combinatorial topology"</a>.</p>
7945
2015-08-27T06:28:41.820
|motion-planning|
<p>Why are 'cell decomposition' methods in motion planning given the name, "combinatorial" motion planning?</p>
Why the name "combinatorial"?
<p>Mechanisms and Mechanical Devices Sourcebook - Nicholas P. Chironis &amp; Neil Sclater, McGraw-Hill, ISBN 0-07-011356-4. The format of this book is more akin to a scrapbook of newspaper clippings, but it is a good visual "food-for-thought" collection spanning several generations of mechanism design for a broad range of applications and industries.</p>
7954
2015-08-28T05:25:27.173
|mobile-robot|mechanism|
<p>I wanted to know if there is any sort of archive of mechanisms that contains a brief description of each mechanism, such as its type of motion and the forces involved, rather than lengthy derivations and other details. </p>
Book on mechanisms
<p>What is BE? Bachelor of Engineering? I feel like you shouldn't need our help if you're in your senior year of an engineering degree, but briefly:</p> <p>It's all about the specifications. Pick a vertical acceleration you want to achieve, then calculate the total force (thrust) you need with $F=ma$.</p> <p>Divide the total thrust by the number of motors you want to use for the per-motor thrust required. </p> <p>Define a rotation rate you want to achieve, then calculate the torque required to achieve that rate with the torque equation $\tau = I \alpha$.</p> <p>Use the torque from this equation with the per-motor thrust from earlier to determine the motor radial distance with $\tau = r \times F$. This spacing will impact your moment of inertia, so you will have to re-calculate the required torque - this is called iterative design. </p> <p>Define a top vertical speed, then calculate the required motor power with this speed and the per-motor thrust by $P = F v$. Then define a top rotational speed and calculate the power required for rotating by $P = \tau \cdot \omega$. The actual power you need will be the larger of the two power terms you calculate, to ensure you can meet all of your performance specifications. </p> <p>Using the calculated motor power will give you a ballpark to start shopping for motors. Contact sellers and give them your power and thrust requirements to see if they can provide something that meets your needs, but the motors are going to weigh something substantial, so you need to basically re-calculate everything from the moment of inertia down <em>again</em> to refine your design. Again, iterative design process. </p> <p>You will be better off empirically testing motor/prop combinations than trying to calculate thrust from blade design, but again, your base calculations should give the class of motor you need by output power required. Also, use sales engineers! It's a LOT less work to give a sales engineer your spec and have them do the legwork than to do this testing yourself. Give them your spec, then ask to try their recommendation to verify before purchase. </p> <p>You should find your motor power requirements in line with <a href="https://robotics.stackexchange.com/a/7461/9720">this guide</a> posted by TobiasK. Using the moderate flight estimate of 250W/kg, you're looking at over 6kW just for your payload. When you add an airframe, motors, and batteries, you could be looking at 7.5 to 10kW - this is not a small project! </p>
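<p>As a rough illustration of how those formulas chain together (the payload, airframe guess, acceleration, climb speed, and motor count below are assumed numbers, not a design), a back-of-the-envelope sketch might look like this:</p>
<pre><code># Back-of-the-envelope sizing following the formulas above; all inputs are assumptions.
g = 9.81               # m/s^2
payload = 25.0         # kg (middle of the 20-30 kg range)
airframe_guess = 10.0  # kg, assumed frame + motors + batteries
mass = payload + airframe_guess

a_climb = 2.0          # m/s^2 desired vertical acceleration
v_climb = 5.0          # m/s desired top vertical speed
n_motors = 4

thrust_total = mass * (g + a_climb)         # F = m*a, including gravity
thrust_per_motor = thrust_total / n_motors  # N per motor

p_climb = thrust_total * v_climb            # P = F*v, total climb power [W]
p_per_motor = p_climb / n_motors

# Note: this ignores hover (induced) power, so the 250 W/kg guide figure
# mentioned above will typically dominate the final motor/battery choice.
print(f"total thrust : {thrust_total:.0f} N")
print(f"per motor    : {thrust_per_motor:.0f} N (~{thrust_per_motor / g:.1f} kgf)")
print(f"climb power  : {p_climb / 1000:.1f} kW total, {p_per_motor:.0f} W per motor")
</code></pre>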
7964
2015-08-30T06:45:19.557
|quadcopter|
<p>I am a BE (Bachelor of Engineering) student taking a mega-quadcopter as my final year project. Can you please help me with the full hand calculations for the mega-copter, i.e. the procedure and formulas? I want to know how to calculate the dimensions of the frame, the specifications of the motors and propellers, the rating of the ESCs, and the power rating and number of batteries. I do not want direct answers, just the procedure and formulas. I want to lift a load of around 20-30 kg. Please feel free to help.</p>
Total hand-calculation procedure and formulas for a mega-quadcopter
<p>In addition to Chuck's answer: <br> I recommend using OpenCV or any other computer vision library. <br> But you do not need to take the simple route. You do not have to place the camera directly above the table; you can place it anywhere you want, in any orientation you want. Because you know your table, the camera's position, and its attitude, you can use some easy trigonometry to calculate the position of the puck. You then have a 3-dimensional problem to solve instead of the 2D problem you would have with a straight-down orientation, but it can be solved easily by a computer. It is also possible to place multiple cameras if one has a bad angle.</p> <p>Another possibility (I will just mention it, I do not recommend it) is using light barriers along all four borders of your table. There are pretty long ones with multiple sensing points (resolution is about 1 inch). In the end you still need some software that estimates the current position and velocity of the puck from the information you get from the light barriers. The disadvantage is that these light barriers cost a fortune; they easily cost more than a complete air-hockey table. If you are interested in this solution, look for light barriers used in the automation industry for safety purposes in the handling area.</p>
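<p>If you do go the camera route, here is a minimal Python/OpenCV sketch of the colour-tracking idea mentioned in the question (the HSV bounds are assumptions for a reddish puck and would need tuning; mapping pixels to table coordinates is left to the homography/trigonometry described above):</p>
<pre><code>import cv2
import numpy as np

# Assumed HSV bounds for a brightly coloured (reddish) puck - tune for your table.
LOWER = np.array([0, 120, 120])
UPPER = np.array([10, 255, 255])

def find_puck(frame_bgr):
    """Return the puck centre (u, v) in pixels, or None if it is not visible."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    mask = cv2.erode(mask, None, iterations=2)   # remove speckle
    mask = cv2.dilate(mask, None, iterations=2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)
    (u, v), _radius = cv2.minEnclosingCircle(c)
    return (u, v)
</code></pre>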
7972
2015-08-31T04:00:31.323
|sensors|microcontroller|design|electronics|laser|
<p>I'm not sure if this is the right place to post this but here goes.</p> <p>So, as the title states, I'm planning on building a desk that doubles as an air hockey table which has a robot on the other side.</p> <p>The robot would be mounted on a rail which should be able to go left and right using a linear actuator. It should be able to "attack" the puck using two servos.</p> <p>The real problem is how should I detect the puck's location?</p> <p><strong>My idea:</strong></p> <p>Since the table would have tiny holes in the corners of a every square(0.5inx0.5in), I could fit in a laser on the bottom part of the table, a laser for ever 1in so a 1inx1in square, the same location would be reflected on the "ceiling" of the table but instead of laser diodes, they would be replaced by an ldr. </p> <p>So I'm planning on doing a matrix and reading the signals of the ldr's columns and rows then performing some logic to locate the center of the puck.</p> <p><strong>PROBLEMS:</strong></p> <p>While I don't see any performance flaws in my plan, I see <strong>tons</strong> of flaws when done imperfectly even to the tiniest bit.</p> <ol> <li>I have to be exactly accurate regarding the laser diode's position, it has to be on the center of the holes, right below the z-axis. This should be easy if I'm just going to place 4 or 5. But I'm not.</li> <li>According to my estimations, I'm going to have to use 300-700 laser diodes, depending on if I'm planning on putting the lasers only on the opponent's side or on the entire board. It would definitely be costly. Imagine 300...</li> <li>This isn't really a huge problem, more like a hassle. Wiring 300 of these. Forget the pcbs, the project area is just to large.</li> </ol> <p>I have thought of numerous way to lessen these, like using a color sensor to get the x-axis location and a laser situated on a negative x-axis pointing to the positive x-axis to locate the puck's y location, but I'm still comparing ideas.</p> <p><strong>Advantages:</strong></p> <p>I could get a 3d-like graphical representation with 3d-like controls (3d-like in reality but technically 2d since the lasers are only plotted in the x and y axis though facing the z-axis). </p> <p>Since this project is going to be my room desk, situated in an automated room, I was thinking of making "desk modes" which should toggle between a game that takes advantage of the lasers and their controls, A control desk for my room, ordinary desk mode, and an air hockey mode.</p> <p><strong>My question: (More like a request)</strong></p> <p>Does anyone have <em>another</em> idea regarding how I should be able to locate the puck's x and y location accurately in real time?</p> <p>EDIT: The table is roll-able and stored underneath a loft bed which has an under-area height of 5'4". Which means I can't go grande on the a vertical solution.</p> <p>EDIT #2: Thanks to the helpful people here, I have come to the conclusion of using a camera.</p> <p>The camera will be that of a smartphone's, I'll create an app that tracks an object by color and a has fixed size comparison to identify the distance of the robot from the puck. The phone will then process this and send signals via bluetooth.</p> <p>The phone is anchored at the end of the robot's moving part so the camera is reminiscent of those games with a first-person view.</p> <p>Incoming problems: I'm looking forward to some delay, given the delay in processing.</p>
Air hockey with a robot as an opponent
<p>Frequency refers to how quickly something is cycling, and the Hertz (Hz) unit means specifically "cycles per second."</p> <p>A cycle just refers to any repetitive motion, relative to some arbitrary reference point. The cycle is complete when the object in reference is at the same point moving the same way when it started. </p> <p>A trip from my house to work and back is one cycle, but the cycle is not 8 hours because I don't go immediately back to work as soon as I get home. Instead, the cycle is actually 24 hours, because I'm not at my house <em>moving toward work</em> again until the next morning. </p> <p>You could also look at a tire turning relative to a mark on the ground, or you could look at a weld rod relative to a butt joint. The weld rod completes one weave cycle when it starts, goes all the way to one side, goes all the way to the other side, then returns to where it started. </p> <p>Now, the frequency in Hertz (Hz) indicates how many times per second the weld rod is going to do this. If you want a fast back-and-forth with less material deposited, you would want a higher frequency. If you want a slower back and forth with more material deposited, you want a lower frequency. </p> <p>I'm not sure how your welder is configured, but if it's possible to set it to zero that would also disable the weave function because you are indicating that you want the weld rod to go back and forth "zero times per second."</p> <p>I hope this gives you a better conceptual understanding of frequency! </p>
7998
2015-09-03T07:11:50.220
|robotic-arm|industrial-robot|
<p>This is a simple question that I can't seem to find the answer for but when setting up the weave function how exactly does frequency (Hz) determine how fast it moves back and forth? </p> <p>In other words if I raise frequency will it move quicker or slower and what factors must I consider? </p>
Weave Weld Lincoln Electric Mig Robot
<p>There is a mistake in calculating <code>ds</code>; it should be <code>ds = L*vc</code> instead of <code>ds = vc</code>.</p>
8008
2015-09-04T12:40:56.670
|control|algorithm|matlab|visual-servoing|
<p>I was trying to implement the IBVS algorithm (the one explained in the Introduction <a href="http://www.irisa.fr/lagadic/visp/documentation/visp-2.10.0/tutorial-ibvs.html" rel="nofollow noreferrer">here</a>) in MATLAB myself, but I am facing the following problem: the algorithm seems to work only in cases where the camera does not have to change its orientation with respect to the world frame. For example, if I just try to make one vertex of the initial (almost) square go closer to its opposite vertex, the algorithm does not work, as can be seen in the following image</p> <p><a href="https://i.stack.imgur.com/LeVwK.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LeVwK.jpg" alt="enter image description here"></a></p> <p>The red x marks are the desired projections, the blue circles are the initial ones, and the green ones are the ones I get from my algorithm.</p> <p>Also, the errors are not exponentially decreasing as they should.</p> <p><a href="https://i.stack.imgur.com/lbdoH.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lbdoH.jpg" alt="enter image description here"></a></p> <p>What am I doing wrong? I am attaching my MATLAB code, which is fully runnable. If anyone could take a look, I would be really grateful. I took out the code that was performing the plotting; I hope it is more readable now. Visual servoing has to be performed with at least 4 target points, because otherwise the problem has no unique solution. If you are willing to help, I would suggest you take a look at the <code>calc_Rotation_matrix()</code> function to check that the rotation matrix is properly calculated, then verify that the line <code>ds = vc;</code> in <code>euler_ode</code> is correct. The camera orientation is expressed in Euler angles according to <a href="http://www.coppeliarobotics.com/helpFiles/en/eulerAngles.htm" rel="nofollow noreferrer">this</a> convention. 
Finally, one could check if the interaction matrix <code>L</code> is properly calculated.</p> <pre><code>function VisualServo() global A3D B3D C3D D3D A B C D Ad Bd Cd Dd %coordinates of the 4 points wrt camera frame A3D = [-0.2633;0.27547;0.8956]; B3D = [0.2863;-0.2749;0.8937]; C3D = [-0.2637;-0.2746;0.8977]; D3D = [0.2866;0.2751;0.8916]; %initial projections (computed here only to show their relation with the desired ones) A=A3D(1:2)/A3D(3); B=B3D(1:2)/B3D(3); C=C3D(1:2)/C3D(3); D=D3D(1:2)/D3D(3); %initial camera position and orientation %orientation is expressed in Euler angles (X-Y-Z around the inertial frame %of reference) cam=[0;0;0;0;0;0]; %desired projections Ad=A+[0.1;0]; Bd=B; Cd=C+[0.1;0]; Dd=D; t0 = 0; tf = 50; s0 = cam; %time step dt=0.01; t = euler_ode(t0, tf, dt, s0); end function ts = euler_ode(t0,tf,dt,s0) global A3D B3D C3D D3D Ad Bd Cd Dd s = s0; ts=[]; for t=t0:dt:tf ts(end+1)=t; cam = s; % rotation matrix R_WCS_CCS R = calc_Rotation_matrix(cam(4),cam(5),cam(6)); r = cam(1:3); % 3D coordinates of the 4 points wrt the NEW camera frame A3D_cam = R'*(A3D-r); B3D_cam = R'*(B3D-r); C3D_cam = R'*(C3D-r); D3D_cam = R'*(D3D-r); % NEW projections A=A3D_cam(1:2)/A3D_cam(3); B=B3D_cam(1:2)/B3D_cam(3); C=C3D_cam(1:2)/C3D_cam(3); D=D3D_cam(1:2)/D3D_cam(3); % computing the L matrices L1 = L_matrix(A(1),A(2),A3D_cam(3)); L2 = L_matrix(B(1),B(2),B3D_cam(3)); L3 = L_matrix(C(1),C(2),C3D_cam(3)); L4 = L_matrix(D(1),D(2),D3D_cam(3)); L = [L1;L2;L3;L4]; %updating the projection errors e = [A-Ad;B-Bd;C-Cd;D-Dd]; %compute camera velocity vc = -0.5*pinv(L)*e; %change of the camera position and orientation ds = vc; %update camera position and orientation s = s + ds*dt; end ts(end+1)=tf+dt; end function R = calc_Rotation_matrix(theta_x, theta_y, theta_z) Rx = [1 0 0; 0 cos(theta_x) -sin(theta_x); 0 sin(theta_x) cos(theta_x)]; Ry = [cos(theta_y) 0 sin(theta_y); 0 1 0; -sin(theta_y) 0 cos(theta_y)]; Rz = [cos(theta_z) -sin(theta_z) 0; sin(theta_z) cos(theta_z) 0; 0 0 1]; R = Rx*Ry*Rz; end function L = L_matrix(x,y,z) L = [-1/z,0,x/z,x*y,-(1+x^2),y; 0,-1/z,y/z,1+y^2,-x*y,-x]; end </code></pre> <p>Cases that work:</p> <pre><code>A2=2*A; B2=2*B; C2=2*C; D2=2*D; A2=A+1; B2=B+1; C2=C+1; D2=D+1; A2=2*A+1; B2=2*B+1; C2=2*C+1; D2=2*D+1; </code></pre> <p>Cases that do NOT work: Rotation by 90 degrees and zoom out (zoom out alone works, but I am doing it here for better visualization)</p> <pre><code>A2=2*D; B2=2*C; C2=2*A; D2=2*B; </code></pre> <p><a href="https://i.stack.imgur.com/0Xhde.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0Xhde.jpg" alt="enter image description here"></a></p>
Image Based Visual Servoing algorithm in MATLAB
<p>Definitely consider camera-based options; however, I would go with IR sensors or even some sound-based sensors if you want a really specific measurement.</p>
8019
2015-09-07T06:06:49.893
|slam|
<p>Which method is better, in terms of accuracy, for indoor localization of a drone: a camera-based system or wireless techniques like WLAN or Bluetooth?</p>
indoor positioning system: which is better?
<p>You are using one PID controller to try to drive a quaternion? A quaternion by definition represents three degrees of freedom (roll, pitch, yaw), and a PID controller is a Single Input, Single Output (SISO) controller.</p> <p>You're trying to mask a Multiple Input, Multiple Output (MIMO) system by hiding your three variables in a quaternion.</p> <p>If the moments of inertia around each primary axis are the same, then you might be able to get by with using one quaternion controller, but the thing to remember is that a PID controller gets tuned, and the PID tuning hides the system dynamics in the gains. A speed controller that works well for a 5 kg arm is not going to work well for a 50 kg arm because the gains are tuned to the acceleration - the mass gets &quot;built into&quot; the PID gains.</p> <p>When you look at something like aircraft, missiles, etc., the moments of inertia around each primary axis are going to be quite different, or <em>could</em> be quite different, and this is where I don't think a quaternion PID controller is going to be successful - there's not &quot;one&quot; mass or moment of inertia at work. A quaternion command that represents a satellite roll won't get the same dynamic response as a quaternion command that represents a satellite yaw unless the moments of inertia about those two axes are the same. The worst-case scenario would be that the moments of inertia are so different that a command that results in a nice, damped response on one axis results in instability on another axis.</p> <p>This is why I say it's really a multi-input, multi-output system when you look at a quaternion, and trying to handle it all in a single PID controller is difficult at best; tuning it could be a waste of time. The better approach (in my opinion) is to split control to each primary axis; then there's a single moment of inertia, single input, single output. This is where PID controllers can perform well.</p>
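<p>To sketch what "split control to each primary axis" can look like in practice (this is an illustration only; the (w, x, y, z) quaternion convention, the helper functions, and the gains are all assumptions): compute the error quaternion as you already do, then feed its vector part into three independent PID loops, one per body axis.</p>
<pre><code>import numpy as np

def quat_conj(q):
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def quat_mul(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    ])

class AxisPID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.i = 0.0
        self.prev = 0.0
    def step(self, e):
        self.i += e * self.dt
        d = (e - self.prev) / self.dt
        self.prev = e
        return self.kp * e + self.ki * self.i + self.kd * d

dt = 0.01  # assumed control period [s]
roll_pid = AxisPID(2.0, 0.0, 0.1, dt)    # placeholder gains, tuned per axis
pitch_pid = AxisPID(2.0, 0.0, 0.1, dt)
yaw_pid = AxisPID(1.0, 0.0, 0.05, dt)

def attitude_step(q_current, q_desired):
    q_err = quat_mul(quat_conj(q_current), q_desired)  # Qr = Qc' * Qd
    if q_err[0] &lt; 0:
        q_err = -q_err                                 # take the short way around
    ex, ey, ez = q_err[1:]                             # vector part ~ per-axis error
    return roll_pid.step(ex), pitch_pid.step(ey), yaw_pid.step(ez)
</code></pre>
<p>Because each axis has its own gains, you can tune each loop against that axis's moment of inertia instead of forcing one set of gains to fit all three.</p>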
8024
2015-09-07T21:24:38.050
|control|pid|stability|
<p>I want to control the attitude(roll, pitch, yaw) of a vehicle capable of pitching and rolling. To do this I have created a quaternion PID controller. First I take the current attitude of the vehicle converting it to a quaternion Qc and do the same for the desired attitude with the quaternion Qd. I then calculate the input of my PID controller as Qr = Qc' x Qd. The imaginary parts of the quaternions are then fed as force requests on the roll, pitch, yaw axes of the vehicle. I test on a simulator and the control works but becomes unstable in some cases (request for R: 60 P: 60 Y:60). I also want this to work around singularities (i.e. pitch 90)</p> <p>Does anyone know why I get this behavior and if so explain (thoroughly) what I'm doing wrong?</p>
PID quaternion controller
<p>It's tough to say anything about a really large covariance matrix. This is because (presumably) the matrix includes the uncertainty about a whole bunch of features, not just the robot pose. Typically what matters in real life is the uncertainty in the position of the robot (and possibly orientation). So with only three spatial dimensions, you can easily display a 3D covariance ellipsoid which will tell you the expected position and uncertainty of the robot.</p> <p>In the literature there are several methods for displaying some statistics about an arbitrarily large state space. The most common is some measure of the eigenvalues of the matrix. (For a 2x2 or 3x3 matrix, this is often a covariance ellipse as mentioned above.) For an nxn matrix, you can display the blockwise covariances for each feature, which will tell you which features are very, very roughly estimated, <em>or</em> you can output some function of the covariance itself.</p> <p>For example, the determinant is a common measure of uncertainty, used in GPS as the Dilution of Precision. Another common measure is the maximum eigenvalue, though that only really makes sense on a feature-by-feature basis.</p>
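<p>A minimal numpy sketch of this (the example covariance and the per-state "acceptable" scales are assumptions; dividing by the scales is one simple way to handle the different-units problem from the question):</p>
<pre><code>import numpy as np

P = np.diag([0.04, 0.04, 0.01])      # example 3x3 covariance: x [m^2], y [m^2], heading [rad^2]
scales = np.array([0.1, 0.1, 0.05])  # assumed "acceptable" std-dev per state, in state units

P_norm = P / np.outer(scales, scales)  # dimensionless covariance

summary = {
    "max_eigenvalue": float(np.max(np.linalg.eigvalsh(P_norm))),     # worst direction
    "trace": float(np.trace(P_norm)),                                # total normalized variance
    "det_root": float(np.linalg.det(P_norm) ** (1.0 / P.shape[0])),  # volume-like measure
}
print(summary)  # values around 1 or below mean "good"; much larger means "poor"
</code></pre>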
8027
2015-09-08T06:56:50.313
|kalman-filter|
<p>I'm implementing an extended Kalman filter and I'm facing a problem with showing the covariances to the user.</p> <p>The covariance matrix estimate contains all the information we have about the current value estimate, but that is too much to display. I would like to have a single number that says "our estimate is really good" when close to 0 and "our estimate is not worth much" when large.</p> <p>My intuitive simple solution would be to average all the values in the covariance estimate matrix (or maybe just the diagonal), except that in my case the values have different units and different ranges.</p> <p>Is it possible to do something like this?</p>
How to summarize Kalman filter covariances for display?
<p>To give you an answer that is similarly broad to your question: Mechanical engineering, electrical engineering, control theory or computer science are a good start.</p> <p>To give a more in-depth answer, you would have to tell us a little bit more. For starters, are you interested in robotics as a hobby or study, and what is your background? Is there any specific field of robotics that interests you? </p>
8028
2015-09-08T10:46:02.267
|artificial-intelligence|embedded-systems|first-robotics|
<p>What subjects are involved in robotics? If I want to build robots, what do I need to learn, and in what order, as a beginner?</p>
What do I need to learn to build robots?
<p>Computer simulations are not done in the continuous time domain. Simulator designers approximate continuous simulation by reducing the difference between consecutive discrete time points to such a low level that it "looks like" continuous simulation.</p> <p>If you are modelling something in a 'pseudo continuous' simulator, the best way to get similar results in a hardware implementation would be to increase the frequency of your control signals.</p>
8033
2015-09-08T19:56:23.303
|control|simulation|
<p>As far as I know, a robot sends commands as discrete signals. However, isn't computer simulation based on continuous simulation? Could there be any important differences when comparing reality to simulation in some cases? I heard that cable-driven robots were quite sensitive.</p>
continuous vs discrete simulation in robotics
<p>Unfortunately it is not as simple as just shifting the a and alpha columns, as the locations of the frames and the directions of their axes can also change when moving from one DH formulation to another.</p> <p>As you can see in the <a href="https://en.wikipedia.org/wiki/Denavit%E2%80%93Hartenberg_parameters#Modified_DH_parameters" rel="nofollow">Wikipedia entry on Modified DH Parameters</a>, these key differences result in a number of changes to how the overall transformation matrix is built.</p>
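<p>For reference, in the usual $\theta_i, d_i, a_i, \alpha_i$ notation (these are the standard textbook forms, not anything specific to OpenRave), the classic (distal) convention composes each joint transform as $$^{i-1}T_i = \mathrm{Rot}_z(\theta_i)\,\mathrm{Trans}_z(d_i)\,\mathrm{Trans}_x(a_i)\,\mathrm{Rot}_x(\alpha_i)$$ while the modified (proximal, Craig) convention uses $$^{i-1}T_i = \mathrm{Rot}_x(\alpha_{i-1})\,\mathrm{Trans}_x(a_{i-1})\,\mathrm{Rot}_z(\theta_i)\,\mathrm{Trans}_z(d_i)$$ which is why simply shifting the $a$ and $\alpha$ columns does not, in general, reproduce the same frames.</p>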
8043
2015-09-10T15:59:27.587
|kinematics|dh-parameters|
<p>I currently have a description of my 22-joint robot in "classic" DH parameters. However, I would like the "modified" parameters. Is this conversion as simple as shifting the $a$ and $\alpha$ columns of the parameter table by one row?</p> <p>As you can imagine, 22 joints is a lot, so I'd rather not re-derive all the parameters if I don't have to. (Actually, the classic parameters are pulled out of OpenRave with the command <code>planningutils.GetDHParameters(robot)</code>.) </p>
How to convert between classic and modified DH parameters?
<p>I'm not sure what you mean by:</p> <blockquote> <p>$R_A^C$ must for a 2x2 matrix be defined as $[xa \cdot xb , xa \cdot xb ; ya \cdot yb , ya \cdot yb]$</p> </blockquote> <p>because I don't know where you are getting the names or formulas from. If you have:</p> <p>$$ R_A^B = \left[ \begin{array}{ccc} a &amp; b \\ c &amp; d \end{array} \right] \\ R_B^C = \left[ \begin{array}{ccc} e &amp; f \\ g &amp; h \end{array} \right] \\ $$</p> <p>Then in multiplying the two you get:</p> <p>$$ R_A^C = R_A^B R_B^C = \left[ \begin{array}{ccc} (ae + bg) &amp; (af + bh) \\ (ce + dg) &amp; (cf + dh) \end{array} \right] \\ $$</p> <p>A 2x2 rotation matrix has the form:</p> <p>$$ R_A^B = R(\alpha) = \left[ \begin{array}{ccc} \cos{\alpha} &amp; -\sin{\alpha} \\ \sin{\alpha} &amp; \cos{\alpha} \end{array} \right] \\ R_B^C = R(\beta) = \left[ \begin{array}{ccc} \cos{\beta} &amp; -\sin{\beta} \\ \sin{\beta} &amp; \cos{\beta} \end{array} \right] \\ $$</p> <p>If you go through two successive rotations:</p> <p>$$ R_A^C = R(\alpha + \beta) = R(\alpha)R(\beta) = R_A^B R_B^C \\ R_A^C = \left[ \begin{array}{ccc} (\cos{\alpha} \cos{\beta} + (-\sin{\alpha} \sin{\beta})) &amp; ((-\cos{\alpha} \sin{\beta}) + (-\sin{\alpha} \cos{\beta})) \\ (\sin{\alpha} \cos{\beta} + \cos{\alpha} \sin{\beta}) &amp; ((-\sin{\alpha} \sin{\beta}) + \cos{\alpha} \cos{\beta}) \end{array} \right] \\ $$</p> <p>Using the angle sum identities:</p> <p>$$ \sin{(\alpha + \beta)} = \sin{\alpha}\cos{\beta} + \cos{\alpha}\sin{\beta} \\ \cos{(\alpha + \beta)} = \cos{\alpha}\cos{\beta} - \sin{\alpha}\sin{\beta} \\ $$</p> <p>You can simplify the result of the matrix multiplication: $$ R_A^C = \left[ \begin{array}{ccc} (\cos{\alpha} \cos{\beta} + (-\sin{\alpha} \sin{\beta})) &amp; ((-\cos{\alpha} \sin{\beta}) + (-\sin{\alpha} \cos{\beta})) \\ (\sin{\alpha} \cos{\beta} + \cos{\alpha} \sin{\beta}) &amp; ((-\sin{\alpha} \sin{\beta}) + \cos{\alpha} \cos{\beta}) \end{array} \right] \\ $$ $$ R_A^C = \left[ \begin{array}{ccc} (\cos{\alpha} \cos{\beta} - \sin{\alpha} \sin{\beta}) &amp; -(\cos{\alpha} \sin{\beta} + \sin{\alpha} \cos{\beta}) \\ (\sin{\alpha} \cos{\beta} + \cos{\alpha} \sin{\beta}) &amp; (-\sin{\alpha} \sin{\beta} + \cos{\alpha} \cos{\beta}) \end{array} \right] \\ $$ $$ R_A^C = \left[ \begin{array}{ccc} \cos{(\alpha + \beta)} &amp; -\sin{(\alpha + \beta)} \\ \sin{(\alpha + \beta)} &amp; \cos{(\alpha + \beta)} \end{array} \right] \\ $$</p> <p>And that's how you get $R_A^C = R_A^B R_B^C$. It works out because the matrix multiplication results in angle additions. </p>
8045
2015-09-10T19:08:13.490
|frame|
<p>I am at the moment learning about rotation matrices. It seems confusing how it could be that $R_A^C=R_A^BR_B^C$ is the rotation from coordinate frame <s>A to C</s> C to A, where A, B, C are different coordinate frames.</p> <p>$R_A^C$ must for a 2x2 matrix be defined as $$ R_A^C= \left( \begin{matrix} xaβ‹…xb &amp; xaβ‹…xb \\ yaβ‹…yb &amp; yaβ‹…yb \end{matrix} \right) $$</p> <p>$x_a, y_a$ and $x_b, y_b$ are coordinates for points given in different coordinate frames. I don't see how, using this standard, the multiplication stated above will give the same matrix as for $R_A^C$. Some form of clarification would be helpful here.</p>
composition of rotation matrices
<p>First off, there's no guarantee the wire colors in the iRobot cable match the colors in the cable made by Tensility. Step one is to get an ohm meter and see if the colors match the pins you expect. If they do, I wonder if you accidentally blew the resettable fuse? See the discussion at the end of the section entitled External Serial Port Mini-DIN Pinout on <a href="http://www.irobotweb.com/~/media/MainSite/PDFs/About/STEM/Create/iRobot_Roomba_600_Open_Interface_Spec.pdf" rel="nofollow noreferrer">page 3</a>. Depending on what you've plugged in, you may have pulled more than 200mA. Wait a bit for the fuse to cool, and then with nothing connected to the robot, (carefully) jam a multimeter between pins 6 and 1 on the connector and see if you get battery voltage. If you do, plug your cable in, with nothing attached to the cable but the voltmeter, and see if you get battery voltage where you'd expect it. If that checks out, put an ammeter between the battery power wires and your Pi, preferably in peak detect mode. </p> <p>Good luck; I hope you have a multimeter. </p> <p><sup>Please note that while I'm an iRobot employee, the postings on this site are my own and don't necessarily represent iRobot's positions, strategies, or opinions.</sup></p>
8050
2015-09-12T16:59:39.940
|raspberry-pi|irobot-create|serial|roomba|
<p>I don't seem to be able to get any battery power from Create 2. I spliced the original cable it came with, and tried to use the power from red/purple(+) and yellow/orange(-) to power a Raspberry Pi 2, with no luck. While the serial-to-USB cable still works, and I am able to command the robot via Python, there seems to be no power coming from the red/purple cables. I tried with a multimeter with no luck, even as I moved the device through passive/safe/full modes. There is no power even when Create 2 is charging/docked.</p>
iRobot Create 2 serial battery power
<p>A homogeneous transformation matrix $H$ is often used as a matrix to perform transformations from one frame to another frame, <em>expressed in the former frame</em>. The translation vector thus includes the [x,y(,z)] coordinates of the latter frame expressed in the former. Perhaps this already answers your question, but below is a more elaborate explanation.</p> <p>The transformation matrix contains information about both rotation and translation and belongs to the special Euclidean group $SE(n)$ in $n$-D. It consists of a rotation matrix $R$ and a translation vector $r$. If we permit no shear, the rotation matrix contains only information about the rotation and belongs to the orthonormal group $SO(n)$. We have:</p> <p>$$H=\begin{bmatrix} R &amp; r \\ \bar{0} &amp; 1 \end{bmatrix}$$</p> <p>Let's define $H^a_b$ as the transformation matrix that expresses coordinate frame $\Phi_b$ in $\Phi_a$, expressed in $\Phi_a$. $\Phi_a$ can be your origin, but it can also be another frame.</p> <p>You can use the transformation matrix to express a point $p=[p_x\ p_y]^\top$ (vectors) in another frame: $$P_a = H^a_b\,P_b$$ $$P_b = H^b_c\,P_c$$ with $$P = \begin{bmatrix} p \\ 1 \end{bmatrix}$$ The best part is that you can stack them as follows: $$P_a = H^a_b H^b_c\,P_c = H^a_c\,P_c $$ Here is a small 2D example. Consider a frame $\Phi_b$ translated $[3\ 2]^\top$ and rotated $90^\circ$ with respect to $\Phi_a$. $$H^a_b = \begin{bmatrix}\cos(90^\circ) &amp; -\sin(90^\circ) &amp; 3 \\ \sin(90^\circ) &amp; \cos(90^\circ) &amp; 2 \\ 0 &amp; 0 &amp; 1 \end{bmatrix}=\begin{bmatrix}0 &amp; -1 &amp; 3 \\ 1 &amp; 0 &amp; 2 \\ 0 &amp; 0 &amp; 1 \end{bmatrix}$$ A point $p_b=[3\ 4]^\top$ given in frame $\Phi_b$ is, expressed in frame $\Phi_a$, $$\begin{bmatrix}p_{a,x} \\ p_{a,y} \\ 1 \end{bmatrix} = \begin{bmatrix}0 &amp; -1 &amp; 3 \\ 1 &amp; 0 &amp; 2 \\ 0 &amp; 0 &amp; 1 \end{bmatrix}\begin{bmatrix}3 \\ 4 \\ 1 \end{bmatrix}=\begin{bmatrix}-1 \\5 \\1 \end{bmatrix} \to p_a = \begin{bmatrix}-1\\5\end{bmatrix}$$ Try to make a drawing to improve your understanding.</p>
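<p>For completeness, here is a small NumPy sketch (my own illustration, simply re-doing the 2D example above numerically) that builds $H^a_b$ and maps the point into frame $\Phi_a$:</p> <pre><code>import numpy as np

theta = np.deg2rad(90)            # rotation of frame b with respect to frame a
r = np.array([3.0, 2.0])          # origin of frame b expressed in frame a

# Homogeneous transformation H^a_b (frame b expressed in frame a)
H_ab = np.array([[np.cos(theta), -np.sin(theta), r[0]],
                 [np.sin(theta),  np.cos(theta), r[1]],
                 [0.0,            0.0,           1.0]])

p_b = np.array([3.0, 4.0, 1.0])   # point in frame b, homogeneous coordinates
p_a = H_ab @ p_b
print(p_a[:2])                    # approximately [-1, 5], as in the worked example
</code></pre>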
8053
2015-09-13T15:23:01.840
|kinematics|frame|
<p>I am trying to understand how to use the homogeneous transformation matrix, and what it requires to compute it. </p> <p>I know 2 points from 2 different frames, and the 2 origins of their corresponding frames. </p> <p>I know how the transformation matrix looks, but what's confusing me is how I should compute the (3x1) position vector which the matrix needs. As I understand it, this vector is the origin of the old frame relative to the new frame. But how do I calculate it? The obvious answer (I think) would be to subtract the two origins ($O_{new} - O_{old}$), but it does not feel right. </p> <p>I know it's a simple question but my head cannot get around this issue. How can I work it out the right way, with the information I know?</p>
How to use the Homogeneous transformation matrix?
<p>I don't have one, but I found this article:</p> <p><a href="http://www.robotreviews.com/chat/viewtopic.php?t=10631" rel="noreferrer">http://www.robotreviews.com/chat/viewtopic.php?t=10631</a></p> <p>There they say the "cliff sensors" collect a lot of dust and can be cleaned out with a blast of compressed air. From the article, these sensors detect the floor and prevent the Roomba from wandering off the top stair, or some similar mishap.</p> <p>If these sensors are dirty, apparently it will keep stopping and beeping, thinking its route is blocked.</p> <p>If you don't have a source of compressed air you can get "canned air" from specialist camera places, but it's expensive. Otherwise, just blowing hard (once per sensor, without getting any spit in there!) may do the trick but be careful not to damage anything...</p>
8080
2015-09-17T16:59:08.490
|roomba|
<p>I'm really in doubt whether it is proper to ask this question here, so I apologize if it is not; I'll delete it.</p> <p>I have a Roomba robot which has worked for me for more than three years, and now while it is working it is producing some strange sounds, so I've decided to clean it thoroughly.</p> <p>But when I disassembled it down to this point:</p> <p><a href="https://i.stack.imgur.com/W2ZTR.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W2ZTR.jpg" alt="Roomba disassembled"></a> </p> <p>I got stuck with these sorts of glass things (marked with the red rectangles in the picture). They are really filthy from the inside and I cannot figure out how to clean them.</p> <p>Does anyone know how one can remove dust from the inside of these things? Maybe there are some Roomba creators here.</p> <p>Thanks in advance.</p>
Could anyone tell me what are these things in a Roomba robot and how to clean them, please?
<p>You might be driving the servo beyond its limits. As noted in the pololu.com article "<a href="https://www.pololu.com/blog/17/servo-control-interface-in-detail" rel="nofollow">Servo control interface in detail</a>":</p> <blockquote> <p>If you keep making the pulses wider or narrower, servos will keep moving farther and farther from neutral until they hit their mechanical limits and possibly destroy themselves. This usually isn't an issue with RC applications since only about half of the mechanical range is ever used, but if you want to use the full range, you have to calibrate for it and be careful to avoid crashing the servo into its limit.</p> </blockquote> <p>Similarly, the arduino.cc article about <a href="https://www.arduino.cc/en/Reference/ServoWriteMicroseconds" rel="nofollow">Servo.WriteMicroseconds</a> says (with emphasis I added):</p> <blockquote> <p>On standard servos a parameter value of 1000 is fully counter-clockwise, 2000 is fully clockwise, and 1500 is in the middle. [...] some [...] servos respond to values between 700 and 2300. Feel free to increase these endpoints until the servo no longer continues to increase its range. <strong>Note however that attempting to drive a servo past its endpoints (often indicated by a growling sound) is a high-current state, and should be avoided.</strong></p> </blockquote> <p>I suggest running a calibration, where you drive the servo to different angles, to find out what parameter values are large enough to drive the servo to its extreme positions without overdriving it.</p>
8103
2015-09-22T16:07:28.647
|arduino|rcservo|
<p>I have recently purchased my first ever servo, a cheap unbranded Chinese MG996R servo, for £3.20 on eBay.</p> <p><a href="https://i.stack.imgur.com/p57Ha.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/p57Ha.jpg" alt="MG996R clone servo"></a></p> <p>I am using it in conjunction with an Arduino Servo shield (see below):</p> <p><a href="https://i.stack.imgur.com/RKe8Z.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RKe8Z.jpg" alt="Arduino Servo Shield"></a></p> <p>As soon as it arrived, before even plugging it in, I unscrewed the back and ensured that it had the shorter PCB, rather than the full length PCB found in MG995 servos. So, it seems to be a reasonable facsimile of a bona-fide MG996R.</p> <p>I read somewhere (shame I lost the link) that they have a limited life, due to the resistive arc in the potentiometer wearing out. So, as a test of its durability, I uploaded the following code to the Arduino, which just constantly sweeps from 0° to 180° and back to 0°, and left it running for about 10 to 15 minutes, in order to perform a very simple <em>soak test</em>.</p> <pre><code>#include &lt;Servo.h&gt;

const byte servo1Pin = 12;

Servo servo1;  // create servo object to control a servo
               // twelve servo objects can be created on most boards

int pos = 0;   // variable to store the servo position

void setup()
{
  servo1.attach(servo1Pin);  // attaches the servo on pin 12 to the servo object
  Serial.begin(9600);
}

void loop()
{
  pos = 0;
  servo1.write(pos);    // tell servo to go to position in variable 'pos'
  Serial.println(pos);
  delay(1000);          // waits 1000 ms for the servo to reach the position

  pos = 180;
  servo1.write(pos);    // tell servo to go to position in variable 'pos'
  Serial.println(pos);
  delay(1000);          // waits 1000 ms for the servo to reach the position
}
</code></pre> <p>When I returned, the servo was just making a grinding noise and no longer sweeping, but rather it seemed to be stuck in the 0° position (or the 180° one). I picked the servo up and whilst not hot, it was certainly quite warm. A quick sniff also revealed that hot, burning motor windings smell. After switching off the external power supply and allowing it to cool, the servo began to work again. However, the same issue occurred a little while later. Again, after allowing it to rest, upon re-powering, the servo continues to work. However, I am reluctant to continue with the soak test, as I don't really want to burn the motor out, just yet.</p> <p>Is there a common "no-no" of not making servos sweep from extreme to extreme, so that one should "play nice" and just perform 60° sweeps, or is the cheapness of the servo the issue here?</p> <p>I am powering the servo from an external bench supply, capable of 3A, so a lack of current is not the issue.</p> <hr> <p>Please note that I also have a follow-up question, <a href="https://robotics.stackexchange.com/questions/8104/should-a-mg996r-servos-extreme-position-change-over-time">Should a MG996R Servo's extreme position change over time?</a></p>
Overheating/Jamming MG996R servo
<p>Is there anything with a weight (actually a moment of inertia) connected to the servo? If so, it might be that the plastic "mounting" part is slipping with respect to the axle. The axle itself shouldn't ever change its extremes.</p>
8104
2015-09-22T16:13:59.983
|rcservo|
<p>This question is a follow-on from my previous question, <a href="https://robotics.stackexchange.com/questions/8103/overheating-jamming-servo">Overheating/Jamming MG996 servo</a>.</p> <p>I have recently purchased my first ever servo, a cheap unbranded Chinese MG996R servo, for £3.20 on eBay.</p> <p><a href="https://i.stack.imgur.com/p57Ha.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/p57Ha.jpg" alt="MG996R clone servo"></a></p> <p>After mounting the servo horn and the bracket, I realised that I had not mounted the horn in a <em>tout à fait</em> 0° orientation; rather, the angle between the bracket and the servo side was approximately 20°. However, after switching the servo on and off a couple of times, each time allowing the servo to perform, say, about 10 sweeps, I quickly noted that the servo's extreme positions were changing over time, so that the initial extremes and the extremes after about 5 on-and-off cycles differed by about 15°, and now, at 0° and 180°, the bracket is parallel with the body of the servo. </p> <p>I was quite surprised at this, as I had assumed that the 0° and 180° positions would be fixed, and not change over time, or vary each time that it was switched on and off.</p> <p>Seeing as there should be a <em>stop peg</em> on the gear connected to the potentiometer inside, how is this even possible?</p>
Should a MG996R Servo's extreme position change over time?
<p>An ESC needs to have the throttle limits adjusted before you can use it. While there are many options you can set from a programming menu (see page 4 of <a href="http://www.hobbyking.com/hobbyking/store/uploads/981522291X561242X33.pdf" rel="nofollow">this PDF</a> for reference), the throttle adjustment is pretty straightforward:</p> <ol> <li>Disconnect the battery from the ESC</li> <li>Send and hold a full throttle signal to the ESC</li> <li>Connect the battery to the ESC</li> <li>Wait 2 seconds for a beep-beep (di-di)</li> <li>Send a zero throttle signal to the ESC</li> <li>Hold zero throttle until you hear a long beep</li> </ol> <p>If you continue having problems or this doesn't work, check that the battery pack you are using is fully charged and the correct voltage for the ESC you are trying to use.</p>
8108
2015-09-23T09:51:39.470
|arduino|brushless-motor|esc|pwm|
<h2>Attempt to clean up:</h2> <p>I'm trying to use <a href="http://rctimer.com/product-1127.html" rel="nofollow noreferrer">this motor</a> with <a href="http://www.rctimer.com/product-1279.html" rel="nofollow noreferrer">this ESC</a> and an Arduino Uno R3.</p> <p>Typically, I used the PWM pins when I use the Arduino and an ESC, but I can't control the motor even if I use the servo library, and I've also tried sample code from different websites.</p> <p>The ESC has a beep I can't understand. Sometimes it's high-low-high or high for 4 seconds, but I can't find anything on Google. </p> <p>Sometimes the motor spins periodically for a short time, but I don't know why. Some sites recommend using flash or bootloader, but I'd prefer to use Arduino PWM or the servo library. </p> <h2>Original post</h2> <p>Specific ESC is Rctimer Mini ESC16A OPTO SimonK Firmware SN16A ESC..</p> <p>I can only using ESC(Discussed above..) and RCTimer 1806-1450KV Multi-Rotor BLDC Motor.</p> <p>Typically, I used PWM pins(3, 9, 10, 11-because similar Signal frequency) when using Arduino-ESC.. but, I can control BLDC Motor even i used to Servo library.. </p> <p>I've been used usual websites example code.</p> <p>Just ESC had unknowable beep .. sometime di-ri-di or di(for 4 seconds).. I couldn't find that way.. in google (or my country websites)</p> <p>Sometimes, The Motor spins(In a certain value, periodically) for a short time but I don't know why The motor spins</p> <p>In google sites, just using flash or Bootloader, but I'll use Arduino PWM or Servo..</p> <p>So.. Please! would you please help me? Thank you for reading my thread.</p> <p><a href="https://i.stack.imgur.com/vU9oN.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vU9oN.jpg" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/MNUMG.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MNUMG.jpg" alt="enter image description here"></a></p>
How to use specific ESC,BLDC motor through Arduino Uno R3?
<p>Typically with a <strong>m</strong>ultiple <strong>i</strong>nput, <strong>m</strong>ultiple <strong>o</strong>utput (MIMO) system, a control engineer uses a <a href="https://en.wikipedia.org/wiki/Full_state_feedback">state feedback controller</a>. This style of controller leverages a <a href="https://en.wikipedia.org/wiki/State-space_representation">state-space model</a> of the system and generally takes the form:</p> <p>$$ \dot{x}=\mbox{A}x+\mbox{B}u \\ y = \mbox{C}x + \mbox{D}u \\ $$</p> <p>where $x$ is a vector of states, $u$ is a vector of inputs, $y$ is a vector of outputs, and the time derivative of the states, $\dot{x}$, shows how the states evolve over time, as determined by combinations of states $\mbox{A}$ and inputs $\mbox{B}$. Outputs are also determined by an interaction between states and inputs, but the outputs can be any combination, so the output state and input matrices are different - $\mbox{C}$ and $\mbox{D}$. </p> <p>I won't go into a large amount of detail regarding state feedback controls, but in general, the matrices $\mbox{A} \rightarrow \mbox{D}$ "map" or associate a particular state or input to another state or input. For instance, if you want to model a system of unrelated differential equations, you would get something like:</p> <p>$$ \dot{x} = \left[ \begin{array}{ccc} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{array} \right] = \left[ \begin{array}{ccc} k_1 &amp; 0 &amp; 0 \\ 0 &amp; k_2 &amp; 0 \\ 0 &amp; 0 &amp; k_3 \end{array} \right] \left[ \begin{array}{ccc} x_1 \\ x_2 \\ x_3 \end{array} \right] $$ which represents: $$ \dot{x}_1 = k_1 x_1 \\ \dot{x}_2 = k_2 x_2 \\ \dot{x}_3 = k_3 x_3 \\ $$</p> <p>If you wanted to add input $u_1$ to the equation for $\dot{x}_1$ and input $u_2$ to $\dot{x}_3$, then you could add a $\mbox{B}u$ term:</p> <p>$$ \dot{x} = \left[ \begin{array}{ccc} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{array} \right] = \left[ \begin{array}{ccc} k_1 &amp; 0 &amp; 0 \\ 0 &amp; k_2 &amp; 0 \\ 0 &amp; 0 &amp; k_3 \end{array} \right] \left[ \begin{array}{ccc} x_1 \\ x_2 \\ x_3 \end{array} \right] + \left[ \begin{array}{ccc} 1 &amp; 0 \\ 0 &amp; 0 \\ 0 &amp; 1 \end{array} \right] \left[ \begin{array}{ccc} u_1 \\ u_2 \end{array} \right] $$</p> <p>If you want to keep this, but you think that state $x_1$ contributes to how $x_2$ changes, you can add that interaction:</p> <p>$$ \dot{x} = \left[ \begin{array}{ccc} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{array} \right] = \left[ \begin{array}{ccc} k_1 &amp; 0 &amp; 0 \\ \boxed{ k_{x_1 \rightarrow x_2} } &amp; k_2 &amp; 0 \\ 0 &amp; 0 &amp; k_3 \end{array} \right] \left[ \begin{array}{ccc} x_1 \\ x_2 \\ x_3 \end{array} \right] + \left[ \begin{array}{ccc} 1 &amp; 0 \\ 0 &amp; 0 \\ 0 &amp; 1 \end{array} \right] \left[ \begin{array}{ccc} u_1 \\ u_2 \end{array} \right] $$</p> <p>When you write these out now, you get:</p> <p>$$ \begin{array} \\ \dot{x}_1 &amp; = &amp; k_1 x_1 + u_1 \\ \dot{x}_2 &amp; = &amp; k_{x_1 \rightarrow x_2}x_1 + k_2 x_2 \\ \dot{x}_3 &amp; = &amp; k_3 x_3 + u_2 \end{array} $$</p> <p>You can keep building up complexity as your system requires. Once you have a model, for state feedback controls, you need to make sure that the system is <strong>linear</strong>, in that the system doesn't have trig functions or one state multiplying itself or another state, and make sure that it is <strong>time invariant</strong>, in that the matrices $\mbox{A} \rightarrow \mbox{D}$ don't change with time - no function of (t) in them. 
You may be able to make some simplifications, such as a <a href="https://en.wikipedia.org/wiki/Small-angle_approximation">small angle approximation</a> to help get your $\mbox{A}$ matrix into the <strong>LTI</strong> form required for the next step. </p> <p>Now you can "mask" the entire system into the tidy two equations first shown, hiding the entire $\mbox{A}$ matrix with just the letter 'A', etc. With the <a href="https://en.wikipedia.org/wiki/Laplace_transform">Laplace transform</a> you can (hand-wave) evaluate the uncontrolled, open-loop dynamics of the system. You do this by finding the <a href="http://demonstrations.wolfram.com/SimulationOfFeedbackControlSystemWithControllerAndSecondOrde/">poles of the system</a>, which in turn indicate system response. </p> <p>You can also evaluate the system to see if it is <a href="https://en.wikipedia.org/wiki/Controllability">controllable</a>, meaning that you can use your inputs to alter all of the states in a unique manner, and to see if it is <a href="https://en.wikipedia.org/wiki/Observability">observable</a>, meaning that you can actually determine what the values of the states are. </p> <p>If the system is controllable, you can take information about the states, $-\mbox{G}x$, and feed that into the system, using the information you have about the states to drive them to a desired value. Using only the two initial equations for clarity, when you add the control signal to the input you get:</p> <p>$$ \dot{x} = \mbox{A}x + \mbox{B}(u - \mbox{G}x) \\ y = \mbox{C}x + \mbox{D}u \\ $$</p> <p>which becomes:</p> <p>$$ \dot{x} = \mbox{A}x - \mbox{BG}x + \mbox{B}u \\ y = \mbox{C}x + \mbox{D}u \\ $$</p> <p>which can be rearranged as:</p> <p>$$ \dot{x} = [\mbox{A}-\mbox{BG}]x + \mbox{B}u \\ y = \mbox{C}x + \mbox{D}u \\ $$</p> <p>Where before your system response was driven by the $\mbox{A}$ matrix, now it is driven by $\mbox{A-BG}$. You can again evaluate the poles via the Laplace transform, but now you have a gain matrix $\mbox{G}$ you can use to tune the controller, putting the poles wherever you want, which establishes the time response to be whatever you want.</p> <p>The process continues, with <a href="https://en.wikipedia.org/wiki/State_observer">observers</a> set up to compare the actual system output $y$ with the model's predicted output $\hat{y}$. This is where it's important to note that the outputs don't have to be the same combination of states as you use in the state differential equation - where your states might be a current, your output might be a voltage ($R\times I$) so you can make a comparison with a measurable signal on your real system. </p> <p>Like I said, there is a <em>ton</em> of information involved with modeling systems and designing state feedback controllers; I just outlined the general process, as I believe this is the scope you were looking for with your question. </p>
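<p>To make that last step concrete, here is a short NumPy sketch (my own toy numbers, not a model of the resistor network in the question) that builds an $\mbox{A}$ and $\mbox{B}$ like the ones above and shows how a gain matrix $\mbox{G}$ moves the closed-loop poles of $\mbox{A}-\mbox{BG}$:</p> <pre><code>import numpy as np

# Toy open-loop system: three states, two inputs, one cross-coupling term
k1, k2, k3 = -1.0, -2.0, -0.5
k12 = 0.8                        # effect of x1 on x2_dot
A = np.array([[k1,  0.0, 0.0],
              [k12, k2,  0.0],
              [0.0, 0.0, k3 ]])
B = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [0.0, 1.0]])

print("open-loop poles:  ", np.linalg.eigvals(A))

# Hand-picked (not optimal) state-feedback gain, u = -G x
G = np.array([[4.0, 0.0, 0.0],
              [0.0, 0.0, 2.5]])

# Closed-loop poles are the eigenvalues of A - B G
print("closed-loop poles:", np.linalg.eigvals(A - B @ G))
</code></pre>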
8111
2015-09-23T17:12:28.867
|control|pid|
<p>I'm familiar with using PID to perform closed loop control when there is a single output and a single error signal for how well the output is achieving the desired set-point.</p> <p>Suppose, however, there are multiple control loops, each with one output and one error signal, but the loops are not fully independent. In particular, when one loop increases its actuator signal, this changes the impact of the output from other loops in the system.</p> <p>For a concrete example, imagine a voltage source in series with a resistor, applying a voltage across a system of six adjustable resistors in parallel. We can measure the current through each resistor and we want to control the current of each resistor independently by adjusting the resistance. Of course, the trick here is that when you adjust one resistor's resistance, it changes the overall resistance of the parallel set, which means it changes the voltage drop due to the divider with the voltage source's resistance and hence changes the current through the other resistors.</p> <p>Now, clearly we have an ideal model for this system, so we can predict what resistance we should use for all resistors simultaneously by solving a set of linear equations. However, the whole point of closed loop control is that we want to correct for various unknown errors/biases in the system that deviate from our ideal model. The question then: what's a good way to implement closed loop control when you have a model with this kind of cross-coupling?</p>
Multiple control loops with overlapping effects
<p>As I mentioned in <a href="https://robotics.stackexchange.com/questions/2553/capacitive-touch-input-robot-to-remote-access-ipad">this other question</a> on the same topic, it should be easy to put a stylus on a small robot arm. Note that since you don't need a lot of elevation difference, a <a href="https://en.wikipedia.org/wiki/Cartesian_coordinate_robot" rel="nofollow noreferrer">gantry</a>, <a href="https://en.wikipedia.org/wiki/Delta_robot" rel="nofollow noreferrer">Delta</a>, or <a href="https://en.wikipedia.org/wiki/SCARA" rel="nofollow noreferrer">SCARA</a> style robot is probably better than a "traditional" robot arm.</p>
8120
2015-09-25T21:16:11.547
|automatic|
<p>I have extremely limited knowledge in the general topic of robotics and therefore this question is a shot in the dark. Please let me know if the topic is unsuitable for the site.</p> <p>I am interested in creating a device that would generate a touchscreen tap. In a nutshell, I would like to replicate on a touchscreen the automated mouse functionality you can obtain with software like AutoHotKey in Windows. Since, without jailbreaking the phone, a software solution is basically impossible, it occurs to me that one of the first components would be a physical device that simulates a tap. Do any options for such a component exist?</p> <p>I recognize that there are philosophical implications with creating such a device. I am assuming the entire conversation to be theoretical and solely related to the hardware design.</p> <p>Thanks, Alex</p>
Device to generate screen tap response
<p>The alternator in a car is constantly charging the 12V battery while the car is running, which is to say battery-wise you shouldn't have any issues. </p> <p>Several things could be meant when you say the motors are "rated" for 12V. Is that a maximum, minimum, or nominal voltage? I'm assuming they're rated for DC power, but there are some low voltage AC motors. </p> <p>If I recall correctly, I <em>think</em> a vehicle 12V bus could swing from dead battery voltage of 11.8 to a charging voltage of 14.4V, so you should check that you aren't violating any ratings on the motors. </p> <p>For the electronics I would use a 5V DC-DC converter. You should consider a 12V-12V DC-DC converter for power conditioning if the generator output is too noisy. </p> <p>The only reason I would consider using a battery is if the generator can't supply the peak current you need or if you have to be able to run some functions without the generator on. Remember that the generator must supply more than the average power draw or you'll always wind up running the batteries dead.</p> <p>Lastly, some generators may require a minimum load for proper voltage regulation, so be sure to check. </p>
8122
2015-09-26T17:16:51.937
|electronics|power|battery|
<p>I have designed a robot to perform tasks in farms. But the problem now is that I'm not sure of the best way to supply continuous power to my robot. All the motors are rated at 12V and only the Arduino and a few sensors work at 5V or less. </p> <p>Can I continuously charge a 12V lead acid battery with an adapter (comes with the battery) plugged into the AC output of the generator while the robot is operating? Do I have to worry about overcharging the battery?</p> <p>Or should I use the generator's DC output, which can supply 12V at up to 8.3 A? Or are there any other suggestions?</p> <p>Some information about the adapter, as stated on the package: 1. Built-in over-charge protection device. 2. Built-in thermal protection device. 3. Output: 6V/12V, 2 A.</p> <p>This is the generator that I have: <a href="http://global.yamaha-motor.com/business/pp/generator/220v-60hz/0-1/et950/" rel="nofollow">http://global.yamaha-motor.com/business/pp/generator/220v-60hz/0-1/et950/</a></p> <p>This is my first robot; it is quite big and requires a lot of electrical/electronic knowledge to power it. I do not have a lot of experience in this field. So any feedback is greatly appreciated.</p>
Powering my robot with 12V battery which is charged by a gas/petrol generator while the robot is operating?
<blockquote> <p>If you can do well on simulations, you should be able to do it well in the real world.</p> </blockquote> <p>There's this lovely saying:</p> <blockquote> <p>In theory, there is no difference between theory and practice.</p> </blockquote> <p>And what that implies for practice goes without saying. </p> <blockquote> <p>You can model delay and noise, you can model servo's response curve.</p> </blockquote> <p>A simulation is only as good as the model it is simulating. We like simple models. We don't like nonlinear things. We don't like to deal with too many variables...</p> <p>That something works well in a simulation is often a necessary condition to be able to do it in the real world, but it's not necessarily (pun intended) a sufficient one.</p> <p>Having all the necessary information doesn't mean it's possible to do it or reasonable to do so. Maybe we know how to do the calculations and have all the input, but cannot do the calculations quickly enough with a reasonably sized processor.</p> <p>It appears to me that your reasoning is that because we are able to do it, we should be doing it. There are plenty of other reasons to do (or <strong>not</strong> do) something aside from the mere possibility of doing it. Maybe it is in fact possible to build a robot with bipedal locomotion. <strong>It's quite possible that the difficulty in actually doing it is in finding a reason to do so.</strong></p> <p>Would that bipedal robot be faster than a robot with wheels? More stable? Would it require less energy? How well would it work on different terrain? Will the money required for further research and testing pay off later in any way?</p> <p>A lack of benefits, or at least a lack of obvious benefits, is a major obstacle to developing something. Running a computer simulation is one thing, but building a physical robot (and designing one before that) requires funding, which requires making rich people interested in finding a solution to the problem.</p> <blockquote> <p>some of the evolutionary algorithms converge to very good solutions</p> </blockquote> <p>Speaking of evolution, bipedal motion is not exactly the most popular one. And it isn't the best one either, because no matter how successful you consider the bipedal homo sapiens, one thing he did was to invent other forms of locomotion superior to his bipedal one.</p> <p>It looks like this is one of those research topics that we'd like to solve so we can claim that we are able to do it, but other technologies (drones, for example) seem to offer more and more obvious advantages.</p>
8129
2015-09-27T15:48:56.970
|walk|
<p>Not sure if this has been asked, but there are lots of simulations of bipedal locomotion algorithms online, and some of the evolutionary algorithms converge to very good solutions. So it seems to me that the algorithm part of bipedal locomotion is well-understood.</p> <p>If you can do well on simulations, you should be able to do it well in the real world. You can model delay and noise, you can model servo's response curve.</p> <p>So what I don't understand is: why is it still difficult to make a walking robot? Even a robot like the Big Dog is rare.</p>
Why are bipedal robots difficult?
<p>I don't have access to the quoted reference, but believe the explanation (stated, for simplicity, in terms of a vector rather than a matrix) below will apply. </p> <p>Consider an $n$-element vector $V$. In general, there are $n$ degrees of freedom for populating the vector. However, if scale is unimportant or of no account, that is, if all scalar multiples $a V$ are taken as equivalent, then we might as well scale $V$ so that $||V|| = 1$ (i.e., it has unit length). In this case, we have at most $n-1$ degrees of freedom, because specifying any $n-1$ elements of a unit length vector forces the value of the remaining element.</p> <p>The same counting carries over to a matrix whose overall scale does not matter: a $3 \times 3$ homography, for example, has 9 entries, but since multiplying all of them by a non-zero scalar describes the same mapping, it only has 8 degrees of freedom.</p>
8133
2015-09-28T09:40:35.993
|computer-vision|
<p>I am reading "Computer Vision: Models, Learning, and Inference" in which author writes at several points (like on page 428-429) that although matrix <strong>A</strong> seems to have 'n' degree of freedom but since it is ambiguous up to scale so it only has 'n-1' degree of freedom? Can anyone explain what this thing means? Why one degree of freedom is decreased?</p>
"Ambiguous up to scale" , Explanation required
<p>One way you can use SLAM in your setup is to stop the robot every 30 cm or so, and perform a sweep with your lidar. You can then use e.g. one of the 2D SLAM packages from <a href="https://openslam.org/" rel="nofollow">openslam</a>. The problem with sweeping while you are moving is that you cannot get any correspondence information from a single range reading. SLAM works by associating features in one reading with the same feature in another reading. By having a single point reading you cannot tell if two readings have the same original scene point or not. If you have two scans on the other hand, scene features like e.g. a bump in the wall can be matched between the readings.</p>
8146
2015-09-29T20:13:29.110
|slam|otherservos|laser|rangefinder|
<p>I have a small mobile robot with a LidarLite laser range finder attached to a servo. As of now I have the range finder side-sweeping in a 30 degree arc, taking continuous distance readings to the side of the robot (perpendicular to the robot's forward motion).</p> <p>My goal is to have the robot drive roughly parallel to a wall, side-scanning the entire time, and create a 2D map of the wall it is moving past. The 2D topography map is created in post-processing (I use R for much of my data processing, but I don't know what is popular for this kind of work).</p> <p>From what I know of it, SLAM sounds like a great tool for what I want to do. But I have two issues:</p> <p>1: I know my robot will not have a consistent speed, and I have no way to predict or measure the speed of my robot. So I have no way to estimate the odometry of the robot. </p> <p>2: The robot will also move farther from and closer to the wall as it proceeds down its path. So I cannot depend on a steady plane of travel from my robot.</p> <p>So given that I don't have any odometry data, and my relative distance to the wall changes over the course of a run, is it possible to use SLAM to create 2D maps?</p> <p>I'm looking into stitching algorithms that are used for other applications, and some of these can handle the variances in relative distance, but I was hoping SLAM or some other algorithm could be of use here.</p>
Using SLAM to create 2D topography
<p>I'll try to go through your question as best as I can:</p> <blockquote> <p>I was wondering, whether one could calculate the rotation of the sensor relative to the vehicle by application of the accelerometer and gyrometer?</p> </blockquote> <p>Without any more information than an accelerometer and a gyro, no, you cannot, because gyros measure angular velocity and accelerometers measure acceleration, which means that you can only use those to find a position by using a <a href="https://en.wikipedia.org/wiki/Constant_of_integration" rel="nofollow noreferrer">constant of integration</a>. </p> <p>The constant of integration here would be an initial velocity for the first integration of accelerometer output, and an initial position <strong>and heading</strong> for the second integration of accelerometer output and gyro output, respectively. </p> <p>The gyro and accelerometer will only ever give you <em>relative</em> positions and headings. You would need to find an initial heading to align those outputs to a (literal) global coordinate system, at which point you would be using whatever device you used for initial heading to compare with the magnetometer. </p> <blockquote> <p>Theoretically, the accelerometer yields a vector down and can be used for calculation of the coordinate system. </p> </blockquote> <p>I would be careful regarding this - see <a href="https://robotics.stackexchange.com/q/1858/9720">Ben's question/comment</a> regarding his experience that 3-axis accelerometers tend to have a left-handed coordinate system. That is, if you haven't read the datasheet on the accelerometer, you could mis-interpret the orientation of the sensor because it's not using the coordinate system you think it is. </p> <blockquote> <p>The discrepancy between magnetometer and gyrometer then, may be used for calculation of the correct orientation of the compass. Later the compass should be used for yaw calculation. </p> </blockquote> <p>I don't know if you're using magnetometer and compass interchangeably here or if you are actually referring to two different devices, but as stated above, a gyro will only ever output a relative heading, so you can't use it to determine any absolute heading. This means that you can't use it to set/test/check the alignment of an absolute output device such as a magnetometer. </p> <blockquote> <p>Below is the starting orientation of both sensors (just an example, the orientation of the compass can be anything). Does someone know a good way to figure out the rotation of the compass?</p> </blockquote> <p>I don't know how specific you are trying to get with your measurement, but in general, I would just command the vehicle to point North. If you have a rotationally symmetric vehicle like a quadcopter then you'll have to determine which arm/end of the vehicle is "forward" (should correspond to a "move forward" command) and mark that end. Once the vehicle stops actively maneuvering to reach your heading command you can use a manual/secondary compass to determine true North and mark/compare the difference between that position and "forward" on your vehicle. </p> <p>If you need a more precise measurement or if you can't command an absolute heading, then your only other choice is to try to get access to the electronics. I would look for the magnetometer chip itself, find the <a href="https://electronics.stackexchange.com/questions/48734/how-do-i-identify-pin-1-on-a-chip-with-no-corner-mark">orientation of the chip</a>, then read the datasheet and determine how it senses North.
The chip will be soldered to a board which, in turn, will be attached to the vehicle. You can then relate compass orientation to the craft. </p>
8150
2015-09-30T09:44:49.727
|magnetometer|orientation|
<p>Many drones already have external magnetometers on board. Unfortunately, the orientation of such sensors is sometimes unknown. E.g. the sensor can be tilted 180° (pitch/roll) or X° in yaw. I was wondering, whether one could calculate the rotation of the sensor relative to the vehicle by application of the accelerometer and gyrometer? Theoretically, the accelerometer yields a vector down and can be used for calculation of the coordinate system. The discrepancy between magnetometer and gyrometer then, may be used for calculation of the correct orientation of the compass. Later the compass should be used for yaw calculation. Below is the starting orientation of both sensors (just an example, the orientation of the compass can be anything). Does someone know a good way to figure out the rotation of the compass? </p> <p><a href="https://i.stack.imgur.com/JbG8H.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JbG8H.png" alt="enter image description here"></a></p>
Sensor orientation of an external magnetometer
<p>It seems to belong to this software: <a href="https://github.com/CPFL/Autoware/blob/master/ros/src/computing/perception/detection/lib/image/dpm_ttic/include/dpm_ttic/dpm_ttic.hpp" rel="nofollow">https://github.com/CPFL/Autoware/blob/master/ros/src/computing/perception/detection/lib/image/dpm_ttic/include/dpm_ttic/dpm_ttic.hpp</a></p> <p>So - I assume that is just a case where someone is using that software, and it is showing how to set some default parameters.</p> <p>I would suggest you ask the developers of that software directly though, if you want to use it.</p>
8153
2015-09-30T12:36:53.917
|c++|
<pre><code>static void set_default_param(DPMTTICParam&amp; param) { param.overlap = 0.4; param.threshold = -0.5; param.lambda = 10; param.num_cells = 8; } </code></pre>
Can someone explain this code to me?
<p>Nowadays we no longer employ exact solutions for the IK problem, simply because the number of degrees of freedom, as well as the number of constraints the final configuration needs to comply with, makes the so-called geometric approach intractable. </p> <p>By contrast, iterative methods are used to converge to the most plausible solution. Therefore, the <strong>reachability is tested at the end of this iterative process</strong> by comparing - within a given threshold - the target with the pose reached by the end-effector.</p> <p>Of course, one can create beforehand a map of the workspace containing samples of reachable and/or dexterous locations for the manipulator and use it as a sort of rough prediction.</p> <p>The <strong>Jacobian embeds a local description</strong> of the robot, so that it cannot account for any global hint whether the goal can or cannot be attained.</p>
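<p>To illustrate the iterative idea, here is a rough Python sketch for a planar 2-DoF arm (my own toy example, not a general-purpose solver): reachability is simply judged by whether the residual error has dropped below a threshold when the iterations stop.</p> <pre><code>import numpy as np

def fk(q, l1=1.0, l2=1.0):
    """End-effector position of a planar 2-link arm."""
    return np.array([l1*np.cos(q[0]) + l2*np.cos(q[0] + q[1]),
                     l1*np.sin(q[0]) + l2*np.sin(q[0] + q[1])])

def jacobian(q, l1=1.0, l2=1.0):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1*s1 - l2*s12, -l2*s12],
                     [ l1*c1 + l2*c12,  l2*c12]])

def reachable(target, q0=(0.2, 0.3), tol=1e-4, iters=500, damping=0.1):
    """Damped least-squares IK; 'reachable' means the residual converges below tol."""
    q = np.array(q0, dtype=float)
    for _ in range(iters):
        err = target - fk(q)
        if np.linalg.norm(err) &lt; tol:
            return True
        J = jacobian(q)
        # Damped pseudo-inverse step
        q += J.T @ np.linalg.solve(J @ J.T + damping*np.eye(2), err)
    return np.linalg.norm(target - fk(q)) &lt; tol

print(reachable(np.array([1.2, 0.7])))   # inside the workspace
print(reachable(np.array([2.5, 0.0])))   # beyond full reach (l1 + l2 = 2)
</code></pre>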
8171
2015-10-02T14:20:47.623
|mobile-robot|robotic-arm|kinematics|inverse-kinematics|jacobian|
<p>Given a set of robot joint angles (i.e. 7DoF) $\textbf{q} = [q_1, ... , q_n]$ one can calculate the resulting end-effector pose (denoted as $\textbf{x}_\text{EEF}$), using the foward kinematic map. </p> <p>Let's consider the vice-versa problem now, calculating the possible joint configurations $\textbf{q}_i$ for a desired end-effector pose $\textbf{x}_\text{EEF,des}$. The inverse kinematics could potentially yield infinitely many solutions or (and here comes what I am interested in) no solution (meaning that the pose is not reachable for the robot).</p> <p>Is there a mathematical method to distinguish whether a pose in Cartesian space is reachable? (Maybe the rank of the Jacobian) Furthermore can we still find a reachability test in case we do have certain joint angle limitations? </p>
Robot arm reachability of a pose in Cartesian space
<p>There is no angle that <em>should</em> be removed. At a gimbal lock, you are able to express an orientation in an infinite amount of ways if you use a three-angle representation. At a gimbal lock, axes of two angular rotations are aligned. As such, their rotations are no unique solution to a given orientation. It depends on the representation which axes are considered here.</p> <p>I'll try to find time later to add a can-in-series example to further clarify this.</p> <p>I suggest you never ever look back at three-angle representations, such as pitch-yaw-roll, Euler angles or Tait-Bryan angles. The theory is flawed as it does not make any sense mathematically. It's like trying to flatten a piece of paper over a globe.</p>
8191
2015-10-06T16:33:02.827
|motion-planning|
<p>How do I determine which angle I can negate when gimbal lock occurs? </p> <p>As I understand it, gimbal lock removes one degree of freedom, but how do I determine which degree can be removed when the value R[1][3] of a rotation matrix (size 3x3) has the value 1? Is it the roll, pitch or yaw which can be taken out of the equation?</p>
rotation matrix to euler angles with gimbal lock
<p>It sounds like you are looking for an artificial muscle. I see you've already looked at one type of</p> <ul> <li><a href="http://en.wikipedia.org/wiki/shape-memory_alloy" rel="nofollow">shape-memory alloy</a>; also called memory metal, memory alloy, muscle wire, Nitinol</li> </ul> <p>Have you looked at</p> <ul> <li><a href="http://writerofminds.blogspot.com/2014/05/homemade-artificial-muscles-iv.html" rel="nofollow">twisted fishing line artificial muscles</a>?</li> <li>The other types mentioned in the Wikipedia: <a href="http://en.wikipedia.org/wiki/artificial_muscle" rel="nofollow">"artificial muscle"</a> article?</li> </ul> <p>It seems more natural to me to use normal rotary motors to rotate the screw-on lid of a typical jar, so I am curious to see how your system works. Good luck!</p>
8199
2015-10-07T06:57:29.360
|mechanism|
<p>Background: Introductory robotics competition for college freshmen; the bot has to open 8 jars (with two balls in each of them) in ten minutes and load the balls into a shooting mechanism.</p> <p>So, we were doing this project and we hit upon a challenge: the jar does not open the way we originally intended. So we decided to get a rack-and-pinion mechanism and use it for unscrewing the lid. However, it is too large and we are unable to fit the bot within the required dimensions.</p> <p><strong>The actual question:</strong> Are there any wires or rigid columns/things which can contract ~1 cm when electricity is passed through them? And what would their price range be? Our budget is also on the list of constraints for the bot.</p> <p>Edit: We can include a wire of length &lt;1m or a column of length &lt;30 cm. Also, the wire only needs to contract by more than 7 mm.</p>
Wires or columns which contract on passing electricity
<p>Here is my solution; it is just an approximation, but it is working quite well.</p> <p>Essentially this is what I do:</p> <ul> <li>I store the a priori covariance matrix in $P_{old}$.</li> <li>I execute the normal (uncorrected) update step, and store the covariance matrix in $P_{new}$.</li> <li>Given the direction of the landmark $d$, I compute the parallel component (to this direction $d$) of the axes of the ellipse represented by $P_{old}$. Then, I select the component with maximum length and store it in $a_{par}$.</li> <li>I compute the perpendicular component (to the same direction $d$) of the axes of the ellipse represented by $P_{new}$. Then, I select the component with maximum length and store it in $a_{perp}$.</li> <li>Finally I generate the corrected ellipse from the 2 axes $a_{par}$ and $a_{perp}$ and store the covariance matrix in $P$. </li> </ul> <p>This is represented in the following example figure (direction of the landmark $d$ = 0°), where the red ellipse is represented by $P_{old}$, the green ellipse by $P_{new}$, and the blue ellipse is the corrected one represented by $P$, where only the component perpendicular to $d$ has been shrunk.</p> <p><a href="https://i.stack.imgur.com/AqaT7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AqaT7.png" alt="enter image description here"></a></p>
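<p>For reference, below is a rough NumPy sketch of the same idea (simplified: it rebuilds the ellipse from the variances along and perpendicular to $d$ rather than from the full ellipse axes, so it is only an approximation of the procedure above, and all names are my own):</p> <pre><code>import numpy as np

def corrected_covariance(P_old, P_new, landmark_angle):
    """Keep the a-priori spread along the landmark direction,
    take the updated (shrunk) spread perpendicular to it."""
    d = np.array([np.cos(landmark_angle), np.sin(landmark_angle)])  # parallel to the landmark
    n = np.array([-d[1], d[0]])                                     # perpendicular to the landmark

    var_par  = d @ P_old @ d   # a-priori variance along the landmark
    var_perp = n @ P_new @ n   # corrected variance perpendicular to the landmark

    # Rebuild a covariance with those two principal directions
    return var_par * np.outer(d, d) + var_perp * np.outer(n, n)

# Landmark at 0 rad: only the spread perpendicular to it (the y-direction) is shrunk
P_old = np.diag([0.5, 0.5])
P_new = np.diag([0.1, 0.1])   # after the normal, uncorrected EKF update
print(corrected_covariance(P_old, P_new, 0.0))   # ~[[0.5, 0], [0, 0.1]]
</code></pre>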
8202
2015-10-07T14:22:27.490
|mobile-robot|slam|kalman-filter|ekf|
<p>I have implemented an EKF on a mobile robot (x,y,theta coordinates), but now I've a problem. When I detect a landmark, I would like to correct my estimate only on a defined direction. As an example, if my robot is travelling on the plane, and meets a landmark with orientation 0 degrees, I want to correct the position estimate only on a direction perpendicular to the landmark itself (i.e. 90 degrees).</p> <p>This is how I'm doing it for the position estimate:</p> <ul> <li>I update the x_posterior as in the normal case, and store it in x_temp.</li> <li>I calculate the error x_temp - x_prior.</li> <li>I project this error vector on the direction perpendicular to the landmark.</li> <li>I add this projected quantity to x_prior.</li> </ul> <p>This is working quite well, but how can I do the same for the covariance matrix? Basically, I want to shrink the covariance only on the direction perpendicular to the landmark.</p> <p>Thank you for your help.</p>
EKF-SLAM: Shrink covariance matrix on one direction
<p>In velocity kinematics, you can establish a relationship between the velocity of the end-effector and the joint velocities, </p> <p>$$ \begin{align} x_{2}(t) &amp;= a_{1} \cos\theta_{1}(t) + a_{2} \cos(\theta_{1}(t)+\theta_{2}(t)) \\ y_{2}(t) &amp;= a_{1} \sin\theta_{1}(t) + a_{2} \sin(\theta_{1}(t)+\theta_{2}(t)) \end{align} $$ </p> <p>where $a_{1}$ and $a_{2}$ are the lengths of the links. Now the aim is to acquire $\dot{x}_{2}(t)$ and $\dot{y}_{2}(t)$. This is Matlab code for obtaining the derivatives if you are too lazy to do them by hand. </p> <pre><code>clear
clc

syms a1 a2 x y th1 th2 t

th1(t) = symfun(sym('th1(t)'), t);
th2(t) = symfun(sym('th2(t)'), t);

x = a1*cos(th1(t)) + a2*cos(th1(t) + th2(t));
y = a1*sin(th1(t)) + a2*sin(th1(t) + th2(t));

dxdt = diff(x,t)
dydt = diff(y,t)
</code></pre> <p>$$ \begin{align} \dot{x}_{2}(t) &amp;= -a_{1} \dot{\theta}_{1}\sin\theta_{1} - a_{2}(\dot{\theta}_{1} + \dot{\theta}_{2}) \sin(\theta_{1}+\theta_{2}) \\ \dot{y}_{2}(t) &amp;= \ \ \ a_{1} \dot{\theta}_{1}\cos\theta_{1} + a_{2}(\dot{\theta}_{1} + \dot{\theta}_{2}) \cos(\theta_{1}+\theta_{2}) \end{align} $$ </p> <p>Now you substitute the given parameters in the aforementioned equations. </p>
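<p>If you just want to plug in the numbers, here is a quick NumPy evaluation of the two expressions (the link lengths are not given in the question, so the values of $a_1$ and $a_2$ below are placeholders):</p> <pre><code>import numpy as np

a1, a2 = 1.0, 1.0                 # placeholder link lengths
th1, th2 = np.pi/6, np.pi/2       # joint angles from the question
dth1, dth2 = 1.0, 1.0             # constant joint velocities

xdot = -a1*dth1*np.sin(th1) - a2*(dth1 + dth2)*np.sin(th1 + th2)
ydot =  a1*dth1*np.cos(th1) + a2*(dth1 + dth2)*np.cos(th1 + th2)
print(xdot, ydot)
</code></pre>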
8204
2015-10-08T03:08:51.853
|robotic-arm|
<p>The joint velocities are constant and equal to $\dot{\theta}_{2}$ = 1 and $\dot{\theta}_{1}$ = 1. How do I compute the velocity of the end-effector when $\theta_{2} =\pi/2$ and $\theta_{1} = \pi/6$?</p> <p><a href="https://i.stack.imgur.com/d9IWO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/d9IWO.png" alt="enter image description here"></a></p>
velocity of the end-effector
<p>Reusing parts is a very important and common concept.</p> <p>Using the same DC motor 4 times means you have to deal with only one motor type. If you choose a 2nd type of motor for the roll, you have to fiddle with the characteristics of two motors. Using a motor with more torque usually means that you need a different drive circuit. From here on the problems proliferate. </p> <ul> <li>What if the supplier of the bigger motor stops selling the motor?</li> <li>What if the driver circuit of the bigger motor does not have the precision of the smaller ones?</li> <li>What's the price to buy 1 + 2 motors vs. buying 4 of the same kind?</li> </ul> <p>If you are a company building industrial robots you buy DC motors in bulk. You use a few standard motors that your engineers are familiar with. The driver circuits are proven and tested.</p> <p>If you face a higher torque requirement, you simply add another familiar motor. It's not economical to pay somebody to find a better-suited motor, which has several possible drawbacks as pointed out above.</p> <p>As a company your goal is to sell robots, not to overengineer them.</p>
8209
2015-10-08T15:35:47.143
|industrial-robot|
<p>I saw an old industrial robot (year 1988) whose end effector has 2 DC motors for the roll drive. After the roll drive, the yaw and pitch drives are connected, and each of them has its own DC motor.</p> <p>But the roll drive has two DC motors. Why are they used like this? Why not a single motor with higher torque?</p> <p>All the roll, pitch and yaw motors are the same spec. In total there are 4 DC motors.</p> <p>The two DC motors are connected to a single shaft using gears in the roll drive.</p>
Two DC motors and single output?
<p>Basically it does not matter. <br> But you have to be careful if the plate is rotating fast, because the rotation of the plate around its center point, with the IMU placed out of center, will cause the accelerometer to measure centrifugal forces.</p> <p>If your task is to stabilize the platform, this won't be an issue for you.</p>
8220
2015-10-10T06:31:02.580
|control|sensors|imu|sensor-fusion|
<p>I recently bought an IMU. I am new at this. </p> <p>My question: does the positioning of the IMU matter? Is there any difference between placing it at the center of the plate and having it offset from the center?</p> <p>I am still learning about this topic. So any help would be greatly appreciated.</p> <p>Thanks.</p>
Balancing a plate with an IMU offset from the center
<p>Assuming that you mean to say your frame of reference is rotating with those roll-pitch-yaw rates and you are tracking a stationary object with position defined by $R$, $\theta$, and $\phi$ in the rotating frame, then you need to do this:</p> <ol> <li><p>Compute the new roll, pitch, and yaw angles given the previous values and the measured rates.</p></li> <li><p>Compute the rotation matrix, $A$, for the new reference frame.</p></li> <li><p>Compute the previous position vector, $p$, for the object, where $p = (R \cos \theta \cos \phi, R \sin \theta \cos \phi, R \sin \phi)$</p></li> <li><p>Compute the new predicted position vector by rotating the previous one into the new frame (i.e., $p_{new} = A^{-1} p$)</p></li> <li><p>Compute the new predicted $R$, $\theta$, and $\phi$ from the elements of the new predicted position vector ($R_{new} = \sqrt{p_{new,1}^2 + p_{new,2}^2 + p_{new,3}^2}$, etc.)</p></li> </ol> <p>Note that I am not paying attention to your axes and sign conventions, so you will have to take that into account for your own solution.</p>
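<p>A minimal NumPy sketch of those five steps (with an arbitrarily chosen Z-Y-X rotation convention; as noted above, you will have to adapt the axes and signs to your own camera convention):</p> <pre><code>import numpy as np

def rot_matrix(roll, pitch, yaw):
    """Z-Y-X rotation matrix; swap this for whatever convention your sensor uses."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def predict(R, theta, phi, droll, dpitch, dyaw, dt):
    # Steps 1-2: incremental rotation of the sensor frame over dt
    A = rot_matrix(droll*dt, dpitch*dt, dyaw*dt)
    # Step 3: spherical to Cartesian in the old frame
    p = np.array([R*np.cos(theta)*np.cos(phi),
                  R*np.sin(theta)*np.cos(phi),
                  R*np.sin(phi)])
    # Step 4: express the (stationary) point in the new, rotated frame
    p_new = A.T @ p                       # A is orthonormal, so its inverse is its transpose
    # Step 5: back to range / azimuth / elevation
    R_new = np.linalg.norm(p_new)
    theta_new = np.arctan2(p_new[1], p_new[0])
    phi_new = np.arcsin(p_new[2] / R_new)
    return R_new, theta_new, phi_new

print(predict(10.0, 0.2, 0.1, droll=0.0, dpitch=0.0, dyaw=0.05, dt=1.0))
</code></pre>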
8222
2015-10-10T16:58:31.950
|kalman-filter|
<p>I have a sensor that gives R, Theta, Phi (Range, Azimuth and Elevation), as shown here: <a href="https://i.stack.imgur.com/eVci6.jpg" rel="nofollow noreferrer">https://i.stack.imgur.com/eVci6.jpg</a>. I need to predict the next state of the object given the roll, pitch and yaw angular velocities and the above information. But the math is really confusing me. So far all I've gotten is this:</p> <pre><code>Xvel = (R * AngularYVel * cos(Theta))
YVel = (R * AngularXVel * cos(Phi))
ZVel = (R * AngularYVel * -sin(Theta)) + (R * AngularXVel * -sin(Phi))
</code></pre> <p>I worked this out by trigonometry. So far this seems to predict the pitching about the x axis and yawing about my y axis (sorry, I have to use camera axes), but I don't know how to involve the roll (AngularZVel).</p>
3D Angular velocity to 3D velocity to predict next state
<p>Maybe this would be useful to you:</p> <pre><code>#include &lt;ros/ros.h&gt;
#include &lt;tf/transform_datatypes.h&gt;
#include &lt;ar_track_alvar_msgs/AlvarMarkers.h&gt;

void cb(ar_track_alvar_msgs::AlvarMarkers req)
{
  if (!req.markers.empty())
  {
    tf::Quaternion q(req.markers[0].pose.pose.orientation.x,
                     req.markers[0].pose.pose.orientation.y,
                     req.markers[0].pose.pose.orientation.z,
                     req.markers[0].pose.pose.orientation.w);
    tf::Matrix3x3 m(q);
    double roll, pitch, yaw;
    m.getRPY(roll, pitch, yaw);
    ROS_INFO("roll, pitch, yaw=%1.2f %1.2f %1.2f", roll, pitch, yaw);
    // roll  --&gt; rotate around vertical axis
    // pitch --&gt; rotate around horizontal axis
    // yaw   --&gt; rotate around depth axis
  } // if
}

int main(int argc, char **argv)
{
  ros::init(argc, argv, "arlistener");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("ar_pose_marker", 1, cb);
  ros::spin();
  return 0;
}
</code></pre> <p>Thanks to <a href="http://ros-robotics.blogspot.de/" rel="nofollow">http://ros-robotics.blogspot.de/</a></p>
8223
2015-10-10T19:36:36.353
|ros|pose|
<p>I am using the ar_track_alvar package in Indigo to detect AR Tags and determine their respective poses. I am able to run the tracker successfully as I can visualize the markers in RViz. I give the following command to print the pose values</p> <blockquote> <p>rostopic echo /ar_pose_marker</p> </blockquote> <p>and I get the following output indicating that the poses are determined.</p> <blockquote> <pre><code>header: seq: 0 stamp: secs: 1444430928 nsecs: 28760322 frame_id: /head_camera id: 3 confidence: 0 pose: header: seq: 0 stamp: secs: 0 nsecs: 0 frame_id: '' pose: position: x: 0.196624979223 y: -0.238047436646 z: 1.16247606451 orientation: x: 0.970435431848 y: 0.00196992162831 z: -0.126455066154 w: -0.205573121457 </code></pre> </blockquote> <p>Now I want to use these poses in another ROS node and hence I need to subscribe to the appropriate ROS message('ar_pose_marker"). But I am unable to get enough information on the web on the header files and functions to use in order to extract data from the published message. It would be great if somebody can point to a reference implementation or documentation on handling these messages. It might be useful to note that ar_track_alvar is just a ROS wrapper and hence people who have used ALVAR outside of ROSmay also give their inputs.</p> <p><strong>UPDATE:</strong></p> <p>I tried to write code for the above task as suggested by @Ben in the comments but I get an error. The code is as follows</p> <pre><code>#include &lt;ros/ros.h&gt; #include &lt;ar_track_alvar_msgs/AlvarMarker.h&gt; #include &lt;tf/tf.h&gt; #include &lt;tf/transform_datatypes.h&gt; void printPose(const ar_track_alvar_msgs::AlvarMarker::ConstPtr&amp; msg) { tf::Pose marker_pose_in_camera_; marker_pose_in_camera_.setOrigin(tf::Vector3(msg.pose.pose.position.x, msg.pose.pose.position.y, msg.pose.pose.position.z)); } int main(int argc, char **argv) { ros::init(argc, argv, "pose_subscriber"); ros::NodeHandle nh; ros::Subscriber pose_sub = nh.subscribe("ar_pose_marker", 1000, printPose); ros::spin(); return 0; } </code></pre> <p>And I get the following error</p> <pre><code>/home/karthik/ws_ros/src/auto_land/src/pose_subscriber.cpp: In function β€˜void printPose(const ConstPtr&amp;)’: /home/karthik/ws_ros/src/auto_land/src/pose_subscriber.cpp:17:53: error: β€˜const ConstPtr’ has no member named β€˜pose’ marker_pose_in_camera_.setOrigin(tf::Vector3(msg.pose.pose)); ^ make[2]: *** [auto_land/CMakeFiles/pose_subscriber.dir/src/pose_subscriber.cpp.o] Error 1 make[1]: *** [auto_land/CMakeFiles/pose_subscriber.dir/all] Error 2 make: *** [all] Error 2 </code></pre> <p>Any suggestions?</p>
Determining pose from ar_track_alvar message in ROS
<p>Check out the video I linked in my answer to a <a href="https://engineering.stackexchange.com/a/4210/1633">similar question at the Engineering SE</a>. In that answer I said that the problem with trying to use differential steering in a wheeled vehicle is that you have to accept a tremendous amount of slip in order to perform turns. </p> <p>In a wheeled vehicle on a hard surface, like pavement, this means burning tires to make common turns. Tanks don't have this issue because they aren't designed to and typically don't operate on roads, they operate on soil. In that scenario the ground gets torn up instead of the "wheels"(tracks). However, with a tank, there isn't <em>as much</em> damage to the ground because the tracks distribute the weight of the vehicle over a much larger surface area, reducing the ground pressure it exerts. </p> <p>The problem you'll have with a differentially steered wheeled vehicle operating in sand is that you get small points of contact between the tire and terrain due to the circular nature of the tire (point of tangent plus some tire/terrain deformation), and that small point of contact boosts ground pressure. Then, on top of that, you're going to spin tires - you're asking for the vehicle to dig itself in. </p> <p>Your vehicle is light, so if you manage to get some large/oversized tires and run them under-inflated you might be able to get by, but this for me is one of the great joys of engineering: the best, fastest way to get a satisfying answer is to try it. Try what you want to do in the environment you expect to operate. If building a prototype is too costly/time consuming I would choose a different method of locomotion for fear of digging into the sand, but if your actual terrain is "sandy" instead of "sand", as in by a river, or if the sand is wet or compacted, you might be able to get by.</p> <p>Finally, that said, if you really want to go this route and weight is an issue, I would use <a href="http://www.robotmarketplace.com/products/mechanical_main.html" rel="nofollow noreferrer">belts or chains</a> to attach the wheels to the motors. Using a gearbox would be expensive weight-wise and would also probably limit the wheel base you could use. </p>
8226
2015-10-11T05:29:52.497
|mobile-robot|motor|wheeled-robot|
<p>I'm working on a project that requires me to build a small vehicle (footprint of ~ 14 x 14 inches, less than 6.5 pounds) that can traverse sand. For the steering system, I was thinking of replicating the way tanks and lawn mowers navigate (ability to do zero-point turns), but I want to do this with four wheels instead of tracks like a tank.</p> <p>I need help with implementing this idea. My preliminary thoughts are to have two motors, where each motor powers the wheels on one side of the vehicle (I think this would require a gearing system), or to have a motor to power each individual wheel, which I'd rather avoid.</p>
How would I replicate a tank/zero-turn steering system in a small robotic vehicle?
<p>A project like this has already been thought of and is working.</p> <p>It was proposed by Vijay Kumar, one of the gurus of quadrotor helicopters (the name he considers most appropriate for these objects).</p> <p>I hope you will find the following link interesting:</p> <p><a href="http://spectrum.ieee.org/automaton/robotics/aerial-robots/a-smartphone-is-the-brain-for-this-autonomous-quadcopter" rel="nofollow">Vijay Kumar: a smartphone as brain for a quadrotor</a></p> <p><a href="https://www.youtube.com/watch?v=mtTiPnNUKj8" rel="nofollow">This is the video</a></p>
8228
2015-10-11T08:00:43.973
|arduino|quadcopter|
<p>I want to replace the flight module with a smartphone because it has all the sensors that are required, like gyroscope, magnetometer, etc. Is that possible?</p> <p>I am using a Google Nexus 4 Android phone (OS version 5.1). I will control it using another mobile; I am able to write an app, with an Arduino acting as a bridge between the smartphone and the copter. I am using the <a href="https://www.openpilot.org/product/coptercontrol/" rel="nofollow">OpenPilot CC3D CopterControl</a> flight controller.</p>
Can a smartphone replace a quadcopter's flight module?
<p>I have worked on a cartesian robot with similar requirements<sup>&dagger;</sup> to your own, and we selected <a href="https://en.wikipedia.org/wiki/Linear_motor#Synchronous_motor" rel="nofollow">direct drive synchronous linear motors</a> for our x/y stages. In our case, both axes were around 2m in length, but the magnet tracks have the potential to be as long as you need to build them.</p> <p><sup>&dagger; Less than an order of magnitude higher at $2/5ms^{-1}$ velocity &amp; $10/20ms^{-2}$ acceleration for x &amp; y stages respectively.</sup></p> <p>In our case, we used an <a href="http://www.aerotech.co.uk/product-catalog/motors/linear-motors/blmfs5-series.aspx?p=%2Fproduct-catalog%2Fmotors.aspx%3Fgclid%3DCjwKEAjw1_KwBRDEz_WvncL4jGwSJAAEym0dQY8gHk1TZOwd_Kj5EWvbQJFpST9NGdjYcHJDtRByYRoCOo3w_wcB" rel="nofollow">exposed magnet track linear motor from aerotech</a>, but for most applications a u-shaped track is more convenient as it contains the high magnetic fields within a more enclosed space. Our cartesian robot had large exposed magnet track surfaces, which meant that we had to post warnings about pacemakers and about letting metal tools get too close (a screwdriver could be attracted to the magnet track with sufficient force to damage both and anything which got in between, such as your fingers).</p> <p>There are a number of manufacturers making direct drive linear motors (as a <a href="https://www.google.co.uk/search?q=direct%20drive%20linear%20motors" rel="nofollow">google search</a> will show), and in addition to flat and U-shaped tracks, manufacturers such as <a href="http://www.micromech.co.uk/dir_products/nippon/motors_shaft.html" rel="nofollow">Nippon Pulse</a> and <a href="http://www.linmot.com/products/linear-motors/" rel="nofollow">LinMot</a> do cylindrical shaft motors, which might be easier to retrofit to a robot which currently uses a lead-screw linear actuator.</p> <p>The accuracy of these direct drive systems can be very much dependent on tuning. They often suffer from significant <a href="https://en.wikipedia.org/wiki/Cogging_torque" rel="nofollow">cogging effects</a>, though some manufacturers claim to include <a href="http://www.kollmorgen.com/en-us/products/motors/direct-drive/direct-drive-linear/" rel="nofollow">patented zero and anti-cogging techniques</a>, and often the maximum and minimum speeds can be related to how smooth you want the movement to be at those speeds.</p> <p><sup>As an aside, we also used <a href="https://en.wikipedia.org/wiki/Air_bearing" rel="nofollow">air-bearings</a> on our X track, which is how we managed high speed and acceleration with a relatively small motor. They were really difficult to tune however, and this was complicated by the fact that we had 6 X-carriages on each Y beam (of which there were two, so 14 x/y axes and 16 motors in total).</sup></p>
8241
2015-10-13T06:38:47.927
|actuator|industrial-robot|cnc|
<p>I would like to make a Cartesian robot with maximum speed of up to $1ms^{-1}$ in x/y plane, acceleration $2ms^{-2}$ and accuracy at least 0.1mm. Expected loads: 3kg on Y axis, 4kg on X axis. Expected reliability: 5000 work hours. From what I have seen in 3D printers, belt drive seems not precise enough (too much backlash), while screw drive is rather too slow. </p> <p>What other types of linear actuators are available? What is used in commercial grade robots, i.e. <a href="http://www.janomeie.com/products/desktop_robot/jr-v2000_series/index.html" rel="nofollow">http://www.janomeie.com/products/desktop_robot/jr-v2000_series/index.html</a></p>
Linear actuators in a cartesian robot
<p>Please see here:</p> <p><a href="http://www.robotsinsearch.com/products/blog/how-interface-ust-series-hokuyo-laser-scanners" rel="nofollow">How to interface with the UST series Hokuyo Laser Scanners</a></p>
8244
2015-10-13T16:49:29.270
|sensors|ros|rangefinder|linux|
<p>This is a very basic beginner question, I know, but I am having trouble connecting to the Hokuyo UST-10LX sensor and haven't really found much in terms of helpful documentation online.</p> <p>I tried connecting the Hokuyo UST-10LX directly to the ethernet port of a Lubuntu 15.04 machine. The default settings of the Hokuyo UST-10LX are apparently: ip addr: 192.168.0.10 netmask: 255.255.255.0 gateway: 192.168.0.1</p> <p>So, I tried going to the network manager and setting IPv4 settings manually, to have the ip addr be 192.168.0.9, netmask of 255.255.255.0, and gateway to 192.168.0.1. I also have a route set up to the settings of the scanner.</p> <p>I then go into the terminal and run:</p> <pre><code>rosrun urg_node urg_node _ip_address:=192.168.0.10 </code></pre> <p>and get this output:</p> <pre><code>[ERROR] [1444754011.353035050]: [setParam] Failed to contact master at [localhost:11311]. Retrying... </code></pre> <p>How might I fix this? I figure it's just a simple misunderstanding on my end, but through all my searching I couldn't find anything to get me up and running :(</p> <p>Thank you for the help! :)</p> <p>EDIT:</p> <p>HighVoltage pointed out to me that I wasn't running <code>roscore</code> which was indeed the case. I was actually running into problems before that when I still had <code>roscore</code> up, and when I tried it again, this was the output of the <code>rosrun</code> command:</p> <pre><code>[ERROR] [1444828808.364581810]: Error connecting to Hokuyo: Could not open network Hokuyo: 192.168.0.10:10940 could not open ethernet port. </code></pre> <p>Thanks again!</p>
How to connect ethernet based Hokuyo scanner?
<p>You gave two methods of comparison - "either integrating the IMU or differentiating the optical tracking data". I'm talking about one basic transform to convert from the vision frame to the global frame and vice-versa.</p> <p>One transform, that you calculate in advance, that relates one <em>origin</em> to another <em>origin</em>. You seem to be talking about methods of best-fit to get good correspondence, but I'm saying to use one fixed, somewhat generic transform. </p> <p>All you need to do is shift from one frame to another. This will preserve drift.</p> <p>:EDIT: </p> <p>I may have misunderstood the question, or your particular scenario. I thought that you could just go out and measure the location and orientation of the camera, measure the location and orientation of the robot, and make your own transform. </p> <p>In the event that you can't or don't want to (too time consuming, lots of trials, etc.), then you could "cheat" by exploiting the fact that drift is a slow event, where a very small error is added over a very large number of samples. </p> <p>What you might be able to try is to use the <a href="http://www.mathworks.com/matlabcentral/fileexchange/26186-absolute-orientation-horn-s-method" rel="nofollow">method you already found</a> and apply that to only the first few samples of your data set. </p> <p>If your robot is in motion from the start of the set, then the first sample should be enough to fix a position and the second sample should fix a heading/rotation. Use the resulting transform you get from only these two samples and apply that transform to all the others. Noise may be an issue, but that seems to be the point of what you're trying to do. </p> <p>If you think it's too noisy or there's too much error, then you could try the algorithm with the first 3 samples, or first 4, or 5, etc. As you've noted, the more samples you include the more this will account for drift, but if you can't physically measure the transform you don't really have any other option. </p>
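<p>To make that concrete, here is a minimal Python/numpy sketch (my own, purely illustrative; it assumes 2D position tracks stored as Nx2 arrays and that the robot actually moves between the first two samples). It fixes the translation from the first sample and the heading from the second, then applies that one fixed transform to the whole IMU track, so any drift is preserved rather than averaged away:</p> <pre><code>import numpy as np

def align_first_two(optical, imu):
    """Rigid 2D transform fixed from the first two samples only.

    optical, imu : (N, 2) arrays of positions in their own frames.
    Returns the IMU trajectory expressed in the optical frame.
    """
    # Heading of each trajectory from sample 0 to sample 1
    ho = np.arctan2(*(optical[1] - optical[0])[::-1])
    hi = np.arctan2(*(imu[1] - imu[0])[::-1])
    dtheta = ho - hi

    c, s = np.cos(dtheta), np.sin(dtheta)
    R = np.array([[c, -s],
                  [s,  c]])

    # Translation chosen so the first samples coincide exactly
    t = optical[0] - R @ imu[0]

    # One fixed transform applied to every IMU sample
    return (R @ imu.T).T + t
</code></pre> <p>After this alignment, the remaining disagreement between the two trajectories over time is (mostly) the drift you want to characterise, plus noise.</p>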
8245
2015-10-13T20:44:29.350
|sensors|localization|imu|calibration|
<p>I have a dataset that contains position information from tracking a robot in the environment. The position data comes both from a very accurate optical tracking system (Vicon or similar) and an IMU. I need to compare both position data (either integrating the IMU or differentiating the optical tracking data).</p> <p>The main problem is that both systems have different reference frames, so in order to compare I first need to align both reference frames. I have found several solutions; the general problem of aligning two datasets seems to be called "the absolute orientation problem".</p> <p>My concern is that if I use any of these methods I will get the rotation and translation that aligns both datasets minimizing the error over the whole dataset, which means that it will also compensate up to some extent for the IMU's drift. But I am especially interested in getting a feeling of how much the IMU drifts, so that solution does not seem to be applicable.</p> <p>Anyone has any pointer on how to solve the absolute orientation problem when you do not want to correct for the drift?</p> <p>Thanks</p>
Aligning datasets with drift
<p>If your object $O$ has a different orientation from your global frame $S$, and you know what that difference in orientation is, you can create a 4x4 transform matrix between the two:</p> <p>$$ T = \left[ \begin{array}{cc} R &amp; s \\ 0 &amp; 1 \end{array} \right] $$</p> <p>where $R$ is the 3x3 rotation matrix, $s$ is the 3x1 translation vector, $0$ is a 1x3 row of zeros, and $1$ is just 1. You can transform your (points, angular velocities, etc.) from one frame to another with:</p> <p>$$ \left[ \begin{array}{c} x' \\ y' \\ z' \\ 1 \end{array} \right] = \left[ \begin{array}{cc} R &amp; s \\ 0 &amp; 1 \end{array} \right] \left[ \begin{array}{c} x \\ y \\ z \\ 1 \end{array} \right] $$</p> <p>If you don't care about translation you can just set $s = \left[ \begin{array}{c} 0 \\ 0 \\ 0 \end{array} \right]$</p>
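<p>A small numerical sketch of the above (Python/numpy, with made-up illustrative values): a point is transformed with the full 4x4 matrix, while an angular velocity only needs the rotation part.</p> <pre><code>import numpy as np

# Example: frame O rotated 90 deg about Z relative to frame S (illustrative values)
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
s = np.array([1.0, 2.0, 0.0])          # translation of O's origin in S

# Full homogeneous transform
T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = s

p_O = np.array([0.5, 0.0, 0.0, 1.0])   # a point in O, homogeneous coordinates
p_S = T @ p_O                          # the same point expressed in S

w_O = np.array([0.1, 0.0, 0.2])        # angular velocity measured in O
w_S = R @ w_O                          # translation irrelevant, rotation only
</code></pre>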
8247
2015-10-14T08:43:47.833
|mobile-robot|imu|gyroscope|
<p>Let us assume I have an object O with axis $x_{O}$, $y_{O}$, $z_{O}$, with different orientation from the global frame S with $x_{S}$, $y_{S}$, $z_{S}$ (I don't care about the position). Now I know the 3 instantaneous angular velocities of the object O with respect to the same O frame, that is $\omega_O^O = [\omega_{Ox}^O \omega_{Oy}^O \omega_{Oz}^O]$. How can I obtain this angular velocity with respect to the global frame (that is $\omega_O^S$)?</p> <p>Thank you!</p>
Angular velocities and rotation matrices
<p>Your first step works because it is implied that your frames are "aligned", meaning that:</p> <p>$$ \theta_{X_0} = 0 \\ \theta_{Y_0} = 0 \\ \theta_{Z_0} = 0 \\ $$</p> <p>In general (as with any integration!), you have some starting angles(initial condition), $\theta_{(X,Y,Z)_0}$, and you proceed to update from there, such that:</p> <p>$$ \theta_{X_N} = \theta_{X_{N-1}} + \omega_{X_N} dT \\ \theta_{Y_N} = \theta_{Y_{N-1}} + \omega_{Y_N} dT \\ \theta_{Z_N} = \theta_{Z_{N-1}} + \omega_{Z_N} dT \\ $$</p> <p>Note that when your initial conditions are as you use in your example, the $\theta_{N-1}$ terms vanish leaving you with the equations you originally stated. </p>
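<p>As a minimal sketch of that update loop (Python, with illustrative variable names; it simply applies the update equations above sample by sample, starting from the given initial angles):</p> <pre><code># Integrate body-rate gyro samples into angles from known initial angles.
def integrate_gyro(omega_samples, dt, theta0=(0.0, 0.0, 0.0)):
    """omega_samples: iterable of (wx, wy, wz) in rad/s, dt in seconds."""
    theta_x, theta_y, theta_z = theta0
    history = []
    for wx, wy, wz in omega_samples:
        theta_x += wx * dt
        theta_y += wy * dt
        theta_z += wz * dt
        history.append((theta_x, theta_y, theta_z))
    return history
</code></pre>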
8250
2015-10-14T13:16:26.083
|imu|gyroscope|frame|
<p>Let us assume we have a gyro that is perfectly aligned to a global frame ($X,Y,Z$). </p> <p>From what I know the gyro data give me the angular rate with respect to the gyro axis ($x,y,z$). So let's say I got $\omega_x,\omega_y,\omega_z$. Since I know that the 2 frames are perfectly aligned I perform the following operations:</p> <ul> <li>$\theta_X = dt * \omega_x$</li> <li>$\theta_Y = dt * \omega_y$</li> <li>$\theta_Z = dt * \omega_z$ </li> </ul> <p>where $\theta_X$ is the rotation angle around $X$ and so on.</p> <p>My question is: what is this update like in the following steps? Because this time the measurement that I get are no more directly related to the global frame (rotated with respect to the gyro frame).</p> <p>Thank you!</p>
Gyro measurement to absolute angles
<p>All the steps are actually correct there, except that when you press RESET you need to select your robot type. I usually just keep pressing RESET till I get USER, then press START, and only then does it display ramclr or something similar.</p>
8261
2015-10-15T07:33:34.720
|robotic-arm|
<p><em>To avoid wasting your time on this question, you might only want to react to this if you have specific knowledge of industrial robotic arms. Common troubleshooting is unlikely to fix this problem or could take too much time.</em></p> <p>We've started a project with the Mitsubishi Melfa RV-2AJ robotic arm. Everything went fine until the moment we replaced the batteries.</p> <p>The controller displays: "FAiL" and does not respond to any buttons or commands sent through the serial connection.</p> <p>We did replace the batteries of both the robot and the controller. As it took some time to get the batteries delivered, we left the robot (and controller) without power for the weekend. (Which might have caused this problem.)</p> <p>Is there anyone with knowledge of Mitsubishi robotic arms around here? I'm kinda hoping it is a common problem/mistake and anyone with experience on this subject would know about it.</p>
Robotic arm ["FAiL"] error display. - Festo / Mitsubishi Melfa RV-2AJ (Controller CR1-571)
<p>Another option is to build your own drone, for example with a <a href="https://pixhawk.org/modules/pixhawk" rel="nofollow noreferrer">PixHawk Autopilot</a>. Then connect your PixHawk to an embedded computer as the <a href="http://www.hardkernel.com/main/products/prdt_info.php?g_code=G143452239825" rel="nofollow noreferrer">Odroid XU-4</a> or the (way more powerful) <a href="https://developer.nvidia.com/embedded/buy/jetson-tx2" rel="nofollow noreferrer">Nvidia Jetson TX2</a>. Both of them support Linux (on the TX2 you have linux ubuntu for tegra) Then you could choose your own camera (I am actually in the process of selecting a new camera for our drones) and interface everything through <a href="http://www.ros.org/" rel="nofollow noreferrer">ROS</a>.</p> <p>Of course, this is not trivial and it requires some expertise. But I firmly think it is one of the best options out there. Especially using the TX2 because it could open also to you the world of Deep Learning onboard the quadrotor. This is true because the NVIDIA Jetson TX2 has a GPU which is meant for deep learning application. </p> <p>I hope this will help. </p>
8262
2015-10-15T08:32:53.013
|computer-vision|cameras|
<p>I am working on a project where I want to run some computer vision algorithms (e.g. face recognition) on the live video stream coming from a flying drone. </p> <p>There are many commercial drones out there that offer video streams, like</p> <ul> <li><a href="http://www.flyzano.com/mens/" rel="nofollow">http://www.flyzano.com/mens/</a></li> <li><a href="https://www.lily.camera/" rel="nofollow">https://www.lily.camera/</a></li> <li>etc..</li> </ul> <p>But none of them seem to give access to the video feed for real-time processing. </p> <p>Another idea is to have the drone carry a smartphone, and do the processing on the phone through the phone's camera. Or just use a digital camera and an arduino that are attached to the drone. </p> <p>Although these ideas are feasible, I would rather access the video-feed of the drone itself. So my question is that are there any drones out there that offer this feature? or can be hacked somehow to achieve this? </p>
Real-time video processing on video feed from a drone's camera
<p>For any underwater thruster you should use a motor with as low an rpm as you can (which at the same time means a bigger torque), and as big a propeller as possible; you should test how big it can be. For very low rpm (let's say 300-500 rpm) it can have 3-4 blades.</p>
8271
2015-10-16T20:23:18.090
|motor|
<p>I am planning to build a homemade ROV, and I wanted to know a couple of things about the motors. First: will it be OK if I use a brushed DC motor instead of a brushless motor, and are there any major disadvantages? Second: what RPM DC motor should I aim for? High RPM or low RPM? Will 600rpm be enough? The specific motor that I am talking about is <a href="http://www.ebay.ca/itm/37mm-12V-DC-600RPM-Replacement-Torque-Gear-Box-Motor-New-/320984491847?hash=item4abc2aa747:m:mEBEQXXpqmNg4-vxmFaZP5w" rel="nofollow">http://www.ebay.ca/itm/37mm-12V-DC-600RPM-Replacement-Torque-Gear-Box-Motor-New-/320984491847?hash=item4abc2aa747:m:mEBEQXXpqmNg4-vxmFaZP5w</a></p> <p>Will this be a good motor for the propellers of the ROV? I am planning to have 4 motors / propellers: two for upward and downward thrusting, and 2 for forward and side thrusting. The propellers that I plan to use are basic plastic 3-blade propellers, with diameter between 40mm and 50mm.</p> <p>My main question is: what RPM and torque should I aim for when choosing the DC motor?</p>
DC Motors for a ROV?
<p>The torque bandwidth is typically referring to the maximum frequency of motion at which the actuator can provide that torque. So your actuator can provide a peak torque of 100 Nm, as in it can hold up a weight of 100 N held at a torque arm of 1 m. If you want to swing that weight back and forth you could do it at up to 4 Hz, but no faster without damaging or overheating the actuator. However, if you reduce the weight to 2 Nm then you can swing it at 20 Hz.</p> <p>Keep in mind that this is an oversimplification since if you were swinging a weight back and force the torque will not be constant the entire time. Still, it provides you with an idea of how fast the actuator can respond with different levels of applied torque (the torque is a resistance against the actuator, or rather an "impedance").</p> <p>Note the "zero impedance" bandwidth is the same thing except with no payload weight being held at the end of the actuator torque arm -- the weight of the arm itself <em>is</em> the payload.</p>
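<p>As a rough back-of-the-envelope illustration of why the torque and frequency limits trade off (my own reasoning, not from the paper or any datasheet): if the joint swings an inertia $I$ sinusoidally, $\theta(t) = \theta_0 \sin(2\pi f t)$, then ignoring gravity the torque the actuator must supply is $\tau(t) = I\ddot{\theta}(t)$, whose peak magnitude is $\tau_{peak} = I(2\pi f)^2\theta_0$. For a payload of mass $m$ at the end of a torque arm $r$, $I \approx mr^2$, so for a fixed swing amplitude the required torque grows with the square of the frequency. That is the basic reason a large torque is only usable at low frequencies, while a small torque can be modulated much faster.</p>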
8277
2015-10-17T10:19:06.187
|control|design|mechanism|joint|
<p>The rest of my student team and I are in the process of redesigning an exoskeleton and building it based on an existing one. From the papers that we have been reading there are some references to low, high and zero impedance torque bandwidth.</p> <p>What is that? Does it have to do with the control system?</p> <p>It is measured in Hz. Here is a table from one of the papers:</p> <p><a href="https://i.stack.imgur.com/yqK0d.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yqK0d.png" alt="Torque bandwidth"></a></p>
What is torque bandwidth in actuated joints and how does it affect the control systems?
<p><a href="http://www.digikey.com/product-search/en/motors-solenoids-driver-boards-modules/motors-ac-dc/983125" rel="nofollow">Digikey</a> has the ability to sort by physical size (body diameter). I would have thought other sites would offer this as well, but Mouser and Tower Hobbies don't appear to. </p>
8288
2015-10-19T19:12:04.663
|motor|
<p>I have a 1 inch square tube that I would like to place a motor into. <img src="https://i.stack.imgur.com/JvPPH.jpg" alt="small gear motor inside steel square tube"></p> <p>The motor I have takes up approximately 1/2 of the available space (roughly 3/4 inch I.D.). I would like to find the largest motor that will fit in the space without having to cobble too much of a housing. </p> <p>Where/how can I find motors by physical dimensions?</p>
Sourcing motors by physical dimensions
<p>Matlab has a package called <a href="http://www.mathworks.com/products/simscape/" rel="nofollow">Simscape</a> that you can use for modeling physical systems in general. I would just caution you up front that Simscape is almost more like a plugin manager in that it enables other modules and doesn't offer a terrific amount of content on its own. This means you get to buy Simscape, then buy whatever other toolboxes you want that require Simscape. That said, if you're using Solidworks, you/your company is probably no stranger to paying good money for (great) software tools. </p> <p>Once you have Simscape, you can purchase/install a package called <a href="http://www.mathworks.com/products/simmechanics/" rel="nofollow">SimMechanics</a>. This package has the capability to <a href="http://www.mathworks.com/help/physmod/smlink/ref/linking-and-unlinking-simmechanics-link-software-with-solidworks.html" rel="nofollow">interface with Solidworks</a>; see the videos <a href="http://www.mathworks.com/videos/importing-and-merging-cad-models-into-simmechanics-68841.html" rel="nofollow">here</a> and <a href="http://www.mathworks.com/videos/importing-a-cad-suspension-assembly-into-simmechanics-89612.html?type=shadow" rel="nofollow">here</a> (the second video is embedded in <a href="http://www.mathworks.com/products/simmechanics/features.html#importing-cad-models" rel="nofollow">this page</a> in case you have trouble accessing it).</p> <p>Lastly, there's an <a href="http://www.mathworks.com/help/aeroblks/index.html" rel="nofollow">Aerospace Blockset</a> for Simulink that helps you <a href="http://www.mathworks.com/help/aeroblks/wind.html" rel="nofollow">model wind</a>, but I'm not an aero guy and haven't used this set, so I can't comment on how useful it may be. </p>
8292
2015-10-20T08:46:38.927
|control|simulator|uav|matlab|simulation|
<p>Currently I am developing a control system for an aircraft of a unique design (something in between a helicopter and a dirigible). At this moment I can model only the dynamics of this vehicle without any aerodynamic effects taken into account. For this I use the following work-flow:</p> <p>Mechanical model in SolidWorks -> MSC ADAMS (Dynamics) &lt;--> MATLAB/Simulink (Control algorithms)</p> <p>Thus, the dynamics of the vehicle is modeled in ADAMS and all control algorithms are in MATLAB/Simulink. Unfortunately, ADAMS can not simulate any aerodynamic effects. As a result, I can not design a control system that is capable to fight even small wind disturbances.</p>
Is there any aerodynamics modeling/simulation software that can consume a SolidWorks model and interface with MATLAB/Simulink?
<p>It looks like there are 3 main components to the demonstration in this video. </p> <h3>1. Creating the map</h3> <p>Navigation requires a map of the robot's environment -- it looks like this was done manually. It doesn't matter whether you create the digital map based on the real-world map, or a real-world map based on a contrived digital map. But the maps must match.</p> <h3>2. Computing the path</h3> <p>As mentioned in <a href="https://robotics.stackexchange.com/a/8297/350">Chuck's answer</a>, <a href="https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm" rel="nofollow noreferrer">Dijkstra's algorithm</a> can compute a path through a map -- one of many methods of doing so, but probably the most appropriate for this problem. However, you need a bit more than Dijkstra here; the final path is not just the list of nodes you will travel through, but what direction you need to turn at each node (i.e. how many lines the line sensor will need to cross over) in order to be on the line leading to the next node in the path. </p> <p>I would imagine that the instructions set to the robot are just a simple list of how many paths to rotate over at each intersection. Like <code>[-1, -1, 1, 1, 0]</code>. This assumes a known starting point and orientation.</p> <h3>3. Following the path</h3> <p>This seems like a straightforward application of a line-following robot. It looks like this robot is also able to detect the intersections.</p> <p>In their video, they provide a huge piece of manual assistance: placing the robot at a known starting location. All that the robot does is follow the lines, and at each detected intersection executes a turn according to the list of turns it received as instructions.</p>
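<p>A small sketch of step 2 in code (Python; the graph, coordinates and turn convention are made up for illustration, assuming a grid-aligned line map where every turn is a multiple of 90 degrees). Dijkstra produces the node sequence, and the heading change at each intersection is converted into the "how many lines to rotate over" count that would be sent to the robot:</p> <pre><code>import heapq, math

def dijkstra(adj, start, goal):
    """adj: {node: {neighbour: cost}}. Returns the node sequence start..goal."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d &gt; dist.get(u, math.inf):
            continue
        for v, w in adj[u].items():
            nd = d + w
            if nd &lt; dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

def path_to_turns(path, coords, start_heading):
    """Turn instruction at each intersection along the path.

    coords: {node: (x, y)} on a grid-aligned map.
    Returns e.g. [-1, -1, 1, 1, 0]: quarter turns, negative = clockwise.
    """
    turns, heading = [], start_heading
    for a, b in zip(path, path[1:]):
        dx = coords[b][0] - coords[a][0]
        dy = coords[b][1] - coords[a][1]
        new_heading = math.atan2(dy, dx)
        diff = (new_heading - heading + math.pi) % (2 * math.pi) - math.pi
        turns.append(round(diff / (math.pi / 2)))   # quarter-turn count
        heading = new_heading
    return turns

# tiny illustrative map: four nodes on a unit grid
adj = {'A': {'B': 1}, 'B': {'A': 1, 'C': 1}, 'C': {'B': 1, 'D': 1}, 'D': {'C': 1}}
coords = {'A': (0, 0), 'B': (1, 0), 'C': (1, 1), 'D': (0, 1)}
route = dijkstra(adj, 'A', 'D')
print(path_to_turns(route, coords, start_heading=0.0))   # [0, 1, 1]
</code></pre>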
8293
2015-10-20T11:23:03.210
|wheeled-robot|mapping|line-following|
<p>I know how to make a line follower. But in <a href="https://www.youtube.com/watch?v=5At_u5rnh2U" rel="nofollow">this video</a> what have they done exactly? They give the source and destination on the map, but how does the robot move based on the instructions given in the map?</p> <p>What is the procedure to do this? They have mapped the path. Please do watch the video.</p>
Advanced Line Following Robot of Maze Solving
<p><a href="https://stackoverflow.com/questions/2533321/robotics-simulator/2556393">This closed StackOverflow question</a> has a long list of simulators and some discussion on them. Some are simple and 2D only, some are 3D and include rigid body physics simulation. </p>
8294
2015-10-20T11:31:26.367
|mobile-robot|odometry|simulation|
<p>I want a mobile robot to go from a starting position to a goal position, but I don't want to calculate the pose from encoders. Instead, I want to know if there exists a simulator that provides a pose function that makes the work easier, like go_to(x_coordinate, y_coordinate). That means the robot will automatically calculate its current position and lead itself to the goal position.</p>
Need a mobile robot simulator that provides easier odometry functions
<p>The MATLAB tutorial doesn't cover this case well: when you have MATLAB and Gazebo running on the same Linux machine with no virtual box.</p> <p>Change your Linux ethernet interface IP address statically using:</p> <pre><code>sudo ifconfig eth0 192.168.1.1 netmask 255.255.255.0
</code></pre> <p>assign the same IP address to the ROS master (appending the exports to your <code>~/.bashrc</code>) using:</p> <pre><code>echo "export ROS_MASTER_URI=http://192.168.1.1:11311" &gt;&gt; ~/.bashrc
echo "export ROS_HOSTNAME=192.168.1.1" &gt;&gt; ~/.bashrc
echo "export ROS_IP=192.168.1.1" &gt;&gt; ~/.bashrc
</code></pre> <p>assign to MATLAB's ROS node an IP address in the same subnet as the master above using:</p> <pre><code>setenv('ROS_IP','192.168.1.100')
</code></pre> <p>start the ROS master in a terminal using: <code>roscore</code></p> <p>start the Gazebo model in a terminal</p> <p>connect to the ROS master IP address in MATLAB using: <code>rosinit('192.168.1.1:11311')</code></p> <p>Now you should see the Gazebo and ROS topics in the MATLAB command window.</p> <p>You can also follow the MATLAB tutorial to see how much more you can do.</p>
8296
2015-10-20T13:42:49.827
|ros|matlab|gazebo|
<p>I am planning to use MATLAB and Gazebo for one of my course projects. </p> <p>However all the tutorials I have seen till now use Gazebo by using a virtual machine which has ROS and Gazebo installed. I have already installed ROS and Gazebo on this machine (OS Ubuntu). I also have MATLAB installed on it. </p> <p>Is it possible to use the Gazebo on this machine itself with the MATLAB toolbox? </p>
Using Gazebo installed on same machine in MATLAB
<p>Path planning would essentially be setting out a path on a particular terrain for the robot to navigate, primarily automatically, coupled with finding the shortest route while avoiding obstacles.</p> <p>Motion planning could be termed the procedure by which you enable your robot to move: which type of sensors to use, what type of wheels to move, automatic or RC, etc.</p> <p>Path planning could also be termed an inherently software exercise and motion planning a hardware exercise.</p>
8302
2015-10-20T21:28:21.337
|mobile-robot|motion-planning|humanoid|
<p>What are the main differences between motion planning and path planning? Imagine that the objective of the algorithm is to find a path between the humanoid soccer playing robot and the ball which should be as short as possible and yet satisfying the specified safety in the path in terms of the distance from the obstacles. </p> <p>Which is the better terminology? motion planning or path planning?</p>
What is the difference between path planning and motion planning?
<p>It sounds like you are developing a robotic manipulator from scratch so I also assume you will be writing software for position control too. Both the design and control depend on not only the <em>dynamics</em> but also the <em>kinematics</em> of your robot.</p> <p>Kinematics cover the relationships between the joint angles and link lengths (each "arm" is referred to as a <em>link</em>) so that you can determine the position and orientation (as well as velocities and accelerations) along any part of the robot. One method of modeling the kinematics is to break your set of links into the corresponding series of <a href="https://en.wikipedia.org/wiki/Denavit%E2%80%93Hartenberg_parameters" rel="nofollow">Denavit-Hartenberg parameters</a>. This is not difficult to do, you just have to familiarize yourself with coordinate frames and determine the coefficients for your particular links.</p> <p>Keep in mind that we are always considering the robot states to be the angles of the joints (or extension of any prismatic joint). Given any set of angles we can then use the kinematics to determine the Cartesian positions of the robot components. Same goes for angular rates to Cartesian velocities and angular accelerations to Cartesian accelerations (of course the velocities and accelerations depend on the angles and rates as well).</p> <p>To get the torques we then look at the dynamics, where we use the kinematics to model the robot motion and then examine the reaction forces and moments between links based on the inertia of the moving masses. Again, this is not actually too difficult and reduces to a series of free-body diagrams that can be solved in series. The torque on each joint is then simply the moment being applied to it projected onto the joint axis of rotation.</p> <p>This is known as the <em>computed torque method</em> whereby we prescribe a trajectory for the manipulator joints over time (angles, angular velocities, and angular accelerations) then use the kinematics to determine the Cartesian motion of the joints and finally solve for the reaction torques in the joints. Essentially we are saying "if the manipulator were to perform this motion, what torques will it need to apply through the motors".</p> <p>When you control the manipulator, you can then input those computed torques for your desired motion and include a layer of feedback control to correct for errors since our model is never perfect. Of course you can use the kinematics and dynamics to approach the problem in different ways, but you'll always need to include them both.</p> <p>As 50k4 mentioned in the other answer, there are plenty of software packages to help you do this. However, you will most definitely have to understand the underlying principles in order to translate your particular design into the configuration that the software expects (i.e., DH parameters or some other accepted convention). Manipulators can be a bit daunting so take it in stride!</p>
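<p>To make the kinematics part slightly more concrete, here is a small Python sketch of forward kinematics from Denavit-Hartenberg parameters (a generic textbook-style implementation, not tailored to your arm; the two-link numbers at the bottom are purely illustrative). Each row of DH parameters produces one homogeneous transform, and chaining them gives the end effector pose for a given set of joint angles:</p> <pre><code>import numpy as np

def dh_transform(theta, d, a, alpha):
    """Single link transform from standard DH parameters."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_rows):
    """dh_rows: list of (d, a, alpha) for each revolute joint."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_rows):
        T = T @ dh_transform(theta, d, a, alpha)
    return T  # 4x4 pose of the end effector in the base frame

# Illustrative 2-link planar arm, link lengths 0.3 m and 0.2 m
dh_rows = [(0.0, 0.3, 0.0), (0.0, 0.2, 0.0)]
T = forward_kinematics([np.deg2rad(30), np.deg2rad(45)], dh_rows)
print(T[:3, 3])   # Cartesian position of the gripper
</code></pre> <p>Differentiating this chain (or solving the link-by-link free-body diagrams as described above) is then what gives you the velocities, accelerations and finally the joint torques for your chosen trajectory.</p>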
8310
2015-10-22T07:40:06.647
|arduino|robotic-arm|stepper-motor|rcservo|torque|
<p>I am currently planning on building a robotic arm. The arm's specs are as follows: </p> <ul> <li>3 'arms' with two servos each (to move the next arm)</li> <li>single servo clamp</li> <li>mounted on revolving turntable</li> <li>turntable rotated by stepper motor</li> <li>turntable mounted on baseplate by ball bearings to allow rotation</li> <li>baseplate mounted on caterpillar track chassis</li> <li>baseplate is smaller in length and width than caterpillar chassis</li> </ul> <p>What are the required formulas in determining how much torque each servo must produce, keeping in mind that the arm <em>must</em> be able to lift weights of up to 1 kilogram? Also, considering that the ball bearings will take the load of the arm, how strong does the stepper have to be (just formulas, no answers)?</p> <p>As far as overall dimensions are concerned, the entire assembly will be <em>roughly</em> 255mm x 205mm x 205mm (l x w x h). I have not finalized arm length, but the aforementioned dimensions give a general estimate as to the size. </p>
What is the required theory behind building a robotic arm?
<p>The inverted pendulum is unstable in the sense that it will diverge away from the upright position if <em>perturbed</em> (for example, slightly blown by wind or lightly vibrated). On the other hand the "climbing robot" will swing back to its original position if perturbed, so you can say it is stable.</p> <p>You don't need to derive the equations of motion for an unstable system in order to control it if you have sensor feedback of the desired output (just like you said). Instead, you use a feedback control law that compares the sensor reading to your desired output to compute the required control input that reduces that difference. Read about <a href="https://en.wikipedia.org/wiki/PID_controller" rel="nofollow">PID controllers</a> to get an idea of how that works (it's actually pretty simple).</p> <p>The advantage to using the equations of motion is that you can <em>predict</em> the required input to achieve a desired response, and this predicted input is often referred to as <em>feed-forward</em> control (or <em>open-loop</em> control). If you combine both feed-forward and feedback control you get a very robust system that applies predicted inputs to achieve a desired output but also implements feedback based on sensors to correct for errors (your equations of motion can never be perfectly accurate).</p> <p>Another benefit of using the equations of motion is that you can account for non-linearities in the system that might otherwise be problematic for a PID controller.</p> <p>When it comes to programming, using feedback will be much simpler than using feed-forward since the equations of motion will likely be relatively complex (in general, but not bad for a simple inverted pendulum).</p>
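<p>For reference, the feedback part really can be that simple. A bare-bones PID loop in Python (illustrative only: <code>read_angle()</code> and <code>apply_torque()</code> stand in for whatever sensor and actuator interface your system actually has, and the gains have to be tuned for your hardware):</p> <pre><code>from time import sleep

# Minimal PID feedback loop: drive the measured angle toward a setpoint.
kp, ki, kd = 8.0, 0.5, 1.2      # gains to be tuned for the actual system
dt = 0.01                       # control period in seconds
setpoint = 0.0                  # e.g. the upright angle of the pendulum

integral, prev_error = 0.0, 0.0
while True:
    error = setpoint - read_angle()        # sensor feedback (hypothetical helper)
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative
    apply_torque(u)                        # actuator command (hypothetical helper)
    prev_error = error
    sleep(dt)
</code></pre>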
8313
2015-10-22T10:40:13.363
|mobile-robot|control|
<p>As someone who is new and is still learning about robotics, I hope you can help me out.</p> <p>Let's say I have two systems: </p> <ul> <li>(a) Inverted Pendulum (unstable system)</li> <li>(b) Pole Climbing Robot (stable system)</li> </ul> <p>For system (a), I would say that generally, it is a more dynamic system that produces fast motion. So, in order to effectively control it, I would have to derive the Equations of Motions (EOM) and only then I can supply the sufficient input to achieve the desired output. Eventually, the program will implement the EOM which enables the microcontroller to produce the right signal to get the desired output.</p> <p>However for system (b), I assume that it is a stable system. Instead of deriving the EOM, why cant I just rely on the sensor to determine whether the output produced is exactly what I want to achieve? </p> <p>For unstable system, controlling it is just difficult and moreover, it does not tolerate erratic behavior well. The system will get damaged, as a consequence. </p> <p>On the contrary, stable system is more tolerant towards unpredictable behavior since it is in fact stable.</p> <p>Am I right to think about it from this perspective? What exactly is the need for deriving the EOM of systems (a) and (b) above? What are the advantages? How does it affect the programming of such systems?</p> <p><strong>Edited:</strong> Some examples of the climbing robot that I'm talking about: </p> <ol> <li><p><a href="http://i.ytimg.com/vi/gf7hIBl5M2U/hqdefault.jpg" rel="nofollow">i.ytimg.com/vi/gf7hIBl5M2U/hqdefault.jpg</a></p></li> <li><p><a href="http://ece.ubc.ca/~baghani/Academics/Project_Photos/UTPCR.jpg" rel="nofollow">ece.ubc.ca/~baghani/Academics/Project_Photos/UTPCR.jpg</a></p></li> </ol>
Is modelling a robot and deriving its Equations of Motion more applicable to a system that is inherently unstable?
<p>One of the options is to utilize the <a href="https://en.wikipedia.org/wiki/Exponential_smoothing" rel="nofollow noreferrer">exponential moving average</a>. The below picture shows data corrupted by Gaussian noise with zero mean and 0.4 standard deviation, and how the filter does a good job of removing the spiky noisy data. </p> <p><a href="https://i.stack.imgur.com/9GApL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9GApL.png" alt="enter image description here"></a></p> <pre><code>x = -pi:0.01:2*pi;
perfectY = cos(x); % generate data
noisyY = perfectY + .4 * randn(1, length(perfectY)); % corrupt data by noise
a = 0.13; % tuning parameter
preSmoothY = cos(-pi); % arbitrary initial value

for i = 1:length(noisyY)
    Smooth = (1-a)*preSmoothY + a*noisyY(i);
    preSmoothY = Smooth;
    expY(i) = Smooth;
end

plot( x,noisyY,'g', x, expY,'r', 'LineWidth', 2)
hold on
plot(x, perfectY, 'b','LineWidth', 2)
grid on
legend('noisy data', 'filtered data', 'data')
</code></pre> <p>Another choice is using a Kalman filter. It is an optimal filter if its requirements are met, which means no other filter can perform better than the Kalman. Unfortunately, the filter imposes some constraints, such as linearity and Gaussianity. Moreover, the system must be described precisely. </p>
8319
2015-10-22T23:48:40.253
|quadcopter|gyroscope|filter|
<p>I would like to filter angular velocity data from a "cheap" gyroscope (60$). These values are used as an input of a nonlinear controller in a quadcopter application. I am not interested in removing the bias from the readings.</p> <p><strong>Edit</strong>: I'm using a l2g4200d gyroscope connected via i2c with an Arduino uno. The following samples are acquired with the arduino, sent via serial and plotted using matlab.</p> <p>When the sensor is steady, the plot shows several undesired spikes.</p> <p><a href="https://i.stack.imgur.com/v2A6F.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/v2A6F.png" alt="enter image description here"></a></p> <p><strong>How can I filter these spikes?</strong></p> <p><strong>1st approach:</strong> Spikes are attenuated but still present...</p> <p>Let's consider the following samples in which a couple of fast rotations are performed. Let's assume that the frequency components of the "fast movement" are the ones I will deal with in the final application.</p> <p><a href="https://i.stack.imgur.com/8LvBD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8LvBD.png" alt="enter image description here"></a></p> <p>Below, the discrete Fourier transform of the signal in a normalized frequency scale and the second order ButterWorth low pass filter.</p> <p><a href="https://i.stack.imgur.com/SDYpK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SDYpK.png" alt="enter image description here"></a></p> <p>With this filter, the main components of the signal are preserved. </p> <p><a href="https://i.stack.imgur.com/Xg5Ft.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Xg5Ft.png" alt="enter image description here"></a></p> <p>Although the undesired spikes are attenuated by a factor of three the plot shows a slight phase shift...</p> <p><a href="https://i.stack.imgur.com/lh0cd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lh0cd.png" alt="enter image description here"></a></p> <p>And the spikes are still present. How can I improve this result? Thanks.</p> <p><strong>EDIT 2:</strong></p> <p>1./2. I am using a breakout board from Sparkfun. You can find the circuit with the Arduino and the gyro in this post: <a href="http://userk.co.uk/gyroscope-arduino/" rel="nofollow noreferrer">Can you roll with a L3G4200D gyroscope, Arduino and Matlab?</a> I have added pullup resistors to the circuit. I would exclude this option because other sensors are connected via the i2c interface and they are working correctly. I haven't any decoupling capacitors installed near the integrated circuit of the gyro. The breakout board I'm using has them (0.1 uF). Please check the left side of the schematic below, <strong>maybe I am wrong</strong>.</p> <p><a href="https://i.stack.imgur.com/4E2KK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4E2KK.png" alt="enter image description here"></a></p> <p>Motors have a separate circuit and I have soldered all the components on a protoboard.</p> <ol start="4"> <li><p>The gyro is in the quadcopter body but during the test the motors were turned off.</p></li> <li><p>That is interesting. The sampling frequency used in the test was 200Hz. Increasing the update freq from 200 to 400 hz doubled the glitching.</p></li> </ol> <p>I found other comments on the web about the same breakout board and topic. Open the comments at the bottom of the <a href="http://forum.bildr.org/viewtopic.php?t=443" rel="nofollow noreferrer">page</a> and Ctrl-F <code>virtual1</code></p>
Filtering angular velocity spikes of a cheap Gyroscope
<p>I think your diagram is missing an angle for the laser angle with respect to the vehicle body -- I'm going to call that angle $\alpha$, see this diagram for clarity:</p> <p><a href="https://i.stack.imgur.com/hAerL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hAerL.png" alt="enter image description here"></a></p> <p>Since it seems you are tracking an object with your laser, I imagine the point of this is to predict the angular velocity of that object in the vehicle frame. So your laser is scanning in 2D, providing range at a number of angles, and you want to predict what angle the object will be at in the next time step.</p> <p>In that case, you need to consider to motion of the system to figure out the motion of the target object in the laser frame. Assume the target object has position $\hat{p}$ in the world frame and $\hat{p}_L$ in the laser frame. Although it sounds like this is for a manipulator that is fixed at the base, we will still consider a moving centre of mass with position $p_0$ in the world frame. The position of the target in the world frame is then:</p> <p>$\hat{p} = p_0 + A_{\beta} \begin{bmatrix} L_z \\ -L_y \\ \end{bmatrix} + A_{\beta} R \begin{bmatrix} \cos \alpha \\ -\sin \alpha \\ \end{bmatrix} $ </p> <p>Where $A_{\beta} = \begin{bmatrix} \cos \beta &amp; \sin \beta \\ -\sin \beta &amp; \cos \beta \\ \end{bmatrix}$</p> <p>Then we differentiate that to get the velocity -- which should be zero assuming your object is stationary in the world frame.</p> <p>$0 = \dot{p}_0 + \dot{\beta} J_{\beta} \begin{bmatrix} L_z \\ -L_y \\ \end{bmatrix} + R \dot{\beta} J_{\beta} \begin{bmatrix} \cos \alpha \\ -\sin \alpha \end{bmatrix} + R \dot{\alpha} A_{\beta} \begin{bmatrix} -\sin \alpha \\ -\cos \alpha \end{bmatrix} + \dot{R} A_{\beta} \begin{bmatrix} \cos \alpha \\ -\sin \alpha \end{bmatrix}$</p> <p>Where $J_{\beta} = \begin{bmatrix} -\sin \beta &amp; \cos \beta \\ -\cos \beta &amp; -\sin \beta \\ \end{bmatrix}$</p> <p>This then defines two equations (the z- and y-components of the above equation) in two unknowns ($\dot{R}$ and $\dot{\alpha}$).</p> <p>However, what you really need is to simply find an expression for the <em>predicted</em> range and laser angle ($\tilde{R}$ and $\tilde{\alpha}$) in terms of your states, the vehicle pose, ($z$,$y$,$\beta$) and target position ($\hat{z}$,$\hat{y}$).</p> <p>$\tilde{R} = \sqrt{\left( \hat{z} - \left( z + L_z \cos \beta \right) \right)^2 + \left( \hat{y} - \left( y + L_y \sin \beta \right) \right)^2}$</p> <p>$\tilde{\alpha} = \tan^{-1} \left( \frac{\hat{y} - \left( y + L_y \sin \beta \right)}{\hat{z} - \left( z + L_z \cos \beta \right) } \right) - \beta$</p> <p>Then you can apply this in an EKF according to the chat discussion. Note I have switched the axes labels and sign conventions to something that I am more used to.</p> <p><a href="https://i.stack.imgur.com/dZLZq.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dZLZq.jpg" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/rcCKt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rcCKt.png" alt="enter image description here"></a></p>
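<p>For completeness, the two predicted-measurement expressions above drop straight into code as the EKF measurement function (Python sketch using the same symbols as the equations; <code>z, y, beta</code> are the vehicle pose states and <code>z_hat, y_hat</code> the target position):</p> <pre><code>import math

def predicted_measurement(z, y, beta, z_hat, y_hat, Lz, Ly):
    """Predicted range and laser angle for the geometry above."""
    dz = z_hat - (z + Lz * math.cos(beta))
    dy = y_hat - (y + Ly * math.sin(beta))
    r_pred = math.hypot(dz, dy)
    alpha_pred = math.atan2(dy, dz) - beta
    return r_pred, alpha_pred
</code></pre>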
8327
2015-10-24T16:21:02.087
|robotic-arm|sensor-fusion|
<p>I have the following system here:</p> <p><a href="https://i.stack.imgur.com/DKIDk.jpg" rel="nofollow noreferrer">https://i.stack.imgur.com/DKIDk.jpg</a> <a href="https://i.stack.imgur.com/mQPXP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mQPXP.png" alt="enter image description here"></a></p> <p>Basically, I have a range finder which gives me $R_s$ in this 2D model. I also have the model rotate about the Centre of Mass, where I have angular values and velocities Beta ($\beta$) and BetaDot ($\dot{\beta}$).</p> <p>I can't see, for the life of me, how to figure the formula for the angular velocity in the Range Finder frame. How am I supposed to do this? I have all the values listed in those variables. The object there doesn't move when the vehicle/system pitches. It's stationary.</p>
Transforming angular velocity?
<p>Here are my two suggestions for dealing with this problem:</p> <p>Use a median filter, which replaces each value of your signal with the median of the values in a small window around each one. Here is some pseudo-code, where <code>x</code> is your original signal, <code>y</code> is the filtered signal, <code>N</code> is the number of points in your signal, and <code>W</code> is the number of points in the median filter window.</p> <p><code>for (k = 1 to N) { y[k] = median of samples x[k-W+1] to x[k] }</code></p> <p>If you are using MATLAB then you can use the function <code>medfilt1</code> to do this, or the function <code>median</code> to make your own filter (see <a href="http://www.mathworks.com/help/signal/ug/remove-spikes-from-a-signal.html" rel="nofollow noreferrer">this</a>), whereas if you are using a language like C++ then you may need to write your own functions (see <a href="https://stackoverflow.com/questions/2114797/compute-median-of-values-stored-in-vector-c">this</a>).</p> <p>The other option is to simply check the magnitude of the change in the signal and reject any sample whose change is beyond some threshold. Something like this:</p> <p><code>if (abs(x[k] - y[k-1]) &gt; threshold) { y[k] = x[k-1] } else { y[k] = x[k] }</code></p> <p>EDIT: </p> <p>Taking a look at your data, it looked suspiciously like an angle wrapping issue, but around 180 deg instead of 360 deg. The spikes disappear if you double the signal then apply an angle wrap (using MATLAB's <code>wrapToPi</code> for example). The plot below shows the doubled signal in blue and the doubled signal after wrapping in red.</p> <p><a href="https://i.stack.imgur.com/KF7Yl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KF7Yl.png" alt="enter image description here"></a></p> <p>Here is the code I used:</p> <pre><code>sensorData = dlmread('sensor.txt'); t = sensorData(:,1); x = sensorData(:,2); x2 = 2*x; y = wrapToPi(x2*(pi/180))*(180/pi); figure hold on; plot(t,x,'k','linewidth',2); plot(t,x2,'b','linewidth',2); plot(t,y,'r','linewidth',2); </code></pre>
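<p>If you do end up rolling your own outside MATLAB, both ideas are only a few lines. A Python/numpy sketch (illustrative only; the window handling at the start of the signal is simplified, and the spike test here holds the last <em>filtered</em> value rather than the last raw one):</p> <pre><code>import numpy as np

def median_filter(x, W):
    """Causal median filter: each sample replaced by the median of the last W."""
    y = np.asarray(x, dtype=float).copy()
    for k in range(len(x)):
        y[k] = np.median(x[max(0, k - W + 1):k + 1])
    return y

def spike_reject(x, threshold):
    """Hold the previous value whenever the jump exceeds the threshold."""
    y = np.asarray(x, dtype=float).copy()
    for k in range(1, len(x)):
        if abs(x[k] - y[k - 1]) &gt; threshold:
            y[k] = y[k - 1]
    return y
</code></pre>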
8329
2015-10-25T10:41:43.953
|sensors|imu|matlab|noise|filter|
<p>I am trying to measure Euler angles from an IMU, but some discontinuities happen during measurement, even in a vibrationless environment, as shown in the images below. </p> <p>Can someone explain which type of filter would be the best choice to filter this type of discontinuity?</p> <p><a href="https://i.stack.imgur.com/gHMgg.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gHMgg.jpg" alt="Discontinuities in Euler angle measurement"></a></p> <p><a href="https://i.stack.imgur.com/FBnSc.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FBnSc.jpg" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/bvIZa.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bvIZa.jpg" alt="enter image description here"></a></p>
Filtering IMU angle discontinuities
<p>Both the forward kinematics and inverse kinematics aren't too difficult, but always a little tricky for parallel manipulators like this one. Consider the configuration in this diagram.</p> <p><a href="https://i.stack.imgur.com/4em8U.png" rel="noreferrer"><img src="https://i.stack.imgur.com/4em8U.png" alt="enter image description here"></a></p> <p>The forward kinematics first involve solving for the position of the joint where you hold the pen from each motor joint separately and then equating the two.</p> <p>$\begin{bmatrix} a - b \cos \theta_2 - c \cos \beta \\ b \sin \theta_2 + c \sin \beta \end{bmatrix} = \begin{bmatrix} e \cos \theta_1 + d \cos \alpha \\ e \sin \theta_1 + d \sin \alpha \end{bmatrix}$</p> <p>This gives you two equations in two unknowns, $\alpha$ and $\beta$. After you solve for those angles, simply substitute them back into one of the sides of the above equation and you have the position of the pen given the input motor angles $\theta_1$ and $\theta_2$.</p> <p>For the inverse kinematics, you start with the known position of your pen (the blue dot in the image), then you compute the lengths of the red lines, $f$ and $g$. Since you know all other lengths you can also solve for the angles in triangles $def$ and $gcb$. From that geometry you can then get the angles $\theta_1$ and $\theta_2$.</p> <p>So then you can turn each of your desired clock characters into an x-y path for the pen, convert the x-y path into $\theta_1$-$\theta_2$ trajectories, and control your motors accordingly. It looks like the entire linkage is then mounted on another joint that pushes the pen away from the surface between characters, but the logic behind that should be pretty easy to implement.</p>
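<p>A possible numeric version of the inverse kinematics (Python sketch; the pivot placement, symbols and the elbow/sign choices are my own reading of the figure, so treat it as a starting point rather than a drop-in solution). It applies the law of cosines to the two triangles formed by the red lines $f$ and $g$:</p> <pre><code>import math

def five_bar_ik(px, py, a, b, c, d, e):
    """Proximal link angles for pen position (px, py).

    Left motor assumed at (0, 0) driving links e then d; right motor at
    (a, 0) driving links b then c.  The returned angles are measured from
    the +x axis at each pivot, elbows-out configuration; mapping to the
    figure's theta1/theta2 convention may need an offset or a sign flip.
    """
    # Left side: triangle with sides e, d and distance f to the pen
    f = math.hypot(px, py)
    phi1 = math.acos((e**2 + f**2 - d**2) / (2 * e * f))
    theta1 = math.atan2(py, px) + phi1

    # Right side: triangle with sides b, c and distance g to the pen
    g = math.hypot(px - a, py)
    phi2 = math.acos((b**2 + g**2 - c**2) / (2 * b * g))
    theta2 = math.atan2(py, px - a) - phi2
    return theta1, theta2
</code></pre>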
8331
2015-10-25T14:18:45.693
|robotic-arm|first-robotics|
<p>I am a beginner at robotics. I recently stumbled across this <a href="https://www.youtube.com/watch?v=iOLFP90DneY" rel="nofollow">robotic clock on youtube</a>.</p> <p>I am an electrical engineering student and am interested in submitting it as my minor project.</p> <p>I have studied the basics on forward and inverse kinematics, Greubler's Equation, four bar linkage but this robot seems to be a 5 bar linkage. I want to know how to implement it in a 5 bar linkage.</p> <p>How to use the inverse kinematics solutions described in <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.456.7665&amp;rep=rep1&amp;type=pdf" rel="nofollow">Combined synthesis of five-bar linkages and non-circular gears for precise path generation</a>, to make the robot follow desired trajectory?</p> <p>I have been stuck at this for days... any sort of help would be appreciated.</p>
Designing a 5 bar linkage robot: Plot Clock
<p>You are most likely running into problems with the maximum time step in your simulation. This phenomenon is known as <a href="https://en.m.wikipedia.org/wiki/Stiff_equation#Motivating_example" rel="nofollow noreferrer">stiffness</a>, where your equations of motion are highly sensitive to the size of the time step in a discrete solution.</p> <p>Consider a simple mass-spring system with mass $m$, spring stiffness $k$, displacement $x$ and velocity $\dot{x}$ for states $q$, and an input force $F$ for the input $u$.</p> <p>$q = \begin{bmatrix} \dot{x} \\ x \end{bmatrix}$</p> <p>$A = \begin{bmatrix} 0 &amp; -\frac{k}{m} \\ 1 &amp; 0 \end{bmatrix}$</p> <p>$B = \begin{bmatrix} \frac{1}{m} \\ 0 \end{bmatrix}$</p> <p>$C = \begin{bmatrix} 0 &amp; 1 \end{bmatrix}$</p> <p>$D = 0$</p> <p>We apply a full-state feedback control law (which ends up being PD control), with:</p> <p>$u = -Kq$</p> <p>And use MATLAB's <code>place</code> function to determine the gain matrix $K$ for desired poles $p$. Package this into a setup script like this:</p> <pre><code>% system coefficients
k = 1000; % spring stiffness (N/m)
m = 1; % mass (kg)

% state-space model
A = [0 (-k/m); 1 0];
B = [(1/m); 0];
C = [0 1];
D = 0;

% desired poles
p = [-1000, -2000];

% gain matrix
K = place(A,B,p);

% initial conditions
x_dot_0 = 0.5;
x_0 = 0.2;
</code></pre> <p>Then create a Simulink model:</p> <p><a href="https://i.stack.imgur.com/cM2My.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cM2My.png" alt="enter image description here"></a></p> <p>Now if we use poles $p = \begin{bmatrix} -10 &amp; -20 \end{bmatrix}$, we get an acceptable albeit slower response. But when we use poles $p = \begin{bmatrix} -1000 &amp; -2000 \end{bmatrix}$ we find that there is only marginal stability with some steady-state error (although the bandwidth of the steady-state error is small).</p> <p>This is happening because the closed-loop system frequency is so high that there is numerical instability in the simulation caused by the solver's choice of time-step. To get around this, we can switch the maximum time step from <code>auto</code> to something appropriate like <code>1e-5</code>. You can do this in the "Solver options" found in the "Simulation -&gt; Model Configuration Parameters" menu (it might be called something a bit different in your version of MATLAB).</p> <p>Here is the comparison of the low gain result, high gain result with default <code>auto</code> maximum time-step, and high gain result with maximum time-step of <code>1e-5</code>.</p> <p><a href="https://i.stack.imgur.com/nzuX7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nzuX7.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/N85qS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/N85qS.png" alt="enter image description here"></a></p>
8333
2015-10-25T22:51:39.043
|control|stability|
<p>I'm given an assignment in which I have to design a full state feedback controller by pole placement. The state space system is fully controllable and I've been using Matlab/Simulink to determine the required feedback gain K using the place() command for several sets of poles, however once I use poles that are "too negative", for example p=[-100,-200,-300,-400,-500], my controlled system starts showing bounded oscillatory behaviour. </p> <p>Is it possible that too negative poles can cause marginal stability? And if so, why? I've read that this is only possible when the real part of one or more poles equals 0, which certainly isn't the case here. </p>
Relation between pole placement and marginal stability?
<p>I am answering an old question, but Staubli does make a SCARA arm with the vertical joint over the base. It was called the TS20. <a href="https://www.staubli.com/en-us/robotics/product-range/industrial-robots/4-axis-scara-robots/ts20/" rel="nofollow noreferrer">https://www.staubli.com/en-us/robotics/product-range/industrial-robots/4-axis-scara-robots/ts20/</a></p>
8350
2015-10-28T11:17:06.630
|robotic-arm|design|actuator|
<p>I am currently interested in SCARA arm designs and I have few, beginner questions for which I didn't find answers yet. </p> <p>1/ While comparing professional arms (made by epson, staubli...) I noticed that the actuator used for the translation on the Z axis is at the end of the arm. On "hobby" arms like the makerarm project on kickstarter they use a leadscrew with the actuator at the beginning of the arm. I thought it was smarter to put the actuator handling this DoF at the begining of the arm (because of its weight) and not at the end, but I assume that these companies have more experience than the company behind the makerarm. So I'm probably wrong, but I would like to understand why :)</p> <p>2/ Also I would like to understand what kind of actuators are used in these arms. The flx.arm (also a kickstarter project) seems to be using stepper motors but they also say they are using closed loop control, so they added an encoder on the stepper motors right?</p> <p>Wouldn't it be better to not use stepper and, for instance, use DC brushless motors or servos instead ?</p> <p>3/ I also saw some of these arms using belts for the 2nd Z axis rotation, what is the advantage ? it only allows to put the actuator at the begining of the arm ?</p>
differences between SCARA arm design
<p>In the end, end effector speed is going to be determined by joint speed + kinematics, so I would choose whichever format you want that makes life easier on you. </p> <p>Regarding singularities, there's no singularity caused by joint velocities, only by joint angular positions. So, if you're trying to avoid singularities but are only controlling based on velocity, then you'll probably need some separate controller or other method to "watch" the trajectory you're on to alter the specific combinations of joint velocities to avoid the singularity.</p> <p>Regarding accuracy of control, again your end effector is located based on the combination of joint angles, so ultimately your joint angle positioning AND end effector positioning are both determined by your ability to encode, read, and control the joint angles. Again, I would go with whichever method is more convenient for you. </p>
8361
2015-10-30T15:39:09.417
|robotic-arm|jacobian|force-sensor|
<p>I have a 6 DOF arm whose velocities I'm controlling as a function of force applied to the end effector. The software for the robot allows me to input either the desired end effector velocity or the desired joint angular velocities, which I know can be found using the inverse Jacobian.</p> <p>Are there any benefits of using one scheme over the other? Would, for example, one help avoid singularities better? Does one lead to more accurate control than the other does? </p>
6DOF robot arm: Velocity of end effector vs. joint velocities
<p>First of all, you are right: image processing is not a job for MCUs, mostly because of their limited hardware resources. But it is simply not what those chips are made for. The reason you use an MCU is the control you have over the system. You can use hundreds of different interrupts, you can manipulate all the timers, you have DMAs, tons of different peripherals, and response times within a few microseconds. The complexity and controllability of these chips can be seen by looking at their documentation: a current one, e.g. from the STM32F7 series, has around 3000 pages of software and register documentation. <br> If you look at image processing, what do you need there? Lots of computation power. You need no timers, no DMA, no responses within microseconds. <br> So why use an MCU for image processing? It is just not their business. For image processing you need modern processors with large computational power. <br> The drawback of modern processors is that you lose all the nice things you like about MCUs: you cannot control something at high frequencies, you have no access to timer registers, and you have few peripherals. <br> Given this background, and the task of solving a problem with sensors, actuators and some computer vision, you can approach it in the following ways:<br></p> <p><strong>Use dedicated I/O hardware for a normal computer</strong> <br></p> <p>Leave your computer vision and all your control algorithms on your computer, and use additional hardware to switch I/Os, drive buses, or whatever else you have to do. Example hardware would be Phidgets, which work over USB, or a PCI card which is directly connected to your computer's bus system. This has the advantage that you only have one system, but your system has no real-time capability at all. The OS (Windows or Android) gives no real-time guarantees; with some tweaks and experience you can reach a jitter of about 0.5 ms. This is basically the idea you mentioned in the question.</p> <p><strong>Program MCUs for vision</strong></p> <p>That is, use MCUs for things they are not made for. If you do not need a lot of processing on the images, this can be a viable option; it depends on your task and on the setup. Most of the algorithms are available and can be implemented with some time investment. The problem is that you are missing the computational power and the memory, so you are limited in computations per timeframe.</p> <p><strong>Apply a distributed controller concept</strong></p> <p>Take the best of both worlds. Let the MCU and the computer (with a decent CPU, memory and OS) each do what they were designed for. The only problem is communication between those modules, but there are solutions, from serial (USB) over TCP/IP (Ethernet) to SPI. You just need to figure out what the requirements are and what fits them best.</p> <p><strong>Use a system which combines the best of each world</strong></p> <p>There are not a lot of modules made for such applications, but in this time of early IoT development there are some. In my personal opinion the best module is the BeagleBone Black. It has a high-performance ARM core from the Cortex-A series and a DSP coprocessor for high responsiveness and low jitter. On this DSP you do not have all the features you can access on a normal MCU, but it may be sufficient.</p> <p>In conclusion, the distributed controller is the most sophisticated approach and can be adjusted to meet almost any need. The other methods depend a lot on the requirements; they might be enough or maybe not. For a "I want to try something quick and dirty and I do not care about performance" situation, use some Phidgets: they are relatively cheap and pretty easy to get working.</p>
8365
2015-10-31T06:46:08.263
|microcontroller|
<p>I'm working on a robot that needs image processing to analyze data received from cameras. </p> <p>As I searched for ARM and AVR libraries, I found that there is no DIP (digital image processing) library for these micros, and their limited RAM makes image data processing hard. I want to know: is there any hardware that connects to a Windows or Android (or similar) device and makes it possible for that device to connect to actuators and sensors?</p> <p>Thank you for helping.</p>
using a device with os instead of microcontrollers
<p>For a 5-DOF manipulator you will have 15 DH parameters. Three sets of joint angles will yield 9 equations, so that will not be enough to determine the parameters. And this is assuming you know the end-effector position, whereas if you are simply trying to use the constraint that all three positions are equal then you will have even fewer equations to work with.</p> <p>You should be able to estimate the $\alpha$ parameters simply by inspecting the robot -- hopefully they are all 0 deg or $\pm$90 deg.</p> <p>If you are measuring the end-effector position then you can simply set up as many test points as are necessary to get all of the parameters. This can be done strategically to simplify the process. Start with the outermost link, rotate that joint only, and look at how the end-effector position changes, then you can determine that link's parameters. Then move to the next link and repeat.</p> <p>Alternatively, you can use the forward kinematics in a differential form as would be used to determine end-effector velocity, $\dot{p}$, in terms of joint angle velocities, $\dot{\theta}$, based on the Jacobian, $J$.</p> <p>$\dot{p} = J \dot{\theta}$</p> <p>Consider this relationship in terms of differences in end-effector position related to differences in joint angles:</p> <p>$\Delta p = J \Delta \theta$</p> <p>The above relationship can then be used to solve for the DH parameters given a series of measured $\Delta p$ and $\Delta \theta$ values using a non-linear least-squares solution.</p>
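<p>As a rough, simplified illustration of that last idea (not a full DH calibration: here only two unknown link lengths of a planar 2-link arm are estimated from measured end-effector positions, and the "measurements" are synthetic), a non-linear least-squares fit could look like this in Python:</p> <pre><code># Simplified sketch: recover unknown link lengths of a planar 2-link arm
# from measured end-effector positions via non-linear least squares.
import numpy as np
from scipy.optimize import least_squares

def fk(params, thetas):
    # forward kinematics, params = [L1, L2], thetas is an N x 2 array of joint angles
    L1, L2 = params
    x = L1 * np.cos(thetas[:, 0]) + L2 * np.cos(thetas[:, 0] + thetas[:, 1])
    y = L1 * np.sin(thetas[:, 0]) + L2 * np.sin(thetas[:, 0] + thetas[:, 1])
    return np.column_stack((x, y))

def residuals(params, thetas, measured):
    return (fk(params, thetas) - measured).ravel()

true_params = np.array([0.370, 0.374])                 # the "unknown" true lengths
thetas = np.random.uniform(-1.5, 1.5, size=(20, 2))    # 20 measured joint configurations
measured = fk(true_params, thetas)                     # synthetic end-effector measurements

sol = least_squares(residuals, x0=[0.3, 0.3], args=(thetas, measured))
print(sol.x)                                           # converges to ~[0.370, 0.374]
</code></pre> <p>The real calibration problem works the same way, just with the full forward kinematics (all DH parameters) inside the residual function.</p>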
8368
2015-11-02T02:34:14.903
|dh-parameters|
<p>Given three sets of joint angles in which the end effector is in the same position, is it possible to find the DH parameters?</p> <p>If the robot has 2 DOF in the shoulder, 2 DOF in the elbow, and 1 DOF in the wrist, with DH parameters as upper arm length, elbow offset in 1 axis, and lower arm length, can this be solved, and if so how?</p> <p>I tried iterating through DH parameters to minimize the position of the end effector with forward kinematics, but this doesn't seem to work, as DH parameters of 0 for everything gives a minimal distance of 0.</p> <p>Reason for this: given a physical robot, no DH parameters are known, and measuring by hand is not accurate.</p>
Solving for DH Parameters
<p>The (inverse) Jacobian matrix is pose dependent. It is not constant; it depends on the pose (joint angles or TCP coordinates, depending on the formulation), and so it characterizes a pose, not a robot. By implementing it as a constant you basically assume that the same joint velocity will always cause the same TCP velocity. In reality, it is clear that joint positions play an important role in how joint velocities are transformed to the TCP. </p> <p>I recommend you generate the (inverse) Jacobian matrix in symbolic form, similarly to your forward kinematics, dependent on variables (joint angles or TCP coordinates), and write a function that calculates the value of the matrix for the current pose. Then recalculate the matrix in every iteration and it should be fine.</p> <p>Furthermore, in this form, if you have a 3x3 matrix you can easily calculate its inverse; you do not need a pseudoinverse. Later, if you have 7 DOFs, you might...</p>
8389
2015-11-05T04:24:16.707
|inverse-kinematics|python|joint|jacobian|
<p>I am currently trying to implement an Inverse Kinematics solver for Baxter's arm using only 3 pitch DOF (that is why the yGoal value is redundant, that is the axis of revolution). I for the most part copied the slide pseudocode at page 26 of <a href="http://graphics.cs.cmu.edu/nsp/course/15-464/Fall09/handouts/IK.pdf" rel="nofollow">http://graphics.cs.cmu.edu/nsp/course/15-464/Fall09/handouts/IK.pdf</a> .</p> <pre><code>def sendArm(xGoal, yGoal, zGoal): invJacob = np.matrix([[3.615, 0, 14.0029], [-2.9082, 0, -16.32], [-3.4001, 0, -17.34]]) ycurrent = 0 while xcurrent != xGoal: theta1 = left.joint_angle(lj[1]) theta2 = left.joint_angle(lj[3]) theta3 = left.joint_angle(lj[5]) xcurrent, zcurrent = forwardKinematics(theta1, theta2, theta3) xIncrement = xGoal - xcurrent zIncrement = zGoal - zCurrent increMatrix = np.matrix([[xIncrement], [0], [zIncrement]]) change = np.dot(invJacob, increMatrix) left.set_joint_positions({lj[1]: currentPosition + change.index(0)/10}) #First pitch joint left.set_joint_positions({lj[3]: currentPosition + change.index(1)/10}) #Second pitch left.set_joint_positions({lj[5]: currentPosition + change.index(2)/10}) #Third Pitch joint def forwardKinematics(theta1, theta2, theta3): xcurrent = 370.8 * sine(theta1) + 374 * sine(theta1+theta2) + 229 * sine(theta1+theta2+theta3) zcurrent = 370.8 * cos(theta1) + 374 * cos(theta1+theta2) + 229 * cos(theta1+theta2+theta3) return xcurrent, zcurrent </code></pre> <p>Here is my logic in writing this: I first calculated the Jacobian 3x3 matrix by taking the derivative of each equation seen in the forwardKinematics method, arriving at:</p> <p>[370cos(theta1) + 374cos(theta1+theta2) ..... </p> <p>0 0 0</p> <p>-370sin(theta1)-374sin(theta1+theta2)-...... ]</p> <p>In order to arrive at numerical values, I inputted a delta theta change for theta1,2 and 3 of 0.1 radians. I arrived at a Jacobian of numbers:</p> <p>[0.954 0.586 .219</p> <p>0.0000 0.000 0.0000</p> <p>-.178 -.142 -0.0678]</p> <p>I then input this matrix into a pseudoinverse solver, and came up with the values you see in the invJacob matrix in the code I posted. I then multiplied this by the difference between the goal and where the end effector is currently at. I then applied a tenth of this value into each of the joints, to make small steps toward the goal. However, this just goes into an infinite loop and my numbers are way off what they should be. Where did I go wrong? Is a complete rewrite of this implementation necessary? Thank you for all your help.</p>
3 DOF Inverse Kinematics Implementation: What's wrong with my code?
<p>This is a very common problem, which calls for the Point Cloud Library (PCL): <a href="http://www.pointclouds.org" rel="nofollow">www.pointclouds.org</a>. The data structures in the library also make it easy to find the corresponding 3D points once you have segmented your RGB image. There are lots of tutorials and a very busy community.</p>
8400
2015-11-07T01:41:06.743
|localization|computer-vision|kinect|cameras|gazebo|
<p>I've recently been trying to use Gazebo to do some modelling for a couple tasks. I have a robot that's effectively able to locate a ball and get x,y coordinates in terms of pixels using a simple RGB camera from the Kinect. I also have a point cloud generated from the same Kinect, where I hope to find the depth perception of the ball using the X,Y coords sent from the circle recognition from my RGB camera. My plan earlier was to convert the X,Y coordinates from the RGB camera into meters using the DPI of the Kinect, but I can't find any info on it. It's much, much harder to do object recognition using a Point Cloud, so I'm hoping I can stick to using an RGB camera to do the recognition considering it's just a simple Hough Transform. Does anybody have any pointers for me?</p>
Using an RGB + Depth Camera to locate X,Y,Z coordinates of a ball
<p>The problem is not with the code, but with the point you want to send the arm to in the simulation. What is going on is that the goal point lies at a Cartesian distance larger than the manipulator can reach. This can occur because, in Python, numerical errors may accumulate while the simulation is running.</p>
8419
2015-11-10T01:38:31.510
|kinematics|inverse-kinematics|matlab|python|jacobian|
<p>I asked a question similar to this earlier, but I believe I have a new problem. I've been working on figuring out the inverse kinematics given an x,y,z coordinate. I've adopted the Jacobian method, taking the derivative of the forward kinematics equations with respect to their angles and input it into the Jacobian. I then take the inverse of it and multiply it by a step towards the goal distance. For more details, look at <a href="http://www.seas.upenn.edu/~meam520/notes02/IntroRobotKinematics5.pdf" rel="nofollow noreferrer">http://www.seas.upenn.edu/~meam520/notes02/IntroRobotKinematics5.pdf</a> page 21 onwards. </p> <p>For a better picture, below is something: <img src="https://i.stack.imgur.com/7mVwI.png" alt="3 DOF Arm"></p> <p>Below is the code for my MATLAB script, which runs flawlessly and gives a solution in under 2 seconds:</p> <pre><code>ycurrent = 0; %Not using this xcurrent = 0; %Starting position (x) zcurrent = 0; %Starting position (y) xGoal = .5; %Goal x/z values of (1, 1) zGoal = .5; theta1 = 0.1; %Angle of first DOF theta2 = 0.1; %Angle of second DOF theta3 = 0.1; %Angle of third DOF xchange = xcurrent - xGoal %Current distance from goal zchange = zcurrent - zGoal %Length of segment 1: 0.37, segment 2:0.374, segment 3:0.2295 while ((xchange &gt; .02 || xchange &lt; -.02) || (zchange &lt; -.02 || zchange &gt; .02)) in1 = 0.370*cos(theta1); %These equations are stated in the link provided in2 = 0.374*cos(theta1+theta2); in3 = 0.2295*cos(theta1+theta2+theta3); in4 = -0.370*sin(theta1); in5 = -0.374*sin(theta1+theta2); in6 = -0.2295*sin(theta1+theta2+theta3); jacob = [in1+in2+in3, in2+in3, in3; in4+in5+in6, in5+in6, in6; 1,1,1]; invJacob = inv(jacob); xcurrent = .3708 * sin(theta1) + .374 * sin(theta1+theta2) + .229 * sin(theta1+theta2+theta3) zcurrent = .3708 * cos(theta1) + .374 * cos(theta1+theta2) + .229 * cos(theta1+theta2+theta3) xIncrement = (xGoal - xcurrent)/100; zIncrement = (zGoal - zcurrent)/100; increMatrix = [xcurrent; zcurrent; 1]; %dx/dz/phi change = invJacob * increMatrix; %dtheta1/dtheta2/dtheta3 theta1 = theta1 + change(1) theta2 = theta2 + change(2) theta3 = theta3 + change(3) xcurrent = .3708 * sin(theta1) + .374 * sin(theta1+theta2) + .229 * sin(theta1+theta2+theta3) zcurrent = .3708 * cos(theta1) + .374 * cos(theta1+theta2) + .229 * cos(theta1+theta2+theta3) xchange = xcurrent - xGoal zchange = zcurrent - zGoal end </code></pre> <p>Below is my Python code, which goes into an infinite loop and gives no results. I've looked over the differences between it and the MATLAB code, and they look the exact same to me. I have no clue what is wrong. 
I would be forever grateful if somebody could take a look and point it out.</p> <pre><code>def sendArm(xGoal, yGoal, zGoal, right, lj): ycurrent = xcurrent = zcurrent = 0 theta1 = 0.1 theta2 = 0.1 theta3 = 0.1 xcurrent, zcurrent = forwardKinematics(theta1, theta2, theta3) xchange = xcurrent - xGoal zchange = zcurrent - zGoal while ((xchange &gt; 0.05 or xchange &lt; -0.05) or (zchange &lt; -0.05 or zchange &gt; 0.05)): in1 = 0.370*math.cos(theta1) #Equations in1-6 are in the pdf I linked to you (inv kinematics section) in2 = 0.374*math.cos(theta1+theta2) in3 = 0.2295*math.cos(theta1+theta2+theta3) in4 = -0.370*math.sin(theta1) in5 = -0.374*math.sin(theta1+theta2) in6 = -0.2295*math.sin(theta1+theta2+theta3) jacob = matrix([[in1+in2+in3,in2+in3,in3],[in4+in5+in6,in5+in6,in6], [1,1,1]]) #Jacobian invJacob = inv(jacob) #inverse of jacobian xcurrent, zcurrent = forwardKinematics(theta1, theta2, theta3) xIncrement = (xGoal - xcurrent)/100 #dx increment zIncrement = (zGoal - zcurrent)/100 #dz increment increMatrix = matrix([[xIncrement], [zIncrement], [1]]) change = invJacob*increMatrix #multiplying both matrixes theta1 = theta1 + change.item(0) theta2 = theta2 + change.item(1) theta3 = theta3 + change.item(2) xcurrent, zcurrent = forwardKinematics(theta1, theta2, theta3) xchange = xcurrent - xGoal zchange = zcurrent - zGoal print "Xchange: %f ZChange: %f" % (xchange, zchange) print "Goals %f %f %f" % (theta1, theta2, theta3) right.set_joint_positions(theta1) #First pitch joint right.set_joint_positions(theta2) #Second pitch right.set_joint_positions(theta3) #Third Pitch joint def forwardKinematics(theta1, theta2, theta3): xcurrent = .3708 * math.sin(theta1) + .374 * math.sin(theta1+theta2) + .229 * math.sin(theta1+theta2+theta3) zcurrent = .3708 * math.cos(theta1) + .374 * math.cos(theta1+theta2) + .229 * math.cos(theta1+theta2+theta3) return xcurrent, zcurrent </code></pre>
Running my 3 DOF Inverse Kinematics Code: Works in MATLAB, not in Python
<p>The other servo, being an electrical device, has some finite resistance. </p> <p>When the servos are <em>not</em> electrically connected, the output of the manually turned servo is an open circuit, meaning that there is infinite resistance and no current flows. Because no current is flowing, the <em>electrical</em> output power of manually turned servo is </p> <p>$P = IV$</p> <p>where $I = 0$, so </p> <p>$P = (0)(V) = 0$. </p> <p>When you connect another servo, there is now a finite resistance connected to the manually turned servo's leads, so now current may flow. <em>Now</em> the current is given by</p> <p>$V = IR$</p> <p>$I = V/R$</p> <p>so the <em>electrical</em> output power of the servo is</p> <p>$P = IV$</p> <p>$P = V^2/R$</p> <p>There are two "types" of voltage in a motor when you operate it normally; the applied voltage and a back-voltage, commonly called <a href="https://en.wikipedia.org/wiki/Counter-electromotive_force" rel="nofollow">"back EMF"</a>. This back EMF is a byproduct of motor motion.</p> <p>The requirements for motor action are:</p> <ol> <li>Current-carrying conductor</li> <li>Applied magnetic field </li> <li>Applied voltage differential on the conductor</li> </ol> <p>The output of motor action is relative motion between the current-carrying conductor and the magnetic field. </p> <p>The requirements for generator action are:</p> <ol> <li>Current-carrying conductor</li> <li>Applied magnetic field</li> <li>Relative motion between the conductor and field</li> </ol> <p>The output of generator action is a voltage differential in the conductor. </p> <p>So you can see that if you meet motor action then you get generator action and both happen simultaneously. </p> <p>When you back-drive a motor, you <em>can</em> get an output voltage at the motor terminals, <strong>if</strong> you have an applied magnetic field. In some instances, electric vehicles for instance, the motor is created such that the magnetic field is in the form of an electromagnet, called <a href="https://en.wikipedia.org/wiki/Field_coil" rel="nofollow">field windings</a>. In this version, it is possible to vary the strength of the magnetic field by varying the current in the field windings; this varies the output current of the motor during back-driven operations and produces a variable strength braking effect. </p> <p>In your case, with servo motors, the magnetic field typically comes in the form of permanent magnets in the motor housing. This means that the only way to vary output current (assuming a fixed output resistance) is to vary speed, as this varies the voltage you generate as dictated by the back-EMF constant;</p> <p>$ V_{\mbox{back EMF}} = K_{\omega} \omega \phi$</p> <p>where $K_{\omega}$ is the back-EMF constant, $\omega$ is the speed at which you are driving the motor, and $\phi$ is the strength of the magnetic field. </p> <p>Looking again at the electrical output power of the servo:</p> <p>$P = V^2/R$</p> <p>Output power of the servo goes up with voltage squared. Since voltage is proportional to the driving speed of the motor, you can see that output power goes up based on speed squared; if you double the speed at which you drive the servo you quadruple the output power of the servo. This is why the servo gets "very hard" to drive. </p> <p>There will always be some resistance when you try to drive a servo or motor, even with an open circuit, because they are real devices and thus have mass and inertia. 
You apply a torque to accelerate the motor:</p> <p>$ \tau = I\alpha$</p> <p>where $\tau$ is the torque you apply, $I$ is the effective motor inertia, and $\alpha$ is the rate at which you are accelerating the motor shaft. Output power varies with torque and speed:</p> <p>$P_\mbox{mechanical} = \tau \omega$</p> <p>so it requires more power to accelerate an already-moving object than it does to accelerate a stationary object. </p> <p>Finally, the total power you apply to the servo is given by:</p> <p>$P = P_{\mbox{mechanical}} + P_{\mbox{backEMF}}$</p> <p>$P = \tau \omega + V^2/R$</p> <p>$P = I\alpha \omega + (K_{\omega} \omega \phi)^2/R$</p> <p>$P = (I \alpha)\omega + \frac{(K_\omega \phi)^2}{R}\omega^2 $</p> <p>Where again, if the back-driven servo is not electrically connected to anything, then $R$ becomes infinitely large and the second term in the above equation becomes zero. </p>
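<p>To put a number on the speed-squared effect, here is a tiny sketch (the back-EMF constant, field term and load resistance are made-up illustrative values, not data for any particular servo):</p> <pre><code># Electrical power pushed into the connected servo while back-driving,
# P = (K_w * phi * w)^2 / R, with made-up constants for illustration.
K_PHI = 0.01    # back-EMF constant times field strength (V per rad/s), assumed
R = 5.0         # resistance of the connected servo circuit (ohm), assumed

def electrical_power(omega):
    return (K_PHI * omega) ** 2 / R

print(electrical_power(100.0))   # 0.2 W at 100 rad/s
print(electrical_power(200.0))   # 0.8 W: doubling the speed quadruples the power
</code></pre>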
8420
2015-11-10T04:42:07.340
|servomotor|
<p>I have two servo motors that I rigged up to use as a telescope remote focuser. The idea is to turn one servo by hand and use the power generated to turn the other, which is geared to a telescope focuser knob. I noticed that when the two servos are electrically connected, it is noticeably harder to turn a servo compared to turning it by itself. I tried changing the polarity of the connection hoping it would help, but it is still harder to turn the servo when they are connected. Does anyone know why this is?</p>
Why does it require more force to turn a servo if it is electronically connected to another servo?
<p>The orientation <em>angles</em> will be provided in the Earth frame (global or world frame is more appropriate). It is pretty much impossible to provide a frame orientation with angles <em>with respect to that same frame</em>.</p> <p>Then, given some acceleration $a_b$ in the body frame, you can find the acceleration in the world frame $a_w$ by pre-multiplication with the rotation matrix $R$.</p> <p>$a_w = R a_b$</p> <p>If you assume a typical roll-pitch-yaw sequence, then this will look like:</p> <p>$\begin{bmatrix} a_{w,x} \\ a_{w,y} \\ a_{w,z} \end{bmatrix} = \begin{bmatrix} c_\psi c_\theta &amp; -s_\psi c_\phi + c_\psi s_\theta s_\phi &amp; s_\psi s_\phi + c_\psi s_\theta c_\phi \\ s_\psi c_\theta &amp; c_\psi c_\phi + s_\theta s_\psi s_\phi &amp; -c_\psi s_\phi + s_\theta s_\psi c_\phi \\ -s_\theta &amp; c_\theta s_\phi &amp; c_\theta c_\phi \end{bmatrix} \begin{bmatrix} a_{b,x} \\ a_{b,y} \\ a_{b,z} \end{bmatrix}$</p> <p>Where $c_x = \cos x$ and $s_x = \sin x$.</p> <p>Of course, you can expand this to specify the x-y-z components of the world frame acceleration as individual equations as you mentioned:</p> <p>$a_{w,x} = f(a_b, \phi, \theta, \psi)$</p> <p>$a_{w,y} = g(a_b, \phi, \theta, \psi)$</p> <p>$a_{w,z} = h(a_b, \phi, \theta, \psi)$</p> <p>Note that I am using the convention where $\phi$ is roll (about x-axis), $\theta$ is pitch (about y-axis), and $\psi$ is yaw (about z-axis).</p> <p>When the rotation sequence roll, pitch, yaw is applied, each rotation occurs with respect to a world frame axis. We could get the same final result by applying a yaw, pitch, roll sequence but about each <em>new</em> axis. This latter method is more intuitive when visualizing roll-pitch-yaw (or rather Euler angle) rotations.</p> <p>Here is a figure to clarify the rotation sequence in case it helps. The world frame is shown in black, the body frame in red-green-blue, and intermediary frames during the "intuitive" rotation sequence are included in grey.</p> <p><a href="https://i.stack.imgur.com/XNqRc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XNqRc.png" alt="enter image description here"></a></p>
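<p>As a small numerical sketch of the relationship above (assuming the thrust acceleration $a$ acts along the body z-axis; the example angle values are arbitrary):</p> <pre><code># World-frame acceleration of the body-frame thrust a*[0, 0, 1]^T,
# using the roll-pitch-yaw rotation matrix written out above.
import numpy as np

def rotation_matrix(phi, theta, psi):
    cph, sph = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cps, sps = np.cos(psi), np.sin(psi)
    return np.array([
        [cps*cth, cps*sth*sph - sps*cph, sps*sph + cps*sth*cph],
        [sps*cth, sth*sps*sph + cps*cph, sth*sps*cph - cps*sph],
        [-sth,    cth*sph,               cth*cph]])

def world_acceleration(a, phi, theta, psi):
    a_body = np.array([0.0, 0.0, a])     # thrust acceleration along the body z-axis
    return rotation_matrix(phi, theta, psi) @ a_body

print(world_acceleration(12.0, 0.1, 0.05, 0.0))   # small roll/pitch tilts the thrust vector
</code></pre>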
8425
2015-11-10T17:32:24.587
|quadcopter|dynamics|
<p>For a quadcopter, what is the relationship between roll, pitch, and yaw in the <strong>earth frame</strong> and acceleration in the x, y, and z dimensions in the earth frame? To be more concrete, suppose roll ($\theta$) is a rotation about the earth frame x-axis, pitch ($\phi$) is a rotation about the earth frame y-axis, and yaw ($\psi$) is a rotation about the z-axis. Furthermore, suppose $a$ gives the acceleration produced by all four rotors, i.e. acceleration normal to the plane of the quadcopter. Then what are $f, g, h$ in</p> <p>$$a_x = f(a,\theta,\phi,\psi)$$ $$a_y = g(a,\theta,\phi,\psi)$$ $$a_z = h(a,\theta,\phi,\psi)$$</p> <p>where $a_x$, $a_y$, and $a_z$ are accelerations in the $x$, $y$, and $z$ dimensions.</p> <p>I've seen a number of papers/articles giving the relationship between x,y,z accelerations and attitude, but it's never clear to me whether these attitude angles are rotations in the earth frame or the body frame.</p>
Relationship between earth frame attitude and acceleration for a quadcopter
<p>Late, late answer, but this is also the only thing that comes up when you Google this. I ended up installing a 100 Ohm resistor on the speaker wire. It seems to be a good sweet spot for lowering the volume while still hearing it if you need to know its location. A 200 Ohm resistor made it almost a whisper, for comparison.</p> <p>I have a 690. I used this video <a href="https://youtu.be/s8K2gCzfTxE" rel="nofollow noreferrer">https://youtu.be/s8K2gCzfTxE</a> for taking it completely apart, but stopped at the speaker (don't remove the circuit board).<img src="https://i.stack.imgur.com/ENVdC.jpg" alt="690 resistor install" /></p> <p>The image shows the resistor before sealing it in shrink tubing.</p> <p>As shown, you can also just disconnect the connector if you want to silence it completely, without cutting any wires.</p>
8444
2015-11-12T17:45:07.417
|irobot-create|roomba|digital-audio|
<p>I am working with an iRobot Create 2 and I work with others around me. Whenever I turn the robot on, send it an OI reset command, etc., it makes its various beeps and noises. I would like to not have this happen since I find it a little annoying and I'm sure those who have to work around me would like to have things quiet so they can concentrate on their work. Is there a way to accomplish turning off the beeps (while still being able to easily re-enable them), or am I out of luck?</p>
Is there a way to turn the sound off of a Roomba?
<p>The CAD you pointed out is for the Turtlebot, and includes electromechanical CAD for that assembly. I only spot-checked a few files, but the only Create-relevant CAD in that package I found was an STL of the robot's outer dimensions. The Create 2 manual should have enough information for someone to produce a solid model of similar precision, but I do not believe iRobot has released a shell for the robot, yet. Maybe you should make one and put it on a site like Thingiverse? There are other designs by iRobot that can be found there for various parts of the robot, like the bin.</p> <p><sup> Please note that I am an iRobot employee, but the postings on this site are my own and don't necessarily represent iRobot's positions, strategies, or opinions. </sup></p>
8457
2015-11-13T22:42:58.407
|irobot-create|design|turtlebot2|
<p>I found CAD files for the Create on the <a href="http://download.ros.org/downloads/turtlebot/" rel="nofollow">ROS TurtleBot download page</a> (<a href="http://download.ros.org/downloads/turtlebot/TurtleBot-Hardware-2011-07-23.zip" rel="nofollow">.zip</a>), and shells on the <a href="https://bitbucket.org/osrf/gazebo_models/src/e024fe922c810f468e2a7ab277750354465478cd/create/?at=default" rel="nofollow">gazebo sim page</a>. </p> <p>Any ideas where the files for the Create 2 could be found?</p>
Create 2 CAD files
<p>Dunno if you still want suggestions for your build, but I would suggest using graphite, adhesive and maybe electroplating for your muscle.</p> <p>Basically, you could use a <a href="https://youtu.be/UoB2ZQ-jFwA?si=i9fAmNWfrCMEhWJj" rel="nofollow noreferrer">DIY conductive ink</a> with graphite powder and dip the nylon/polyethylene fibers in it a couple of times in order to make a thin conductive and resistive layer on the fibers.</p> <p>You could electroplate on top of that, but you could also just keep it like that.</p> <p>I suggest this because simply passing an electric current through the now-conductive nylon will convert part of the electricity into heat and finally into contraction. So a simple system with a potentiometer would work, but you would need to test the system a couple of times in order to find the voltage and current that produce the amount of heat you need.</p> <p>You could also use a <a href="https://youtu.be/LXhTFBGgskI" rel="nofollow noreferrer">thermocouple arduino system</a> in order to control the temperature automatically.</p> <p>I hope I was helpful, cheers.</p>
8461
2015-11-15T03:36:17.743
|arduino|electronics|actuator|
<p>I'm trying to make artificial muscles using nylon fishing lines (see <a href="http://io9.com/scientists-just-created-some-of-the-most-powerful-muscl-1526957560" rel="noreferrer">http://io9.com/scientists-just-created-some-of-the-most-powerful-muscl-1526957560</a> and <a href="http://writerofminds.blogspot.com.ar/2014/03/homemade-artificial-muscles-from.html" rel="noreferrer">http://writerofminds.blogspot.com.ar/2014/03/homemade-artificial-muscles-from.html</a>)</p> <p>So far, I've produced a nicely coiled piece of nylon fishing line, but I'm a little confused about how to heat it electrically.</p> <p>I've seen most people say they wrap the muscle in copper wire and the like, pass current through the wire, and the muscle actuates on the dissipated heat given the wire resistance.</p> <p>I have two questions regarding the heating:</p> <p>1) Isn't copper wire resistance extremely low, and thus generates very little heat? What metal should I use? </p> <p>2) What circuit should I build to heat the wire (and to control the heating)? Most examples just "attach a battery" to the wire, but afaik that is simply short-circuiting the battery, and heating the wire very inefficiently (it also may damage the battery and could even be dangerous). So what's a safe and efficient way to produce the heat necessary to make the nylon muscle react? (I've read 150 degrees centigrade, could that be correct?) For example with an arduino? Or a simple circuit on a breadboard?</p> <p>thanks a lot!</p>
Electronic circuit for heating nylon fishing line muscle
<p>Overall, you simply need to rotate each point in your cloud using the rotation matrix $R$. That rotation matrix will be the latest rotation matrix for your vehicle/sensor. You can compute the rotation matrix based on the current values for roll, pitch, and yaw according to the following:</p> <p>$R = \begin{bmatrix} c_\psi c_\theta &amp; c_\psi s_\theta s_\phi - s_\psi c_\phi &amp; c_\psi s_\theta c_\phi + s_\psi s_\phi \\ s_\psi c_\theta &amp; s_\psi s_\theta s_\phi + c_\psi c_\phi &amp; s_\psi s_\theta c_\phi - c_\psi s_\phi \\ -s_\theta &amp; c_\theta s_\phi &amp; c_\theta c_\phi \end{bmatrix}$</p> <p>Where $\psi$ is yaw, $\theta$ is pitch, and $\phi$ is roll.</p> <p>To get the latest roll, pitch, yaw angles you will need to integrate the current roll, pitch, yaw angle <em>rates</em>. Typically, you will be measuring the <em>body angular rates</em> using an IMU, but these rates are (a) not in the world frame and (b) not in roll, pitch, yaw form.</p> <p>First, you need to transform the body-frame angular rates into world-frame angular rates. To do that, simply pre-multiply the body rates by the current rotation matrix:</p> <p>$\omega_o = R \omega_b$</p> <p>Where $\omega_o = \begin{bmatrix} \omega_{o,x} \\ \omega_{o,y} \\ \omega_{o,z}\end{bmatrix}$ are the world-frame angular rates and $\omega_b = \begin{bmatrix} \omega_{b,x} \\ \omega_{b,y} \\ \omega_{b,z}\end{bmatrix}$ are the body-frame angular rates.</p> <p>Now that you have the angular rates about each axis in the world frame, you can use that equation you linked to in your question to convert to roll, pitch, yaw rates:</p> <p>$\begin{bmatrix} \omega_{o,x} \\ \omega_{o,y} \\ \omega_{o,z}\end{bmatrix} = \begin{bmatrix} c_\psi c_\theta &amp; -s_\psi &amp; 0 \\ s_\psi c_\theta &amp; c_\psi &amp; 0 \\ -s_\theta &amp; 0 &amp; 1 \end{bmatrix} \begin{bmatrix} \dot{\phi} \\ \dot{\theta} \\ \dot{\psi} \end{bmatrix} $</p> <p>$\omega_o = E \omega_{rpy}$</p> <p>Note that you are starting with $\omega_o$ and finding $\omega_{rpy}$, so that means you need the inverse of $E$, and you'll need to watch for singularities (when $\cos \theta = 0$).</p> <p>$\omega_{rpy} = E^{-1} \omega_o$</p> <p>Then integrate the rates to get new roll, pitch, yaw angles, where $\tau$ is your time step:</p> <p>$\phi_{k+1} = \phi_k + \dot{\phi}_k \tau$</p> <p>$\theta_{k+1} = \theta_k + \dot{\theta}_k \tau$</p> <p>$\psi_{k+1} = \psi_k + \dot{\psi}_k \tau$</p> <p>Those new roll, pitch, yaw values give you a new rotation matrix. Then just rotate the points in your point cloud accordingly. Keep in mind that you want to predict the new point cloud in the body frame as your question states. This rotation matrix takes vectors in the body frame and defines them in the world frame, so you need to do the opposite, which just means using the inverse of the rotation matrix.</p> <p>So, for a point $p_{o,i}$ in the world frame, its coordinates in the body frame $p_{b,i}$ are defined by:</p> <p>$p_{b,i} = R^{T} p_{o,i}$</p> <p>Where the transpose $R^T$ is the same as the inverse $R^{-1}$.</p> <p>Also note that my equations might look a bit confusing since I am only including the time step index $k$ when showing the forward integration of roll, pitch, yaw angles. 
You'll be using the rotation matrix from the <em>previous</em> time step to convert the angular rates in the <em>current</em> time step to the world frame, then updating the roll, pitch, yaw rates and angles, and then computing the rotation matrix for the <em>current</em> time step in order to transform your point cloud (unless this is part of a graph-based solution).</p>
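<p>A minimal numerical sketch of one such update step might look like this (the example gyro reading and point cloud values are arbitrary, and the singularity at $\cos \theta = 0$ is not handled):</p> <pre><code># One update step: body rates to world rates, integrate roll/pitch/yaw,
# rebuild the rotation matrix, then express world-frame points in the body frame.
import numpy as np

def R_from_rpy(phi, theta, psi):
    cph, sph = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cps, sps = np.cos(psi), np.sin(psi)
    return np.array([
        [cps*cth, cps*sth*sph - sps*cph, cps*sth*cph + sps*sph],
        [sps*cth, sps*sth*sph + cps*cph, sps*sth*cph - cps*sph],
        [-sth,    cth*sph,               cth*cph]])

def E(phi, theta, psi):
    # maps roll/pitch/yaw rates to world-frame angular rates
    return np.array([
        [np.cos(psi)*np.cos(theta), -np.sin(psi), 0.0],
        [np.sin(psi)*np.cos(theta),  np.cos(psi), 0.0],
        [-np.sin(theta),             0.0,         1.0]])

def step(rpy, omega_body, cloud_world, tau):
    R = R_from_rpy(*rpy)
    omega_world = R @ omega_body                  # body rates in the world frame
    rpy_rates = np.linalg.inv(E(*rpy)) @ omega_world
    rpy_new = rpy + tau * rpy_rates               # integrate roll, pitch, yaw
    R_new = R_from_rpy(*rpy_new)
    cloud_body = cloud_world @ R_new              # same as (R_new.T @ p) for each point p
    return rpy_new, cloud_body

rpy = np.array([0.0, 0.0, 0.0])
omega_body = np.array([0.0, 0.0, 0.5])            # example gyro reading (rad/s)
cloud = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.5]])
print(step(rpy, omega_body, cloud, tau=0.01))
</code></pre>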
8463
2015-11-15T14:33:34.007
|mobile-robot|
<p>I have a 3D point in space with its XYZ coordinates about some Frame A. I need to calculate the new XYZ coordinates, given the angular velocities of each axis at that instant of time about Frame A.</p> <p>I was referring to my notes, but I'm a little confused. This is what my notes say: <a href="https://i.stack.imgur.com/hoREn.jpg" rel="nofollow noreferrer">https://i.stack.imgur.com/hoREn.jpg</a></p> <p>As you can see, I can calculate the angular velocity vector w given my angular velocities. But I'm not sure how this translates to calculating my new XYZ position! How can I calculate the RPY values this equation seems to need from my XYZ, and how can I calculate my new position from there?</p>
Angular velocity to translational velocity
<p>1) A power supply with 1 kW is definitely not enough for lab tests with mobile robots, especially with quadrotors. If you do the naive calculation, your quadrotor is limited to around 4 kg. Take the main frame with the motors, place a decent 2D or even 3D scanner on it and a CPU which can handle some data, and the quadrotor will be more than 4 kg. I have seen quadrotors with peak power of around 10 kW. I recommend using a power supply which can provide a bit more than the power needed to hover the quadrotor (rule of thumb: 250 Watt/kg).<br> The mobile robot should use less than the quadrotor in most cases. <br><br> 2) Connecting them in series for more voltage... sure, no problem at all. <br>But connecting them in parallel for more current... Don't do this; the current flowing across the PSUs can fry them easily.</p>
8466
2015-11-16T00:37:19.083
|quadcopter|power|humanoid|
<p>I'm a researcher in a lab that's starting work on some larger humanoid/quadruped robots as well as a quadcopter. Currently, we have several power supplies that have a max rating of 30V/30A and our modified quadcopter easily maxes out the current limit with only half of its propellers running. It seems like most power supplies are meant for small electronics work and have fairly low current limits. I think that I want to look for power supplies that are able to provide between 24-48V and higher than 30A for an extended period of time. </p> <p>1.) Is this unreasonable or just expensive? 2.) Do most labs just connect PSUs in series to get higher voltages?</p> <p>Thanks for the input.</p>
Sizing high current power supplies for large robots
<p>UPDATE Based on the comment below that the answer is not correct:</p> <p>Kinematic decoupling divides the inverse kinematics problem into two parts. Coordinates of the wrist center point are calculated, and based on those all the joint angles behind the wrist center point have to be determined. If this is possible, kinematic decoupling can be applied in your case. In other words, if you can calculate a number of coordinates of the wrist center point that fully determines all joint angles behind your wrist center point, then ... there you have it ... kinematic decoupling works for you.</p> <p>The original answer:</p> <p><em>Yes, kinematic decoupling will work under 1 condition. You have to be able to determine the XYZ coordinates of your wrist center point (where the wrist axes intersect each other) from the desired end effector coordinates. Whether or not the IK is solvable for the given input (i.e. the pose can be reached) is not a question of the kinematic decoupling but a question of the inverse kinematics.</em></p> <p>I may have been too fast in assuming XYZ; other coordinates might be OK, depending on how many joints there are behind the wrist. XY could be enough. </p> <p>If you have 5 DOF and have the desired TCP coordinates (X, Y, A, B, C) you cannot calculate the XYZ of the wrist center point, but not calculating XYZ may be OK, if the coordinates you can calculate are enough to determine all joint angles before the wrist center point. So if you have X and Y of the wrist center point, and 2 joints before the wrist center point, and the X and Y coordinates fully determine the q1 and q2 angles, kinematic decoupling can be used to write the IK equations.</p>
8474
2015-11-16T22:39:26.837
|kinematics|
<p>Is kinematic decoupling also valid for a 5DOF revolute serial manipulator? The last three joints form a spherical joint. Most of the literature only talks about decoupling of 6DOF manipulators.</p> <p>Thanks in advance, Oswald</p>
Kinematic decoupling
<p>First of all, the link you posted states that your motor driver carrier board has built-in overcurrent protection... so... where is the problem? </p> <p>If it does not have it:</p> <p>Put a fuse in series with the motor that will protect the controller (you can find slow fuses that allow higher current for a short time). Then implement a current control loop with output saturation: basically you set the PWM applied to the motor based on the measured current (in mechanical terms it will be a torque controller with a maximum torque as a parameter). That way you have the fuse as a fallback safety, and operationally, if everything works fine with the current controller, you can be sure that the fuse will not be used. A rough sketch of such a current loop is given below.</p> <p>I assume you need velocity control or position control for the motor; that is no problem, just use cascaded control loops where the inner loop is a current (torque) controller, the middle loop is a velocity controller and the outer loop is a position controller, <a href="http://www.globalspec.com/reference/13556/179909/chapter-7-2-6-control-paradigms-some-applications" rel="nofollow">like this</a>. It is the way industrial drive amplifiers for motion control work.</p>
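<p>Here is a minimal simulated sketch of the current (torque) loop with output saturation, written in Python only to show the structure (the gains, motor constants and time step are made-up illustrative values; on a real MCU this logic would typically run in a fixed-rate interrupt with an ADC current measurement):</p> <pre><code># Minimal sketch of a current (torque) loop with output saturation,
# simulated against a crude first-order electrical motor model.
KP, KI = 2.0, 50.0           # PI gains (illustrative, not tuned)
I_LIMIT = 25.0               # maximum allowed motor current (A)
V_SUPPLY = 12.0              # supply voltage (V)
R, L = 0.33, 0.001           # motor resistance (ohm) and inductance (H), assumed
DT = 1e-4                    # control period (s)

def run(i_request, steps=2000):
    i_meas, integ = 0.0, 0.0
    for _ in range(steps):
        i_ref = max(-I_LIMIT, min(I_LIMIT, i_request))   # clamp the requested current
        err = i_ref - i_meas
        integ += err * DT
        v_out = KP * err + KI * integ
        v_out = max(-V_SUPPLY, min(V_SUPPLY, v_out))     # PWM/voltage saturation
        # crude motor electrical model (back-EMF ignored for brevity)
        i_meas += (v_out - R * i_meas) / L * DT
    return i_meas

print(run(i_request=100.0))   # settles near the 25 A limit despite a 100 A request
</code></pre>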
8475
2015-11-16T23:21:18.300
|motor|microcontroller|current|
<p>I have a <a href="http://store.amequipment.com/218-series-gearhead-motor-64mm-12v-right-hand-p-354.html" rel="nofollow">motor</a> with a stall current of up to 36A. I also have a <a href="http://pololu.com/product/705" rel="nofollow">motor controller</a> which has a peak current rating of 30A. Is there any way I could reduce the stall current or otherwise protect the motor controller?</p> <p>I realize the "right" solution is to just buy a better motor controller, but we're a bit low on funds right now.</p> <p>I thought of putting a resistor in series with the motor and came up with a value of 150mΞ©, which would reduce the maximum current draw to 25A (given the 12V/36A=330mΞ© maximum impedance of the motor). Is there any downside to doing this? Would I be harming the performance of the motor beyond reducing the stall torque?</p>
How can I reduce a motor's maximum current draw?
<p>There is a handy online Calculator I use to answer such questions. I tried with the data you provided (and had to lookup specs on your motors). I can offer the following insights, but I am only guessing about your model's all-up weight.</p> <p>If we assume it is roughly 800g with 6x3 props:</p> <ul> <li>max speed: ~66km/h </li> <li>additional payload: 934g (of course this will affect top speed)</li> <li>mixed flight time: 3.9min</li> </ul> <p>Everything seems within spec with this data (ie. nothing will burn up, overheat, or be over or under driven)</p> <p>It probably won't fly at all with 4 inch props (no matter what pitch).</p> <p>Your flight time is quite low, you might consider a larger battery.</p> <p>The calculator I used is <a href="http://www.ecalc.ch/xcoptercalc.php?ecalc&amp;lang=en" rel="nofollow">eCalc</a></p> <p>I strongly recommend spending some time with a calculator like that and trying experiments by adjusting only 1 dimension at a time, say motors or battery or props to see what effect it will have on the model before settling on your particular setup. I think that choosing the best motors is always the hardest choice as there are a lot of variants. The calculator has a lot of predefined motor specs by brand/model so you can try different motors quickly. Not all brands are covered though so its good to have access to your motors actual specs if necessary.</p>
8486
2015-11-17T21:42:27.633
|quadcopter|
<p>I'm currently in a (risky) project that involves me building the fastest quad I can afford.</p> <p>I'm trying to get something close to this <a href="https://www.youtube.com/watch?v=xQon7OD5CnY" rel="nofollow noreferrer">extremely fast warpquad</a></p> <p>After reading a lot about quadcopters, as I know I can buy all this and it should fit together and fly without any problem.</p> <pre><code>Motors: Multistar Elite 2306-2150KV ESC: Afro Race Spec Mini 20Amp Quanum neon 250 carbon racing frame(I love how it looks) 6Inch Props CC3D flight controller 4S 1400mah 40-80C Battery Any 6ch radio </code></pre> <p>My questions are:</p> <ol> <li><p>Am I wrong or missing something? as I had only read about it (thinking this is a common build for racer quad).</p> </li> <li><p>Will this overheat (bad consequences) if I let it drain the full battery at 100% throttle?</p> </li> <li><p>Will this fly at least 4 minutes under the previous conditions?</p> </li> <li><p>Should I get a higher C-rating battery?</p> </li> <li><p>As I can't find better motors of that size, is the only way to improve its speed by putting a 6S battery? and what would happen if I do it?</p> </li> <li><p>Should I put the 6inch props or 4inch? I know 4inch should get faster rpm changes but will it be noticeable at these sizes?</p> </li> <li><p>And in general, any tips to make it faster will be welcome.</p> </li> </ol> <p>Thanks.</p>
How to build a fast quadcopter?
<p>I usually do this to disconnect and I've had no issues with it to date:</p> <pre><code>128 // To put in passive mode 173 // To stop OI </code></pre> <p>If you only do 128 or 173 on their own, I could see it potentially having issues since it may not exit OI depending on what mode you were in. </p> <p>I do not request the stream to be paused, though I can't imagine that hurting.</p>
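<p>In pyserial terms (Python), that disconnect sequence is just two opcode bytes; a sketch could look like the following, where the port name is whatever your USB-serial cable shows up as and 115200 is the Create 2's default baud rate:</p> <pre><code>import serial
import time

ser = serial.Serial('/dev/tty.usbserial', baudrate=115200, timeout=1)

# ... normal operation with the robot goes here ...

ser.write(bytes([128]))   # opcode 128: put the robot in passive mode
time.sleep(0.1)
ser.write(bytes([173]))   # opcode 173: stop the OI
ser.close()
</code></pre>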
8487
2015-11-17T22:27:25.110
|mobile-robot|irobot-create|
<p>I am working with a Create 2 and I am executing a simple sequence like (in pseudocode):</p> <pre><code>create serial connection from Macbook to Create start the OI with by sending the 128 code send a pause-stream command (just to be safe) initiate the data streaming with ids: [29, 13] every 0.5 seconds for 15 seconds: poll the streamed sensor data and print it send a pause-stream command before shutdown send a 128 to put the robot in "passive mode" (I have also tried 173) close the serial connection </code></pre> <p>The outcome when I run the above program repeatedly is that it works the first time, I see sensor data (that seems to not change or be reactive) printing to the screen, but on future runs no serial can be read and the program crashes (because I am throwing an exception because I want to get this problem ironed out before getting to far along with other things). If I unplug and replug my USB cable from my Macbook, then the program will work for another run, and then fall back into the faulty behavior.</p> <p>I do not experience this issue with other things like driving the robot, I am able to run programs of similar simplicity repeatedly. If I mix driving and sensor streaming, the driving works from program run to program run, but the data streaming crashes the program on the subsequent runs.</p> <p>I have noticed that if I want to query a single sensor, I need to pause the stream to get the query response to come through on the serial port, and then resume it. That is why I am so inclined to pause/restart the stream.</p> <p>Am I doing something wrong, like pausing the stream too often? Are there other things I need to take care of when starting/stopping the stream? Any help would be appreciated!</p> <p><strong>EDIT:</strong> I should note that I am using Python and pyserial. I should also note, for future readers, that the iRobot pushes its streamed data to the laptop every 15ms where it sits in a buffer, and the data sits there until a call to serial.read() or to serial.flushInput(). This is why it seemed that my sensor values weren't updating when I read/polled every half second, because I was reading old values while the current ones were still buried at the back of the buffer. I worked around this issue by flushing the buffer and reading the next data to come in.</p> <p><strong>EDIT 2:</strong> Sometimes the above workaround fails, so if I detect the failure, I pause the stream, re-initialize the stream, and read the fresh data coming in. This seems to work pretty well. It also seems to have solved the issue that I originally asked the question about. I still don't know <em>exactly</em> why it works, so I will still accept @Jonathan 's answer since I think it is good practice and has not introduced new issues, but has at least added the benefit of the robot letting me know that it has started/exited by sounding tones.</p>
Is there a way to disconnect and reconnect from a Create 2 that was streaming sensor readings without having to unplug/replug my USB-serial cable?
<p>Applying the rotation sequence (x-y-z) that you specify, your first robot tool will have the following rotation matrix:</p> <p>$R_1 = \begin{bmatrix} 0 &amp; 0 &amp; -1 \\ 0 &amp; -1 &amp; 0 \\ -1 &amp; 0 &amp; 0 \end{bmatrix}$</p> <p>While your second robot tool will have the following rotation matrix:</p> <p>$R_2 = \begin{bmatrix} 0 &amp; 0 &amp; -1 \\ 0 &amp; 1 &amp; 0 \\ 1 &amp; 0 &amp; 0 \end{bmatrix}$</p> <p>Here is a figure to clarify, where the subscript 0 refers to the world coordinate frame, 1 refers to robot 1, and 2 refers to robot 2. Coordinate frames are shown with the x-axis in red, y-axis in green, and z-axis in blue -- grey lines show the world-frame.</p> <p><a href="https://i.stack.imgur.com/wnm24.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wnm24.png" alt="enter image description here"></a></p> <p>There are actually infinite solutions that would rotate you from frame 1 to frame 2, but the simplest would be to rotate about the world frame x-axis by 180 degrees.</p> <p>We can get that mathematically by looking at the rotation necessary to transform frame 2 to frame 1:</p> <p>$R_1 = R_{2\rightarrow 1} R_2$</p> <p>$R_{2 \rightarrow 1} = R_1 R^T_2$</p> <p>$R_{2 \rightarrow 1} = \begin{bmatrix} 1 &amp; 0 &amp; 0 \\ 0 &amp; -1 &amp; 0 \\ 0 &amp; 0 &amp; -1 \end{bmatrix}$</p> <p>Which can be treated as a rotation about the x-axis by angle $\phi$:</p> <p>$R_{2 \rightarrow 1} = \begin{bmatrix} 1 &amp; 0 &amp; 0 \\ 0 &amp; \cos \phi &amp; -\sin \phi \\ 0 &amp; \sin \phi &amp; \cos \phi \end{bmatrix}$</p> <p>Where $\cos \phi = -1$ and $\sin \phi = 0$. This leads to the solution:</p> <p>$\phi = n \pi$</p> <p>Where $n$ is any odd integer. The smallest rotation is when $n = 1$ or $n = -1$, which corresponds to 180 degree rotation about the x-axis in either direction to bring you from frame 2 to frame 1.</p> <p>On the other hand, if you consider the angle error to simply be the same as a Euclidean distance, then you get the following (evaluating the formula you give in your question):</p> <p>$\varepsilon = \pi \sqrt{\frac{3}{2}}$</p> <p>So why is that different? It's because <em>translation</em> takes place in $\mathcal{R}^3$ space while <em>rotation</em> takes place in $\mathcal{S}^3$ space. Euclidean distance is applicable in $\mathcal{R}^3$ space, but not $\mathcal{S}^3$ space. Rotations are non-linear and order matters, meaning most of the concepts applicable to translation are not applicable to rotation.</p> <p>In some cases you can use the Euclidean distance in angular coordinates to approximate the angular error between two points, but only when the angle errors are small -- essentially linearizing the trigonometric relations $\cos \theta \approx 1$ and $\sin \theta \approx \theta$.</p> <p>You cannot take two sets of Euler angles, subtract them, and use that difference as the set of Euler angles defining the rotation transformation between the two frames defined by the original angles. For example, given your defined sets of angles resulting in the difference $\begin{bmatrix} \frac{\pi}{2} &amp; \pi &amp; -\frac{\pi}{2} \end{bmatrix}$, those new Euler angles would yield the following rotation matrix:</p> <p>$R = \begin{bmatrix} 0 &amp; 0 &amp; -1 \\ 1 &amp; 0 &amp; 0\\ 0 &amp; -1 &amp; 0 \end{bmatrix}$</p> <p>But we already know that the rotation matrix is:</p> <p>$R = \begin{bmatrix} 1 &amp; 0 &amp; 0 \\ 0 &amp; -1 &amp; 0 \\ 0 &amp; 0 &amp; -1 \end{bmatrix}$</p> <p>Once again, you simply cannot apply linear mathematics to rotations. 
The rotation angles give you rotation matrices, and those matrices multiply. The new rotation matrix will itself have a set of corresponding angles, and those angles are not simply the difference between the original sets. The closest you can get to what you are talking about is by using <a href="https://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation" rel="nofollow noreferrer">quaternions</a>, where subsequent rotations can be "added" and "subtracted" but again using non-linear mathematics.</p>
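<p>The same result is easy to check numerically; this sketch just rebuilds the two rotation matrices above and extracts the rotation angle from the trace of the relative rotation, using the standard axis-angle magnitude $\theta = \arccos((\mathrm{tr}(R) - 1)/2)$:</p> <pre><code># Numerical check: relative rotation between the two tool frames and its angle.
import numpy as np

R1 = np.array([[0.0, 0.0, -1.0],
               [0.0, -1.0, 0.0],
               [-1.0, 0.0, 0.0]])
R2 = np.array([[0.0, 0.0, -1.0],
               [0.0, 1.0, 0.0],
               [1.0, 0.0, 0.0]])

R_2_to_1 = R1 @ R2.T
angle = np.arccos((np.trace(R_2_to_1) - 1.0) / 2.0)
print(R_2_to_1)            # diag(1, -1, -1), i.e. a rotation about the x-axis
print(np.degrees(angle))   # 180.0
</code></pre>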
8491
2015-11-18T08:27:48.370
|robotic-arm|kinematics|geometry|
<p>Given two robot arms whose TCP (Tool Center Point) coordinates in the world frame are:</p> <p>$X_1 = [1, 1, 1, \pi/2, \pi/2, -\pi/2]$</p> <p>and</p> <p>$X_2 = [2, 1, 1, 0, -\pi/2, 0]$</p> <p>The base of the robots is at:</p> <p>$Base_{Rob1} = [0, 0, 0, 0, 0, 0]$</p> <p>$Base_{Rob2} = [1, 0, 0, 0, 0, 0]$</p> <p>(The coordinates are expressed as successive transformations, X-translation, Y-translation, Z-translation, X-rotation, Y-rotation, Z-rotation. None of the joint axes are capable of continuous rotations.)</p> <p>How many degrees does the TCP of robot 2 have to rotate to have the same orientation as the TCP of robot one?</p> <p>Is the calculation </p> <p>$\sqrt{(\pi/2 - 0)^2 + (\pi/2 - (-\pi/2))^2 + (-\pi/2 - 0)^2}$</p> <p>wrong? If yes, please specify why.</p> <p>UPDATED: is the relative orientation of the two robots $[\pi/2, \pi/2, -\pi/2] - [0, -\pi/2, 0] = [\pi/2, \pi, -\pi/2]$? But the Euclidean distance cannot be applied to calculate angular distance?</p> <p>In other words:</p> <ol> <li><p>While programming the robot, with the tool frame selected for motion, to match the orientation of the other one I would have to issue a move_rel($0, 0, 0, \pi/2, \pi, -\pi/2$) command, but the executed motion would have a magnitude of $\pi$?</p></li> <li><p>While programming the robot, with the world frame selected for motion, to match the orientation of the other one I would have to issue a move_rel($0, 0, 0, \pi, 0, 0$) command, and the executed motion would have a magnitude of $\pi$?</p></li> </ol>
Relative orientation of two robots
<p>Image features are connected to the camera pose through two steps: (1) the relationship between the feature pixel coordinates and its homogeneous coordinates in the camera reference frame, and (2) the relationship between the camera reference frame and the world frame. Take a look at this figure, where the world frame is denoted with subscript 0 and the camera frame is denoted with subscript C. A feature is shown as a blue dot, with position $p$ in the world frame.</p> <p><a href="https://i.stack.imgur.com/1E7or.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1E7or.png" alt="enter image description here"></a></p> <p>The camera has a particular field of view (shown with a pyramid in the image), which relates the pixels coordinates to the relative position of the feature in the camera reference frame, $\tilde{p}$, through the camera projection matrix:</p> <p>$\begin{bmatrix} I \\ J \\ 1 \end{bmatrix} = \begin{bmatrix} k_x &amp; 0 &amp; C_x \\ 0 &amp; k_y &amp; C_y \\ 0 &amp; 0 &amp; 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix}$</p> <p>Where $I$ and $J$ are the pixel coordinates of the feature in the image, and the camera is defined with parameters $k_x$, $k_y$, $C_x$ and $C_y$ (based on the field of view and output image size). The <em>homogeneous coordinates</em>, $X$ and $Y$ are defined based on the relative position of the feature in the camera frame:</p> <p>$\tilde{p} = \begin{bmatrix} \tilde{x} \\ \tilde{y} \\ \tilde{z} \end{bmatrix}$</p> <p>$X = \frac{\tilde{x}}{\tilde{z}}$</p> <p>$Y = \frac{\tilde{y}}{\tilde{z}}$</p> <p>That relative position of the feature is then related to the actual position of the feature (in the world frame) as well as the camera pose, according to:</p> <p>$p = R_C \tilde{p} + p_C$</p> <p>Where $R_C$ is the rotation matrix defining the camera orientation and $p_C$ is the position of the camera.</p> <p>One thing I have not discussed is lens distortion, which must be accounted for before using this approach but can be treated separately. Note that the <a href="http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html" rel="nofollow noreferrer">OpenCV Camera Calibration and 3D Reconstruction</a> page provides a more detailed explanation.</p>
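<p>Putting the two steps together in code, a minimal sketch of projecting one world-frame feature into pixel coordinates could look like this (the intrinsics, camera pose and feature position are placeholder values rather than a calibrated camera, and lens distortion is ignored):</p> <pre><code># Project a world-frame point into pixel coordinates given camera pose and intrinsics.
import numpy as np

K = np.array([[600.0,   0.0, 320.0],    # kx,  0, Cx
              [  0.0, 600.0, 240.0],    #  0, ky, Cy
              [  0.0,   0.0,   1.0]])

R_C = np.eye(3)                         # camera orientation in the world frame
p_C = np.array([0.0, 0.0, 0.0])         # camera position in the world frame
p   = np.array([0.2, -0.1, 2.0])        # feature position in the world frame

p_tilde = R_C.T @ (p - p_C)             # feature in the camera frame
I, J, _ = K @ (p_tilde / p_tilde[2])    # homogeneous coordinates to pixels
print(I, J)                             # 380.0, 210.0
</code></pre>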
8499
2015-11-18T18:41:32.657
|mapping|visual-servoing|
<p>I have a robotic arm and a camera in eye-in-hand configuration. I know that there is a relationship between the body velocity $V$ of the camera and the velocities $\dot s$ in the image feature space that is $\dot s=L(z,s) V$ where $L$ is the interaction matrix. I was wondering if one can find a mapping (a so called diffeomorphism) that connects the image features' vector $s$ with the camera pose $X$. All I was able to find is that it is possible to do that in a structured environment which I don't fully understand what it is.</p>
Mapping between camera pose and image features in visual servoing
<p>Let me give you a mathematician's perspective on the difference between the two kinds problems.</p> <p><strong>Forward kinematics</strong> asks the question: <em>given a certain input (i.e. control command), what will be the output (i.e. robot configuration, pose, etc.)</em>. <strong>Inverse kinematics</strong> asks the reverse question: <em>given a certain desired output, what is the necessary input</em>. </p> <p>Inverse problems are usually much harder to solve (e.g. ill-conditioned or otherwise ill-posed) than forward problems<sup>1</sup>. This is true not only of kinematics, but also dynamics and a wide range of computational problems as well.</p> <p>Consider the following analogy. Let's assume we are given a basic algebraic function $f(x)=x^3-5x+1$. Forward kinematics is like asking to find $f(2)$. It's easy to do because the equation is already given and all we need to do is substitute the input to get the output. Kinematic equations are also this way: designed to convert inputs into outputs easily. Inverse kinematics is like asking to solve the equation $f(x)=2$. It's not as easy to find the answer. Going backwards to find the input that gives a specific output is non-trivial. An answer may not even exist. If it does exist, it may not be unique. You may require additional information (e.g. regularization) to obtain a unique solution with desirable characteristics. </p> <p>Often, it's <em>impossible to solve inverse kinematics analytically</em>. This is especially for true for <em>serial arm linkages</em>. This means that you will need a numerical method to solve a system of non-linear equations of the form $F(X)=0$. Borrowing from calculus, we can use Newton's method. In higher dimensional spaces, Newton's method requires formulating the Jacobian (or at least the approximate Jacobian) of $F(X)$.</p> <hr> <p><strong>Notes</strong></p> <p><sub> 1. For parallel arms (such as a Delta arm or Stewart platform), the difficulty is reversed (i.e. inverse kinematics is straightforward, whereas forward kinematics is more tricky). In general, one of the two problems (either forward or inverse) will always be harder to solve than the other. </sub></p>
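<p>To make the analogy concrete, here is the one-dimensional version of that numerical approach, plain Newton iteration on $f(x) = 2$; in the robot case $x$ becomes the joint-angle vector and $f'$ becomes the Jacobian:</p> <pre><code># Solve f(x) = 2 for f(x) = x^3 - 5x + 1 with Newton's method.
def f(x):
    return x**3 - 5*x + 1

def df(x):
    return 3*x**2 - 5

x = 3.0                        # initial guess
for _ in range(20):
    x = x - (f(x) - 2) / df(x)

print(x, f(x))                 # x converges to about 2.33, where f(x) = 2
# A different initial guess can converge to a different root: the
# non-uniqueness mentioned above.
</code></pre>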
8500
2015-11-18T18:59:54.263
|inverse-kinematics|forward-kinematics|jacobian|
<p>I am not quite sure I understand the difference between these two concepts, or why there is a difference between them at all.</p> <p>Yesterday I was trying to compute the Jacobian needed for inverse kinematics, but the usual input I provided to my transformation in forward kinematics (the points P, i.e. xyz) could not be applied; the transformation matrix was given a state vector Q, from which the tool position could be retrieved...</p> <p>I am not sure I understand the concept very well, and I can't seem to google the topics, as the explanations usually make the concepts too simple (angle calculations and so on).</p> <p>I know it might be much to ask, but what form of input is needed to compute the Jacobian, and why is there a difference between forward and inverse kinematics?</p>
Forward kinematic and inverse kinematic... When to use what?
<p>The linkages suggested by jwpat7 are great, and I would probably prefer to use something like that. But, if you want something <em>really</em> simple, then you might be able to get away with a four-bar linkage <a href="https://www.youtube.com/watch?v=RsFnbChKVXU" rel="nofollow noreferrer">like this type of thing</a>, although it does depend on continuous rotation, so it may not be appropriate for your application.</p> <p>With a servo-motor that rotates back-and-forth, you may be able to build a mechanism similar to what I show in the figure below. The gears (shown in blue) are all free to rotate, but input from the motor will actually rotate the entire leg and foot together until hitting a mechanical stop (some limiting tab that prevents the leg from rotating further). At this point, the input motor will then transfer power by rotating the gears so that only the foot moves.</p> <p><a href="https://i.stack.imgur.com/uVUi0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uVUi0.png" alt="enter image description here"></a></p> <p>This will work when there is less torque resistance from rotating the entire leg and foot together compared to rotating the foot with respect to the leg -- assuming the inertia of the foot is small, this will always work when the leg is simply moving in free space. However, if you are touching the ground (which I assume a legged robot will be...), then this might get tricky. It would then become a matter of adding more resistance to the gear transmission so that the leg still rotates until hitting the stop.</p> <p>Another option would be to use some kind of cam, so that when the main input rotation reaches a given point, another arm is lifted by the cam surface to rotate only the foot. If you aren't sure what I mean then check out <a href="https://www.youtube.com/watch?v=KRnTXvjTeIA" rel="nofollow noreferrer">this video</a> for inspiration.</p>
8501
2015-11-18T19:03:28.317
|mechanism|motion-planning|servomotor|legged|gearing|
<p><strong>Note before I start:</strong> I have not actually put anything together yet, I'm still just planning, so any changes that require a shape change or anything like that are accepted.</p> <p>I'm working on making a walking robot with my Arduino and 3D printing all the pieces I need. It will have four legs, but since it needs to be mobile, I didn't want the power supply to be huge. I've decided it would be best if I can get each leg to only require 1 servo, at 5V each. I know how to get the leg to move back and forth, but I want to be able to lift it in between; before it brings the leg forward, it needs to lift up the foot. The only thing I can think of is the rotation maybe locking some sort of gear.</p> <p>When a motor begins rotating clockwise, how can I have it power a short motion to move an object toward itself, and when it begins moving counterclockwise to power the same object a short distance away from itself?</p> <p>The servos I am using have 180° of rotation, so they don't go all the way around in a loop.</p> <p>Also: I don't know if it will be important or not, but because of the peculiar construction of the foot, it would be best if it was lifted straight up, rather than up at an angle, but it isn't 100% necessary.</p> <p>Are there any robots that already do this? If so, I'm unaware of them. Thanks for your time.</p>
Lifting robotic leg with only one servo
<p>In general, Euler angles (or specifically roll-pitch-yaw angles) can be extracted from <em>any</em> rotation matrix, regardless of how many rotations were used to generate it. For a typical x-y-z rotation sequence, you end up with this rotation matrix where <strong><span class="math-container">$\phi$</span> is roll, <span class="math-container">$\theta$</span> is pitch, and <span class="math-container">$\psi$</span> is yaw</strong>:</p> <p><span class="math-container">$R = \begin{bmatrix} c_\psi c_\theta &amp; c_\psi s_\theta s_\phi - s_\psi c_\phi &amp; c_\psi s_\theta c_\phi + s_\psi s_\phi \\ s_\psi c_\theta &amp; s_\psi s_\theta s_\phi + c_\psi c_\phi &amp; s_\psi s_\theta c_\phi - c_\psi s_\phi \\ -s_\theta &amp; c_\theta s_\phi &amp; c_\theta c_\phi \end{bmatrix}$</span></p> <p>Note that <span class="math-container">$c_x = \cos x$</span> and <span class="math-container">$s_x = \sin x$</span>.</p> <p>To get roll-pitch-yaw angles out of any given numeric rotation matrix, you simply need to relate the matrix elements to extract the angles based on trigonometry. So re-writing the rotation matrix as:</p> <p><span class="math-container">$R = \begin{bmatrix} R_{11} &amp; R_{12} &amp; R_{13} \\ R_{21} &amp; R_{22} &amp; R_{23} \\ R_{31} &amp; R_{32} &amp; R_{33} \end{bmatrix}$</span></p> <p>And then comparing this to the above definition, we can come up with these expressions to extract the angles:</p> <p><span class="math-container">$\tan \phi = \frac{R_{32}}{R_{33}}$</span></p> <p><span class="math-container">$\tan \theta = \frac{-R_{31}}{\sqrt{R_{32}^2 + R_{33}^2}}$</span></p> <p><span class="math-container">$\tan \psi = \frac{R_{21}}{R_{11}}$</span></p> <p>When you actually solve for these angles you'll want to <strong>use an <code>atan2(y,x)</code> type of function so that quadrants are appropriately handled</strong> (also keeping the signs of the numerators and denominators as shown above).</p> <p>Note that this analysis <strong>breaks down when <span class="math-container">$\cos \theta = 0$</span></strong>, where there exists a singularity in the rotation -- there will be multiple solutions.</p> <p>Also note that your question is referring to the rotation transformation <em>between</em> two joints (<span class="math-container">$^{n-1}R_n$</span>), which is indeed composed of an x-axis and z-axis rotation, but the actual rotation of that joint with respect to the base will be different (<span class="math-container">$^0R_n$</span>) and composed of all the rotations for joints 1 to <span class="math-container">$n$</span>. I point this out because <span class="math-container">$^{n-1}R_n$</span> <em>can</em> be decomposed back into the relevant x-axis (<span class="math-container">$\alpha$</span>) rotation and z-axis (<span class="math-container">$\theta$</span>) rotation, whereas <span class="math-container">$^0R_n$</span> is better suited to roll-pitch-yaw -- especially for the end-effector.</p>
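<p>If it helps, here is a short sketch of the extraction (Python/NumPy, assuming the same x-y-z / roll-pitch-yaw convention as above), including a round-trip check with arbitrary angles:</p> <pre><code>import numpy as np

def rpy_from_rotation(R):
    """Roll, pitch, yaw from a 3x3 rotation matrix (x-y-z sequence).
    Assumes cos(pitch) != 0, i.e. we are away from the singularity."""
    roll  = np.arctan2(R[2, 1], R[2, 2])
    pitch = np.arctan2(-R[2, 0], np.sqrt(R[2, 1]**2 + R[2, 2]**2))
    yaw   = np.arctan2(R[1, 0], R[0, 0])
    return roll, pitch, yaw

# Round-trip check: build R = Rz(yaw) * Ry(pitch) * Rx(roll) and recover the angles
phi, theta, psi = 0.1, -0.4, 0.7
c, s = np.cos, np.sin
Rx = np.array([[1, 0, 0], [0, c(phi), -s(phi)], [0, s(phi), c(phi)]])
Ry = np.array([[c(theta), 0, s(theta)], [0, 1, 0], [-s(theta), 0, c(theta)]])
Rz = np.array([[c(psi), -s(psi), 0], [s(psi), c(psi), 0], [0, 0, 1]])
print(rpy_from_rotation(Rz @ Ry @ Rx))  # -> approximately (0.1, -0.4, 0.7)
</code></pre>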
8516
2015-11-20T03:37:29.850
|kinematics|forward-kinematics|dh-parameters|
<p>I've calculated a DH parameter matrix, and I know the top-left 3x3 submatrix is the rotation matrix. The DH parameter matrix I'm using is as below, from <a href="https://en.wikipedia.org/wiki/Denavit%E2%80" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Denavit%E2%80</a><a href="https://i.stack.imgur.com/ZJ2LC.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZJ2LC.jpg" alt="enter image description here"></a></p> <p>Above is what I'm using. From what I understand I'm just rotating around the Z-axis and then the X-axis, but most explanations for extracting Euler angles from rotation matrices only deal with all 3 rotations. Does anyone know the equations? I'd be very thankful for any help.</p>
Getting pitch, yaw and roll from Rotation Matrix in DH Parameter
<p>You have far too much vibration reaching your sensors. Mount the board you're using on vibration damping mounts; this should solve most of your problem. An easy way to do this is to attach each corner of your board to the frame using a small but thick piece of double-stick squishy foam[1]. The board will then act as a spring-mass-damper attached to your frame. By making the pieces of foam narrower (softer) you will lower the cutoff frequency of the system and filter out more of the vibration. Here is a link to an ArduCopter page talking about vibration damping to solve this problem[2].</p> <p>While a complementary filter may be suitable for your application, the correct tool for estimating attitude is a Kalman filter capable of handling nonlinear system dynamics and measurement dynamics. The Pixhawk uses an extended Kalman filter to estimate attitude[3]. It may be overkill for your system, depending on how far you want your quad to deviate from level hover.</p> <p>[1]<a href="http://hobbyking.com/hobbyking/store/__26457__anti_vibration_foam_orange_latex_190mm_x_140mm_x_6mm.html" rel="nofollow">http://hobbyking.com/hobbyking/store/__26457__anti_vibration_foam_orange_latex_190mm_x_140mm_x_6mm.html</a></p> <p>[2]<a href="http://copter.ardupilot.com/wiki/common-vibration-damping/" rel="nofollow">http://copter.ardupilot.com/wiki/common-vibration-damping/</a></p> <p>[3]<a href="https://pixhawk.org/firmware/apps/attitude_estimator_ekf" rel="nofollow">https://pixhawk.org/firmware/apps/attitude_estimator_ekf</a></p>
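<p>For reference, the complementary filter mentioned in the question is typically just a couple of lines; the sketch below (Python-style, with a made-up gain, loop rate and IMU variable names, not from any particular driver) shows the usual form for roll. Keep in mind that no amount of filtering will fully recover an attitude estimate from an accelerometer that is being shaken hard, which is why the mechanical damping above matters:</p> <pre><code>import math

ALPHA = 0.98        # filter weight, typically tuned between about 0.95 and 0.99
DT    = 1.0 / 400   # 400 Hz control loop, as in the question

roll = 0.0

def update_roll(gyro_x_rad_s, acc_y, acc_z):
    """One complementary-filter step for roll (radians).
    gyro_x_rad_s, acc_y and acc_z are whatever your IMU driver returns."""
    global roll
    roll_acc = math.atan2(acc_y, acc_z)  # absolute but noisy (and vibration-sensitive)
    roll = ALPHA * (roll + gyro_x_rad_s * DT) + (1.0 - ALPHA) * roll_acc
    return roll
</code></pre>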
8525
2015-11-21T20:51:02.850
|control|imu|accelerometer|gyroscope|
<p>I'm building a quadcopter from scratch; the software runs on an STM32F4 microcontroller and the main control loop runs at 400 Hz.</p> <p>I thought everything was almost finished, but when I mounted everything and started calibrating the PIDs I ran into a problem: it was impossible to adjust the PID parameters properly. So I started testing at lower power (not enough to fly) and managed to tune the roll PID fairly quickly, but when I increased the power the control problems came back.</p> <p>After that I did more measurements. <a href="https://i.stack.imgur.com/2Bix8.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2Bix8.jpg" alt="Roll angle with engines off"></a> <a href="https://i.stack.imgur.com/y2878.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/y2878.jpg" alt="Roll angle with engines on but without blades from complementary filter"></a></p> <p>I didn't run a test with the blades mounted, but that is probably even worse, which is why I cannot calibrate it.</p> <ul> <li>If the problem is due to vibration, how can I fix it?</li> <li>If something else is causing this symptom, what is it?</li> <li>Can I solve this through better control and data-fusion algorithms? Right now I use a complementary filter to fuse the accelerometer and gyro data for roll and pitch.</li> </ul>
Quadrocopter problem with stability
<p>You can control DC motors from an Android device without a separate micro controller. The two ways I've seen this done are through a USB to parallel printer cable or through a headphone jack. Read more about these approaches at <a href="http://terakuhn.weebly.com/phone_usb_controller.html" rel="nofollow noreferrer">http://terakuhn.weebly.com/phone_usb_controller.html</a> and <a href="https://makezine.com/projects/make-34/smartphone-servo/" rel="nofollow noreferrer">https://makezine.com/projects/make-34/smartphone-servo/</a> respectively. However, I'm not familiar with the L298N motor controller so I'm not sure if you could use that with either of these approaches.</p>
8526
2015-11-22T09:44:55.160
|motor|usb|
<p>I am new to robotics. I will be controlling DC motors from an Android device through USB.</p> <p>For this I have selected an L298N motor controller (after watching YouTube videos) and got some DC motors.</p> <p>I have no idea how to connect this to an Android device via a USB cable.</p> <p>Help appreciated.</p> <p>Ref: <a href="https://www.bananarobotics.com/shop/L298N-Dual-H-Bridge-Motor-Driver" rel="nofollow">https://www.bananarobotics.com/shop/L298N-Dual-H-Bridge-Motor-Driver</a> <a href="https://youtu.be/XRehsF_9YQ8" rel="nofollow">https://youtu.be/XRehsF_9YQ8</a></p> <p>PS: All I know is Android programming.</p>
Which USB interface for Android device I can use for motor driver
<p>For the most part I was able to get this done. The only difference I chose was to use the Point Cloud library's triangular mesh generator instead of Meshlab's mesh generator mainly because Meshlab's generator is interactive. Below is the image of the Kinect (on top of the Baxter) taking a picture of the shelf, and a picture of the STL put together from the PointCloud the Kinect took.<a href="https://i.stack.imgur.com/2qp64.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2qp64.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/CbpMx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CbpMx.png" alt="enter image description here"></a></p> <p>The code I used is as below:</p> <pre><code>#include &lt;ros/ros.h&gt;
#include &lt;sensor_msgs/PointCloud2.h&gt;
#include &lt;pcl_conversions/pcl_conversions.h&gt;
#include &lt;pcl/point_cloud.h&gt;
#include &lt;pcl/point_types.h&gt;
#include &lt;pcl/io/pcd_io.h&gt;
#include &lt;pcl/kdtree/kdtree_flann.h&gt;
#include &lt;pcl/features/normal_3d.h&gt;
#include &lt;pcl/surface/gp3.h&gt;
#include &lt;pcl/io/vtk_io.h&gt;
#include &lt;pcl/io/vtk_lib_io.h&gt;
#include &lt;pcl/io/io.h&gt;

int n = 0;

void cloud_cb ()
{
  // Load input file into a PointCloud&lt;T&gt; with an appropriate type
  pcl::PointCloud&lt;pcl::PointXYZ&gt;::Ptr cloud (new pcl::PointCloud&lt;pcl::PointXYZ&gt;);
  pcl::PCLPointCloud2 cloud_blob;
  pcl::io::loadPCDFile ("mesh.pcd", cloud_blob);
  pcl::fromPCLPointCloud2 (cloud_blob, *cloud);
  //* the data should be available in cloud

  // Normal estimation*
  pcl::NormalEstimation&lt;pcl::PointXYZ, pcl::Normal&gt; n;
  pcl::PointCloud&lt;pcl::Normal&gt;::Ptr normals (new pcl::PointCloud&lt;pcl::Normal&gt;);
  pcl::search::KdTree&lt;pcl::PointXYZ&gt;::Ptr tree (new pcl::search::KdTree&lt;pcl::PointXYZ&gt;);
  tree-&gt;setInputCloud (cloud);
  n.setInputCloud (cloud);
  n.setSearchMethod (tree);
  n.setKSearch (20);
  n.compute (*normals);
  //* normals should not contain the point normals + surface curvatures

  // Concatenate the XYZ and normal fields*
  pcl::PointCloud&lt;pcl::PointNormal&gt;::Ptr cloud_with_normals (new pcl::PointCloud&lt;pcl::PointNormal&gt;);
  pcl::concatenateFields (*cloud, *normals, *cloud_with_normals);
  //* cloud_with_normals = cloud + normals

  // Create search tree*
  pcl::search::KdTree&lt;pcl::PointNormal&gt;::Ptr tree2 (new pcl::search::KdTree&lt;pcl::PointNormal&gt;);
  tree2-&gt;setInputCloud (cloud_with_normals);

  // Initialize objects
  pcl::GreedyProjectionTriangulation&lt;pcl::PointNormal&gt; gp3;
  pcl::PolygonMesh triangles;

  // Set the maximum distance between connected points (maximum edge length)
  gp3.setSearchRadius (0.025);

  // Set typical values for the parameters
  gp3.setMu (2.5);
  gp3.setMaximumNearestNeighbors (100);
  gp3.setMaximumSurfaceAngle(M_PI/4); // 45 degrees
  gp3.setMinimumAngle(M_PI/18); // 10 degrees
  gp3.setMaximumAngle(2*M_PI/3); // 120 degrees
  gp3.setNormalConsistency(false);

  // Get result
  gp3.setInputCloud (cloud_with_normals);
  gp3.setSearchMethod (tree2);
  gp3.reconstruct (triangles);

  // Additional vertex information
  std::vector&lt;int&gt; parts = gp3.getPartIDs();
  std::vector&lt;int&gt; states = gp3.getPointStates();

  pcl::io::savePolygonFileSTL("mesh.stl", triangles);
}

void saveFile(const sensor_msgs::PointCloud2::ConstPtr&amp; input)
{
  if(n &lt;= 1)
  {
    pcl::PCLPointCloud2::Ptr pcl_input_cloud(new pcl::PCLPointCloud2);
    pcl_conversions::toPCL(*input, *pcl_input_cloud);
    pcl::io::savePCDFile("mesh.pcd", *pcl_input_cloud);
    n++;
    cloud_cb();
  }
  return;
}

int main (int argc, char** argv)
{
  // Initialize ROS
  ros::init (argc, argv, "my_pcl_tutorial");
  ros::NodeHandle nh;
  ros::Rate loop_late(1);

  // Create a ROS subscriber for the input point cloud
  ros::Subscriber sub = nh.subscribe("/cameras/kinect/depth/points", 1, saveFile);

  while(n &lt;= 1)
  {
    ros::spinOnce();
    loop_late.sleep();
  }

  return 0;
}
</code></pre> <p>Most of the code is copied exactly from <a href="http://pointclouds.org/documentation/tutorials/greedy_projection.php" rel="nofollow noreferrer">http://pointclouds.org/documentation/tutorials/greedy_projection.php</a>. I'll have to find a way to get a better 3D reconstruction of the shelf considering there are gaps in the levels.</p>
8546
2015-11-26T01:23:16.723
|robotic-arm|localization|slam|kinect|gazebo|
<p>I am currently applying path planning to my robotic arm (in Gazebo) and have chosen to use an RRT. In order to detect points of collision, I was thinking of getting a Point Cloud from a Kinect subscriber and feeding it to something like an Octomap to have a collision map I could import into Gazebo. However, there is no Gazebo plugin to import Octomap files and I do not have enough experience to write my own. The next idea would be to instead feed this point cloud to a mesh generator (like Meshlab) and turn that into a URDF, but before starting I'd rather get the input of somebody far more experienced. Is this the right way to go? Keep in mind the environment is static, and the only things moving are the arms. Thank you. Below is just a picture of an octomap.<a href="https://i.stack.imgur.com/B8OEZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/B8OEZ.png" alt="enter image description here"></a></p>
Modelling Point Clouds for Collision Detection in Gazebo
<p>The first manipulator shown is a typical articulated arm without the final roll about the tool axis. Just use the DH parameters for a Puma but set joint 6 = 0. </p> <p>I don't know of any shortcuts for the second manipulator other than working out the DH parameters directly. However, DH seems like overkill for this design. DH allows you to handle things like joint twists and other offsets, which your manipulator does not have. Rather than that, would you consider using simple rotation matrices and a vector loop to describe the end effector position and orientation with respect to the global coordinate system?</p> <p><a href="https://i.stack.imgur.com/WNzfF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WNzfF.png" alt="5DOF with coordinate systems"></a></p> <p>In this picture I show your link lengths $L_1$, $L_2$, and $L_3$, along with coordinate systems $0$ through $5$. I left the $\hat Y$ axes out for clarity, but they follow the right-hand rule.</p> <p>Your positional vector loop is $$\vec P_t = L_1 \hat Z_1 + L_2 \hat X_2 + L_3 \hat Z_5$$ where $\vec P_t$ represents the position of the center of your tool tip.</p> <p>Now find the rotation matrices to define the unit vectors $\hat Z_1$, $\hat X_2$, and $\hat Z_5$.</p> <p>I will use $_x^{x+1}R$ for the rotation matrix between coordinate systems $x$ and $x+1$, and $\theta_i$ to represent the $i$th joint angle.</p> <p>$$_0^1R = \begin{bmatrix} \cos \theta_1&amp;-\sin \theta_1&amp;0\\ \sin \theta_1&amp;\cos \theta_1&amp;0\\ 0&amp;0&amp;1 \end{bmatrix} $$</p> <p>$$_1^2R = \begin{bmatrix} \cos \theta_2&amp;-\sin \theta_2&amp;0\\ 0&amp;0&amp;1\\ -\sin \theta_2&amp;-\cos \theta_2&amp;0 \end{bmatrix} $$ (notice $\hat Y_2$ should point "down")</p> <p>$$_2^3R = \begin{bmatrix} 0&amp;0&amp;1\\ -\sin \theta_3&amp;-\cos \theta_3&amp;0\\ \cos \theta_3&amp;-\sin \theta_3&amp;0 \end{bmatrix} $$</p> <p>$$_3^4R = \begin{bmatrix} \sin \theta_4&amp;-\cos \theta_4&amp;0\\ 0&amp;0&amp;1\\ -\cos \theta_4&amp;-\sin \theta_4&amp;0 \end{bmatrix} $$</p> <p>$$_4^5R = \begin{bmatrix} 0&amp;0&amp;1\\ \cos \theta_5&amp;-\sin \theta_5&amp;0\\ \sin \theta_5&amp;\cos \theta_5&amp;0 \end{bmatrix} $$</p> <p>Now find the unit vectors needed for your vector loop (all with respect to the global coordinate system $0$). 
$\hat Z_1$ is the third column from $_0^1R$:</p> <p>$$\hat Z_1 = \left( \begin{array} {c}0\\ 0\\ 1 \end{array} \right) $$</p> <p>$\hat X_2$ is the first column from $_0^2R$, which you find by multiplying $_0^1R _1^2R$:</p> <p>$$_0^2R = \begin{bmatrix} \cos \theta_1 \cos \theta_2&amp;-\cos \theta_1 \sin \theta_2&amp; -\sin \theta_1\\ \sin \theta_1 \cos \theta_2&amp;-\sin \theta_1 \sin \theta_2&amp; \cos \theta_1\\ -\sin \theta_2&amp;-\cos \theta_2&amp;0 \end{bmatrix} $$</p> <p>$$\hat X_2 = \left( \begin{array} {c}\cos \theta_1 \cos \theta_2\\ \sin \theta_1 \cos \theta_2\\ -\sin \theta_2 \end{array} \right) $$</p> <p>In a similar way you can find $\hat Z_5$:</p> <p>$$\hat Z_5 = \left( \begin{array} {c}\sin \theta_4 (\cos \theta_1 \sin \theta_2 \sin \theta_3 - \sin \theta_1 \cos \theta_3) - \cos \theta_1 \cos \theta_2 \cos \theta_4\\ \sin \theta_4 (\sin \theta_1 \sin \theta_2 \sin \theta_3 + \cos \theta_1 \cos \theta_3) - \sin \theta_1 \cos \theta_2 \cos \theta_4\\ \cos \theta_2 \sin \theta_3 \sin \theta_4 + \sin \theta_2 \cos \theta_4 \end{array} \right) $$</p> <p>Plugging values for the link lengths, and unit vectors, into your vector loop equation gets you the tool tip position.</p> <p>For the upper left $3x3$ submatrix of your 4x4 DH matrix, you just use $_0^5R$ (assuming the coordinate system definitions are the same).</p> <p>Now back to your wrist partitioning question. You can use the same technique to compute the origin of the $4$ coordinate system ($L_1 \hat Z_1 + L_2 \hat X_2$). Rotations $\theta_3$, $\theta_4$ and $\theta_5$ do not affect the position of this point, which is often referred to as the wrist center. Those three joints can then orient the tool with respect to the $4$ coordinate system. </p>
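<p>If you want a quick numeric sanity check of this, here is a short sketch (Python/NumPy; the link lengths and joint angles are arbitrary values) that chains the rotation matrices above and evaluates the vector loop for the tool tip position:</p> <pre><code>import numpy as np

# Frame-to-frame rotation matrices exactly as written above
def R01(t): c, s = np.cos(t), np.sin(t); return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
def R12(t): c, s = np.cos(t), np.sin(t); return np.array([[c, -s, 0], [0, 0, 1], [-s, -c, 0]])
def R23(t): c, s = np.cos(t), np.sin(t); return np.array([[0, 0, 1], [-s, -c, 0], [c, -s, 0]])
def R34(t): c, s = np.cos(t), np.sin(t); return np.array([[s, -c, 0], [0, 0, 1], [-c, -s, 0]])
def R45(t): c, s = np.cos(t), np.sin(t); return np.array([[0, 0, 1], [c, -s, 0], [s, c, 0]])

L1, L2, L3 = 0.30, 0.25, 0.10     # link lengths (arbitrary)
q = [0.1, 0.4, -0.2, 0.3, 0.0]    # joint angles theta_1 ... theta_5 (arbitrary)

R_01 = R01(q[0])
R_02 = R_01 @ R12(q[1])
R_05 = R_02 @ R23(q[2]) @ R34(q[3]) @ R45(q[4])   # tool orientation with respect to the base

Z1 = R_01[:, 2]   # third column of the 0-1 rotation
X2 = R_02[:, 0]   # first column of the 0-2 rotation
Z5 = R_05[:, 2]   # third column of the 0-5 rotation

p_t = L1 * Z1 + L2 * X2 + L3 * Z5   # vector loop: tool tip position in the global frame
print(p_t)
</code></pre>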
8549
2015-11-26T16:36:51.750
|kinematics|forward-kinematics|dh-parameters|
<p>Is it possible to decouple a 5DOF manipulator? I asked this question earlier and I believe I got the right answers, but I never showed the drawings of the manipulator, and now I'm hesitant while setting up the DH parameters for forward kinematics. See the drawings depicted here.<a href="https://i.stack.imgur.com/fUCIF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fUCIF.png" alt="enter image description here"></a> </p> <p><a href="https://i.stack.imgur.com/QiEgK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QiEgK.png" alt="enter image description here"></a></p>
DH parameters and Kinematic Decoupling
<p>I don't think I have heard of an algorithm specifically for minimizing jerk, and definitely none for jounce. However, here are some semi-recent academic articles and papers that may be of interest to you. They all minimize acceleration in different ways, and you may be able to extend them for jerk and jounce.</p> <ul> <li>IEEE SPECTRUM article: <a href="http://spectrum.ieee.org/automaton/robotics/industrial-robots/robots-with-smooth-moves-are-more-efficient" rel="nofollow">Robots With Smooth Moves Are Up to 40% More Efficient</a> profiling the <a href="http://www.areus-project.eu/smooth-robot-movements-reduce-energy-consumption-by-up-to-40-percent/" rel="nofollow">AREUS project</a>.</li> <li>RSS 2012: <a href="http://www.tobiaskunz.net/pubs/KunzRSS12-Trajectories.pdf" rel="nofollow">Time-Optimal Trajectory Generation for Path Following with Bounded Acceleration and Velocity</a> by Tobias Kunz and Mike Stilman. Video <a href="http://www.golems.org/projects/traj.html" rel="nofollow">here</a>.</li> <li>HUMANOIDS 2015: <a href="http://www.neil.dantam.name/papers/kingston2015lc3.pdf" rel="nofollow">Kinematically Constrained Workspace Control via Linear Optimization</a> by Zachary K. Kingston, Neil T. Dantam, and Lydia E. Kavraki.</li> </ul>
8555
2015-11-27T13:51:11.517
|control|algorithm|
<p>In order to perform a cyclic task, I need a trajectory planning algorithm. This trajectory should minimize jerk and jounce.</p> <p>When I search for trajectory planning algorithms, I get many different options, but I haven't found one which satisfies my requirements in terms of which values I can specify. An extra complicating factor is that the algorithm should be used online in a system without too much computing power, so MPC algorithms are not possible...</p> <p>The trajectory I am planning is 2D, but this can be stripped down to two trajectories of one dimension each. There are no obstacles in the field, just bounds on the field itself (minimum and maximum values for x and y).</p> <p>Values that I should be able to specify:</p> <ul> <li>Total time needed (it should reach its destination at this specific time) </li> <li>Starting and end position</li> <li>Starting and end velocity</li> <li>Starting and end acceleration</li> <li>Maximum values for the position.</li> </ul> <p>Ideally, I would also be able to specify the bounds for the velocity, acceleration, jerk and jounce, but I am comfortable with just generating the trajectory, and then checking if those values are exceeded.</p> <p>Which algorithm can do that?</p> <p>So far I have used fifth-order polynomials and checked for limits on velocity, acceleration, jerk and jounce afterwards, but I cannot set the maximum values for the position, and that is a problem...</p> <p>Thank you in advance!</p>
Which trajectory planning algorithm for minimizing jerk
<p>Actually, the caster wheel has ideally no effect on the kinematics of the vehicle. In reality there will be some resistance from the caster wheel that does impact the vehicle motion, but we can still ignore it for the sake of designing a control law.</p> <p>Based on the extended discussion in the comments, your sensor can be used to measure the <em>lateral error</em> of the robot with respect to the line it is following. Consider the diagram below, where the robot position is depicted with a dark blue circle and its direction of motion is the red arrow (with constant speed $v$). The lateral error is $e$ (perpendicular distance to the line), while the heading error is $\alpha$ (angle of the velocity with respect to the line).</p> <p><a href="https://i.stack.imgur.com/XPuXI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XPuXI.png" alt="enter image description here"></a></p> <p>What you care about is having a control law that controls the robot heading so that an appropriate value of $\alpha$ causes $e$ to be minimized. To do that, consider the <em>error dynamics</em> of $e$:</p> <p>$\dot{e} = v \sin \alpha$</p> <p>Which can be extended to:</p> <p>$\ddot{e} = v \dot{\alpha} \cos \alpha$</p> <p>If we ignore the fact that the line direction may be changing (valid for most cases similar to roads), then the rate of change of the heading error is approximately the rate of change of the robot heading (turn rate $\omega$):</p> <p>$\dot{\alpha} \approx \omega$</p> <p>$\ddot{e} = v \omega \cos \alpha$</p> <p>Now comes the tricky part. What we really want to do is minimize the error $e$ by controlling the turn rate $\omega$, but the above equation is non-linear and we prefer to design control laws with linear systems. So let's make up a new control input $\eta$ that is related to $\omega$:</p> <p>$\eta = v \omega \cos \alpha$</p> <p>Then we can make a feedback control law for $\eta$. I'll go straight to the answer and then follow it up with the details if you are interested...</p> <p>The feedback controller can be a full PID as shown below:</p> <p>$\eta = -K_p e - K_d \dot{e} - K_i \int e dt$</p> <p>And then we calculate the necessary turn rate $\omega$:</p> <p>$\omega = \frac{\eta}{v \cos \alpha}$</p> <p>Normally you might do this by using a measurement of $\alpha$, but since you are only measuring $e$ you could just assume that term is constant and use:</p> <p>$\omega = \frac{\eta}{v}$</p> <p>Which is really just using a PID control law for $\omega$ based on $e$ but now with the factor $\frac{1}{v}$ in the gains.</p> <p>With $\omega$ known, you can compute the necessary wheel speed differential as follows (based on your variable names, and where $b$ is the width between wheels):</p> <p><code>midSpeed + value</code> $ = \frac{1}{2} \omega b + v$</p> <p>$v = $ <code>midSpeed</code></p> <p><code>value</code> $= \frac{1}{2}\omega b$</p> <p>Overall, you are computing $\omega$ using a PID control law as a function of lateral error $e$ (coming from your sensor). 
You then compute <code>value</code> from the value for $\omega$ and use it to determine the left and right wheel speeds.</p> <hr> <p>Now, read on for more details regarding the error dynamics and linearized control system:</p> <p>We can write out the system dynamics like this, where we consider $z$ to be the vector of the error states.</p> <p>$z = \begin{bmatrix} \dot{e} \\ e \end{bmatrix}$</p> <p>$\dot{z} = \begin{bmatrix} 0 &amp; 0 \\ 1 &amp; 0 \end{bmatrix} z + \begin{bmatrix} 1 \\ 0 \end{bmatrix} \eta$</p> <p>If you are familiar with linear control theory then you can start to see how this is a convenient form where we can make a feedback control law with input $\eta$ as a function of the error:</p> <p>$\eta = -\begin{bmatrix} K_d &amp; K_p \end{bmatrix} z$</p> <p>$\dot{z} = \begin{bmatrix} 0 &amp; 0 \\ 1 &amp; 0 \end{bmatrix} z - \begin{bmatrix} 1 \\ 0 \end{bmatrix} \begin{bmatrix} K_d &amp; K_p \end{bmatrix} z$</p> <p>$\dot{z} = \begin{bmatrix} -K_d &amp; -K_p \\ 1 &amp; 0 \end{bmatrix} z$</p> <p>This closed-loop system has the following characteristic equation:</p> <p>$s^2 + K_d s + K_p = 0$</p> <p>Which results in the poles (this is a PD control law, not PID):</p> <p>$p = -\frac{1}{2}K_d \pm \sqrt{\frac{1}{4}K_d^2 - Kp}$</p> <p>So if you wanted a critically damped system with no oscillations around the line, you could choose one of the gains and use the following relationship to get the other (start with smaller gains and tune until you get the response you want):</p> <p>$K_p = \frac{1}{4}K_d^2$</p>
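<p>As a rough illustration of how the pieces fit together, here is a sketch of the control loop (plain Python pseudocode, not drop-in Arduino code; the gains, wheel separation, forward speed and loop period are placeholders you would replace with your own values):</p> <pre><code># Sketch of the control law above: PID on the lateral error e -> turn rate omega -> wheel speeds
KD = 2.0
KP = KD**2 / 4.0   # critically damped choice, K_p = K_d^2 / 4, as derived above
KI = 0.0
b        = 0.12    # distance between the two drive wheels [m]
midSpeed = 0.30    # forward speed v [m/s]
DT       = 0.01    # control period [s]

integral = 0.0
prev_e   = 0.0

def wheel_speeds(e):
    """e = lateral error from the line sensor. Returns (left, right) wheel speeds."""
    global integral, prev_e
    de = (e - prev_e) / DT
    integral += e * DT
    prev_e = e
    eta   = -KP * e - KD * de - KI * integral
    omega = eta / midSpeed          # treat the cos(alpha) term as ~1, as discussed above
    value = 0.5 * omega * b
    # Swap the +/- below if your robot turns away from the line instead of toward it
    return midSpeed - value, midSpeed + value
</code></pre>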
8556
2015-11-27T19:45:01.913
|arduino|motor|pid|line-following|
<p>I'm building a line-following robot and have made different chassis designs. The main prototype I'm using has a rectangular base: the motors are placed on one side, and on the other side a caster wheel is placed in the middle. Look at the following image.</p> <p><a href="https://i.stack.imgur.com/e89HK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/e89HK.png" alt="Robot Design Prototype"></a></p> <p>By varying the value of <code>distance</code>, I have seen that the stability of the robot varies rapidly.</p> <p>I'm driving the robot using <code>PID</code>. I have seen that for some chassis designs it is very hard (sometimes impossible) to find correct constant values, while for some chassis it is very easy. This is what I mean by <strong>stability</strong>. I have a feeling that the robot dimensions, the <code>distance</code> value, and that stability are related.</p> <p>Is there an equation or something that can be used to estimate the value of <code>distance</code> when the width of the robot is known?</p> <p>Other than that, is there a relationship between robot weight and wheel diameter, or between robot dimensions and the wheel diameter?</p> <p>Thanks for the attention!</p>
Length and Width of a Line Following Robot
<p>Surface properties like that are really only for cosmetic purposes, so it is unlikely any CAD program has that option. However, you may be able to get the appropriate effect by assigning particular <em>faces</em> either a glass or a mirror texture.</p> <p>Based on your added comments, I think the best option is simply to capture images or animation frames with whichever surface setting is needed for the required view angle. Then you can stitch the separate animation frames together to make it seem as though the component has the look you want.</p>
8561
2015-11-28T23:38:17.973
|design|mechanism|3d-printing|3d-model|visualization|
<p>I see there are materials like glass and mirror in Autodesk Inventor Professional 2016, but is it possible to have a Venetian mirror? That is, from one side it would look like a mirror and from the other side it would look like transparent glass.</p>
Is Venetian mirror possible in Autodesk Inventor?