<p>Ultrasonic sensors will, in my experience, give you better range and accuracy than infrared, unless you are specifically dealing with many smooth or sound-dampening surfaces. You can poll them one at a time to avoid interference, and if you use sensors that speak I2C (like <a href="http://www.maxbotix.com/SelectionGuide/indoor-uav.htm#uav" rel="nofollow">these</a>), you can attach them directly to the Raspberry Pi to poll the values. What flight controller are you using? For most, you could simply send direction data over the GPIO. </p>
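<p>As a rough illustration of polling I2C sonars one at a time from the Raspberry Pi, here is a minimal Python sketch using the <code>smbus2</code> library. The addresses, command, and result register below follow the common SRF02-style convention and are assumptions; MaxBotix modules use a slightly different protocol, so check your sensor's datasheet.</p>
<pre><code># Minimal sketch: poll several I2C sonars sequentially to avoid cross-talk.
# Register/command values are assumptions (SRF02-style); verify against your datasheet.
import time
from smbus2 import SMBus

SENSOR_ADDRESSES = [0x70, 0x71, 0x72]   # assumed 7-bit I2C addresses
CMD_REGISTER = 0x00                     # command register (assumed)
RANGE_CM_CMD = 0x51                     # "start ranging, result in cm" (assumed)
RESULT_REGISTER = 0x02                  # high byte of the range result (assumed)

def read_range_cm(bus, address):
    bus.write_byte_data(address, CMD_REGISTER, RANGE_CM_CMD)
    time.sleep(0.07)                    # wait for the ping to return (~65 ms max)
    high, low = bus.read_i2c_block_data(address, RESULT_REGISTER, 2)
    return high * 256 + low

with SMBus(1) as bus:                   # I2C bus 1 on most Raspberry Pi models
    while True:
        ranges = [read_range_cm(bus, addr) for addr in SENSOR_ADDRESSES]
        print(ranges)                   # feed these to your obstacle-avoidance logic
</code></pre>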
7266
2015-05-15T01:42:32.263
|mobile-robot|quadcopter|sensors|
<p>I have a quadcopter built, and I need to be able to make it to autonomously follow a route and avoid obstacles where possible.</p> <p>My general plan is to have an array of sensors on a pre-defined "front". The quadcopter will only go forward. Generally I'd like to make it so that if the sensors pointing at a higher angle detect something getting closer as the bot moves forward, the quadcopter will stop, descend until the distance to that detected object decreases, and then continues forward. Similarly, I'd like the opposite event to happen if the sensors pointing at a lower angle detect something getting closer to the quadcopter.</p> <p>I'm thinking of having something like 9 small infrared distance detectors (pointing up, forward, down || left, forward, right), basically a 3x3 matrix.</p> <p>Would anyone have any ideas of the feasibility of this? I'd like to use a raspberry pi, but it will probably also need an additional board to read in the values from its sensors. In addition, I have no idea which sensors to use, or if infrared can even work. Any suggestions are more than welcome.</p> <p>I was also thinking about ultrasonic sensors, but having 9 of them could get cluttered, and I'd worry about their short range when a crash means death for the quadcopter. I also fear they would cause interference with each other.</p>
How can I make a quadcopter avoid obstacles using infrared?
<p>I assume that you will have GPS on your drone in addition to the GPS unit you will want to track.</p> <p>You will make heavy use of the <a href="http://www.movable-type.co.uk/scripts/latlong.html" rel="nofollow">Haversine formula</a>, which will tell you the bearing between 2 lat/lon (GPS) coordinates. This will come in handy in the following ways:</p> <ol> <li>Using the current and previous coordinates from the target's GPS, you know what direction the target is moving.</li> <li>Using the drone's coordinates and target's coordinates, you know what direction to aim the camera. </li> </ol> <p>Once you know where the target is and what your orientation should be to it, you can calculate the drone's desired position based on the desired following distance.</p>
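<p>As a quick sketch of the kind of helper functions involved (not code from the linked page, just the standard haversine and initial-bearing formulas), in Python:</p>
<pre><code># Initial bearing and great-circle distance between two lat/lon points.
import math

EARTH_RADIUS_M = 6371000.0

def bearing_deg(lat1, lon1, lat2, lon2):
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0   # 0..360, clockwise from north

def distance_m(lat1, lon1, lat2, lon2):
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

# The bearing from the drone to the target tells you where to point the camera;
# the distance minus the desired following distance tells you how far to move.
</code></pre>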
7271
2015-05-15T12:22:50.210
|cameras|gps|line-following|
<p>As the title states, is there any way to make a following drone that tracks a GPS unit, and follows/orients camera to that? Similar to <a href="https://www.lily.camera/" rel="nofollow">this</a></p>
How would I implement a following drone with a camera using GPS?
<h3>Actuator Forces</h3> <p>Do I get this right: you have a theoretical model of a rigid multibody system and would like to perform rigid body dynamics computations. You have implemented the model and now would like to compute how the model behaves when driven by an actuator.</p> <p>However, what is an actuator for you? Is it simply a force acting at that joint? Is it a DC motor model? Is it a PID controller?</p> <p>The dynamics algorithms in the book are described in terms of generalized positions $q$, generalized velocities $\dot{q}$, generalized accelerations $\ddot{q}$, and generalized forces $\tau$. If you have a prismatic joint whose translation is described by $q_i$ then the linear force at that joint is described by $\tau_i$. If you have a revolute (hinge) joint whose rotation is described by $q_j$ then $\tau_j$ represents a torque at that joint.</p> <p>It is up to your understanding of an actuator how $\tau$ is computed. If you simply want to apply forces or torques then put the values into the corresponding entries of $\tau$. Once you have that, they serve as an input to the forward dynamics algorithms to compute the system's response to the applied forces.</p> <p><em>Side note:</em> Featherstone uses $\tau^a$ to denote the active loop closure forces. From your model description there do not seem to be any kinematic loops and therefore $\tau^a$ does not apply.</p> <h3>Gravitational Acceleration</h3> <p>Featherstone applies the gravitational acceleration at the base and lets the algorithms propagate it through the tree. This is done in the RNEA, Table 5.1, in the line</p> <p>$a_0 = -a_g$.</p> <p>Instead of doing that you can also modify the line</p> <p>$f_i^B = I_i a_i + v_i \times^* I_i v_i$</p> <p>to </p> <p>$f_i^B = I_i (a_i - {}^iX_0a_g) + v_i \times^* I_i v_i$</p> <p>to apply the gravitational effects individually on each body. This introduces additional computations and I do not see any benefit in doing so.</p> <h3>Spatial Algebra vs. Concatenation of 3-D Vectors</h3> <p>Spatial Algebra is not just a concatenation of 3-D vectors. The former expresses rigid body motions at a fixed coordinate frame, whereas the latter is expressed at points that move with the body. As a result, spatial accelerations are the time derivatives of spatial velocities. In the classical notation using two 3-D equations this is not the case (Section 2.11 of Featherstone's book):</p> <p>If a body has a constant angular velocity $\omega$ then all points on that body that are not on the axis of rotation have an acceleration towards the axis of rotation (center of rotation in the planar case). In Spatial Algebra this body has zero spatial acceleration <em>independent of the frame the acceleration is expressed in</em>.</p> <p>Spatial velocity describes the linear and angular velocity of the body point that currently coincides with the origin of the (fixed) reference frame. If that frame is expressed at the center of mass and oriented with the global reference frame then it appears to be a simple concatenation of 3-D linear and angular velocity; however, this is only the case for this specific choice of reference frame. Expressed at a different frame you get different values, but it still represents the same spatial velocity.</p> <p>Spatial acceleration describes the <em>flow</em> of the linear and angular velocity of the point that coincides with the origin. "Flow" here means how the vector quantities (linear and angular velocity) change over time.</p>
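<p>To make the first point concrete, here is a minimal Python sketch of how an actuator model typically enters the picture: you compute $\tau$ from whatever actuator/controller model you choose, hand it to forward dynamics, and integrate. <code>forward_dynamics</code> is a stand-in for your own implementation of Featherstone's articulated-body algorithm (or a library wrapping it); it is not a specific API.</p>
<pre><code># Sketch: a PD "actuator" at each joint produces tau, which feeds forward dynamics.
# forward_dynamics(q, qd, tau) returning qdd is assumed to be your ABA implementation.
import numpy as np

def pd_actuator(q_des, qd_des, q, qd, kp, kd):
    """A very simple actuator model: a PD controller producing joint torques."""
    return kp * (q_des - q) + kd * (qd_des - qd)

def simulate(forward_dynamics, q, qd, q_des, dt=1e-3, steps=1000):
    kp = np.full_like(q, 50.0)           # gains chosen arbitrarily for the sketch
    kd = np.full_like(q, 5.0)
    for _ in range(steps):
        tau = pd_actuator(q_des, np.zeros_like(qd), q, qd, kp, kd)
        qdd = forward_dynamics(q, qd, tau)   # system response to the applied forces
        qd = qd + qdd * dt                   # semi-implicit Euler integration
        q = q + qd * dt
    return q, qd
</code></pre>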
7278
2015-05-15T20:10:53.803
|dynamics|actuator|joint|screw-theory|
<p>I'm working on a project where I need to model a system that is essentially comprised of a series of ball-and-socket joints attached to a base, which is attached in turn to a prismatic joint (rail). </p> <p>I've read Roy Featherstone's <em>Rigid Body Dynamics Algorithms</em> cover-to-cover, and I've also read the <em>Dynamics</em> section from the <em>Springer Handbook of Robotics</em> (also written by Featherstone). </p> <p>It took me a long time to get acclimated to using his "spatial vector" and "spatial matrix" notation, but after re-creating all of his notation by hand as an exercise it works out to just be a nice way of concatenating 3x3 and 3x1 matrices and vectors into 6x6 and 6x1 matrices and vectors. The maths he invents to perform operations can be a bit tedious to read as he hijacks some standard notation, but overall everything is very compact, very easy to implement in MATLAB. </p> <p><strong>My problem is this:</strong> How do I add actuators to the model? He walks through explicitly configuring the joint definitions, link definitions, etc., but when it comes to actuators or applied forces he says something like, "Just add a $\tau_a$ here and Bob's your uncle!" - it's not discussed at all. In the <em>Handbook of Robotics</em> he suggests introducing a false acceleration to the fixed base to add the gravitational force term, but doesn't show how to add it in local coordinates nor does he mention how to add the actuator input. </p> <p>Any help would be greatly appreciated. I've considered starting over with a different book, but it's going to be a great expense of my time to re-acclimate myself to a different set of notation. I'd like to move forward with this, but I feel like I'm just a few inches shy of the finish line. </p>
Adding an Actuator or Force to a (Featherstone) Articulated Rigid Body Model
<p>Dead reckoning is determining the pose (position and orientation) by integrating speed estimates from sensors. For example, you know your initial position and use sensors such as encoders, accelerometers, gyros, etc. to estimate your current position by integrating the sensor measurements. </p> <p>Odometry is determining the pose using only the rotation of the wheels of a robot. Sometimes people talk about visual odometry, which is the same idea except that cameras are used instead of wheel encoders.</p> <p>Basically, odometry is a type of dead reckoning. </p>
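<p>A minimal sketch of wheel-encoder dead reckoning (i.e. odometry) for a differential-drive robot, just to make the terms concrete; the geometry constants are placeholders. Swap the encoder deltas for integrated gyro/accelerometer data and you have the more general dead-reckoning case.</p>
<pre><code># Integrate wheel-encoder ticks into a pose estimate (x, y, heading).
import math

TICKS_PER_REV = 1024          # placeholder encoder resolution
WHEEL_RADIUS = 0.05           # m, placeholder
WHEEL_BASE = 0.30             # m, distance between the wheels, placeholder
M_PER_TICK = 2 * math.pi * WHEEL_RADIUS / TICKS_PER_REV

def update_pose(x, y, theta, d_ticks_left, d_ticks_right):
    d_left = d_ticks_left * M_PER_TICK
    d_right = d_ticks_right * M_PER_TICK
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / WHEEL_BASE
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta
</code></pre>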
7287
2015-05-18T03:56:34.447
|odometry|deduced-reckoning|
<p>In terms of robotics, what are the differences between odometry and dead-reckoning? </p> <p>I read that odometry uses wheel sensors to estimate position, and dead-reckoning also uses wheel sensors, but "heading sensors" as well. Can someone please elaborate on this point for me?</p>
Odometry vs Dead-reckoning
<p>I reckon that the equations you raised in your previous <a href="https://robotics.stackexchange.com/questions/7235/how-to-implement-and-code-inner-and-outer-pd-controllers-for-quadrotor-for-posit">post</a>, under the assumption of small angles, already give us the intuition that $U_1$ is used to keep the height $z$ as desired, so that what is left to us to control the linear displacement $x$ and $y$ of the quadcopter is the roll $\phi$ and pitch $\theta$.</p> <p>Therefore, your last equation to find the desired $\phi_d$ and $\theta_d$ as functions of the linear accelerations $\ddot{\hat{x}}_d$ and $\ddot{\hat{y}}_d$ is correct.</p> <p>Of course, you'll then impose some tracking behavior, such as the one I've seen in your script that relates the linear acceleration with the displacement error, something like the well-known impedance control law: $\ddot{\hat{x}}_d=K_p\left(x_d-x\right)-K_d\dot{x}$. Note how $\ddot{\hat{x}}_d \neq \frac{d^2x_d}{dt^2}$.</p> <p>In this case it is evident that you don't employ the yaw $\psi$ for the linear displacement: it's simply kept constant. It is usually used for more complicated maneuvers indeed.</p> <p>If the small angle assumption no longer holds, then you're forced to rely on Eq. (6) described in the <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.149.4367&amp;rep=rep1&amp;type=pdf" rel="nofollow noreferrer">paper</a> you cited, which still doesn't change the story too much.</p> <p>I don't know how familiar you are with Simulink, but it'd be much easier to simulate this system with it.</p>
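<p>For reference, here is a small Python sketch of the small-angle inversion discussed above, assuming hover thrust $U_1 \approx mg$ and a fixed yaw $\psi$. This is my own paraphrase of the standard result, not code from the paper; the gains are placeholders.</p>
<pre><code># Desired roll/pitch from desired horizontal accelerations, small-angle case.
import math

G = 9.81  # m/s^2

def desired_attitude(xdd_des, ydd_des, psi):
    """Invert xdd ~ g*(theta*cos(psi) + phi*sin(psi)), ydd ~ g*(theta*sin(psi) - phi*cos(psi))."""
    theta_des = (xdd_des * math.cos(psi) + ydd_des * math.sin(psi)) / G
    phi_des = (xdd_des * math.sin(psi) - ydd_des * math.cos(psi)) / G
    return phi_des, theta_des

def outer_loop(x_des, x, xd, y_des, y, yd, psi, kp=1.5, kd=0.8):
    """PD position loop producing the desired accelerations fed to desired_attitude()."""
    xdd_des = kp * (x_des - x) - kd * xd
    ydd_des = kp * (y_des - y) - kd * yd
    return desired_attitude(xdd_des, ydd_des, psi)
</code></pre>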
7288
2015-05-18T08:03:22.760
|control|quadcopter|
<p>I've posted a <a href="https://robotics.stackexchange.com/questions/7235/how-to-implement-and-code-inner-and-outer-pd-controllers-for-quadrotor-for-posit">question</a> regarding this matter that I couldn't solve. I'm reading this <a href="ftp://213.176.96.142/ieee8966173d-c305-20140930104826.pdf" rel="nofollow noreferrer">paper</a>, the authors state </p> <blockquote> <p>Linear $x$ and $y$ Motion Control: From the mathematical model one can see that the motion through the axes $x$ and $y$ depends on $U_{1}$. In fact $U_{1}$ is the total thrust vector oriented to obtain the desired linear motion. If we consider $U_{x}$ and $U_{y}$ the orientations of $U_{1}$ responsible for the motion through x and y axis respectively, we can then extract from formula (18) the roll and pitch angles necessary to compute the controls $U_{x}$ and $U_{y}$ ensuring the Lyapunov function to be negative semi-definite ( see Fig. 2).</p> </blockquote> <p>The paper is very clear except in the linear motion control. They didn't explicitly state the equations for extracting the angles. The confusing part is when they say </p> <blockquote> <p>we can then extract from formula (18) the roll and pitch angles necessary to compute the controls $U_{x}$ and $U_{y}$</p> </blockquote> <p>where formula (18) is</p> <p>$$ U_{x} = \frac{m}{U_{1}} (\cos\phi \sin\theta \cos\psi + \sin\phi \sin\psi) \\ U_{y} = \frac{m}{U_{1}} (\cos\phi \sin\theta \sin\psi - \cos\phi \cos\psi) \\ $$</p> <p>It seems to me that the roll and pitch angles depend on $U_{x}$ and $U_{y}$, therefore we compute the roll and pitch angles based on the $U_{x}$ and $U_{y}$ to control the linear motion. </p>
Linear Motion Control for quadrotor (clarification)
<p>So here's a system commonly used in portioning substances, but I don't know what it's called for you to look it up (sorry). I've included two pictures: one for reference, so we can call things by the same name, and another for displaying the principle of operation. </p> <p>A portioner is a disc or other shape that has a hole. The hole is sized to contain the correct quantity of the substance you are trying to portion. Think of it like a measuring spoon. </p> <p>The portioner goes into a portioner housing, which allows a hopper to attach. It also allows the material contained in the portioner to be released. The hopper allows you to hold as much stock as you would like. One sample, a day, a week - the size of the hopper is up to you. </p> <p>As far as operation, the disc is configured with its hole aligned towards the hopper. When you want the required quantity of material, you just move the disc to align its hole with the outlet of the portioner housing. The material contained in the hole of the disc falls out. </p> <p>Note that I've drawn this as a vertical-rotating system, but you could just as easily have a horizontal-translating system. This would look like a cigar cutter. </p> <p>:EDIT:</p> <p>To be clear, this setup works with any units of measure. It's as big or as small as you need it to be. </p> <p><img src="https://i.stack.imgur.com/n43qA.png" alt="Parts"> <img src="https://i.stack.imgur.com/dxGdS.png" alt="Operation"></p>
7294
2015-05-18T17:05:50.670
|electronics|
<p>I've been toying around with the idea of automating the process of testing aquarium water for certain chemicals. Very briefly, salt water aquariums (reefs, specifically) require almost-daily testing for 3-4 chemicals (calcium, alkalinity, ammonia, phosphate). This is typically done by hand, using <a href="http://www.bulkreefsupply.com/aquarium-monitors-controllers/testing/salifert.html" rel="nofollow noreferrer">various kits</a>. There are two main types </p> <ul> <li>you combine several powders with a fixed amount of aquarium water, and then compare the color the mixture turns with a chart</li> <li>you combine several liquids together with the aquarium water, and then add another liquid until the mixture turns a color. you then record how much of the final liquid you had to add for the color change to occur (titration).</li> </ul> <p>Both methods are straightforward, but tedious. To maintain an aquarium well, you really do need daily readings of all of those metrics, which easily adds up to 30 minutes+ daily.</p> <p>So - I'd like to be able to automate the process. The biggest question is, how do I reliably dispense the materials needed? We're talking in gram and milliliter UoM here. The kits come with plastic syringes and spoons of correct volume for the powders. I need a way to measure out and dispense both of these, and a way to queue up several days worth (refilling daily defeats the purpose).</p> <p>Any ideas?</p> <p><em>Edit</em> this is different from <a href="https://robotics.stackexchange.com/questions/1259">How to measure and dispense a finite amount of powder or liquid</a> because of the units of measure involved. I need to be able to reliably dispense ~ 1g +/- 5% of a powder, or 1ml +/- 5% of liquid.</p>
Dispensing precise quantities of liquid and powder
<p>Yes, the <a href="http://www.owirobots.com/store/index.php" rel="nofollow">OWI Robot arm</a> is just a toy and doesn't have nearly enough accuracy, repeatability, sensor feedback, etc. to be able to do any sort of "real" assembly task. I'm sure you are aware of industrial arms like Kuka, ABB, etc. However, these arms are not suited to fine assembly either. A new class of robot arm is becoming more popular: one that is "human safe" and designed for assembly tasks with some human involvement. These are things like <a href="http://www.universal-robots.com/en/products/" rel="nofollow">Universal Robotics</a>, <a href="http://www.rethinkrobotics.com/sawyer-intera-3/" rel="nofollow">Rethink Robotics Sawyer</a>, and <a href="http://new.abb.com/products/robotics/yumi" rel="nofollow">ABB YuMi</a>. </p> <p>Also, even with "perfect AI control", a robotic arm needs a certain amount of good sensor feedback to do anything useful. One example of this is demonstrated in an old experiment where a woman's hand was completely numbed and she was asked to light a match. It took about 45 seconds of fumbling with the match before she was successful. With an un-numbed hand she could light the match in just a few seconds on the first try. The point being that even with arguably the most powerful computer on the planet (the human brain), without good tactile sensing we are unable to do fine manipulation. (I was unable to locate the video of said experiment, but it is also mentioned in <a href="http://www.upenn.edu/pennnews/current/node/3447" rel="nofollow">this</a> article.)</p>
7297
2015-05-18T19:18:40.413
|robotic-arm|
<p>Suppose I have perfect AI to control a robotic arm. </p> <p>What characteristics should it fulfill to be able to take such common tools as a screwdriver and linesman's pliers and disassemble and then reassemble a conventional notebook computer?</p> <p>Are there such models available?</p> <p>It seems to me that such arms as the OWI-535 are only toys, i.e. they can just relocate lightweight objects and that's all. Am I right?</p> <p><strong>UPDATE</strong></p> <p>Also suppose that my AI can look at the assembly area with multiple HD cameras and can perfectly "understand" what it sees.</p>
Arm to disassemble and assemble notebook at home?
<p>I think dielectric mechanisms like piezo actuators are suitable for compact designs. These might be useful: <a href="http://en.wikipedia.org/wiki/Piezoelectric_motor" rel="nofollow">http://en.wikipedia.org/wiki/Piezoelectric_motor</a></p> <p><a href="https://www.google.com/patents/US4812698" rel="nofollow">https://www.google.com/patents/US4812698</a></p>
7311
2015-05-20T08:25:30.970
|actuator|manufacturing|soft-robotics|
<p>I want to make a compact (actuators, motors and sensors all in one) soft robot. Actuators can be pneumatic or dielectric. I need suggestions about manufacturing. I'm open to new ideas.</p>
How can i make a compact soft robot
<h1>First</h1> <p>I would like to caution your choice of motor torque. I don't know how you generated the drawing of your wrist, but CAD packages like Inventor or Solidworks can easily generate masses and moments of inertia if you select the correct material. </p> <p>Looking solely at <em>holding</em> torque, the torque required to <strong>not move</strong>, you need a torque that supports the load's weight at the distance from the joint to the center of mass of the load as well as the arm's weight between the joint and the load (and associated motors, gearboxes, etc.) at the distance from the joint to the center of mass of the arm.</p> <p>$$ \tau_{holding} = x_{center.of.mass_{load}} (m_{load} g) + x_{center.of.mass_{arm}} (m_{arm} g) $$</p> <p>where $\tau_{holding}$ is the holding torque, in Nm, $x$ is the distance from the joint axis to the respective center of mass, <strong>in meters</strong>, $m$ is the mass of the respective component, <strong>in kg</strong>, and $g$ is the gravitational constant; $9.81 m/s^2$. </p> <p>The <em>accelerating</em> torque, or torque required to <strong>move</strong> your part, depends on the moment of inertia. If you're not actuating at the center of mass of both the arm and the load (you're not; you can't!) then you need the parallel axis theorem to find the moment of inertia at the joint. </p> <p>$$ I_{end.effector} = I_{load} + m_{load} x^2_{center.of.mass_{load}} + I_{arm} + m_{arm} x^2_{center.of.mass_{arm}} $$</p> <p>where inertias $I$ are in $kg-m^2$, and again, $m$ is in kg and $x$ is in meters. :EDIT: When I say end effector here, I'm referring to everything after the wrist joint, up to and including the load. When I talk about the "arm" in this context, I mean all of the robot after the wrist joint, excluding the load. The center of mass of the 'arm' is the center of mass of the linkage(s) between the wrist joint and the load. :\EDIT:</p> <p>The torque required to accelerate the load is then given by:</p> <p>$$ \tau_{accelerating} = I_{end.effector} \alpha_{desired} $$</p> <p>where torque $\tau$ is in Nm and angular acceleration $\alpha$ is in <strong>$rad/s^2$</strong>.</p> <p>If you don't care about a particular acceleration you can rearrange the above equation to determine what your acceleration will be, given a particular moment of inertia and motor torque:</p> <p>$$ \alpha_{achievable} = \frac{\tau_{motor}}{I_{end.effector}} $$</p> <p>If your end effector (the robot arm between the wrist and the load) had no moment of inertia through its center of mass (it was just a point mass), the moment of inertia through the wrist joint would reduce to $mx^2$. This is also true of the load. However, because it and the load are both real objects, they will have size, which also implies they will have a nonzero moment of inertia through their centers of mass. </p> <p>All of this is to say, <em>if you don't account for moments of inertia, you may find that your wrist is incapable of accelerating your load</em>. If it can't accelerate, it won't be able to position well if at all. </p> <h1>Second</h1> <p>I would like to caution your choice of units. 'gr' is the symbol for <em>grain</em>, which is a unit of mass, like gram, but one grain is equal to approximately 0.065 grams. If you mean gram, 'g' is the (SI) symbol you should use. If you would like to avoid confusion between the gram symbol and the gravitational constant, you should use kilograms, 'kg', instead. 
</p> <p>Most equations are predicated on SI base units, which means kilograms and meters, not grams and millimeters. If you would like to express precision to the millimeter or gram level, use trailing zeros as trailing zeros after a decimal are considered <a href="http://en.wikipedia.org/wiki/Significant_figures" rel="nofollow">significant figures</a>.</p> <h1>Third</h1> <p>To your question, I would suggest using servomotors for positioning unless you need to have positive feedback that you have reached your destination or unless you're trying to do some form of external PID control on the joint. </p> <p>Note that all servos run on some sort of internal PID control with internal position feedback. You don't get access to their position encoder (potentiometer, typically), but they DO run closed-loop and can be expected to reach the position you've requested <strong>provided they have sufficient torque, see the First note above</strong>. </p> <p>If you DO need position feedback (CNC quality control or something else), you could search <a href="http://www.digikey.com/product-search/en/sensors-transducers/encoders/1966131?stock=1" rel="nofollow">Digikey</a> for encoders to get exactly what you need or you could search for 'rotary encoders' online. I have experience with rotary encoders for larger applications, but at a glance the miniature offerings from <a href="http://www.nemicon.com/hollow.htm" rel="nofollow">Nemicon</a> and <a href="http://www.usdigital.com/products/encoders" rel="nofollow">US Digital</a> look nice. </p> <p>If you are looking specifically for a gearbox and motor (search: gearmotor), then <a href="https://www.pololu.com/category/22/motors-and-gearboxes" rel="nofollow">Pololu</a> has some nice miniature offerings. </p> <p>As a final note, I'm not affiliated with any of the companies I've linked to. I have offered them as examples of what you may be interested in and what they're called; sometimes the hardest part of finding something is figuring out what to put into the search box.</p>
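<p>To tie the equations from the <em>First</em> section together, here is a worked sketch in Python. All of the numbers are placeholders; plug in your own masses, distances, and inertias from CAD.</p>
<pre><code># Holding torque, joint inertia (parallel axis theorem), and achievable acceleration.
G = 9.81  # m/s^2

# Placeholder numbers -- replace with values from your CAD model.
m_load, x_load, I_load = 0.300, 0.200, 1e-4      # kg, m, kg*m^2 (about its own CoM)
m_arm,  x_arm,  I_arm  = 0.150, 0.100, 5e-5      # kg, m, kg*m^2 (about its own CoM)

tau_holding = x_load * m_load * G + x_arm * m_arm * G                  # Nm
I_joint = (I_load + m_load * x_load**2) + (I_arm + m_arm * x_arm**2)   # kg*m^2 at the wrist

tau_motor = 1.1                                  # Nm, candidate motor torque
alpha_achievable = tau_motor / I_joint           # rad/s^2, as in the equation above
# (in the worst orientation, gravity consumes tau_holding of the motor torque first)

print(f"holding torque: {tau_holding:.3f} Nm")
print(f"joint inertia:  {I_joint:.5f} kg*m^2")
print(f"achievable acceleration: {alpha_achievable:.1f} rad/s^2")
</code></pre>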
7315
2015-05-20T13:01:31.440
|motor|actuator|torque|
<p>Currently designing a spherical wrist, I want to manipulate a 300gr payload. the design has a 200mm span, so I'm guessing at a 1.1Nm (considering the weight of structure &amp; motors). <img src="https://i.stack.imgur.com/WTicB.png" alt="spherical wrist"></p> <p>I've looked at Maxon, Faulhaber, but can't find any motor+gearbox+encoder under a 100gr. Any suggestion ?</p>
2Nm small motor
<p>I find it is all fine and good to discuss a problem, but sometimes users need a working example, albeit not in their preferred programming language. I have used face detection tracking for robotics behaviors, but it has many problems: lighting, faces not visible, profile or back of head, etc. </p> <p>Motion tracking is better. Using example code, I developed some Raspberry Pi camera Python code to do real-time motion tracking based on the image-diff method mentioned above. The code is on GitHub at <a href="https://github.com/pageauc/motion-track" rel="nofollow">https://github.com/pageauc/motion-track</a> and a YouTube video is here: <a href="https://youtu.be/09JS7twPBsQ" rel="nofollow">https://youtu.be/09JS7twPBsQ</a> (most of my stuff has a video associated with it).</p> <p>The Python code can be modified to run on other platforms and systems. From this demo I developed a real-time object speed camera and a simple hotspot game that can be found in the same repo. Recently I have been working on camera tracking. Instead of tracking motion in the FOV (field of view), I track actual camera movement. This uses a slightly different technique than the image-diff method. I am working on further developing this for use in robotics where there are no wheel encoders. It would use the background to find high-value target search rectangles from the video background image (not signs or symbols) to aid in correcting navigation errors for straight lines and turns. See my GitHub repo for details and more explanation. Real-time speed on an RPi 3 is quite good. </p> <p>So these are a few real-world examples relevant to your question that may be useful if you also have the prerequisite programming skills and/or learning time. A camera pan/tilt servo system can actively scan an area and detect and track moving object(s) using just a computer-controlled camera setup, without the need for special sensors, although those would be useful for night operation. </p> <p>Now, I don't claim to be the best coder, but a lot of googling will get you a long way if you keep trying. I also like to post videos of projects, although they can go out of date after a while. My experience with some OpenCV videos is that quite often there is no code repo reference: just video, with no good sound or vocal explanation, and poorly demonstrated. This is like showing a video of cars driving by quickly to explain how vehicle chassis, electrical, and power train systems work. Just my observation.</p> <p>A lot of times products don't do what you would like or are too expensive. Developing your own solution is sometimes not the best approach if you factor in your time and effort, unless you do it as a hobby. For example, I would not think of developing a custom dash cam solution since there are many good, inexpensive product solutions on the market.</p>
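<p>For readers who want to see the basic idea in code, here is a stripped-down frame-differencing sketch in Python/OpenCV. This is my own minimal version of the technique, not the motion-track code itself, and it assumes OpenCV 4.x.</p>
<pre><code># Minimal motion detection by frame differencing (OpenCV 4.x assumed).
import cv2

cap = cv2.VideoCapture(0)                 # webcam or Pi camera via V4L2
ok, prev = cap.read()
prev = cv2.GaussianBlur(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), (21, 21), 0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    diff = cv2.absdiff(prev, gray)                         # what changed since the last frame
    thresh = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1]
    thresh = cv2.dilate(thresh, None, iterations=2)
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) &gt; 500:                       # ignore small noise blobs
            x, y, w, h = cv2.boundingRect(c)
            # (x + w/2, y + h/2) is the motion centre; drive your pan/tilt toward it
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("motion", frame)
    prev = gray
    if cv2.waitKey(1) &amp; 0xFF == ord("q"):
        break
</code></pre>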
7316
2015-05-20T13:24:15.110
|cameras|
<p>I'm looking into CCTV, and am interested in minimising costs by having a motion tracking camera cover an area that would otherwise utilise multiple cameras.</p> <p>There is already something like this on the market manufactured by a company called NightWatcher (I think). However, it does not track; it merely senses using 3 PIRs and points the camera in 1 of 3 positions, ping-ponging between them if the subject is between sensors.</p> <p>I like the idea of this, but not the drawbacks, and was wondering if there was anything I could do with an Arduino or similar to achieve a better result.</p> <p>I stumbled across this, but am not entirely sure about it. Also, this is for outside application, and that thread is for indoor (if that makes a difference).</p> <p><a href="https://robotics.stackexchange.com/a/1397/9751">https://robotics.stackexchange.com/a/1397/9751</a></p> <p>Edit...</p> <p>Just in case I have misled you, I want to have a unit where sensors detect movement and a camera then turns to face that position.</p>
How can I make a motion tracking camera?
<p>This 12V vacuum pump from Sparkfun may be of interest to you: <a href="https://www.sparkfun.com/products/10398" rel="nofollow">link</a>.</p> <p>In order to drive this pump you could either:</p> <ol> <li>Use an H-bridge motor driver for both forward and reverse variable-speed control. This would allow you to release your playing cards with some force, but it may not be necessary.</li> <li>If you think you could get away with just stopping the pump to let go of a card, you could simply use a MOSFET to variably control the pump with only a sucking ability and no blowing (see the sketch below).</li> <li>If you don't expect to require variable speed control, you could use a relay to very simply open and close a 12V circuit with the motor.</li> </ol> <p>Good luck.</p>
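<p>A minimal sketch of option 2 on a Raspberry Pi, using software PWM into a logic-level MOSFET gate; the pin number and frequency are placeholders, and on an Arduino you would use <code>analogWrite</code> instead.</p>
<pre><code># Variable-speed suction control: PWM a logic-level MOSFET driving the 12 V pump.
# Remember a flyback diode across the pump motor.
import time
import RPi.GPIO as GPIO

PUMP_PIN = 18                   # placeholder BCM pin wired to the MOSFET gate

GPIO.setmode(GPIO.BCM)
GPIO.setup(PUMP_PIN, GPIO.OUT)
pwm = GPIO.PWM(PUMP_PIN, 100)   # 100 Hz software PWM
pwm.start(0)

pwm.ChangeDutyCycle(80)         # strong suction to pick up a card
time.sleep(2.0)
pwm.ChangeDutyCycle(0)          # stop the pump to release the card
pwm.stop()
GPIO.cleanup()
</code></pre>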
7317
2015-05-20T14:45:29.673
|robotic-arm|actuator|
<p>I'd like to buy a small Vacuum Lifter so that I can move playing cards around with robotics. But my "google-fu" is failing me. I don't really know what search terms to look for... or what webpages to look to find this kind of component.</p> <p>In essence, I want an electronic version of a <a href="http://rads.stackoverflow.com/amzn/click/B001U35OJI" rel="nofollow">Vacuum Pen</a>.</p> <p>I don't really know where to search for this kind of component. I've found pneumatic valves and other complicated machinery... but ideally I'd want a self-contained electronic vacuum pen. Since I'm only planning to move playing cards around.</p> <p>Anyone have an idea where to look for something like this? Thanks.</p>
Vacuum Lifter: Moving Playing Cards
<p>What you're looking for is called an optical flow system: a camera that by recognizing the movement of patterns can estimate the movement of the camera target relative to the camera. You can implement your own optical flow algorithm through the use of <a href="http://robots.stanford.edu/cs223b05/notes/CS%20223-B%20T1%20stavens_opencv_optical_flow.pdf" rel="nofollow">OpenCV</a>, a popular computer vision library.</p> <p>However, it's more likely that you'll want to use existing sensors with SoCs for optical flow built in, such as those found in optical mice. An example commonly used with quadcopters is this <a href="http://copter.ardupilot.com/wiki/common-optional-hardware/common-optical-flow-sensors-landingpage/common-mouse-based-optical-flow-sensor-adns3080/" rel="nofollow">ADNS3080 breakout board</a>, or <a href="https://store.3drobotics.com/products/px4flow" rel="nofollow">this one</a> from 3DRobotics. Good luck!</p>
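<p>If you do want to roll your own with OpenCV, here is a minimal Lucas-Kanade sparse optical flow sketch in Python (my own example, assuming OpenCV 4.x); the median feature displacement between frames gives a rough estimate of image motion.</p>
<pre><code># Sparse optical flow between consecutive frames (OpenCV 4.x assumed).
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=7)
    if p0 is not None:
        p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
        good_new = p1[status.flatten() == 1]
        good_old = p0[status.flatten() == 1]
        if len(good_new):
            flow = np.median(good_new - good_old, axis=0)   # pixels/frame, (dx, dy)
            print("image motion:", flow.ravel())
    prev_gray = gray
</code></pre>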
7320
2015-05-20T22:30:54.867
|localization|
<p>I want to make 3D position and position-change estimates from photos taken by a flying robot. I need suggestions for fast photo matching. </p>
Position estimation from photo fingerprinting
<p>What you are looking for is called <a href="https://en.wikipedia.org/wiki/Serialization" rel="nofollow">serialization</a>. Serialization is the process of creating a string (a serial stream of data) (not necessarily NUL-terminated) from arbitrary data.</p> <p>The issue serialization addresses is the fact that the same data are represented in different computers differently. For example, take this <code>struct</code>:</p> <pre><code>struct A { unsigned int x; unsigned long y; }; struct A a = { 0x12345678, 0x1020304050607080LLU }; </code></pre> <p>This would be laid out in memory in a 32-bit little-endian CPU like this:</p> <pre><code>lowest address highest address +--+--+--+--+--+--+--+--+ |78|56|34|12|80|70|60|50| +--+--+--+--+--+--+--+--+ \____ ____/ \____ ____/ V V x y </code></pre> <p>On a 32-bit big-endian CPU like this:</p> <pre><code>lowest address highest address +--+--+--+--+--+--+--+--+ |12|34|56|78|50|60|70|80| +--+--+--+--+--+--+--+--+ \____ ____/ \____ _____/ V V x y </code></pre> <p>On a 64-bit little-endian CPU like this:</p> <pre><code>lowest address highest address +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ |78|56|34|12|??|??|??|??|80|70|60|50|40|30|20|10| +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ \____ ____/ \____ ____/ \__________ __________/ V V V x padding y </code></pre> <p>On a 16-bit big-endian microcontroller like this:</p> <pre><code>lowest address highest address +--+--+--+--+--+--+--+--+ |56|78|??|??|50|60|70|80| +--+--+--+--+--+--+--+--+ \_ _/ \_ _/ \____ ____/ V V V x padding y </code></pre> <p>In the above, <code>??</code> means irrelevant values where the compiler has introduced padding to align <code>long</code>. The values given to <code>a.x</code> and <code>a.y</code> are intentionally too large to fit in some architectures just so that in this example you could identify what byte belongs to what.</p> <p>Now the issue is that if you simply give the byte-representation of this struct from one computer to another, for example by simply transmitting over the network, the other computer is not guaranteed to see the same data. Imagine if in the example above, the 64-bit computer sends the representation to the 16-bit microcontroller, the <code>x</code> value would have the wrong byte order, and <code>y</code> value would be garbage taken from the padding in the original struct. If the 16-bit microcontroller sends the representation to the 64-bit computer, the <code>x</code> value would be half garbage, and the <code>y</code> value would be read outside the actual buffer sent, which is anything that happens to be there, such as your password.</p> <p>Now that you understand the problem, you need to learn how to serialize your data. If you are interfacing with a system where serialization is already in place, of course you need to follow its rules. However, if you are designing your system, you are free to do it however you like! First, it's a good idea to define a fixed size for your data. For example, in the example above, we could have defined <code>struct A</code> like this:</p> <pre><code>struct A { uint32_t x; uint64_t y; }; </code></pre> <p>Doing this, now you know that <code>x</code> for example is always going to need 4 bytes to transfer, and you should expect to extract 4 bytes on the other side to reconstruct <code>x</code>. 
Note that you still can't send the memory representation of this <code>struct</code> from one computer to another, because both the padding and the byte order could be different.</p> <p>Next, you need to think about in which order you want to send the bytes. Let's say you want to send them in big-endian. You may also need to think about floating point numbers. How are you going to represent them? One possibility is to send them as two numbers: what comes before the <code>.</code> and what comes after it. How would you handle sign? This all depends on your application.</p> <p>So, let's serialize <code>struct A</code> with the assumption that it is going to be presented in big-endian. The code is very simple, you simply put the bytes in the order you define in a string and send it:</p> <pre><code>size_t serialize_struct_A(struct A *data, unsigned char *buf, size_t buf_len) { size_t len = 0; /* we need 12 bytes for struct A, so buffer should have enough size */ if (buf_len &lt; 12) return 0; /* you can either be lazy and specify each byte */ buf[len++] = data-&gt;x &gt;&gt; 24 &amp; 0xFF; buf[len++] = data-&gt;x &gt;&gt; 16 &amp; 0xFF; buf[len++] = data-&gt;x &gt;&gt; 8 &amp; 0xFF; buf[len++] = data-&gt;x &amp; 0xFF; /* or calculate simple formulas to do it in a loop */ for (int i = 56; i &gt;= 0; i -= 8) buf[len++] = data-&gt;y &gt;&gt; i &amp; 0xFF; /* len should be 12 now! */ return len; } </code></pre> <p>Now we know for sure that <code>buf</code> contains the following regardless of the CPU architecture:</p> <pre><code>lowest address highest address +--+--+--+--+--+--+--+--+--+--+--+--+ |12|34|56|78|10|20|30|40|50|60|70|80| +--+--+--+--+--+--+--+--+--+--+--+--+ \____ ____/ \__________ __________/ V V x y </code></pre> <p>We could transfer this, e.g., with libusb like this:</p> <pre><code>unsigned char a_serialized[12]; size_t len = serialize_struct_A(&amp;a, a_serialized, sizeof a_serialized); if (len == 0) { /* error, my buffer was too small */ } int ret = libusb_bulk_transfer(dev, endpoint, a_serialized, len, &amp;transferred, timeout); if (ret != 0) { /* error */ } </code></pre> <p>On the other side, when you receive the buffer, you can deserialize it. Again, the code is simple:</p> <pre><code>int deserialize_struct_A(struct A *result, unsigned char *buf, size_t buf_len) { size_t cur = 0; /* we need 12 bytes for struct A, so we should have received this much */ if (buf_len &lt; 12) return -1; *result = (struct A){0}; /* again, either the lazy way; the casts make sure the shifts happen in a wide enough type */ result-&gt;x |= (uint32_t)buf[cur++] &lt;&lt; 24; result-&gt;x |= buf[cur++] &lt;&lt; 16; result-&gt;x |= buf[cur++] &lt;&lt; 8; result-&gt;x |= buf[cur++]; /* or the compact way; cast before shifting so the high bytes of y are not lost */ for (int i = 56; i &gt;= 0; i -= 8) result-&gt;y |= (uint64_t)buf[cur++] &lt;&lt; i; return 0; } </code></pre> <p>So far, we dealt with unsigned numbers. How would you handle signed numbers? 2's complement representation is quite ubiquitous, so one could think of sending the signed number as if it were unsigned, relying on the fact that the sender and the receiver both use 2's complement representation. If you cannot rely on this, there are a variety of things one could do. 
One simple way would be to use an extra byte for the sign:</p> <pre><code>size_t serialize_int32_t(int32_t data, unsigned char *buf, size_t buf_len) { size_t len = 0; /* we need 5 bytes for int32_t, so buffer should have enough size */ if (buf_len &lt; 5) return 0; if (data &lt; 0) { buf[len++] = 1; data = -data; } else buf[len++] = 0; buf[len++] = data &gt;&gt; 24 &amp; 0xFF; buf[len++] = data &gt;&gt; 16 &amp; 0xFF; buf[len++] = data &gt;&gt; 8 &amp; 0xFF; buf[len++] = data &amp; 0xFF; /* len should be 5 now! */ return len; } int deserialize_int32_t(int32_t *result, unsigned char *buf, size_t buf_len) { size_t cur = 0; int sign; /* we need 5 bytes for int32_t, so we should have received this much */ if (buf_len &lt; 5) return -1; *result = 0; /* again, either the lazy way */ sign = buf[cur++]; *result |= buf[cur++] &lt;&lt; 24; *result |= buf[cur++] &lt;&lt; 16; *result |= buf[cur++] &lt;&lt; 8; *result |= buf[cur++]; if (sign) *result = -*result; return 0; } </code></pre> <p>There are a variety of optimizations that are possible. For example, if you are sure that the architectures on both end are big-endian and therefore the memory layout for integers are identical, you could just transfer the memory representation, for example:</p> <pre><code> /* serialize */ *(uint32_t *)(buf + len) = data-&gt;x; len += 4; /* deserialize */ data-&gt;x = *(uint32_t *)(buf + len); len += 4; </code></pre> <p>Another optimization could be the following. If you have multiple signed integers that you want to send with the method above, you could use one bit for the sign instead of one byte, then pack all those sign bits together in one or more bytes that are not necessarily placed directly before the number. Or, if you are sure that a 4-byte number is always between 0x7FFFFFFF and -0x7FFFFFFF, you can use the most significant bit to store the sign with the rest of the bits showing the absolute value of the integer (this is called a sign and value method of representation for signed integers).</p> <p>Bottom line is that you need to be able to lay out your numbers byte by byte and be able to recover them on the other side of the communication. How you do that entirely depends on yourself.</p> <hr> <p>Note: the code snippets above are not tested!</p>
7327
2015-05-21T12:04:58.903
|arduino|communication|usb|c++|
<p>Folks at Programmers Stack Exchange asked me to ask here:</p> <p>I want to communicate with an Arduino and send integers to it. I code this program in C++. I initially used bulk_transfer(), but it sends only char data.</p> <p>This is the API reference for libusb: <a href="http://libusb.org/static/api-1.0/group__syncio.html" rel="nofollow">http://libusb.org/static/api-1.0/group__syncio.html</a></p> <p>Here is the prototype of bulk_transfer():</p> <p>int libusb_bulk_transfer (struct libusb_device_handle *dev_handle, unsigned char endpoint, unsigned char *data, int length, int *transferred, unsigned int timeout)</p> <p>As you can see, <em>data</em> is an unsigned char pointer, that is, a pointer to a buffer containing <em>length</em> unsigned chars. I can successfully transceive strings. How do I transfer integers with sign?</p> <p>Currently I am thinking about a system in which the Arduino asks for a digit by sending a character and my program sends the number as a reply, followed by the sign, which is requested next. Is this solution viable? Or should I transfer the integer as a string? Is there a better way?</p>
How to transfer signed integers with libusb?
<p>As can be seen in the following two photos from pages 1 and 3 of <a href="http://www.davidhunt.ie/motorised-time-lapse-rail-with-raspberry-pi/2/" rel="nofollow noreferrer">David Hunt's article mentioned in the question</a>, the timing belt forms a U-bend as it passes the motor's drive pulley. Sets of bearings support the back of the belt on each side of the drive pulley. It looks like the centerline of the drive pulley is in line with the rest of the belt. A deeper U than is shown probably would be a good idea.</p> <p>As is clear from descriptions and the circuits shown on <a href="http://www.davidhunt.ie/motorised-time-lapse-rail-with-raspberry-pi/3/" rel="nofollow noreferrer">page 3 of David Hunt's article</a>, the drive motor is an ordinary DC motor operated open-loop rather than being operated as a servo motor. For example, the description says “Sending a pulse of 12v for 150ms to the motor drives the dolly about 3mm”. To move the platform 10 inches, one would pulse the motor about 80 times. If you sent one 150 ms pulse every 45 seconds, the platform would move about 10 inches in an hour.</p> <p><img src="https://i.stack.imgur.com/FY9DY.jpg" alt="upper view of bare drive"> <img src="https://i.stack.imgur.com/bfET3.jpg" alt="lower view of drive pulley"></p> <p><em>Edit re “Is there any advantages to using the "DC open loop motor" vs a servo?” :</em> Generally, using a plain DC motor or gearmotor will be less expensive than using a servomotor. Servomotors typically have an encoder built in or attached on (see pictures at <a href="http://www.tigertek.com/servo-motor-resources/incremental-absolute-encoders.html" rel="nofollow noreferrer">tigertek.com</a> or <a href="http://www.orientalmotor.com/technology/articles/servo-motor-overview.html" rel="nofollow noreferrer">orientalmotor.com</a>, for example) and typically are driven by servoamplifiers. (Hobby or RC servos can be controlled without using a separate servoamplifier.)</p> <p>If high-speed or high-accuracy positioning is required, a servo probably would be a better choice than a non-servoed motor. But otherwise, using a plain motor or gearmotor costs less and leaves you with more latitude for choice.</p> <p>Regarding stepper motors vs servomotors or plain DC motors, (1) stepper motors will cost more per watt of power output, (2) stepper motors generally rotate slower, (3) stepper motors typically need more-complex driving sequences. </p>
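<p>For illustration, the open-loop pulsing described above could look something like the following on a Raspberry Pi driving the motor through an H-bridge or MOSFET. The pin number is a placeholder, and the pulse length and interval are only the article's ballpark values.</p>
<pre><code># Open-loop time-lapse dolly drive: one short motor pulse every interval.
import time
import RPi.GPIO as GPIO

MOTOR_PIN = 17            # placeholder BCM pin driving the motor switch
PULSE_S = 0.150           # ~150 ms pulse moves the dolly about 3 mm
INTERVAL_S = 45.0         # one pulse every 45 s gives roughly 10 inches per hour
NUM_PULSES = 80

GPIO.setmode(GPIO.BCM)
GPIO.setup(MOTOR_PIN, GPIO.OUT, initial=GPIO.LOW)

for _ in range(NUM_PULSES):
    GPIO.output(MOTOR_PIN, GPIO.HIGH)
    time.sleep(PULSE_S)
    GPIO.output(MOTOR_PIN, GPIO.LOW)
    time.sleep(INTERVAL_S - PULSE_S)

GPIO.cleanup()
</code></pre>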
7339
2015-05-24T04:25:54.070
|otherservos|
<p>I'd like to create a camera slider similar to <a href="http://www.davidhunt.ie/motorised-time-lapse-rail-with-raspberry-pi/2/" rel="nofollow noreferrer">this one</a>. <img src="https://i.stack.imgur.com/9aq9s.jpg" alt="enter image description here"></p> <p>The only part I'm not sure on is how to setup the camera drive. It looks like I can buy a similar <a href="https://www.servocity.com/html/xl_timing_belt___foot.html#.VWFNmWRVhHx" rel="nofollow noreferrer">timing belt here</a>, but I'm not sure how to set up the servo to drive the slider. Particularly how to keep the belt in contact with the drive pulley. </p> <p>My fabrication skills are very limited so I need a simple or out of the box solution.</p>
Driving a non-circular timing belt
<p>Take an image, then take another with a little zoom out (or from a slightly different viewpoint). Crop patches around two specific points in the first image and template-match them in the second. Then calculate the difference in their positions; this gives you the relation between the two images, and together with the camera angle you can compute depth.</p>
7344
2015-05-25T05:56:29.237
|localization|
<p>There are many 2D position estimation techniques that use one camera. Is there any 3D position estimation application or technique with one camera? If there is no such application or technique, why not?</p>
Image based 3d position estimation with one camera
<p>Did you have a look into <strong>Orocos</strong> <a href="http://www.orocos.org/" rel="nofollow">http://www.orocos.org/</a>? Compared with ROS it has the huge advantage of real-time capability. That would be my choice for implementing complicated real-time controllers.</p> <p>In my work (<a href="http://www.gpu-voxels.org" rel="nofollow">http://www.gpu-voxels.org</a>) I am also using a GPU for 3D collision avoidance calculations on our robots, but I am using our own <strong>MCA2</strong> <a href="http://www.mca2.org" rel="nofollow">http://www.mca2.org</a> framework for the real-time stuff. Our actuators are all controlled via CANopen interfaces.</p> <p>30 Hz should be no problem at all! We run some motor controllers at 100 Hz and more. And my GPU (NVidia TITAN) processes 3D point clouds from multiple Kinect cameras at 30 Hz, so 2D image processing should be possible, even on a laptop GPU.</p>
7359
2015-05-28T00:07:30.850
|control|real-time|
<p>I would like to make a robotic system which takes as input a video feed, runs some GPU-based image recognition on the video, and outputs commands to a set of motors. The goal is to have the motors react to the video with as little latency as possible, hopefully of the order of 10s of ms. Currently I have a GTX 770m on a laptop running Ubuntu 14.04, which is connected to the camera and doing the heavy image processing. This takes frames at 30Hz and will output motor commands at the same frequency.</p> <p>After a few days of looking around on the web for how to design such a system, I'm still at a loose end whether (a) it is even feasible (b) if so, what the best approach is to interface the laptop with the motors? The image processing must run on Linux, so there is no leeway to change that part of things.</p>
Interfacing GPU image processing with motor control at 30+Hz
<p><em>In addition to @Shahbaz</em> </p> <p>According to <a href="http://www.springer.com/us/book/9783540239574" rel="nofollow">this book</a>, multiple mobile robot systems are the main topic and swarm robotics is a subtopic; both of them are motivated by the following:</p> <ol> <li><p>the task complexity is too high for a single robot to accomplish</p></li> <li><p>the task is inherently distributed</p></li> <li><p>building several resource-bounded robots is much easier than having a single powerful robot</p></li> <li><p>multiple robots can solve problems faster using parallelism</p></li> <li><p>the introduction of multiple robots increases robustness through redundancy</p></li> </ol> <p>Swarm robots: historically, some of the earliest work in multirobot systems dealt with large numbers of <strong>homogeneous</strong> robots, called swarms. </p> <p><strong>This means multi-robot systems can be homogeneous or heterogeneous, but swarm systems have to be homogeneous.</strong></p> <p>A multi-agent system is a computerized system composed of multiple interacting intelligent agents within an environment. The agents can also be robots.</p>
7363
2015-05-28T11:37:49.893
|simulation|multi-agent|swarm|
<p>What is the difference between multiple robots and swarm robots? What is the key point? Also, what are multi-agent systems? Do multi-agent systems work only for computer simulations or games? These terms are used in similar applications.</p>
What is the difference between Multiple robots and swarm robots?
<p>There are <a href="http://en.m.wikipedia.org/wiki/Solenoid" rel="nofollow">solenoids</a>, which are electromagnetic linear actuators, and there are <a href="http://en.wikipedia.org/wiki/Linear_variable_differential_transformer" rel="nofollow">linear variable differential transformers (LVDTs)</a>, which are electromagnetic linear position sensors. </p> <p>I don't think you could use one for positioning and sensing because solenoids operate <a href="http://www.ledex.com/solenoid-applications/ac-vs-dc-solenoids.html" rel="nofollow">primarily on DC</a> while LVDTs operate on AC. Additionally, solenoid positioning is generally poor and they do not output a position but rather a force, like pneumatics or hydraulics. </p> <p>Hydraulics can utilize the fact that their working fluid is essentially incompressible for position control - i.e., proportional control until position is reached then just shut inlet and outlet valves, but "compressible" actuators such as pneumatics and solenoids have a SIGNIFICANTLY more difficult time achieving performance even remotely on par with hydraulics. </p> <p>Controllers for pneumatic positioning exist, as I'm sure they do for solenoids as well, but solenoids are primarily used for on/off switching. Anything more advanced and you're looking at an expensive position controller. LVDTs are also expensive. </p> <p>Any particular reason a regular geared linear actuator is insufficient? </p>
7378
2015-05-30T05:21:43.700
|motor|actuator|
<p>Are there electric motors, which apply force not in rotational motion, but in longitudinal motion?</p> <p>They should have electromagnetic design and no gears and worms.</p> <p>Such motors would be great for linear actuators. They could transfer both force and feedback.</p> <p>What is the name of such devices?</p>
Electric piston (longitudinal electric motor)?
<p>This is one of the basic problems in planning robot trajectories: the transformation from configuration space to workspace via direct kinematics is unambiguous, while the inverse way from workspace to configuration space is ambiguous (in all cases where Dim(Config-Space) &gt; Dim(Workspace)).</p> <p>So what almost all planners do (especially the sampling-based ones like RRT) is sample in configuration space, transform the sample to the workspace and perform a collision check there. If no collision occurs, the sample is added to the planning graph.</p> <p>The same has to be done for longer trajectories in between the samples: break them down into smaller subtrajectories (&lt; robot/end-effector size) and do a collision check for each waypoint.</p>
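<p>In pseudocode-like Python, the sampling loop looks something like this; <code>forward_kinematics</code> and <code>in_collision</code> are stand-ins for your own arm model and workspace collision checker, not a particular library's API.</p>
<pre><code># PRM/RRT-style validity checking: sample in C-space, collision-check in the workspace.
import numpy as np

def sample_config(joint_limits):
    """Uniform random sample in configuration space (one value per joint)."""
    lo, hi = joint_limits[:, 0], joint_limits[:, 1]
    return lo + np.random.rand(len(lo)) * (hi - lo)

def config_valid(q, forward_kinematics, in_collision):
    """Map the sample to the workspace and check it against the obstacles there."""
    link_geometry = forward_kinematics(q)      # poses/shapes of every link
    return not in_collision(link_geometry)

def edge_valid(q_from, q_to, forward_kinematics, in_collision, step=0.05):
    """Subdivide the straight C-space segment and check every intermediate waypoint."""
    n = max(2, int(np.linalg.norm(q_to - q_from) / step))
    for t in np.linspace(0.0, 1.0, n):
        if not config_valid(q_from + t * (q_to - q_from), forward_kinematics, in_collision):
            return False
    return True
</code></pre>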
7383
2015-05-30T21:50:40.870
|robotic-arm|motion-planning|
<p>I'm working with a 4DOF Parallel-Mechanism arm. I'm interested in writing planners for this arm (PRM or RRT) in the configuration space, but I'm not sure how to identify obstacles/collisions. </p> <p>When writing planners for mobile robots in a 2d workspace, it was easy to define and visualize the workspace and obstacles in which the planner/robot was operating. This website (<a href="http://www.cs.unc.edu/~jeffi/c-space/robot.xhtml" rel="nofollow">link</a>) shows a great example of visualizing the workspace and configuration space for a 2DOF arm, but how can I do this for higher dimensions?</p>
Determine the configuration space for a robotic arm
<p>Creating 3D models with this method is very compute intensive. 123D uses many pictures (at least 20) and examines them for feature points that are common to several pictures; by examining how those points change between pictures it can build up a 3D point cloud, which is then textured using the pictures. This is very resource intensive; it could be done by a BeagleBone or Raspberry Pi, but it would take a very long time. If you wanted to make your own system to 3D-scan objects with cameras, it is going to be a lot easier to have an array of them, pre-calibrated to work in sync, to generate models. This is how many professional setups work.</p>
7385
2015-05-31T00:42:17.830
|computer-vision|3d-printing|3d-reconstruction|3d-model|
<p><a href="http://www.123dapp.com/catch" rel="nofollow">123D software</a> can construct a 3D model from photos taken with your phone. It doesn't process the photos on your phone. Instead, it sends them to the cloud to create the 3D model. How can I construct a 3D model like this <strong>(only with one camera)</strong>? I searched but I can only find information on laser/projector scanners (simple and desktop use only). I think 123D uses only IMU sensors and a camera, so <strong>why do they use the cloud</strong>? Can a BeagleBone or Raspberry Pi create 3D models like this?</p>
3D scanner from Phone Camera
<p>I think I would solve this as I believe you did, where I find a rotation matrix $E$, a translation vector $\overrightarrow {dr}$, and I locate point P in frame B by adding it to the origin of frame B; $\overrightarrow P_{B} = \overrightarrow P + F_B$. Recall that you are "unrotating" <em>from</em> frame B <em>to</em> frame A, so you need to transpose the rotation matrix. Assemble the $T$ matrix: $$ E_{A\rightarrow B} = \begin{bmatrix} 1 &amp; 0 &amp; 0 \\ 0 &amp; cos(-60) &amp; sin(-60) \\ 0 &amp; -sin(-60) &amp; cos(-60) \end{bmatrix} \\~\\~\\ \\ E_{B\rightarrow A} = E_{A\rightarrow B}^T\\~\\~\\ \overrightarrow {dr} = \begin{bmatrix} 0 \\ 20 \\ 15 \end{bmatrix} \\ T_{B\rightarrow A}= \begin{bmatrix} E_{B\rightarrow A} &amp; \overrightarrow {dr} \\ 0 &amp; 1 \end{bmatrix} $$ Now, multiply $T \overrightarrow {P_B}$ to get $\overrightarrow {P_A}$, which is the same answer you gave in your statement. I believe you did everything correct. I actually drew the problem on a piece of paper to verify the solution. You are also correct that the second problem (2.4) is the same problem, just reworded. Again, try drawing it on a sheet of paper.</p>
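<p>A quick numpy check of the computation described above (my own snippet, not from the book):</p>
<pre><code># Verify P_A = T * P_B for the 60-degree rotation about X plus the [0, 20, 15] translation.
import numpy as np

t = np.radians(-60)
E_AB = np.array([[1, 0, 0],
                 [0, np.cos(t), np.sin(t)],
                 [0, -np.sin(t), np.cos(t)]])
E_BA = E_AB.T                      # "unrotate" from frame B back to frame A
dr = np.array([0.0, 20.0, 15.0])

T = np.eye(4)
T[:3, :3] = E_BA
T[:3, 3] = dr

P_B = np.array([0.0, 8.0, 7.0, 1.0])
print(T @ P_B)                     # prints approximately [0, 30.062, 11.572, 1]
</code></pre>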
7386
2015-05-31T12:29:18.387
|forward-kinematics|books|
<p>I am reading the book "Introduction to Robotics Mechanics &amp; Control", John J Craig., 3rd Ed., Forward transformation problem Examples 2.2 and 2.4.</p> <p>Ex. 2.2 (Page 29): Frame {B} is rotated relative to frame {A} about the X axis by 60 degrees clockwise, translated 20 units along the Y axis and 15 units along the Z axis. Find P in frame {A} where P in frame {B} = [0 8 7]</p> <p>The book's answer is [0 18.062 3.572]. But my answer is [0 30.062 11.572].</p> <p>Ex. 2.4 (Page 33): Vector P1 has to be rotated by 60 degrees clockwise about the X axis and translated 20 units along the Y axis, and 15 units along the Z axis. If P1 is given by [0 8 7], find P2.</p> <p>Essentially Ex. 2.2 and 2.4 are the same problem. However, the transformation matrix for Ex 2.4 has [0 8 7] as the translation vector (the 4th column of T) instead of [0 20 15]. And, the given answer is [0 18.062 3.572].</p> <p>I am not sure if it is just a typo, or if I am missing some genuine operation. Please let me know your opinion.</p> <p>Thanks.</p>
Introduction to Robotics Mechanics & Control, John J Craig., 3rd Ed., Forward transformation problem Examples 2.2 and 2.4
<p>For a fingerprint-matching model, try working with the SVM (support vector machine) algorithm; it is a supervised learning model. For localization purposes you need to match the ridges and bifurcations found in fingerprints. </p>
7388
2015-05-31T14:23:09.897
|mobile-robot|localization|
<p>This <a href="http://www-personal.umich.edu/~johannb/Papers/paper64.pdf" rel="nofollow">paper</a> mentioned the fingerprinting/model matching case, but I could not find an image-based algorithm. Any suggestions about image-based localization?</p>
Fingerprinting/ model matching algorithms for localization
<p>If you are looking for an implementation that uses a microcontroller, you can opt for RF, which allows a much broader communication range than IR can offer.</p> <p>Sparkfun offers transmitters and receivers for about $4-5 each.</p> <p><a href="https://www.sparkfun.com/products/10534" rel="nofollow">Transmitter</a><br/> <a href="https://www.sparkfun.com/products/10532" rel="nofollow">Receiver</a></p>
7395
2015-06-01T10:15:46.587
|wireless|
<p>I'm relatively new to robotics, and I'm building a project for which I need a simple wireless connection between two circuits such that when the first circuit is switched on, the other circuit gets switched on too. I'm looking to preferably build something like this on my own, but I have no idea about wireless connections. I only know basic wired robotics. I also know C++ programming if that helps. Apologies if such a question has already been asked.</p> <p>Regards, Hanit Banga</p>
Simple wireless connection between two circuits
<p>I am a huge fan of the <a href="https://www.stereolabs.com/" rel="nofollow">ZED</a>: it is easy to use and comes with pre-built modules to use with OpenCV and some great examples, but at $500 it might be out of your price range. I have also had plenty of success with two cheap webcams; the best thing to do is mount them on something like a metal bar after removing any pivot mechanism, so they will always stay aligned.</p>
7397
2015-06-01T15:02:23.207
|stereo-vision|
<p>I'm looking for some cheap hardware that would offer me results decent enough to continue my experimentation.</p> <p>I've been looking into how to obtain hardware for learning about stereo vision and 3D reconstruction, and I found two basic ways:</p> <ul> <li>buy 2 cheap webcams and DIY</li> <li>buy a stereo camera</li> </ul> <p>From what I understood, little variations in distance and inclination can easily compromise the disparity map, so the DIY version might end up requiring constant recalibration; on the other hand, buying a "professional" stereo camera ranges from 500 euro to infinity.</p> <p>For the moment I'm trying something in between, like the Minoru 3D, however the overall performance of that camera looks a bit poor, also because it's a 2009 product, and I can't find any more recent product offering a similar solution.</p> <p>Can you suggest the best way/product/guide to achieve decent results without spending a fortune?</p> <p>Thank you very much :)</p>
Selecting hardware: stereo camera for beginners
<p>Did you initialize libusb?</p> <p>From <a href="http://libusb.sourceforge.net/api-1.0/group__lib.html#initialize" rel="nofollow">the Sourceforge documentation:</a></p> <blockquote> <p>int libusb_init ( libusb_context ** context ) </p> <p> Initialize libusb.</p> <p>This function must be called before calling any other libusb function.</p> </blockquote> <p>Seems like a trivial thing to ask but I didn't see it in your code, but I also don't see anything blatantly wrong with your code.</p>
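<p>For reference, a minimal initialization sequence would look something like the following before entering your transfer loop (the vendor/product IDs here are only an example for an Uno-style board; use whatever <code>lsusb</code> reports for your Arduino):</p> <pre><code>libusb_context *ctx = NULL;

if (libusb_init(&amp;ctx) &lt; 0) {
    // initialization failed; nothing else will work
    return 1;
}

// open by vendor/product id (0x2341/0x0043 is one common Arduino Uno id - check yours)
dev_handle = libusb_open_device_with_vid_pid(ctx, 0x2341, 0x0043);
if (dev_handle == NULL) {
    return 1;
}

// make sure the kernel serial driver isn't holding the interface
if (libusb_kernel_driver_active(dev_handle, 0) == 1) {
    libusb_detach_kernel_driver(dev_handle, 0);
}
libusb_claim_interface(dev_handle, 0);
</code></pre>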
7409
2015-06-03T04:22:50.080
|arduino|communication|usb|
<p>I am doing a line following robot based on opencv. I have my onboard computer(an old pandaboard) running opencv. It will calculate the offset from the required path and communicate it to the arduino via USB. Then it will do PID optimisation on the data, and adjust the speed of the left and right motors.</p> <p>To my dismay the communication part is not working, and I've tried hard for a day to fix it with no result. Here is the relavent code running on the pandaboard:</p> <pre><code>while(1) { r = libusb_bulk_transfer(dev_handle, 131, recieve, 1, &amp;actual, 0); cout&lt;&lt;"r="&lt;&lt;r&lt;&lt;endl; int a; cin&gt;&gt;a; imgvalue=krish.calc_offset(); send[0]=imgvalue&amp;0xff; send[1]=imgvalue&gt;&gt;8; //make write cout&lt;&lt;"Data to send-&gt;"&lt;&lt;imgvalue&lt;&lt;"&lt;-"&lt;&lt;endl; //just to see the data we want to write : abcd cout&lt;&lt;"Writing Data..."&lt;&lt;endl; r = libusb_bulk_transfer(dev_handle, (4 | LIBUSB_ENDPOINT_OUT), send, 2, &amp;actual, 0); //my device's out endpoint was 2, found with trial- the device had 2 endpoints: 2 and 129 if(r == 0 &amp;&amp; actual == 2) //we wrote the 4 bytes successfully cout&lt;&lt;"Writing Successful!"&lt;&lt;endl; else cout&lt;&lt;"Write Error"&lt;&lt;endl; } </code></pre> <p>where <code>imgvalue</code> is the data to be send. This is the code running on the Arduino:</p> <pre><code>void loop() { Serial.write('s'); if(Serial.available()&gt;0) Input_tmp = Serial.read(); if(Serial.available()&gt;0) Input_tmp = Input_tmp | (Serial.read() &lt;&lt; 8); Input=Input_tmp; myPID.Compute(); // adjust the motor speed } </code></pre> <p>What happens when I run is that it will pause at the libusb read operation as the timeout is zero(infinity). At this point I've tried resetting the arduino, but this doesn't help. So how do I make my program respond to this start byte send my the Arduino? Where did I go wrong?</p>
Libusb and arduino communication not working
<p>I'm posting this as an answer because it is a bit lengthy for the comments section, and I hope it will point you in the right direction.</p> <p>First, you mention an "outer P loop" and an "inner D loop", which, to me, would imply that you are performing those two tasks at different rates. This is incorrect for any form of a PID controller (P, PI, PD, etc.)</p> <p>All of your gains, proportional, integral, derivative, should act on the respective error measurement. If you are commanding a speed, proportional error is the difference between command and actual speed. Integral error is the difference between desired and actual position, and derivative error is the difference between reference and actual acceleration.</p> <p>You have the option of either calculation the integral/derivative of your reference speed, the actual speed, and subtracting to get the error, OR you can calculate the integral/derivative of the speed error and use that directly.</p> <p>Once you have all the error terms, you multiply each by its respective gain and <strong>add all the terms</strong> to get the control signal. I emphasize add the terms to reinforce that you need all the terms to get the control signal - you shouldn't be calculating one at a faster rate than the other.</p> <p>What I believe you are missing is what you had asked about <a href="https://robotics.stackexchange.com/questions/7400/quadcopter-force-torques-duty-cycle-conversion">in your other question</a> - it seems like your Matlab model is missing the transfer function that converts from PWM to speed, and maybe from speed to torque or lift.</p> <p>I would imagine the relationship between PWM and speed is linear - lowest PWM signal gives some minimum speed (y-intercept), up to full speed at 100%. Then, as the paper you linked in your other question suggests, try a constant k1/k2 to get from speed to lift/torque.</p> <p><a href="http://aeroquad.com/attachment.php?attachmentid=10414&amp;d=1421496243" rel="nofollow noreferrer">This paper</a> (PDF) seems to give the answer you're looking for (on page 8), and <a href="http://andrew.gibiansky.com/blog/physics/quadcopter-dynamics/" rel="nofollow noreferrer">this page</a> gives a really good derivation of equations of motion. </p> <p>So, back to your specific questions, regarding point 1, you can either operate up to "full throttle" with no modifications, or you can leave a "reserve" (overhead) for maneuvering, or you can have sliding gains to account for a lack of headroom. If you want to rotate, you speed two props up and slow two down. At full throttle, two will slow, but the other two can't go any faster. Maneuvering is cut (for all intents and purposes) in half. You can leave headroom to allow two to speed up, but this restricts your top speed. You could modify gains to (hand waving) slow the two props twice as fast. This makes up for the other two not speeding up at all because you have the same differential. </p> <p>The choice here is up to you, but because the controller is "watching" craft performance as compared to your commands it shouldn't matter too much. The controller isn't going to slow a pair and speed a pair by a fixed amount, it's going to slow/speed until the desired command inputs have been met. It won't respond quite as quickly as you would expect, though, because...</p> <p>Point 2 - actuator saturation. This refers to passing a reference that is of such a magnitude that the actuator cannot comply. The output of your controller is the output of your controller. 
<em>If you have modeled the system well</em>, then the output should be what you supply as an input to your system. If the actuator saturates, it saturates. This would be something like requesting a speed of 1800rpm from a 1200rpm motor, or a force of 150lb from a 100lb hydraulic cylinder - you are asking the actuator to do something it physically cannot do. </p> <p>In your case the gains may cause the system to saturate - your PWM signal "should be" (per the controller), at 150%, but you can only command 100%. So you truncate to whatever the max is (100%), and send that. When the controller "sees" that the system is responding and approaches the reference command, it will back off on its own. This is the great thing about a controller. The output signal doesn't matter <em>too much</em> because it's looking for a response as compared to your command, not any kind of absolute. </p> <p>That is, if you command a yaw rate of 30 deg/second, but you can only accelerate 1 deg/s^2, your controller may saturate your input asking "full throttle". Once you finally accelerate to the desired yaw rate, the controller will back off <em>on its own</em> to hold the desired command. Note that the controller will always try to meet the performance goals you specified when choosing the gains; <strong>it is up to you to spec the actuators to meet those performance specs.</strong> </p> <p>Mostly this is to say that, if your gains look like they're too high, you may have either asked for more performance than your craft is capable of delivering or you have not modeled your system correctly. If you're close it will generally work, but if your gains are too far off (your model is too incorrect), then you will wind up with either no response or you will wind up railing the controller, where the controller says, "I'm too slow - Full throttle!" and then a moment later, "I'm too fast, turn it off!". This is caused by gains that are too high. </p> <p>You can always <a href="http://sts.bwk.tue.nl/7y500/readers/.%5CInstellingenRegelaars_ExtraStof.pdf" rel="nofollow noreferrer">tune the gains manually to achieve desired performance</a> (section 4 of this PDF - Zeigler Nichols tuning), but I think you'll find that PID control of a MIMO system is nigh impossible due to coupling between input/output channels. I think you would be happiest with <a href="http://en.wikipedia.org/wiki/Full_state_feedback" rel="nofollow noreferrer">a state feedback controller</a>, but that's not what you asked about so I'll just post the link and leave it at that.</p> <p>Good luck!</p>
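<p>To make the "one loop, all terms" point concrete, here is a minimal sketch (the names and gain values are placeholders, not tuned numbers) of a single-rate PID update; the same function would be called once per 100 Hz sample for each axis:</p> <pre><code>// One PID channel, updated every sample period dt (e.g. 0.01 s at 100 Hz).
// P, I and D are all computed here, in the same loop, and summed into one output.
struct Pid {
    float kp, ki, kd;      // placeholder gains - tune these
    float integral;        // accumulated error
    float prevError;       // error from the previous sample
};

float pidUpdate(Pid &amp;c, float reference, float measurement, float dt)
{
    float error = reference - measurement;
    c.integral += error * dt;                         // integral of the error
    float derivative = (error - c.prevError) / dt;    // derivative of the error
    c.prevError = error;
    return c.kp * error + c.ki * c.integral + c.kd * derivative;
}
</code></pre>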
7415
2015-06-03T15:42:08.500
|control|quadcopter|pid|matlab|
<p>I'm trying to design two PD controllers to control the roll and pitch angles of my quadcopter and a P controller to control the yaw rate. I give the system the reference roll, pitch and yaw rate from a smartphone controller (over WiFi). In the case of roll and pitch, the feedback for the outer 'P' loop is given by my attitude estimation algorithm, while in the inner 'D' loop there is no reference angular rate and the feedback is provided by a filtered version of the gyroscope data. As far as the yaw rate is concerned, it is only a P controller: the reference yaw rate is given by the smartphone, and the feedback of the single loop is provided by the smartphone. This is to illustrate the situation. My sampling frequency is 100 Hz (imposed by the attitude estimation algorithm, a Kalman filter, that I'm using). I have tuned my controller gains with Matlab, imposing a rise time of 0.1 seconds and a maximum percent overshoot of 2% with the root locus. Matlab is able to find a solution, but with very large gains (like 8000 for P and 100 for D). I did the tuning using a quadcopter model (for each Euler angle) based on the linearized quadcopter model, for instance: $$\tau_\Phi = I_x\ddot \Phi \;\rightarrow\; G_\Phi(s) = \frac{1}{I_x s^2} $$ only in order to have a 'reasoned' starting point for my gains, and then re-tune them in reality. (The transfer function above is continuous; in my model I have obviously used the discrete version at the 100 Hz sampling rate.) This is the premise of my following questions. Now, I have to map my controller outputs to a duty cycle. Since I'm using PWM at 25 kHz, my period (in the TIM channel configuration) is 2879. I have checked the activation threshold (after which the motor starts to move) and the threshold after which it stops increasing its speed: the first is 202 and the second is 2389. I was following the very good answer of <a href="https://robotics.stackexchange.com/questions/2964/quadcopter-pid-output?lq=1">Quadcopter PID output</a> but I still have some questions.</p> <p>1) As far as the throttle mapping is concerned, do I have to map it in such a way that the values coming from my smartphone controller (in the interval [0, 100]) are not mapped onto the whole [202, 2389] interval, i.e. do I have to 'reserve' some speed in order to allow the quadcopter to make angular movements, exploiting differences in the 4 motor speeds, even at 100% throttle?</p> <p>2) Coming back to the fact that Matlab proposes huge gains for my controllers, this means that I cannot directly sum the controller output to the duty cycle as stated in the mentioned answer (because I will certainly go out of the [202, 2389] bound of my TIM pulse). Scaling the output proportionally will result in altering the gains of the system, so the poles of my system will be placed somewhere else and the procedure done with Matlab will become useless, right? So, what am I doing wrong? I have tried to force Matlab to bound the gains, for instance in the [0, 100] interval, but in this case it cannot find gains such that my constraints are satisfied. Thank you</p>
Quadcopter PID output and duty cycle conversion
<p>Most professionals would be researching in particular subfields.</p> <p>When you say:<br> Bot, I'll assume robot and not a natural language processing and response system. (You posted in robotics after all.) </p> <p>Human-Like: Looks? Synthetic Skin <br> Motion? mechanical engineering <br> Intelligence? Mind? neural networks, planning, machine learning, psychology <br> capabilities? data processing <br></p> <p>You leave way too many open questions for an intelligent response. <br> All of these things are open enough that any tool could lead to an advance, while no particular tool gives you much advantage in all places. </p> <p>Plus when you go for the singularity, you can program it in Basic, and then it will work its own way forward to reprogram itself fast enough that it wouldn't matter. </p> <p>I hope this helps a tiny bit. </p>
7416
2015-06-03T17:43:19.260
|design|software|artificial-intelligence|programming-languages|
<p>Currently using Windows 8: what software packages for artificial intelligence programming (robotics branch) are used as standard in today's professional environment? Lots of internet suggestions, but companies seem to keep this a closely guarded secret. And are the internet rumors true? Would switching to Ubuntu offer me more in terms of depth?</p> <p>Context: Educational field: Computer Science and Artificial Intelligence; current focus in programming languages (though obviously with experience in others) stands at C++, C and Python. Looking to build, program and develop a human-like bot (NOT aiming for singularity at this point ;)) and am asking this question in order to build my toolbox a little. </p>
Artificial Intelligence Software Packages: Professionals, University education is oft' a step behind. What's actually being used?
<p>BCM2070 with Microsoft Bluetooth Enumerator!</p>
7423
2015-06-04T07:00:37.823
|electronics|
<p>I have an application that requires data to be streamed from multiple Bluetooth modules to one host controller. Somewhat like multiple Clients and one Server. </p> <p>The throughput i am looking at is around 1920-bits per second per module. The <a href="https://www.google.co.in/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=1&amp;cad=rja&amp;uact=8&amp;ved=0CCwQFjAA&amp;url=http%3A%2F%2Fwww.st.com%2Fweb%2Fen%2Fcatalog%2Fsense_power%2FFM1968%2FCL1976%2FSC1324%2FPF253470&amp;ei=ivZvVbG3DZafugT99oCIBA&amp;usg=AFQjCNE3rd0Y5qAVPf9wHGOxc3klSjLEfg&amp;sig2=kQg3v-7wIdE7NDGiFL688Q&amp;bvm=bv.94911696,d.c2E" rel="nofollow">SPBT2632C2A.AT2</a> module only supports SPP profile in which i can have a single link (One Client One Server). My application needs multiple modules ( Max 5) to send information to one server. </p> <p>Is there a way to have One Receiving Station and have multiple transmitting module using SPP? (All modules being the SPBT2632C2A), or i need a Different higher end module on the server side which supports multiple SPP Links?</p> <p>It advisable to look into a module like the BCM2070 and have a driver run system?</p>
Many to One Bluetooth Communication Link
<p>Use the bits in your array to create a connectivity map, assign distances between points (but it sounds like the map is uniformly sampled), then implement some version of <a href="http://en.wikipedia.org/wiki/Dijkstra&#39;s_algorithm" rel="nofollow">Dykstra's algoritm</a>.</p> <p>The map you have can be parsed to see which non-wall tiles have neighbors. Use this, with a regular numbering scheme, to establish the connectivity graph. Each location on your map will have a distance associated with each neighbor. Dykstra uses the map and distances to find the shortest path from one point to another. One output is total distance, another output is a list of way points to get from A to B. The way points will be a list of every neighbor along the path, then all you have to do is get the car to move from one point to another. </p> <p><strong>:EDIT:</strong></p> <p>I started this as a comment, but it's too long and this is really more information on the answer, so I put it here. </p> <p>Matlab has a neat trick where you can <a href="http://matlabtricks.com/post-23/tutorial-on-matrix-indexing-in-matlab" rel="nofollow">access values of a matrix with only one number</a> (called linear indexing) instead of the usual row/col argument. Your map will be an arbitrary size, like the 64x96 you mention. In it you have a 1 if the craft is allowed there (is a path) and a 0 of the craft is not allowed there (is a wall). </p> <p>So you should be able to display your map matrix, which again is of arbitrary size, and see all of the paths and walls. </p> <p>Now to create to connectivity map (the directed graph, or digraph), you want to cycle through all of the points in your map and see which neighbors are adjacent. Note that the linear indexing I linked to shows that Matlab indexes increasing from the top of col1 to the bottom of col1, then top of col2, etc. Linear indexing is great because it gives each location on your map <em>one number</em> with which to refer to that position. </p> <p>Let's assume your map has N locations, where N is the length x width of your map (N = 64x96 = 6144 in your example). Now your digraph matrix should be NxN, because you're looking for a relationship between any one location on your map and any other location. Let's name your digraph "Digraph" for simplicity. </p> <p>This setup means that you should be able to call digraph(a,b) and get a value of 1 if $a$ and $b$ are connected and 0 if they are not, where again $a$ and $b$ are linear indices that refer to one location on your arbitrary map. </p> <p>So, for some implementable Matlab code (writing this on the fly on my phone, so please forgive any errors):</p> <pre><code>Map = load(map.mat); nRows = size(Map,1); nCols = size(Map,2); mapSize = size(Map); N = numel(Map); Digraph = zeros(N, N); for i = 1:nRows for j = 1:nCols currentPos = sub2ind(mapSize,i,j); % left neighbor, if it exists if (j-1)&gt; 0 destPos = sub2ind (mapSize,i,j-1); Digraph(currentPos,destPos) = Map(currentPos)*Map(destPos); end % right neighbor, if it exists if (j+1)&lt;=nCols destPos = sub2ind (mapSize,i,j+1); Digraph(currentPos,destPos) = Map(currentPos)*Map(destPos); end % top neighbor, if it exists if (i-1)&gt; 0 destPos = sub2ind (mapSize,i-1,j); Digraph(currentPos,destPos) = Map(currentPos)*Map(destPos); end % bottom neighbor, if it exists if (i+1)&lt;=nRows destPos = sub2ind (mapSize,i+1,j); Digraph(currentPos,destPos) = Map(currentPos)*Map(destPos); end end end </code></pre> <p>A couple things to note here - 1. 
This populates the digraph with zeros for every entry of Digraph(a,a). This is a matter of preference, but if you prefer you can set all of those entries to 1 (showing that $a$ is connected to itself) with the command Digraph = Digraph + eye(N);</p> <ol start="2"> <li><p>If you use 1's and 0's in your map, then Map (currentPos)*Map (destPos) is either 1 if both entries are 1 or 0 if either entry is a 0. That is, it doesn't matter which is the wall, you equally can't move from wall to path or path to wall. </p></li> <li><p>If you just use a regular map you should get a symmetric matrix. The nice thing about a Digraph is that it doesn't <em>need</em> to be symmetric. That is, you could have wormholes/tunnels/portals/chutes and ladders where one point on the map is connected to some remote point on the map.</p></li> <li><p>Further, these tunnels can be bi-directional (two way), or they can be uni-durectional. That is, Digraph(a,b) could be 1 (path exists) where Digraph(b,a) is zero (path does not exist).</p></li> <li><p>If there are such tunnels, you need to add them by hand after the script above runs. </p></li> <li><p>Just like sub2ind gives you the linear index given conventional subscripts, ind2sub gives you subscripts given the linear index. </p></li> <li><p>Finally, as a note, the Dykstra algorithm should give you a series of way points, but these will all be in linear index form. This is why I point out the ind2sub function above. </p></li> </ol> <p>Hope this helps. Please feel free to comment if you have any more questions. </p> <p><strong>:EDIT EDIT:</strong> Linear indexing is just a way to refer to a matrix location with one value. You can brew your own as follows (again in Matlab form not c++, though they are similar):</p> <pre><code>nRows = size(matrixToBeIndexed,1); nCols = size(matrixToBeIndexed,2); N = numel(matrixToBeIndexed); sub2ind = zeros(nRows,nCols); ind2subROW = zeros(1,N); ind2subCOL = zeros(1,N); for i = 1:nCols for j = 1:nRows linearIndex = j + (i-1)*nRows; sub2ind(j,i) = linearIndex; ind2subROW(linearIndex) = j; ind2subCOL(linearIndex) = i; end end </code></pre> <p>In this way you can pass subscripts to sub2ind, now a matrix instead of a function, and get the linear index, and you can pass a linear index to the pair of ind2sub functions to get subscripts. How you want to implement ind2sub is up to you; I would personally use an Nx2 matrix but I wrote it as two separate vectors for clarity. </p> <p>Hope this helps! </p>
7438
2015-06-06T14:29:31.373
|arduino|mobile-robot|localization|mapping|planning|
<p>I'm working on an robot that would be able to navigate through a maze, avoid obstacles and identify some of the objects in it. I have a monochromatic bitmap of the maze, that is supposed to be used in the robot navigation. </p> <p>Up till now, I have converted/read the bitmap image of the maze into a 2D array of bits. However, now I need guidance on how to use that array to plan the path for the robot. I would appreciate if you could share any links as well, because I am new to all this stuff (I am just a 1st year BS electrical engineering student) and would be happy to have a more detailed explanation.</p> <p>If you need me to elaborate on anything kindly say so.</p> <p>I would be grateful!</p> <p>Here's the image of the maze.</p> <p><img src="https://i.stack.imgur.com/ahtkQ.png" alt="Sample Maze Image"></p> <p>This is just a sample image; the robot should be able to work with any maze (image) with similar dimensions. And you are welcome!</p> <p>Thank you Chuck!</p> <p><strong>UPDATE</strong> Heres the code for sub2ind in c++. Kindly see if the output is correct:-</p> <pre><code>ofstream subtoind; subtoind.open("sub2ind.txt"); int sub2ind[96][64] = { 0 }; int ind2subROW[6144] = { 0 }; int ind2subCOL[6144] = { 0 }; int linearIndex=0; j = 0; z = 0; for (j = 1; j &lt;= 64; j++) { for (z = 1; z &lt;= 96; z++) { linearIndex = z + (j - 1) * 96; sub2ind[z-1][j-1] = linearIndex-1; //ind2subROW[linearIndex-1] = j-1; //ind2subCOL[linearIndex-1] = z-1; } } for (j = 0; j &lt; 96; j++) { subtoind &lt;&lt; endl; //correction cout &lt;&lt; endl; for (z = 0; z &lt; 64; z++) { subtoind &lt;&lt; sub2ind[j][z] &lt;&lt; " "; } } subtoind.close(); </code></pre> <p>Heres the Link to the output file. <a href="https://drive.google.com/file/d/0BwUKS98DxycUSk5Fbnk1dDJnQ00/view?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/file/d/0BwUKS98DxycUSk5Fbnk1dDJnQ00/view?usp=sharing</a></p>
How to make a directed graph?
<p>The P term calculates a proportional response to the error: for example, if the quad is tilted a little bit then you only correct a little bit, so you do not overshoot your target value. The I controller will ramp up the response if the error has not decayed fast enough, which in some cases can be good, for example if there is wind and the proportional term is not correcting enough; but that too can cause problems with overshooting (requiring a negative error for about the same time as there was a positive error). The D controller will dampen the P and I controllers as you get closer to the set point, by measuring the rate of change of the system and slowing it as it approaches the set point. </p> <p>Using only the P and D terms can sometimes give better results, especially in less demanding environments, and can be easier to tune. In my experience this approach works great for multicopters expected to make less aggressive maneuvers, whereas a well-tuned PID controller can perform better in very aggressive environments. </p>
7445
2015-06-07T09:30:42.163
|control|quadcopter|pid|
<p>My question is: in a lot of cases on the Internet one finds PD (instead of PID) controllers used for the Euler angles of a quadcopter. Why is the integral part so often neglected in this kind of application? Thanks.</p>
Why PD controllers for quadcopter angles control?
<p>The short answer is: it depends. Variables such as the motors, propellers and body construction are very important. I have been building drones in my garage as a hobby for years, and you'd be surprised what a difference propellers can make.</p> <p>Since building a heavy-lift drone can be extremely hard, most people will be better off buying a <a href="http://www.dronesglobe.com/guide/heavy-lift-drones/" rel="nofollow noreferrer">heavy lift drone</a>. There are several models available on Amazon, so it's probably a good idea to check what the market has to offer.</p>
7460
2015-06-09T16:26:21.377
|quadcopter|
<p>I'm looking for an equation (or set of equations) that would allow me to predict (with fair accuracy) how heavy a payload a <a href="http://en.wikipedia.org/wiki/Quadcopter" rel="noreferrer">quadcopter</a> is capable of lifting.</p> <p>I assume the main variables would be the weight of the copter as well as the size + power of the 4 rotors. What general approach can one use to make such a determination?</p>
How to calculate quadcopter lift capabilities?
<p>To what degree are you looking to cage the fans? <a href="http://en.wikipedia.org/wiki/IP_Code" rel="nofollow">IP code gives some grades for fan protection</a>, which may help you decide or even find some stock guards. Level 'A' protects against striking by the back of the hand, 'B' protects against striking a finger. </p> <p>You are correct that the math is complicated; it's more than simply a reduction in diameter - flow around the grating will have a huge impact, especially at the flow rates for the fans. </p> <p>I would suggest you decide what level of protection you want, purchase the guard for one fan, then setup a test rig where you can weight a fan assembly. Add weights until you can <em>just</em> keep the rig grounded. Add the guard, then remove weight until again, the rig is <em>just</em> grounded. This will give you a percentage reduction in thrust capacity. </p> <p>I think you will find this method of empirical testing will be faster, easier, and more accurate than mathematical calculations, because you will find it difficult to get the model of the fan guard correct. Even if you could, assuming the guard is close to the blades, there will be an interaction between the fan blades and guard that is dependent on the blade shape. </p> <p>This answer assumes you don't have access to ANSYS, LS-DYNA, or any other high power CFD tools, but even still empirical testing will be faster unless you already have experience with those tools and a library of blade and guard parts. </p> <p>Finally, regarding the performance-safety tradeoff - safety is up to you to quantify. Performance impacts of adding guards can be quantified as I've described above. Once you have quantified how much safety matters to you (this is usually done with a <a href="http://en.m.wikipedia.org/wiki/Failure_mode_and_effects_analysis" rel="nofollow">failure mode analysis</a>), then the tradeoff should become clear. Quantifying safety is usually one of the hardest problems for every engineering project. </p>
7462
2015-06-09T19:47:38.030
|quadcopter|
<p>I am interested in building a <a href="http://en.wikipedia.org/wiki/Quadcopter" rel="nofollow">quadcopter</a> from scratch.</p> <p>Because I like to err on the side of caution, I'm considering adding "safety cages" around each propeller/rotor, to hopefully prevent (at least minimize) the chance of the spinning rotor blades coming into contact with someone. Without knowing much about the physics behind how "lift" works, I would have to imagine that cages present two main problems for rotors:</p> <ol> <li>They add weight to the copter making it harder to lift the same payload; and</li> <li>They're sheer presence/surface area makes it harder for the spinning rotor to generate lift and push down away from the ground</li> </ol> <p>The former problem should be obvious and self-evident. For the latter problem, what I mean by "surface area" is that I <em>imagine</em> that the more caging around a spinning rotor, the more difficult it will be to lift effectively. For instance, a spinning rotor might have the ability to generate enough power to lift, say, 2kg. But if we were to construct an entire <em>box</em> (not cage) around the entire rotors, with 6 sides and no openings, I would imagine its lift capability would drop to 0kg.</p> <p>So obviously, what I'm interested in is a cage design that provides adequate safety but doesn't "box in" the rotor so much that it causes the rotor to be ineffective or incapable of providing lift. So I'm looking for that optimal tradeoff of safety (boxing/caging around the spinning rotor) and lift performance.</p> <p>I would imagine calculating and designing this is a pretty huge undertaking with <em>a lot</em> of math behind it. I'm just wondering if anyone has already figured all this stuff out, or if anyone knows of a way to model this safety-vs-lift-performance trade off in some way.</p>
How do safety cages around quadcopter rotors/blades affect lift capabilities?
<p>The "best" robot wrist in terms of human analog is probably the <a href="http://www.anthrobot.com/" rel="nofollow">omni-wrist</a> by Mark Rosheim. It has a large range of motion, and does not have singularities or gimbal lock that plague other more conventional wrists. However, it is fairly complicated mechanically, and thoroughly patented i believe.</p> <p>The book <a href="http://rads.stackoverflow.com/amzn/click/0471026220" rel="nofollow">Robot Evolution: The Development of Anthrobotics</a> by the same author has a good discussion of human analog robotics design. </p>
7463
2015-06-10T00:17:51.577
|robotic-arm|design|actuator|joint|humanoid|
<p>I'm planning the design of a wrist for a humanoid robot. I would like to choose a design that is sturdy while allowing for dexterity comparable to a human wrist.</p> <p>One option that was presented to me was to use a <a href="http://en.wikipedia.org/wiki/Stewart_platform" rel="nofollow">Stewart platform</a>. This setup appears to correctly recreate all possible movements of the human hand. My immediate concern is that this platform will use a total of six actuators which will require additional power and computational requirements. I don't want to commit to this design until I am certain that there isn't a better alternative.</p> <p>Is a Stewart platform a good choice for replicating the dexterousness of the human wrist? If not, what is a better solution?</p>
Stewart platform as robotic wrist joint
<p>According to <a href="http://www.micropik.com/PDF/SG90Servo.pdf" rel="nofollow noreferrer">this datasheet</a><sup>(1)</sup>, the red and brown wires are power and ground, while the orange wire is the <a href="https://en.wikipedia.org/wiki/Pulse-width_modulation" rel="nofollow noreferrer">PWM</a> control (as <a href="https://robotics.stackexchange.com/a/7856/37">Vinokanth Velu suggested</a>).</p> <p>The PWM line expects a pulse every 20ms. If the pulse is 1.5ms wide, the SG90 will move towards the midpoint of its 180 degree travel (position 0). If the pulse is 2ms, then it will move all the way to the right (position 90), while a pulse of 1ms, will move it all the way to the left (position -90).</p> <p>The deadband determines the minimum move you can specify, 10us (as suggested by this datasheet) would be around 1.8 degrees, 1us (as the <a href="http://www.towerpro.com.tw/product/sg90-7/" rel="nofollow noreferrer">manufacturer specifies</a>) would be 0.18 degrees.</p> <p>As for how much power it will draw, that depends on what you are asking it to do. Holding position with no force will take very little power, swinging from one extreme to the other and back with the full 1.80 kg-cm force would take significantly more.</p>
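<p>If you are driving it from an Arduino-style board, the stock <code>Servo</code> library generates exactly this kind of pulse train for you; a minimal sketch (pin 9 is just an example choice) exercising the pulse widths described above would be:</p> <pre><code>#include &lt;Servo.h&gt;

Servo sg90;

void setup() {
  sg90.attach(9);                 // orange (signal) wire on pin 9; red to the servo supply, brown to GND
}

void loop() {
  sg90.writeMicroseconds(1000);   // ~1 ms pulse: full left (about -90)
  delay(1000);
  sg90.writeMicroseconds(1500);   // ~1.5 ms pulse: centre (position 0)
  delay(1000);
  sg90.writeMicroseconds(2000);   // ~2 ms pulse: full right (about +90)
  delay(1000);
}
</code></pre>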
7472
2015-06-10T20:13:05.600
|rcservo|wiring|
<p>I got my hands on a few <a href="http://www.servodatabase.com/servo/towerpro/sg90" rel="nofollow">Tower Pro SG90 9G servos</a> but cannot find their schematics or datasheet anywhere (besides that link).</p> <p>I have the following concerns:</p> <ul> <li>Looks like they're rated for 4.8V, but will they tolerate a 5V supply?</li> <li>How do I determine the current they require, in amps, mA, etc.?</li> <li>There's 3 wires: brown, red &amp; yellow-orange, what do each of these guys do? <ul> <li>If I had to <em>guess</em> I'd say that red is power, another one is direction, and another one is the position to rotate to</li> </ul></li> </ul>
Wiring & driving TowerPro SG90 servos
<p>You need a controller like this that can address them individually over I2C. These can be chained together to control more than you'll likely ever need:</p> <p>Adafruit 16-Channel 12-bit PWM/Servo Driver - I2C interface - PCA9685 <a href="https://www.adafruit.com/products/815" rel="nofollow">https://www.adafruit.com/products/815</a></p> <p>How to use them with a Raspberry Pi: <a href="https://learn.adafruit.com/adafruit-16-channel-servo-driver-with-raspberry-pi" rel="nofollow">https://learn.adafruit.com/adafruit-16-channel-servo-driver-with-raspberry-pi</a></p>
7475
2015-06-11T01:46:05.597
|arduino|raspberry-pi|cameras|servomotor|
<p>I'm trying to make a hexapod with 18 servo motors and I'm asking how to control them with a Raspberry Pi (never used one). I saw lots of material on controlling 1 servo, but not 18 or 20...</p> <p>Currently I'm working with an <a href="http://www.arduino.cc/en/Main/arduinoBoardMega" rel="nofollow">Arduino Mega</a> and an <a href="http://www.lynxmotion.com/p-395-ssc-32-servo-controller.aspx" rel="nofollow">SSC-32</a> board, but I found the result too slow and jerky.</p> <p>In the end, I want to add a camera and process the images; I know an Arduino can't handle that processing, but can a Raspberry Pi?</p> <p>Thanks for any information on the subject :) </p>
Can I control more than 18 servo motors with a Raspberry Pi?
<p>You linked a datasheet but <em>not</em> the protocol. I would need to see what the parameters x,y...Ty,Tz relate to specifically on the robot to be able to tell you exactly what to put for the rotations. </p> <p>That said, I would assume that the rotations specify the end effector pose. Imagine holding a ball with both hands. With the ball in one location (x,y,z), your hands could be on the sides of the ball, or your left hand could be on top and right on bottom, or your left hand could be on bottom and your right on top. In actuality your hands could be anywhere on the ball; the orientation of your hands is called the pose. </p> <p>Without any knowledge of the protocol, which joints correspond to which command inputs, no knowledge of the controller, and no knowledge of what you're trying to do with this (industrial!) robot, I would say to just put zeros for the Tx,Ty, and Tz parameters.</p> <p>Zeros for all of the rotation parameters <em>should</em> have the end effector facing in some ordinal direction (straight down, up, forward, etc.). Once you see the "default" orientation, try setting one of the angles to 90 degrees (pi/2 radians). Setting Tx to 90 degrees should cause the end effector to rotate 90 degrees about its x-axis. You can run through all the rotations to identify the primary axes, then use <a href="https://en.wikipedia.org/wiki/Euler_angles" rel="nofollow">Euler angles (presumably)</a> to specify the pose for the end effector. </p> <p>But again, this is all assuming the rotations affect the end effector. The rotation parameters could be in <a href="https://en.wikipedia.org/wiki/Euler_angles" rel="nofollow">Euler anglges</a>, or <a href="https://en.wikipedia.org/wiki/Euler_angles#Tait.E2.80.93Bryan_angles" rel="nofollow">Tait-Bryan angles</a>, but it's unlikely to be something too exotic. </p> <p>If you could edit your question with more detail about what you're trying to do and link to the user manual or communication protocol I could give more help, but I don't really have any useful information about your problem, so at the moment all I can give are educated guesses. </p> <p>:EDIT:</p> <p>Looking at the <a href="http://www.motoman.com/motomedia/manuals/docs/155490-1CD-R5.pdf" rel="nofollow">pendant manual</a> you linked, section 2.3.2 describes operation in Cartesian mode, where operations occur relative to the XYZ axes defined at the base of the robot. I would probably use these to define the starting position of a board to be soldered (that is, the location of some indexing location on the board relative to the base of the robot), then switch to the tool coordinates mode described in 2.3.4. From here you could feed it information direct from the 2D file, adding Z up/down motions as necessary to contact or come close to the board, depending on heat source (torch v soldering iron). </p> <p>Section 2.3.5 covers setting up user coordinates, where it looks like you can define zero-index locations and orientations for workstation planes. The graphic at the top of page 70 looks like what you would be after. The neat trick here is that, if you set up the starting locations correctly, once you go to tool mode everything is in tool coordinates, so you don't have to specify <em>any</em> rotations.</p> <p>As others have mentioned, Euler angles and such can be tricky. I believe you will commonly find Tait-Bryan angles in an XYZ or ZYX arrangement or Euler angles in an XZX or ZXZ arrangement, but there are dozens of ways that these could be setup. 
</p> <p>As a final reiteration, you shouldn't really need to worry about those angles. Set everything to zero to see what direction the end effector is pointing. Make adjustments to only one angle until you get "down" relative to the board, then use Cartesian mode to move the tool to the indexing position on the board to be soldered. Define that as the user coordinate, then enter tool mode and feed it X-Y coordinates to do the work. </p> <p>Hope this helps!</p>
7483
2015-06-12T10:04:11.370
|robotic-arm|
<p>I need to get the coordinates of specific points from a 2D CAD file and transform them so that I can use them to move the robotic arm to those points. The problem is that I only get x y z coordinates, while the robotic arm needs x y z Tx Ty Tz coordinates to move to a certain position. </p> <p>Any suggestions?</p> <p>Edited:</p> <p>My task: I need the robotic arm to go through certain points on a PCB board and heat soldering paste. I could do it manually by setting points with the pendant, but a much easier way would be to get the coordinates of those points from the CAD file and write code on a PC.</p> <pre><code>MOVL MotionSpeedType(0 - linear mm/s, 1 - angular °/s) Speed (0.1 - 1000 mm/s or Max angular speed) coordinate X Y Z Tx Ty Tz ToolNo [Type] (move robot in a cartesian coordinates in linear motion) </code></pre> <p>This is how the code for linear motion to a certain point looks.</p> <p>I could only find this <a href="http://www.motoman.com/motomedia/manuals/docs/160475-1CD.pdf" rel="nofollow">manual</a>.</p> <p>This is the pendant <a href="http://www.motoman.com/motomedia/manuals/docs/155490-1CD-R5.pdf" rel="nofollow">manual</a>; maybe it will be helpful.</p> <p>I am a second-year student in "Robotics and Mechatronics". I'm currently in an internship at a scientific research institution. I really appreciate your help!</p>
How to transform x y z coordinates to Tx Ty Tz?
<p>When you say the matrices are based on measurement data, do you mean that <span class="math-container">$A(t)$</span> varies with your state space output <span class="math-container">$y$</span>, which should be configured to provide your outputs (measurements), or that you want to actually provide a hard-coded series of parameters (field measurements) to the <span class="math-container">$A$</span> matrix?</p> <p>If you are trying to do the former, i.e. you essentially have <span class="math-container">$A=f(y)$</span>, where <span class="math-container">$y=f(t)$</span> (because actually <span class="math-container">$y=f(x)$</span>, but <span class="math-container">$x=f(t)$</span> by definition), then you can use ode45. In the function you want to use you can define y = x, then use that to update the parameters in <span class="math-container">$A$</span>.</p> <p>If you want the latter, or the former really, you can always brute force it. Setting up some standards real quick, let's say you have:</p> <p>nSamples, which would be the number of time points in your simulation.</p> <p>Your time step is dT.</p> <p>You have nStates, so <span class="math-container">$size(A) = [nStates,nStates]$</span>. Now assume that your A matrix evolves over time based on field samples (testing, etc.); then you should have a bunch of <span class="math-container">$A$</span> matrices, such that you actually have <span class="math-container">$size(A) = [nStates,nStates,nSamples]$</span>.</p> <p>Now, try this:</p> <blockquote> <p>x = Initial Conditions;</p> <p>for i = 1:nSamples</p> <p>currentA = A(:, :, i);</p> <p>xDot = currentA * x + B * u;</p> <p>x = x + xDot * dT;</p> <p>y = x;</p> <p>end</p> </blockquote> <p>As before, if you just want to swap out parameters in <span class="math-container">$A$</span> based on <span class="math-container">$y$</span> then that's easy to implement too.</p> <blockquote> <p>x = Initial Conditions;</p> <p>y = f(x);</p> <p>for i = 1:nSamples</p> <p>A = f(y);</p> <p>xDot = A*x + B*u;</p> <p>x = x + xDot*dT;</p> <p>y = f(x);</p> <p>end</p> </blockquote> <p>Again, this is just a brute-force method to implement what lsim does for you. It's just numeric integration.</p> <p>Hope this helps!</p>
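<p>Since you mention that your <code>A(t)</code> and <code>B(t)</code> are actually scalar, the brute-force loop above collapses to something very small. Purely as an illustration (forward-Euler with a fixed step; the variable names are made up), the same idea looks like this in C++:</p> <pre><code>#include &lt;vector&gt;

// Forward-Euler simulation of xdot = a(t)*x + b(t)*u(t), with per-sample
// coefficients taken from measurement data (a[i], b[i], u[i] are assumed given).
std::vector&lt;double&gt; simulate(const std::vector&lt;double&gt; &amp;a,
                             const std::vector&lt;double&gt; &amp;b,
                             const std::vector&lt;double&gt; &amp;u,
                             double x0, double dT)
{
    std::vector&lt;double&gt; y(a.size());
    double x = x0;
    for (std::size_t i = 0; i &lt; a.size(); ++i) {
        double xDot = a[i] * x + b[i] * u[i];   // current A(t_i), B(t_i)
        x += xDot * dT;                          // Euler integration step
        y[i] = x;                                // y = x, as in the question
    }
    return y;
}
</code></pre>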
7485
2015-06-12T13:50:07.280
|dynamics|matlab|simulation|
<p>I have the following system: $$\dot{x} = A(t)x+B(t)u$$ $$y = x$$</p> <p>$A(t)$ and $B(t)$ are actually scalar, but time-dependent. If they would be constant, I could simulate the system in Matlab using:</p> <p><code>sys = ss(A,B,C,0);</code> <code>lsim(sys,u,t,x0);</code></p> <p>However, it would be nice to simulate the system with dynamic state and input matrix. The matrices are based on measurement data, this means I would have for each discrete time step $t_i$ another matrix $A(t_i)$. Any suggestions how to do that?</p>
Matlab: System simulation with dynamic state matrix / input matrix
<p>All batteries have <a href="http://www.dcbattery.com/faq.html#1" rel="nofollow">an amp-hour rating</a> that says how much charge is in the battery. A triple A battery and a D cell battery are both 1.5V batteries, but the D cell has a much higher amp-hour rating than a triple A. </p> <p>You could use the amp-hour rating directly, but what I find more useful is to get a watt-hour rating. This is more useful as robotic applications tend to use battery packs of various voltage ratings in combination with a voltage regulator. </p> <p>A battery has a nominal voltage; get the watt-hour rating by multiplying the amp-hour rating by the nominal voltage. Now you can divide this watt-hour rating by the power draw of the robot (in watts) to get the number of hours the battery can power the robot. </p> <p>You can also use this to estimate run time from the power ratings of each component before you ever actually build the robot; four 250mW motors total 1W of power draw. Assuming the <a href="http://forum.arduino.cc/index.php?topic=264083.0" rel="nofollow">Arduino controlling it draws 50mA at 3v3</a>, the controller draws 0.165W. This gives a total power draw of 1.165W (again, estimated). </p> <p>Now, a 1000mAh, 7.2V LiPO battery pack has (1Ah x 7.2V) = 7.2Wh of capacity. This means that it will run the example circuit for (7.2Wh / 1.165W) = 6.2 hours = 6 hours and 12 minutes. Again, this is an estimate, which may be more or less depending on whether you run the motors at full capacity for the whole time and the efficiency losses in the voltage regulator and other equipment, but it should be generally correct. </p>
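<p>The same arithmetic is easy to wrap up if you want to sanity-check different packs; a trivial helper (the numbers below are just the example figures from above) might look like:</p> <pre><code>// Estimated runtime in hours = (capacity [Ah] * nominal voltage [V]) / load [W]
double runtimeHours(double ampHours, double nominalVolts, double loadWatts)
{
    return (ampHours * nominalVolts) / loadWatts;
}

// Example from above: 1.0 Ah * 7.2 V = 7.2 Wh, 1.165 W load -&gt; about 6.2 hours
// double t = runtimeHours(1.0, 7.2, 1.165);
</code></pre>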
7486
2015-06-12T14:32:50.363
|power|battery|circuit|
<p>Obviously robotic circuits draw different amounts of power/current. So given the same battery, say, a 9V, connecting it to 2 different circuits will deplete it at two different rates. Robot/Circuit #1 might drain the battery in 5 minutes. Robot/Circuit #2 might drain the battery in 20 minutes.</p> <p>What ratings do batteries have that allow us to figure out how long they will power a circuit for? <strong>Bonus points:</strong> does this same rating hold for solar panels and, indeed, all power supplies (not just batteries)?</p>
How to determine how long a battery will power a robotic circuit for?
<p>Your scenarios are correct. </p> <p>You can connect multiple solar cells together to get <a href="http://www.instructables.com/id/How-to-Wire-Batteries-in-Series-or-in-Parallel/" rel="nofollow">increased current or increased voltage.</a> Wire them in series (positive to negative) to boost voltage, wire them in parallel (positives to positive) to boost current capacity. </p> <p>As a final note, I would caution running near the maximum capacity of the solar cell. The voltage of the cell will drop as load increases; you can see the <a href="http://www.voltaicsystems.com/2-watt-panel" rel="nofollow">specifications for the cell you're looking at</a> and see the "open circuit" voltage is 7.0V, while the "peak voltage" is 6.0V. The more load you put on the solar cell the lower the output voltage goes. </p> <p>If you manage to lower the output voltage below the minimum for a voltage regulator (which you should absolutely be using considering the voltage swings), then you will <a href="https://en.wikipedia.org/wiki/Brownout_(electricity)" rel="nofollow">brownout</a> the voltage regulator. </p> <p>When this happens you could either have the voltage regulator shutoff (best case scenario). When it turns off it stops drawing power, which reduces the load on the solar cell. When this happens, the reduced load causes the voltage to recover. When this happens the regulator turns back on. You wind up cycling power to your board a LOT and it could damage it. </p> <p>OR the voltage regulator could try to "hold on" and just pass the lower voltage. At this point you are providing the <a href="http://www.emersonnetworkpower.com/en-US/Solutions/ByApplication/Pages/What-Causes-Dirty-Power.aspx" rel="nofollow">dirty power</a> directly to your microcontroller. This could do anything from power cycle as described above to flat-out frying your board. There is <a href="http://forum.arduino.cc/index.php?topic=10489.0" rel="nofollow">a thread about brownout detection and handling</a> over at the Arduino forums if you're interested. </p>
7489
2015-06-12T15:48:46.350
|power|circuit|
<p>Say I have <a href="http://www.adafruit.com/products/200" rel="nofollow">this solar panel</a> that outputs 6V at 330mA, or ~1.98 Watts. If I connect that to an Arduino, which expects a 5V supply at (roughly) 50mA, then the Arduino as a whole requires 5V * .05A = 0.25 Watts to power it. To me, if I understand this correctly, then in perfect weather/sunlight, the solar panel will power the Arduino all day long, no problem.</p> <p>Now let's say we wire up 4 motors to the Arduino, each of which draws 250 milliwatts (0.25 W). Now the Arduino + 4 motors are drawing ~1.25 Watts. But since the panels are still outputting 1.98 Watts, I would think that (again, under perfect sunlight) the panel would power the Arduino and motors all day long, no problem.</p> <p>Now we add 4 more motors to the Arduino circuit, for a total of 8 motors. The circuit is now drawing 1.25 Watts + 1 W = 2.25 Watts. I would expect the solar panel to <em>no longer</em> be capable of powering the circuit, at least properly.</p> <p>My first concern here is: am I understanding these 3 scenarios correctly? If not, where is my understanding going awry?</p> <p>Assuming I'm more or less on track, my next question is: can solar panels be "daisy chained" together to increase total power output? In the third case above, is there a way to add a second solar panel into the mix, effectively making the two panels output 1.98 Watts * 2 = 3.96 Watts, which would then make them capable of powering the Arduino and its 8 motors (yet again, assuming perfect weather/sunlight conditions)?</p>
Understanding how solar panels can supply power to robotic circuits
<p>Is there a need to draw from the batteries sequentially? If you wire the <a href="http://www.batteriesnorthwest.com/batteryschool.cfm?TID=17#ANC17" rel="nofollow">batteries in parallel</a> you drain them all at the same time, so you don't need to worry about swapping dead batteries for fresh. </p>
7491
2015-06-12T20:22:09.597
|power|battery|wiring|
<p>I'm looking for my robotics project to draw its power from one of 3 rechargeable batteries; basically whichever has the most "juice" in it. From the initial research I've already done, I believe I could connect each rechargeable battery (probably LiPo) to a diode, and then wire each of the 3 diodes in series.</p> <p>However, being so new to robotics/electronics, I guess I wanted to bounce this off the community as a sanity check, or to see if there is a better way of achieving this. Again, what I am looking for is a way for the circuit to automagically detect that battery #1 has more power than battery #2, and so it "<em>decides</em>" to draw power from #1. The instant #1 is depleted or deemed "less powerful" than #2, the #2 battery takes over. Thoughts/criticisms?</p>
Wiring necessary to route power from any one of several rechargeable batteries
<p>Kalman filters are typically used for sensor fusion. You create a model for what you expect the process to look like, use your sensors as inputs, and the output is the filtered estimate. I'm not going to go over implementation in detail as there is <a href="http://www.cs.unc.edu/~welch/kalman/" rel="nofollow">plenty of information about these filters</a> available online; I hope this points you in the right direction, and if you have a <em>specific</em> question about it then by all means make a new question and I or someone else here can answer it. </p> <p>I'm interested in your use of two sensors to triangulate (trilaterate) a ball. How are you using two sensors to fix a 3 dimensional coordinate? </p>
7498
2015-06-14T13:30:00.743
|sensor-fusion|
<p>I am combining two position measurements of a ball from two sensors in real time to obtain one triangulated position in x,y,z coordinates. As the data exchange of the measurements carries some latency, the data has to be extrapolated to be able to obtain the current position. Due to the extrapolation an error appears in the triangulated data.</p> <p>I know that when the ball is in the air, the velocity of the ball should be constant in the x and y directions and the velocity in the z direction should decay with g. The velocities in x and y however oscillate as a function of time around a mean value, which is the actual x and y velocity respectively. The same goes for when I compute the acceleration in the z direction: it oscillates as a function of time around g.</p> <p>Given that I know how the ball should behave, i.e. that vx and vy should be constant and that the acceleration in the z direction should be g, how can I impose these conditions to better estimate the triangulated position? </p>
Measurement and physics model fusion
<p>It depends on the bipolar motor driver you're using.</p> <p>A driver chip like the L293D (or various more powerful equivalents) would usually be configured as two full H-bridges to drive two simple DC motors forwards or backwards - or to drive the two coils of a bipolar stepper motor.</p> <p>However it can be configured as four half-bridge drivers, to independently drive four coils in simple on/off mode. So these could be the four coils of a unipolar stepper. For example there's a description here: <a href="http://mechatronics.mech.northwestern.edu/design_ref/actuators/stepper_drive1.html" rel="nofollow">http://mechatronics.mech.northwestern.edu/design_ref/actuators/stepper_drive1.html</a></p> <p>If on the other hand you are using a ready-made driver that doesn't allow wiring changes, it may not be possible to drive unipolar steppers with it. So if you have a specific driver in mind, I'd recommend first looking for a datasheet, or manufacturer's advice, to see if this is possible.</p>
7499
2015-06-15T03:56:34.027
|stepper-motor|stepper-driver|
<p>Can I use a bipolar stepper motor driver to drive a unipolar motor in a unipolar configuration?</p>
Can I use Bipolar stepper motor driver to drive Unipolar motor in Unipolar configuration?
<p>What is the problem you're having with the file?</p> <p>The only two things I notice off the bat are that you aren't outputting an endline character between rows on your output stream and that it looks like you're initializing the digraph variable in an odd manner. As I mentioned previously, it's been a long time since I've used c++, but could you not just call <code>int digraph[6144][6144] = {0};</code>?</p> <p>For the endline character,</p> <pre><code>for (int l = 0; j &lt; 300; l++) // printing only 300 elements { for (int k = 0; k &lt; 300; k++) { connectivityMap &lt;&lt; digraph[l][k] &lt;&lt; &quot; &quot;; } //End the line when you're done outputting rows connectivityMap &lt;&lt; endl; } </code></pre> <p>Lastly, note that you're only outputting the first 300 columns of the first 300 rows - you're looking at the upper left 300x300 portion of the digraph. <code>digraph[a][b]</code> will equal 1 if <code>a</code> and <code>b</code> are connected; as you have a straightforward map this means it will equal 1 if they are neighbors and neither is a wall.</p> <p>:EDIT:</p> <p>I think I see most of the problems you're having with this.</p> <ol> <li>Here is the map image I made from your map text file. Note that you have a lot of pixels in the map, but it's really a low-resolution map that has been blown up.</li> </ol> <p><img src="https://i.stack.imgur.com/SFPuf.png" alt="map" /></p> <ol start="2"> <li>Here is a map image I made that is functionally equivalent but much smaller in resolution. Instead of having 6144 points, the small map has 84. I'm not sure how the black squares with the white 'X's should be treated, so I counted them as walls, though I think it would probably make more sense if they were paths and your start/end position. You should replace the 0's with 1's in those locations if this is the case.</li> </ol> <p><img src="https://i.stack.imgur.com/eenRZ.png" alt="small map" /></p> <ol start="3"> <li>You are getting entries on the diagonal in your digraph because your map is very basic. Point 1 connects to point 2, point 2 to 3, then 3 to 4, etc., so you wind up with 1's (path exists) on the diagonals. Matlab has a function called <code>imwrite</code> that you can use to generate images from matrices; I used this function to generate the digraph images below.</li> </ol> <p><img src="https://i.stack.imgur.com/xgLo9.png" alt="Small Digraph" /> <img src="https://i.stack.imgur.com/nizJe.png" alt="Large Digraph" /></p> <p>Aside from the scale, the small digraph and large digraph have the same data regarding path connectivity. If you want to avoid keeping such a large, sparse matrix (it's all zeros on the upper/lower triangles because you don't have any paths from start to the middle of the map or start to finish!) then you can check out some of the other methods of map representation in <a href="https://www.cs.princeton.edu/%7Ers/AlgsDS07/13DirectedGraphs.pdf" rel="nofollow noreferrer">this document on creating directed graphs</a>.</p> <p>Lastly, <a href="https://drive.google.com/file/d/0B26_5S6Jh4mYWXZ2Sl9mclcxZlU/view?usp=sharing" rel="nofollow noreferrer">here's a link to the Matlab script I wrote</a> to generate the images and digraphs.</p>
7506
2015-06-16T03:44:36.430
|mobile-robot|localization|mapping|planning|
<p>I'm working on an robot that would be able to navigate through a maze, avoid obstacles and identify some of the objects (Boxes in which it has to pot the balls) in it. I have a monochromatic bitmap of the maze, that is supposed to be used in the robot navigation.</p> <p>Up till now, I have converted/read the bitmap image of the maze into a 2D array of bits. Right now I am writing a code that should convert the 2D array (that represents the maze) into a connectivity map so that I could apply a path planning algorithm on it. Mr. @Chuck has helped me by providing a code in MATLAB. i have converted that code into C++, however the code isn't providing the right output. Kindly see the code and tell me what I am doing wrong.</p> <p>I am sharing the link to the 2D array that has been made, the MATLAB code, and my code in C++ to convert the array into a connectivity map.</p> <p><strong>Link to the 2D array:-</strong></p> <p><a href="https://drive.google.com/file/d/0BwUKS98DxycUZDZwTVYzY0lueFU/view?usp=sharing" rel="nofollow">https://drive.google.com/file/d/0BwUKS98DxycUZDZwTVYzY0lueFU/view?usp=sharing</a></p> <p><strong>MATLAB CODE:-</strong></p> <pre><code>Map = load(map.mat); nRows = size(Map,1); nCols = size(Map,2); mapSize = size(Map); N = numel(Map); Digraph = zeros(N, N); for i = 1:nRows for j = 1:nCols currentPos = sub2ind(mapSize,i,j); % left neighbor, if it exists if (j-1)&gt; 0 destPos = sub2ind (mapSize,i,j-1); Digraph(currentPos,destPos) = Map(currentPos)*Map(destPos); end % right neighbor, if it exists if (j+1)&lt;=nCols destPos = sub2ind (mapSize,i,j+1); Digraph(currentPos,destPos) = Map(currentPos)*Map(destPos); end % top neighbor, if it exists if (i-1)&gt; 0 destPos = sub2ind (mapSize,i-1,j); Digraph(currentPos,destPos) = Map(currentPos)*Map(destPos); end % bottom neighbor, if it exists if (i+1)&lt;=nRows destPos = sub2ind (mapSize,i+1,j); Digraph(currentPos,destPos) = Map(currentPos)*Map(destPos); end end end </code></pre> <p><strong>Code in C++:-</strong></p> <pre><code>int **digraph = NULL; digraph = new int *[6144]; for (int i = 0; i &lt; 6144; i++) { digraph[i] = new int[6144]; } for (j = 0; j &lt; 96; j++) { for (z = 0; z &lt; 64; z++) { currentPos = sub2ind[j][z]; digraph[currentPos][currentPos] = 0; //------NEW ADDITION----------- if ((z - 1) &gt;= 0) { destPos = sub2ind[j][z - 1]; digraph[currentPos][destPos] = bitarray[j][z] * bitarray[j][z - 1]; } if ((z + 1) &lt; 64) { destPos = sub2ind[j][z + 1]; digraph[currentPos][destPos] = bitarray[j][z] * bitarray[j][z + 1]; } if ((j - 1) &gt;= 0) { destPos = sub2ind[j - 1][z]; digraph[currentPos][destPos] = bitarray[j][z] * bitarray[j - 1][z]; } if ((j + 1) &lt; 96) { destPos = sub2ind[j + 1][z]; digraph[currentPos][destPos] = bitarray[j][z] * bitarray[j + 1][z]; } } } ofstream connectivityMap; connectivityMap.open("diGraph.txt"); for (int l = 0; j &lt; 100; l++) // printing only 100 elements { for (int k = 0; k &lt; 100; k++) { connectivityMap &lt;&lt; digraph[l][k] &lt;&lt; " "; } } </code></pre>
Converting a 2D array of bits to a connectivity map (Code Debugging)
<p>If you have transfer function such that $$ \frac{V_2}{V_1} = H \\ V_2 = H V_1 \\ $$ Then wouldn't your estimate of $V_1$ be given by inverting the transfer function?</p> <p>$$ V_1 = H^{-1} V_2 $$</p> <p>The problem is that you can't use this to measure $V_1$, and here's why:</p> <p>Your measurements are an estimate of $V_1$. $$ V_{est} = f(V_1) $$ You feed that estimate into the pump and get a flow output. $$ V_2 = H V_{est} $$ Now, if you invert the plant, you do <strong>NOT</strong> get a measurement of $V_1$, you get a measurement of your original estimate. $$ V_{est} = H^{-1} V_2 $$ It's like you are trying to draw your own ruler and then use that ruler to see if you drew the ruler correctly. It's a circular definition that's not going to get you anything useful.</p>
7510
2015-06-16T17:43:55.300
|control|sensors|pid|kalman-filter|
<p>I have a sensor reduction model which gives me a velocity estimate of a suspension system (velocity 1).</p> <p>This estimated suspension velocity is used to calculate another velocity (velocity 2) via a transfer function/plant model.</p> <p>Can I use velocity 2 to improve my velocity estimate (velocity 1) through Kalman filtering or through some feedback system?</p> <p><img src="https://i.stack.imgur.com/fF0tc.jpg" alt="enter image description here" /></p> <p>V1 is &quot;estimated&quot; using these two sensors. That estimate is fed into a gerotor pump (Fs in the diagram) which pumps fluid to manipulate the damper's viscous fluid, thereby applying resistance to the forces applied to the car body. There would be no problem if I had a velocity sensor on the spring: I could measure it accurately. But now I only have an estimate, and I am trying to make that estimate better. Assume I already have a model/plant or transfer function that gives me V2 given V1.</p>
Improving Velocity estimation
<p>I found an answer to this question. </p> <p>Homography estimation is not needed at all. The EKF takes care of the problem of depth estimation. If you start with an initial inverse depth estimate of $\rho_0 = 0.1 $ and $\sigma_\rho = 0.5$, the depth estimate would range from [-0.9,1.1]. This depth estimate would be corrected based on the observed values, obtained through feature matching. <a href="https://www.doc.ic.ac.uk/~ajd/Publications/civera_etal_tro2008.pdf" rel="nofollow">Please refer to this paper</a>.</p> <p>Anybody trying to learn this algorithm would do well by going through this publicly available <a href="http://www-lehre.inf.uos.de/~svalbrec/documents/master_thesis.pdf" rel="nofollow">dissertation by Sven Albrecht</a>. To my understanding, the mathematics involving MonoSLAM can't be elucidated any better than this! </p>
7517
2015-06-17T07:52:19.873
|slam|ekf|
<p>I am trying to understand the implementation of Extended Kalman Filter for <a href="http://www.doc.ic.ac.uk/~ajd/Publications/davison_iccv2003.pdf" rel="nofollow" title="Paper describing MonoSLAM">SLAM using a single, agile RGB camera.</a> </p> <p>The vector describing the camera pose is $$ \begin{pmatrix} r^W \\ q^W \\ V^W \\ \omega^R \\ a^W \\ \alpha^R \end{pmatrix} $$</p> <p>where:</p> <ul> <li>$r^W$ : 3D coordinates of camera w.r.t world</li> <li>$q^W$ : unit quaternion describing camera pose w.r.t world</li> <li>$V^W$ : linear velocity along three coordinate frames, w.r.t world</li> <li>$\omega$ : angular velocity w.r.t body frame of camera</li> </ul> <p>The feature vector set is described as $$ \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix} $$ where, each feature point is described using XYZ parameters.</p> <p>For the EKF acting under an unknown linear and angular acceleration $[A^W,\psi^R] $ , the process model used for predicting the next state is:</p> <p>$$ \begin{pmatrix} r^W + V^W\Delta t + \frac{1}{2}\bigl(a^W + A^W\bigr)\Delta t^2 \\ q^W \bigotimes q^W\bigl(\omega^R\Delta t + \frac{1}{2}\bigl(\alpha^R + \psi^R\bigr)\Delta t^2\bigr) \\ V^W + \bigl(a^W + A^W\bigr)\Delta t\\ \omega^R + \bigl(\alpha^R + \psi^R\bigr)\Delta t \\ a^W + A^W \\ \alpha^R + \psi^R \end{pmatrix} $$</p> <hr> <p>So far, I'm clear with the EKF steps. Post this prediction step, I'm not clear how to perform the measurement update of the system state.</p> <p><a href="http://www.doc.ic.ac.uk/~ajd/Scene/Release/monoslamtutorial.pdf" rel="nofollow">From this slide</a>, I was under the impression that we need to initialize random depth particles between 0.5m to 5m from the camera. But, at this point, both the camera pose and the feature depth is unknown.</p> <ul> <li><p>I can understand running a particle filter for estimating feature depth if camera pose is known. I tried to implement such a concept <a href="https://github.com/agnivsen/LibMonoSLAM" rel="nofollow">in this project: where I read the camera pose from a ground truth file</a> and keep triangulating the depth of features w.r.t world reference frame</p></li> <li><p>I can also comprehend running a particle filter for estimating the camera pose if feature depths are known.</p></li> </ul> <p>But both these parameters are unknown. How do I perform the measurement update?</p> <p>I can understand narrowing down the active search region for feature matching based on the predicted next state of the camera. But after the features are matched using RANSAC (or any other algorithm), how do I find the updated camera pose? We are not estimating homography, are we?</p> <p>If you have any idea regarding MonoSLAM (or RGB-D SLAM), please help me out with understanding the EKF steps.</p> <hr> <p>To be more specific: is there a homography estimation step in the algorithm? how do we project the epipolar line (inverse depth OR XYZ) in the next frame if we do not have any estimate of the camera motion?</p>
Need help regarding EKF in MonoSLAM
<p>Great question! Your intuition is correct.</p> <p>I would do this:</p> <ol> <li>Put the state $x$ and the expected velocities at the current time $t$, $v$, in the vector to estimate.</li> <li>Use the estimated velocities to predict the next state $$\hat{x}(t+1)=f(x(t),v(t))$$</li> <li>Use odometry measurements from $t$ to $t+1$ to form an estimate of the state. The difference between this and $\hat{x}$ is your innovation, $y_o$. Fuse with the predicted state. $$\hat{x}'(t+1)=UpdateStep(\hat{x}(t+1),y_o(t+1))$$</li> <li>Use accelerometers to form a second estimate of the state, which is also fused. You now have an estimate of the state at $t+1$ using all available information. $$\hat{x}''(t+1)=UpdateStep(\hat{x}'(t+1),y_a(t+1))$$</li> <li>Now, assuming the filter is running "fast" and you can rely on a <a href="https://robotics.stackexchange.com/a/134">constant curvature path</a>, the velocities are whatever $v$ and $\omega$ explain the difference in the robot states. Use the difference to provide an estimate of the velocities, which is used to update the velocities in the state of the robot. $$\hat{v}(t+1)=UpdateStep(\hat{v}(t+1),GetVelocityEstimate(\hat{x}(t),\hat{x}(t+1)))$$</li> </ol> <p>The design of a filter is very much up to the engineer, and in the end you may want to try several variants to see what works best. </p> <p>Some other variants:</p> <ul> <li>Swap odometry, velocities, and accelerometers to produce the prediction of state. You'll have to see what works better.</li> <li>Use a single integration of the accelerometers to produce a velocity update <em>before</em> doing your forward prediction. In effect, you are "daisy chaining" EKFs, one to estimate your velocity, then using that velocity to predict your state, which is updated by other means. This will likely be <strong>much</strong> simpler and less likely to break.</li> </ul> <p>Good luck!</p>
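<p>As a loose illustration of that ordering (my own sketch, not the answer author's code): a minimal C++ filter with state [x, y, theta, v, omega], a constant-curvature-style prediction, and one scalar update step, here a gyro measuring omega. The time step, noise values and the fake gyro reading are made-up placeholders; the odometry and accelerometer corrections would simply be further update steps of the same form.</p> <pre><code>#include &lt;cmath&gt;
#include &lt;cstdio&gt;

const int N = 5;                      // state: [x, y, theta, v, omega]
double x[N]    = {0, 0, 0, 0, 0};     // state estimate
double P[N][N] = {{1,0,0,0,0},
                  {0,1,0,0,0},
                  {0,0,1,0,0},
                  {0,0,0,1,0},
                  {0,0,0,0,1}};       // covariance estimate

// Predict: integrate the current velocity estimates forward by dt, then
// propagate the covariance with the Jacobian F of the motion model.
void predict(double dt, double Q[N][N]) {
    double c = cos(x[2]), s = sin(x[2]);
    x[0] += x[3] * c * dt;            // x     += v*cos(theta)*dt
    x[1] += x[3] * s * dt;            // y     += v*sin(theta)*dt
    x[2] += x[4] * dt;                // theta += omega*dt  (v, omega kept)
    double F[N][N] = {{1, 0, -x[3]*s*dt, c*dt, 0 },
                      {0, 1,  x[3]*c*dt, s*dt, 0 },
                      {0, 0,  1,         0,    dt},
                      {0, 0,  0,         1,    0 },
                      {0, 0,  0,         0,    1 }};
    double FP[N][N] = {{0}};
    for (int i = 0; i &lt; N; i++)
        for (int j = 0; j &lt; N; j++)
            for (int k = 0; k &lt; N; k++)
                FP[i][j] += F[i][k] * P[k][j];
    for (int i = 0; i &lt; N; i++)
        for (int j = 0; j &lt; N; j++) {
            double acc = 0;
            for (int k = 0; k &lt; N; k++) acc += FP[i][k] * F[j][k];
            P[i][j] = acc + Q[i][j];  // P = F*P*F' + Q
        }
}

// Scalar update of state component idx with measurement z and variance R,
// e.g. the gyro measuring omega (idx = 4). H is a row with a single 1 at
// idx, so the usual EKF update reduces to plain arithmetic.
void updateScalar(double z, int idx, double R) {
    double S = P[idx][idx] + R;               // innovation covariance
    double innov = z - x[idx];                // innovation
    double K[N];
    for (int i = 0; i &lt; N; i++) K[i] = P[i][idx] / S;   // Kalman gain
    for (int i = 0; i &lt; N; i++) x[i] += K[i] * innov;
    double Pnew[N][N];
    for (int i = 0; i &lt; N; i++)
        for (int j = 0; j &lt; N; j++)
            Pnew[i][j] = P[i][j] - K[i] * P[idx][j];    // (I - K*H)*P
    for (int i = 0; i &lt; N; i++)
        for (int j = 0; j &lt; N; j++) P[i][j] = Pnew[i][j];
}

int main() {
    double Q[N][N] = {{0}};
    for (int i = 0; i &lt; N; i++) Q[i][i] = 1e-3;  // placeholder process noise
    for (int step = 0; step &lt; 100; step++) {
        predict(0.01, Q);                 // 100 Hz prediction
        updateScalar(0.2, 4, 1e-2);       // fake gyro reading of omega
        // odometry / accelerometer corrections would be further update
        // steps here, in the order described in the answer above
    }
    printf("x=%.3f  y=%.3f  theta=%.3f  v=%.3f  w=%.3f\n",
           x[0], x[1], x[2], x[3], x[4]);
    return 0;
}
</code></pre>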
7519
2015-06-17T18:08:41.463
|localization|kalman-filter|gyroscope|odometry|
<p>I have a differential drive robot for which I'm building an EKF localization system. I would like to be able to estimate the state of the robot $\left[ x, y, \theta, v, \omega \right]$ where $x, y, \theta$ represent the pose of the robot in global coordinates, and $v, \omega$ are the translational and rotational velocities. Every mobile robot Kalman filter example I've seen uses these velocities as inputs to prediction phase, and does not provide a filtered estimate of them. </p> <p><strong>Q:</strong> What is the best way to structure a filter so that I can estimate my velocities and use my measured odometry, gyroscope, and possibly accelerometers (adding $\dot{v}$ and $\dot{\omega}$ to my state) as inputs? </p> <p><em>My intuition tells me to use a prediction step that is pure feedforward (i.e. just integrates the predicted velocities into the positions), and then have separate updates for odometry, gyro, and accelerometer, but I have never seen anyone do this before. Does this seem like a reasonable approach?</em></p>
Structuring EKF to estimate pose and velocity with odometry inputs
<p>Page 7 of <a href="http://www.beikimco.com/pdf/BLDC%20Product%20Guide.pdf" rel="nofollow">this PDF</a> states that, "A permanent magnet DC brushless motor behaves like any permanent magnet DC brush motor." </p> <p>You can read more about DC motor operation at <a href="http://lancet.mit.edu/motors/motors3.html" rel="nofollow">this MIT site</a>. I'm not sure if you've got a typo or what you're trying to say, but your question states, "brushless motors decrease torque with speed," after you state that using a gearbox and directly controlling the motor at low speed both, "will give high torques." </p> <p>If you're looking to do speed control at a very low percentage of the motor's speed, I would suggest using a gearbox simply because you won't have much headroom to modulate the motor's speed. For example, assume the motor's <a href="http://www.hobbypartz.com/30p-101-flash-aa-8900kv.html" rel="nofollow">top speed is 50,000RPM</a>. You want to control the motor to 150RPM +/- 10RPM, which means that, if you're going to directly operate those motors at low speed with no gearbox, you need to be able to control the motor speed to within +/- 0.02% of the top speed. This will be hard, and you would probably actually want control within +/-1RPM, or 0.002% of top speed. </p> <p>If you were to use a 100:1 gearbox, then top <em>output</em> speed becomes 500RPM, which means now you have to control motor speed to +/- 2%, or 0.2% for the +/-1RPM case, which suddenly becomes far more manageable. </p> <p>The downside to using a gearbox is that they are physical, mechanical devices, meaning that they will have friction, backlash, and other drawbacks. Essentially you wind up paying a percentage of your motor's power for the advantage that speed becomes easier to control.</p> <p>tl;dr - use gearboxes. </p>
7522
2015-06-18T09:22:09.443
|motor|brushless-motor|
<p>I'm attempting to control a small vehicle at relatively slow (.5 m/s - 1 m/s) speeds, but with extreme accuracy (1mm). For the drive system, I'm considering using brushless motors as they have a much greater power / volume ratio than I am able to find with brushed motors, especially at this small size.</p> <p>I will be using wheels between 1" and 2" diameter, so the RPM I will be looking for is between 150 - 500 RPM at max. This would suggest either driving the motors at a low speed directly, or driving them at a high speed and gearing them down. As I understand it, both setups will give high torques, as brushless motors decrease torque with speed. With brushed motors, it's quite obvious that a gearbox is necessary as otherwise there is no torque in the system, but here the choice isn't as clear, which is why I am asking.</p> <p><strong>tl;dr</strong> Use brushless motors at high speed with gearbox or low speed (ungeared) for high torque / low speed / high precision application?</p>
Highspeed with gearbox or low speed for brushless motor?
<p>You can use two infrared sensors: one just outside the doorway and one just inside the room. Call the outside sensor A and the inside sensor B. If A triggers before B, someone is entering, so add 1 to a counter; if B triggers before A, someone is leaving, so subtract 1. The number of people in the room is whatever the counter holds, and the lights stay on while it is greater than 0.</p>
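<p>A minimal sketch of that counting logic (my own example, with a scripted sequence of sensor readings standing in for real hardware reads; debouncing and timeouts are left out):</p> <pre><code>#include &lt;cstdio&gt;

int peopleCount = 0;

// Feed one pair of beam states per call. A = outer sensor, B = inner sensor,
// true while that beam is interrupted. Direction is decided by which beam
// was broken first, once both beams are clear again.
void updateCounter(bool a, bool b) {
    static bool prevA = false, prevB = false;
    static bool sawA = false, sawB = false;
    static char firstBroken = 0;                 // 'A', 'B' or 0 = none yet
    if (a &amp;&amp; !prevA) { sawA = true; if (firstBroken == 0) firstBroken = 'A'; }
    if (b &amp;&amp; !prevB) { sawB = true; if (firstBroken == 0) firstBroken = 'B'; }
    if (!a &amp;&amp; !b &amp;&amp; firstBroken != 0) {          // passage finished
        if (sawA &amp;&amp; sawB) {                      // both beams were crossed
            if (firstBroken == 'A') peopleCount++;   // outside to inside
            else                    peopleCount--;   // inside to outside
            if (peopleCount &lt; 0) peopleCount = 0;
        }
        firstBroken = 0;
        sawA = sawB = false;
    }
    prevA = a;
    prevB = b;
}

int main() {
    // Simulated samples: one person walks in (A then B), then walks back out.
    int in[4][2]  = {{1,0},{1,1},{0,1},{0,0}};
    int out[4][2] = {{0,1},{1,1},{1,0},{0,0}};
    for (int k = 0; k &lt; 4; k++) updateCounter(in[k][0], in[k][1]);
    printf("after entry: %d person(s), lights %s\n",
           peopleCount, peopleCount &gt; 0 ? "on" : "off");
    for (int k = 0; k &lt; 4; k++) updateCounter(out[k][0], out[k][1]);
    printf("after exit:  %d person(s), lights %s\n",
           peopleCount, peopleCount &gt; 0 ? "on" : "off");
    return 0;
}
</code></pre>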
7535
2015-06-19T19:29:02.963
|sensors|
<p>I am working on my first hobby project and I'm not very familiar with sensors yet. I am trying to build a system which detects the presence of a person in a small room with a single entrance/exit door.</p> <p>The idea is that when the first person enters the room, the lights turn on, and any following person doesn't affect the state of the lights. After the last person leaves, the lights should turn off. In a programmatic sense, the lights should be on when the present-person count is greater than 0.</p> <p>I have explored my options and found out that infrared sensors are usually used for this type of problem. What I am not sure about is how to detect whether a person has entered or left, so I would like to ask for some help with this.</p>
Detecting the presence of a person in a room
<p>A discrete controller cannot feel anything that might vary within a sample period, so don't be concerned about it. Even a lag of one sample period is very negligible as seen by the PID and any technique you might consider to compensate for it will be definitely an overkill.</p> <p>The right question you should ask yourself is: <em>"is my sample period selected correctly according to the characteristics of the plant I want to control?"</em></p> <p>Anyway, for <strong>significant lags</strong> (i.e. in the order of several sample periods) affecting a system, there exist methods such as the <a href="https://en.wikipedia.org/wiki/Smith_predictor" rel="nofollow">Smith Predictor</a> that come to help you out.</p>
7547
2015-06-21T20:12:04.990
|pid|kalman-filter|
<p>Let's say a PID is implemented and the errors are calculated using the sensor data, but the sensor data lags by a certain amount of time because of overhead. If the lag time is smaller than the sampling period, how well does the PID perform? What I am thinking is that the PID will calculate errors based on past data and use that to control. How would using a Kalman filter to estimate the actual sensor data help?</p>
Lagging sensor data for PID
<p>To your first question: this depends on the coin acceptor. If the coin acceptor switches between ground and Vdd then you do not need the pull-down. If the coin acceptor switches between Vdd and floating then you need it.</p> <p>In any case, if you have some kind of "flying wires" between the Arduino and the coin acceptor, you should place a pull-down next to the Arduino to get a clean signal. The flying wires can "catch" some electromagnetic interference or act as a capacitor; neither is good for you.</p> <p>Your second question: basically no, but you should use one when you have long connections between them. In both cases it won't hurt, so just place them as a precaution.</p>
7552
2015-06-22T10:33:20.577
|arduino|
<p>I understand the concept of using a pull-up/pull-down resistor when implementing a button/switch with Arduino to avoid a floating state, and in fact I have implemented this quite often.</p> <p>But I am not too sure if a pull-down resistor is necessary in chip-chip or chip-sensor communication.</p> <p>I am connecting a coin acceptor to the Arduino (common ground). The coin acceptor's output pin gives a short pulse each time there is a coin inserted. So far I am connecting the output pin of the coin acceptor directly to an Arduino pin and it works without any problem. Is a pull-down resistor (on this line) usually required as precaution in this case?</p> <p>Also I have the same question when connecting 2 pins of 2 separate Arduino's (also common ground) so that one Arduino can read pulses from the other.</p> <p>Thanks in advance for any experience shared!</p> <p>Dave</p>
Pull-down resistor for inter-chip and sensor-to-chip communication
<p>One of the prime sensors for global localisation on land is GPS. This is not an option underwater because electromagnetic waves get absorbed quickly. </p> <p>There are, however, alternatives which provide navigation information that is not so easily available on land.</p> <ul> <li>Long Baseline (LBL) - a method based on sonar which works very similarly to GPS, just using acoustics instead. For this you need transponders that ping at known intervals. The robot can then calculate its position based on the acoustic travel times. Because the timing requirements are much easier than with GPS, coming up with your own solution might be feasible.</li> <li>Ultra Short Baseline (USBL) - works the same as LBL, except that the transponders are very close to each other (usually one device). In this way you get an angle and range measurement. Less accurate, but more compact.</li> <li>Doppler Velocity Log (DVL) - uses the Doppler effect in multiple directions to give you the velocity over ground. </li> <li>Pressure sensor - you can directly get a measurement of your depth in the water column with this. </li> <li>AHRS - based on an IMU and compass, you can get a good estimate of your orientation. </li> </ul> <p>The bad news is that the sonar-based methods are way off in terms of your budget. However, if you are interested in this, it might be feasible to construct a localisation method based on DIY hydrophones. It should definitely be a fun experience. </p> <p>Your alternative, especially feasible in a pool, is to go optical. You can for example use an external camera to track your robot, or use markers (e.g. <a href="http://www.uco.es/investiga/grupos/ava/node/26" rel="nofollow">Aruco</a>) to get a relation to a known position.</p>
7564
2015-06-23T17:28:23.257
|sensors|localization|sensor-fusion|underwater|
<p>I am building an autonomous underwater robot. It will be used in swimming pools. It should be capable of running in any normal sized pool, not just the pool in which I test, so I cannot rely on a particular design or feature. It has to know its position in the pool, either with respect to the initial position or with respect to the pool. I have an IMU, which is a <a href="https://www.pololu.com/product/2468" rel="noreferrer">Pololu MiniIMU</a>, but finding the displacement with an IMU is a near impossible task. </p> <p>What sensor can I use for this task? It should not be very expensive (below $200).</p> <p>Tank size: 25x20x2.5 meters</p>
How to localise an underwater robot?
<p>The paper mentioned by @Ben is a good one, but sadly not well known.</p> <p>There are two different DH conventions in common use:</p> <ul> <li>Standard DH, used in books by Paul, Siciliano et al., Spong et al., Corke etc., is defined by parameters <span class="math-container">$d_j$</span>, <span class="math-container">$\theta_j$</span>, <span class="math-container">$a_j$</span> and <span class="math-container">$\alpha_j$</span></li> <li>Modified DH, used in the book by Craig (which is where I'm guessing your figures come from), is defined by parameters <span class="math-container">$d_j$</span>, <span class="math-container">$\theta_j$</span>, <span class="math-container">$a_{j-1}$</span> and <span class="math-container">$\alpha_{j-1}$</span></li> </ul> <p>All too often articles/papers don't mention which convention is used, which is a pity because the equation for the link transformation matrix is very different in each case.</p> <p>You list 3 equations at the top of your question:</p> <ul> <li>The first is correct for modified DH parameters</li> <li>The second and third are not correct for modified DH. They are somewhat like the equations for standard DH, except that they should have <span class="math-container">$a_j$</span> and <span class="math-container">$\alpha_j$</span>, not <span class="math-container">$a_{j-1}$</span> and <span class="math-container">$\alpha_{j-1}$</span>. Where did these come from?</li> <li>The second and third are equivalent (even if not correct) since you can change the order of a rotation and a translation about the same axis. In general you cannot change the order of transformations in an expression.</li> </ul>
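<p>To make the difference concrete, here is a small stand-alone sketch (my own illustration, not taken from the cited books) that builds the 4x4 link transform for both conventions from the same four numbers. Feeding identical values to each produces different matrices, which is exactly why mixing up the conventions breaks the forward kinematics.</p> <pre><code>#include &lt;cmath&gt;
#include &lt;cstdio&gt;

struct Mat4 { double m[4][4]; };

// Standard (distal) DH:  A_j = Rz(theta_j) Tz(d_j) Tx(a_j) Rx(alpha_j)
Mat4 dhStandard(double theta, double d, double a, double alpha) {
    double ct = cos(theta), st = sin(theta), ca = cos(alpha), sa = sin(alpha);
    Mat4 T = {{{ct, -st*ca,  st*sa, a*ct},
               {st,  ct*ca, -ct*sa, a*st},
               { 0,  sa,     ca,    d   },
               { 0,  0,      0,     1   }}};
    return T;
}

// Modified (proximal, Craig) DH:
//  T_j = Rx(alpha_{j-1}) Tx(a_{j-1}) Rz(theta_j) Tz(d_j)
Mat4 dhModified(double theta, double d, double aPrev, double alphaPrev) {
    double ct = cos(theta), st = sin(theta);
    double ca = cos(alphaPrev), sa = sin(alphaPrev);
    Mat4 T = {{{ct,    -st,    0,   aPrev },
               {st*ca,  ct*ca, -sa, -d*sa },
               {st*sa,  ct*sa,  ca,  d*ca },
               {0,      0,      0,   1    }}};
    return T;
}

int main() {
    // The same four numbers fed to both conventions give different matrices.
    Mat4 A = dhStandard(0.5, 0.1, 0.2, 1.57);
    Mat4 B = dhModified(0.5, 0.1, 0.2, 1.57);
    for (int i = 0; i &lt; 4; i++)
        printf("% .3f % .3f % .3f % .3f   |   % .3f % .3f % .3f % .3f\n",
               A.m[i][0], A.m[i][1], A.m[i][2], A.m[i][3],
               B.m[i][0], B.m[i][1], B.m[i][2], B.m[i][3]);
    return 0;
}
</code></pre>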
7570
2015-06-24T10:31:05.347
|dh-parameters|
<p>I'm studying Introduction to Robotics and found that there are different equations to determine the position and orientation of the end effector of a robot using the DH parameter transformation matrix. They are:</p> <p><img src="https://i.stack.imgur.com/uVJMt.gif" alt="1-"></p> <p><img src="https://i.stack.imgur.com/Z1hv9.gif" alt="enter image description here"></p> <p><img src="https://i.stack.imgur.com/voRGM.gif" alt="enter image description here"></p> <p>Example: Puma 560, all joints are revolute.</p> <p>Forward kinematics:</p> <p>Given: the manipulator geometrical parameters.</p> <p>Specify: the position and orientation of the manipulator.</p> <p>Solution:</p> <p><img src="https://i.stack.imgur.com/1sRQj.png" alt="enter image description here"></p> <p>For step 4:</p> <p><img src="https://i.stack.imgur.com/z8Ha7.jpg" alt="enter image description here"></p> <p>For step 3: here I'm confused.</p> <p>Here we should calculate the transformation matrix for each link and then multiply them to get the position and orientation of the end effector.</p> <p>I've seen different articles using one of these equations when they get to this step for the same robot (Puma 560).</p> <p>What is the difference between them? Will the result be different? Which one should I use when calculating the position and orientation?</p>
Homogenous Transformation Matrix for DH parameters
<p>How fast are you driving the car and are you <a href="http://www.youtube.com/watch?v=e-7lsyueC08" rel="nofollow noreferrer">allowing slip during turning</a>?</p> <p>From <a href="https://engineering.purdue.edu/ABE/People/Krutz/Lecture%201" rel="nofollow noreferrer">this powerpoint</a>, the turning radius is given by:</p> <p>$$ R = \frac{L}{\delta} $$ where $R$ is the turning radius, $L$ is the wheelbase length, and $\delta$ is the steering angle. Note that the equation is for <em>low speed</em> driving. </p> <p>Given this equation, you can generate a circle around which the car will traverse. Now you need to specify a reaction/braking time. Multiply the reaction time (seconds) by the average vehicle stopping speed (meters per second) to get how many meters the vehicle will travel in the time it takes to stop. Note that if you're using a linear deceleration the average vehicle stopping speed will be half of your cruising speed. </p> <p>Now, the distance around your turning circle is $d = R\theta$, where again, $R$ is the turning radius. This gives your location on the turning circle when you come to a stop to be $\theta = \frac{d}{R}$. </p> <p>Finally, look at the angle the object makes with the axis of your car at that position. This should be the minimum angle you are looking to see an obstacle. Here's a (crude) image for clarity: <img src="https://i.stack.imgur.com/yoqc7.png" alt="object recognition"></p> <p>Assuming your 30 degree steering angle is actually +/- 15 degrees, then your steering circle might look something like the above. Given your stopping distance, you need to be able to identify something at least at the red 'X'. This means your forward-looking sensor needs to be at an angle of, in this example, 50 degrees off the center line of your car in order to be able to see it in time to stop without collision.</p> <p>The faster you are going, the longer it will take to stop, so the farther "around" the turning circle you need to be able to see. As alluded to with the link at the opening of this post, though, RC cars generally have a lot of slip, so you will need to ensure that the car operates at probably a painfully slow speed to ensure the Ackerman steering radius equation above remains valid and to ensure that you don't have slip when braking. </p> <p>While it is probably acceptable to assume that performance in reverse is the same as in forward this may or may not actually be the case. Braking times, speed, and slip could all perform differently in reverse than in the forward direction. </p> <p>Finally, remember that the guidance I've given gives you the <em>minimum</em> distance for a <em>large</em> object and assumes instantaneous detection and reaction signalling. That is, you should add a margin to what this gives you. You <em>should</em> have full sensor coverage between your turning extremes (that is, sensor coverage of +/- 50 degrees in this example) to ensure you don't hit any objects, but as you probably won't hit any RC-scale light poles, people, or other relatively narrow objects, you can probably skate by with a few gaps in your sensor coverage. </p>
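<p>To put rough numbers on this procedure, a tiny calculator following the same recipe (low-speed R = L/delta with delta in radians, wheelbase from the question); the cruising speed and the stopping time below are made-up example values, so plug in your own:</p> <pre><code>#include &lt;cmath&gt;
#include &lt;cstdio&gt;

int main() {
    const double PI = 3.14159265358979;
    double L     = 0.313;             // wheelbase [m], from the question
    double delta = 15.0 * PI / 180.0; // steering angle to one side [rad]
    double v     = 1.0;               // cruising speed [m/s]   (example value)
    double tStop = 0.8;               // reaction + braking time [s] (example)

    double R     = L / delta;         // low-speed turning radius
    double d     = 0.5 * v * tStop;   // distance covered while stopping
                                      // (average speed ~ v/2 for linear braking)
    double theta = d / R;             // arc angle travelled before stopping

    // Stopping point in the car frame (car at origin, turn centre at (0, R)):
    double px = R * sin(theta);
    double py = R * (1.0 - cos(theta));
    double bearing = atan2(py, px) * 180.0 / PI;  // angle off the car's axis

    printf("turning radius  %.2f m\n", R);
    printf("stop distance   %.2f m\n", d);
    printf("sensor coverage at least +/- %.1f deg (plus margin)\n", bearing);
    return 0;
}
</code></pre>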
7580
2015-06-25T09:41:31.950
|mobile-robot|sensors|wheeled-robot|
<p>I want to develop an autonomous driving RC car. For detecting obstacles, I plan to mount 3-5 ultrasonic sensors in the front and in the back the car. What is the minimum necessary combined field of view of the sensors so the car never hits an obstacle? I.e. what is the minimum angle of detection of the combined sensors the car should have to detect any obstacle in its path?</p> <p>Some data about the car: (I don't know whether all the data is relevant)</p> <ul> <li>Separation between right and left wheel : 19,5 cm</li> <li>Wheelbase (distance between the front and the back wheels): 31,3cm</li> <li>Steering axle: front.</li> <li>Maximum angle of steering: around 30 degrees. The car uses <a href="https://en.wikipedia.org/wiki/Ackermann_steering_geometry" rel="nofollow">Ackermann steering</a></li> </ul>
Sensors' field of view in car driving
<p>In addition to Mindstorms being a good start for software, there are competitions for younger students such as FIRST Robotics and <a href="http://www.botball.org/" rel="nofollow">Botball</a>, where Legos play a large role (they may not be found everywhere, though). You learn software, and also how to design a robot to perform a specific task or tasks. </p> <p>However, you will not learn the electronics side, thus missing out on a huge part of the experience. While I haven't tested it, I started with a kit similar to this <a href="http://www.adafruit.com/product/749" rel="nofollow">Parallax BOEBot from Adafruit</a>. It serves not only as an introduction to software, but also to the parts involved and to assembly. The downside here is that you're not working with Legos, so the ability to reconfigure and build something new is lacking. </p>
7588
2015-06-26T07:22:03.090
|platform|
<p>I would like to start experimenting with Robots. Is <a href="http://www.lego.com/nl-be/mindstorms/?domainredir=mindstorms.lego.com&amp;showlanguageselector=true" rel="nofollow">Lego Mindstorm</a> a good start? Should I consider other platforms?</p>
Is Lego Mindstorm a good start?
<p>Servos are controlled by pulse-width modulation (PWM), so you need a PWM controller that has enough pins and power to drive your servos. I'd go with 2 of <a href="https://www.adafruit.com/products/815" rel="nofollow">Adafruit's I2C-based controllers</a>, but Pololu has a <a href="https://www.pololu.com/product/1354" rel="nofollow">USB one</a> that is rather popular.</p>
7612
2015-06-30T22:29:27.570
|mobile-robot|raspberry-pi|servomotor|rcservo|hexapod|
<p>Recently I've bought a <a href="http://www.ebay.co.uk/itm/231583577928" rel="nofollow">hexapod kit</a> and 18 TowerPro MG995 servos.</p> <p>My objective is to also add the Pi camera, sensors and perhaps a claw... So I've been researching, and I haven't found a clear answer when it comes to the servo control board.</p> <p>Which servo controller board shall I choose to complete my project?</p>
Raspberry Pi Hexapod 18DOF, Best servo control board?
<p><strong>Earlier answer, regarding the self-test</strong></p> <p>Section 3.3.2 describes how to do the self test, section 3.4.1 describes how to read the result. </p> <p>There is a note about not overwriting bits 5, 6, and 7 of register 14h at the start of chapter 3, so read that and then check that you have scaling set correctly (bits 0 and 1 of register 14h).</p> <p>As the chips that are suspect all also fail the self test and voltages on the board are normal, I would blame physical damage (were dropped/thrown during shipping, etc) and ask for an RMA for the parts.</p> <p><em>A caution</em></p> <p>While the board linked in the question accepts an input voltage of up to 6V, the board features a regulator and level shifters. The chip itself operates at a much lower voltage. </p>
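<p>For example, a read-modify-write that changes only the range bits and keeps bits 5-7 exactly as they were read back could be dropped into the question's code roughly like this. The 0x94 read address assumes the same read-flag convention as the 0x82 already used in that code, and both the bit positions and the range values should be double-checked against the datasheet:</p> <pre><code>#include &lt;stdint.h&gt;

// Declarations of the helpers already present in the question's code.
uint8_t ReadByteSPI(int8_t addr, int sensor_select);
void WriteByteSPI(uint8_t addr, uint8_t Data, int sensor_select);

void SetRangePreservingReservedBits(uint8_t rangeBits, int sensor_select)
{
    uint8_t reg = ReadByteSPI(0x94, sensor_select);   // 0x80 read flag | 0x14
    reg = (uint8_t)((reg &amp; ~0x03) | (rangeBits &amp; 0x03));  // change only bits 0..1,
                                                          // bits 5..7 stay as read
    WriteByteSPI(0x14, reg, sensor_select);
}
</code></pre>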
7615
2015-07-02T10:28:32.990
|arduino|accelerometer|
<p>I’m using the BMA020 (<a href="http://www.elv.de/3-achsen-beschleunigungssensor-3d-bs-komplettbausatz.html" rel="nofollow">from ELV</a>) with my Arduino Mega2560 and trying to read acceleration values that doesn’t confuse me. First I connected the sensor in SPI-4 mode. Means</p> <p>CSB &lt;-> PB0 (SS)</p> <p>SCK &lt;-> PB1 (SCK)</p> <p>SDI &lt;-> PB2 (MOSI)</p> <p>SDO &lt;-> PB3 (MISO)</p> <p>Also GND and UIN are connected with the GND and 5V Pins of the Arduino board.</p> <p>Here is the self-written code I use</p> <pre><code>#include &lt;avr/io.h&gt; #include &lt;util/delay.h&gt; #define sensor1 0 typedef int int10_t; int TBM(uint8_t high, uint8_t low) { int buffer = 0; if(high &amp; (1&lt;&lt;7)) { uint8_t high_new = (high &amp; 0x7F); buffer = (high_new&lt;&lt;2) | (low&gt;&gt;6); buffer = buffer - 512; } else buffer = (high&lt;&lt;2) | (low&gt;&gt;6); return buffer; } void InitSPI(void); void AccSensConfig(void); void WriteByteSPI(uint8_t addr, uint8_t Data, int sensor_select); uint8_t ReadByteSPI(int8_t addr, int sensor_select); void Read_all_acceleration(int10_t *acc_x, int10_t *acc_y, int10_t *acc_z, int sensor_select); int main(void) { int10_t S1_x_acc = 0, S1_y_acc = 0, S1_z_acc = 0; InitSPI(); AccSensConfig(); while(1) { Read_all_acceleration(&amp;S1_x_acc, &amp;S1_y_acc, &amp;S1_z_acc, sensor1); } } void InitSPI(void) { DDRB |= (1&lt;&lt;DDB2)|(1&lt;&lt;DDB1)|(1&lt;&lt;DDB0); PORTB |= (1&lt;&lt;PB0); SPCR |= (1&lt;&lt;SPE); SPCR |= (1&lt;&lt;MSTR); SPCR |= (0&lt;&lt;SPR0) | (1&lt;&lt;SPR1); SPCR |= (1&lt;&lt;CPOL) | (1&lt;&lt;CPHA); } void AccSensConfig(void) { WriteByteSPI(0x0A, 0x02, sensor1); _delay_ms(100); WriteByteSPI(0x15,0x80,sensor1); //nur SPI4 einstellen } void WriteByteSPI(uint8_t addr, uint8_t Data, int sensor_select) { PORTB &amp;= ~(1&lt;&lt;sensor_select); SPDR = addr; while(!(SPSR &amp; (1&lt;&lt;SPIF))); SPDR = Data; while(!(SPSR &amp; (1&lt;&lt;SPIF))); PORTB |= (1&lt;&lt;sensor_select); } uint8_t ReadByteSPI(int8_t addr, int sensor_select) { int8_t dummy = 0xAA; PORTB &amp;= ~(1&lt;&lt;sensor_select); SPDR = addr; while(!(SPSR &amp; (1&lt;&lt;SPIF))); SPDR = dummy; while(!(SPSR &amp; (1&lt;&lt;SPIF))); PORTB |= (1&lt;&lt;sensor_select); addr=SPDR; return addr; } void Read_all_acceleration(int10_t *acc_x, int10_t *acc_y, int10_t *acc_z, int sensor_select) { uint8_t addr = 0x82; uint8_t dummy = 0xAA; uint8_t high = 0; uint8_t low = 0; PORTB &amp;= ~(1&lt;&lt;sensor_select); SPDR = addr; while(!(SPSR &amp; (1&lt;&lt;SPIF))); SPDR = dummy; while(!(SPSR &amp; (1&lt;&lt;SPIF))); low = SPDR; SPDR = dummy; while(!(SPSR &amp; (1&lt;&lt;SPIF))); high = SPDR; *acc_x = TBM(high, low); SPDR = dummy; while(!(SPSR &amp; (1&lt;&lt;SPIF))); low = SPDR; SPDR = dummy; while(!(SPSR &amp; (1&lt;&lt;SPIF))); high = SPDR; *acc_y = TBM(high, low); SPDR = dummy; while(!(SPSR &amp; (1&lt;&lt;SPIF))); low = SPDR; SPDR = dummy; while(!(SPSR &amp; (1&lt;&lt;SPIF))); high = SPDR; *acc_z = TBM(high, low); PORTB |= (1&lt;&lt;sensor_select); } </code></pre> <p>And now here is what really confuses me. I got 5 of this sensors. One is working with this code perfectly fine. The Data I get is what I expect. I measure earth gravity in z-component if Iay the sensor on the table, if I start turning it I measure the earth gravity component wise in x-, y- and z- direction depending on the angle I turn the sensor.</p> <p>From the other 4 sensors I receive data that is different. The values jump from -314 (about -1.2 g) to +160 (about 0.5g). 
With the same code, the same wires and the same Arduino.</p> <p>I checked the register settings of all sensors, they are all the same. I checked the wire connection to the first component at the sensors, they are all around 0.3 Ohm. I used an Oscilloscope and made sure CSB, SCK and MOSI work properly.</p> <p>Am I missing something? What causes this similar but wrong behavior of 4 out of 5 sensors?</p>
Problem with acceleration sensor
<p>You have to discretize the model. This is typically done with a <a href="https://en.wikipedia.org/wiki/Z-transform" rel="nofollow">z-transform</a>, where</p> <p>$$ z = e^{sT} $$ or $$ s \approx \frac{2}{T} \frac{z-1}{z+1} $$</p> <p>where $T$ is the sample period. Make the substitution in your continuous-time (CT) model, replacing $s$ with the above, then solve it like you would for an ordinary transfer function. </p> <p>Once you've done this, divide everything by the highest order of $z$ in the transfer function. Now $z^{-n}$ represents a delay of $n$ samples, so:</p> <p>$$ Y(z) = X(z) + X(z) z^{-1} $$</p> <p>becomes $$ Y_n = X_n + X_{n-1} $$</p> <p>This is very much akin to the <a href="https://en.wikipedia.org/wiki/Laplace_transform" rel="nofollow">Laplace transform</a>, where you get functions of $s$, in which $\frac{1}{s}$ is the integrator and $s$ is the differentiator. Here $z^n$ represents $n$ samples in the future while $z^{-n}$ represents $n$ samples in the past; this is why you must divide by the highest order of $z$ - you can't get future samples.</p> <p>$$ Y(s) = X(s) + sX(s) $$ becomes $$ y = x + \dot{x} $$ So, regarding how to explain to someone <strong>how</strong> to implement a continuous model on a discrete controller, this is it. The z-transform. <strong>Why</strong> this works is the subject of many textbooks and much too long of an answer to cover here. </p>
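<p>As a worked example (my own, with arbitrary numbers): pushing a first-order low-pass $G(s) = \frac{a}{s+a}$ through the Tustin substitution gives $(2+aT)Y_n = aT(X_n + X_{n-1}) + (2-aT)Y_{n-1}$, which is all the discrete controller has to implement:</p> <pre><code>#include &lt;cstdio&gt;

// Tustin-discretized first-order low-pass G(s) = a/(s + a):
//   y[n] = ( a*T*(x[n] + x[n-1]) + (2 - a*T)*y[n-1] ) / (2 + a*T)
struct LowPass {
    double a;       // cutoff frequency [rad/s]
    double T;       // sample period [s]
    double xPrev;   // x[n-1]
    double yPrev;   // y[n-1]
    double step(double x) {
        double y = (a*T*(x + xPrev) + (2.0 - a*T)*yPrev) / (2.0 + a*T);
        xPrev = x;
        yPrev = y;
        return y;
    }
};

int main() {
    LowPass lp = {10.0, 0.01, 0.0, 0.0};   // a = 10 rad/s, T = 10 ms (examples)
    for (int n = 0; n &lt; 50; n++)           // response to a unit step input
        printf("%2d  %.4f\n", n, lp.step(1.0));
    return 0;
}
</code></pre>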
7622
2015-07-02T17:14:42.623
|control|
<p>How does one implement a virtual (continuous) model while the control system itself is discrete (a PLC)?</p> <p>I've done this in practice, but what about the theory? How does one explain this topic to a stranger (let's say myself)?</p>
Virtual model in PLC discrete / continuous
<p>Controllers in general are typically judged by 3 criteria:</p> <ol> <li><p><a href="https://en.wikipedia.org/wiki/Rise_time" rel="nofollow">Rise time</a>, or how quickly the output gets to the input,</p></li> <li><p><a href="https://en.wikipedia.org/wiki/Overshoot_(signal)" rel="nofollow">Percent overshoot</a>, or how much the output oscillates around the input, and</p></li> <li><p><a href="https://en.wikipedia.org/wiki/Settling_time" rel="nofollow">Settling time</a>, or how long it takes the output to stabilize at the same value as the input.</p></li> </ol> <p>Typically you measure these with a step test - from an initial condition of zero, give an input of 1. The time it takes to get from 10 percent to 90 percent of the final value is the rise time, the settling time is generally how long it takes the output to <em>stay</em> within 2% or 5% of the input value, and the percent overshoot is the farthest beyond the input that the output got. </p> <p>Allowing some overshoot improves rise time, which means the system responds faster, but at the expense of a longer settling time. </p> <p>Some more explanation:</p> <p>Because of inertia, rise time and percent overshoot are competing goals. What is the fastest way to drive 1000 feet? Go full throttle, of course. If you start slowing down to stop right at 1000 feet, you won't get to the mark as quickly as the person who is still going full throttle with no braking. </p> <p>Conversely, while the person who doesn't start braking until they hit 1000 feet may have got there first, it will take time, and some distance, to stop. They have overshot the 1000 foot mark. If they overshoot by 10 feet then go in reverse as quickly as they can, they get back to the mark, but will probably overshoot again, though not as much as the first time because they travel a shorter distance and thus can't get to the same top speed. This process repeats and is called oscillation. </p> <p>This trade-off, getting there quickly vs. not overshooting, is typical. All of engineering is defining criteria and then finding the best trade-off to meet the established criteria. </p>
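<p>A quick sketch of how those three numbers can be pulled out of a logged step response (my own example; the response here is just a synthetic curve so the snippet runs on its own):</p> <pre><code>#include &lt;cmath&gt;
#include &lt;cstdio&gt;
#include &lt;vector&gt;

int main() {
    // Fake logged step response of a slightly underdamped system, final value 1.0.
    double T = 0.01;                               // sample period [s]
    std::vector&lt;double&gt; y;
    for (int n = 0; n &lt; 1000; n++) {
        double t = n * T;
        y.push_back(1.0 - exp(-2.0 * t) * cos(6.0 * t));   // toy response
    }
    double target = 1.0;

    // Rise time: first crossing of 10% to first crossing of 90% of the target.
    int i10 = -1, i90 = -1;
    for (int n = 0; n &lt; (int)y.size(); n++) {
        if (i10 &lt; 0 &amp;&amp; y[n] &gt;= 0.1 * target) i10 = n;
        if (i90 &lt; 0 &amp;&amp; y[n] &gt;= 0.9 * target) i90 = n;
    }

    // Percent overshoot: how far the peak goes beyond the target.
    double peak = 0;
    for (int n = 0; n &lt; (int)y.size(); n++) if (y[n] &gt; peak) peak = y[n];
    double overshoot = (peak - target) / target * 100.0;

    // Settling time: last sample that leaves the +/-2% band around the target.
    int iSettle = 0;
    for (int n = 0; n &lt; (int)y.size(); n++)
        if (fabs(y[n] - target) &gt; 0.02 * target) iSettle = n;

    printf("rise time      %.3f s\n", (i90 - i10) * T);
    printf("overshoot      %.1f %%\n", overshoot);
    printf("settling time  %.3f s\n", (iSettle + 1) * T);
    return 0;
}
</code></pre>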
7626
2015-07-03T01:21:20.987
|pid|
<p>If there is an input and there are sensor-measured outputs, what objective methods are there to compare controller performance, besides just looking at whether the outputs match the inputs or not?</p>
what are methods to compare PID controller performance?
<p>I have used POMDP like models on top of a localization algorithm (<a href="http://wiki.ros.org/amcl" rel="nofollow">Adaptive Monte Carlo Localization</a>, from ROS), and a person detector [1][2] to find and follow a person with a humanoid robot. These two algorithms generate the input (observation) for the POMDP model in [1] and [2]. Also in [3] they used a POMDP model with similar input.</p> <p>As next step we used POMCPs (Partially Observable Monte Carlo Processes [4]) which use Monte Carlo simulations to create a policy because they are able to handle much larger state spaces. Having continuous states and observations causes problems when trying to find a policy, since there will be an infinite number of observations possible. We used Monte Carlo processes with continuous states [4], but the observations and actions were discretized to prevent the policy tree from growing too wide. There is however work on the use of continuous observations in POMDP, such as presented in [5]. </p> <p>Other work with POMDP like models on a top level are: [6-8]. </p> <ol> <li>A. Goldhoorn, A. Sanfeliu and R. Alquézar Mancho. Analysis of methods for playing human robot hide-and-seek in a simple real world urban environment, 1st Iberian Robotics Conference, 2013, Madrid, in ROBOT2013: First Iberian Robotics Conference, Vol 252-3 of Advances in Intelligent Systems and Computing, pp. 505-520, 2014, Springer. <a href="http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=7041445" rel="nofollow">http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=7041445</a></li> <li>A. Goldhoorn, A. Garrell Zulueta, R. Alquézar Mancho and A. Sanfeliu. Continuous real time POMCP to find-and-follow people by a humanoid service robot, 2014 IEEE-RAS International Conference on Humanoid Robots, 2014, Madrid, Spain, pp. 741-747, IEEE Press. <a href="http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=7041445" rel="nofollow">http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=7041445</a> </li> <li>Luis Merino, Joaqu\'in Ballesteros, Noé Pérez-Higueras, Rafael Ramón-Vigo, Javier Pérez-Lara, and Fernando Caballero. Robust Person Guidance by Using Online POMDPs. In Manuel A. Armada, Alberto Sanfeliu, and Manuel Ferre, editors, ROBOT2013: First Iberian Robotics Conference, Advances in Intelligent Systems and Computing, pp. 289–303, Springer International Publishing, 2014. <a href="http://link.springer.com/chapter/10.1007%2F978-3-319-03653-3_22" rel="nofollow">http://link.springer.com/chapter/10.1007%2F978-3-319-03653-3_22</a> </li> <li>D. Silver and J. Veness, “Monte-Carlo planning in large POMDPs,” Proceedings of 24th Advances in Neural Information Processing Systems (NIPS), pp. 1–9, 2010.</li> <li>J.M. Porta, N. Vlassis, M.T. Spaan and P. Poupart. Point-based value iteration for continuous POMDPs. Journal of Machine Learning Research, 7: 2329-2367, 2006.</li> <li>Pineau, J., Gordon, G., &amp; Thrun, S. (2003). Point-based value iteration: An anytime algorithm for POMDPs. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 18, pp. 1025–1030). Citeseer. Retrieved from <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.68.1777&amp;rep=rep1&amp;type=pdf" rel="nofollow">http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.68.1777&amp;rep=rep1&amp;type=pdf</a></li> <li>Ong, S., Png, S., &amp; Hsu, D. (2009). POMDPs for robotic tasks with mixed observability. Proc. Robotics: Science and Systems. 
Retrieved from <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.149.3849&amp;rep=rep1&amp;type=pdf" rel="nofollow">http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.149.3849&amp;rep=rep1&amp;type=pdf</a></li> <li>Schesvold, D., Tang, J., Ahmed, B. M., Altenburg, K., &amp; Nygard, K. E. (2003). POMDP planning for high level UAV decisions: Search vs. strike. In In Proceedings of the 16th International Conference on Computer Applications in Industry and Engineering (pp. 3–6). Citeseer. Retrieved from <a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.125.9134" rel="nofollow">http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.125.9134</a></li> </ol>
7644
2015-07-07T07:19:56.307
|kalman-filter|particle-filter|planning|filter|
<p>POMDPs extend MDPs by concealing state and adding an observation model. A POMDP controller processes either</p> <ul> <li>action/observation histories or</li> <li>a Bayesian belief state, computed from the observations (<em>belief-MDP</em> transformation)</li> </ul> <p>In a complex, real-world system like a robot, one usually preprocesses sensory readings using filters (Kalman, HMM, whatever). The result is a belief state.</p> <p>I am looking for publications that discuss the problem of fitting a (probably more abstract) POMDP model on top of an existing filter-bank. </p> <ol> <li>Do you have to stick to the belief-MDP, and hand over the filtered belief-state to the controller?</li> <li>Is there any way of using history-based POMDP controllers, like MCTS?</li> <li>How do you construct/find the abstract observations you need to formulate the POMDP model?</li> </ol>
How to use a POMDP-based planner on top of a probabilistic filter
<p>Some type of Kalman filter is almost always the best solution to an estimation problem involving a dynamic system given your computer can handle the matrix inversion. Generally, Kalman filters optimally combine the previous estimate, the confidence of the previous estimate, sensor measurements, and sensor confidence together for the new state estimate. </p> <p>The advantage of the complementary filter is its simplicity and ease of implementation. The complementary filter's disadvantage is its accuracy; it will never behave better than a well tuned Kalman filter. </p> <p>I am unfamiliar with the other filters you've listed :/</p> <p>For your system, I would recommend using an extended Kalman filter or an unscented Kalman filter, both are capable of handling the nonlinear equations that you'll need for dead reckoning. </p> <p>Choosing filter parameters will vary depending on the filter you end up using. I would recommend looking at Optimal State Estimation by Dan Simon which goes over linear and nonlinear Kalman filters as well as choosing filter parameters. </p> <hr> <p>side note: Dead reckoning is an unobservable system. Your state estimate will slowly drift from truth no matter how accurate your sensors are. This may not be a problem if you're just trying to track motion over a couple of seconds. However, if you're trying to navigate a map using only the two sensors you've listed, you're bound to run into trouble. </p>
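<p>For reference, the complementary filter mentioned above is essentially one line per axis, blending the integrated gyro rate with the accelerometer tilt angle. A rough sketch (my own, with a made-up gyro bias and blending constant):</p> <pre><code>#include &lt;cstdio&gt;

// One-axis complementary filter: trust the integrated gyro on short time
// scales and the noisy but drift-free accelerometer angle on long ones.
double complementary(double anglePrev, double gyroRate, double accAngle,
                     double dt, double alpha) {
    return alpha * (anglePrev + gyroRate * dt) + (1.0 - alpha) * accAngle;
}

int main() {
    double angle = 0.0, dt = 0.01, alpha = 0.98;   // alpha is a tuning choice
    for (int n = 0; n &lt; 200; n++) {
        double gyroRate = 0.05;    // gyro reads pure bias; true rate is zero
        double accAngle = 0.2;     // accelerometer says 0.2 rad of tilt
        angle = complementary(angle, gyroRate, accAngle, dt, alpha);
    }
    // Settles near the accelerometer angle plus a small bias-induced offset,
    // instead of drifting off the way a pure gyro integration would.
    printf("estimated angle after 2 s: %.3f rad\n", angle);
    return 0;
}
</code></pre>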
7645
2015-07-07T12:39:49.987
|mobile-robot|localization|kalman-filter|imu|
<p>I'm searching for a filter to reduce noise and smooth the signal while dead reckoning with an IMU (6 DOF gyro + accelerometer). What are the differences/advantages/disadvantages of the following filters?</p> <ul> <li>Kalman</li> <li>Complementary</li> <li>Moving average</li> <li>Mahony</li> </ul> <p>I applied Kalman and complementary filters to an IMU and both of them introduce a time lag in the response, depending on the filter parameters. Also, the Kalman filter runs slower than the moving average and complementary filters. How can I choose the right filter and filter parameters?</p>
How do I choose the best filter for dead reckoning with an IMU?
<p>From your question, I assume that you are new to this field. This cannot be solved with one concept.</p> <p>The objective you describe requires a <a href="https://en.wikipedia.org/wiki/Simultaneous_localization_and_mapping" rel="nofollow">SLAM technique</a> - simultaneous localization and mapping.</p> <p>Using SLAM, the robot can be navigated to any location. In your case, a camera alone is not enough to map an area or a room; you also need a depth sensor such as a Kinect or <a href="https://www.stereolabs.com/" rel="nofollow">Zed</a>. You then need to develop a program for your robot to reach the object, depending on the architecture you are using. This is where the platform comes in: a typical Arduino or a Raspberry Pi single-board computer is enough for your project. You should be very familiar with these platforms so you can integrate your sensors and work with them.</p> <p>Finally, once the location is reached and the object is recognized by the sensors, inverse kinematics comes into action. There is no such thing as "real-world coordinates" unless you are dealing with GPS for long-range SLAM.</p> <p>In your case, the coordinates of the object are measured relative to the location of the sensor mounted on your car.</p> <p>Once the parameters of the object are known, you can use IK (inverse kinematics) for your robotic arm to grab it from its position (assuming that you know about IK, as you mentioned in the question).</p>
7652
2015-07-07T21:16:43.993
|mobile-robot|robotic-arm|
<p>I have a robotic arm mounted on a car. There's a camera attached to it. Suppose the camera takes an image of a room and finds that there's something, say an object, that has to be picked up. Say it's 50 feet away from the robot. My question is, first, how will the robot reach the object, and second, when it has reached the object, how will it know the real-world coordinates of the object in order to pick it up using inverse kinematic equations? Any help would be appreciated. Thanks.</p>
How do I control the robotic arm motion?
<p>A <a href="http://en.wikipedia.org/wiki/relay#Latching_relay" rel="nofollow">latching relay</a> remains fixed in its last position when power is disconnected.</p> <p>A <a href="http://en.wikipedia.org/wiki/stepping_switch" rel="nofollow">stepping switch</a> is a specialized multi-position latching relay.</p> <p>A <a href="http://en.wikipedia.org/wiki/circuit_breaker" rel="nofollow">circuit breaker</a>, an <a href="http://en.wikipedia.org/wiki/earth_leakage_circuit_breaker" rel="nofollow">earth leakage circuit breaker</a>, a <a href="http://en.wikipedia.org/wiki/residual-current_device" rel="nofollow">residual-current circuit breaker</a>, etc. also remain fixed in their last position when power is disconnected.</p>
7664
2015-07-10T00:59:05.327
|electronics|actuator|
<p>I would like to find an electronic actuator that mimics the characteristics of a hydraulic actuator, in that the position remains fixed without power drain when the actuator is not moving. Which actuators exist that match these criteria?</p>
What actuator types exist that remain locked in their last position like hydraulic piston?
<p>If one of the named researchers is still at that Oxford University group, it can be reasonable to write/email to the researcher explaining your interest and ask if it is possible to obtain a copy - even though it's old. Or if you're shy ask your supervisor if this would be a good idea first.</p> <p>However it is a very general sounding title, it's possible it doesn't have the precise thing that you're looking for... and many things have been written on the subject since.</p>
7666
2015-07-10T07:19:37.107
|mobile-robot|localization|kalman-filter|
<p>I'm sorry for this question, which might not fit in here; however, I would like to give it a shot. I've chosen this stack since the question is somewhat related to mobile robots. I've come across a paper on mobile robot localization that cites the following reference,</p> <blockquote> <p>C. Brown, H. Durrant-Whyte, J. Leonard, B. Rao, and B. Steer. Kalman filter algorithms, applications, and utilities. Technical Report OUEL-1765/89, Oxford U. Robotics Research Group, 1989.</p> </blockquote> <p>I couldn't find this reference. Nothing shows up in Google, not even in Google Scholar. My university gives me access to a massive database, but nothing shows up there either. Since this is a technical report, I'm interested in reading it to gain more appreciation of the Kalman filter. Has anyone come across this reference?</p>
where to get this reference about Kalman filter, technical report
<p>Kinect:</p> <p>Pros:</p> <ul> <li><p>cheap</p></li> <li><p>already calibrated</p></li> <li><p>active system (works also on textureless surfaces)</p></li> <li><p>dense stereo</p></li> </ul> <p>Cons:</p> <ul> <li><p>defined range (low maximal range)</p></li> <li><p>does not work well outdoors in direct sunlight</p></li> </ul> <p>Stereo:</p> <p>Pros:</p> <ul> <li><p>adjustable (different cameras and different baselines possible for different ranges)</p></li> <li><p>higher framerate possible</p></li> <li><p>works outdoors</p></li> </ul> <p>Cons:</p> <ul> <li><p>hard to build right (cameras must not move at all)</p></li> <li><p>higher computational load</p></li> <li><p>no dense 3D (relies on texture)</p></li> </ul>
7671
2015-07-11T07:09:16.273
|kinect|cameras|stereo-vision|
<p>As I'm advancing in my project, I've realized I need better hardware, particularly for video input and processing.</p> <p>Intuitively, it feels like stereo cameras offer a more powerful and flexible solution; on the other hand, the Kinect looks like a great out-of-the-box solution for depth sensing, and it also removes a lot of computational complexity since it outputs depth directly.</p> <p>So I would like to know the upsides and downsides of the two solutions, whether they have any well-known limitations and/or typical fields of application, and why.</p> <p>Thank you </p>
Kinect VS Stereo cameras
<p>You mentioned a step size of 10 microns... It is not certain that you can achieve this precision with conventional stepper motors and DIY hardware. The backlash in the gearbox (especially if multi-stage) would exceed your required precision... I would recommend looking into precision piezo actuators.</p> <p>These come to mind: <a href="http://www.physikinstrumente.com/products.html" rel="nofollow">http://www.physikinstrumente.com/products.html</a></p> <p>To my knowledge they have high-precision X-Y tables and suitable controllers for them that you can also interface from an Arduino (or something else, e.g. MATLAB). They should be right around your budget, maybe a bit higher... You can also look at other manufacturers of piezo linear actuators.</p>
7685
2015-07-13T10:31:49.377
|raspberry-pi|stepper-motor|
<p>In the scope of my PhD, I would like to build an automated microscopy set-up that should take images of a sample of 2 cm by 2 cm. This should be done by taking pictures of 500 micrometers by 500 micrometers.</p> <p>Therefore I need to design an XY-stage moving my sample over the optical setup. I would use a Raspberry Pi to steer all the hardware.</p> <p>Could you direct me to material about how to best make an XY-stage? My questions are about what types of motors to use (stepper?), how many, how to create a good sliding mechanism to avoid jerky steps, etc.</p> <p>Simple links to basic engineering of such set-ups would be more than enough for me to start, as I am a complete layman in this field.</p> <p>EDIT: I have found this <a href="http://letsmakerobots.com/node/31381" rel="nofollow noreferrer">blogpost</a>. It does what I require, if I get stepper motors with a small enough step angle.</p> <p>EDIT2: I need a maximal range of motion of 10 cm in both directions. The overall size should not exceed 30x30 cm^2. Step sizes should not exceed 10 microns. I do not care about moving speed. Based upon the design in the link, buying a stepper motor with a 100:1 gear box could allow me very small angular steps (&lt;0.05 deg), which would result in about 5 micron steps, assuming a rotor radius of about 1 cm. As far as price goes, it should not exceed commercially available options, which start at about 5k USD.</p>
What is required to build a simple XY-stage?
<p>You can't make sense of this because it isn't clear what you are actually observing. The back electromotive force (EMF) of your motor is proportional to its speed (the constant relating the two is the motor's Kv). So, when the speed of your motor rises, the EMF rises too.</p> <p>What does an ESC do? Basically, it chops the input voltage to generate the expected output voltage.</p> <p>So, what is this decreasing voltage you observe? It won't be the output voltage of the ESC, because that is rising, not falling, and, as it is chopped, you probably can't measure it with your multimeter (basic multimeters only measure continuous or sinusoidal signals).</p> <p>I expect the decreasing voltage to be the input of your ESC. Why would it drop? Because you're driving a motor with a load that rises with speed (typically a propeller), and this requires current. The wires between your battery and your ESC are resistors. Let's say you have 50 mOhm of resistance. If you draw 10 A, the input voltage of your ESC is 0.5 V less than your battery output. Moreover, your battery also has internal resistance. If you draw more current, the output voltage of your battery drops too.</p>
7698
2015-07-17T07:16:18.710
|brushless-motor|electronics|esc|
<p>I measured the voltage the ESC is drawing while increasing the DC motor speed. The multimeter shows that as the speed increases, the voltage value decreases. Can anybody explain why this is happening?</p>
Voltage rpm relation
<p>Increasing the sample rate probably isn't going to buy you much if you can't do something useful with it. That is, if you don't update the control signal (e.g., motor current) at the same high sample rate, then I don't think you're going to gain much. Similarly, if your IMU or other sensors don't update at the higher frequency, polling them at a higher frequency isn't going to get you much either.</p> <p>You need high-frequency feedback, high-frequency output, and the headroom to process at those high frequencies. Even if you could increase the PID sample rate, if you're still relying on the 1 kHz process to provide the output then you haven't gained much. However, note that the input frequency doesn't matter <em>too</em> much; in the bounding case of a <em>very</em> low frequency input, the controller would interpret the reference like a step input, but you still get PID control/stability.</p> <p>What I would suggest is either going with a faster processor or (preferred) splitting the PID off onto its own processor. Use a "master" to interface with the user, generate a reference signal, then pass that reference off to the PID controller. Doing this frees up the PID controller to handle only the relevant feedback (sensors) and outputs (motors). </p> <p>For example, say you are running a quadcopter. The master controller gets the motion control input from the pilot, but also gets peripheral input from the pilot, for things such as camera gimbal control, camera control, lighting/display control, audio, heartbeat messages, feeding vehicle parameters back to the pilot, etc. </p> <p>Of all of these things, parse the motion reference (pitch/roll/yaw/thrust etc.) and pass that off to the PID control chip. </p> <p>Now the PID control chip gets the motion reference at the 1 kHz input rate, but the <em>reference</em> doesn't need to update very quickly. The PID control chip can now focus only on polling the IMU/compass/other sensors, calculating reference-feedback errors, and modifying the outputs. </p>
7703
2015-07-17T17:29:39.947
|pid|embedded-systems|
<p>my robotic project is running at every 1ms and the processes are taking about 0.9ms. I am running PID so my max clock rate is 1kHz. About half of the processing time are taken by SPI peripherals, IMU and encoders. Is there any recommendation on how I can run faster PID sampling rate?</p>
increase PID sampling rate on embedded system
<p>I misread the text. r should be a random number between 0 and M^-1. Changing this should solve all your problems. </p> <hr> <p>The re-sampling algorithm's purpose is (roughly) to remove particles that have a low probability of representing the system that you're tracking. This is done by stacking all the particles together. Each particle's size is equal to the probability that it represents the system, so the total weight of all the particles is 1, or 100%. Now M random spots on this range of particles are chosen and the particles that occupy those spots are kept. Particles that are not chosen are discarded.</p> <p>The low variance re-sampling algorithm splits the particle range space into M sections and keeps the particle occupying the rand(0,M^-1) spot in each section. This is what U is, the random spot on the jth section of the probability space.</p> <p>The advantage of the low variance re-sampling algorithm is its ease of implementation and its robustness. If we randomly choose a spot M times, there is a non-zero chance we could get the same particle each time. This could cause huge problems for the filter.</p> <hr> <p>ignore: It looks like your random weight r should have a standard deviation of M^-1. </p>
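<p>In case it helps to see the whole thing in one place, here is a minimal sketch of the corrected low variance resampler (written in C++ rather than Matlab; the variable names mirror your code, and the weights are assumed to be normalized so that they sum to 1). Because r is drawn from [0, M^-1), U stays below the total weight and the index no longer runs past the end of the weight array:</p> <pre><code>#include &lt;random&gt;
#include &lt;vector&gt;

// Low variance resampling; 'weights' must be normalized to sum to 1.
std::vector&lt;double&gt; lowVarianceResample(const std::vector&lt;double&gt;&amp; particles,
                                        const std::vector&lt;double&gt;&amp; weights)
{
    const std::size_t M = particles.size();
    std::vector&lt;double&gt; resampled;
    resampled.reserve(M);

    std::mt19937 gen(std::random_device{}());
    std::uniform_real_distribution&lt;double&gt; dist(0.0, 1.0 / M);
    double r = dist(gen);                      // single random offset in [0, M^-1)

    double w = weights[0];                     // running cumulative weight
    std::size_t i = 0;

    for (std::size_t m = 0; m &lt; M; ++m) {
        double U = r + static_cast&lt;double&gt;(m) / M;   // one probe per segment
        while (U &gt; w) {                        // advance until the cumulative weight covers U
            ++i;
            w += weights[i];
        }
        resampled.push_back(particles[i]);     // keep the particle that owns this segment
    }
    return resampled;
}
</code></pre>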
7705
2015-07-18T00:32:52.687
|mobile-robot|algorithm|particle-filter|probability|
<p>For my particle filter, I decided to try using the low variance resampling algorithm as suggested in Probabilistic Robotics. The algorithm implements systematic resampling while still considering relative particle weights. I implemented the algorithm in Matlab, almost word-for-word from the text:</p> <pre><code>function [state] = lowVarianceRS(prev_state, weight, state_size)

state = zeros(1,state_size);    % Initialize empty final state
r = rand;                       % Select random number between 0-1
w = weight(1);                  % Initial weight
i = 1;
j = 1;

for m = 1:state_size

    U = r + (m - 1)/state_size; % Index of original sample + size^-1

    while U &gt; w                 % I'm not sure what this loop is doing
        i = i + 1;
        w = w + weight(i);
    end

    state(j) = prev_state(i);   % Add selected sample to resampled array
    j = j + 1;

end

end
</code></pre> <p>As would be expected given the while loop structure, I am getting an error for accessing weight(i), where i exceeds the array dimensions.</p> <p>To solve this, I was considering circularly shifting my weight array (putting the first index used as the first value in weight, so that I never exceed matrix dimensions). However, I wasn't sure if this would negatively impact the rest of the algorithm, seeing as I'm having trouble understanding the purpose of the U calculation and while loop.</p> <p>Could anyone help clarify the purpose of U and the while loop, and whether or not a circular shift is an acceptable fix?</p>
Low variance resampling algorithm for particle filter
<p>You could have a look at the papers of the <a href="http://www.eecs.harvard.edu/ssr/projects/progSA/kilobot.html" rel="nofollow">Kilobot project</a>. Those robots have capabilities similar to the ones you describe, so it would be a good starting point. </p>
7710
2015-07-20T18:07:01.117
|mobile-robot|multi-agent|swarm|
<p>I'm looking for an algorithm for forming up multiple robots in a 2D simulation. Can you suggest resources about this topic? Also I need suggestions and comments about these topics:</p> <ul> <li>Can I adapt an algorithm from optimization methods like particle swarm or ant colony optimization?</li> <li>Is there any way other than "go to goal" for each robot?</li> <li>Are pattern formation algorithms feasible?</li> <li>Suggestions about a fast way of forming/aligning the formation</li> </ul> <p>Notes:</p> <ul> <li>I'm not using a robotics simulator or physics engine for this. </li> <li>Robots are represented as dots.</li> <li>The multi-robot system is homogeneous.</li> <li>Every robot can sense obstacles and other robots within a sensing-range circle around the robot. </li> <li>The number of obstacles and robots can vary from 2 to 100. </li> <li>The multi-robot system is not centralized. </li> </ul>
Formation Algorithm for Multiple Robots
<blockquote> <p>do they use some optimal control to calculate in advance the required impulse to reach the new desired state of the system</p> </blockquote> <p>This is a property every stable controlled system has/should have. Figuring out how to get to some other state is the core of control theory.</p> <p><em>Doing</em> something in advance is not possible, but <em>knowing</em> it certainly is. Given that the system is controllable and stable, it should be able to follow a given trajectory. It is a lot easier to calculate a trajectory beforehand instead of generating desired states "on the fly".</p> <p>The great joy of control theory is that it is <s>only boring mathematical formulas and numbers, entirely unaware of what's actually going on in the "real world"</s> able to describe very different systems in a common abstract way. So let's substitute the spaceship with a car.</p> <p>You are standing at the traffic lights. You think about what to do when they turn green. You create a mental image of how the acceleration (and thus velocity and thus position) of the car should change over time. In a car, that corresponds to how hard you should hit the pedal to the metal.</p> <p>In a multi-axis setup in space, there are many angular accelerations to consider, but it's still the same: how should these values change over time?</p> <p>There are infinite solutions, but some are favourable. When you are docking in space, for example, you don't want to rotate much at the end of the motion. You want to have pretty much only translational movement between the two docking parts towards each other at the end of the motion. That's why most of any necessary rotation should happen at the beginning of the motion.</p> <p>And don't forget that you are in space: finding a trajectory that uses minimal fuel is desirable.</p> <blockquote> <p>they use "pulse" mode just for precise magnitude variation of provided thrust (like average voltage in PWM(pulse-width modulation)) in a classic PID control loop?</p> </blockquote> <p>This is not an either/or question; it is both. The "pulse" mode is all they have. You can open or close the valve. But the "pulse" is a controlled system in itself. They measure all kinds of properties of the tanks, like temperature and pressure, to estimate what one "pulse" will do.</p> <p>Additionally, the computers know the payload and can calculate the model that's being controlled. In a sense, the system is self-aware and can adjust its control accordingly.</p> <p><a href="http://science.ksc.nasa.gov/shuttle/technology/sts-newsref/sts-rcs.html" rel="nofollow noreferrer">Check this document on the RCS</a>, which I got a bit of information from.</p> <hr> <p>An attempt to explain the "it's only a pulse" idea:</p> <p>You are probably familiar with one very common type of spaceship: microwave ovens. (Admittedly, they are bound to some big rock by gravity, but still, they are in space (like everything else), so they are clearly spaceships.)</p> <p>There are different kinds of microwave ovens available. They all have pulsed operation in common.</p> <p>The simple ones look like this:</p> <p><img src="https://i.stack.imgur.com/h7oXq.jpg" alt="enter image description here"></p> <p>This one only has two dials: time and power. The power dial adjusts the pulse width and the time dial the overall duration.</p> <p>With this microwave, a lot of the control algorithm is on you. You cannot put different things into the oven, use the same settings and expect proper results. The frozen chicken and the raisins in the image will look different after 5 minutes at full power. You have to dial in the settings that work for the food you put into the oven.</p> <p>There are "smarter" microwaves that look like this one, with more dials and buttons: <img src="https://i.stack.imgur.com/ELexV.png" alt="enter image description here"> You can tell it what you put inside and what you want to do with it. The oven will then figure out what to do. It might even vary the settings over time, to get better heat penetration, even cooking, etc.</p> <p>It could be smart enough to first run a defrosting program, then cook and finally turn on the grill. As it knows that you placed a frozen chicken of a certain weight into it, it will be able to figure out how to do all that without overheating during defrosting, drying out while cooking or burning while grilling. Well, at least in theory.</p> <p>As you can see, even something as primitive as a simple on/off pulse can do rather sophisticated things if wrapped into a good control loop.</p> <p>Would it be even better if they could continuously change the valve opening? Probably yes, but not by that much, as you know from the averaged PWM signal. Additionally, an adjustable valve is a more complex part than a simple on/off solenoid valve. Simple stuff fails less often. Stuff failing in space is a <strong>bad</strong> thing.</p>
7722
2015-07-22T05:25:39.873
|control|automatic|rocket|
<p>Reaction Control Systems (RCS) on these vehicles are implemented by using small rocket thrusters. For me it looks like these thrusters work in some kind of "pulse" mode. And I can't understand - do they use some optimal control to calculate in advance the required impulse to reach the new desired state of the system OR do they use "pulse" mode just for precise magnitude variation of provided thrust (like average voltage in PWM(pulse-width modulation)) in a classic PID control loop?</p>
What type of control law is used in "Reaction Control System" of Apollo Lunar Module or Space Shuttle?
<p>I use CAD software to generate 3D models, convert those 3D models to an STL format, then load those STL files to Matlab with the following script:</p> <pre><code>function [points,success] = LoadSTL(path) % LoadSTL loads an ASCII STL file with units in [mm]. % The output is a structure array where each element in the array is % a vertex. % % [points , success] = LoadSTL(path) % % path = String containing the path and extension of the file to be loaded. % points = Matrix of locations where each column is a vertex, rows are x/y/z/1: % points(1,:) = x % points(2,:) = y % points(3,:) = z % points(4,:) = 1 % NOTE - every three columns is a new face/facet. % success = Bit indicating if the file was successfully opened or not. success = 0; fileID = fopen(path); if fileID &lt;0 fprintf('File not found at path %s\n',path); return; end fprintf('Loading Path %s...\n',path); fileData = fread(fileID,'*char'); eol = sprintf('\n'); stlFile = strsplit(fileData',eol); fclose(fileID); fprintf('Done.\n') pause(0.25); clc assignin('base' , 'stlFile' , stlFile) pointsTracker = 1; for i=2:size(stlFile,2) if mod(pointsTracker,100)==0 clc fprintf('Parsing file at %s...\n',path); fprintf('Currently logged %d points\n',pointsTracker); end testLine = stlFile{i}; rawStrip = strsplit(testLine , ' ' , 'CollapseDelimiters' , true); if numel(rawStrip) == 5 points(1,pointsTracker) = str2double(rawStrip{1,3})/1000; points(2,pointsTracker) = str2double(rawStrip{1,4})/1000; points(3,pointsTracker) = str2double(rawStrip{1,5})/1000; points(4,pointsTracker) = 1; pointsTracker = pointsTracker + 1; end end disp('Done.') pause(0.25); clc; if mod(size(points,2),3) &gt; 0 disp('File format in an unexpected type.') disp('Check the file specified is an STL format file with ASCII formatting.') disp('(Error - number of vertices not a multiple of 3)') disp(numel(points.x)) return; end success = 1; return; </code></pre> <p>Once this is done I save the resulting output (points) as a .mat file and load exclusively from that .mat instead of the STL file because the .mat file loads <em>significantly</em> faster than parsing the STL file every time.</p> <p>Once you've loaded the file, you can quickly plot the STL file in Matlab with the following command:</p> <pre><code>myPlotColor = [0.5 0.5 0.5]; nFaces = size(points,2)/3; patch(... reshape(points(1,:),3,nFaces) , ... reshape(points(2,:),3,nFaces) , ... reshape(points(3,:),3,nFaces) , ... myPlotColor); </code></pre> <p>At this point, standard Matlab plot commands work; e.g., <code>axis equal</code>, <code>hold on</code>, etc. </p> <p>Now, given how to load and display STL files in Matlab plots from an m-file, you can go about doing whatever form of control you want. Transform the CAD file with any 4x4 transform matrix composed of a rotation matrix $R$ and translation vector $s$:</p> <p>$$ \mbox{transformedPoints} = \begin{bmatrix} &amp; R&amp; &amp; s \\ 0 &amp;0 &amp;0&amp; 1 \\ \end{bmatrix} \mbox{points} $$</p> <p>Plot the transformed points with the <code>patch</code> command shown above. 
As long as you've generated your CAD model such that the origin of your 3D model corresponds with the origin of your mathematical model then any movement/rotation of the mathematical model about its origin can be correctly applied to the CAD model, which can then be displayed.</p> <p>I'd just like to reiterate that this file loads STL files that are in ASCII format with units in millimeters, and the resulting plot is with units in meters, but of course you can change that with the <code>/1000</code> scaling during the loading of the file. </p>
7723
2015-07-22T07:11:51.783
|matlab|simulation|
<p>I'm learning to make a 3D simulation in MATLAB based on a model designed in SOLIDWORKS. Here is an example: <a href="https://youtu.be/1nSafEJlySs" rel="nofollow">SIMULINK+SOLIDWORKS</a></p> <p>The way used there is: </p> <ol> <li><p>Create a 3D model in SOLIDWORKS</p></li> <li><p>Create an xml file that can be imported into MATLAB via SimMechanics Link</p></li> <li><p>Import the model into MATLAB/SIMULINK. A Simulink system is created.</p></li> </ol> <p>After these steps, controlling the system is implemented in <strong>SIMULINK</strong>. But I feel Simulink is rather restrictive for control. I want to be more flexible and apply any algorithm to the model, and using a MATLAB *.m file for control is a more efficient way. </p> <p>So my question is this: Is there any way to do the 3D simulation (MATLAB+SOLIDWORKS) by using only an *.m file for control, with no <strong>SIMULINK</strong> anymore? </p> <p>All model information would be contained in the *.m file. Maybe steps 1 and 2 stay the same, but step 3 would be different.</p>
MATLAB 3D Simulation with SOLIDWORKS model
<blockquote> <p>For example, how does price vary with precision, speed, reach and strength?</p> </blockquote> <p>The price varies a lot, from a couple of hundred bucks to hundreds of thousands of dollars (<a href="http://www.gizmag.com/pr2-robot-commercially-available/16306/" rel="nofollow">Willow Garage's one-armed PR2 costs \$285,000 and the two-armed version costs \$400,000</a>). It goes up, as you can guess, whenever the robot arm is precise, fast, long, strong, collaborative (i.e. designed to interact with its environment), or has more DoFs (Degrees of Freedom). Also, take into account the flanges, tools, the palette of software they come with, certifications, etc. It really depends.</p> <blockquote> <p>Is there a database of industrial robot arms including information like the price, precision, speed, reach and strength of each model?</p> </blockquote> <p>I don't know of such a database, but you can find all this information (except prices: these companies are pretty secretive and usually reveal them only to potential customers. I work on KUKA's robots in my company, and I didn't know their price until our PR told me about the deal) in the robots' manuals available on the companies' websites. If you want to narrow down your choices, check what's available for the applications you want your robot to perform (if the robot will do repetitive work inside a confined space, there's no need to buy an expensive collaborative robot). I think the big guys in the industry like KUKA, ABB, or Fanuc have a complete range of robots for different applications and at different sizes. Check this link out:</p> <p><a href="http://blog.robotiq.com/bid/63528/What-are-the-different-types-of-industrial-robots" rel="nofollow">http://blog.robotiq.com/bid/63528/What-are-the-different-types-of-industrial-robots</a></p>
7731
2015-07-23T01:15:46.730
|robotic-arm|industrial-robot|
<p>I'd like to study the capabilities of industrial robot arms. For example, to answer the question how does price vary with precision, speed, reach and strength?</p> <p>Is there a database of industrial robot arms including information like the price, precision, speed, reach and strength of each model?</p>
Comparing industrial robot arms
<p>Battery capacity is generally measured in amp-hours. From a PWM perspective, you are pulsing current at a constant voltage, so you could estimate the current draw to be (duty cycle)*(rated current), where rated current is the nominal current draw at your operating voltage. Remember too that P=IV, so you could get a rated current by dividing rated power by operating voltage. </p> <p>So now that you have an estimate for current draw, you can do a simplistic state of charge estimate called "<a href="https://en.wikipedia.org/wiki/State_of_charge" rel="nofollow">Coulomb counting</a>", which is just current integration. </p> <p>This means that you take your current and multiply it by whatever your sample time interval is, then add it to the previous measurement. </p> <pre><code>spentCapacity = spentCapacity + (dutyCycle*maxCurrent)*dT </code></pre> <p>In this manner you accumulate how much charge you have "spent" and can compare that to how much you started with. </p> <p>This is not super accurate by any means, but if you're interested in a basic setup that will get you in the right ballpark this is it. </p> <p>If you go this route, better current measurements are an easy way to improve accuracy, but Texas Instruments makes some "battery fuel gauge" chips that are pretty slick too.</p>
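<p>As a rough sketch of that accumulation in C++ (the rated current, pack capacity, duty cycle and sample interval below are made-up placeholders; substitute your own motor and battery numbers):</p> <pre><code>#include &lt;cstdio&gt;

int main()
{
    const double maxCurrent     = 20.0;   // rated motor current at 100% duty [A] (assumed)
    const double packCapacityAh = 5.0;    // capacity printed on the battery label [Ah]
    const double dT             = 0.01;   // sample interval [s]

    double spentCapacityAh = 0.0;

    // In the real system, dutyCycle would be read from your PWM output each sample.
    for (double t = 0.0; t &lt; 600.0; t += dT) {
        double dutyCycle = 0.6;                          // 60% throttle as an example
        double current   = dutyCycle * maxCurrent;       // estimated draw [A]
        spentCapacityAh += current * dT / 3600.0;        // A*s converted to Ah
    }

    std::printf("Used %.2f Ah of %.2f Ah (%.0f%% remaining)\n",
                spentCapacityAh, packCapacityAh,
                100.0 * (1.0 - spentCapacityAh / packCapacityAh));
    return 0;
}
</code></pre>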
7735
2015-07-24T21:53:10.573
|quadcopter|battery|
<p>Is there any way of estimating the battery life from the PWM outputs which go to the motors, at the <strong>microcontroller level</strong>? I'm planning to estimate the path range with this. <em>The microcontroller, sensors and other electronic devices should be neglected.</em> </p>
Estimation of Battery Life Time From PWM Signals in a Quadrotor
<p>I think the vast majority of compliant or "elastic" robots you will find have rigid links, but compliant joints. I believe robots with flexible links are still a topic of active research. So your best bet is to look through academic papers. It is not a simple problem. I found some old papers to get you started:</p> <ul> <li><a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.14.1578" rel="nofollow">Control Of Flexible Robot Arm</a> by J.D. Lee, L. S. Hayna, Ben Li Wang, and KwangHornp Tsai.</li> <li>Havlik, S., "Structural compliance of flexible robot arms: analysis, synthesis and control aspects," Advanced Robotics, 1991. 'Robots in Unstructured Environments', 91 ICAR., Fifth International Conference on, vol. 2, pp. 1622-1625, 19-22 June 1991.</li> </ul>
7741
2015-07-25T10:25:53.017
|kinematics|simulator|dynamics|
<p>We can easily compute rigid robot kinematics and dynamics. There are many resources, simulators and modelling tools for it. But I couldn't find any of these for elastic robots. Can you suggest resources and modelling tools?</p>
Mathematical Modelling of Elastic Robots
<p>I have a library called <strong><a href="https://github.com/ahundt/grl" rel="nofollow">grl</a></strong> which integrates control of a KUKA iiwa in C++. </p> <p>Right now the most reliable mechanism I've found for control is to receive state over FRI, then send state via Java. All the tools necessary to do this are integrated into grl.</p> <p>While I've been able to receive state over FRI nicely, sending FRI commands so the robot drives reliably has proven more complicated. I'm close to a working implementation and I have a few simple test applications that move a single joint correctly. Once the bugs are worked out it should be very easy to use, and I'm hopeful it will work well. For a specific function controlling FRI see <a href="https://github.com/ahundt/grl/blob/master/test/KukaFRIClientDataDriverTest.cpp" rel="nofollow">KukaFRIClientDataDriver.cpp</a>.</p> <p>Unfortunately I've found the direct API KUKA provides to be a bit difficult to use as well, so I'm implementing functions that communicate over the underlying protobuf network messages and UDP.</p> <p>While this isn't a 100% answer, it is a solution that is 90% of the way to completion.</p> <p><strong>Update:</strong> FRI based control using grl is now working with the Sunrise Connectivity Suite 1.7. </p> <p>Here is an example of how to use it in the simplest case of <a href="https://github.com/ahundt/grl/blob/master/test/KukaFRIClientDataDriverTest.cpp" rel="nofollow">KukaFRIClientDataDriverTest.cpp</a>:</p> <pre><code>// Library includes #include &lt;string&gt; #include &lt;ostream&gt; #include &lt;iostream&gt; #include &lt;memory&gt; #include "grl/KukaFRI.hpp" #include "grl/KukaFriClientData.hpp" #include &lt;boost/log/trivial.hpp&gt; #include &lt;cstdlib&gt; #include &lt;cstring&gt; #include &lt;iostream&gt; #include &lt;boost/asio.hpp&gt; #include &lt;vector&gt; #include &lt;iostream&gt; using boost::asio::ip::udp; #include &lt;chrono&gt; /// @see https://stackoverflow.com/questions/2808398/easily-measure-elapsed-time template&lt;typename TimeT = std::chrono::milliseconds&gt; struct periodic { periodic():start(std::chrono::system_clock::now()){}; template&lt;typename F, typename ...Args&gt; typename TimeT::rep execution(F func, Args&amp;&amp;... 
args) { auto duration = std::chrono::duration_cast&lt; TimeT&gt; (std::chrono::system_clock::now() - start); auto count = duration.count(); if(count &gt; previous_count) func(std::forward&lt;Args&gt;(args)...); previous_count = count; return count; } std::chrono::time_point&lt;std::chrono::system_clock&gt; start; std::size_t previous_count; }; enum { max_length = 1024 }; int main(int argc, char* argv[]) { periodic&lt;&gt; callIfMinPeriodPassed; try { std::string localhost("192.170.10.100"); std::string localport("30200"); std::string remotehost("192.170.10.2"); std::string remoteport("30200"); std::cout &lt;&lt; "argc: " &lt;&lt; argc &lt;&lt; "\n"; if (argc !=5 &amp;&amp; argc !=1) { std::cerr &lt;&lt; "Usage: " &lt;&lt; argv[0] &lt;&lt; " &lt;localip&gt; &lt;localport&gt; &lt;remoteip&gt; &lt;remoteport&gt;\n"; return 1; } if(argc ==5){ localhost = std::string(argv[1]); localport = std::string(argv[2]); remotehost = std::string(argv[3]); remoteport = std::string(argv[4]); } std::cout &lt;&lt; "using: " &lt;&lt; argv[0] &lt;&lt; " " &lt;&lt; localhost &lt;&lt; " " &lt;&lt; localport &lt;&lt; " " &lt;&lt; remotehost &lt;&lt; " " &lt;&lt; remoteport &lt;&lt; "\n"; boost::asio::io_service io_service; std::shared_ptr&lt;KUKA::FRI::ClientData&gt; friData(std::make_shared&lt;KUKA::FRI::ClientData&gt;(7)); std::chrono::time_point&lt;std::chrono::high_resolution_clock&gt; startTime; BOOST_VERIFY(friData); double delta = -0.0001; /// consider moving joint angles based on time int joint_to_move = 6; BOOST_LOG_TRIVIAL(warning) &lt;&lt; "WARNING: YOU COULD DAMAGE OR DESTROY YOUR KUKA ROBOT " &lt;&lt; "if joint angle delta variable is too large with respect to " &lt;&lt; "the time it takes to go around the loop and change it. " &lt;&lt; "Current delta (radians/update): " &lt;&lt; delta &lt;&lt; " Joint to move: " &lt;&lt; joint_to_move &lt;&lt; "\n"; std::vector&lt;double&gt; ipoJointPos(7,0); std::vector&lt;double&gt; offsetFromipoJointPos(7,0); // length 7, value 0 std::vector&lt;double&gt; jointStateToCommand(7,0); grl::robot::arm::KukaFRIClientDataDriver driver(io_service, std::make_tuple(localhost,localport,remotehost,remoteport,grl::robot::arm::KukaFRIClientDataDriver::run_automatically) ); for (std::size_t i = 0;;++i) { /// use the interpolated joint position from the previous update as the base if(i!=0 &amp;&amp; friData) grl::robot::arm::copy(friData-&gt;monitoringMsg,ipoJointPos.begin(),grl::revolute_joint_angle_interpolated_open_chain_state_tag()); /// perform the update step, receiving and sending data to/from the arm boost::system::error_code send_ec, recv_ec; std::size_t send_bytes_transferred = 0, recv_bytes_transferred = 0; bool haveNewData = !driver.update_state(friData, recv_ec, recv_bytes_transferred, send_ec, send_bytes_transferred); // if data didn't arrive correctly, skip and try again if(send_ec || recv_ec ) { std::cout &lt;&lt; "receive error: " &lt;&lt; recv_ec &lt;&lt; "receive bytes: " &lt;&lt; recv_bytes_transferred &lt;&lt; " send error: " &lt;&lt; send_ec &lt;&lt; " send bytes: " &lt;&lt; send_bytes_transferred &lt;&lt; " iteration: "&lt;&lt; i &lt;&lt; "\n"; std::this_thread::sleep_for(std::chrono::milliseconds(1)); continue; } // If we didn't receive anything new that is normal behavior, // but we can't process the new data so try updating again immediately. if(!haveNewData) { std::this_thread::sleep_for(std::chrono::milliseconds(1)); continue; } /// use the interpolated joint position from the previous update as the base /// @todo why is this? 
if(i!=0 &amp;&amp; friData) grl::robot::arm::copy(friData-&gt;monitoringMsg,ipoJointPos.begin(),grl::revolute_joint_angle_interpolated_open_chain_state_tag()); if (grl::robot::arm::get(friData-&gt;monitoringMsg,KUKA::FRI::ESessionState()) == KUKA::FRI::COMMANDING_ACTIVE) { #if 1 // disabling this block causes the robot to simply sit in place, which seems to work correctly. Enabling it causes the joint to rotate. callIfMinPeriodPassed.execution( [&amp;offsetFromipoJointPos,&amp;delta,joint_to_move]() { offsetFromipoJointPos[joint_to_move]+=delta; // swap directions when a half circle was completed if ( (offsetFromipoJointPos[joint_to_move] &gt; 0.2 &amp;&amp; delta &gt; 0) || (offsetFromipoJointPos[joint_to_move] &lt; -0.2 &amp;&amp; delta &lt; 0) ) { delta *=-1; } }); #endif } KUKA::FRI::ESessionState sessionState = grl::robot::arm::get(friData-&gt;monitoringMsg,KUKA::FRI::ESessionState()); // copy current joint position to commanded position if (sessionState == KUKA::FRI::COMMANDING_WAIT || sessionState == KUKA::FRI::COMMANDING_ACTIVE) { boost::transform ( ipoJointPos, offsetFromipoJointPos, jointStateToCommand.begin(), std::plus&lt;double&gt;()); grl::robot::arm::set(friData-&gt;commandMsg, jointStateToCommand, grl::revolute_joint_angle_open_chain_command_tag()); } // vector addition between ipoJointPosition and ipoJointPositionOffsets, copying the result into jointStateToCommand /// @todo should we take the current joint state into consideration? //BOOST_LOG_TRIVIAL(trace) &lt;&lt; "position: " &lt;&lt; state.position &lt;&lt; " us: " &lt;&lt; std::chrono::duration_cast&lt;std::chrono::microseconds&gt;(state.timestamp - startTime).count() &lt;&lt; " connectionQuality: " &lt;&lt; state.connectionQuality &lt;&lt; " operationMode: " &lt;&lt; state.operationMode &lt;&lt; " sessionState: " &lt;&lt; state.sessionState &lt;&lt; " driveState: " &lt;&lt; state.driveState &lt;&lt; " ipoJointPosition: " &lt;&lt; state.ipoJointPosition &lt;&lt; " ipoJointPositionOffsets: " &lt;&lt; state.ipoJointPositionOffsets &lt;&lt; "\n"; } } catch (std::exception&amp; e) { std::cerr &lt;&lt; "Exception: " &lt;&lt; e.what() &lt;&lt; "\n"; } return 0; } </code></pre> <p>Here is the <a href="https://github.com/ahundt/grl/blob/master/src/java/grl/src/friCommunication/FRIHoldsPosition_Command.java" rel="nofollow">Java side of the application</a>:</p> <pre><code>package friCommunication; import static com.kuka.roboticsAPI.motionModel.BasicMotions.positionHold; import static com.kuka.roboticsAPI.motionModel.BasicMotions.ptp; import java.util.concurrent.TimeUnit; import java.util.concurrent.TimeoutException; import com.kuka.connectivity.fri.FRIConfiguration; import com.kuka.connectivity.fri.FRIJointOverlay; import com.kuka.connectivity.fri.FRISession; import com.kuka.roboticsAPI.applicationModel.RoboticsAPIApplication; import com.kuka.roboticsAPI.controllerModel.Controller; import com.kuka.roboticsAPI.deviceModel.LBR; import com.kuka.roboticsAPI.geometricModel.CartDOF; import com.kuka.roboticsAPI.motionModel.controlModeModel.CartesianImpedanceControlMode; /** * Creates a FRI Session. 
*/ public class FRIHoldsPosition_Command extends RoboticsAPIApplication { private Controller _lbrController; private LBR _lbr; private String _hostName; @Override public void initialize() { _lbrController = (Controller) getContext().getControllers().toArray()[0]; _lbr = (LBR) _lbrController.getDevices().toArray()[0]; // ********************************************************************** // *** change next line to the FRIClient's IP address *** // ********************************************************************** _hostName = "192.170.10.100"; } @Override public void run() { // configure and start FRI session FRIConfiguration friConfiguration = FRIConfiguration.createRemoteConfiguration(_lbr, _hostName); friConfiguration.setSendPeriodMilliSec(4); FRISession friSession = new FRISession(friConfiguration); FRIJointOverlay motionOverlay = new FRIJointOverlay(friSession); try { friSession.await(10, TimeUnit.SECONDS); } catch (TimeoutException e) { // TODO Automatisch generierter Erfassungsblock e.printStackTrace(); friSession.close(); return; } CartesianImpedanceControlMode controlMode = new CartesianImpedanceControlMode(); controlMode.parametrize(CartDOF.X).setStiffness(100.0); controlMode.parametrize(CartDOF.ALL).setDamping(0.7); // TODO: remove default start pose // move to default start pose _lbr.move(ptp(Math.toRadians(10), Math.toRadians(10), Math.toRadians(10), Math.toRadians(-90), Math.toRadians(10), Math.toRadians(10),Math.toRadians(10))); // sync move for infinite time with overlay ... _lbr.move(positionHold(controlMode, -1, TimeUnit.SECONDS).addMotionOverlay(motionOverlay)); //_lbr.moveAsync(ptp(Math.toRadians(-90), .0, .0, Math.toRadians(90), .0, Math.toRadians(-90), .0)); // TODO: remove default start pose // move to default start pose _lbr.move(ptp(Math.toRadians(10), Math.toRadians(10), Math.toRadians(10), Math.toRadians(-90), Math.toRadians(10), Math.toRadians(10),Math.toRadians(10))); // done friSession.close(); } /** * main. * * @param args * args */ public static void main(final String[] args) { final FRIHoldsPosition_Command app = new FRIHoldsPosition_Command(); app.runApplication(); } } </code></pre>
7760
2015-07-28T12:57:17.507
|robotic-arm|c++|
<p>Until now I have been programming the robot using Java in KUKA's IDE "KUKA Sunrise.Workbench". What I want to do is control the robot arm via my C++ .NET application (I would use a camera or Kinect to get commands). I'm reading the documents provided by KUKA, but as I'm a bit in a hurry, I want to understand how a C++ client application (running on my laptop) can send/receive information to/from the robot's controller "KUKA Sunrise Cabinet" (running the server application) via FRI. I still have issues grasping the whole mechanism.</p> <p>A simple (server/client) application's source code with an explanation (or a schematic) would be more than helpful.</p>
Create a simple C++ client Application to control KUKA's Robot-arm LBR iiwa via FRI
<p>I have been doing a lot of reading up on kinematic calibration and here is what I found:</p> <p>From [1]:</p> <blockquote> <p>A kinematic model should meet three basic requirements for kinematic-parameter identification:</p> <p>1) Completeness: A complete model must have enough parameters to describe any possible deviation of the actual kinematic parameters from the nominal values. </p> <p>2) Continuity: Small changes in the geometric structure of the robot must correspond to small changes in the kinematic parameters. In mathematics, the model is a continuous function of the kinematic parameters.</p> <p>3) Minimality: The kinematic model must include only a minimal number of parameters. The error model for the kinematic calibration should not have redundant parameters.</p> </blockquote> <p>While DH parameters are complete and minimal, they are not continuous. In addition, there is a singularity when two consecutive joints have parallel axes. From [2]:</p> <blockquote> <p>Our assumption is that small variations in the position and orientation of two consecutive links can be modeled by small variations of the link parameters. This assumption is violated if we use the Denavit and Hartenberg link geometry characterization when the two consecutive joints have parallel or near parallel axes.</p> </blockquote> <p>This has led a number of researchers to propose alternative models. Namely the Hayati model [2], Veitschegger and Wu’s model [3], Stone and Sanderson’s S-model [4], and the "Complete and Parametrically Continuous" (CPC) model [5].</p> <p>These models typically involve adding parameters. Which creates redundancy that has to be dealt with. Or they are specifically tailored to the geometry of their robot. Which eliminates generality.</p> <p>One alternative is the Product of Exponentials formulation [6]. The kinematic parameters of in POE model vary smoothly with changes in joint axes and can handle kinematic singularities naturally. However, due to the use of joint twists, this method is not minimal. This led Yang et al. [7] to propose a POE formulation with only 4 parameters per joint which is minimal, continuous, complete, and general. They do this by choosing joint frames very specifically. (Which actually vaguely resemble D-H frames).</p> <hr> <p>[1]: Ruibo He; Yingjun Zhao; Shunian Yang; Shuzi Yang, "Kinematic-Parameter Identification for Serial-Robot Calibration Based on POE Formula," in Robotics, IEEE Transactions on , vol.26, no.3, pp.411-423, June 2010</p> <p>[2]: Hayati, S.A., "Robot arm geometric link parameter estimation," in Decision and Control, 1983. The 22nd IEEE Conference on , vol., no., pp.1477-1483, - Dec. 1983</p> <p>[3]: W. Veitschegger and C. Wu, “Robot accuracy analysis based on kinematics,” IEEE Trans. Robot. Autom., vol. RA-2, no. 3, pp. 171–179, Sep. 1986.</p> <p>[4]: H. Stone and A. Sanderson, “A prototype arm signature identification system,” in Proc. IEEE Conf. Robot. Autom., Apr. 1987, pp. 175–182.</p> <p>[5]: H. Zhuang, Z. S. Roth, and F. Hamano, “A complete and parametrically continuous kinematic model for robot manipulators,” IEEE Trans. Robot. Autom., vol. 8, no. 4, pp. 451–463, Aug. 1992.</p> <p>[6]: I. Chen, G. Yang, C. Tan, and S. Yeo, “Local POE model for robot kinematic calibration,” Mech. Mach. Theory, vol. 36, no. 11/12, pp. 1215– 1239, 2001.</p> <p>[7]: Xiangdong Yang, Liao Wu, Jinquan Li, and Ken Chen. 2014. A minimal kinematic model for serial robot calibration using POE formula. Robot. Comput.-Integr. Manuf. 30, 3 (June 2014), 326-334.</p>
7763
2015-07-28T16:57:05.820
|robotic-arm|kinematics|calibration|dh-parameters|
<p>I am specifically interested in DH parameters versus other representations in terms of kinematic calibration. The best (clearest) source of information I could find on kinematic calibration is in the book "<a href="https://www.google.com/shopping/product/18328483085519729280" rel="noreferrer">Robotics: Modelling, Planning and Control</a>" by Bruno Siciliano, Lorenzo Sciavicco, Luigi Villani, Giuseppe Oriolo, chapter 2.11. Which requires a description of the arm in DH parameters, multiplying out the kinematics equation, partial differentiation w.r.t. each DH parameter, then a least-squares fit (with the left pseudo-inverse), then iterate. </p> <p>Is there some fundamental reason why DH parameters are used instead of a different representation (like xyz + euler angles). I understand that there are fewer parameters (4 versus 6 or more), but for a calibration procedure like this I will be taking much more data than unknowns anyway. All the robotics textbooks i have read just present DH parameters and say "this is what you should use", but don't really go into <em>why</em>. Presumably this argument can be found in the original paper by Denavit, but I can't track it down.</p>
Why do we generally prefer DH parameters over other kinematic representations of robot arms?
<blockquote> <h2>Power equation:</h2> <p>Power = Current x Voltage</p> </blockquote> <p>The voltage of the batteries makes a huge difference in whether electronics function properly. </p> <p>Each of them is built to run at a particular voltage - 3.3V, 5V, 6V, 7.2V, 11.1V, 12V, 19V and 24V are some common operating voltages for electronics. Some devices have a <em>voltage regulator</em> that shifts input voltages to a safe level. This allows you to use a higher voltage power supply to power your robot/computer/sensor. For example, the Arduino Uno has a voltage regulator that accepts voltages between 7-30V and converts them to the 5V that the microprocessor runs at.</p> <p>From a motor standpoint, decreased voltage means lower power output, which means that either your motors won't turn as fast or they won't be able to output as much torque. </p> <p><strong>The best thing for you to do right now is look up the datasheets online for your quadcopter or the individual components and find out what voltage each of the parts runs at.</strong> If your second quadcopter needs anything higher than 7.2V, then there's your problem. If you have trouble deciphering the datasheets, you're welcome to edit your post to include the links and we'll try to help you understand them.</p>
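<p>As a rough back-of-the-envelope illustration of the P = IV point (the 120 W per-motor figure below is an assumed example, not a spec of your setup): to deliver the same power, a lower-voltage pack has to supply proportionally more current, and if the battery/ESC/motor combination can't supply it, you get less thrust.</p> <pre><code>#include &lt;cstdio&gt;

int main()
{
    const double powerPerMotorW = 120.0;         // assumed electrical power needed per motor to hover
    const double packVoltages[] = {11.1, 7.4};   // 3S vs 2S nominal LiPo voltage

    for (double v : packVoltages) {
        double current = powerPerMotorW / v;     // I = P / V
        std::printf("%.1f V pack: about %.1f A per motor\n", v, current);
    }
    return 0;
}
</code></pre>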
7765
2015-07-28T21:59:14.000
|battery|
<p><strong>Background:</strong></p> <ol> <li>6-propeller drone with a 20C 3S 6400 mAh 11.1 V LiPo battery</li> <li>4-propeller drone with a 25C 2S 5000 mAh 7.4 V LiPo battery</li> </ol> <p><strong>Behavior:</strong></p> <ul> <li><p>Drone 1 flies with ease</p></li> <li><p>Drone 2 struggles to hover 2-3 inches above the ground</p></li> </ul> <p><strong>Question:</strong></p> <p>The microcontroller, all props, ESCs, and motors are the same. I'm thinking the reason the drones are flying so differently is because of the difference in batteries. IF the batteries are the reason, what would be the property that is most responsible for the difference in flight?</p>
Drone Battery Question
<p>Usually you would use a PID controller's output as a "substitute" for your (the operator's) output. If you bypassed the PID controller, what signal would you be providing to the ESC? It is easiest to put the input to the PID controller and the feedback to the PID controller in the same units as you would like the output to be. </p> <p>In your case, I would use an input and feedback expressed as speeds in percent; then the output of the PID controller should be a speed in percent, which is easy to provide in PWM form. If you try to compare speeds in km/hr or m/s or something else like that, then you wind up having to convert that speed into a percentage anyway, because you don't send "2 m/s" to a speed controller, you send a speed setpoint - either in rpm or percentage. Again, whatever units the ESC is using are the units you should be using for your PID control. </p>
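<p>Here is a minimal sketch of that last mapping step in C++ (the 1000-2000 microsecond pulse range is the common hobby-ESC convention and is an assumption here; check or calibrate your own ESC's endpoints before relying on it):</p> <pre><code>#include &lt;algorithm&gt;
#include &lt;cstdio&gt;

// Map a PID output expressed as 0-100 % throttle to an ESC pulse width in microseconds.
int percentToPulseUs(double percent)
{
    percent = std::clamp(percent, 0.0, 100.0);   // never command outside the valid range
    const double minPulseUs = 1000.0;            // 0 % throttle
    const double maxPulseUs = 2000.0;            // 100 % throttle
    return static_cast&lt;int&gt;(minPulseUs + (maxPulseUs - minPulseUs) * percent / 100.0);
}

int main()
{
    std::printf("%d us\n", percentToPulseUs(35.0));   // prints 1350 us
    return 0;
}
</code></pre>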
7766
2015-07-29T05:00:29.137
|arduino|motor|esc|microcontroller|
<p>I need to build a conversion/mapping algorithm from a controller (PID etc.) output to the duty cycle in order to command my BLDC motor via an ESC. I couldn't do it yet because I think I don't understand the meaning of the controller output. Can anybody point me in the right direction?</p>
Duty cycle mapping
<p>The problem you have is that you're not sending <em>any</em> PWM signal, so the ESC is behaving erratically. You could try to do code to correct for this, but you could have any number of issues. </p> <p>What I do any time I need to implement a kill switch like this is to use <a href="https://www.google.com/search?q=80A+relay+and+socket&amp;tbm=isch" rel="nofollow noreferrer">a power relay</a>. You can drive the relay coil from the microcontroller with a transistor as depicted in my crude schematic below. </p> <p>In this circuit, you have a power switch to turn on the microcontroller board, but then you have to also send HIGH to whatever pin you assign "ESC Enable" in order to close the contacts in the load relay. </p> <p>Now, with this setup, you can setup a heartbeat. The transmitter sends a packet that sets a heartbeat bit HIGH in the receiver. The receiver uses an interrupt to poll the heartbeat bit and, if it's HIGH, clears the bit. If it is LOW, then you have the option to either kill power on the spot (set ESC Enable to LOW) or you could wait for N heartbeats to be LOW.</p> <p>As a final note, be sure that you enable, add, configure, etc. a <a href="http://playground.arduino.cc/CommonTopics/PullUpDownResistor" rel="nofollow noreferrer">pulldown resistor</a> to the ESC Enable pin to ensure that the pin is LOW any time it would otherwise be floating. </p> <p><a href="https://i.stack.imgur.com/HOZRC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HOZRC.png" alt="Load Enable Schematic"></a></p>
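<p>A rough sketch of that heartbeat logic is below (the GPIO helpers are placeholders for whatever digital I/O library your receiver board uses; the polling period and the allowed number of missed beats are assumptions you would tune for your radio link):</p> <pre><code>#include &lt;chrono&gt;
#include &lt;thread&gt;

// Placeholder GPIO helpers - substitute your platform's real digital I/O calls.
bool heartbeatBitIsHigh() { return true; }   // set HIGH by the radio packet handler
void clearHeartbeatBit()  {}
void setEscEnable(bool)   {}                 // drives the transistor / relay coil

int main()
{
    const int allowedMissedBeats = 3;        // tolerate a few dropped packets
    int missed = 0;

    setEscEnable(true);                      // close the load relay at startup

    while (true) {
        if (heartbeatBitIsHigh()) {
            clearHeartbeatBit();             // consume the beat; transmitter must set it again
            missed = 0;
        } else if (++missed &gt;= allowedMissedBeats) {
            setEscEnable(false);             // open the relay: the ESC loses power entirely
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
}
</code></pre>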
7772
2015-07-29T19:02:52.247
|raspberry-pi|esc|
<p>I have a 16 Channel Servo Driver board from Adafruit (see <a href="https://www.adafruit.com/products/815" rel="nofollow">here</a>), and I communicate to it via I2C using a Raspberry Pi. The servo board is controlling a <a href="http://www.hobbyking.com/hobbyking/store/__36674__q_brain_4_x_20a_brushless_quadcopter_esc_2_4s_3a_sbec.html" rel="nofollow">Qbrain</a> by sending a PWM pulse between 1ms to 2ms and it works great.</p> <p>Problem is, I'm trying to create a kill switch such that the signal from the servo board would cease, and the ESC would stop because it detects no PWM signal. I have placed a toggle switch that cuts the VCC to the servo board, so technically it should no longer produce any PWM signal, however when the power is cut, the ESC jumps to 100% throttle, I can only assume this is because the ESC believes the signal is 100% duty cycle, but how do I solve this?</p>
How to cut throttle signal to ESC properly?
<p>Assuming a constant update of 5Hz, your sample time is (1/5) = 0.2s. </p> <p>Get one position of the target, p1. Get a second position of the target, p2. </p> <p>Target speed is the difference in position divided by the difference in time: $$ v = (p_2 - p_1)/dT \\ v = (p_2 - p_1)/0.2 $$</p> <p>Now predict where they will be in the future, where future is $x$ seconds from now:</p> <p>$$ p_{\mbox{future}} = p_2 + v*x $$ where again, $p_2$ is the current point of the target, $v$ is the target's calculated speed, and $x$ is how far in the future you want to predict, in seconds.</p> <p>You can do a number of things from here, such as filtering over a number of past samples or trying curve fitting, but this is a simple method of predicting the future location based on known position updates. </p> <p>If you knew more about the path or potential paths your target could take, you could optimize the prediction or provide a series of cases, but assuming relatively linear motion this model should hold. </p>
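<p>A minimal sketch of that update in C++ (2-D positions, with dT fixed at the 0.2 s update period mentioned above; the numbers in main are made up for illustration):</p> <pre><code>#include &lt;cstdio&gt;

struct Point { double x, y; };

// Predict where the target will be 'lookAheadS' seconds from now,
// given its two most recent positions sampled dT seconds apart.
Point predict(Point p1, Point p2, double dT, double lookAheadS)
{
    double vx = (p2.x - p1.x) / dT;      // estimated velocity
    double vy = (p2.y - p1.y) / dT;
    return { p2.x + vx * lookAheadS,     // extrapolate from the latest fix
             p2.y + vy * lookAheadS };
}

int main()
{
    Point previous{1.0, 2.0}, current{1.2, 2.1};
    Point future = predict(previous, current, 0.2, 1.0);   // one second ahead
    std::printf("predicted: (%.2f, %.2f)\n", future.x, future.y);
    return 0;
}
</code></pre>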
7774
2015-07-29T20:24:13.467
|arduino|kalman-filter|automatic|probability|
<p>The task of the robot is as follows. My robot should catch another robot in the arena, which is trying to escape. The exact position of that robot is sent to my robot at 5Hz. Other than that, I can use sensors to identify that robot. Is it possible to estimate the next position of the other robot using a mathematical model? If so, can anyone recommend tutorials or books to refer to?</p>
Modeling a robot to find its position
<p>Regarding point 1, yes you are understanding the problem correctly.</p> <p>Regarding points 1 and 2, I believe what you are looking for is the <a href="https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem" rel="nofollow noreferrer">Nyquist-Shannon sampling theory</a>. This theory says that your sampling frequency should be greater than 2x your &quot;highest frequency of interest&quot;. This is to prevent aliasing, where you can incorrectly measure a high-frequency signal as low frequency.</p> <p><a href="https://i.stack.imgur.com/LXPDO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LXPDO.png" alt="From Wikipedia on Aliasing" /></a></p> <p>The image above is from Wikipedia. So, you have your robot with all its joints and limbs and such - how fast can those limbs move? Your moments and torques will all cause accelerations at the joints; what is the top rotational speed at a joint? Or, put another way, what's the peak moment you would expect and how long would it be applied? You can calculate a speed from that as well.</p> <p>You want to sample your joints fast enough that you can capture the full dynamics of the system. That's the sampling threshold (minimum!) I would set for my own robotics project for <em>sensing</em>. For <em>control</em>, <a href="http://www.ni.com/white-paper/2709/en/" rel="nofollow noreferrer">most</a> , <a href="http://ai.stanford.edu/manips/publications/pdfs/Vischer_1990.pdf" rel="nofollow noreferrer">reputable</a> , <a href="https://books.google.com/books?id=2WQP5JGaJOgC&amp;pg=PA316&amp;lpg=PA316&amp;dq=sampling+frequency+for+control+frequency+of+interest+five+times#v=onepage&amp;q=sampling%20frequency%20for%20control%20frequency%20of%20interest%20five%20times&amp;f=false" rel="nofollow noreferrer">sources</a> , <a href="https://books.google.com/books?id=hRtn5ZyViMAC&amp;pg=PA102&amp;lpg=PA102&amp;dq=sampling+frequency+for+control+frequency+of+interest" rel="nofollow noreferrer">say 5-10 times the frequency of interest.</a></p> <p>Your peak accelerations, from your peak torques and moments, are going to be limited by the mass (moment of inertia) of your limbs. The limbs that limit your accelerations are also going to act as a low-pass filter to keep the system relatively constant between samples such that the fact that you're off by one sample shouldn't matter too much.</p> <p>Hope this helps!</p>
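<p>A tiny worked example of those rules of thumb (the 5 Hz "highest frequency of interest" is an assumed number; you would estimate your own from the peak joint speeds and torques discussed above):</p> <pre><code>#include &lt;cstdio&gt;

int main()
{
    const double highestFreqOfInterestHz = 5.0;   // assumed fastest limb/joint dynamics

    double nyquistMinimumHz  = 2.0  * highestFreqOfInterestHz;  // bare minimum for sensing
    double controlRateLowHz  = 5.0  * highestFreqOfInterestHz;  // common rule of thumb...
    double controlRateHighHz = 10.0 * highestFreqOfInterestHz;  // ...for the control loop

    std::printf("Sample sensors faster than %.0f Hz; run control at %.0f-%.0f Hz\n",
                nyquistMinimumHz, controlRateLowHz, controlRateHighHz);
    return 0;
}
</code></pre>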
7781
2015-07-31T14:41:57.560
|control|actuator|stability|legged|
<p><strong>My Background:</strong></p> <p>My experience is in solid mechanics and FEA. So I have zero experience in robotics/controls. </p> <p><strong>Problem Description</strong></p> <p>I'm developing a control strategy to stabilize a complicated 6-legged dynamical system. Torques <strong><em>Ti</em></strong> from each leg's joints will be used to create a net moment <strong><em>M</em></strong> on the body, stabilizing the system. This moment <strong><em>M</em></strong> is known from the pre-determined control strategy. (Side note: the dynamical solver is of the nonlinear computational type)</p> <p>Due to my lack of background, I have a fundamental confusion with the dynamical system. I want to use joint torques <strong><em>Ti</em></strong> to create this known net moment <strong><em>M</em></strong> on the body. This moment <strong><em>M</em></strong> is a function of the</p> <ol> <li>current positions/angles of all the leg segments</li> <li>reaction forces and moments (that cannot be controlled) of each leg</li> <li>controllable joint torques <strong><em>Ti</em></strong> of each leg</li> <li>time</li> </ol> <p><strong>$(*)$ At a given time $(n-1)\Delta$t:</strong> </p> <blockquote> <p>--From the control strategy, the desired net moment <strong><em>M</em></strong> is computed/known</p> <p>--One can read/sense the legs' positions, angles, reaction forces, and reaction moments (say, from well placed sensors), at this time $t = (n-1)\Delta$t. </p> <p>--From this information, vector algebra easily yields the desired joint torques <strong><em>Ti</em></strong> required to create the net moment <strong><em>M</em></strong></p> </blockquote> <p><strong>$(**)$ At the time $(n)\Delta$t:</strong></p> <blockquote> <p>--one applies the previously determined joint torques <strong><em>Ti</em></strong> (determined at $t=(n-1)\Delta$t) to create the desired moment <strong><em>M</em></strong> </p> <p>--of course these torques <strong><em>Ti</em></strong> are applied at the immediate proceeding time step because they cannot be applied instantaneously</p> </blockquote> <p>So this is exactly where my fundamental confusion exists. The torques <strong><em>Ti</em></strong> were calculated in $(*)$, based on data of angles/positions/reactions in $(*)$, with the objective to create moment <strong><em>M</em></strong>. However, these torques <strong><em>Ti</em></strong> are applied in $(**)$, where the data (angles/positions/reactions) are <strong>now different</strong> - thus the desired net moment <strong><em>M</em></strong> can never be created (unless you an magically apply actuation at the instantaneous time of sensing). Am I understanding the controls problem correctly? </p> <p><strong>Questions</strong></p> <ol> <li>Am I understanding the robotics problem correctly? What are the terms and strategies around this dilemma?</li> <li>Of course I could create the time steps between the sensing and the actuation to be infinitely small, but this would be unrealistic/dishonest. What is the balance between a realistic time step, but also performs the task well?</li> </ol>
How do I decide the size of the time steps between sensing and control actuation?
<p>Try coupling the battery with a voltage regulator. A simple DC regulator IC would do. It would help maintain the input voltage of the motor and make the relation between RPM and thrust independent of the voltage irregularities.</p>
7791
2015-08-01T14:48:42.370
|quadcopter|brushless-motor|esc|
<p>I am trying to find out the relation between RPM and thrust for a battery+motor+propeller combination. The image shows my setup and also the measurement results. Can anyone explain how I should use this data? (I know Kv*V gives the RPM, but my voltage values are decreasing because of the P=V*I relation, etc.) </p>
thrust measurement
<p>This is a standard dynamics problem. Let's use this figure I drew:</p> <p><a href="https://i.stack.imgur.com/VQ0si.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VQ0si.png" alt="Corrected car going up incline."></a></p> <p>Some definitions:</p> <p>$$ \begin{align} m &amp; \mbox{, the mass of the vehicle in kg.} \\ \mu_{\mbox{rolling}} &amp; \mbox{, the rolling friction coefficient of your tires.} \\ \theta &amp; \mbox{, the incline of the plane in radians.}\\ g &amp; \mbox{, the gravitational constant, 9.81 } m/s^2 \\ r_{\mbox{tire}} &amp; \mbox{, the radius of the tire in m.} \end{align} $$</p> <p>The normal force of the vehicle against the incline is going to be $$ f_{\mbox{normal}} = mg \cos{\theta} \\ $$ The normal force is going to play into your frictional force in that $$ f_{\mbox{friction}} = \mu_{\mbox{rolling}} f_{\mbox{normal}} \\ $$</p> <p>The force opposing the vehicle, trying to cause it to roll down the slope, is $$ f_{\mbox{opposing}} = m g \sin{\theta} \\ $$</p> <p>Now, keep in mind that the opposing force and the frictional forces are both <em>at the point of contact between the wheel and the surface</em>. Also note that frictional coefficients are constant for a given force <a href="http://www.physlink.com/Education/AskExperts/ae140.cfm" rel="nofollow noreferrer">regardless of contact surface area</a>, so as long as all four tires are equal and evenly loaded, you don't need to worry about the per-tire information <strong>yet</strong>. </p> <p>So, you have your forces to overcome: opposing and friction. Now you need to convert those to torques:</p> <p>$$ \tau = f r_{\mbox{tire}} \sin{\phi} $$</p> <p>Here I've used $\phi$ to help distinguish between the applied force angle and the incline angle $\theta$. Anyway, the force is being applied at a right angle, $\sin{90^\circ} = 1$, so you can leave that term off and you're left with</p> <p>$$ \begin{align} \tau_{\mbox{losses}} &amp;= (f_{\mbox{opposing}} + f_{\mbox{rolling friction}}) r_{\mbox{tire}} \\ &amp;= (mg( \sin{\theta}+\cos{\theta} \mu_{\mbox{rolling}})) r_{\mbox{tire}} \\ \end{align} $$</p> <p>Now your speed comes into play, with the formula:</p> <p>$$ P_{\mbox{rotating}} = \tau_{\mbox{applied}} \omega \\ $$ where $P_{\mbox{rotating}}$ is the rotational power output in Watts, $\tau_{\mbox{applied}}$ is the torque applied in Nm, and $\omega$ is the rotational velocity in rad/s. Your vehicle's linear speed, in $\frac{m}{s}$, is given by:</p> <p>$$ v = r_{\mbox{tire}} \omega \\ $$ so $$ \omega = \frac{v}{r_{\mbox{tire}}} \\ $$</p> <p>Putting it all together:</p> <p>$$ \begin{align} P_{\mbox{rotating}} &amp;= (mg (\sin{\theta} + \cos{\theta} \mu_{\mbox{rolling}})) r_{\mbox{tire}} \frac{v}{r_{\mbox{tire}}} \\ &amp;= v m g(\sin{\theta} +\cos{\theta} \mu_{\mbox{rolling}}) \\ \end{align} $$ </p> <p>Now, finally, this is the rotating power required to move the entire vehicle. At this point, however many (equally sized and loaded) tires you use need to contribute a corresponding fraction of this power; 4 tires = each one needs to deliver at least 1/4 of the power given above. </p>
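<p>If it helps, here are those formulas dropped into a small C++ sketch (the mass, slope, speed, friction coefficient and tire radius are placeholder numbers; swap in your own):</p> <pre><code>#include &lt;cmath&gt;
#include &lt;cstdio&gt;

int main()
{
    const double pi        = 3.14159265358979;
    const double m         = 10.0;                 // vehicle mass [kg]
    const double g         = 9.81;                 // gravitational constant [m/s^2]
    const double theta     = 15.0 * pi / 180.0;    // incline [rad]
    const double muRolling = 0.02;                 // rolling friction coefficient (tire dependent)
    const double rTire     = 0.05;                 // tire radius [m]
    const double v         = 0.5;                  // desired speed [m/s]
    const int    wheels    = 4;                    // equally sized and loaded driven wheels

    double force  = m * g * (std::sin(theta) + std::cos(theta) * muRolling);  // [N]
    double torque = force * rTire;                 // total torque at the wheels [Nm]
    double power  = v * force;                     // total rotating power [W]

    std::printf("Per wheel: %.3f Nm and %.2f W\n", torque / wheels, power / wheels);
    return 0;
}
</code></pre>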
7796
2015-08-03T13:53:02.913
|motor|motion|torque|wheel|
<p>Say I had an object with 4 motors/wheels attached (in a fairly standard arrangement).</p> <p>I need to calculate the amount of torque required from the motors to be able to move the object of x kilograms consistently (without skipping any steps) at a velocity of y, travelling up a slope of angle z.</p> <p>I'm guessing this would also depend on factors like the grip of the tyre and such?</p>
Calculating Required Torque
<p>Please refer to the article "Keeping a Good Attitude: A Quaternion-Based Orientation Filter for IMUs and MARGs".</p> <p>The authors use a low-pass filter while the gyro is stationary to estimate the gyro bias and then subtract it from the original signal. They do that only if the gyro is stationary. However, I recommend removing the bias when the gyro is stationary by subtracting the mean value. </p> <p>This method of subtracting the bias only when the gyro is stationary, based on a low-pass filter, is called online gyro bias compensation.</p>
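<p>If you want to try the online version in code, here is a minimal sketch of that idea in C++ (the stillness threshold and filter gain are assumptions you would tune for your particular gyro and sample rate):</p> <pre><code>#include &lt;cmath&gt;

class GyroBiasCompensator {
public:
    // Feed every raw sample; returns the bias-corrected rate.
    double update(double rawRate)
    {
        if (std::fabs(rawRate - bias_) &lt; stillThreshold_) {
            // Gyro looks stationary: slowly pull the bias estimate toward the raw reading
            // (first-order low-pass, i.e. a running mean with forgetting).
            bias_ += alpha_ * (rawRate - bias_);
        }
        return rawRate - bias_;
    }

private:
    double bias_ = 0.0;
    double alpha_ = 0.01;          // filter gain, tune for your sample rate
    double stillThreshold_ = 1.0;  // [deg/s] below this we assume "no rotation"
};

int main()
{
    GyroBiasCompensator compensator;
    double corrected = compensator.update(0.35);   // e.g. a raw reading while sitting still
    (void)corrected;
    return 0;
}
</code></pre>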
7803
2015-08-04T21:37:12.240
|gyroscope|matlab|filter|
<p>I'm using Matlab to suppress low frequency components with a high pass filter. </p> <p><strong>Objective</strong></p> <ul> <li>Filter angular velocity measurements affected by high frequency noise and bias in order to get the best estimate of the angular position.</li> </ul> <p>The output when the gyroscope is still looks like this.</p> <p><a href="https://i.stack.imgur.com/y8isi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/y8isi.png" alt="Simulated angular velocity"></a></p> <p><strong>First Approach</strong></p> <p>The easiest way to remove baseline is to remove the average and can be achieved with Matlab using one line of code.</p> <pre><code>yFilt = y - mean(y) </code></pre> <p><strong>Second Approach</strong></p> <p>We can design a high pass filter to attenuate low frequency components. If we analyze the frequency components of the signal we will see one peak at low frequency and "infinite" small components in all frequencies due to Noise. With a second order ButterWorth filter with normalized cutoff freq <code>Wn = 0.2</code> we will get what we are looking for.</p> <p><a href="https://i.stack.imgur.com/FVmKM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FVmKM.png" alt="First half of DFT in normalized scale"></a></p> <p><strong>Filtered data</strong></p> <p><a href="https://i.stack.imgur.com/ELXzb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ELXzb.png" alt="Filtered data"></a></p> <p><strong>Tilting the Gyro</strong></p> <p>When we tilt the gyroscope the situation changes. With a sampling frequency of 300Hz we get the following plot.</p> <p><a href="https://i.stack.imgur.com/m1XaJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/m1XaJ.png" alt="Angular Velocity Wx [deg/sec]"></a></p> <p>The first half of the dft is shown below in a normalized scale.</p> <p><a href="https://i.stack.imgur.com/w8mvU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/w8mvU.png" alt="First half of DFT in normalized scale"></a></p> <p>You can find the sample.mat file <a href="https://drive.google.com/file/d/0BwU34T-_OF2saFVGS0hqMUk0cVE/view?usp=sharing" rel="nofollow noreferrer">here</a></p> <p>The first approach works great. I would like to apply the second one to this particular case but here there are other low frequency components that make to job harder.</p> <p><strong>How can I apply the second approach based on the High Pass filter to remove the bias?</strong></p> <p><strong>EDIT 1</strong> You can find more information <a href="http://userk.co.uk/filtering-matlab-gyroscope-bias/" rel="nofollow noreferrer">here</a></p> <p><strong>EDIT 2</strong> How can we filter this signal to remove bias while keeping the angular velocity information (from 110-th to 300-th sample) intact?</p> <p>If gyroscopes have the bias problem only when they are not experiencing any rotation, then the offset is present only in the first ~110 samples.</p> <p>If the above hypothesis is correct, maybe if we apply high pass filtering only in the first 110 samples and desactivate the filter during rotations of the gyro, the estimated angular position will be more accurate.</p>
Gyroscope - How can I remove low frequency component with a high pass filter only?
<p>The Vex robotics platform has many structural pieces you can use to build a robot. It is basically the same as an old school erector set. It also contains motors and sensors that you could interface with your Arduino.</p> <p>The most important part is that it has motor controllers so that you can control the motor. With a standard Arduino you can not drive a motor directly because it would not be able to handle the current.</p> <p><a href="http://www.vexrobotics.com/vex/products/" rel="nofollow">http://www.vexrobotics.com/vex/products/</a></p>
7809
2015-08-06T00:07:14.997
|arduino|kit|
<p>I have an Arduino, wires, resistors, all of that good stuff. However, I don't have materials to build the structure of the robot. What do you guys recommend? I don't have a place to solder yet, so I can't solder. Is there a kit or material that you would recommend? Will it work well with motors and other parts? Thanks!</p> <p>P.S. I plan on building a standard driving robot, but I want to be able to make other robots with the same materials/kit. I don't want a kit that only makes one robot; I want a Lego-esque approach to building the structure, where I can build whatever I want with it.</p>
Robot structure kit or materials
<p>For the most part, it will increase the gain of the controller.</p> <blockquote> <p>doesn't affect lift capabilities.</p> </blockquote> <p>Adding weight to something that flies always decreases lift capabilities. However, this influence is likely very small.</p> <p>So here's your quadrocopter with 1 DOF rotating around an axis:</p> <p>$$a\ddot r + b\dot r + c r$$</p> <p>The general differential equation<sup>1</sup> for a mechanical system. $r$ is the angle of rotation and $a$, $b$ and $c$ are coefficients describing the system.</p> <p>You have probably concluded by now that <sup>1</sup>this was a blatant lie, for the obvious lack of the other side of the equal sign. that missing side is the load that you apply. This is usually in the form of rotors, producing thrust. For simplicities sake let's assume that this can be modelled as a force $f$. This force is applied at a distance from the center of rotation and that's where the arm length $l$ comes into play:</p> <p>$$a\ddot r + b\dot r + c r = l f$$</p> <p>Transforming into...</p> <p>$$as^2 R + bsR + cR = l F$$</p> <p>getting the transfer function of the system that we are interested in: $$\frac{R}{F}= \frac{l}{as^2 + bs + c}$$</p> <p>$\frac{R}{F}$ can be understood as "if I add this much $f$, how much $r$ do I get back?" $l$ is basically the gain. It is a constant factor, written a little differently, this becomes more obvious:</p> <p>$$\frac{R}{F}= l\cdot\frac{1}{as^2 + bs + c}$$</p> <p><em>How much $r$ do I get back?</em> Can be answered as "$l$ times something". With a bigger $l$ you get more bang for your buck (or more rotation for your force respectively). Which means your motors don't have to <em>go to 11</em> all the time.</p> <hr> <p>But what about stability? Stability can be determined from the roots of the denominator, called poles. The question essentially is how does $l$ influence $a$, $b$ and $c$ and how does that affect stability?</p> <ul> <li><strong>a</strong> - moment of inertia: While it's unclear how the mass is distributed in the system, one can assume that, based on the general formula, $l$ has a quadratic influence on a, that is $a = a(l^2)$ That means increasing $l$ will increase $a$ a lot </li> <li><strong>b</strong> - damping: This is hard to estimate. I guess most of the damping in the system comes from wind resistance. Increasing $l$ will only add little surface to the copter, hence wind resistance will not increase much (if at all). I conclude that $l$ has little to no influence on $b$</li> <li><strong>c</strong> - spring coefficient: there's certainly an influence, but you want to keep that as minimal as possible by design, because you want to make the arms as stiff as possible. Nobody likes wobbly structures.</li> </ul> <p>Now where <em>are</em> the poles?</p> <p>$$s_{1/2} = -\frac{b}{2a} \pm\frac{\sqrt{b^2-4ac}}{2a}$$</p> <p>The important part for stability is that $s_{1/2}$ has a negative real part. Increasing $a$ due to increasing $l$ certainly reduces the negativity of the term, but it will not change it to be positive.</p> <p>The conclusion of this very rough estimation is that the system will not become unstable when the arm length is increased.</p> <p>Of course, this is a <strong>very</strong> handwavy estimate without knowing any of the actual values. 
If you want more, <a href="http://sal.aalto.fi/publications/pdf-files/eluu11_public.pdf" rel="nofollow">go</a> <a href="http://math.ucsd.edu/~mleok/pdf/LeLeMc2010_quadrotor.pdf" rel="nofollow">right</a> <a href="http://www.control.lth.se/documents/2008/5823.pdf" rel="nofollow">ahead</a> (&lt;- 3 links)</p> <p>Quadrocopters are a popular topic not just by enthusiast but also academia, so you find a lot of papers about it, giving more insights on more detailed models of the system.</p> <p>I think the rough estimate given in this answer is sufficient to explain the influences of the length of the arm.</p>
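<p>If you want to poke at this numerically, here is a small sketch (not part of the original estimate; the coefficient values are arbitrary) that computes the poles of the second-order model for a few arm lengths, using the rough assumption that $a$ grows with $l^2$ while $b$ and $c$ stay put. The input gain $l$ on the right-hand side does not move the poles, so it is left out:</p> <pre><code>import numpy as np

# arbitrary baseline coefficients for a*r'' + b*r' + c*r = l*f
a0, b, c = 0.02, 0.05, 1.0

for l in (0.15, 0.25, 0.40):            # arm lengths in metres (made-up values)
    a = a0 * l**2                       # moment of inertia grows roughly with l^2
    poles = np.roots([a, b, c])         # roots of a*s^2 + b*s + c
    stable = all(p.real &lt; 0 for p in poles)
    print(f"l={l:.2f}  poles={poles}  stable={stable}")
</code></pre> <p>With all coefficients positive the real parts stay negative, which is the handwavy conclusion above expressed in numbers.</p>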
7819
2015-08-06T14:47:06.183
|control|quadcopter|stability|
<p>How quadcopter's arm length affect stability?</p> <p>As per my view I'll have better control on copter with longer arms but with stresses in arms and also it doesn't affect lift capabilities.</p>
Deciding length of quadcopter arms
<p>I would use linear or rotary encoders - the difference in terminology, encoder vs. potentiometer, is in the fact that any linear potentiometer could be used as a linear encoder, but not all linear encoders are potentiometers. From the <a href="https://en.wikipedia.org/wiki/Linear_encoder" rel="nofollow">Wikipedia article on linear encoders</a>: Optical linear encoders, "following interpolation, can provide resolutions as fine as a nanometre."</p> <p>I've worked on full-scale vehicle autonomy projects before, and I can say that, for the same cost, sealed (IP-67) linear potentiometers are far less expensive and easier to attach than are other forms of linear encoders, but you haven't commented on your application so I don't know if water resistance (or cost for that matter) makes a difference for you. </p> <p>As a final note, there is typically a degree of temperature dependence in resistive elements, so again, be sure to take your expected operating conditions into account when making this selection. A resistive element can provide absolute position but will require a range calibration to correlate readings to position, but other forms of absolute linear position indicators typically do not. You may or may not need absolute position, but again, this all depends on your application. </p>
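<p>To make the range calibration mentioned above concrete (a generic sketch, not tied to any particular sensor; the ADC counts and travel length are hypothetical numbers), a two-point linear fit is usually all a resistive element needs:</p> <pre><code># two-point calibration: ADC counts recorded at both ends of travel
adc_min, adc_max = 37, 4012     # readings at 0 mm and at full extension (hypothetical)
travel_mm = 50.0                # full mechanical travel of the pot (hypothetical)

def counts_to_mm(counts):
    """Map a raw ADC reading to position in millimetres."""
    return (counts - adc_min) * travel_mm / (adc_max - adc_min)

print(counts_to_mm(2024))       # roughly mid-travel
</code></pre>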
7823
2015-08-07T04:34:52.770
|control|sensors|accelerometer|
<p>We're building an 6dof joystick, and we need to accurately measure the displacement of our central device. We can easily use a mechanical connection to the edges, but there has been some discussion about what the best way to achieve this is. The range of motion will be fairly small, but accuracy is incredibly important for us.</p> <p>Which sensors are most easily and accurately measured?</p> <p>My impulse response is that rotational and linear potentiometers are the most reliable, but others have been arguing for using gyros/accelerometers. I've also heard that hall effect sensors can be used to great effect.</p>
Which sensor type most accurately measures position?
<p>The underlying mechanism used by high performance laser painting devices is usually a <em>galvo</em> (Galvanometer scanner, Laser Galvanometer or Mirror Galvanometer).</p> <p>High performance galvos, like this one from scanlab are essential to the laser <a href="http://www.scanlab.de/en/applications/micromachining" rel="noreferrer">micro-machining</a> (drilling, cutting, marking) industries:</p> <p><a href="https://en.wikipedia.org/wiki/File:DynAXIS_L.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/L2wI2.jpg" alt="Scanlab mirror galvanometer from Wikipedia"></a></p> <p>Their cost and performance may significantly exceed your requirements however. These are designed to position themselves in microseconds, to sub-micron (near field) accuracy.</p> <p><sub>I have no association with Scanlab, but I worked for many years in the laser micro-machining industry and they are the products I'm most familiar with.</sub></p>
7825
2015-08-07T08:44:40.683
|laser|
<p>I'm searching for a (commercial) projector that just projects a single laser point into the world (e.g. using two moving mirrors). However, I'm struggling because I'm not sure what such a thing is called. I either find area projectors that use lasers, party equipment or laser pointers. </p> <p>What is the name for such a device? </p>
What device do I need to project a laser to point at a specific location?
<p>I believe this blog has at least part of the answer to your question. </p> <p><a href="http://www.rcgroups.com/forums/showthread.php?t=2098877" rel="noreferrer">Brushless gimbal motor vs. Brushless Motor vs. Servo</a></p> <p>A user here described the comparison as: </p> <blockquote> <p>A brushless gimbal motor is a regular brushless motor but wound for very low speed (ie lots of turns of thin wire) and many do indeed have more poles.</p> </blockquote> <p>Another post continues:</p> <blockquote> <p>They are both low and high speed at the same time. Low speed, in that they never actually complete a full 360 rotation so you wouldn't normally think of measuring their speed in rpm, but high speed in that they need to respond <em>instantly</em> to sometimes very rapid changes in direction as the aircraft may be getting shook by vibrations and turbulence. That means they're meant to be light and fast, not powerful and slow. These motors are not meant to carry any kind of continuous load.</p> </blockquote> <p>Generally, I think it can be said that a brushless gimbal motor is one specifically wound to be used in fine control applications. A brushless motor on the other hand might be considered more of a load bearing motor. </p> <p>That being said, most brushless gimbal motors are used in applications that could otherwise be filled by servo motors: finely controlled applications.</p> <p>I hope this helps.</p>
7828
2015-08-07T17:14:29.157
|motor|
<p>How are the brushless motors in a gimbal assembly designed?</p> <p>Obviously it doesn't need continual rotation, but it does need accurate control of precise position. I've noticed that the motors in my gimbal don't have the usual magnetic 'snap' positions that my other motors do. </p> <p>What are the primary design differences in these kinds of motor, if any?</p>
How is a brushless gimbal motor different from a regular brushless motor?
<p>The math involved for controlling a holonomic robot is really not too bad. It is basically just high-school trigonometry and knowing how to set up the problem.</p> <p>First, lets start with the joystick. I think the easiest way to deal with joysticks is to convert it's cartesian $(x,y)$ readings into polar $(r,\phi)$ coordinates. This will allow us to control the holomonic robot much more naturally.</p> <p><a href="https://i.stack.imgur.com/PW7zX.png" rel="noreferrer"><img src="https://i.stack.imgur.com/PW7zX.png" alt="joystick diagram"></a></p> <p>The polar coordinates are simply: $$ \phi_{joy} = atan2(Y_{joy}, X_{joy}) $$ $$ r_{joy} = \sqrt{Y_{joy}^2 + X_{joy}^2} $$</p> <p>Now we can think of this as a vector, where the angle indicates the direction to move, and the length of the vector is the desired speed.</p> <p><a href="https://i.stack.imgur.com/PaCP4.png" rel="noreferrer"><img src="https://i.stack.imgur.com/PaCP4.png" alt="speed vector"></a></p> <p>If we want full speed to be when the joystick is pushed straight forward, we have a little bit of a problem. If the joystick is pushed diagonally, $r$ will be greater than when it is forward. Basically $\sqrt{Y_{max}^2 + X_{max}^2} &gt; Y_{max}$. This is because the joystick space is square, whereas we want a circular space. </p> <p><a href="https://i.stack.imgur.com/tb4TT.png" rel="noreferrer"><img src="https://i.stack.imgur.com/tb4TT.png" alt="joystick corner problem"></a></p> <p>There are a number of ways to compensate for this. Some simple ways are to make diagonal full speed, but this isn't great because straight forward is not full speed anymore. You could also simply cap the speed, but now your joystick has a large dead zone. My preferred method is to adaptively scale $r$ depending on where it is in joystick space. You need to extend the line of where the joystick is currently $(r,\phi)$, to the boundary of the space. Let's call this $b$. </p> <p><a href="https://i.stack.imgur.com/yKkzn.png" rel="noreferrer"><img src="https://i.stack.imgur.com/yKkzn.png" alt="adaptive scaling"></a></p> <p>Computation of this length is simply a matter of a few conditionals (to determine which side of the square you intersect) and some more trigonometry. This is left as an exercise for the reader. Now the translational speed is:</p> <p>$$ speed_t = maxspeed * r_{joy} / b $$</p> <p>Where $maxspeed$ is what ever you want the max translational velocity of your robot to be. Note that your joystick units cancel out here.</p> <p>Now to figure out the individual wheel speeds. This procedure will work for any type of robot with omni-wheels. (But not mecanum wheels. Those are different). 3 wheels, 4 wheels, and even odd combinations with different radii from the robot center and wheel diameters. As long as all the wheel axes converge at a single point. But lets focus on 3 wheels as in your question.</p> <p><a href="https://i.stack.imgur.com/hzFWX.png" rel="noreferrer"><img src="https://i.stack.imgur.com/hzFWX.png" alt="kinds of omni-wheel robots"></a></p> <p>First, number the wheels.</p> <p><a href="https://i.stack.imgur.com/8ELBn.png" rel="noreferrer"><img src="https://i.stack.imgur.com/8ELBn.png" alt="wheel numbers"></a></p> <p>Then create a coordinate system. (i.e. designate the front of your robot). Now lets call the angle to each wheel axis. $\alpha_1$, $\alpha_2$, and $\alpha_3$. 
</p> <p><a href="https://i.stack.imgur.com/9yB8q.png" rel="noreferrer"><img src="https://i.stack.imgur.com/9yB8q.png" alt="wheel angles"></a></p> <p>And lets set our convention to be positive motor speeds will make the robot spin clockwise. </p> <p><a href="https://i.stack.imgur.com/KWrqM.png" rel="noreferrer"><img src="https://i.stack.imgur.com/KWrqM.png" alt="wheel direction convention"></a></p> <p>Now we overlay our joystick vector on our robot. We create a right triangle between the motor axis and the joystick vector. For wheel 1, it will look like this:</p> <p><a href="https://i.stack.imgur.com/1LGUx.png" rel="noreferrer"><img src="https://i.stack.imgur.com/1LGUx.png" alt="wheel 1 computation"></a></p> <p>Now wheel 1's speed ($M_1$) is simply the length of the line. To get this quantity, we subtract the angles, then use $sin$.</p> <p>$$ \theta_1 = \alpha_1 - \phi $$ $$ M_1 = speed_t * sin(\theta_1) $$</p> <p>Note that this is actually the ground speed that the wheel should have. So you will probably want to convert it into rotational velocity (rpm for example). Use the equation for circle circumference, and keep your units straight.</p> <p>And that's it. Believe it or not, but the same equation works for all wheels. This entire procedure is only for translating the robot. If you want the robot to rotate too then you can simply add some rotation to all 3 motors.</p>
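<p>As a compact sketch of the whole procedure (illustrative only; it assumes the symmetric 3-wheel layout with axes at 0°, 120° and 240°, and it caps $r$ instead of doing the boundary scaling, which is left as the exercise above):</p> <pre><code>import math

ALPHA = [math.radians(a) for a in (0.0, 120.0, 240.0)]   # assumed wheel axis angles
MAX_SPEED = 1.0                                           # full translational speed

def wheel_speeds(x_joy, y_joy):
    """Joystick (x, y) in [-1, 1] mapped to ground speeds of the three wheels."""
    phi = math.atan2(y_joy, x_joy)
    r = min(math.hypot(x_joy, y_joy), 1.0)     # crude cap instead of boundary scaling
    speed_t = MAX_SPEED * r
    return [speed_t * math.sin(alpha - phi) for alpha in ALPHA]

print(wheel_speeds(0.0, 1.0))    # push the stick straight forward
</code></pre> <p>A constant added to all three speeds would provide the rotation mentioned at the end.</p>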
7829
2015-08-07T21:15:51.847
|mobile-robot|wheeled-robot|
<p>I have created a three wheeled omni robot like the diagram below. Now I am unsure of how to program it. I want to use a single joystick so one x and one y value. The values for x and y are between -1 and 1, also the motors can be set anywhere from -1 to 1. How do I use this data to make the robot move based on the joystick without changing orientations? After doing some initial research this seems like a complex problem, but I am hoping there is a formula that I can.</p> <p><a href="https://i.stack.imgur.com/xDIs0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xDIs0.png" alt="enter image description here"></a></p>
How to program a three wheel Omni?
<p>I've found the answer: a minus sign is needed here: <code>gyro_y_scaled = - (self.result_list[5] / self.gyro_scale)</code>.</p> <p>Explanation: <code>gyro_y_scaled</code> is the velocity in rad/sec. If you check the code, especially lines 29-31 on <a href="http://blog.bitify.co.uk/2013/11/reading-data-from-mpu-6050-on-raspberry.html" rel="nofollow">this page</a>, you can see that there is a minus sign before <code>math.degrees(radians)</code>, but my implementation has no minus sign before <code>math.degrees(math.atan2(acc_x_scaled, self.dist(acc_y_scaled,acc_z_scaled)))</code>. In addition, in <code>self.gyro_y_angle -= gyro_y_scaled * dt</code> there is also a minus sign instead of a plus. To sum up, the velocity and the angles mentioned above had "opposite" signs, which is why the filters didn't work.</p>
7831
2015-08-08T18:27:15.080
|kalman-filter|
<p>I'm working on a Python script which reads the data from the <strong>MPU6050</strong> IMU and returns the angles using sensor fusion algorithms: <strong>Kalman</strong> and <strong>Complementary</strong> <strong>filter</strong>. Here is the implementation: Class MPU6050 reads the data from the sensor, processes it. Class Kalman is the implementation of the Kalman filter. The problem is the next: None of the Kalman, neither the Complementary filter returns appropriate angle values from the <strong>Y</strong> angle. The filters work fine on the <strong>X</strong> angle, but the <strong>Y</strong> angle values make no sense. See the graphs below. I've checked the code million times, but still can't figure out where the problem is. <a href="https://i.stack.imgur.com/1hgV8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1hgV8.png" alt="X angle"></a> <a href="https://i.stack.imgur.com/Hugl7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Hugl7.png" alt="Y angle"></a></p> <pre><code>class MPU6050(): def __init__(self): self.bus = smbus.SMBus(1) self.address = 0x68 self.gyro_scale = 131.072 # 65535 / full scale range (2*250deg/s) self.accel_scale = 16384.0 #65535 / full scale range (2*2g) self.iterations = 2000 self.data_list = array('B', [0,0,0,0,0,0,0,0,0,0,0,0,0,0]) self.result_list = array('h', [0,0,0,0,0,0,0]) self.gyro_x_angle = 0.0 self.gyro_y_angle = 0.0 self.gyro_z_angle = 0.0 self.kalman_x = Kalman() self.kalman_y = Kalman() def init_sensor()... def calculate_angles(self): dt = 0.01 comp_y = 0.0 comp_x = 0.0 print("Reading data...") while True: self.read_sensor_raw() gyro_x_scaled = (self.result_list[4] / self.gyro_scale) gyro_y_scaled = (self.result_list[5] / self.gyro_scale) gyro_z_scaled = (self.result_list[6] / self.gyro_scale) acc_x_scaled = (self.result_list[0] / self.accel_scale) acc_y_scaled = (self.result_list[1] / self.accel_scale) acc_z_scaled = (self.result_list[2] / self.accel_scale) acc_x_angle = math.degrees(math.atan2(acc_y_scaled, self.dist(acc_x_scaled,acc_z_scaled))) acc_y_angle = math.degrees(math.atan2(acc_x_scaled, self.dist(acc_y_scaled,acc_z_scaled))) comp_x = 0.95 * (comp_x + (gyro_x_scaled * dt)) + 0.05 * acc_x_angle comp_y = 0.95 * (comp_y + (gyro_y_scaled * dt)) + 0.05 * acc_y_angle kalman_y_angle = self.kalman_y.filter(acc_y_angle, gyro_y_scaled, dt) kalman_x_angle = self.kalman_x.filter(acc_x_angle, gyro_x_scaled, dt) self.gyro_x_angle += gyro_x_scaled * dt self.gyro_y_angle -= gyro_y_scaled * dt self.gyro_z_angle -= gyro_z_scaled * dt time.sleep(dt) def read_sensor_raw(self): self.data_list = self.bus.read_i2c_block_data(self.address, 0x3B, 14) for i in range(0, 14, 2): if(self.data_list[i] &gt; 127): self.data_list[i] -= 256 self.result_list[int(i/2)] = (self.data_list[i] &lt;&lt; 8) + self.data_list[i+1] def dist(self, a,b): return math.sqrt((a*a)+(b*b)) class Kalman(): def __init__(self): self.Q_angle = float(0.001) self.Q_bias = float(0.003) self.R_measure = float(0.03) self.angle = float(0.0) self.bias = float(0.0) self.rate = float(0.0) self.P00 = float(0.0) self.P01 = float(0.0) self.P10 = float(0.0) self.P11 = float(0.0) def filter(self, angle, rate, dt): self.rate = rate - self.bias self.angle += dt * self.rate self.P00 += dt * (dt * self.P11 - self.P01 - self.P10 + self.Q_angle) self.P01 -= dt * self.P11 self.P10 -= dt * self.P11 self.P11 += self.Q_bias * dt S = float(self.P00 + self.R_measure) K0 = float(0.0) K1 = float(0.0) K0 = self.P00 / S K1 = self.P10 / S y = float(angle - self.angle) self.angle += K0 
* y self.bias += K1 * y P00_temp = self.P00 P01_temp = self.P01 self.P00 -= K0 * P00_temp self.P01 -= K0 * P01_temp self.P10 -= K1 * P00_temp self.P11 -= K1 * P01_temp return self.angle </code></pre> <p><strong>EDIT:</strong> I've added some information based on @Chuck's answer:</p> <ul> <li><code>self.result_list[3]</code> contains the temperature</li> <li>In my opinion the compl. filter is implemented correctly: <code>gyro_x_scaled</code> and <code>gyro_y_scaled</code> are angular velocities, but they are multiplied by <code>dt</code>, so they give <strong>angle</strong>. <code>acc_?_scaled</code> are accelerations, but <code>acc_x_angle</code> and <code>acc_x_angle</code> are <strong>angles</strong>. Check my comment, where the Complementary filter tutorial is.</li> <li>Yes, there was something missing in the Kalman filer, I've corrected it.</li> <li>I totally agree with you, <code>sleep(dt)</code> is not the best solution. I've measured how much time the calculation takes, and it is about 0.003 seconds. The Y angle filters return incorrect values, even if <code>sleep(0.007)</code> or <code>sleep(calculatedTimeDifference)</code> is used.</li> </ul> <p>The Y angle filters still return incorrect values.</p>
Complementary and Kalman filter don't work for Y angle
<p>First, a point of terminology - I believe when you say "direction" what you mean is what is commonly called a <a href="https://en.wikipedia.org/wiki/Course_(navigation)" rel="nofollow">heading</a>. That is, when you say "direction" what you mean is the angle between an arbitrary line from camera 1 (usually straight forward) and the line between camera 1 and camera 2.</p> <p>Assume camera 2 starts at a known distance from camera 1 (sufficiently long for a good filtered GPS location, hand placed, etc.) and assume you know the initial heading. </p> <p>Call the initial distance between camera 1 and camera 2 $a$.</p> <p>Now camera 2 moves. Your IMU reports the distance traveled. Call this distance $b$.</p> <p>Camera 1 reports the change in heading between the initial and second location, this is an angle (from the arc tangent of your camera 1 output). Call this angle $B$.</p> <p>Now you have an $a$, a $b$, and a $B$. Use the <a href="https://en.wikipedia.org/wiki/Law_of_sines" rel="nofollow">Law of Sines</a> to calculate the distance between camera 1 and camera 2 - this will be $c$.</p> <p>You can now use a basic Kalman filter to filter the IMU distance output to get a better $b$, and thus a better $c$, and you can use another to combine this reading with your noisy GPS position. The method I've described should give you better short term results than the GPS, and the GPS should correct any drift caused by long term error accumulation. </p>
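<p>For concreteness, here is the triangle solve as a small sketch (function and variable names are mine; note that this side-side-angle case can have two solutions, and this returns the one with the acute angle at camera 2's new position):</p> <pre><code>import math

def camera2_distance(a, b, B_rad):
    """Law of Sines solve for side c as described above.

    a     -- previous camera1-to-camera2 distance
    b     -- distance travelled by camera 2 (from the IMU)
    B_rad -- change in heading observed by camera 1, in radians
    """
    A = math.asin(a * math.sin(B_rad) / b)   # angle at camera 2's new position
    C = math.pi - A - B_rad                  # angle at camera 2's old position
    return b * math.sin(C) / math.sin(B_rad)

print(camera2_distance(5.0, 1.0, math.radians(8.0)))
</code></pre>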
7832
2015-08-08T18:32:13.087
|kalman-filter|cameras|pose|
<p>I am currently working on a pose estimation problem for which I would like to use filtering. To explain the system briefly, it consists of two cameras and each has its own GPS/IMU module. The main assumption is that Camera1 is fixed and stable, whereas camera2 has a noisy pose in 3D. I am using computer vision to obtain the pose (metric translation and rotation) of camera2 w.r.t. camera1, so that I can improve upon the inherent noise of GPS/IMU modules.</p> <p>The problem here is that the translation obtained through the vision method is only up to an arbitrary scale, i.e. at any given instant, I can only obtain a unit vector that specifies the "direction" of the translation and not absolute metric translation. The camera based estimation, although accurate, has no idea about how much actual distance is between the cameras, which is why I have the GPS, which gives me position data with some noise.</p> <p>Example: camera 2 is 5 m to the east of camera 1, the pose from my vision algorithm would say [1, 0, 0] ; 1 m north-east to camera 1, it would be something like [0.7, 0.7, 0]</p> <p>Hence, would it be possible to consider the GPS estimate of the metric translation as well as its covariance ellipse, and somehow link it with the normalized camera measurements to obtain a final, more accurate estimate of metric translation? I am not sure what kind of filters would be happy to use a measurement that has no absolute value in it.</p> <p>Thanks!</p>
Kalman filter for estimating position with “direction” measurements
<p>Although you have asked for facts and not opinions, that's a difficult thing to do. As of now, there are no current standards for much of anything in robotics. The field as a whole is so scattered with many avenues of research. In the end, it all comes down to what makes the most sense for the robot being designed. With that said, I do recall some information regarding this subject in this robotics handbook: <a href="https://www.cs.cmu.edu/%7Emmv/planning/readings/handbook.pdf" rel="nofollow noreferrer">https://www.cs.cmu.edu/~mmv/planning/readings/handbook.pdf</a> but I'm unaware of any specific papers on this topic - most focus on some specific aspect of robotics or AI, and the controller architecture is just a means to that end.</p> <h2>Single Controller Architecture</h2> <p>For small applications or one-offs, this is a perfectly viable choice. If you know that a single MCU can handle all of the data input and actuator output, then there is nothing wrong with just using one MCU.</p> <h3>Pros</h3> <ol> <li>Fewer Components</li> <li>Fewer (or none) board-board connections</li> <li>Lower Cost</li> <li>Less Power Use (potentially)</li> <li>Less Firmware to maintain/update</li> </ol> <h3>Cons</h3> <ol> <li>Not easily expandable (limited I/O / space for code)</li> <li>All components are coupled - removing 1 thing might break other things</li> <li>Not modular - must redesign/reprogram similar systems in the future</li> <li>No redundancy - MCU failure is catastrophic</li> <li>No Fail-safe - Any blocking code stalls all systems</li> </ol> <h2>Multi-Controller Architecture</h2> <p>For larger, complex systems or those which will likely be expanded upon or used for research purposes, a multi-controller architecture is the better option. It should also be noted that robot controllers are not only MCUs, but are commonly controller using microprocessors (MPU) as well, as they are capable of processing more data at higher speeds.</p> <h3>Pros</h3> <ol> <li>Modularity - simple systems can be reused (motor drivers, sensor interfaces)</li> <li>Easy to swap to different sensors using documented interfaces</li> <li>Decoupled components - Switching to a different sensor or motor does not affect other systems</li> <li>Multiple controllers could be less expensive than one powerful one</li> <li>Timing - different systems can run at different clock speeds</li> <li>Simple firmware - specific code is less cluttered</li> <li>Well known and easy to implement serial interfaces (SPI, I2C, USB, etc)</li> <li>Able to split control into layers of abstraction</li> </ol> <h3>Cons</h3> <ol> <li>More components to manage</li> <li>More power usage</li> <li>Complicated hardware infrastructure</li> <li>More cables to run</li> </ol> <h2>Wrapping Up</h2> <p>It's obvious that the pros of one architecture are the cons of the other, and vice versa. This is to be expected. For complex robotic systems, it is common (not necessarily standard, but typical) to break the problem down into smaller chunks - using small MCU circuits that do very specific things. These can then be grouped together, whatever makes sense to you.</p> <p>For example, there could be one central controller for all motor drivers or distance sensors. Similarly, all of the drivers for one section of a robot could be grouped together, such as a single arm in a humanoid robot. 
All of these mid-level controllers are then commanded by a single high-level controller which decides when and why to do things, but doesn't really care how it gets done.</p> <p>On the other hand, for a simple robot with a very specific structure that is unlikely to be reused, a single central controller is often used.</p>
7838
2015-08-09T18:11:37.063
|design|
<p>Are there any standards regarding single vs multiple MCU in a robotic system? More specifically, if a single MCU can handle all of the sensor data and actuator controls, is it better to use a single MCU or multiple MCUs in a hierarchical manner? Are there any references/papers regarding this topic? What are the arguments towards one or the other? I am looking for facts, not personal opinions, so pros, cons, standards and such.</p>
MCU architecture design
<p>When say you say you "want to control a brushless motor with [a] ESC ... and Arduino", do you mean:</p> <p>Do you want the Arduino to act like the RC receiver in the airplane in the air and make the propeller spin slow, stop, fast, etc.? Or,</p> <p>Do you want the Arduino to act like a person changing settings in the ESC using a programming card on the ground?</p> <h2>In the air</h2> <p>In the air, the 3 wire cable between the RC receiver and the ESC carries the so-called <a href="http://www.opencircuits.com/Servo_control" rel="nofollow">"RC PWM signals"</a>. The RC receiver sends exactly the same kind of signals to RC servos. The ESC neither knows nor cares where these signals come from -- the RC receiver, the Arduino PWM (using the <a href="https://www.arduino.cc/en/Reference/Servo" rel="nofollow">Arduino Servo library</a>), the <a href="http://playground.arduino.cc/ComponentLib/Servo" rel="nofollow">Arduino Software Servo library</a> ( periodically calling refresh() in the main loop() ), or some other source.</p> <h2>On the ground</h2> <p>The "beep sounds" are really only useful when changing the settings in the ESC on the ground, perhaps with a programming card. To learn what the "beep sounds mean", see the user manual <a href="http://yinyanmodel.com/EMAX%20User%20Instruction%20for%20Simonk%20Series%20ESC.pdf" rel="nofollow">"EMAX User Instruction for SimonK Series ESC"</a>.</p> <p>Normally this is a one-time setup, so most people do this once manually on the ground. </p>
7850
2015-08-10T19:30:38.450
|arduino|brushless-motor|esc|
<p>I want to control a brushless motor with the "EMAX Simon Series 30amp ESC" and Arduino (Leonardo) board. I am really confused how to do that. I can't understand which beep sounds mean what. I have tested many code examples but they weren't useful. </p>
EMAX ESC Simon Series with arduino
<p>I had problems understanding the diagram and the question. It might be because you didn't explain every step to keep the question short or just because I use some terminology differently so I'd like to go through this slowly. This is not to say that this is the right way to do it, but so that everybody is on the same page. The comments imply that others are also confused. Essentially, I would identify things differently and in control theory giving something a different name can be a major part of the solution.</p> <p>I was pretty much in WTF? mode until I saw this in the comments: <code>r+PID_output</code>, which gave me a hint on how this came to be. Well, at least I think.</p> <h2>The ramp is <em>not</em> part of the plant</h2> <p>No, not even if it is part of the plant. The reason is that adding the ramp makes the plant nonlinear. An LTI system is a <strong>L</strong>inear <strong>T</strong>ime <strong>I</strong>nvariant system. That means (among other things) shifting the input by some amount of time, should only result in a time shifted output, without changing the input value. The ramp breaks that rule.</p> <p>For the sake of some more confusion, $y = mx + n$ is also nonlinear<sup>1</sup>.</p> <p>I could come up with some gruesome simile about how you'd want to rather choke on your own innards than doing nonlinear control theory, but instead I'd say let's try to do this with linear control theory. It is able to handle the situation.</p> <p>The plant is just $k$, it's transfer function (which can be calculated now, just another benefit of kicking the ramp out) is simply:</p> <p>$$ output (t) = input(t) \cdot k \\ Output(s) = Input(s)\cdot k \\ \frac{Output(s)}{Input(s)} = k$$</p> <h2>Neither noise nor integrator are part of the controller</h2> <p>As you say, the noise isn't created in the controller, so why model it there? And the mysterious integrator is removed in a <strong>coincidently</strong> similar way to the ramp. * cough *</p> <p>We are left with the bog standard control loop without a sensor. (a sensor will never be present in this answer, the sensor in the image simply has a transfer function of 1) <a href="https://en.wikipedia.org/wiki/File:Feedback_loop_with_descriptions.svg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FvFIU.png" alt="open loop block diagram"></a></p> <h2>Finding a suitable controller for the system</h2> <blockquote> <p>$P=\frac{1}{k}$ is enough. Since every time the controller gets an error e, it can be calculated back that the compensation on controller output shall be $\frac{e}{k}$. However, Matlab auto-tuning always give me a PID. Why is that?</p> </blockquote> <p>It's tempting to use the inverse of the transfer function of the system as a controller.</p> <p>That would compensate the system exactly, if you are a glas-half-full kind of person. On the other hand, if you are a glas-already-empty-again-damnit-gotta-order-another-one kind of person, it <em>only</em> compensates the system exactly. The subtle difference is that a model is never exact. 
With slight inaccuracies in the model due to real world stuff, compensating the system transfer function exactly is not a good idea.</p> <p>Adding an integral part for good measure is usually a good idea, one reason is stated <a href="https://en.wikipedia.org/wiki/PID_controller#Integral_term" rel="nofollow noreferrer">here</a>:</p> <blockquote> <p>The integral term accelerates the movement of the process towards setpoint and eliminates the residual steady-state error that occurs with a pure proportional controller. </p> </blockquote> <h2>For those about to ramp, we compensate you<sup>2</sup></h2> <p>The existence of the ramp cannot be denied. Instead of modelling it as part of the system, let's model it as the disturbance variable as can be seen in the following image named $W$</p> <p><a href="http://ctms.engin.umich.edu/CTMS/index.php?example=Suspension&amp;section=ControlPID" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/idIrE.png" alt="closed loop block diagram with disturbance"></a> <em>hint: a click on the image leads you to the source of the image which is an article on how to simulate disturbance transfer functions with matlab</em></p> <p>Two important properties of a system are:</p> <ol> <li>how quickly it follows a change of the input, which is kinda moot, because that's constant $0$ in your case</li> <li>how quickly it can compensate a disturbance</li> </ol> <p>It's usually a trade-off between both. You can compare them by looking at their transfer functions.</p> <p>There's a great image showing that on a non English Wikipedia article. This is optional, so feel free to just skip this.</p> <p>Trouble is that the variables used there have different names. The block diagram with controller $G_R(s)$, system $G_S(s)$ and disturbance $D(s)$ looks like this:</p> <p><a href="https://de.wikipedia.org/wiki/Datei:Control_structure_B.svg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FVO7H.png" alt="yet another closed loop diagram with disturbance"></a></p> <p>The disturbance transfer function is thus given as:</p> <p>$$\frac{Y(s)}{D(s)}=\frac{G_S(s)}{1+G_R(s)G_S(s)}$$</p> <p>Ok, ready? Here it comes: <a href="https://de.wikipedia.org/wiki/Datei:Sprungantwort_der_st%C3%B6rgr%C3%B6%C3%9Fe_mit_angriffspunkt_auf_streckeneingang.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7Ajsc.png" alt="German plot of responses from different transfer functions"></a></p> <p>Don't worry if you don't have the Sauerkraut to read it, because I do.</p> <p>There are 4 plots displaying unit jumps of disturbance and input and the output responses to these using two different controllers (both PID controllers but different parameters):</p> <ol> <li>the $\color{blue}{\text{thin blue}}$ line is the plot of the input $w(t)$ jumping to $1$ at $t = 0$, it's basically just a horizontal line at 1</li> <li>the $\color{green}{\text{thin green}}$ line is the plot of the disturbance $d(t)$ jumping to $1$ at $t = 10$</li> <li>the <strong>$\color{blue}{\text{thick blue}}$</strong> line is the response of the output controlled by a controller tuned for quickly adopting changes in the input. As you can see it takes the jump of the input rather well without overshooting much. The disturbance however is not compensated quickly. It causes a huge bump in the plot(up to 1.3) which never fully recovers back to 1 (where it should be) within the visible area of the diagram. 
<strong>You certainly don't want that.</strong></li> <li>the <strong>$\color{red}{\text{thick red}}$</strong> line shows the response of the output controlled by a controller tuned to quickly compensate disturbances. The response to the input jump is a disaster, but the response to the disturbance jump at $t = 10$ is barely visible. The disturbance jumps from 0 to 1, but the output only climbs to roughly 1.05 because of that. <strong>This is what you want</strong></li> </ol> <p>The only difference between the two is what PID parameters were used:</p> <p>$$\color{blue}{G_R(s) = \frac{0.8(2s+1)(s+1)}{s}}$$ $$\color{red}{G_R(s) = \frac{10(2s+1)(s+1)}{s}}$$</p> <p>There's some factor that should be rather big (10 vs. 0.8) in order to compensate disturbance well, let's call this factor $a$:</p> <p>$$G_R(s) = \frac{a(2s+1)(s+1)}{s}$$</p> <p>The generic PID controller has the transfer function</p> <p>$$G_{PID}(s) = K_p + K_i\frac{1}{s} + K_ds$$</p> <p>I conclude that multiplying all 3 PID parameters by some factor improves how well the system can handle disturbance. </p> <p>Your system is different and your disturbance is a ramp instead of a jump. But the example shows how to analyse a system, which should help you tune your controller.</p> <p>The disturbance is also what I'd add the noise to.</p> <h2>The mysterious integrator</h2> <p>Now that's all nice, but what the heck is this integrator thingy supposed to be? And how is <code>r+PID_output</code> a hint?</p> <p>It's a feed forward compensation of the disturbance. <code>r+PID_output</code> is a hint because it contains the <strong>+</strong>, which indicates that this is supposed to be a feed forward. The problem is that in the block diagram, it is a separate block <strong>after</strong> the PID controller, which means in the block diagram, it is actually $r \cdot PID_{output}$, which is <strong>not</strong> how feed forward should be.</p> <p>The following image shows how a feed forward compensates disturbance: (with yet again differently named variables)</p> <p><a href="http://www.atp.ruhr-uni-bochum.de/rt1/syscontrol/node88.html" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XuQU2.gif" alt="feed forward"></a></p> <p>The compensation of the disturbance is indeed added <strong>after</strong> the controller. As can be seen in the image, it is added via a negative summation node.</p> <p>As seen above, the controller itself should be able to handle disturbance, because it is tuned to do only that in your use case.</p> <p>A feed forward can take some workload off the controller. If you really think this is neccessary, go ahead and add the feed forward. I would keep it separate from the controller, for better readability, although adding the feed forward will basically just change some parameters of the PID controller.</p> <hr> <p><sup>1</sup> <a href="https://electronics.stackexchange.com/questions/29636/how-i-can-set-up-a-transfer-function-of-y-mxc">check this proof on electronics.SE</a></p> <p><sup>2</sup> with apologies to AC/DC</p>
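<p>To make the disturbance discussion a bit more tangible for this particular plant, here is a toy discrete-time simulation of the loop from the question (plant gain <code>k</code>, slow ramp disturbance, controller output fed through the integrator). The gains are purely illustrative, not a tuning recommendation:</p> <pre><code># y = k*(u + d), d(t) is a slow ramp, setpoint is 0
k, dt, T = 2.0, 0.01, 20.0       # made-up plant gain, time step, duration
Kp, Ki = 1.0 / k, 0.5            # illustrative PI gains

u = 0.0                          # plant input (after the external integrator)
i_term = 0.0
for step in range(int(T / dt)):
    t = step * dt
    d = 0.05 * t                 # slow ramp disturbance
    y = k * (u + d)              # plant output, should stay near 0
    e = 0.0 - y
    i_term += Ki * e * dt
    pid_out = Kp * e + i_term    # PI controller (D part omitted)
    u += pid_out * dt            # external integrator on the controller output

print("final output y =", y)     # ends up close to 0 despite the ramp
</code></pre> <p>Dropping <code>Ki</code> leaves a constant steady-state error proportional to the ramp slope, which is one reason the auto-tuner refuses to hand back a pure P controller.</p>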
7853
2015-08-12T12:01:22.133
|control|pid|
<p>Could I have your opinions on PID type selection?</p> <p><a href="https://i.stack.imgur.com/rrB8u.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rrB8u.png" alt="Plant with a PID controller"></a> <strong>System description</strong></p> <ol> <li>Here comes a very simple system: $\mbox{Output}(t) = k * (\mbox{Input}(t) + \mbox{systemVariable}(t))$. $k$ is constant and $\mbox{systemVariable}(t)$ is a system variable which may change according to time.</li> <li>The goal of the whole system is to maintain system output at $0$. It has to be as close to zero as possible. The controller has to compensate the $\mbox{systemVariable}$.</li> <li>The change ($\mbox{systemVariable}$ ) is modeled by a very slow ramp.</li> </ol> <p><strong>Controller description</strong></p> <ol> <li><p>The controller's input is the output of the system. However, the measurements are always noisy, and I modeled Band-Limited White Noise into the measurements.</p></li> <li><p>After PID controller, the output goes into an integrator, since the PID controller always calculates the "change" of the plant input. </p></li> </ol> <p><strong>Questions</strong></p> <ol> <li><p>My original thoughts: Add a PID controller with P=1/k is enough. Since every time the controller gets an error $e$, it can be calculated back that the compensation on controller output shall be $e/k$. However, Matlab auto-tuning always give me a PID. Why is that?</p></li> <li><p>What is the relation between P of PID and measurement noises? If P is large, the system will tend to be rambling largely, due to the noises. If P is small, the system will tend not to converge to the correct value or very slow. How to make the trade-offs? Or how to prevent system from rambling largely and get quick system responses? </p></li> </ol> <p>Thanks a lot!</p>
How to tune PID for a Y(t) = k*X(t) system?
<p>Regarding the first statement in your question: "I know that RC servo motors are designed for precise movement, rather than a D.C. motor's continual rotation. Are most RC servo motors limited to movement within one rotation or can they actually be made to continually rotate?"</p> <p><strong>A Continuous Rotation RC Servo is NOT a Servo</strong></p> <p>Here is why.</p> <p><strong>What is a Servo</strong></p> <p>A <a href="https://en.wikipedia.org/wiki/Servomotor" rel="nofollow noreferrer">Servo (Servomotor)</a> is a motor with a position sensor and a closed-loop controller that adjusts the motor power to ensure the motor is accurately held at the commanded position.</p> <p><strong>What is an RC Servo</strong></p> <p>An RC Servo is a small DC motor geared down to a drive shaft that has a <a href="https://www.youtube.com/watch?v=cnOKG0fvZ4w" rel="nofollow noreferrer">potentiometer (rotary resistor)</a> for its position sensor and is controlled by a pulse train. The width of the pulse (PW) determines the drive shaft position. The controller compares the PW to the potentiometer position and the motor is driven to compensate for the error. In a typical RC Servo, a 1.5 ms pulse is the center position. For example, in this case the pulse is compared to the center resistance value of the potentiometer. If the potentiometer is at its center value, no power is applied to the motor. If, however, the servo is clockwise (CW) of center, then the potentiometer value will be lower and the servo controller will apply power to turn the motor counter-clockwise (CCW) to bring it back to center. The bigger the error, the more power will be applied to the motor. If the error is in the opposite direction, the motor will be driven CW.</p> <p>The advantage of this design is that you can produce a very lightweight servo out of small inexpensive components. The limitation of this design is that the travel of the drive shaft is limited by the rotational travel of the potentiometer. For RC that is not generally an issue, as RC servos are usually used to drive control surfaces that have very limited travel.</p> <p><strong>Why a "Continuous Rotation" RC Servo is not a Servo</strong></p> <p>As @Greenonline mentioned, you can modify an RC Servo for continuous rotation.</p> <p>Note in the <a href="https://www.youtube.com/watch?v=cnOKG0fvZ4w" rel="nofollow noreferrer">video</a> what the guy does:</p> <ol> <li>He removes the end stop that protects the potentiometer</li> <li>He cuts the wires from the controller to the potentiometer and connects a fixed resistor</li> </ol> <p>So, going back to my description of an RC Servo, what does that result in?</p> <ol> <li>Since the position sensor (potentiometer) is gone, there is no longer a control loop, so it is no longer a Servo.</li> <li>If you were to send in a PW of 1.5 ms, the controller would see the resistance at the center point and would not apply power to the motor.</li> <li>If you were to send in a PW of &lt;1.5 ms (commanding a position CCW of center), the controller would see the resistance at the center point and drive the motor CCW to get there (which it never will) and hence it will rotate continuously CCW.</li> <li>Because there is no feedback control you will not be able to rely on the speed or accuracy of the motor; but you will have a small DC motor that you can command using an RC Servo controller (or using a PW)</li> </ol>
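<p>The internal loop described above is essentially a proportional controller. As a purely illustrative sketch (the numbers, names and sign convention are made up; real servo electronics are analog or vendor-specific):</p> <pre><code>def servo_drive(pulse_width_us, pot_angle_deg, gain=4.0):
    """Motor drive level (-100..100) from the command pulse and the pot reading.

    1.5 ms (1500 us) maps to centre; roughly 1.0-2.0 ms maps to about -60..+60 deg.
    Positive pot angle means the shaft sits CW of centre.
    """
    commanded_deg = (pulse_width_us - 1500) * 60.0 / 500.0
    error = commanded_deg - pot_angle_deg
    return max(-100.0, min(100.0, gain * error))   # bigger error, more power

print(servo_drive(1500, 10.0))   # shaft is CW of centre, so it is driven CCW (negative)
</code></pre> <p>Cutting the pot out and feeding the comparator a fixed "centre" resistance, as in the modification above, freezes <code>pot_angle_deg</code> at zero, so any off-centre pulse keeps the motor running in one direction.</p>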
7864
2015-08-14T07:47:08.320
|stepper-motor|rcservo|
<p>I know that RC servo motors are designed for precise movement, rather than a D.C. motor's continual rotation. Are most RC servo motors limited to movement within one rotation or can they actually be made to continually rotate? That is to say, is their movement limited to a specific arc? Or does it depend on the type of RC servo motor?</p> <p>I have seen videos of <a href="https://www.youtube.com/watch?v=EAUcQM5KChc" rel="nofollow noreferrer">industrial size steppers rotating constantly</a>, but, more specifically, I was wondering whether a MG995 can.</p> <p><a href="https://i.stack.imgur.com/vmpb3m.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vmpb3m.jpg" alt="MG995 Stepper motor" /></a></p> <p>I don't own any RC servo motors yet, so I can't actually test it myself. I just want to make sure before I make a purchase. I keep seeing conflicting information, for example the instructable, <a href="http://www.instructables.com/id/How-to-modify-a-servo-motor-for-continuous-rotatio/" rel="nofollow noreferrer">How to modify a RC servo motor for continuous rotation (One motor walker robot)</a>, implies that a RC servo motor will not continually rotate, else otherwise, why would there be a need to modify it?</p> <h3>Addendum</h3> <p>I have just realised, after <a href="https://learn.adafruit.com/all-about-stepper-motors/what-is-a-stepper-motor" rel="nofollow noreferrer">further digging about</a> on google, and as HighVoltage points out in <a href="https://robotics.stackexchange.com/questions/7864/can-stepper-motors-continually-rotate#answer-7865">their answer</a>, that I have confused steppers and servos.</p> <p>In addition, I found out <a href="https://www.youtube.com/watch?v=cnOKG0fvZ4w" rel="nofollow noreferrer">how to hack the TowerPro MG995 Servo for continuous rotation</a>.</p>
Can RC servo motors continually rotate?
<p>You could consider taking a look at <a href="http://gazebosim.org/" rel="nofollow">Gazebo Robotic Simulation</a> and the <a href="http://www.ros.org/" rel="nofollow">Robot Operating System</a>. Both might require a bunch of work getting used to, but I am sure it is worth the effort.</p>
7872
2015-08-15T16:01:52.207
|software|simulation|visual-servoing|
<p>I want to conduct the following experiment: I want to set up a scene with a kuka lwr4+ arm, a 3D model of an object and a camera overlooking them. I want to find the pose of the object using some pose estimation algorithm and move the arm towards the object. In general I want a piece of software or a combination of cooperating software that can do all that without having to reinvent the wheel. Is there anything available?</p>
Simulation environment for conducting visual servoing experiment
<p>Recall that for a circle with radius $r$, the circumference $u$ (that is the arc length of the "curve") is given as:</p> <p>$$u = 2 \pi r$$</p> <p>Further recall the definition of radians, which is something along the lines of:</p> <blockquote> <p>If a rope of length $r$ is placed on the circumference of a circle that has the radius $r$, the rope will "cover" a certain angle. This angle is 1 radian.</p> </blockquote> <p>The following awesome animation from the <a href="https://en.wikipedia.org/wiki/Radian" rel="nofollow noreferrer">Wikipedia article on radian</a> covers this better than my words:</p> <p><a href="https://en.wikipedia.org/wiki/File:Circle_radians.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4fEl7.gif" alt="enter image description here"></a></p> <p>You can now see how both are somewhat similar: if you multiply the angle $1~rad$ by $2\pi$, you also get a rope length of $2\pi r$. Now this rope length is exactly the same as $u$.</p> <p>We can therefore conclude:</p> <p>The $2\pi$ in the formula for the circumference $u=2\pi r$ corresponds to the covered angle of the circle. A full circle is $360°$, which is equivalent to $2\pi$. Let's call this angle $\omega$ and write the formula again, with that angle:</p> <p>$$u = \omega r$$</p> <p><a href="https://en.wikipedia.org/wiki/Arc_%28geometry%29#Length_of_an_arc_of_a_circle" rel="nofollow noreferrer">The Wikipedia article about the arc length of a circle</a> comes to the same conclusion.</p> <p>You can calculate any arc length given any angle. For example, half a circle, with an angle of $180° = \pi$, has (obviously) half the circumference:</p> <p>$$\frac{u}{2} = \frac{\omega}{2} r$$</p> <p>Or one third of a circle has this circumference:</p> <p>$$\frac{u}{3} = \frac{\omega}{3} r$$</p> <p>I mixed the terms circumference and arc length within my answer on purpose. If you think of the circumference as a <em>length</em> you can imagine that it's a distance that you can walk along. More precisely: drive along, because your robot has wheels.</p> <p>Each wheel covers a certain distance, $v_r$ and $v_l$ respectively.</p> <p>If $v_r \neq v_l$, the robot moves on a circle as seen in your image, with center $ICC$ and radius $R$. Each wheel is on its own circle with a different radius:</p> <ul> <li>the left wheel is closer to $ICC$, which means it has a smaller radius $R-\frac{l}{2}$</li> <li>the right wheel is further away from $ICC$, which means it has a bigger radius $R+\frac{l}{2}$</li> </ul> <p>If the robot drives around on that circle with center $ICC$ and radius $R$ by an angle of $\omega$, the relationship between the distances travelled by each wheel is given by the formulas that you found in the book:</p> <p>$$ \begin{align} \omega~ \left(R+\frac{l}{2}\right) &amp;= v_r\\ \omega~ \left(R-\frac{l}{2}\right) &amp;= v_l\\ \end{align}$$</p> <p>$v_r$ and $v_l$ are the arc lengths of the circles that the wheels are driving on and $\omega$ is the angle that the robot goes around the circle.</p> <p>The important thing to understand: <strong>The circle is just a geometric aid to describe the motion of the robot, which exists because it's easier to handle and understand.</strong> It's really just another way to describe $v_r$ and $v_l$, but those two values are hard to imagine and, depending on the wheelbase $l$, they could describe different motions.</p> <p>If I give you $v_r = 5$, $v_l = 6$ and $l = 3$, can you easily answer the question whether the robot can do a full turn in a 50 x 50 room? It's kind of hard to tell.
But if you had $R$, it's a lot easier.</p> <p>Thinking in terms of a circle is a lot easier than constructing the motion from $v_r$ and $v_l$.</p> <blockquote> <p>Why ω has to be same if we want to analyse the behavior after changing the wheel velocity relative to other? </p> </blockquote> <p>I think you are misunderstanding the text in the book. The book says that $\omega$ is the same for both wheels, given two values for $v_r$ and $v_l$. Of course, <strong>if you change the values of $v_r$ and $v_l$, you will get a different value for $\omega$, but $\omega$ will still be the same for both wheels</strong>.</p> <blockquote> <p>How do we know about the circle, in which the robot rotates by doing variations in one wheel velocity, surely passes through the center point between two wheels.</p> </blockquote> <p>Because that's the way the geometry works out. Both wheels are on circles with different radii ($R\pm \frac{l}{2}$) with the same center $ICC$, but they have a fixed relation to each other, so the center of the robot must be on a circle with radius $R$.</p> <p>Just imagine you are in a car driving in a roundabout. If the left side of the car is going in a circle and the right side is going in a circle, why do you think you are not going in a circle? Of course you do. And so does the center of the robot.</p>
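<p>For completeness, solving the two equations from the book for $\omega$ and $R$ is a two-liner; here is a small sketch (the function name is mine):</p> <pre><code>def icc_from_wheels(v_r, v_l, l):
    """Solve omega*(R + l/2) = v_r and omega*(R - l/2) = v_l for omega and R."""
    omega = (v_r - v_l) / l
    if omega == 0.0:
        return 0.0, float("inf")        # both wheels equal: straight line, no ICC
    R = (l / 2.0) * (v_r + v_l) / (v_r - v_l)
    return omega, R

print(icc_from_wheels(5.0, 6.0, 3.0))   # the 50 x 50 room example from above
</code></pre> <p>Here $|R|$ comes out as 16.5, so the turning circle fits comfortably in the 50 x 50 room, which is hard to see from $v_r$ and $v_l$ alone.</p>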
7879
2015-08-16T16:29:05.110
|mobile-robot|
<p>Can anyone please explain me these lines found on <a href="http://www.diva-portal.org/smash/get/diva2:796235/FULLTEXT01.pdf" rel="nofollow noreferrer">page 5 of <em>Kinematics Equations for Differential Drive and Articulated Steering</em> by <em>Thomas Hellström</em></a>?</p> <blockquote> <p>Note that plugging in $r$ and $v$ for both left and right wheel result in the same $\omega $ (otherwise the wheels would move relative to each other). Hence, the following equations hold:</p> <p>$$ \begin{align} \omega~ \left(R+\frac{l}{2}\right) &amp;= v_r\\ \omega~ \left(R-\frac{l}{2}\right) &amp;= v_l\\ \end{align}$$</p> <p>where $R$ is the distance between ICC and the midpoint of the wheel axis, and $l$ is the length of the wheel axis (see Figure 6). </p> <p><a href="https://i.stack.imgur.com/M9Y7g.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/M9Y7g.png" alt="Figure 6"></a></p> <p>Figure 6 . When left and right wheel rotate with different speeds, the robot rotates around a common point denoted ICC My questions are:</p> </blockquote> <ol> <li><p>How do these equations come to be?</p></li> <li><p>Why does $\omega$ have to be same if we want to analyse the behavior after changing the wheel velocity relative to other?</p></li> <li><p>How do we know about the circle, in which the robot rotates by doing variations in one wheel velocity, surely passes through the center point between two wheels.</p></li> </ol>
Forward kinematics: Why ω should remain same?
<p><a href="http://www.chrobotics.com/downloads/ExampleMatlabCode.zip" rel="nofollow">http://www.chrobotics.com/downloads/ExampleMatlabCode.zip</a> read um7 directly from serial</p>
7883
2015-08-17T06:36:43.263
|arduino|electronics|embedded-systems|matlab|
<p><em>I try to read IMU sensor data from an Arduino mega 2560 UART with serial receive block of Arduino support package for simulink. The IMU can send binary packets and also nmea packets and I can configure it to any output. When the serial recieve block output is directly used, it displays just the numbers between 0-255. l need help about how to parse the coming data which contains the euler angles that I want to use.</em></p> <p>Here is <strong>binary structure ;</strong></p> <p>"<strong>s</strong>","<strong>n</strong>","<strong>p</strong>",<strong>packet type(PT)</strong>,<strong>Address</strong>,<strong>Data Bytes (D0...DN-1)</strong>,<strong>Checksum 1</strong>,<strong>Checksum 0</strong></p> <p>The PT byte specifies whether the packet is a read or a write operation, whether it is a batch operation, and the length of the batch operation (when applicable). The PT byte is also used by the UM7 to respond to commands. The specific meaning of each bit in the PT byte is given below.</p> <p>Packet Type (PT) byte;</p> <p>7 Has Data, 6 Is Batch, 5 BL3, 4 BL2, 3 BL1, 2 BL0, 1 Hidden, 0 CF</p> <p>Packet Type (PT) Bit Descriptions;</p> <p>7...Has Data: If the packet contains data, this bit is set (1). If not, this bit is cleared (0). </p> <p>6...Is Batch: If the packet is a batch operation, this bit is set (1). If not, this bit is cleared (0) </p> <p>5:2..Batch Length (BL): Four bits specifying the length of the batch operation. Unused if bit 7 is cleared. The maximum batch length is therefore 2^4 = 16 </p> <p>1...Hidden: If set, then the packet address specified in the “Address” field is a “hidden” address. Hidden registers are used to store factory calibration and filter tuning coefficients that do not typically need to be viewed or modified by the user. This bit should always be set to 0 to avoid altering factory configuration.</p> <p>0...Command Failed (CF): Used by the autopilot to report when a command has failed. Must be set to zero for all packets written to the UM7.</p> <p>The address byte specifies which register will be involved in the operation. During a read operation (Has Data = 0), the address specifies which register to read. During a write operation (Has Data = 1), the address specifies where to place the data contained in the data section of the packet. For a batch read/write operation, the address byte specifies the starting address of the operation. The "Data Bytes" section of the packet contains data to be written to one or more registers. There is no byte in the packet that explicitly states how many bytes are in this section because it is possible to determine the number of data bytes that should be in the packet by evaluating the PT byte. If the Has Data bit in the PT byte is cleared (Has Data = 0), then there are no data bytes in the packet and the Checksum immediately follows the address. If, on the other hand, the Has Data bit is set (Has Data = 1) then the number of bytes in the data section depends on the value of the Is Batch and Batch Length portions of the PT byte. For a batch operation (Is Batch = 1), the length of the packet data section is equal to 4*(Batch Length). Note that the batch length refers to the number of registers in the batch, NOT the number of bytes. Registers are 4 bytes long. For a non-batch operation (Is Batch = 0), the length of the data section is equal to 4 bytes (one register). The data section lengths and total packet lengths for different PT configurations are shown below. 
<p>The two checksum bytes consist of the unsigned 16-bit sum of all preceding bytes in the packet, including the packet header.</p> <p><strong>Read operations:</strong></p> <p><strong>To initiate a serial read of one or more registers aboard the sensor, a packet should be sent to the UM7 with the "Has Data" bit cleared. This tells the device that this will be a read operation from the address specified in the packet's "Address" byte. If the "Is Batch" bit is set, then the packet will trigger a batch read in which the "Address" byte specifies the address of the first register to be read. In response to a read packet, the UM7 will send a packet in which the "Has Data" bit is set, and the "Is Batch" and "Batch Length" bits are equivalent to those of the packet that triggered the read operation. The register data will be contained in the "Data Bytes" section of the packet.</strong></p> <p>Here is an example binary communication code:</p>

<pre><code>#include &lt;stdint.h&gt;

typedef struct
{
  uint8_t  Address;
  uint8_t  PT;
  uint16_t Checksum;
  uint8_t  data_length;
  uint8_t  data[64];   // large enough for a maximum-length batch of 16 registers (4 bytes each)
} UM7_packet;

// parse_serial_data: parses the data in 'rx_data' with length 'rx_length' and attempts to find a
// packet in it. If a packet is found, the structure 'packet' is filled with the packet data.
// Return values:
//   1 - not enough data for a full packet in the provided array
//   2 - enough data, but no packet header was found
//   3 - a packet header was found, but there was insufficient data to parse the whole packet
//       (this can happen if not all of the serial data has been received when the function is called)
//   4 - a packet was received, but the checksum was bad
//   0 - a good packet was received; the UM7_packet structure has been filled
uint8_t parse_serial_data( uint8_t* rx_data, uint8_t rx_length, UM7_packet* packet )
{
  uint8_t index;

  // Make sure that the data buffer provided is long enough to contain a full packet.
  // The minimum packet length is 7 bytes.
  if( rx_length &lt; 7 )
  {
    return 1;
  }

  // Try to find the 'snp' start sequence for the packet
  for( index = 0; index &lt; (rx_length - 2); index++ )
  {
    // Check for 'snp'. If found, immediately exit the loop
    if( rx_data[index] == 's' &amp;&amp; rx_data[index+1] == 'n' &amp;&amp; rx_data[index+2] == 'p' )
    {
      break;
    }
  }
  uint8_t packet_index = index;

  // If 'packet_index' equals (rx_length - 2), the loop above ran to completion
  // and never found a packet header.
  if( packet_index == (rx_length - 2) )
  {
    return 2;
  }

  // A packet header was found. Check whether there is enough room left in the buffer for at
  // least the smallest allowable packet (7 bytes). At this point 'packet_index' is the location
  // of the 's' character in the buffer (the first byte of the header).
  if( (rx_length - packet_index) &lt; 7 )
  {
    return 3;
  }

  // Pull out the packet type byte to determine the actual length of this packet
  uint8_t PT = rx_data[packet_index + 3];

  // Bit-level manipulation to determine whether the packet contains data and whether it is a
  // batch, because the individual bits in the PT byte specify the contents of the packet.
  uint8_t packet_has_data = (PT &gt;&gt; 7) &amp; 0x01; // Check bit 7 (HAS_DATA)
  uint8_t packet_is_batch = (PT &gt;&gt; 6) &amp; 0x01; // Check bit 6 (IS_BATCH)
  uint8_t batch_length    = (PT &gt;&gt; 2) &amp; 0x0F; // Extract the batch length (bits 2 through 5)

  // Now figure out the actual data length
  uint8_t data_length = 0;
  if( packet_has_data )
  {
    if( packet_is_batch )
    {
      // Packet has data and is a batch: it contains 'batch_length' registers of 4 bytes each
      data_length = 4*batch_length;
    }
    else
    {
      // Packet has data but is not a batch: it contains one register (4 bytes)
      data_length = 4;
    }
  }
  else
  {
    // Packet has no data
    data_length = 0;
  }

  // We now know exactly how long the packet is: 3 header bytes + PT + Address + data bytes +
  // 2 checksum bytes = data_length + 7. Check that the buffer holds the full packet.
  if( (rx_length - packet_index) &lt; (data_length + 7) )
  {
    return 3;
  }

  // A full packet is in the buffer. Pull out the data and verify the checksum.
  packet-&gt;Address = rx_data[packet_index + 4];
  packet-&gt;PT = PT;

  // Copy the data bytes and compute the checksum in one pass
  packet-&gt;data_length = data_length;
  uint16_t computed_checksum = 's' + 'n' + 'p' + packet-&gt;PT + packet-&gt;Address;
  for( index = 0; index &lt; data_length; index++ )
  {
    // Copy the data into the packet structure's data array
    packet-&gt;data[index] = rx_data[packet_index + 5 + index];
    // Add the new byte to the checksum
    computed_checksum += packet-&gt;data[index];
  }

  // Extract the received checksum from the packet and compare it with the computed one
  uint16_t received_checksum = (rx_data[packet_index + 5 + data_length] &lt;&lt; 8);
  received_checksum |= rx_data[packet_index + 6 + data_length];
  if( received_checksum != computed_checksum )
  {
    return 4;
  }

  // A full packet with a good checksum was received and copied into 'packet'
  return 0;
}
</code></pre>
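<p>For reference, here is a minimal sketch of how I think the Euler angles could be pulled out of a parsed packet. The register address (0x70 for DREG_EULER_PHI_THETA), the data layout, the byte order and the 1/91.02222 scale factor are assumptions taken from my reading of the UM7 datasheet, and the helper function name is my own, so please treat this only as a sketch:</p>

<pre><code>// Hypothetical helper (not from the datasheet): convert a parsed batch packet that starts at
// register 0x70 (DREG_EULER_PHI_THETA, assumed) into Euler angles in degrees.
// Assumptions: angles are 16-bit signed integers, sent big-endian, scaled by 1/91.02222 deg/LSB;
// register 0x70 holds roll and pitch, register 0x71 holds yaw in its upper 16 bits.
int get_euler_angles( const UM7_packet* packet, float* roll, float* pitch, float* yaw )
{
  // Expect a batch that starts at 0x70 and contains at least two registers (8 data bytes)
  if( packet-&gt;Address != 0x70 || packet-&gt;data_length &lt; 8 )
  {
    return -1;
  }
  int16_t raw_roll  = (int16_t)((packet-&gt;data[0] &lt;&lt; 8) | packet-&gt;data[1]);
  int16_t raw_pitch = (int16_t)((packet-&gt;data[2] &lt;&lt; 8) | packet-&gt;data[3]);
  int16_t raw_yaw   = (int16_t)((packet-&gt;data[4] &lt;&lt; 8) | packet-&gt;data[5]);

  *roll  = raw_roll  / 91.02222f;
  *pitch = raw_pitch / 91.02222f;
  *yaw   = raw_yaw   / 91.02222f;
  return 0;
}
</code></pre>

<p>If this is roughly right, I would then call these two functions from an S-function or a MATLAB Function block in Simulink, feeding them the bytes coming out of the Serial Receive block.</p>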
How can I get the values of an IMU from the serial message received in Simulink via UART?
<p>What you are describing is a slider-crank mechanism, as used with a <a href="https://en.wikipedia.org/wiki/Crankshaft" rel="nofollow noreferrer">crankshaft</a>.</p> <blockquote> <p>if I attach a sliding block to the end of my second arm, I don't know whether my equation for $x_2$ would change at all. </p> </blockquote> <p>No, it wouldn't, because your constraint:</p> <blockquote> <p>the second arm is attached to a block that slides <strong>along the x-axis</strong></p> </blockquote> <p>doesn't constrain $x_2$ but $y_2$.</p> <p>"Being on the x-axis" basically means $y = 0$, which in your case means $y_2 = 0$, and that is where you "lose" the second degree of freedom.</p> <p>Take a look at the following image:</p> <p><a href="https://de.wikipedia.org/wiki/Datei:Schubkurbel.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oqSUc.jpg" alt="crankshaft"></a></p> <p>The formula for the relation between $x$ and $\varphi$ in the image is:</p> <p>$$x=l_1\cos \varphi + l_2 \sqrt{1-\left(\frac{l_1}{l_2}\right)^2 \sin^2 \varphi}$$</p> <p>To get there:</p> <ol> <li>use the Pythagorean theorem for the second summand in the formula for $x_2$ instead of trigonometry</li> <li>replace $y_1$ with its definition $y_1 = l_1\sin\varphi$</li> </ol> <p>$$x_2 = x_1 + \sqrt{l_2^2 - \underbrace{y_1^2}_{(l_1\sin\varphi)^2}} = l_1\cos\varphi + l_2\sqrt{1-\left(\frac{l_1}{l_2}\right)^2 \sin^2\varphi}$$</p> <p>The relationship between the two angles can be derived from the side that their respective triangles share (the height $y_1$):</p> <p>$$l_1\sin a_1 = l_2\sin a_2$$ $$ a_1 = \arcsin \left(\frac{l_2}{l_1}\sin a_2\right)$$ </p>
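<p>If you want to convince yourself that the Pythagorean form and the factored form agree, here is a small numeric check (plain C, with arbitrary values for $l_1$, $l_2$ and $\varphi$, chosen so that $l_2$ is larger than $l_1$ and the square root stays real); the difference column should print as zero up to floating-point noise:</p>

<pre><code>#include &lt;math.h&gt;
#include &lt;stdio.h&gt;

int main(void)
{
    double l1 = 1.0, l2 = 2.5;   /* arbitrary link lengths, l2 larger than l1 */
    double phi;

    for( phi = 0.0; phi &lt; 6.2832; phi += 0.5 )
    {
        double y1  = l1 * sin(phi);
        /* Pythagorean form: x2 = x1 + sqrt(l2^2 - y1^2) */
        double x_a = l1 * cos(phi) + sqrt(l2*l2 - y1*y1);
        /* factored form from the formula above */
        double x_b = l1 * cos(phi) + l2 * sqrt(1.0 - (l1/l2)*(l1/l2) * sin(phi)*sin(phi));
        printf("phi = %4.1f   x = %8.5f   difference = %g\n", phi, x_a, x_a - x_b);
    }
    return 0;
}
</code></pre>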
7889
2015-08-18T13:08:52.783
|forward-kinematics|
<p>I was wondering whether maybe you could help me with this problem. I have a double pendulum. I have set the origin of the Cartesian coordinates to be the "head" of the first arm, which is fixed. The end of the second arm is attached to a block that slides along the x-axis. What I want to do is to derive the equations relating the pendulum's angles to the distance from the origin to the block. </p> <p>Now, I know how I could go about deriving the equations without the constraint: </p> <p>$$x_1 = L_1\cos(a_1)$$ $$y_1 = L_1\sin(a_1)$$</p> <p>where $x_1$ and $y_1$ are the coordinates of the point where the first arm joins the second arm, and $a_1$ is the angle between the horizontal and the first arm. </p> <p>Similarly, I can derive the equations for the end of the second arm: $x_2 = x_1 + L_2 \cos(a_2)$ and $y_2 = y_1 - L_2 \sin(a_2)$.</p> <p>Now then, if I attach a sliding block to the end of my second arm, I don't know whether my equation for $x_2$ would change at all. I don't think it would, but would I have to somehow restrict the swing angles so that the block only moves along the x-direction? </p> <p>So basically, the problem is finding the equation for $x_2$ if the end of the second arm is attached to a block that only moves along the x-direction. </p>
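<p>To make the setup concrete, here is a small numerical sketch of the unconstrained equations above (the arm lengths and angles are arbitrary values, just for illustration); with the sliding block attached, the angles would additionally have to be chosen so that $y_2 = 0$:</p>

<pre><code>#include &lt;math.h&gt;
#include &lt;stdio.h&gt;

int main(void)
{
    double L1 = 1.0, L2 = 1.5;   /* arbitrary arm lengths */
    double a1 = 0.6, a2 = 0.4;   /* arbitrary joint angles in radians */

    /* joint between the two arms */
    double x1 = L1 * cos(a1);
    double y1 = L1 * sin(a1);

    /* end of the second arm (unconstrained) */
    double x2 = x1 + L2 * cos(a2);
    double y2 = y1 - L2 * sin(a2);

    printf("joint: (%f, %f)   end: (%f, %f)\n", x1, y1, x2, y2);
    return 0;
}
</code></pre>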
Forward kinematics of constrained double pendulum