Columns: Answer (string lengths 44-28.2k), Id (1-6), CreationDate (23), Tags (4-94), Body (54-35.7k), Title (13-150)
<p>If you're working with the TK1 and a stereo camera, you might consider this:</p> <p><a href="https://devtalk.nvidia.com/default/topic/994170/jetson-tk1-and-stereo-vision/" rel="nofollow noreferrer">https://devtalk.nvidia.com/default/topic/994170/jetson-tk1-and-stereo-vision/</a></p> <p>These stereo cameras support ROS: <a href="http://rosindustrial.org/news/2016/1/13/3d-camera-survey" rel="nofollow noreferrer">http://rosindustrial.org/news/2016/1/13/3d-camera-survey</a></p>
6508
2015-02-03T18:03:17.887
|computer-vision|cameras|
<p>I would like to build a motorized robot with stereo vision that sends visual input to a PC over wifi for processing. A Raspberry Pi would have been perfect for the task if it were possible to connect 2 cameras. The Pi 2 would be powerful enough for two USB webcams (with a minimum of 8 fps) I guess, but the shared USB seems to be a bottleneck (2 cameras + wifi dongle).</p> <p>What other options are there to send input from 2 cameras and control a motor (or an Arduino)?</p>
Low power computer for stereo vision
<p>Because a quadcopter is an under-actuated system: you cannot directly control the linear velocity with only one PID. To move in 3D space you need to control the linear velocity.</p> <p>So the first loop uses the angular velocity (or the attitude itself) as the control input to control the linear velocity, while the second loop uses the torques or rpms of the rotors (the real commands) to achieve that angular velocity: a backstepping-like effect. </p> <p>Is this comment helpful?</p>
6510
2015-02-03T20:18:50.893
|quadcopter|pid|
<p>I am wondering what the use is of two PID loops to control a quadcopter. One PID for stability control and another PID for rate control.</p> <p>Why can't you just use one PID per axis to control a quadcopter where you take the current and desired angle as input and motor power as the output?</p>
Multiple PIDs in quadcopter
<p>First of all, industrial robots have drive trains with basically no play/backlash, plus the whole construction is very rigid. AFAIK they all use harmonic drive gearing, which is extremely expensive. Most DIY'ers are not willing to spend that kind of money, and that is one reason why their arms have jerky movements. </p> <p>With regards to Universal Robots: they use motors from Kollmorgen, with integrated harmonic drive gearing. </p> <p>In the Universal Robots FAQ (<a href="https://www.universal-robots.com/how-tos-and-faqs/faq/ur-faq/what-type-of-17271/" rel="nofollow noreferrer">link</a>) I found the following. </p> <ul> <li>Motors: Permanent Magnet AC Synchronous Drive motors are used in both CB3 and e-Series robots. </li> <li>Motor drivers: Integrated Motor Drives, Space Vector (Field Oriented) Control @ 25 kHz (<a href="https://en.wikipedia.org/wiki/Vector_control_(motor)" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Vector_control_(motor)</a>)</li> <li>Servo controller: Integrated Servo Controllers in each joint, based on a dual encoder system</li> </ul>
6511
2015-02-03T21:26:42.290
|robotic-arm|otherservos|
<p>I've noticed that industrial robot arms have very smooth, fast, and strong movement. Does anyone know what type of servos they use? I'm trying to build my own and don't want the jerky movement that is seen in a lot of DIY robot arms. Thanks. </p>
What type of servos are used in industrial robot arms like Universal Robot UR5?
<p>Honestly, I don't think you should use the cameras just because you have them. The ultrasonic sensor is a way simpler and more effective solution. There are ways to tell depth with one camera, but they are far from effective in most cases. If you want to try it, I would say play around with it when you don't have a deadline, if you're interested in the learning experience. As far as whether you could get it working in 4 months or not, it depends on how familiar you are with this kind of application.</p> <p>If you really want to use the cameras, it might be easier to get another camera and mount it so that it's facing forward somehow, either on top of the car or somewhere else. This might be more effort than it's worth, though. I would go this route if you really want to use cameras for personal reasons.</p>
6515
2015-02-04T10:27:08.773
|sensors|computer-vision|ultrasonic-sensors|
<p>I am going to start a new project consisting of implementing an autonomous driving RC car. The car, as it is now, has a camera installed on each side of the car, i.e. 4 cameras in total. They are connected to a board which is able to read and process the video input.</p> <p>I have been researching obstacle detection using a single camera (without stereo cameras, e.g. <a href="https://robotics.stackexchange.com/questions/1348/single-camera-vision-and-mapping-system">Single camera vision and mapping system</a> ) and although it seems possible it also seems quite complex. Modifying the camera set-up is not an option. I already have some video processing algorithms, like dense optical flow, which might help me, but I am not sure whether I will be able to implement the system in the time I have (4 months). I also don't know how reliable the final solution would be.</p> <p>If the first approach is not feasible, as an alternative option I could also install distance sensors in the car to detect obstacles. It seems that usually the most preferred choice is to use ultrasonic sensors. I would need to install them and I would not take advantage of the cameras, but it seems that the final complexity would be lower.</p> <p>Is the first approach feasible? What are the pros and cons of each approach? If I implemented the second option, how many sensors would I need?</p>
Computer vision with single camera vs. distance sensors for obstacle detection
<p>One possible option would be a linear solenoid. I would get a push type (push solenoids push a rod out when energized, and the built-in spring automatically retracts it when the power is removed).</p>
6516
2015-02-04T12:25:44.397
|motor|
<p>I have a project where I need a motor that can turn some number of rotations which will spool up a cable attached to a spring-closed device to open it up. When power is disconnected, the spring closure will cause the spool to unwind and the device to close.</p> <p>In the closed position, no power is available. (i.e. The closure mechanism needs to be 100% passive.)</p> <p>In order to keep this open for some time, I will need a motor that is capable of being stalled for long periods without having the windings burn up. I know some motors can do this, such as the motors they use on spring closed HVAC dampers, but I don't know how to find them or if there's a particular name I should be using to find them. I know I could probably do this with a stepper motor, but that seems overkill for the application.</p> <p>The only requirements are higher torque to open this mechanism, no gearing that prevents the motor from spinning when power is disconnected, and the ability to be stalled.</p>
Stallable motor - 100% duty cycle higher torque motor that can be stalled without burning up
<p>The most common way to address this is to actually change the setpoint sent to the controller over time. For example, to get from point A to point B in T seconds, you would do $$ \begin{align} x^d(t) &amp;= \frac{t}{T} p_B + \frac{T-t}{T} p_A \end{align} $$ where $x^d$ is the desired position sent to the controller, $p_A$ and $p_B$ are the two setpoints, and $t$ is time (assume $t=0$ at the start of the maneuver). For best control it is also good to define a desired velocity: $$ \begin{align} v^d &amp;= \frac{||p_B - p_A||}{T} \end{align} $$</p> <p>This process in general is called trajectory generation.</p>
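<p>As a small illustration of the idea, here is a minimal Python sketch of such a trajectory generator. The loop rate, the clamping at the end of the segment and the <code>send_setpoint</code> call are assumptions for illustration, not part of the answer above:</p> <pre><code>import time
import numpy as np

def interpolate_setpoint(p_a, p_b, T, t):
    """Linearly interpolate the desired position between p_a and p_b over T seconds."""
    s = min(max(t / T, 0.0), 1.0)           # clamp to [0, 1] so we stop at p_b
    x_d = (1.0 - s) * p_a + s * p_b         # desired position at time t
    v_d = (p_b - p_a) / T if t &lt; T else np.zeros_like(p_a)  # desired velocity
    return x_d, v_d

# usage sketch: stream time-varying setpoints to an existing position controller
p_a = np.array([0.0, 0.0, 1.0])   # hypothetical start point [m]
p_b = np.array([2.0, 1.0, 1.5])   # hypothetical goal point [m]
T = 5.0                           # duration of the segment [s]
t0 = time.time()
while time.time() - t0 &lt; T:
    x_d, v_d = interpolate_setpoint(p_a, p_b, T, time.time() - t0)
    # send_setpoint(x_d, v_d)     # placeholder for your controller interface
    time.sleep(0.005)             # ~200 Hz update rate (assumed)
</code></pre>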
6522
2015-02-04T19:31:18.033
|quadcopter|pid|
<p>We implemented a PID controller for our quadcopter which allows us to fly from point A to B. The precise position of the quadcopter is measured using an external tracking system.</p> <p>Now that we can fly from A to B, we would like to implement a controller to fly more complex trajectories with multiple set points. E.g. from A to B to C, or flying in a circle using sample points.</p> <p>We tried to use our regular PID controller for this, but it of course doesn't work well, since the PID controller forces the quadcopter to stabilize at any set point. We would like to have a controller that allows the quadcopter to fly a trajectory fairly smoothly. I think this has to be a controller that takes into account multiple set points in the trajectory at the same time so that it can already slow down/speed up based on the trajectory that is ahead.</p> <p>Can someone point me to some controllers / algorithms that I can look at to realize this? Do I need a completely different controller to do this or will it be an adapted version of the PID controller that I have now?</p>
PID controller for trajectory with multiple setpoints
<p>I don't think it would make a difference what the orientation is. I would recommend maybe using hall effect sensors instead of a 3 axis magnetometer if ALL else fails. That would be a last resort though because they are probably not going to give you the kind of accuracy and precision you are looking for. </p>
6526
2015-02-04T22:44:40.590
|imu|magnetometer|
<p>Apologies if this is a stupid question, but if I have a 3-axis magnetometer, and I calculate the vector magnitude as</p> <pre><code>sqrt(magX * magX + magY * magY + magZ * magZ) </code></pre> <p>...then should I not always get the same value, regardless of the sensor's orientation? Mine is all over the place, and I feel as though I'm missing something obvious.</p>
3-axis Magnetometer Question
<p>A bearing should be used; supporting the load on a bearing rather than on the servo shaft is a basic principle of mechanical engineering, but many times it is not done. It is often skipped as a cost-benefit decision, since hobby servos are cheap.</p> <p>Servos themselves may have bearings and even metal gears. Brackets are available that have opposing bearing blocks. You have to decide whether it is worth the cost or is a mechanical necessity.</p> <p><img src="https://i.stack.imgur.com/Qdp9N.jpg" alt="enter image description here"></p>
6531
2015-02-05T14:12:21.383
|rcservo|
<p>I am designing a pan-tilt camera mount using standard hobby servos. Many existing designs use the servo shaft as a revolute joint, as opposed to simply a torque producing element. As a revolute joint the servo mechanism is subject to different torques and forces. Is using a servo shaft as a revolute joint recommended practice or should a bearing be used?</p>
Using hobby servo as axle
<p>This answer is specifically meant to help the OP. Assuming you are from a developing country like India, I can say that the current state of robotics is bleak (apart from some nice work in cities like Bangalore and other metropolitan cities, such as Team Indus trying to launch a rover on the moon, Team Dhanvantari building IoT devices, etc.).</p> <p>So, I would suggest trying to get some hands-on experience during your free time. (Say, trying to control a remote robotic arm just by clicking something in your browser, developing the code from scratch, and trying more advanced things like an RTOS for the arm while learning to build a nice web interface at the same time; personally I wish I had learnt RTOS and web development during my college days.) That way, even if robotics/automation doesn't work out, there will still be some web development as a by-product 'scope'.</p> <p>P.S.: I will edit this and add more info if needed.</p> <p>Reference: Google and <a href="https://www.quora.com/What-is-the-future-scope-of-robotics-in-India" rel="nofollow noreferrer">https://www.quora.com/What-is-the-future-scope-of-robotics-in-India</a></p>
6532
2015-02-05T14:27:03.177
|control|electronics|embedded-systems|
<p>I am a beginner in robotics. I took admission to Electronics Engineering one year back, as we don't have a dedicated Robotics Engineering branch in my country. Now I am struggling with questions like: what is the scope of Electronics (<strong>not Electrical</strong>) Engineering in robotics/automation? I am <strong>unable to distinguish between the role of an electronics engineer and a computer engineer</strong> in robotics, <strong>as in both cases programming is required</strong>.</p> <p><strong>Also, if I don't like to do programming (coding), are there any other options to stay in the robotics/automation field as far as my branch (Electronics Engineering) is concerned?</strong></p>
What is the scope of Electronics Engineering in Robotics/Automation?
<p>Any stepper motor can be microstepped. If you microstep, you can theoretically go as crazy as you want, but as you increase the number of microsteps, although you increase your resolution, you decrease the accuracy of the step. </p> <p><a href="http://www.linengineering.com/contents/stepmotors/pdf/Product_Guides/Lin_RG_StepMotorBasics.pdf" rel="nofollow">http://www.linengineering.com/contents/stepmotors/pdf/Product_Guides/Lin_RG_StepMotorBasics.pdf</a></p> <p><a href="http://www.micromo.com/microstepping-myths-and-realities" rel="nofollow">http://www.micromo.com/microstepping-myths-and-realities</a></p> <p>Edit:</p> <p>In regards to the original question, the smallest angle you can rotate is 8.57 degrees with a 14 pole 12 stator configuration. You can find a detailed explanation of this system here: <a href="http://www.aerodesign.de/peter/2001/LRK350/index_eng.html" rel="nofollow">http://www.aerodesign.de/peter/2001/LRK350/index_eng.html</a> by clicking on the link for "Why does the Torquemax rotate so slowly and so forcefully" and scrolling down.</p>
6544
2015-02-06T14:52:52.943
|motor|brushless-motor|
<p>For example, I have a brush-less outrunner with 14 poles and 12 stator windings. It has three connectors. Can this motor be controlled in a way that it performs a single step of 30 degrees (360/12)?</p>
What's the smallest rotation a brush-less motor can perform?
<p>You should feed back on position and heading using the information from the overhead webcam. </p> <p>So you have a list of cells that make up the path you would like to travel, in the order that you would like them visited. Give each cell a property 'traveled' with an initial value of false. Give the robot some slow constant speed with a desired heading that points it to the next untraveled cell. Once the robot gets within some distance of the cell, mark the cell as traveled and move on to the next one. As long as the robot is moving slowly enough compared to the rate you are sending commands, this should get you through the maze (see the sketch below).</p> <p>Good luck!</p>
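<p>A minimal sketch of that loop is below; <code>get_pose()</code> and <code>send_cmd()</code> are hypothetical stand-ins for the overhead-camera pose estimate and the Zigbee command link, and the capture radius and speed values are placeholders:</p> <pre><code>import math

def follow_path(cells, get_pose, send_cmd, capture_radius=0.05, speed=0.1):
    """Drive through a list of (x, y) cells in order, marking each one traveled.

    get_pose() is assumed to return (x, y, heading) from the overhead camera;
    send_cmd(speed, heading) is assumed to transmit the command to the robot.
    """
    traveled = [False] * len(cells)
    for i, (cx, cy) in enumerate(cells):
        while not traveled[i]:
            x, y, _ = get_pose()
            dx, dy = cx - x, cy - y
            if math.hypot(dx, dy) &lt; capture_radius:
                traveled[i] = True                # close enough: mark cell as visited
            else:
                desired_heading = math.atan2(dy, dx)
                send_cmd(speed, desired_heading)  # slow constant speed toward the cell
    send_cmd(0.0, 0.0)                            # stop at the end of the path
</code></pre>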
6566
2015-02-09T13:56:32.053
|computer-vision|navigation|motion-planning|
<p>In my project, I've successfully analyzed the arena and have detected the obstacles using an overhead webcam. I have also computed the shortest path. The path data is transmitted to the robot via Zigbee, based on which it moves to its destination.</p> <p>The problem is: my robot is not taking accurate turns, which causes error in the rest of the path it follows.</p> <p>Could anyone please suggest any methods/techniques for feedback from the robot so that the path is corrected and the robot follows the original computed path without any deviation? (Basically a tracking mechanism to avoid deviation from the original computed path.)</p>
Robot Navigation Feedback using Image Processing
<p>If you have paid online or DVD access to Oxford English Dictionary, you can look up the entry for <em>teached</em>. Here is its entry from OED1 (1919 vintage, available free online):</p> <blockquote> <p>Teached ... <em>Obs.</em> or <em>dial.</em> = TAUGHT. 1639 LO. DIGAY, etc. <em>Lett. conc. Relig.</em> (1651) 96 By the frequent misapprehension of the teached,..either let slip or supplanted. 1644 G. PLATTES in Hartlib's Legacy (1655) 176 The Teachers and the Teached were nothing else but the blind leading of the blind.</p> </blockquote> <p>That is, OED (which typically is regarded as a definitive reference) in its first release referred to <em>teached</em> as obsolete or dialectal, and labels it as meaning <em>taught</em>.</p> <p><em><a href="https://books.google.com/books?id=mVcJqKs1isUC&amp;pg=PA798&amp;lpg=PA798&amp;dq=teached+or+taught&amp;source=bl&amp;ots=zuw_rJNhvz&amp;sig=T1nJwq74BT9upWTw9JMDTs7yv6A&amp;hl=en&amp;sa=X&amp;ei=UoTbVJijEoG1ogTajoCQCQ&amp;ved=0CEMQ6AEwBTgK#v=onepage&amp;q=teached%20or%20taught&amp;f=false" rel="nofollow">Garner's Modern American Usage</a></em> (2009, Oxford University Press) by Bryan Garner in its <em><strong>Teach</strong> > taught > taught</em> entry states</p> <blockquote> <p><code>*</code><em>Teached</em> is a form that isn't taught anywhere and is no part of STANDARD ENGLISH. But it sometimes appears –</p> </blockquote> <p>and offers some examples of it appearing by mistake in American newspaper articles.</p> <p>The <a href="http://en.wiktionary.org/wiki/teached" rel="nofollow">en.wiktionary</a> entry for <em>teached</em> labels it as “nonstandard, colloquial, dialectal” and offers some dialectal examples.</p> <p>In short, use <em>taught</em>, not <em>teached</em>. But note that rather than referring to a robot “replaying a collection of <em>taught</em> poses”, more often one refers to it “replaying a collection of <em>learned</em> poses”, making the viewpoint that of the taught instead of the teacher.</p>
6576
2015-02-11T13:41:23.267
|reference-request|
<p>As a non-native speaker I have a (maybe trivial, but to me not clear) question concerning the verb 'to teach'. Of course, from school (and online dictionaries) I know the past tense of 'teach' is 'taught', not 'teached'. But in robotics 'to teach' has a special meaning (like: 'to make special points/orientations known to the (arm-)robot', e.g. by guiding the robot to those points/orientations). </p> <p>Does it make sense to have a different past tense for 'teach' (i.e. 'teached') in this case? Maybe a reference where it is used/explained?</p> <p>(I would say 'No. The past of teach is taught, and that's it.', but some of my colleagues - also not native speakers - have a different opinion.)</p>
Robotics jargon question: How to conjugate 'teach'?
<blockquote> <p>The Kinect is an active 3D depth estimation setup that employs IR laser structured patterns for depth calculation and is reliable for indoor applications only.</p> </blockquote> <p>Whereas,</p> <blockquote> <p>A stereo camera setup is a passive 3D depth estimation setup that typically involves a stereo calibration procedure in order to compute the projection matrix that will transform a 2D point (generally observed in the left camera) into a 3D point in the left camera coordinate system (assuming the left camera is at the origin [0,0,0]). These stereo camera setups perform way better than the Kinect for outdoor applications.</p> </blockquote>
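<p>For completeness, a minimal OpenCV sketch of the passive stereo pipeline described above, assuming an already calibrated and rectified image pair; the block-matching settings, focal length and baseline are placeholders you would take from your own calibration:</p> <pre><code>import cv2
import numpy as np

# assumes the pair is already rectified using the stereo calibration mentioned above
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # file names are placeholders
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)       # tuning values are placeholders
disparity = stereo.compute(left, right).astype(np.float32) / 16.0   # StereoBM returns fixed-point x16

fx = 700.0        # focal length in pixels (from calibration, placeholder)
baseline = 0.12   # camera separation in metres (placeholder)
valid = disparity &gt; 0
depth = np.zeros_like(disparity)
depth[valid] = fx * baseline / disparity[valid]   # Z = f * B / d for a rectified pair
</code></pre>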
6581
2015-02-12T08:18:32.867
|sensors|robotic-arm|kinect|stereo-vision|
<p>Can anyone advise me on the ideal perception sensors for a pick and place application using a robotic manipulator with ROS support? I have generally looked at things like the Kinect and stereo cameras (Bumblebee2), which provide depth that can be used with PCL for object recognition and gripper positioning. Are there any other sensors that would be preferred for such an application, and if not, what are the drawbacks of stereo cameras in comparison to the Kinect or other sensing capabilities? </p> <p>Thanks,</p> <p>Alan</p>
Stereo camera Vs Kinect
<p>You are confused by the "channels" term here. The resource you are linking to is talking about "conventional" remote control, using it's own frequency and it's own protocol, not compatible with WiFi in any way. In this case the term "channel" means just a specific radio frequency the control is operating. When talking about WiFi, the channels are frequencies as well, and their number is varying based on country regulations. But in this case, the channel is MOSTLY selected automatically by the wifi adapter, or manually based on how badly the channels are cluttered by surrounding communications. So in general, if using wifi you do not have to worry about number of channels available, because at any point of time you are using just one. <br> As for GPS - the location is read into your hardware, and depending on what do you intend to use it for, can be processed on the robot itself or sent via your selected communication method.</p>
6583
2015-02-12T15:44:09.123
|quadcopter|navigation|radio-control|wifi|
<p>We are building a hobby drone (quadcopter) with a camera for footage. To control the quad, I have been advised (<a href="http://www.rchelicopterfun.com/helicopter-radios.html" rel="nofollow">Web</a>, <a href="http://blog.oscarliang.net/choose-rc-transmitter-quadcopter/" rel="nofollow">Here</a>) to use a minimum of four channels: for power, for turning, etc. This means I need one channel for every separate task. For example, if I want to rotate the camera, then I suppose I need a 5th channel, and so on.</p> <p>Now my question: I have seen a lot of drones (ARDrone, Walkera) which are controlled by an Android or an iPhone app. So is the wifi used to connect to the drones single channel or multi-channel? If single channel, then how can I control different tasks, like turning the quad or moving the camera about different axes?</p> <p>Also, if I want the GPS location from the quad, do I require another transmitter?</p> <p>I am planning to use a Raspberry Pi 2 and an OpenPilot CC3D for flight control.</p> <p>P.S. This is my first drone, kindly show some mercy if I ask about / don't understand your comments.</p>
Is our laptop wifi single channel or multiple channel? This is for controlling a bot
<p>There are actually two sources of yaw torque in a quadrotor and a multirotor in general.</p> <p>The first is the torque from an imbalance between the torque generated by the CW spinning rotors and that generated by the CCW spinning rotors. This is entirely a function of friction in the motor bearings and aerodynamic drag.</p> <p>The second is torque arising from the conservation of angular momentum when the rotor speeds are changed, similar to how a reaction wheel works (see the Cubli). This effect is present in a vacuum, so it does not rely on aerodynamic forces.</p> <p>The first effect causes angular acceleration of the vehicle proportional to the difference in rotor speeds between the CW and CCW rotors. The second effect causes angular acceleration of the vehicle proportional to the difference in the derivative of the rotor speeds between the CW and CCW rotors.</p> <p>See <a href="https://aapt.scitation.org/doi/pdf/10.1119/1.5033880" rel="nofollow noreferrer">https://aapt.scitation.org/doi/pdf/10.1119/1.5033880</a> for a short experimental study of these two effects.</p>
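<p>As a hedged summary of the two effects in equation form (a lumped-parameter sketch, not taken from the cited study): here $\tau_d(\omega)$ is the drag/friction torque of a single rotor, often modelled as $k_\tau\omega^2$, $I_r$ is the rotor inertia, and the overall sign depends on which spin direction you define as positive.</p> <p>$$\tau_{\text{yaw}}\;\approx\;\underbrace{\sum_{i\in CW}\tau_d(\omega_i)-\sum_{j\in CCW}\tau_d(\omega_j)}_{\text{drag/friction imbalance}}\;+\;\underbrace{I_r\Big(\sum_{i\in CW}\dot{\omega}_i-\sum_{j\in CCW}\dot{\omega}_j\Big)}_{\text{reaction from changing rotor momentum}}$$</p>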
6590
2015-02-13T03:28:43.043
|quadcopter|dynamics|
<p>In this picture, a sketch of a quadcopter is displayed with rotor's direction of motion. The magnitude of the rotational velocity is depicted by the thickness of the lines (thicker lines are higher velocity, thinner lines are lower velocity).</p> <p><img src="https://i.stack.imgur.com/WomRo.png" alt="enter image description here"></p> <p>I'm told this is the correct way to produce turning motion, but my intuition (which is usually wrong) tells me that the two pictures should be reversed. My argument is as follows: For the picture on the left, the two rotors of higher velocity are spinning clockwise. If the motion of the rotors of greater velocity are clockwise, shouldn't the quadcopter also rotate clockwise? What am I missing here?</p>
How do quadcopters turn left and right?
<p>What I think you are asking:</p> <p>Given a map of the environment (e.g., the positions of landmarks), and a sensor that can measure the location of these landmarks, is odometry required when the position of the robot can be uniquely determined from the landmark measurements?</p> <p>My answer:</p> <p>In short, yes. If every measurement can be uniquely associated with a large enough set of landmarks, you can get a unique estimate of your position. Note I said "estimate" of your position. Of course there will still be uncertainty in your estimate because your measurement of the landmarks will not be perfect. However, here are a couple of reasons having odometry in this situation would be helpful anyway:</p> <ul> <li><p>As an initial guess. A lot of data association algorithms work much better if you have a good initial guess. Having odometry here helps increase the likelihood of a good match. It helps you make the correct choice between two possible matches, for example.</p></li> <li><p>To help deal with infrequent or highly uncertain landmark measurements. There are often situations where your landmark measurements won't be very good for stretches of time. Maybe you briefly were in an unmapped or poorly mapped area. Maybe the environment has changed. Maybe something is interfering with how your sensor is working (e.g., different lighting). If you are not constantly getting good updates based on measurements of the environment, it could lead to unrecoverable errors. Odometry gives you a (dead-reckoning) estimate during this time, making recovery easier when you start to get good landmark measurements again.</p></li> </ul> <p>In short, you need to ask yourself: Is the map very good? Does it fully cover the operational area (i.e., is it possible for the robot to take measurements of things not in the map)? Is it possible the map has changed? Does the sensor work well in different conditions? Is the data association algorithm reliable for different amounts of noise? Will the updates provided by the sensor be frequent enough based on how fast the robot is travelling? Etc.</p>
6597
2015-02-14T07:41:32.773
|kalman-filter|
<p>I have recently been studying Kalman filters. I was wondering that if sensor model of a robot gives a <strong>unimodal Gaussian</strong> ( as is assumed for LKF) and the environment is pre-mapped, then the sensor reading can be completely trusted( ie. max value of Kalman gain), removing the need for odometry for localization or target tracking purposes and hence the need for the Kalman filter. Please Clarify.</p>
Need of Kalman filters in unimodal measurement model
<p>Why not write a subscription-based node? It could take messages from any publisher and then pass the information on to the camera. From what you've said, it seems like a service wouldn't be necessary (see the sketch below). </p> <p>In response to the comment: yes. Create a new message that has all the necessary fields and a new topic, maybe /camera_parameters. </p>
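<p>A minimal sketch of such a node is below; the topic name, the use of <code>std_msgs/String</code> and the serial port settings are assumptions for illustration, not something prescribed by ROS or by your camera:</p> <pre><code>#!/usr/bin/env python
import rospy
import serial
from std_msgs.msg import String  # a custom message with explicit fields would be better

# hypothetical serial link to the camera pedestal; port/baud are placeholders
ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=0.1)

def params_callback(msg):
    # forward whatever arrives on the topic to the camera as a serial command
    ser.write((msg.data + "\r\n").encode())

def main():
    rospy.init_node("camera_serial_bridge")
    rospy.Subscriber("/camera_parameters", String, params_callback)
    rospy.spin()

if __name__ == "__main__":
    main()
</code></pre>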
6607
2015-02-16T17:52:54.913
|ros|cameras|serial|
<p>I have been asked to write code to implement serial communications with a camera in order to control its pedestal (movable base) as well as set a few dozen other camera options. The catch is that I have to make it usable by ROS. </p> <p>What would be the best practice to implement this functionality in ROS? I understand the concept of services, but I think that there should be a better way than creating a different service/file for each option.</p> <p>Thanks, Daniel.</p>
Best practice to write a ROS service for a serial-communication class with many options
<blockquote> <p>Is having one motor per degree of freedom the only possibility?</p> </blockquote> <p>There are many ways to use each motor for multiple purposes. My <a href="https://robotics.stackexchange.com/a/1096/478">answer to a previous question</a> has photos of gear trains that perform one function when a motor rotates one way and another when it rotates the other. You can also use clutches and/or shifters to control what function a motor drives. It may be possible to use one drive motor and one selector motor for any operation that is done for only one leg at once. </p> <p>However, for a one-off construction, unless you have machine-shop skills and mechanical-design ability, it seems likely that using many copies of a cheap component (RC servos) will cost you less and be more reliable than trying to use motors for multiple purposes. The necessary mechanical parts (eg solenoids, cams, gear and belt systems) are likely to be low-volume and expensive by comparison, and might save weight but not save money.</p>
6613
2015-02-17T12:25:39.030
|mobile-robot|servomotor|
<p>Note: I'm a firmware developer experienced with sensors and networks, but not much with motors.</p> <p>I am trying to build a small hobby robot, like a cat-sized spider. I am thinking of using servo motors with position control, so I don't have to use encoders to know where the motor is. Assuming six legs (I know, spiders have eight), with each leg being able to move up-down and left-right, that already translates to 12 motors. If you want to bend a knee, that gets the number to 18.</p> <p>18 motors on such a small robot is overkill, isn't it?</p> <p>I have thought of a couple of ideas, but not having a strong mechanical background, I cannot tell whether they are doable/sane.</p> <p>One of my ideas is to use a magnet on the end of the limb (the end inside the chassis) and a small permanent magnet above it. The magnets attract each other and this keeps the limb firm under the weight of the robot. A stronger controllable magnet (a coil) would attract the limb even more and let it lift in the air. The following drawing may help:</p> <p><img src="https://i.stack.imgur.com/XFdlT.png" alt="magnet to control limb motion"></p> <p>This would allow the up-down movement of the leg, and a servo would control its left-right movement. However, I fear that such a system would not be strong enough to hold under the weight of the robot, or whether a reasonable coil would be compact enough.</p> <p>In short, my question is, how can I control six legs each with two or three degrees of freedom with a reasonable number of motors? Is having one motor per degree of freedom the only possibility?</p>
Multiple limbs on small robots
<p><a href="https://robotics.stackexchange.com/a/6636">Ugo's answer</a> refers to <a href="http://www.springer.com/de/book/9781846286414" rel="nofollow noreferrer">"Sciavicco-Siciliano"</a> which is a good book I'll quote as well.</p> <p>Chapter 3.6 introduces the so-called analytical Jacobian which is not the same as the so called geometrical Jacobian as it shows up in: $ \omega=J_{geom} \cdot \dot{q}, $ but has to be obtained from $J_{geom}$ with the help of the transformation matrix $T_A$:</p> <p>$ J_{analytical}=T_A \cdot J_{geom} $</p> <p>or even more informative:</p> <p>$ \dot{\phi}=J_{analytical} \cdot \dot{q} $</p> <p>I allow myself one quotation from this chapter:</p> <blockquote> <p>From a physical viewpoint, the meaning of $\omega_e$ is more intuitive than that of $\dot{\phi}_e$. The three components of ωe represent the components of angular velocity with respect to the base frame. Instead, the three elements of $\dot{\phi}_e$ represent nonorthogonal components of angular velocity defined with respect to the axes of a frame that varies as the end-effector orientation varies. On the other hand, while the integral of $\dot{\phi}_e$ over time gives $\phi_e$, the integral of $\omega_e$ does not admit a clear physical interpretation...</p> </blockquote> <p>So the good news is that we can work with $\phi_e$ -- orientation of the end effector expressed in Euler angles, the bad news is that the transformation matrix $T_A$ introduces singularities which makes the authors of <a href="http://www.springer.com/de/book/9781846286414" rel="nofollow noreferrer">2</a> abandon the concept of analytical Jacobian.</p> <p>But here comes OpenRave:</p> <pre><code>manip=robot.GetManipulators()[0] # we calculate the rotational Jacobian in quaternion space quatJ = manip.CalculateRotationJacobian() quatManip0 = manip.GetTransformPose()[0:4] # order of multiplication may be wrong in the next line quatManipTarget = qmult(quatManip0, deltaQuat) # wSol=np.linalg.lstsq(quatJ, quatManipTarget) </code></pre> <p><s>So by using the analytical Jacobian in quaternion space you avoid the singularities of the transformation matrix $T_A$. Your Jacobian can still have numeric instability, but this is something different. </s></p> <p><strong>UPDATE</strong>: I thought again about the singularities of $T_A$ and possible numeric instabilities of the analytical Jacobian in quaternion space. Indeed my previous claim about the singularities in $\mathbb{R}^3$ being not the same as numeric instabilities in quaternion space is too "strong". </p>
6617
2015-02-17T22:11:10.117
|kinematics|jacobian|
<p>I would like to control my 7 DOF robot arm to move along a Cartesian trajectory in the world frame. I can do this just fine for translation, but I am struggling on how to implement something similar for rotation. So far, all my attempts seem to go unstable. </p> <p>The trajectory is described as a translational and rotational velocity, plus a distance and/or timeout stopping criteria. Basically, I want the end-effector to move a short distance relative to its current location. Because of numerical errors, controller errors, compliance, etc, the arm won't be exactly where you wanted it from the previous iteration. So I don't simply do $J^{-1}v_e$. Instead, I store the pose of the end-effector at the start, then at every iteration I compute where the end-effector <em>should</em> be at the current time, take the difference between that and the current location, then feed that into the Jacobian.</p> <p>I'll first describe my translation implementation. Here is some pseudo OpenRave Python:</p> <pre><code># velocity_transform specified in m/s as relative motion def move(velocity_transform): t_start = time.time() pose_start = effector.GetTransform() while True: t_now = time.time() t_elapsed = t_now - t_start pose_current = effector.GetTransform() translation_target = pose_start[:3,3] + velocity_transform[:3,3] * t_elapsed v_trans = translation_target - pose_current[:3,3] vels = J_plus.dot(v_trans) # some math simplified here </code></pre> <p>The rotation is a little trickier. To determine the desired rotation at the current time, i use Spherical Linear Interpolation (SLERP). OpenRave provides a quatSlerp() function which I use. (It requires conversion into quaternions, but it seems to work). Then I calculate the relative rotation between the current pose and the target rotation. Finally, I convert to Euler angles which is what I must pass into my AngularVelocityJacobian. Here is the pseudo code for it. These lines are inside the while loop:</p> <pre><code>rot_t1 = np.dot(pose_start[:3,:3], velocity_transform[:3,:3]) # desired rotation of end-effector 1 second from start quat_start = quatFromRotationMatrix(pose_start) # start pose as quaternion quat_t1 = quatFromRotationMatrix(rot_t1) # rot_t1 as quaternion # use SLERP to compute proper rotation at this time quat_target = quatSlerp(quat_start, quat_t1, t_elapsed) # world_to_target rot_target = rotationMatrixFromQuat(quat_target) # world_to_target v_rot = np.dot(np.linalg.inv(pose_current[:3,:3]), rot_target) # current_to_target v_euler = eulerFromRotationMatrix(v_rot) # get rotation about world axes </code></pre> <p>Then v_euler is fed into the Jacobian along with v_trans. I am pretty sure my Jacobian code is fine. Because i have given it (constant) rotational velocities ok. </p> <p>Note, I am not asking you to debug my code. I only posted code because I figured it would be more clear than converting this all to math. I am more interested in why this might go unstable. Specifically, is the math wrong? And if this is completely off base, please let me know. I'm sure people must go about this somehow. </p> <p>So far, I have been giving it a slow linear velocity (0.01 m/s), and zero target rotational velocity. The arm is in a good spot in the workspace and can easily achieve the desired motion. The code runs at 200Hz, which should be sufficiently fast enough. </p> <p>I can hard-code the angular velocity fed into the Jacobian instead of using the computed <code>v_euler</code> and there is no instability. So there is something wrong in my math. 
This works for both zero and non-zero target angular velocities. Interestingly, when I feed it an angular velocity of 0.01 rad/sec, the end-effector rotates at a rate of 90 deg/sec.</p> <p><strong>Update:</strong> If I put the end-effector at a different place in the workspace so that its axes are aligned with the world axes, then everything seems to work fine. If the end-effector is 45 degrees off from the world axes, then some motions seem to work, while others don't move exactly as they should, although I don't think I've seen it go unstable. At 90 degrees or more off from the world axes, it goes unstable. </p>
Jacobian-based trajectory following
<p>What you are referring to is plotting the estimate with the uncertainty bounds - in particular the $3\sigma$ ($\pm3$ standard deviations) bounds which corresponds to 99.7% probability that the true state is within this region. The uncertainty bounds can be extracted from the state covariance matrix. I think what you are plotting is the residuals of some observation VS expected observation? In this case I think it is also applicable to use the covariance matrix of the residual observation error.</p> <p>For how to extract the standard deviation from the covariance matrix see: <a href="https://stats.stackexchange.com/questions/50830/can-i-convert-a-covariance-matrix-into-uncertainties-for-variables">https://stats.stackexchange.com/questions/50830/can-i-convert-a-covariance-matrix-into-uncertainties-for-variables</a></p>
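<p>As a minimal sketch of how that looks in practice (the variable names are assumptions: <code>P_hist</code> is the stored sequence of state covariance matrices and <code>err</code> the estimation error of the state component of interest):</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt

def plot_error_with_bounds(err, P_hist, i):
    """Plot the error of state i together with its +/- 3 sigma envelope."""
    sigma = np.array([np.sqrt(P[i, i]) for P in P_hist])  # std dev of state i at each step
    t = np.arange(len(err))
    plt.plot(t, err, "b", label="error")
    plt.plot(t, 3 * sigma, "r--", label="+3 sigma")
    plt.plot(t, -3 * sigma, "r--", label="-3 sigma")
    plt.legend()
    plt.show()
</code></pre>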
6639
2015-02-20T23:08:47.760
|slam|ekf|simulation|mapping|
<p>I have implemented 2D SLAM using an EKF. The map is feature-based, with only one landmark for the sake of simplicity. I've read some papers regarding this matter. They plot the $\pm3\sigma$ bounds along with the error. I would like to make sure that I'm doing the right thing. In my project, I have the estimate of the landmark's position and its true values. The true values here are the ones that the sensor measures, not the ideal case. For example, the ideal landmark position is (30,60), but this value is not accessible by any means, therefore I will consider the true values to be the ones coming from the sensor. </p> <p>Now the error in the landmark's position along the x-axis is formulated as follows </p> <p>$$ \text{error}_{x} = \hat{x} - x $$</p> <p>The picture below shows the error in blue. The red color represents the error bounds, which are $\pm 3 \sigma_{x}$.</p> <p>My question is: is this the way people plot the errors in academic papers? I've seen some papers where the bounds don't look like mine. Even though mine decrease monotonically, in some papers they are more curved, which seems more reasonable to me. Any suggestions?</p> <p><img src="https://i.stack.imgur.com/heEm9.png" alt="enter image description here"></p>
how to plot $\pm 3 \sigma$ of a landmark in EKF-SLAM
<p>No. Once your quad is tilted, you are projecting the thrust of the motors at an angle which is not parallel to the z axis, causing your quadcopter to move linearly along the x-y plane.</p> <p>If you must hover while tilted, you could use stepper motors to tilt the rotors so the thrust stays parallel to the z axis.</p> <p>That would be an unusual quad, though.</p> <p>edit: spelling</p>
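<p>To make the first paragraph concrete, a simple force balance for a quad tilted by an angle $\theta$ with total thrust $T$ along the body z axis (a point-mass sketch, not a full dynamic model) gives:</p> <p>$$T\cos\theta = mg \;\;\Rightarrow\;\; T=\frac{mg}{\cos\theta},\qquad a_{\text{lateral}}=\frac{T\sin\theta}{m}=g\tan\theta,$$</p> <p>so altitude can be held by increasing thrust, but the horizontal thrust component is unopposed and the vehicle accelerates sideways whenever $\theta\neq 0$, unless the rotors themselves can be tilted as suggested above.</p>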
6643
2015-02-21T16:54:04.097
|quadcopter|
<p>Is there a way to make a quadcopter maintain steady hovering (no lateral movement, constant altitude) while tilted left or right? If so, how can i accomplish this? </p>
Can a quadcopter hover while tilted?
<p>According to Harvard University who are actively working on a <a href="http://robobees.seas.harvard.edu/" rel="nofollow">Robobees</a> project which has various goals in mind, one of which is autonomous pollination, they see autonomous pollination as being a couple of decades away yet. They admit that the better solution is to remedy the natural problem of Colony Collapse Disorder. I quote from their website:</p> <p><em>One potential application of micro-robotic "insects" might someday be to artificially pollinate crops. However, we do not see robotic pollination as a wise or viable long-term solution to Colony Collapse Disorder. If robots were used for pollination—and we are at least 20 years away from that possibility— it would only be as a stop-gap measure while a solution to CCD is implemented to restore natural pollinators.</em> </p> <p><a href="http://robobees.seas.harvard.edu/robobees-project-and-colony-collapse-disorder-ccd" rel="nofollow">More about that here</a>.</p>
6651
2015-02-22T14:41:41.583
|mobile-robot|
<p>With the bee hive collapses, growers are desperate for pollination options. Is anyone working on swarms of tiny flying robots to augment the bees? They could look for a certain color, poke around inside the flower for a moment, and move on to the next. When they need recharging, they fly back to their hive (<em>the same reason bees fly back</em>). </p> <p>Of course, replacing germinators that run the seeds through their digestive systems would be a different problem.</p>
Are robotic pollinators being designed?
<p>You can follow the example code <a href="http://wiki.ros.org/rospy/Overview/Publishers%20and%20Subscribers#Subscribing_to_a_topic" rel="noreferrer">here</a> and simply add a second subscription like so:</p> <pre><code>import rospy
from std_msgs.msg import String

def callback1(data):
    rospy.loginfo("Callback1 heard %s", data.data)

def callback2(data):
    rospy.loginfo("Callback2 heard %s", data.data)

def listener():
    rospy.init_node('node_name')
    rospy.Subscriber("chatter1", String, callback1)
    rospy.Subscriber("chatter2", String, callback2)
    # spin() simply keeps python from exiting until this node is stopped
    rospy.spin()
</code></pre> <p>ROS Python nodes are inherently multi-threaded. When you get each callback, it will be in a separate thread. So this is an event-driven model. </p> <p>Note that this behavior is different than the default behavior for ROS C nodes which are inherently single threaded unless you specifically make them multi-threaded. </p>
6652
2015-02-22T15:43:34.163
|ros|python|
<p>How can a ROS node written in Python subscribe to multiple topics and publish to multiple topics?</p> <p>All examples I found were for a single topic. Is this an event-driven model, so subscription to multiple "events" is allowed, or is it more like a loop, so it can listen to only one "source" at a time?</p>
How to get a python node in ROS subscribe to multiple topics?
<p>You're right, it's a second order plant representing the mass-spring-damper system $P\left(s\right)=1/\left(s^2-bs -k\right)$ (stiffness $k$ and damping $b$ should be thus negative) under the closed-loop control action of the compensator $C\left(s\right)$, whose transfer function is:</p> <p>$ C\left(s\right)=\frac{\alpha}{s+\beta}. $</p> <p>The negative feedback is closed over the position of the mass, while the set-point is given by $u$.</p> <p>We can easily derive these results by applying the <em>Laplace transform</em> to the system of equations and then substituting $x_1$:</p> <p>$$ \begin{align} s^2x_2&amp;=bsx_2+kx_2+x_3\\ sx_3&amp;=\alpha (u-x_2)-\beta x_3 \end{align}, $$</p> <p>from which we obtain the following transfer functions:</p> <p>$$ \begin{align} P\left(s\right)&amp;=\frac{x_2\left(s\right)}{x_3\left(s\right)}=\frac{1}{s^2-bs-k}\\ C\left(s\right)&amp;=\frac{x_3\left(s\right)}{u\left(s\right)-x_2\left(s\right)}=\frac{\alpha}{s+\beta} \end{align}, $$</p> <p>corresponding to the closed-loop system here below:</p> <p><img src="https://i.stack.imgur.com/CfxJU.jpg" alt="closed-loop system"></p>
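<p>If you want to double-check the algebra, here is a small SymPy sketch (my own verification aid, not part of the derivation above) that solves the Laplace-transformed equations and recovers the closed-loop transfer function from $u$ to the mass position $x_2$:</p> <pre><code>import sympy as sp

s, b, k, alpha, beta = sp.symbols("s b k alpha beta")
x1, x2, x3, u = sp.symbols("x1 x2 x3 u")

# Laplace transform of the state equations (zero initial conditions)
eqs = [sp.Eq(s * x1, b * x1 + k * x2 + x3),
       sp.Eq(s * x2, x1),
       sp.Eq(s * x3, alpha * (u - x2) - beta * x3)]

sol = sp.solve(eqs, [x1, x2, x3], dict=True)[0]
T_cl = sp.simplify(sol[x2] / u)   # closed-loop transfer function u -&gt; x2
print(T_cl)                       # expect alpha / ((s + beta)*(s**2 - b*s - k) + alpha)
</code></pre> <p>The printed expression is $C(s)P(s)$ closed under unity negative feedback, consistent with the block diagram above.</p>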
6657
2015-02-23T02:20:30.407
|control|dynamics|
<p>I have equations of a dynamic system. I need to figure out what this physical system is. </p> <p>The equations are:</p> <p>\begin{align} \dot{x}_1&amp;=bx_1+kx_2+x_3\\ \dot{x}_2&amp;=x_1\\ \dot{x}_3&amp;=\alpha (u-x_2)-\beta x_3 \end{align}</p> <p>All I can figure out is that it is maybe a mass-spring-damper system, plus a feedback control, but I am not quite sure about the terms $x_3$ and $\dot{x}_3$. What do these two terms mean?</p>
What dynamic system could these equations represent?
<p>After painstakingly trying to find someone on the internet with more experience on this subject to help me out (to no avail), I finally gave up and decided to take matters into my own hands and figure it out myself! As it turns out, the constraints are imposed twice as a direct result of applying the chain rule for derivatives when setting the gradient of the negative log posterior belief function equal to zero (which is equivalent to finding the maximum of the belief). </p> <p>Unfortunately, there's no easy way to demonstrate this other than going through the math one step at a time.</p> <p><strong>Problem Setup</strong></p> <p>To help explain, let me set up an example to work with. For simplicity, let's assume that the robot moves in only one direction (in this case, the x-direction). In one dimension, the covariance matrices for motion and sensor data are simply the variances $\sigma^2_{motion}$ and $\sigma^2_{sensor}$. Again, for simplicity, let's assume that $\sigma^2_{motion}=\sigma^2_{sensor}=1$. </p> <p>Now, let's assume that the robot starts at the point $x_0=0$ and then executes two motion commands in the following order:</p> <ol> <li>Move forward by 10 units ($x_1 = x_0 + 10$)</li> <li>Move forward by 14 units ($x_2 = x_1 + 14$)</li> </ol> <p>Let's also assume that the robot world only contains one landmark $L_0$ which lies somewhere in the 1D world of the robot's motion. Suppose that the robot senses the following distances to the landmark from each of the three positions $x_0, x_1, x_2$:</p> <ol> <li>At $x_0$: The robot sensed Landmark $L_0$ at a distance of 9 units ($L_0-x_0=9$)</li> <li>At $x_1$: The robot sensed Landmark $L_0$ at a distance of 8 units ($L_0-x_1=8$)</li> <li>At $x_2$: The robot sensed Landmark $L_0$ at a distance of 21 units ($L_0-x_2=21$)</li> </ol> <p>(These numbers may look a little strange, but just take them as a given for this exercise).</p> <p><strong>Belief Function</strong> </p> <p>So, each of the relative motion and measurement constraints contributes a Gaussian function to the "posterior belief" function. So, with the information assumed above, we can write the belief function as the product of Gaussians as follows:</p> <p>$$Belief = C e^{-\frac{(x_0-0)^2}{2\sigma^2}}e^{-\frac{(x_1-x_0-10)^2}{2\sigma^2}}e^{-\frac{(x_2-x_1-14)^2}{2\sigma^2}} * e^{-\frac{(L_0-x_0-9)^2}{2\sigma^2}}e^{-\frac{(L_0-x_1-8)^2}{2\sigma^2}}e^{-\frac{(L_0-x_2-21)^2}{2\sigma^2}}$$</p> <p>Note that $C$ is a constant, but we won't really need to know the exact value of $C$. Recall that we assume all the variances $\sigma^2=1$, so we obtain</p> <p>$$Belief = C e^{-\frac{(x_0-0)^2}{2}}e^{-\frac{(x_1-x_0-10)^2}{2}}e^{-\frac{(x_2-x_1-14)^2}{2}} * e^{-\frac{(L_0-x_0-9)^2}{2}}e^{-\frac{(L_0-x_1-8)^2}{2}}e^{-\frac{(L_0-x_2-21)^2}{2}}$$</p> <p><strong>Negative Log Posterior</strong> </p> <p>Our main goal is to find the values of $x_0,x_1,x_2,L_0$ that maximize this function. However, we can make some transformations to the "belief" function that enable us to find the maximum very easily. First, finding the maximum of the $Belief$ is equivalent to finding the maximum of $log(Belief)$, which allows us to exploit the properties of logarithms, giving us:</p> <p>$$log(Belief)= log(C) - \frac{1}{2}(x_0-0)^2-\frac{1}{2}(x_1-x_0-10)^2-\frac{1}{2}(x_2-x_1-14)^2 -\frac{1}{2}(L_0-x_0-9)^2-\frac{1}{2}(L_0-x_1-8)^2-\frac{1}{2}(L_0-x_2-21)^2$$</p> <p>Also, finding the maximum of a function $f(x)$ is equivalent to finding the minimum of the function $-f(x)$. 
So we can restate this problem as finding the minimum of </p> <p>$$F\equiv-log(Belief)= -log(C) + \frac{1}{2}(x_0-0)^2+\frac{1}{2}(x_1-x_0-10)^2+\frac{1}{2}(x_2-x_1-14)^2 +\frac{1}{2}(L_0-x_0-9)^2+\frac{1}{2}(L_0-x_1-8)^2+\frac{1}{2}(L_0-x_2-21)^2$$</p> <p><strong>Optimization</strong></p> <p>To find the minimum, we take the partial derivative of the $F$ function with respect to each of the variables: $x_0, x_1, x_2,$ and $L_0$: </p> <p>$F_{x_0}= (x_0 - 0) - (x_1 - x_0 - 10) - (L_0-x_0-9) = 0$<br> $F_{x_1}= (x_1 - x_0 - 10) - (x_2-x_1-14)- (L_0-x_1-8) = 0$ $F_{x_2}= (x_2 - x_1 - 14) - (L_0-x_2-21) = 0$<br> $F_{L_0}= (L_0-x_0-9) + (L_0-x_1-8)+ (L_0-x_2-21) = 0$ </p> <p>Notice that the 1st and second equations impose the first relative motion constraint $x_1=x_0+10$ twice: the first equation with a negative sign as a result of the chain rule for derivatives and the second equation with a positive sign (also as a result of the chain rule). Similarly, the second and third equation contain the second relative motion constraint, with opposite signs as a result of applying the chain rule for derivatives. A similar argument can be said for the measurement constraints in their corresponding equations. There's no inherent explanation for why it MUST necessarily work out this way... It just happens to have this structure in the end after working out the math. You may notice that only the initial position constraint $(x_0-0)$ is imposed only once because its quadratic term $\frac{1}{2}(x_0-0)^2$ only features a single variable inside the parentheses, so it is impossible for this term to appear in the gradient of $F$ with respect to any other variable other than $x_0$.</p> <p><strong>Conclusion</strong></p> <p>It was not apparent to me just by looking at the structure of the belief function that the gradient takes on this form without working through the details explicitly. Of course, I've made a number of simplifying assumptions along the way, including avoiding the problem of "data association" and assuming linear expressions for the motion constraints and measurement constraints. In the more general version of GraphSLAM, we do not necessarily assume this and the algorithm becomes more complicated. But in the linear case (that is, with linear motion and measurement constraints), the gradients of the negative log posterior belief function leads to imposing the motion and measurement constraints twice (once with a positive sign, once with a negative sign), each of which is also weighted by the corresponding covariance. There appears to be no inherent or more fundamental reason why it must necessarily work out this way based upon higher principles... It's just a bunch of trivial calculus that happens to work out this structured way.</p>
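<p>To see that these gradient equations really do pin down a unique estimate, here is a small NumPy check (my own verification, not part of the course material) that assembles the information matrix $\Omega$ and vector $\xi$ from the six constraints above and solves $\Omega\mu=\xi$:</p> <pre><code>import numpy as np

# state ordering: [x0, x1, x2, L0]; each row of A is one constraint a.x = b
A = np.array([[ 1,  0,  0, 0],   # x0 = 0        (initial position)
              [-1,  1,  0, 0],   # x1 - x0 = 10  (motion 1)
              [ 0, -1,  1, 0],   # x2 - x1 = 14  (motion 2)
              [-1,  0,  0, 1],   # L0 - x0 = 9   (measurement at x0)
              [ 0, -1,  0, 1],   # L0 - x1 = 8   (measurement at x1)
              [ 0,  0, -1, 1]],  # L0 - x2 = 21  (measurement at x2)
             dtype=float)
b = np.array([0, 10, 14, 9, 8, 21], dtype=float)

omega = A.T @ A            # information matrix (normal equations)
xi = A.T @ b               # information vector
mu = np.linalg.solve(omega, xi)
print(mu)                  # approx [0.0, 3.25, 6.0, 15.75] for these (deliberately noisy) numbers
</code></pre> <p>Note how each constraint shows up in two rows of the normal equations, once with a positive and once with a negative sign, which is exactly the double appearance discussed above.</p>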
6659
2015-02-23T17:35:53.097
|slam|
<p>I was watching Sebastian Thrun's video course on AI for robotics (freely available on udacity.com). In his final chapter on <a href="http://youtu.be/nLEbJZFm5-E" rel="nofollow">GraphSLAM</a>, he illustrates how to set up the system of equations for the mean path locations $x_i$ and landmark locations $L_j$.</p> <p><a href="http://youtu.be/V41eTlGU0gw" rel="nofollow">To set up the matrix system</a>, he imposes each robot motion and landmark measurement constraint twice. For example, if a robot motion command is to move from x1 by 5 units to the right (reaching x2), I understand this constraint as</p> <p>$$-x_2+x_1= -5$$</p> <p>However, he also imposes the negative of this equation $$x_2-x_1=5$$ as a constraint and superimposes it onto a different equation, and I'm not sure why. In his video course, he briefly mentions that the matrix we're assembling is known as the <a href="http://youtu.be/U8iFMhtJyek" rel="nofollow">"information matrix"</a>, but I have no idea why the information matrix is assembled in this specific way.</p> <p>So, I tried to read his book Probabilistic Robotics, and all I can gather is that these equations come from obtaining the minimizer of the negative log posterior probability incorporating the motion commands, measurements, and map correspondences, which results in a quadratic function of the unknown variables $L_j$ and $x_i$. Since it is quadratic (and the motion / measurement models are also linear), the minimum is obviously obtained by solving a linear system of equations.</p> <p>But why is each of the constraints imposed twice, once as a positive quantity and again as the negative of the same equation? It's not immediately obvious to me from the form of the negative log posterior probability (i.e. the quadratic function) that the constraints must be imposed twice. Why is the "information matrix" assembled this way? Does it also hold true when the motion and measurement models are nonlinear?</p> <p>Any help would be greatly appreciated.</p>
GraphSLAM: why are constraints imposed twice in the information matrix?
<p>There is a significant amount of information on this site: (<a href="http://allaboutroboticsurgery.com/surgicalrobots.html" rel="nofollow">http://allaboutroboticsurgery.com/surgicalrobots.html</a>) <BR> That should give you the history and details about the past and current devices. Which should hopefully allow you to fine tune your questions to get more detailed specific answers. Good Luck. -Frank</p>
6680
2015-02-25T18:52:18.497
|sensors|robotic-arm|automatic|
<p>Can anyone help me? I am doing a project on robotic surgeries and I would like someone to help and advise me. I wonder if anyone could give me some data on tests he or she has run on a surgical robot... Thank you for your attention! Anything else will be much appreciated!</p>
I am doing a project on robotic surgeries! Can anyone help me and give me some details related to this topic?
<p>The problem was the part we were using. Make sure you use the right transistor. The schedule and clock buttons still don't produce any output, but everything else behaves as it should. </p> <p>Error = 2n7000 transistor</p> <p>Answer = 2n4403</p> <p>Thanks to Steve for pointing it out, though it doesn't just invert the signal; it mutates it into the table I posted in my question.</p>
6686
2015-02-26T22:05:55.407
|arduino|irobot-create|
<p>I have the Create 2 and have it hooked up to an Arduino. Almost all the commands work fine except when retrieving sensor information. If I send a request for packet 18, I get back values that, while consistent, don't match up, unless I am missing something. So if I press the Clean button I get 127 or 11111110, and if I then press Spot I get something like 11111010. I might be messing up my endianness, but regardless the data isn't formatted how I expected it to be according to the spec sheet. I have 3 Create 2s and they all do the same thing. Any ideas? I am using a 2n7000 along with the tutorial from the site, but I don't think that has anything to do with the formatting of the byte.</p> <p>This is the library I am using: <a href="https://github.com/DomAmato/Create2" rel="nofollow">https://github.com/DomAmato/Create2</a></p> <p>Sorry to take so long to get back on this; anyway, the data we get is always formatted this way. It is not a baud rate issue since it understands the commands properly.</p> <hr> <pre><code>          day  hour  minute  schedule  clock  dock  spot  clean
day         3     x       x         x      x     x     x      x
hour        6     7       x         x      x     x     x      x
minute     13    14      15         x      x     x     x      x
schedule    x     x       x         x      x     x     x      x
clock       x     x       x         x      x     x     x      x
dock       27    29      30         x      x    31     x      x
spot       55    59      61         x      x    62    63      x
clean     111   119     123         x      x   125   126    127
</code></pre> <p>Note that the schedule and clock buttons return nothing.</p>
iRobot Create 2 and Open Interface 2 Spec not syncing up with incoming data
<p>@Darth Hodorus' answer is great! But I would just like to add some info about the second option.</p> <p>I would definitely go for it and use <a href="https://en.wikipedia.org/wiki/Dead_reckoning" rel="nofollow">dead reckoning</a>. It's easy to do if you have encoders, and there are plenty of tutorials on it =) (see the sketch below).</p> <p>You'll realize that <strong>the</strong> major problem of dead reckoning is <em>drift</em>. So you might have to think of a procedure to calibrate the robot each time. For example, with a Cartesian robot it's <em>as easy as</em> going to a <em>locked position</em> (a position where the robot can't move any more, i.e. the minimum in X, Y and Z) and defining it as the (0,0,0) position. Indeed, you don't need to measure this position, as you can <strong>mechanically</strong> find it. </p>
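<p>For illustration, here is a minimal dead-reckoning update for a differential-drive base (if your robot is Cartesian, as in the example above, the update reduces to adding the per-axis encoder displacements); the wheel geometry values are placeholders you must measure on your own robot:</p> <pre><code>import math

def dead_reckon_update(x, y, theta, d_left, d_right, wheel_base):
    """One dead-reckoning step for a differential-drive robot.

    d_left / d_right are the distances travelled by each wheel since the last
    update (converted from encoder ticks); wheel_base is the distance between wheels.
    """
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / wheel_base
    x += d_center * math.cos(theta + d_theta / 2.0)   # midpoint approximation
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta) % (2.0 * math.pi)
    return x, y, theta
</code></pre>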
6715
2015-03-04T10:26:19.073
|arduino|sensors|navigation|
<p>I am a total newbie in robotics, so please bear with me. </p> <p>I have a school project where my team has to design a robot that is capable of picking up 3 golf balls of different sizes at predefined locations. Then it has to drop these balls into their respective holes. </p> <p>We are using an Arduino in our robot. </p> <p>I thought I could perhaps define a path for the robot, what you might call an invisible virtual path. So, imagining the platform as a Cartesian plane, can I tell the robot to go where I want it to go? For example, go to (5,12).</p> <p>Or do I need some sort of sensors so the robot figures it out by itself? Thanks for your time! </p>
Is it possible for a robot to navigate through predefined coordinates?
<p>So, I figured out that the problem was vibration. I first tried using a running average to smooth the data that I was getting from my MPU, but the delay that it created made it oscillate a lot, so I tried a low pass filter instead, which worked GREAT! Thank you to everyone for your help!</p>
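<p>For reference, the kind of first-order (exponential) low-pass filter that fixed this is only a few lines; the smoothing factor <code>alpha</code> below is a tuning placeholder that depends on your sample rate and the vibration frequency you want to reject:</p> <pre><code>class LowPassFilter:
    """Simple exponential (first-order IIR) low-pass filter."""
    def __init__(self, alpha):
        self.alpha = alpha        # 0 &lt; alpha &lt;= 1; smaller means heavier smoothing
        self.state = None

    def update(self, sample):
        if self.state is None:
            self.state = sample   # initialise on the first sample
        else:
            self.state = self.alpha * sample + (1.0 - self.alpha) * self.state
        return self.state

# usage sketch: smooth the raw accelerometer angle before feeding it to the PID
# accel_filter = LowPassFilter(alpha=0.1)
# smooth_angle = accel_filter.update(raw_accel_angle)
</code></pre>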
6720
2015-03-04T21:36:39.187
|quadcopter|pid|raspberry-pi|
<p>I am attempting to build a Raspberry Pi based quadcopter. So far I have succeeded in interfacing with all the hardware, and I have written a PID controller that is fairly stable at low throttle. The problem is that at higher throttle the quadcopter starts thrashing and jerking. I have not even been able to get it off the ground yet, all my testing has been done on a test bench. I have ruled out bad sensors by testing each sensor individually, and they seem to be giving correct values. Is this a problem with my code, or perhaps a bad motor? Any suggestions are greatly appreciated.</p> <p>Here is my code so far:</p> <p>QuadServer.java:</p> <pre><code>package com.zachary.quadserver; import java.net.*; import java.io.*; import java.util.*; import se.hirt.pi.adafruit.pwm.PWMDevice; import se.hirt.pi.adafruit.pwm.PWMDevice.PWMChannel; public class QuadServer { private static Sensor sensor = new Sensor(); private final static int FREQUENCY = 490; private static double PX = 0; private static double PY = 0; private static double IX = 0; private static double IY = 0; private static double DX = 0; private static double DY = 0; private static double kP = 1.3; private static double kI = 2; private static double kD = 0; private static long time = System.currentTimeMillis(); private static double last_errorX = 0; private static double last_errorY = 0; private static double outputX; private static double outputY; private static int val[] = new int[4]; private static int throttle; static double setpointX = 0; static double setpointY = 0; static long receivedTime = System.currentTimeMillis(); public static void main(String[] args) throws IOException, NullPointerException { PWMDevice device = new PWMDevice(); device.setPWMFreqency(FREQUENCY); PWMChannel BR = device.getChannel(12); PWMChannel TR = device.getChannel(13); PWMChannel TL = device.getChannel(14); PWMChannel BL = device.getChannel(15); DatagramSocket serverSocket = new DatagramSocket(8080); Thread read = new Thread(){ public void run(){ while(true) { try { byte receiveData[] = new byte[1024]; DatagramPacket receivePacket = new DatagramPacket(receiveData, receiveData.length); serverSocket.receive(receivePacket); String message = new String(receivePacket.getData()); throttle = (int)(Integer.parseInt((message.split("\\s+")[4]))*12.96)+733; setpointX = Integer.parseInt((message.split("\\s+")[3]))-50; setpointY = Integer.parseInt((message.split("\\s+")[3]))-50; receivedTime = System.currentTimeMillis(); } catch (IOException e) { e.printStackTrace(); } } } }; read.start(); while(true) { Arrays.fill(val, calculatePulseWidth((double)throttle/1000, FREQUENCY)); double errorX = -sensor.readGyro(0)-setpointX; double errorY = sensor.readGyro(1)-setpointY; double dt = (double)(System.currentTimeMillis()-time)/1000; double accelX = sensor.readAccel(0); double accelY = sensor.readAccel(1); double accelZ = sensor.readAccel(2); double hypotX = Math.sqrt(Math.pow(accelX, 2)+Math.pow(accelZ, 2)); double hypotY = Math.sqrt(Math.pow(accelY, 2)+Math.pow(accelZ, 2)); double accelAngleX = Math.toDegrees(Math.asin(accelY/hypotY)); double accelAngleY = Math.toDegrees(Math.asin(accelX/hypotX)); if(dt &gt; 0.01) { PX = errorX; PY = errorY; IX += errorX*dt; IY += errorY*dt; IX = 0.95*IX+0.05*accelAngleX; IY = 0.95*IY+0.05*accelAngleY; DX = (errorX - last_errorX)/dt; DY = (errorY - last_errorY)/dt; outputX = kP*PX+kI*IX+kD*DX; outputY = kP*PY+kI*IY+kD*DY; time = System.currentTimeMillis(); } System.out.println(setpointX); add(-outputX+outputY, 0); add(-outputX-outputY, 1); 
add(outputX-outputY, 2); add(outputX+outputY, 3); //System.out.println(val[0]+", "+val[1]+", "+val[2]+", "+val[3]); if(System.currentTimeMillis()-receivedTime &lt; 1000) { BR.setPWM(0, val[0]); TR.setPWM(0, val[1]); TL.setPWM(0, val[2]); BL.setPWM(0, val[3]); } else { BR.setPWM(0, 1471); TR.setPWM(0, 1471); TL.setPWM(0, 1471); BL.setPWM(0, 1471); } } } private static void add(double value, int i) { value = calculatePulseWidth(value/1000, FREQUENCY); if(val[i]+value &gt; 1471 &amp;&amp; val[i]+value &lt; 4071) { val[i] += value; }else if(val[i]+value &lt; 1471) { //System.out.println("low"); val[i] = 1471; }else if(val[i]+value &gt; 4071) { //System.out.println("low"); val[i] = 4071; } } private static int calculatePulseWidth(double millis, int frequency) { return (int) (Math.round(4096 * millis * frequency/1000)); } } </code></pre> <p>Sensor.java:</p> <pre><code>package com.zachary.quadserver; import com.pi4j.io.gpio.GpioController; import com.pi4j.io.gpio.GpioFactory; import com.pi4j.io.gpio.GpioPinDigitalOutput; import com.pi4j.io.gpio.PinState; import com.pi4j.io.gpio.RaspiPin; import com.pi4j.io.i2c.*; import com.pi4j.io.gpio.GpioController; import com.pi4j.io.gpio.GpioFactory; import com.pi4j.io.gpio.GpioPinDigitalOutput; import com.pi4j.io.gpio.PinState; import com.pi4j.io.gpio.RaspiPin; import com.pi4j.io.i2c.*; import java.net.*; import java.io.*; public class Sensor { static I2CDevice sensor; static I2CBus bus; static byte[] accelData, gyroData; static long accelCalib[] = new long[3]; static long gyroCalib[] = new long[3]; static double gyroX = 0; static double gyroY = 0; static double gyroZ = 0; static double accelX; static double accelY; static double accelZ; static double angleX; static double angleY; static double angleZ; public Sensor() { //System.out.println("Hello, Raspberry Pi!"); try { bus = I2CFactory.getInstance(I2CBus.BUS_1); sensor = bus.getDevice(0x68); sensor.write(0x6B, (byte) 0x0); sensor.write(0x6C, (byte) 0x0); System.out.println("Calibrating..."); calibrate(); Thread sensors = new Thread(){ public void run(){ try { readSensors(); } catch (IOException e) { System.out.println(e.getMessage()); } } }; sensors.start(); } catch (IOException e) { System.out.println(e.getMessage()); } } private static void readSensors() throws IOException { long time = System.currentTimeMillis(); long sendTime = System.currentTimeMillis(); while (true) { accelData = new byte[6]; gyroData = new byte[6]; int r = sensor.read(0x3B, accelData, 0, 6); accelX = (((accelData[0] &lt;&lt; 8)+accelData[1]-accelCalib[0])/16384.0)*9.8; accelY = (((accelData[2] &lt;&lt; 8)+accelData[3]-accelCalib[1])/16384.0)*9.8; accelZ = ((((accelData[4] &lt;&lt; 8)+accelData[5]-accelCalib[2])/16384.0)*9.8)+9.8; accelZ = 9.8-Math.abs(accelZ-9.8); double hypotX = Math.sqrt(Math.pow(accelX, 2)+Math.pow(accelZ, 2)); double hypotY = Math.sqrt(Math.pow(accelY, 2)+Math.pow(accelZ, 2)); double accelAngleX = Math.toDegrees(Math.asin(accelY/hypotY)); double accelAngleY = Math.toDegrees(Math.asin(accelX/hypotX)); //System.out.println((int)gyroX+", "+(int)gyroY); //System.out.println("accelX: " + accelX+" accelY: " + accelY+" accelZ: " + accelZ); r = sensor.read(0x43, gyroData, 0, 6); if(System.currentTimeMillis()-time &gt;= 5) { gyroX = (((gyroData[0] &lt;&lt; 8)+gyroData[1]-gyroCalib[0])/131.0); gyroY = (((gyroData[2] &lt;&lt; 8)+gyroData[3]-gyroCalib[1])/131.0); gyroZ = (((gyroData[4] &lt;&lt; 8)+gyroData[5]-gyroCalib[2])/131.0); angleX += gyroX*(System.currentTimeMillis()-time)/1000; angleY += 
gyroY*(System.currentTimeMillis()-time)/1000; angleZ += gyroZ; angleX = 0.95*angleX + 0.05*accelAngleX; angleY = 0.95*angleY + 0.05*accelAngleY; time = System.currentTimeMillis(); } //System.out.println((int)angleX+", "+(int)angleY); //System.out.println((int)accelAngleX+", "+(int)accelAngleY); } } public static void calibrate() throws IOException { int i; for(i = 0; i &lt; 3000; i++) { accelData = new byte[6]; gyroData = new byte[6]; int r = sensor.read(0x3B, accelData, 0, 6); accelCalib[0] += (accelData[0] &lt;&lt; 8)+accelData[1]; accelCalib[1] += (accelData[2] &lt;&lt; 8)+accelData[3]; accelCalib[2] += (accelData[4] &lt;&lt; 8)+accelData[5]; r = sensor.read(0x43, gyroData, 0, 6); gyroCalib[0] += (gyroData[0] &lt;&lt; 8)+gyroData[1]; gyroCalib[1] += (gyroData[2] &lt;&lt; 8)+gyroData[3]; gyroCalib[2] += (gyroData[4] &lt;&lt; 8)+gyroData[5]; try { Thread.sleep(1); } catch (Exception e){ e.printStackTrace(); } } gyroCalib[0] /= i; gyroCalib[1] /= i; gyroCalib[2] /= i; accelCalib[0] /= i; accelCalib[1] /= i; accelCalib[2] /= i; System.out.println(gyroCalib[0]+", "+gyroCalib[1]+", "+gyroCalib[2]); } public double readAngle(int axis) { switch (axis) { case 0: return angleX; case 1: return angleY; case 2: return angleZ; } return 0; } public double readGyro(int axis) { switch (axis) { case 0: return gyroX; case 1: return gyroY; case 2: return gyroZ; } return 0; } public double readAccel(int axis) { switch (axis) { case 0: return accelX; case 1: return accelY; case 2: return accelZ; } return 0; } } </code></pre> <p><strong>Edit:</strong></p> <p>I have re-written my code in C++ to see if it will run faster but it's still running at about the same speed(about 15 ms per cycle or about 66 Hz).</p> <p>This is my new code in C++:</p> <pre><code>#include &lt;wiringPi.h&gt; #include &lt;wiringPiI2C.h&gt; #include &lt;sys/socket.h&gt; #include &lt;netinet/in.h&gt; #include &lt;string.h&gt; #include &lt;string&gt; #include &lt;iostream&gt; #include &lt;unistd.h&gt; #include &lt;boost/thread.hpp&gt; #include &lt;time.h&gt; #include &lt;cmath&gt; #define axisX 0 #define axisY 1 #define axisZ 2 #define kP 20 #define kI 0 #define kD 0 #define FREQUENCY 490 #define MODE1 0x00 #define MODE2 0x01 #define SUBADR1 0x02 #define SUBADR2 0x03 #define SUBADR13 0x04 #define PRESCALE 0xFE #define LED0_ON_L 0x06 #define LED0_ON_H 0x07 #define LED0_OFF_L 0x08 #define LED0_OFF_H 0x09 #define ALL_LED_ON_L 0xFA #define ALL_LED_ON_H 0xFB #define ALL_LED_OFF_L 0xFC #define ALL_LED_OFF_H 0xFD // Bits #define RESTART 0x80 #define SLEEP 0x10 #define ALLCALL 0x01 #define INVRT 0x10 #define OUTDRV 0x04 #define BILLION 1000000000L using namespace std; double accelCalX = 0; double accelCalY = 0; double accelCalZ = 0; double gyroCalX = 0; double gyroCalY = 0; double gyroCalZ = 0; double PX; double PY; double IX = 0; double IY = 0; double DX; double DY; double lastErrorX; double lastErrorY; int throttle = 1471; int sensor = wiringPiI2CSetup(0x68); int pwm = wiringPiI2CSetup(0x40); array&lt;int,4&gt; motorVal; struct timespec now, then; int toSigned(int unsignedVal) { int signedVal = unsignedVal; if(unsignedVal &gt; 32768) { signedVal = -(32768-(unsignedVal-32768)); } return signedVal; } double getAccel(int axis) { double X = (toSigned((wiringPiI2CReadReg8(sensor, 0x3B) &lt;&lt; 8)+wiringPiI2CReadReg8(sensor, 0x3C)))/1671.8; double Y = (toSigned((wiringPiI2CReadReg8(sensor, 0x3D) &lt;&lt; 8)+wiringPiI2CReadReg8(sensor, 0x3E)))/1671.8; double Z = (toSigned((wiringPiI2CReadReg8(sensor, 0x3F) &lt;&lt; 8)+wiringPiI2CReadReg8(sensor, 
0x40)))/1671.8; X -= accelCalX; Y -= accelCalY; Z -= accelCalZ; Z = 9.8-abs(Z-9.8); switch(axis) { case axisX: return X; case axisY: return Y; case axisZ: return Z; } } double getGyro(int axis) { double X = (toSigned((wiringPiI2CReadReg8(sensor, 0x43) &lt;&lt; 8)+wiringPiI2CReadReg8(sensor, 0x44)))/1671.8; double Y = (toSigned((wiringPiI2CReadReg8(sensor, 0x45) &lt;&lt; 8)+wiringPiI2CReadReg8(sensor, 0x46)))/1671.8; double Z = (toSigned((wiringPiI2CReadReg8(sensor, 0x47) &lt;&lt; 8)+wiringPiI2CReadReg8(sensor, 0x48)))/1671.8; X -= gyroCalX; Y -= gyroCalY; Z -= gyroCalZ; switch(axis) { case axisX: return X; case axisY: return Y; case axisZ: return Z; } } void calibrate() { int i; for(i = 0; i &lt; 1500; i++) { accelCalX += (toSigned((wiringPiI2CReadReg8(sensor, 0x3B) &lt;&lt; 8)+wiringPiI2CReadReg8(sensor, 0x3C)))/1671.8; accelCalY += (toSigned((wiringPiI2CReadReg8(sensor, 0x3D) &lt;&lt; 8)+wiringPiI2CReadReg8(sensor, 0x3E)))/1671.8; accelCalZ += (toSigned((wiringPiI2CReadReg8(sensor, 0x3F) &lt;&lt; 8)+wiringPiI2CReadReg8(sensor, 0x40)))/1671.8; gyroCalX += (toSigned((wiringPiI2CReadReg8(sensor, 0x43) &lt;&lt; 8)+wiringPiI2CReadReg8(sensor, 0x44)))/1671.8; gyroCalX += (toSigned((wiringPiI2CReadReg8(sensor, 0x45) &lt;&lt; 8)+wiringPiI2CReadReg8(sensor, 0x46)))/1671.8; gyroCalX += (toSigned((wiringPiI2CReadReg8(sensor, 0x45) &lt;&lt; 8)+wiringPiI2CReadReg8(sensor, 0x46)))/1671.8; usleep(1000); } accelCalX /= i; accelCalY /= i; accelCalZ /= i; accelCalZ -= 9.8; gyroCalX /= i; gyroCalY /= i; gyroCalZ /= i; cout &lt;&lt; accelCalX &lt;&lt; " " &lt;&lt; accelCalY &lt;&lt; " " &lt;&lt; accelCalZ &lt;&lt; "\n"; } int calculatePulseWidth(double millis, int frequency) { return (int)(floor(4096 * millis * frequency/1000)); } void add(double value, int i) { value = calculatePulseWidth(value/1000, FREQUENCY); if(motorVal[i]+value &gt; 1471 &amp;&amp; motorVal[i]+value &lt; 4071) { motorVal[i] += value; }else if(motorVal[i]+value &lt; 1471) { //System.out.println("low"); motorVal[i] = 1471; }else if(motorVal[i]+value &gt; 4071) { //System.out.println("low"); motorVal[i] = 4071; } } void getThrottle() { int sockfd,n; struct sockaddr_in servaddr,cliaddr; socklen_t len; char mesg[1000]; sockfd=socket(AF_INET,SOCK_DGRAM,0); bzero(&amp;servaddr,sizeof(servaddr)); servaddr.sin_family = AF_INET; servaddr.sin_addr.s_addr = htonl(INADDR_ANY); servaddr.sin_port = htons(8080); bind(sockfd,(struct sockaddr *)&amp;servaddr,sizeof(servaddr)); while(true) { len = sizeof(cliaddr); n = recvfrom(sockfd,mesg,1000,0,(struct sockaddr *)&amp;cliaddr,&amp;len); mesg[n] = 0; string message(mesg); string values[5]; int valIndex = 0; int lastIndex = 0; for(int i = 0; i &lt; message.length(); i++) { if(message[i] == ' ') { values[valIndex] = message.substr(lastIndex+1, i); lastIndex = i; valIndex++; } } values[valIndex] = message.substr(lastIndex+1, message.length()); throttle = calculatePulseWidth(((stoi(values[4])*12.96)+733)/1000, FREQUENCY); } } void setAllPWM(int on, int off) { wiringPiI2CWriteReg8(pwm, ALL_LED_ON_L, (on &amp; 0xFF)); wiringPiI2CWriteReg8(pwm, ALL_LED_ON_H, (on &gt;&gt; 8)); wiringPiI2CWriteReg8(pwm, ALL_LED_OFF_L, (off &amp; 0xFF)); wiringPiI2CWriteReg8(pwm, ALL_LED_OFF_H, (off &gt;&gt; 8)); } void setPWM(int on, int off, int channel) { wiringPiI2CWriteReg8(pwm, LED0_ON_L + 4 * channel, (on &amp; 0xFF)); wiringPiI2CWriteReg8(pwm, LED0_ON_H + 4 * channel, (on &gt;&gt; 8)); wiringPiI2CWriteReg8(pwm, LED0_OFF_L + 4 * channel, (off &amp; 0xFF)); wiringPiI2CWriteReg8(pwm, LED0_OFF_H + 4 * channel, (off &gt;&gt; 
8)); } void setPWMFrequency(double frequency) { double prescaleval = 25000000.0; prescaleval /= 4096.0; prescaleval /= frequency; prescaleval -= 1.0; double prescale = floor(prescaleval + 0.5); int oldmode = wiringPiI2CReadReg8(pwm, MODE1); int newmode = (oldmode &amp; 0x7F) | 0x10; wiringPiI2CWriteReg8(pwm, MODE1, newmode); wiringPiI2CWriteReg8(pwm, PRESCALE, (floor(prescale))); wiringPiI2CWriteReg8(pwm, MODE1, oldmode); usleep(50000); wiringPiI2CWriteReg8(pwm, MODE1, (oldmode | 0x80)); } void initSensor() { wiringPiI2CWriteReg8(sensor, 0x6B, 0x0); wiringPiI2CWriteReg8(sensor, 0x6C, 0x0); } void initPWM() { setAllPWM(0, 0); wiringPiI2CWriteReg8(pwm, MODE2, OUTDRV); wiringPiI2CWriteReg8(pwm, MODE1, ALLCALL); usleep(50000); int mode1 = wiringPiI2CReadReg8(pwm, MODE1); mode1 = mode1 &amp; ~SLEEP; wiringPiI2CWriteReg8(pwm, MODE1, mode1); usleep(50000); setPWMFrequency(FREQUENCY); } double millis(timespec time) { return (time.tv_sec*1000)+(time.tv_nsec/1.0e6); } double intpow( double base, int exponent ) { int i; double out = base; for( i=1 ; i &lt; exponent ; i++ ) { out *= base; } return out; } int main (void) { initSensor(); initPWM(); cout &lt;&lt; "Calibrating..." &lt;&lt; "\n"; calibrate(); boost::thread server(getThrottle); clock_gettime(CLOCK_MONOTONIC, &amp;then); while(true) { motorVal.fill(throttle); clock_gettime(CLOCK_MONOTONIC, &amp;now); double dt = (millis(now)-millis(then))/1000; then = now; double accelX = getAccel(0); double accelY = getAccel(1); double accelZ = getAccel(2); double hypotX = sqrt(intpow(accelX, 2)+intpow(accelZ, 2)); double hypotY = sqrt(intpow(accelY, 2)+intpow(accelZ, 2)); double accelAngleX = (180/3.14)*(asin(accelY/hypotY)); double accelAngleY = (180/3.14)*(asin(accelX/hypotX)); double errorX = -getGyro(0); double errorY = getGyro(1); PX = errorX; PY = errorY; IX += errorX*dt; IY += errorY*dt; IX = 0.95*IX+0.05*accelAngleX; IY = 0.95*IY+0.05*accelAngleY; DX = (errorX-lastErrorX)*dt; DY = (errorY-lastErrorY)*dt; lastErrorX = errorX; lastErrorY = errorY; double outputX = kP*PX+kI*IX+kD*DX; double outputY = kP*PY+kI*IY+kD*DY; add(outputY, 0);//-outputX+ add(outputY, 1);//-outputX- add(outputY, 2);//outputX- add(outputY, 3);//outputX+ setPWM(0, motorVal[0], 12); setPWM(0, motorVal[1], 13); setPWM(0, motorVal[2], 14); setPWM(0, motorVal[3], 15); } } </code></pre> <p>In addition two of the motors seem like they are lagging when I turn the quadcopter fast in one direction. Also for some strange reason the quadcopter seems less responsive to P gain; I have it at 20 in the C++ version and it is working about the same as when I had it at 1.5 in the java version.</p> <p><strong>Edit:</strong></p> <p>After doing some more testing I have determined that reading from the MPU6050 and writing to the PCA9685 board that I am using to control the ESCs is the source of the delay. Does anybody know how to speed this up?</p> <p><strong>Edit:</strong></p> <p>I managed to speed up my code to about 200 Hz by changing the i2c baud rate, but the quadcopter is still thrashing. I have spent hours trying to tune the pid controller, but it doesn't seem to help at all.</p>
Raspberry Pi quadcopter thrashes at high speeds
<p>While I still like the rake idea (parallel slots wider than the support pegs), I worry it would push the pink ball instead of retrieving it. </p> <p>My second thought is a simple suction device that retrieves the balls and drops them into a funnel in the bot, though there might be concerns with the mass of a golf ball.</p> <p>Collect all 6 without worrying about the fencing. Travel to the smallest hole, open the gate on the funnel and let them pour. </p> <p>Sensors: golf ball detection to direct the suction (chroma: known ball color; shape: round; dead reckoning: known positions), plus finding the target hole and positioning the funnel output over it. Fall back to a lower-value hole if the timer is running too low. </p>
6721
2015-03-05T09:20:25.370
|arduino|control|robotic-arm|
<p><img src="https://i.stack.imgur.com/cpwJd.png" alt="enter image description here">I and my team have to design a robot using an arduino chip. The objective of the robot is to grab golf balls at a set of golf pins at different heights and pre-defined locations. We couldn't figure out a possible mechanism that could collect the balls and drop them into the trailer except for a robot arm. However, we don't have experience and time in designing a sophisticated system for the arm like recognizing where the ball is and then grabbing it accordingly. What would a feasible option be compared to a non-sophisticated robot arm?</p> <p>Note:The robot must be autonomous.</p>
How feasible is the idea of operating a robotic arm in a non-sophisticated way?
<p>It is in the frequency domain instead of the time domain. Expanding the transfer function:</p> <p>$$ G(s)=\frac{K(s+a)^2}{s} = \frac{Ks^2+2Kas+Ka^2}{s} $$</p> <p>According to the Laplace form of the PID controller $$ G(s)=\frac{K_ds^2+K_ps+K_i}{s} $$</p> <p>so, matching coefficients, $$ K_d = K, K_p = 2Ka, K_i = Ka^2 $$</p>
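<p>As a quick numeric check with made-up values: $K = 2$ and $a = 3$ give $K_d = 2$, $K_p = 2 \cdot 2 \cdot 3 = 12$ and $K_i = 2 \cdot 3^2 = 18$.</p>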
6740
2015-03-07T21:31:46.463
|control|pid|tuning|
<p>I'm trying to understand how to obtain the Kp, Ki, Kd values after finding a combination of K and a that works for me. Do I just expand the equation and take the coefficients? </p>
In the PID equation K[((s+a)^2)/s] what values correspond to the PID coefficients Kp, Ki, Kd?
<p>I know you said "not considering laser-based detectors," but I am assuming you are discounting them due to price? There is a <a href="https://www.sparkfun.com/products/13167" rel="nofollow">LIDAR unit</a> available from SparkFun for less than $100 which claims to have a range of up to 40 m. That's about half the price of a Kinect sensor. AFAIK, this unit only measures range to a single point; I'm not sure how easy it would be to implement it as a rastering scanner.</p>
6741
2015-03-08T00:27:45.220
|computer-vision|stereo-vision|
<p>I am a newbie in robotics. As far as I know, there are generally two ways to acquire depth information of a scene. One is the stereo vision method, which uses two cameras. The other is RGB-D sensors such as the Kinect and PrimeSense. It seems that both methods are currently in use. However, I do not know what their advantages are over each other.</p> <p>I think the Kinect is a perfect solution over stereo vision, ignoring its expense. So I have two questions here:</p> <ol> <li>Are there advantages of binocular methods over the Kinect besides expense?</li> <li>As far as I know, both methods are confined to a limited detection range. In real-world applications, we sometimes also need depth data at a distance. Is there a method by which we can acquire or estimate depth information at a far distance (not considering laser-based detectors)?</li> </ol> <p>Furthermore, my application may be a small flight vehicle. Which method and equipment should I choose? Will a traditional binocular camera be too slow for my application?</p>
How many methods can I use to acquire depth data?
<p>Any magnetic compass absolutely <em>will</em> work underwater. </p> <p>Several companies (e.g., PNI, OceanServer) build and sell electronic compass modules based on flux-gate technology specifically for underwater applications. </p> <p>Buoyancy gliders and small AUVs or ROVs usually rely on flux gate compasses to sense heading because they are small, relatively inexpensive, and consume little power. Larger, more expensive vehicles often have a magnetic compass for backup, but use a gyrocompass (RLG or FOG) as the primary orientation sensor because it is more accurate than a magnetic compass and is not susceptible to local magnetic disturbances (like the vehicle's motors or batteries). </p> <p>The accuracy of your compass is important because if you are trying to navigate by dead reckoning -- or by inertial navigation -- any errors in your orientation estimate make errors in your position estimate grow quickly. To improve the data rate and accuracy of a magnetic compass, and to make it less susceptible to local magnetic disturbances, you can combine angular rates from a gyro with the angles from a compass using a complementary filter or a Kalman filter. But trying to use <em>only</em> a gyro, you will run into trouble as you integrate to get angle -- over long periods, your integration will drift. That's why the complementary filter solution works -- the magnetic compass is a bit noisy, but stable over the long term, while the gyro is accurate and fast over the short term, but drifts unbounded in the long term.</p> <p>As Ian mentioned, if you are trying to track your robot's position, you will also need a velocity estimate for dead reckoning. If you are in stationary body of water, like a pool, you can probably get away with "counting propeller turns", but you would need to calibrate your propeller. In the field, most vehicles will use a <a href="http://en.wikipedia.org/wiki/Acoustic_Doppler_current_profiler#Bottom_tracking" rel="nofollow">Doppler velocity log (DVL)</a> to measure speed relative to the seafloor. That is an expensive sensor to add, but well worth it if your application requires precision navigation.</p> <p>Refer to <a href="https://scholar.google.com/scholar?cluster=18101485865476627367&amp;hl=en&amp;as_sdt=0,5" rel="nofollow">this article</a> for a fairly recent survey on underwater navigation technology.</p>
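<p>To make the complementary-filter idea concrete, here is a minimal sketch (the 0.98/0.02 blend is an assumed tuning value, the compass heading is assumed tilt-compensated and in radians, and angle wrap-around is ignored for brevity):</p> <pre><code>// Blend a fast-but-drifting gyro rate (rad/s) with a slow-but-stable compass heading (rad).
// dt is the loop period in seconds.
double fuseHeading(double previousHeading, double gyroRate, double compassHeading, double dt) {
    double predicted = previousHeading + gyroRate * dt; // gyro integration: responsive, but drifts
    return 0.98 * predicted + 0.02 * compassHeading;    // compass slowly pulls it back
}
</code></pre> <p>In a real implementation you would also wrap the angles so that, for example, headings near the +/-180 degree boundary blend correctly.</p>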
6750
2015-03-08T22:58:41.950
|sensors|compass|
<p>I'm building a submersible ROV, so I need a way to navigate. Using a compass would help, but this brings up the question: does an electronic compass work underwater?</p> <p>My thought is that the water might act as a Faraday cage and interfere with the magnetic field, so the compass might not even work. Maybe a gyroscope would be a better solution.</p>
Does an electronic compass work underwater
<p>I went through the header files of the I2Cdev lib and I figured it out.</p> <p>You have to first add the line <code>VectorInt16 gyro;</code> to your motion variables, then you add the line</p> <p><code>mpu.dmpGetGyro(&amp;gyro, fifoBuffer);</code></p> <p>to your outputs.</p>
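<p>In context, the additions look roughly like this (a sketch following the naming of the standard I2Cdev MPU6050 DMP example, not a complete program):</p> <pre><code>VectorInt16 gyro;                    // add with the other motion variables

// ... in the loop, after the DMP packet has been read into fifoBuffer:
mpu.dmpGetGyro(&amp;gyro, fifoBuffer);   // gyro.x, gyro.y, gyro.z now hold the DMP gyro values
Serial.print(gyro.x); Serial.print("\t");
Serial.print(gyro.y); Serial.print("\t");
Serial.println(gyro.z);
</code></pre>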
6764
2015-03-10T23:08:12.563
|arduino|imu|accelerometer|gyroscope|
<p>I'm currently calibrating the MPU6050 chip using an Arduino Mega 2560. I am using the J. Rowberg I2Cdev libraries. I can get it to print raw accelerometer and gyroscope values (very unstable, wildly changing values). With the digital motion processing (DMP) library, I can get it to print Euler angles, quaternions, real-world acceleration and actual acceleration, but there is no option to get gyroscope data. </p> <p>Can I use the DMP library to get gyro data, or is it only possible to get raw, unprocessed gyro values?</p>
How do I get MPU 6050 gyroscope data using "MPU6050_6Axis_MotionApps20.h" library
<ol> <li><p>ROS is not the jack of all trades for every question related to robotics. It is just a framework that helps you handle message passing between different nodes (functions).</p></li> <li><p>You can buy a quadcopter kit from 3D Robotics. Their APM flight controller is an open-source control system based on Arduino, and you can modify its firmware and download it onto the quadcopter. All sources are available in their <a href="https://github.com/diydrones/ardupilot" rel="nofollow">GitHub</a>.</p></li> <li><p>If you don't want to pay too much attention to low-level development, meaning the control algorithm itself, and you want to develop applications on top of the quadcopter, the AR.Drone, which provides many APIs, is another good choice.</p></li> </ol>
6765
2015-03-11T12:07:48.033
|quadcopter|
<p>My question is general, so please bear with me. I'm interested in buying a quadcopter and developing some functions for it, for example an Android app to control it, or object detection. So my question is: which available quadcopters have software that allows me to do such things, rather than being just a flying toy?</p> <p>P.S.: I'm asked to buy a kit within $600 and not build it myself.</p>
Open Source software for Quadcopters
<p>The Crazyflie comes ready to use with their iPhone and Android apps, no extra components required. But the idea is that you can develop and make changes. So, if you wanted (depending on the precise outline of your project), you could use it for your project but program it yourself to do something in addition to the out-of-the-box functionality, add extra hardware like the LED component to perform some form of light show as you fly it, or automate some flight instructions.</p> <p>It is worth noting that the source code for the apps is also available.</p>
6772
2015-03-11T23:05:30.440
|quadcopter|
<p>I am currently doing a project for school, and we are told that we must use a microcontroller that ends up controlling some external hardware. Now, I know the Crazyflie controls the motors, which counts as external hardware, but is its control board a microcontroller? My second question: I want to purchase the kit so I can assemble it myself; however, I saw that you can use an expansion board so you need not solder. Also, I plan on not buying a remote; it is possible to control the Crazyflie via my iPhone, correct? I would appreciate it if someone could answer my questions. Thank you in advance.</p>
Is the crazyflie control board considered a microcontroller
<p>As has already been pointed out, this is such a broad topic that it's almost impossible to answer... People spend years on entire doctoral theses on "just" one aspect like speech recognition!</p> <p>But here is what you could do:</p> <p>1) Get a Raspberry Pi for the brain. You'll have to learn how to use Linux and a programming language like Python.</p> <p>2) Get a cheap robotic arm: <a href="http://www.maplin.co.uk/p/robotic-arm-kit-with-usb-pc-interface-a37jn" rel="nofollow">this one</a> has been reverse-engineered before, and you can find all the code on the web to control it with the Raspberry Pi.</p> <p>3) Add a microphone and a little speaker and use the Google Speech API for voice recognition.</p> <p>When you get there, you'll be about as far as most hobbyists can easily get without needing a lab or expensive equipment. Many people have done very similar projects; just google it or look on YouTube.</p>
6776
2015-03-12T08:29:54.090
|mobile-robot|quadcopter|microcontroller|mechanism|embedded-systems|
<p>I have been very passionate about robots since my childhood. I'm a Java developer. I love sci-fi movies. I have a little knowledge of embedded systems and electronics. My ambition is to build a robot like Jarvis (in the Iron Man movie), which is a voice-controlled robot. I would like to implement that in my house as a home automation system. It would take voice as input and take the appropriate action. Please help me to do this. Any kind of help is appreciated.</p>
Iron man Jarvis like robot
<p>The manual you linked to is not for the Roomba 780. You'll want to reference the one on this page: <a href="http://www.irobot.com/create" rel="nofollow">http://www.irobot.com/create</a></p> <p>You can tell if the robot is on the dock by checking the "Home Base" bit in the charging sources available packet (#34).</p> <p>There is currently no way to know via the OI if Roomba is trying to find the dock.</p>
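<p>As a rough illustration of that check (an Arduino-style sketch written from memory of the OI spec, so verify the opcode and bit positions there before relying on it):</p> <pre><code>// Request packet 34 (Charging Sources Available) and test the Home Base bit.
bool isOnDock(Stream &amp;oi) {
    oi.write(142);                 // Sensors command (per the OI spec)
    oi.write(34);                  // packet ID 34: charging sources available
    while (!oi.available()) {}     // naive blocking wait; add a timeout in real code
    uint8_t sources = oi.read();
    return (sources &amp; 0x02) != 0;  // assuming bit 1 = Home Base, bit 0 = internal charger
}
</code></pre>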
6816
2015-03-17T19:42:38.943
|roomba|
<p>Using the <a href="http://www.robotappstore.com/files/KB/Roomba/Roomba_SCI_Spec_Manual.pdf" rel="nofollow" title="SCI messages">SCI messages</a>, I would like to determine the current operating mode or state of an iRobot Roomba 780. Finally, I would like to detect and separate four states: </p> <ul> <li>Cleaning</li> <li>In docking station</li> <li>Returning to docking station</li> <li>Error (e.g. trapped on an obstacle)</li> </ul> <p>What is a fast and reliable way to detect those states using <a href="http://www.robotappstore.com/files/KB/Roomba/Roomba_SCI_Spec_Manual.pdf" rel="nofollow" title="SCI messages">SCI data</a>?</p> <p>The Roomba SCI Sensor packets "Remote Control Command" and "Buttons" seem to return the currently called commands and not the currently executed ones.</p>
Determine current roomba state / operating mode
<p>The best control you can design is "exact feedforward linearization based on differential flatness". This is a complete approach to designing control laws for flat systems. </p> <p>PID alone is not a good idea: you will get overshoot, and you do not precisely control position/velocity.</p> <p>Flatness is a generalization of controllability. Hence you can be sure that your system can achieve any position/velocity profile, provided you remain inside the physical constraints.</p>
6817
2015-03-17T21:48:56.893
|control|pid|
<p>Suppose I have a mechanical system which is free to move on a given rail [-5 m, 5 m], like a motorized cart. The whole system can be mathematically expressed through <em>linear time-invariant</em> system equations.</p> <p>If I need to <em>control</em> <em>only</em> the position (for example telling the controller: "move to +2.3"), I can simply design a PID controller that, given a set point, moves the cart to that position.</p> <p>Now I need much more: I want to <em>control</em> the <em>position</em> <strong>and</strong> the <em>velocity</em> of the cart. So I need, for example, to say: "move to +2.3 with a specific velocity profile". Of course <em>vel = 0</em> at the end position.</p> <p>Question: how should I design such a controller? Do I specifically need a special type of controller, or do I have a huge choice?</p> <p>Any help, graph, link or example is really appreciated.</p> <p>Regards</p>
Basic general question about controllers
<p>Sure, as long as you can measure orientation (compass, GPS, whatever). Remember the initial orientation, and then, before figuring out the direction to command the robot, you just need to rotate the coordinates. So if the joystick (DRIVER) and robot are aligned to start (initial offset 0), and then the robot is pointing, say, at 90 degrees, and you command the robot to move in the 180-degree direction FROM THE DRIVER PERSPECTIVE, you take the 180 minus the current orientation of the robot (90) and issue a command for the robot to move in the 90-degree direction IN ITS OWN ROTATED COORDINATE SYSTEM.</p> <p>So the robot command is the driver-coordinate command direction minus the current robot direction relative to the driver coordinate (which was stored at initialization). Just add one more factor if the robot and driver aren't initially aligned. </p>
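<p>In code, that correction is just a 2D rotation of the joystick vector by the robot's heading. A minimal sketch (assumes the heading is already expressed in radians relative to the driver's frame, with any start-up offset subtracted):</p> <pre><code>#include &lt;cmath&gt;

// Rotate a joystick command given in the driver's frame into the robot's frame.
void driverToRobot(double joyX, double joyY, double headingRad,
                   double &amp;robotX, double &amp;robotY) {
    robotX =  joyX * std::cos(headingRad) + joyY * std::sin(headingRad);
    robotY = -joyX * std::sin(headingRad) + joyY * std::cos(headingRad);
}
</code></pre> <p>For a differential-drive robot you would instead just subtract the heading from the commanded direction angle, exactly as described above.</p>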
6818
2015-03-18T01:33:25.630
|control|sensors|
<p>Is it possible to remote control a 'robot' relative to the driver with an angle sensor (or any other sensor)? For example, if the robot starts in this position</p> <pre><code>-------------- | Front | | -------- | | |________| | [robot] | Back | -------------- </code></pre> <p>and the joystick is in this configuration</p> <pre><code>-------------- | Forwards | | [joystick] | | Backwards | -------------- </code></pre> <p>then if the robot turns around,</p> <pre><code>-------------- | Back | | -------- | | |________| | [robot] | Front | -------------- </code></pre> <p>pushing the controller forwards will still make the robot go forward</p> <pre><code>-------------- | ^ | | [joystick] | | Backwards | -------------- -------------- | ^ | | -------- | | |________| | [robot] | Front | -------------- </code></pre> <p><em>even though from the robot's POV, he's going backwards</em>.</p> <p>Any ideas/solutions?</p>
Remote Control Relative to Driver
<p>"Precision motors" and "planetary gear motors" aren't really describing the same attribute of the motor. It's like comparing "fine-point pencils" to "red pencils" -- they could be both fine-point and red, either fine-point or red, or neither.</p> <p>Planetary gears are generally used to convert high RPM at low torque to low RPM at high torque. Because each rotation of the motor shaft results in a very small movement of the gear system's output shaft, you could also say that planetary gears offer you very fine precision in the output. It's unclear from these links whether that's the case (and it's possible that both of them are using planetary gears).</p> <p>Perhaps "precision" is just being used a marketing term. Are there specific requirements you have for your motors that aren't being shown in these listings?</p>
6826
2015-03-18T22:45:48.897
|mobile-robot|motor|gearing|
<p>I'm working on building a rover and would like some advice on selecting motors. In particular, I want to understand the difference between precision and planetary gear motors. My robot will weigh about 10-15 lbs, I think, and I would like it to be responsive and quick. I have two Sabertooth 2x12 motor controllers (which can supply up to 12 amps). I have been looking at these motors and I am not sure which is the better choice for my application. </p> <p>These are the two sets of motors I am thinking about: <a href="https://www.servocity.com/html/precision_robotzone_gear_motor.html" rel="nofollow">https://www.servocity.com/html/precision_robotzone_gear_motor.html</a> <a href="https://www.servocity.com/html/3-12v_precision_planetary_gear.html" rel="nofollow">https://www.servocity.com/html/3-12v_precision_planetary_gear.html</a></p> <p>Googling does provide some info on planetary gears, but the application of these two is still unclear to me.</p> <p>Thanks</p>
Difference between planetary and precision gear motors
<p>PNG does not support floating-point numbers (AFAIK). Hence the depth value needs to be encoded as a fixed-point value. The scale factor was chosen, I guess, to get a good trade-off between resolution (0.2 mm per count) and range (about 13 m for a 16-bit value). </p>
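<p>Decoding is then just a division. A sketch:</p> <pre><code>#include &lt;cstdint&gt;

// Convert a raw 16-bit depth pixel from these datasets back to meters (0 = no data).
float depthToMeters(uint16_t rawDepth) {
    if (rawDepth == 0) return 0.0f;  // or NaN, depending on how you handle missing values
    return rawDepth / 5000.0f;       // 5000 counts per meter: 0.2 mm resolution, about 13.1 m max
}
</code></pre>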
6829
2015-03-19T11:19:30.690
|slam|
<p>There are several robotics datasets for SLAM, like <a href="https://vision.in.tum.de/data/datasets/rgbd-dataset/file_formats" rel="nofollow">this one</a>.</p> <p>On this webpage you can see that the depth image is scaled by a factor of 5000, so that float depth images can be stored in 16-bit PNG files:</p> <pre><code>The depth images are scaled by a factor of 5000, i.e., a pixel value of 5000 in the depth image corresponds to a distance of 1 meter from the camera, 10000 to 2 meter distance, etc. A pixel value of 0 means missing value/no data. </code></pre> <p>I do not understand why this value was chosen. Why not simply 1000, so that there is a conversion of meters to millimeters? </p>
Robotics SLAM datasets - scaling factor
<p>Operating an ESC is practically the same as operating a <a href="http://en.wikipedia.org/wiki/Servo_control" rel="nofollow">servo</a>. The main difference is that instead of the pulse width translating to a position on the servo, it translates to an output speed. The range of widths is identical, with ~1000µs representing the lowest setting and ~2000µs representing full on. These values can sometimes go beyond (as overshoot), but those are usually considered the 0% and 100% values.</p> <p>I would search for any sort of servo libraries for the RasPi; they will do the trick. Using an additional servo controller, though, is always an option.</p>
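<p>Whichever driver you end up using, the throttle-to-pulse-width mapping itself is trivial. A sketch (the 1000-2000 µs range is the nominal one mentioned above; your ESCs may want calibration at the extremes):</p> <pre><code>// Map a throttle fraction (0.0 .. 1.0) onto the nominal ESC pulse width in microseconds.
int throttleToPulseUs(float throttle) {
    if (throttle &lt; 0.0f) throttle = 0.0f;
    if (throttle &gt; 1.0f) throttle = 1.0f;
    return 1000 + (int)(throttle * 1000.0f); // 1000 us = low/arm, 2000 us = full power
}
</code></pre> <p>With a board like the Adafruit 16-channel driver you would then convert that width into the chip's 12-bit on/off counts at your chosen update frequency, exactly as hobby-servo code does.</p>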
6837
2015-03-20T13:32:24.290
|quadcopter|raspberry-pi|brushless-motor|esc|multi-rotor|
<p>I'm looking to build a new (first) quadcopter without the conventional flight controller and radio, with an onboard RPi, applying some newfound knowledge on autonomous control to improve my coding skills.</p> <p>However, since I've never actually built a quadcopter, I don't have any experience with brushless motors.</p> <p>I'll be using an RPi B+, so controlling them over I2C was something I looked into. The B+, though, only has two I2C interfaces. It also only has two hardware PWM pins, and I'm unsure whether software PWM would be enough. I found the Afro SimonK-based ESCs from HobbyKing which have I2C (intended for the MikroKopter).</p> <p>I've looked around and people have used the Adafruit 16-channel PWM/Servo drivers to control them. Is this an option to look into? Or is there perhaps a better way?</p> <p>Also, would it be particularly safe if the RPi is run off the ESC's BEC? It's confusing because the ESC will be powered on before the RPi comes up. What do ESCs do when they have bad input?</p>
Controlling an ESC for Brushless Motors with an RPi
<p>There's a lot of questions here, so I expect pushback from the peanut gallery, but I'll answer what I can.</p> <p>First off, a wall <em>is</em> an obstacle, right? There's no difference there other than what words you use to describe "a place a robot cannot go".</p> <p>So now, we just need a way to <em>store</em> the locations of obstacles so that we know where the robot can go. This is your map. One way of storing this map, as you mentioned, is an occupancy grid. </p> <p>An occupancy grid (for two dimensional environments, like a floor) is simply an N by M matrix of numbers. In each "cell" we store the probability that an object is present. Cells that contain a zero are known to be clear (unlikely objects). Cells with a 1 are obstacles.</p> <p>Here, N and M are the dimensions of the area. If your workspace is 10 meters by 10 meters, and your robot is small, then a reasonable choice of N and M is 10x10. This gives you a resolution of one square meter per cell. For higher resolution, 100x100 would give you a tenth of a meter resolution. Good enough for navigating an iRobot Create, in my experience.</p> <p><em>How</em> you fill these cells is up to you. If you know that a wall goes from [0,0] to [5,5], for example, you can fill each cell between those two points with ones.</p> <h3>Sidebar</h3> <p>So why <em>wouldn't</em> we use this representation? Well one reason is the space the map takes up. If you are navigating in a hallway, you really only need to know that it is bounded by two lines. So storing the length and position of these lines is enough. That's a small number of variables to represent any length or size of hallway. With an occupancy grid, we'd have to store <em>all of the cells</em> to understand the configuration of the hallway. </p> <p>However, occupancy grids are probably the most modular, extensible, flexible, etc mapping techniques.</p> <h3>How to "fill" the cells</h3> <p>Great question. You'd want to fill all cells that the line touches. To see if a line goes through a certain cell, you can check if the line (ray) collides with the cell (box). Since the box is aligned with the X and Y axes, this is known as the Axis-Aligned Bounding Box intersection problem. <a href="https://gamedev.stackexchange.com/q/18436">see this question</a>. This may not be the best way to "fill" the cells, as you may have to iterate over the whole map. <a href="https://stackoverflow.com/q/10350258">here</a> is a way to find all cells intersected by a line.</p>
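<p>If the AABB intersection test feels like overkill, a simple (slightly wasteful, but easy to get right) alternative is to step along the wall segment and mark every cell you touch. A sketch, assuming the grid stores one float per cell and a fixed number of cells per meter (both values are placeholders):</p> <pre><code>#include &lt;cmath&gt;
#include &lt;vector&gt;

// Occupancy grid stored row-major: 0 = free, 1 = occupied.
struct Grid {
    int cols, rows;
    double cellsPerMeter;
    std::vector&lt;float&gt; cells;
    Grid(int c, int r, double res)
        : cols(c), rows(r), cellsPerMeter(res), cells(c * r, 0.0f) {}
    void setOccupied(int cx, int cy) {
        if (cx &gt;= 0 &amp;&amp; cx &lt; cols &amp;&amp; cy &gt;= 0 &amp;&amp; cy &lt; rows)
            cells[cy * cols + cx] = 1.0f;
    }
};

// Mark every cell under the wall segment (x0,y0)-(x1,y1), coordinates in meters.
void addWall(Grid &amp;g, double x0, double y0, double x1, double y1) {
    double length = std::hypot(x1 - x0, y1 - y0);
    int steps = (int)std::ceil(length * g.cellsPerMeter * 2.0) + 1; // oversample 2x
    for (int i = 0; i &lt;= steps; ++i) {
        double t = (double)i / steps;
        g.setOccupied((int)std::floor((x0 + t * (x1 - x0)) * g.cellsPerMeter),
                      (int)std::floor((y0 + t * (y1 - y0)) * g.cellsPerMeter));
    }
}
</code></pre> <p>Obstacles are added the same way; the only difference is what you decide the marked cells mean to your planner.</p>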
6838
2015-03-20T17:00:29.473
|control|localization|simulation|
<p>I have the task of developing a simulation of an adaptive robot control system, but I don't seem to have anyone to discuss my uncertainties with. I want to keep the simulation as simple as possible, as I have a very tight deadline and it's only a one-off project that most probably will never be used again. </p> <p>The minimal behaviour that the agent is supposed to exhibit is wall and obstacle avoidance. It can be extended to avoiding small objects and exploring large ones. </p> <p>I've decided to go with a simple feedback control system. To begin with, I'm struggling to decide how to represent the map of the agent's environment. What I mean is, what if I want a wall to go from coordinate [0,0] to [0.5]? I could hard-code it, e.g. have a matrix with coordinates of all obstacles, but how small do I make the units? I.e. what if I have two neighbouring coordinates [0,0.01] and [0,0.02], but the agent gets a 'clear to go' to coordinate [0,0.05]? In this case it doesn't know that it is actually about to walk into a wall. I've heard of something called an occupancy grid map, but I don't exactly get how it works and how to implement it. </p> <p>Another thing that I am struggling with is: how do I distinguish between a wall and an obstacle? And then, how do I let the agent know how big that obstacle is, so that it can either avoid it or explore it? </p> <p>Eh, I'm really puzzled with this project. I would really appreciate any thoughts or directions. Thank you. :-)</p>
Implementation of wall and obstacle avoidance
<p><a href="http://www.radioshack.com" rel="nofollow">Radioshack</a> is good, although it recently went bankrupt, but a few of their stores are still around. <a href="http://www.ti.com" rel="nofollow">Texas Instruments</a> is a good source, it's out of Texas and has nearly any electronic components needed.</p>
6845
2015-03-22T07:47:53.363
|mobile-robot|motor|
<p>I'm looking for a good source for robotic components like wheeled/tracked robot chassis, motors, sensors, communication and mechanics. I thought about using a Raspberry Pi and an Arduino as platforms for automation; is that a good idea? I'm asking as I don't yet know much about the motors/drives used for powering robots.</p> <p>Thanks!</p> <p>Uli</p>
Recommendation for good source of Robotic Components
<p>I think that PID alone is not a good approach to solving a tracking problem. </p> <p>It is better to use nonlinear control methods that are suitable for that task, like "exact feedforward linearization based on differential flatness". You can get an overview in the open-access paper <a href="https://hal.archives-ouvertes.fr/hal-00431712" rel="nofollow">https://hal.archives-ouvertes.fr/hal-00431712</a> and references therein. </p>
6859
2015-03-24T04:31:32.250
|control|pid|dynamics|
<p>I'm trying to implement the tracking problem for this <a href="https://robotics.stackexchange.com/questions/4793/proportional-controller-error-doesnt-approach-zero">example</a> using PID controller. The dynamic equation is </p> <p>$$ I \ddot{\theta} + d \dot{\theta} + mgL \sin(\theta) = u $$</p> <p>where </p> <p>$\theta$ : joint variable. </p> <p>$u$ : joint torque</p> <p>$m$ : mass. </p> <p>$L$ : distance between centre mass and joint. </p> <p>$d$ : viscous friction coefficient</p> <p>$I$ : inertia seen at the rotation axis.</p> <p>$\textbf{Regulation Problem:}$</p> <p>In this problem, the desired angle $\theta_{d}$ is constant and $\theta(t)$ $\rightarrow \theta_{d}$ and $\dot{\theta}(t)$ $\rightarrow 0$ as $t$ $\rightarrow \infty$. For PID controller, the input $u$ is determined as follows</p> <p>$$ u = K_{p} (\theta_{d} - \theta(t)) + K_{d}( \underbrace{0}_{\dot{\theta}_{d}} - \dot{\theta}(t) ) + \int^{t}_{0} (\theta_{d} - \theta(\tau)) d\tau $$</p> <p>The result is </p> <p><img src="https://i.stack.imgur.com/3EFpL.png" alt="enter image description here"></p> <p>and this is my code <code>main.m</code></p> <pre><code>clear all clc global error; error = 0; t = 0:0.1:5; x0 = [0; 0]; [t, x] = ode45('ODESolver', t, x0); e = x(:,1) - (pi/2); % Error theta plot(t, e, 'r', 'LineWidth', 2); title('Regulation Problem','Interpreter','LaTex'); xlabel('time (sec)'); ylabel('$\theta_{d} - \theta(t)$', 'Interpreter','LaTex'); grid on </code></pre> <p>and <code>ODESolver.m</code> is </p> <pre><code>function dx = ODESolver(t, x) global error; % for PID controller dx = zeros(2,1); %Parameters: m = 0.5; % mass (Kg) d = 0.0023e-6; % viscous friction coefficient L = 1; % arm length (m) I = 1/3*m*L^2; % inertia seen at the rotation axis. (Kg.m^2) g = 9.81; % acceleration due to gravity m/s^2 % PID tuning Kp = 5; Kd = 1.9; Ki = 0.02; % u: joint torque u = Kp*(pi/2 - x(1)) + Kd*(-x(2)) + Ki*error; error = error + (pi/2 - x(1)); dx(1) = x(2); dx(2) = 1/I*(u - d*x(2) - m*g*L*sin(x(1))); end </code></pre> <p>$\textbf{Tracking Problem:}$</p> <p>Now I would like to implement the tracking problem in which the desired angle $\theta_{d}$ is not constant (i.e. $\theta_{d}(t)$); therefore, $\theta(t)$ $\rightarrow \theta_{d}(t)$ and $\dot{\theta}(t)$ $\rightarrow \dot{\theta}_{d}(t)$ as $t$ $\rightarrow \infty$. The input is </p> <p>$$ u = K_{p} (\theta_{d} - \theta(t)) + K_{d}( \dot{\theta}_{d}(t) - \dot{\theta}(t) ) + \int^{t}_{0} (\theta_{d}(t) - \theta(\tau)) d\tau $$</p> <p>Now I have two problems namely to compute $\dot{\theta}_{d}(t)$ sufficiently and how to read from <code>txt</code> file since the step size of <code>ode45</code> is not fixed. For the first problem, if I use the naive approach which is </p> <p>$$ \dot{f}(x) = \frac{f(x+h)-f(x)}{h} $$</p> <p>the error is getting bigger if the step size is not small enough. The second problem is that the desired trajectory is stored in <code>txt</code> file which means I have to read the data with fixed step size but I'v read about <code>ode45</code> which its step size is not fixed. 
Any suggestions!</p> <hr> <p>Edit:</p> <p>For tracking problem, this is my code </p> <p><code>main.m</code></p> <pre><code>clear all clc global error theta_d dt; error = 0; theta_d = load('trajectory.txt'); i = 1; t(i) = 0; dt = 0.1; numel(theta_d) while ( i &lt; numel(theta_d) ) i = i + 1; t(i) = t(i-1) + dt; end x0 = [0; 0]; options= odeset('Reltol',dt,'Stats','on'); [t, x] = ode45(@ODESolver, t, x0, options); e = x(:,1) - theta_d; % Error theta plot(t, x(:,2), 'r', 'LineWidth', 2); title('Tracking Problem','Interpreter','LaTex'); xlabel('time (sec)'); ylabel('$\dot{\theta}(t)$', 'Interpreter','LaTex'); grid on </code></pre> <p><code>ODESolver.m</code></p> <pre><code>function dx = ODESolver(t, x) persistent i theta_dPrev if isempty(i) i = 1; theta_dPrev = 0; end global error theta_d dt ; dx = zeros(2,1); %Parameters: m = 0.5; % mass (Kg) d = 0.0023e-6; % viscous friction coefficient L = 1; % arm length (m) I = 1/3*m*L^2; % inertia seen at the rotation axis. (Kg.m^2) g = 9.81; % acceleration due to gravity m/s^2 % PID tuning Kp = 35.5; Kd = 12.9; Ki = 1.5; if ( i == 49 ) i = 48; end % theta_d first derivative theta_dDot = ( theta_d(i) - theta_dPrev ) / dt; theta_dPrev = theta_d(i); % u: joint torque u = Kp*(theta_d(i) - x(1)) + Kd*( theta_dDot - x(2)) + Ki*error; error = error + (theta_dDot - x(1)); dx(1) = x(2); dx(2) = 1/I*(u - d*x(2) - m*g*L*sin(x(1))); i = i + 1; end </code></pre> <p>trajectory's code is </p> <pre><code>clear all clc a = 0:0.1:(3*pi)/2; file = fopen('trajectory.txt','w'); for i = 1:length(a) fprintf(file,'%4f \n',a(i)); end fclose(file); </code></pre> <p>The result of the velocity is </p> <p><img src="https://i.stack.imgur.com/Lr1EO.png" alt="enter image description here"></p> <p>Is this correct approach to solve the tracking problem?</p>
how to implement tracking problem with PID controller
<p>I noticed a weird thing in your code -- it looks like you are calculating a first derivative of something labeled altitude in <code>LidarLitePwm::getAcceleration()</code> (which would be velocity, and not acceleration) and later integrating it to get something labeled velocity (which would in fact be altitude).</p> <p>So either your variable and method names are wrong, or you are missing one more derivative in <code>LidarLitePwm::getAcceleration()</code>. Also, it is a little weird that you are using the <code>time</code> when integrating acceleration and not when differentiating it. It should be the same in both directions. If you include the time in seconds, your units will be cm/s/s as stated in the comments; if you don't, they will be cm/tick/tick. </p> <p>Since it seems that you want to reach velocity as the final product anyway, I would skip calculating the accelerations, rename <code>LidarLitePwm::getAcceleration()</code> to <code>LidarLitePwm::getVelocity()</code> and change it to something like this:</p> <pre><code>float LidarLitePwm::getVelocity(float time)
{
    int currentAltitude = read();
    float velocity = (currentAltitude - _oldAltitude) / time;
    _oldAltitude = currentAltitude;
    return velocity; //cm/s
}
</code></pre> <p>... all this assuming that the <code>read()</code> function returns distance from ground in cm.</p> <p>The accelerometer stuff seems ok.</p> <p>Btw, if you are holding altitude with PID, don't you need the altitude rather than the velocity?</p>
6882
2015-03-25T14:38:21.050
|sensors|accelerometer|lidar|
<p>I'm writing some Quad Copter software and beginning to implement an altitude hold mode. </p> <p>To enable me to do this I need to get an accurate reading for vertical velocity. I plan to use a Kalman filter for this but first I need to ensure that I'm getting the correct velocity from each individual sensor.</p> <p>I have done this but I'm not 100% sure its correct so I was hoping to get some confirmation on here.</p> <p>My first sensor is a Lidar distance sensor, I calculated acceleration and velocity using the following code:</p> <pre><code>float LidarLitePwm::getDisplacement() { int currentAltitude = read(); float displacement = currentAltitude - _oldAltitude; _oldAltitude = currentAltitude; return displacement; //cm } //Time since last update float time = (1.0 / ((float)FLIGHT_CONTROLLER_FREQUENCY / 10.00)); // 50Hz, 0.02s float lidarDisplacement = _lidar-&gt;getDisplacement(); _currentLidarVelocity = lidarDisplacement / time; </code></pre> <p>The second sensor is an accelerometer. I calculated acceleration and velocity using the following code:</p> <pre><code>Imu::Acceleration Imu::getAcceleration() { //Get quaternion float q[4]; _freeImu.getQ(q); //Get raw data float values[9]; _freeImu.getValues(values); //Extract accelerometer data float acc[3]; acc[0]= values[0]; //x acc[1]= values[1]; //y acc[2]= values[2]; //z //Gravity compensate _freeImu.gravityCompensateAcc(acc, q); //Convert acceleration from G to cm/s/s _acceleration.x = acc[0] * 9.8 * 100; _acceleration.y = acc[1] * 9.8 * 100; _acceleration.z = acc[1] * 9.8 * 100; return _acceleration; //cm/s/s } //Time since last update float time = (1.0 / ((float)FLIGHT_CONTROLLER_FREQUENCY / 10.00)); // 50Hz, 0.02s //Get accel Imu::Acceleration imuAcceleration = _imu-&gt;getAcceleration(); //Get velocity currentZVelocity += imuAcceleration.z * time; //cm/s </code></pre> <p>It would be great if someone could confirm if this is correct (or not)</p> <p>Thanks Joe</p>
Calculating acceleration and velocity
<p>I think your Pmotor calculation is correct. The reason for the difference with the CIRC link is that in the P = F*v equation they are referring to the linear velocity. Your conversion is converting the rotational velocity back into linear velocity.</p> <p>I don't think the Ftotal calculation is correct, because your estimated weight isn't correct. kg is a unit of mass, not weight. To convert to weight, you need to multiply by the gravitational acceleration of the Earth (9.8 m/s^2). This is also why the units on your power estimate aren't Watts. Your torque units should then be N*m or, equivalently, oz(force)*in. Note that this is ounce-force, not the sometimes-used ounce-mass. It is important to distinguish between mass and force, even though the ounce unit is sometimes abused to ambiguously mean either in one-Earth-gravity environments.</p>
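<p>Concretely, redoing the numbers from the question with gravity included (my arithmetic, using the question's symbols and g = 9.8 m/s^2):</p> <p>$$ F_{total} = g\left[W_p\left(U_r\cos\theta + \sin\theta\right) + m_{push}\right] \approx 9.8 \times 1.76 \approx 17.3\ \text{N} $$</p> <p>Likewise, the per-motor power estimate becomes about $1.22 \times 9.8 \approx 12$ W, and the units then carry through the torque calculation correctly.</p>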
6890
2015-03-26T15:18:22.297
|mobile-robot|motor|design|torque|force|
<p>I'm trying to select a brushed DC motor for a project. I tried following the advice on <a href="http://www.scribd.com/doc/38698/Sizing-Electric-Motors-for-Mobile-Robotics" rel="nofollow noreferrer">sizing electric motors</a>, mentioned in <a href="https://robotics.stackexchange.com/questions/913/are-power-and-torque-required-related-in-some-way">this question</a>, but a few details were missing, and I'm unsure if I properly followed the procedure.</p> <p>For my application, I need:</p> <ul> <li>Nm = number of motors = 2</li> <li>Wd = wheel diameter = 12 cm</li> <li>Wp = estimated weight of platform = 5 kg</li> <li>Minc = maximum incline under load = 5 degrees</li> <li>Vmax = maximum velocity under load = 5 km/hr</li> <li>Fpush = maximum pushing force = 1.25 kg</li> <li>Ur = coefficient of rolling friction = 0.015</li> </ul> <p>These are my calculations:</p> <p>Step 1: Determine total applied force at worst case.</p> <pre><code>Ftotal = Wp * (Ur*cos(Minc) + sin(Minc)) + Fpush = 1.7604933161 kilogram </code></pre> <p>Step 2: Calculate power requirement.</p> <pre><code>Vradps = maximum velocity under load in radians/second = 23.1481481481 radian / second Pmotor = required power per motor = (Ftotal * Vradps * Wd/2)/Nm = 1.22256480284 kilogram * meter * radian / second </code></pre> <p>Step 3: Calculate torque and speed requirement.</p> <pre><code>Tmotor = required torque per motor = Pmotor/Vradps = 5281.47994829 centimeter * gram = 73.345953832 inch * ounce RPMmin = required revolutions per minute per motor = Vradps / 0.104719755 = 221.048532325 rev / minute </code></pre> <p>Are my calculations correct? Intuitively, the final <code>Tmotor</code> and <code>RPMmin</code> values seem right, but my calculation for <code>Pmotor</code> doesn't exactly match the one used in the link, which doesn't explicitly do the conversion to radians / second and therefore doesn't result in the proper units.</p> <p>Here's my Python script for reproducing the above calculations:</p> <pre><code>from math import * #http://pint.readthedocs.org/en/0.6/tutorial.html from pint import UnitRegistry ureg = UnitRegistry() def velocity_to_rpm(v, r): kph = v.to(kilometer/hour) r = r.to(kilometer) d = r*2 rpm = (kph / (2*pi*r)) * ((1*hour)/(60.*minute)) * rev return rpm def velocity_to_radps(v, r): return velocity_to_rpm(v, r).to(radian/second) # Units km = kilometer = ureg.kilometer meter = ureg.meter newton = ureg.newton cm = centimeter = ureg.centimeter hr = hour = ureg.hour mm = millimeter = ureg.millimeter rev = revolution = ureg.revolution minute = ureg.minute sec = second = ureg.second kg = kilogram = ureg.kilogram gm = gram = ureg.gram deg = degree = ureg.degree rad = radian = ureg.radian oz = ureg.oz inch = ureg.inch # Conversions. km_per_mm = (1*km)/(1000000.*mm) hour_per_minute = (1*hour)/(60.*minute) minute_per_second = (1*minute)/(60*sec) minute_per_hour = 1/hour_per_minute gm_per_kg = (1000*gm)/(1*kg) cm_per_km = (100000*cm)/(1*km) # Constraints target_km_per_hour = (5*km)/(1*hour) # average walking speed estimated_platform_weight = 5*kg maximum_incline_degrees = 5*deg maximum_incline_radians = maximum_incline_degrees * ((pi*rad)/(180*deg)) maximum_pushing_force = estimated_platform_weight/4. 
maximum_velocity_at_worst_case = (5*km)/(1*hour) rolling_friction = 0.015 # rubber on pavement velocity_under_max_load = target_km_per_hour number_of_powered_motors = 2 # Variables wheel_diameter_mm = 120*mm wheel_radius_mm = wheel_diameter_mm/2 wheel_radius_km = wheel_radius_mm * km_per_mm rev_per_minute_at_6v_unloaded = 33*rev/(1*minute) rev_per_minute_at_6v_loaded = rev_per_minute_at_6v_unloaded/2. mm_per_rev = (wheel_diameter_mm * pi)/(1*rev) target_rpm = velocity_to_rpm(target_km_per_hour, wheel_radius_mm) target_radps = velocity_to_radps(target_km_per_hour, wheel_radius_mm) # Calculate total applied force at worst case. total_applied_force_worst_case = estimated_platform_weight * (rolling_friction*cos(maximum_incline_radians) + sin(maximum_incline_radians)) + maximum_pushing_force print 'Ftotal:',total_applied_force_worst_case # Calculate power requirement. vel_in_radps = velocity_to_radps(velocity_under_max_load, wheel_radius_mm) print 'Vradps:',vel_in_radps required_power = total_applied_force_worst_case * velocity_to_radps(velocity_under_max_load, wheel_radius_mm) * wheel_radius_mm.to(meter) required_power_per_motor = required_power/number_of_powered_motors print 'Pmotor:',required_power_per_motor # Calculate torque and speed requirement. required_angular_velocity = velocity_under_max_load/wheel_radius_km * hour_per_minute * minute_per_second * rad #rad/sec required_rpm = required_angular_velocity / 0.104719755 * (rev/rad) * (sec/minute) required_torque_per_motor = (required_power_per_motor/required_angular_velocity).to(gm*cm) print 'Tmotor: %s, %s' % (required_torque_per_motor, required_torque_per_motor.to(oz*inch)) print 'PRMmin:',required_rpm </code></pre>
Verifying motor selection calculations
<p>I've addressed your first two questions below.</p> <blockquote> <p>Firstly, does the inverse sensor model have to compute range and bearing, as opposed to cartesian coordinates? Why is this approach used?</p> </blockquote> <p>Given your current state (which includes the pose of the robot and the position of the landmarks), the inverse sensor model predicts what the observation <em>should</em> be. In other words, if you have a laser scanner that reports the range and bearing of the landmark from your current pose, you want to compare that observation with what the inverse sensor model predicts. The reason why it is not converted to Cartesian coordinates is that the noise of the laser scanner observation is in range and bearing. Usually you want your observation model to generate a prediction whose form is as close as possible to what your physical sensor provides.</p> <blockquote> <p>Secondly, what format should my robot provide its heading in? Currently I just use a running offset from the origin angle (0), without wrapping it between 0 and 360. Turning right yields positive degrees, and left negative. I ask this as I assume the sensor model expects a certain format.</p> </blockquote> <p>If the only place the heading is used is in cosines and sines, then it shouldn't matter. However, you need to be careful whenever you take the difference between two angles. For example, 179 degrees minus -179 degrees gives 358 degrees. In reality, the difference is -2 degrees (which is equivalent to 358 degrees if you know it must be wrapped), but an algorithm minimizing the error might think 358 degrees is a large error.</p> <p>Also, the traditional 2D coordinate system has the angle being reported with respect to the x-axis, with x forward, y left and z up. As a result, turning right should <em>decrease</em> the heading angle.</p>
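<p>As a minimal illustration in Python/numpy (the state layout and the range–bearing sensor below are assumptions for the sketch, not taken from your code), this is what the prediction and the angle wrapping look like:</p>
<pre><code>import numpy as np

def wrap_angle(a):
    """Wrap an angle to [-pi, pi) so a difference like
    179 deg - (-179 deg) comes out as -2 deg instead of 358 deg."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def predict_measurement(robot, landmark):
    """Range-bearing prediction h(x): what the scanner *should* report
    for this landmark, given the current robot pose [x, y, theta]."""
    dx = landmark[0] - robot[0]
    dy = landmark[1] - robot[1]
    r = np.hypot(dx, dy)
    bearing = wrap_angle(np.arctan2(dy, dx) - robot[2])
    return np.array([r, bearing])

def innovation(z_measured, z_predicted):
    """Measurement residual; the bearing component must be wrapped."""
    v = z_measured - z_predicted
    v[1] = wrap_angle(v[1])
    return v

# Example: robot at the origin facing +x, landmark at (2, 2)
robot = np.array([0.0, 0.0, 0.0])
landmark = np.array([2.0, 2.0])
print(predict_measurement(robot, landmark))   # roughly [2.83, 0.785]
</code></pre>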
6905
2015-03-28T03:25:08.680
|slam|ekf|python|
<p>I've recently been learning about SLAM and have been attempting to implement EKF-SLAM in python. I've been using this <a href="http://ocw.mit.edu/courses/aeronautics-and-astronautics/16-412j-cognitive-robotics-spring-2005/projects/1aslam_blas_repo.pdf" rel="nofollow">great article</a> as a guide. Some progress has been made, but I'm still confused by certain stages.</p> <p>Firstly, does the <em>inverse sensor model</em> have to compute range and bearing, as opposed to cartesian coordinates? Why is this approach used?</p> <p>Secondly, what format should my robot provide its heading in? Currently I just use a running offset from the origin angle (0), without wrapping it between 0 and 360. Turning right yields positive degrees, and left negative. I ask this as I assume the sensor model expects a certain format.</p> <p>Thirdly, when computing the jacobians for adding new landmarks, (page 35) is Jz simply the absolute rotation of the robot (-540 degrees for example) plus the bearing the landmark was detected at?</p> <p>And finally, what's the best approach for managing the huge covariance matrix? I'm currently thinking of a good way to 'expand' P when adding new landmarks.</p> <p>Here's my current implementation: <a href="http://pastebin.com/r7wUMgY7" rel="nofollow">http://pastebin.com/r7wUMgY7</a></p> <p>Any help would be much appreciated! Thanks.</p>
SLAM noob here, a few questions regarding EKF-SLAM
<p>You must use one matrix, because you must manage the cross covariances. The best way to add a landmark is to expand the matrix, by copying it into a larger one (with two extra rows for x,y and two extra columns for the same). Then, you have to initialize all the off-diagonal terms using the jacobian matrix. Similarly, the state vector must be expanded to include the new landmark as well.</p> <p><a href="https://robotics.stackexchange.com/a/5173/163">This answer</a> shows how to initialize a new landmark in this way. Note the matrix shown there is <em>block</em> represented. It could in fact be very large.</p> <p>Now how you represent that matrix is up to you. However, keeping many small matrices that you stitch together is no more efficient (cpu or memory) than keeping one large one.</p>
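<p>As a rough numpy sketch of that expansion step (the Jacobians <code>Gx</code> and <code>Gz</code> and the noise <code>R</code> below are placeholders you would compute from your own inverse sensor model):</p>
<pre><code>import numpy as np

def add_landmark(x, P, new_xy, Gx, Gz, R):
    """Grow the state vector and covariance when a landmark is first seen.

    x      : current state vector (robot pose first, then landmarks)
    P      : current covariance matrix
    new_xy : (2,) initialised landmark position in the world frame
    Gx     : 2x3 Jacobian of the initialisation w.r.t. the robot pose
    Gz     : 2x2 Jacobian of the initialisation w.r.t. the measurement
    R      : 2x2 measurement noise covariance
    """
    n = x.size
    x_new = np.concatenate([x, new_xy])

    P_new = np.zeros((n + 2, n + 2))
    P_new[:n, :n] = P                                   # copy the old matrix
    Pxx = P[:3, :3]                                     # robot-pose covariance
    P_new[n:, n:] = Gx @ Pxx @ Gx.T + Gz @ R @ Gz.T     # new landmark block
    cross = Gx @ P[:3, :]                               # cross covariances (2 x n)
    P_new[n:, :n] = cross
    P_new[:n, n:] = cross.T
    return x_new, P_new

# Example with a pose-only filter (3 states) and dummy Jacobians
x0 = np.zeros(3); P0 = np.eye(3) * 0.1
Gx = np.array([[1.0, 0.0, -1.0], [0.0, 1.0, 0.5]])
Gz = np.eye(2)
R  = np.diag([0.01, 0.001])
x1, P1 = add_landmark(x0, P0, np.array([2.0, 1.0]), Gx, Gz, R)
print(P1.shape)   # (5, 5)
</code></pre>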
6908
2015-03-28T18:17:45.683
|slam|ekf|python|
<p>I've recently been learning about SLAM and EKF-SLAM. </p> <p>I've began my implementation in python, but have had trouble managing the updating of P, especially when it comes to adding new landmarks. Currently there is no 'P' but just a few separate matrices that I have to stitch together when needed.</p> <p>My implementation can be seen here: <a href="http://pastebin.com/r7wUMgY7" rel="nofollow">http://pastebin.com/r7wUMgY7</a> </p> <p>How best should I manage the large covariance matrix, should I be using one matrix, like the algorithm suggests? Thanks in advance.</p>
EKF-SLAM, how best to manage the 'P' covariance matrix, programatically
<h3>The answer</h3> <p>Specifically, the arguments to this jacobian are the state of the robot.</p> <h3>The reason</h3> <p>It is the jacobian of the measurement function with respect to the state. </p> <p>If you knew the state of the robot and landmark, what function would you use to predict what the measurement would be? If you have a range sensor, it would be the distance between the landmark and robot positions. Take the <a href="https://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant" rel="nofollow">jacobian</a> of this function with respect to each of the state variables (robot x, robot y, robot theta, landmark x, landmark y).</p> <p>In their first example, we're measuring the difference in x and y.</p> <p>So the predicted position of the landmark equals the position of the robot plus the $x$ and $y$ difference (rotated a little by the robot rotations).</p> <p>So the jacobian with respect to the robot pose is $$\begin{bmatrix} \frac{d h_1}{dx_r}, \frac{d h_1}{dy_r}, \frac{d h_1}{d\theta_r}\\ \frac{d h_2}{dx_r}, \frac{d h_2}{dy_r}, \frac{d h_2}{d\theta_r}\\ \end{bmatrix}$$</p> <p>Here, $h_1$ is the function which gives the $x$ coordinate of the landmark, and $h_2$ is the function which gives the $y$ coordinate of the landmark. </p>
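<p>As a quick check, here is the same calculation in sympy, assuming the common range–bearing initialisation $h_1 = x_r + r\cos(\theta_r + \phi)$, $h_2 = y_r + r\sin(\theta_r + \phi)$ (the guide's $\Delta x$/$\Delta y$ form works the same way):</p>
<pre><code>import sympy as sp

xr, yr, th, r, phi = sp.symbols('x_r y_r theta_r r phi')

# Landmark initialisation (inverse sensor model) for a range-bearing sensor
lx = xr + r * sp.cos(th + phi)
ly = yr + r * sp.sin(th + phi)
h = sp.Matrix([lx, ly])

Jxr = h.jacobian(sp.Matrix([xr, yr, th]))   # w.r.t. the robot pose
Jz  = h.jacobian(sp.Matrix([r, phi]))       # w.r.t. the measurement (r, phi)

sp.pprint(Jxr)   # [[1, 0, -r*sin(th+phi)], [0, 1, r*cos(th+phi)]]
sp.pprint(Jz)    # [[cos(th+phi), -r*sin(th+phi)], [sin(th+phi), r*cos(th+phi)]]
</code></pre>
<p>Both Jacobians are evaluated at the robot's current (post-odometry) pose estimate and at the measured range and bearing, so the angle inside the sines and cosines is the robot's heading plus the measured bearing.</p>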
6909
2015-03-28T18:25:41.363
|slam|ekf|python|
<p>I've been working through <a href="http://ocw.mit.edu/courses/aeronautics-and-astronautics/16-412j-cognitive-robotics-spring-2005/projects/1aslam_blas_repo.pdf" rel="nofollow">this informative guide</a> on EKF-SLAM but I'm having difficulty understanding the jacobians required for the 'landmark update', on page 35.</p> <p>What exactly are Jxr and Jz taking as input? Are they taking the current rotation of the robot, plus the addition of the odometry update, i.e. the rotation that is now stored in the 'X' state vector? Or are they taking the angle from the Inverse Sensor Model, and if so, what's the 'delta' angle from?</p> <p>Thanks.</p>
EKF-SLAM Computing the jacobians for landmark updates
<p>Robotics is a relatively new field of study. Besides technological problems, there are psychological and even moral issues that need to be resolved.</p> <p>Some examples of these non-technological issues include familiarity with and fear of robots, trusting robots and robot rights. Other issues have to do with popularity. You may be interested in reading <a href="https://robotics.stackexchange.com/q/790/158">this similar question asking whether toy robots move technology forwards</a>. Many of the arguments there hold for any kind of activity including robots. With that in mind, let's focus on the technological aspects only.</p> <p>With contests for simpler robots, such as Robocup, generally the effect is that the popularity of robotics increases. With more people getting drawn to robotics, the technology is sure to move forwards. In other words, <strong>popularity brings advancements</strong>, which is true for most if not all scientific fields.</p> <p>The more complex challenges, like the DARPA challenge surely help advance the technology. However, they are more of a short-term solution than a long-term one. With the DARPA challenge, you attract expert roboticists to create a solution for a complex problem. That's good, but that's just it. With Robocup, you create interest in robotics among many young students. You don't really get anything out of it immediately, but you get a new generation of roboticists that each would go on the rest of their lives inventing new things.</p> <p>While both kinds of challenges are definitely beneficial, I would say the ones like Robocup have a better shot at advancing robotics towards its goals (pun responded). Imagine if only one type of challenge existed. If there were DARPA challenges only, robotics would advance greatly, but eventually roboticists would die out and there would be no one to pick up where they left. If there were Robocup challenges only, you don't make an instant improvement, but you will have a continuous supply of fresh minds entering the field.</p> <p>In conclusion, DARPA challenges provide instant solutions, while Robocup contests provide a sustainable source of inventors.</p>
6927
2015-03-31T13:40:33.730
|design|
<p>Are events like Robocup advantageous to the advancement of robotics? <br> Or are they merely entertainment which advances robotics by allowing entry level participation which helps maintain interest? <br> Do the DARPA Grand Challenges provide a better vehicle for advancement? (pun intended) </p>
How important are events like "Robocup" to the advancement of Robotics in general?
<p>This is really simple. First of all, you need to understand how the sensor works. In other words, you need to understand whether the measurements come from a linear or a nonlinear model. Second, what is the type of the sensor's noise?</p> <p>CASE STUDY: Let's say you want to simulate a DC voltmeter measuring a battery voltage of 5 Volt. In an ideal case, the model of the system is </p> <p>$$ V_{b}(k+1) = V_{b}(k) $$ It is just a constant value whether you measure it now or in the future, the value is 5 Volt. In reality, this is not the case. If you measure it now, you might get 4.9 Volt, later on 5.1 Volt. The measurements vary which means the sensor is noisy. In many cases, the noise is Gaussian with zero mean and some variance (i.e. $\mathcal{N}(0, \sigma^{2})$). Therefore, the measurement model is </p> <p>$$ Z(k+1) = V_{b}(k+1) + \delta $$</p> <p>where $\delta$ is the measurement noise (i.e. $\mathcal{N}(0, \sigma^{2})$). </p> <p>To simulate this scenario in Matlab, </p> <pre><code>clear all; clc; ideal_value = 5; sigma = 0.01; real_value = ideal_value + sigma*randn(5,1) </code></pre> <p>The output is then </p> <pre><code>real_value = 5.0009 5.0173 4.9939 4.9926 4.9825 </code></pre> <p>In reality, there is no way to know the ideal value. If you want to decrease the accuracy of your sensor, you need to increase the value of $\sigma$. In a 2D laser scenario, the input of the sensor is the Cartesian location ($x, y$) and the output is the polar coordinates ($r, \phi$) with some Gaussian noise. Let's say we have a 2D laser sensor located at the origin and a tree in a planar environment. The sensor, with an update rate of 10 Hz, provides the range $r$ and the bearing $\phi$ to the tree with some noise in both. The following code simulates this scenario:</p> <pre><code>clear all clc tree_x = 4; tree_y = 4; sigma_r = 0.1; % m sigma_p = deg2rad(1); % rad dt = 1/10; t = 0:dt:0.5; for i = 1:numel(t) r(i) = sqrt( tree_x^2 + tree_y^2 ) + sigma_r*randn(); p(i) = atan2( tree_y, tree_x ) + sigma_p*randn(); end r rad2deg(p) </code></pre> <p>The results are</p> <pre><code>r = 5.7598 5.7581 5.6356 5.5525 5.6130 5.7552 p = 44.6549 45.6293 44.1343 44.7299 44.5913 44.7023 </code></pre>
6931
2015-03-31T23:21:23.030
|imu|accelerometer|gyroscope|simulation|
<p>If I have a robot path in 2D space, i.e. a vector of (x,y) locations, and I need to generate artificial IMU data (simulate them), how would I go about it? </p> <p>How do I model equations to generate the values given a time frame and positions?</p> <p>I've come across <em>imusim</em>; I'd like to know how to model them and generate the data using Matlab or something similar.</p>
Simulate IMU (2D gyro and accelerometer) data
<p>Consider an overhead camera (a single low-cost webcam is enough). Ultrasonic sensors are directional, whereas a camera covers the whole scene at once. A downward-facing camera placed at a height and horizontally centered over the area can detect the robot and the obstacles in Cartesian coordinates, and that information can be transferred to the moving robot using Bluetooth or similar methods. If the camera cannot be mounted overhead, an inverse perspective transform or rotation matrices can be applied. If occlusion occurs, more cameras can be used, generally with data fusion. </p>
6944
2015-04-03T09:34:21.310
|sensors|computer-vision|sonar|ultrasonic-sensors|
<p>So the idea is that there would be one robot acting as overwatch, which would detect all of the obstacles in an area (which are not necessarily static), and then send the data about the obstacles' positions to another robot that would navigate around the obstacles to a goal.</p> <p>My initial thought was to have the overwatch robot be in an elevated position in the centre of the area, then sweep around using an ultrasonic sensor. This way, it could keep track of the obstacles in a set of polar coordinates (distance, angle). But then I realised that this method doesn't account for collinear obstacles.</p> <p>So the question is, what is the best way to detect a bunch of non-static obstacles within an area?</p> <p>As a side note, I have seen a system similar to this, where there was a robot detecting obstacles (in that case, a crowd of people) and another robot pathfinding around the obstacles (the people), but I'm unsure exactly how that system was detecting the obstacles.</p>
Dynamically detect changing obstacles
<blockquote> <p>The system could be seen as an equivalent of a mass-spring-damper system, where damper and spring are removed</p> </blockquote> <p>This is correct if you assume $x(t)$ to be the output but you are not. </p> <blockquote> <p>y is simply the force moving the mass. </p> </blockquote> <p>If $y$ is the force, this means it is the input of the system?!</p> <blockquote> <p>BUT in this case I need to drive the force using x(t) and not the contrary</p> </blockquote> <p>This is impossible. What you are saying makes no sense. </p> <p>This is not a dynamic system and your simulink diagram is not correct. Your actual system is an amplifier of the second derivative of a given signal. You have the output as a function of time (i.e. $y(t)$), therefore there is no point of applying transfer function tool. The input of this system is the second derivative of a given signal, thus if the input is the step function which means $x(t) = 1, t \geq 0 $, its second derivative is zero. If you set the input as $x(t) = sin(t)$ and $a=0.5$, the output is </p> <p><img src="https://i.stack.imgur.com/MynfO.png" alt="enter image description here"></p> <p>and this is the matlab script for the above picture. </p> <pre><code>clear all; clc; dt = 0.0001; t = -pi:0.01:2*pi; x = sin(t); % the input % plot(t, x, 'b') dx = diff(x)./diff(t); % the first derivative of the input % hold on % plot(t(2:end), dx, 'r') ddx = diff(dx)./diff(t(2:end)); % the second derivative of the input % hold on plot(t(3:end), ddx, 'b') a = 0.5; % output y = 1/a * ddx; hold on plot(t(3:end), y, 'r') l= legend(['$\ddot{x}(t)$'], 'y(t)'); set(l,'Interpreter','Latex'); </code></pre>
6945
2015-04-03T09:44:16.797
|control|pid|matlab|
<p>I have a generic problem to create a controller for the following system: $$\ddot{x}(t) = a y(t)$$ where $a$ is a constant real value. The system could be seen as an equivalent of a mass-spring-damper system, where damper and spring are removed. Also $x(t)$ is the $x$ dimension and $y$ is simply the force moving the mass. <strong>BUT</strong> in this case I need to drive the force using $x(t)$ and not the contrary.</p> <p>Transforming according Laplace I get: $$ y(t) = \frac{1}{a}\ddot{x}(t)$$ $$ Y(s) = \frac{1}{a}s^{2}X(s)$$ $$ G(s) = \frac{Y(s)}{X(s)} = \frac{s^{2}}{a}$$</p> <p>Considering that $a = 1$ I implemented a possible example in Simulink. </p> <p><img src="https://i.stack.imgur.com/wuzjG.png" alt="enter image description here"></p> <p><em>Please not that I put the output given by the scope for showing up the resulting answer of the system.</em></p> <p>So I have 2 questions:</p> <ol> <li>Is it possible to develop such a system? As far as I know the degree of the numerator should be $=&lt;$ the degree of the denominator. So is the above system possible?</li> <li>Is it possible to create a PID or PD controller to stabilize the output of the system?</li> </ol> <p>Regards</p>
Help to dimension the right controller for the following Tranfer Function
<p>You can configure it with a static ip address, and assign your laptop a (different) static ip address. <a href="http://derekmolloy.ie/set-ip-address-to-be-static-on-the-beaglebone-black/" rel="nofollow">Here</a> is a link to a guide to setting a static ip address.</p>
6980
2015-04-08T13:52:44.213
|beagle-bone|
<p>Since the day I bought it I always use ethernet over USB connection, now I need to use RJ45 LAN cable to connect Beaglebone from my laptop, but my laptop can't even detect LAN connection from it, what could go wrong? Do I need straight or crossover cable? Do I need to configure something first on my BeagleBone?</p> <blockquote> <p>UPDATE: Managed to connect it through Crossover cable and assign it IP address by running DHCP server on my laptop.</p> </blockquote> <p><img src="https://i.stack.imgur.com/X3e16.png" alt="enter image description here"></p> <blockquote> <p>As seen above my laptop assign IP 169.254.223.76, but when I tried to connect to that IP using puTTY it gives me connection refused.</p> </blockquote> <p><img src="https://i.stack.imgur.com/LUNAe.png" alt="enter image description here"> Please help.</p>
Beaglebone not accessible through LAN?
<p>You have a few problems that I see. The biggest problem is that your values for P, I, and D and too aggressive. This is making the PID loop oscillate from hard stop to hard stop.</p> <p>You generally want to take a cautious approach to tuning a PID loop. Since PID control is feedback based, it is pretty easy to get a positive feedback loop which makes the system diverge. My preferred approach to tuning is to start with tuning P leaving I and D at 0.0. With the way your system is defined, a P value of 1.0 will try to remove all the error in a single time step. I would start with a lower P value and increase it until the system responds quickly without much overshoot. You might start with 0.1 and increase in small steps until you find something reasonable. Note, that using the P value alone, you will always have some residual error.</p> <p>After tuning the P value, I would move onto the I value. The purpose of the I value is to reduce steady state error. It does this by responding to the accumulation of error over time. If the steady state error is acceptable, it is possible to not use the I term at all. Because it is integrating the error over time, it is more powerful than the P value and you will need a smaller constant. You might start with a value of 0.01 and try increasing it from there. You should be able to remove the steady state error with just P and I. Increasing the value of I will add momentum to your system and will tend to increase overshoot. The implementation you are using calculates the integral error with a simple integration, so the value of I you need will depend on your update rate for an actual physical system.</p> <p>After tuning P and I, then I would move onto D. The purpose of the D term is to reduce overshoot. It does this by predicting future reduction in error based on estimating the derivative of the error. The derivative of the error can be a noisy estimate on some systems, so keep that in mind if you use D. The value you need for D will depend on the values you are using for P and I. The implementation you are using calculates the D term based on a simple difference, so the constant you need will also depend on your integration rate for an actual physical system. Using some D might allow you to be more aggressive with your usage of P and I.</p> <p>You have some other problems as well. Your output range is pretty tight around the desired output. Ideally your input would be 120 which implies your ideal output would be 90. The value 90 is near the edge of your allowed output range of [50,100]. This is going to make it a bit more difficult on the PID controller.</p> <p>The implementation you are using has a bug in the integral error integration. Integral error has a problem that it can accumulate without bound over time in some situations. This integral windup can cause very unintuitive outputs from PID systems. For this reason, all practical PID systems limit the magnitude of the integral error. 
The calculation used in the implementation only really works for outputs near 0 I think.</p> <p>The implementation has:</p> <pre><code>/* Integrate the errors as long as the upcoming integrator does not exceed the minimum and maximum output thresholds */ if (((m_totalError + m_error) * m_I &lt; m_maximumOutput) &amp;&amp; ((m_totalError + m_error) * m_I &gt; m_minimumOutput)) { m_totalError += m_error; } </code></pre> <p>but I think this should be:</p> <pre><code>/* Integrate the errors as long as the upcoming integrator does not exceed the minimum and maximum output thresholds */ /* What the output would be if integral error is updated */ double outputWithI = (m_P * m_error + m_I * (m_totalError + m_error) + m_D * (m_error - m_prevError)); if ((outputWithI &lt; m_maximumOutput) &amp;&amp; (outputWithI &gt; m_minimumOutput)) { m_totalError += m_error; } </code></pre> <p>The problem with the original implementation is that if the integral term alone doesn't get the output into the correct range then the error isn't integrated.</p> <p>A common alternative approach is to directly limit the magnitude of m_totalError and therefore the total contribution of the I term. An implementation along these lines might look something like:</p> <pre><code>m_totalError += m_error; if (m_totalError &lt; -m_maxTotalErrorMagnitude) { m_totalError = -m_maxTotalErrorMagnitude; } else if (m_totalError &gt; m_maxTotalErrorMagnitude) { m_totalError = m_maxTotalErrorMagnitude; } </code></pre> <p>I'm more used to seeing this solution used, although I think either should work.</p>
6992
2015-04-09T22:05:35.707
|control|pid|
<p>I was looking for an implementation of a PID controller in Java and I found this one:</p> <p><a href="https://code.google.com/p/frcteam443/source/browse/trunk/2010_Post_Season/Geisebot/src/freelancelibj/PIDController.java?r=17" rel="nofollow">https://code.google.com/p/frcteam443/source/browse/trunk/2010_Post_Season/Geisebot/src/freelancelibj/PIDController.java?r=17</a></p> <p>So, for what I could understand about it I am using it this way:</p> <pre><code>package lol.feedback; public class dsfdsf { public static void main(String[] args) throws InterruptedException { final PIDController pidController = new PIDController(1, 1, 1); pidController.setInputRange(0, 200); // The input limits pidController.setOutputRange(50, 100); // The output limits pidController.setSetpoint(120); // My target value (PID should minimize the error between the input and this value) pidController.enable(); double input = 0; double output = 0; while (true) { input = output + 30; pidController.getInput(input); output = pidController.performPID(); System.out.println("Input: " + input + " | Output: " + output + " | Error: " + pidController.getError()); Thread.sleep(1000); } } } </code></pre> <p>But he never stabilizes. He doesn't behave like a PID at all... This is the output I get:</p> <pre><code>Input: 30.0 | Output: 100.0 | Error: 90.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 
130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 Input: 80.0 | Output: 100.0 | Error: 40.0 Input: 130.0 | Output: 50.0 | Error: -10.0 </code></pre> <p>Can someone help me tell me what I am missing?</p> <p>Thank you!</p>
My PID Controller in Java is not operating correctly
<p>Those two designs you posted are identical as far as the placement of propellers goes, so yes, they would both work, and with exactly the same flight electronics. </p> <p>The second ring design though, uses approximately twice as much material for the frame hence it will be heavier, while in practice you'd probably want to minimize the weight of the airframe. There doesn't seem to be a convenient place to mount a central flight computer, but I'm sure a little creativity could lead to a ring design that is practical.</p> <p>Here's a slightly different configuration for a quadcopter (called V-tail) that seems to be gaining popularity. It would need slightly different logic.</p> <p><img src="https://i.stack.imgur.com/2sC2f.jpg" alt="enter image description here"></p>
6999
2015-04-11T02:54:17.493
|quadcopter|design|frame|
<p>Quadcopter frames seem to consistently follow the same X design. For example:</p> <p><a href="https://i.stack.imgur.com/IpaTl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IpaTlm.png" alt="Quadcopter X Frame"></a></p> <p>I'm curious to know why that is. It certainly seems like the most efficient way to use space but is it the only frame design that would work for quadcopters?</p> <p>For instance, would a design like this work?</p> <p><a href="https://i.stack.imgur.com/cMDxM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cMDxMm.png" alt="Quadcopter O Frame"></a></p> <p>Why or why not?</p>
Quadcopter frame design
<p>You need to use the mavlink parameter protocol, which is a coordinated set of mavlink messages between the ground control station and the vehicle (a bit trickier than just sending strings over the serial port, unfortunately). The protocol is documented here: <a href="http://qgroundcontrol.org/mavlink/parameter_protocol" rel="nofollow noreferrer">http://qgroundcontrol.org/mavlink/parameter_protocol</a></p> <p>The following diagrams shows the flow of messages (replace "QGroundControl" with your application and "MAV Component" with your vehicle):</p> <p><img src="https://i.stack.imgur.com/h60s2.png" alt="enter image description here"></p> <p>(You might also want to double check the parameter names for the particular flight controller/vehicle you're using, e.g. here are the parameters for ArduCopter/APM:Copter: <a href="http://copter.ardupilot.com/wiki/configuration/arducopter-parameters/" rel="nofollow noreferrer">http://copter.ardupilot.com/wiki/configuration/arducopter-parameters/</a>)</p>
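<p>For illustration, here is a rough Python sketch of that exchange using pymavlink; the connection string and the parameter name are placeholders, so check your firmware's parameter list for the real ids:</p>
<pre><code>from pymavlink import mavutil

# Placeholder connection string - adjust the port/baud for your setup
master = mavutil.mavlink_connection('/dev/ttyUSB0', baud=57600)
master.wait_heartbeat()

def set_param(name, value):
    """Send PARAM_SET and wait for the echoed PARAM_VALUE as confirmation."""
    master.mav.param_set_send(
        master.target_system, master.target_component,
        name.encode('utf-8'), float(value),
        mavutil.mavlink.MAV_PARAM_TYPE_REAL32)
    return master.recv_match(type='PARAM_VALUE', blocking=True, timeout=5)

# Hypothetical rate-controller gain; use the actual parameter id of your firmware
print(set_param('RATE_PIT_P', 0.15))
</code></pre>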
7002
2015-04-11T17:32:14.280
|c++|mavlink|
<p>I want to submit my gains for the PID regulator via MAVLink. Unfortunately, I am not very used to MAVLink and there are several functions which may be used for that purpose (I think). My string is currently JSON formatted and I was directly sending it to the serial port before. </p> <p>Is there a straight forward way to submit the data like it is (see below) with MAVLink, or is it better not to transfer a JSON string with MAVLink and submit each single value? If yes, what is the function of choice. </p> <p>So far I noticed that for most of the sensors, there are already MAVLink function defined. For the PID gains I found not so much.</p> <pre><code>AP_HAL::UARTDriver *pOut = uartX == UART_C ? hal.uartC : hal.uartA; pOut-&gt;printf( "{\"t\":\"pid_cnf\"," "\"p_rkp\":%.2f,\"p_rki\":%.2f,\"p_rkd\":%.4f,\"p_rimax\":%.2f," "\"r_rkp\":%.2f,\"r_rki\":%.2f,\"r_rkd\":%.4f,\"r_rimax\":%.2f," "\"y_rkp\":%.2f,\"y_rki\":%.2f,\"y_rkd\":%.4f,\"y_rimax\":%.2f," "\"p_skp\":%.2f,\"r_skp\":%.2f,\"y_skp\":%.4f}\n", static_cast&lt;double&gt;(pit_rkp), static_cast&lt;double&gt;(pit_rki), static_cast&lt;double&gt;(pit_rkd), static_cast&lt;double&gt;(pit_rimax), static_cast&lt;double&gt;(rol_rkp), static_cast&lt;double&gt;(rol_rki), static_cast&lt;double&gt;(rol_rkd), static_cast&lt;double&gt;(rol_rimax), static_cast&lt;double&gt;(yaw_rkp), static_cast&lt;double&gt;(yaw_rki), static_cast&lt;double&gt;(yaw_rkd), static_cast&lt;double&gt;(yaw_rimax), static_cast&lt;double&gt;(pit_skp), static_cast&lt;double&gt;(rol_skp), static_cast&lt;double&gt;(yaw_skp) ); </code></pre>
Easiest way to submit a longer non standard character string via MAVLink
<p>Euler method is horrible for general ODE timestepping because it is globally only first order accurate. This means that to decrease the error by a factor of 1/2, you'll need to decrease the timestep by half. Effectively, this also means that you'll need twice as many timesteps to improve your accuracy at the same final timestep. Furthermore, the error tends to increase quite dramatically as you take more and more timesteps. This is especially true for more complex systems.</p> <p>The system you posted in your code appears to be very well behaved; no extreme oscillations or sharp changes in the gradient of the solution. It doesn't surprise me that euler and runge-kutta yield nearly the same solution in this case. I caution you that this is generally not the case for most systems. I highly discourage the use of forward euler method for general purpose ode timestepping.</p> <p>The Runge-Kutta (ode45) method is more popular because it is fourth order accurate globally (and fifth order locally, hence the numbers 45 in the name). This means that if you decrease your timestep by 1/2, your error decreases by a factor of 1/16. The extra accuracy you gain from the 4th order accuracy of runge-kutta far outweighs the extra computational cost. This is especially true if you need adaptive timestepping, as @fibonatic suggests, because you may not need as many refinement steps if you use a higher order accurate method like runge-kutta.</p>
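<p>A quick way to see those convergence rates is to compare both methods on a test problem with a known solution, e.g. $\dot{x} = -x$, $x(0) = 1$, whose exact value at $t = 1$ is $e^{-1}$; halving $h$ should roughly halve the Euler error and divide the RK4 error by about 16:</p>
<pre><code>import numpy as np

def euler(f, x0, h, n):
    x = x0
    for _ in range(n):
        x = x + h * f(x)
    return x

def rk4(f, x0, h, n):
    x = x0
    for _ in range(n):
        k1 = f(x)
        k2 = f(x + 0.5 * h * k1)
        k3 = f(x + 0.5 * h * k2)
        k4 = f(x + h * k3)
        x = x + (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4)
    return x

f = lambda x: -x              # test ODE: dx/dt = -x
T, exact = 1.0, np.exp(-1.0)

for h in (0.1, 0.05, 0.025):
    n = int(round(T / h))
    print(h,
          abs(euler(f, 1.0, h, n) - exact),   # error shrinks like O(h)
          abs(rk4(f, 1.0, h, n) - exact))     # error shrinks like O(h^4)
</code></pre>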
7004
2015-04-12T00:18:48.990
|control|
<p>The dominant approach for solving ODE in control systems books is <code>ode45</code> since the majority of these books use Matlab. I'm not acquainted with how the <code>ode45</code> works but lately I started reading about Euler's method in this book <a href="http://rads.stackoverflow.com/amzn/click/0073401064" rel="nofollow noreferrer">Numerical Methods for Engineers</a>. If the step size is very small, then the results are satisfactory. For simulation, one can actually set the step size to be very small value. I've used <code>ode45</code> in <a href="https://robotics.stackexchange.com/questions/6859/how-to-implement-tracking-problem-with-pid-controller/6867?noredirect=1#comment9843_6867">here</a> for regulation and tracking problems. I faced some difficulties for using <code>ode45</code> for tracking problem since the step size is not fixed. Now for the same experiment, I've used the Euler's method with step size 0.001 sec. The results are amazing and so friendly in comparison with <code>ode45</code>. This is a snapshot from the result </p> <p><img src="https://i.stack.imgur.com/J5nLD.png" alt="enter image description here"> </p> <p>And this is the code </p> <pre><code>clear all; clc; dt = 0.001; t = 0; % initial values of the system a = 0; % angular displacement da = 0; % angular velocity % PID tuning Kp = 50; Kd = 18.0; Ki = 0.08; error = 0; %System Parameters: m = 0.5; % mass (Kg) d = 0.0023e-6; % viscous friction coefficient L = 1; % arm length (m) I = 1/3*m*L^2; % inertia seen at the rotation axis. (Kg.m^2) g = 9.81; % acceleration due to gravity m/s^2 % Generate Desired Trajectory y = 0:dt:(3*pi)/2; AngDes = y; % Ang: angle , Des: desired AngDesPrev = 0; for i = 1:numel(y) % get the first derviative of the desired angle using Euler method. dAngDes = ( AngDes(i) - AngDesPrev )/ dt; AngDesPrev = AngDes(i); % torque input u = Kp*( AngDes(i) - a ) + Kd*( dAngDes - da ) + Ki*error; % accumulated error error = error + ( AngDes(i) - a ); %store the erro E(i) = ( a - AngDes(i) ); T(i) = t; dda = 1/I*(u - d*da - m*g*L*sin(a)); % get the function and its first dervative da = da + dda*dt; a = a + da*dt; %store data for furhter investigation A(i) = a; dA(i) = da; t = t + dt; end plot(T, AngDes, 'b', T, A, 'g', 'LineWidth', 1.0) h = legend('$\theta_{d}(t)$', '$\theta(t)$'); set(h, 'Interpreter','LaTex') </code></pre> <p>My question is why <code>ode45</code> is preferred in many control books assuming the step size is very small. </p>
Euler’s Method Or ode45 for solving ODE for control systems
<p>To further explain my comment, I've decided to turn it into an answer.</p> <p>Using the law of sines to calculate distance.</p> <p>First mount the cameras on the robot parallel like this ---¦--- ---¦---</p> <p>¦ is the lens. Mount them 100mm apart - measuring from the lenses.</p> <p>Second find the viewing angle of your camera, in this case 75*.</p> <p>Third get data from the camera: Cam1 : (50, 50); Cam2 : (10, 50);</p> <p>Fourth (optional depending on the co-ordinate system used) Normalize the data (transform the co-ordinate system) to center the origin and re-map your co-ordinates.</p> <p>Fifth Calculate your degrees per division - take your viewing angle and the range of the supplied co-ordinate system and calculate how many degrees are represented per division.</p> <p>Sixth multiply this Degrees per division by each of your co-ordinate values. To get the degrees from center line to the object for each camera.</p> <p>Seventh using the Law of Sines you can calculate the distance from each camera to the object using both of the angles you calculated plus the distance between the camera lenses. Optionally average them to get the distance from the camera mounting mid-point to the object.</p> <p>There are many optimizations that can be applied to this process, especially calibrating the camera lens but that's another question. I would say that errors do creep in and taking multiple readings and averaging the result can improve reliability.</p>
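<p>A small Python sketch of steps five to seven, assuming a 75° field of view, a 100mm baseline and pixel x-coordinates already re-mapped so that 0 is the optical centre (the image half-width used below is an assumption):</p>
<pre><code>import math

FOV_DEG = 75.0          # assumed horizontal field of view
HALF_WIDTH_PX = 160.0   # assumed half image width after centring the origin
BASELINE_MM = 100.0     # distance between the two lenses

def pixel_to_angle(x_px):
    """Degrees per division times the (centred) pixel coordinate."""
    return (FOV_DEG / 2.0) * (x_px / HALF_WIDTH_PX)

def distance_mm(x_left_px, x_right_px):
    """Law of sines on the camera-camera-object triangle.
    Angles are measured from each optical axis, positive to the right."""
    t1 = math.radians(pixel_to_angle(x_left_px))    # left camera
    t2 = math.radians(pixel_to_angle(x_right_px))   # right camera
    a = math.radians(90.0) - t1      # interior angle at the left camera
    b = math.radians(90.0) + t2      # interior angle at the right camera
    c = math.pi - a - b              # angle at the object (equals t1 - t2)
    d_left = BASELINE_MM * math.sin(b) / math.sin(c)
    d_right = BASELINE_MM * math.sin(a) / math.sin(c)
    return 0.5 * (d_left + d_right)  # average range from the two lenses

print(distance_mm(20.0, -20.0))      # object roughly centred between the cameras
</code></pre>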
7012
2015-04-13T10:53:22.843
|microcontroller|robotic-arm|cameras|stereo-vision|
<p>I'm working on a project implementing a vision system. I'm a student and this is the first time I'm doing something like this, and it has been a challenge.</p> <p>I'm using a controller (Netduino+2, .Net MicroFramework) and a camera (CmuCam5 - Pixy) and for now it's working well. I'm communicating with the robot (Fanuc M430iA) using Modbus, and acquiring the data from the camera using I2C. </p> <p>But the next challenge is using 2 cameras to implement stereo vision, and I'm not sure how to achieve that. I'm reading a lot about it and I understand the process and generally how it works, but I think my case is very specific.</p> <p>My cameras detect the center of an object and give me the coordinates, so I have that, and that's good. </p> <p>What do you think is the most reasonable approach?</p> <p>(sorry for my english, let me know if I'm not being explicit, I'll edit the question if I see there's not enough information)</p>
algorithm Simple Stereo Vision
<p>The best answer is probably going to be an ultrasonic sensor on a servo. They are available in a huge range of specifications, from very close to very far range depending on your application, and with varying beam widths depending on your accuracy needs. If you need more than 255 steps you could go with a motor and an encoder, but that will be slightly more complex. </p>
7016
2015-04-14T09:18:09.897
|mobile-robot|sensors|cameras|stereo-vision|lidar|
<p>I wonder whether there is any <strong>simple (computable at the microcontroller level)</strong> option that is suitable for <strong>3D object perception</strong> (depth, position, pose or coordinate estimation) for flying robots, other than <em>LIDAR, stereo vision, omnidirectional cameras, laser scanners</em> or any other machine-vision-based techniques.</p>
Simple way of 3d perception
<p>The controller has a definite performance problem after t=1.5sec. [This can be due to the system model or some other coding error.]</p> <p>The system (model + controller) behaves smoothly until t=1.5sec, so it is best to zoom in and debug the code in that region. The controller behaves as expected until then. Only X has zero error from the start, which makes assessing the X-axis controller impossible. </p> <p>Another point worth noting is that the Y and Z errors should have caused some error in X, but we don't see this. So the simulation model could be simplified (perhaps it is decoupled in the X direction from the others?)</p> <p>Euler angles sometimes cause computational errors in attitudes (at exactly zero, pi or pi/2). That might be the reason.</p> <p>For an understanding of the maths behind Euler representation singularities, and their solutions, please refer to textbooks or articles; one example is: <a href="http://lairs.eng.buffalo.edu/pdffiles/pconf/C10.pdf" rel="nofollow">http://lairs.eng.buffalo.edu/pdffiles/pconf/C10.pdf</a></p>
7043
2015-04-17T04:36:31.547
|pid|quadcopter|
<p>I'm trying to implement two PIDs for stabilizing a quadrotor for position tracking. The inputs are $x_{d}(t), y_{d}(t), z_{d}(t)$ and $\psi_{d}(t)$. For position tracking, usually the small-angle assumption is made. This assumption allows for acquiring $\theta_{d}$ and $\phi_{d}$. These are the results </p> <p><img src="https://i.stack.imgur.com/TOsgv.png" alt="enter image description here"></p> <p>The x-axis position is driving me crazy. After a lot of attempts at tuning the PIDs, I feel something is going wrong. Is this normal behavior for a PID controller? Also, what I've noticed is that once $\psi$ reaches zero, the platform starts oscillating (after 1.5 seconds in the figure). </p> <p>For solving ODEs and computing the derivatives for the velocities, I use Euler's method. </p> <hr> <p>It is a simulation in Matlab. </p>
How to tune the two PIDs for quadrotor
<ol> <li><p><a href="http://pointclouds.org/" rel="nofollow">PCL</a> library has all the functionality you need for 3D scanning, as well as it have the complete GPU pipeline, take a look at <a href="http://pointclouds.org/documentation/tutorials/using_kinfu_large_scale.php" rel="nofollow">kinfu</a>. Here is some <a href="http://pointclouds.org/documentation/tutorials/in_hand_scanner.php#in-hand-scanner" rel="nofollow">tutorial</a> regarding 3D scanning. Take a look at <a href="https://www.youtube.com/watch?v=4g9Hap4rX0k" rel="nofollow">MeshLab</a> too see how to do it manually.</p></li> <li><p>Any approach will work. With object rotating on the platform you potentially has more information of object position, thus making registration step more robust (see tutorial). But anyway all the scanning algorithm steps will be the same.</p></li> <li><p>I don't recommend you to use stereo cameras or single moving camera (structure from motion approach) for 3D scanning. <a href="http://en.wikipedia.org/wiki/Structured-light_3D_scanner" rel="nofollow">Structured light</a> sensors and <a href="http://en.wikipedia.org/wiki/Time-of-flight_camera" rel="nofollow">time-of-flight</a> sensors will give you much better results. </p></li> <li><p>It is possible. Results will probably be marginally better. More data => better results.</p></li> <li><p>Resolution and FPS - yes. But you need beefier computer to process all the data. Depending on used scanning algorithm global shuttering as well as controlled illumination and exposure are important if you want to achieve submillimeter accuracy or reconstruct texture. But I would not bother about that in the beginning.</p></li> <li><p>Here are some structure light or ToF depth imaging sensors you might consider, including the ones you mentioned. Choose one according to your min/max range requirements, budget, processing power, SDK availability. Currently the easiest to start working with are Primesense/ASUS cameras. ROS/OpenCV/PCL - all of the support these two cameras and Kinect with a little bit of hacking:</p> <ul> <li><p><a href="http://rads.stackoverflow.com/amzn/click/B006UIS53K" rel="nofollow">Kinect 1</a>, <a href="http://rads.stackoverflow.com/amzn/click/B00KZIVEXO" rel="nofollow">2</a></p></li> <li><p><a href="http://rads.stackoverflow.com/amzn/click/B00KO92DA2" rel="nofollow">Primsense Carmine 1.082</a>, 1.09 or <a href="http://rads.stackoverflow.com/amzn/click/B00KK2OGC6" rel="nofollow">Asus Xtion</a></p></li> <li><p><a href="http://rads.stackoverflow.com/amzn/click/B00EVWX7CG" rel="nofollow">Creative Senz3D</a> (based on <a href="http://www.softkinetic.com/Products/DepthSenseCameras" rel="nofollow">SoftKinectic</a> module)</p></li> <li><p><a href="https://store.structure.io/store" rel="nofollow">Structure.io</a></p></li> <li><p><a href="https://software.intel.com/en-us/realsense/f200camera" rel="nofollow">Intel RealSense</a></p></li> <li><p><a href="http://www.pmdtec.com/products_services/reference_design_pico_pico_s.php" rel="nofollow">PMD Pico</a></p></li> <li><p><a href="http://www.espros.ch/3d-imagers" rel="nofollow">ESPROS</a></p></li> </ul></li> </ol>
7050
2015-04-18T04:43:58.217
|sensors|computer-vision|kinect|
<p>A few days ago, I just shared my concerns about the price of computer vision hardware on this same exact forum (see <a href="https://robotics.stackexchange.com/questions/7014/what-main-factors-features-explain-the-high-price-of-most-industrial-computer-vi">What main factors/features explain the high price of most industrial computer vision hardware?</a>) and I think a new but related post is needed. So here we go.</p> <p>Here are some details to consider regarding the overall scanner I want to build:</p> <ul> <li><p>Restricted space: my overall scanner can't be larger than 3 feet cube.</p></li> <li><p>Small objects: the objects I will be scanning shouldn't be larger than 1 foot cube.</p></li> <li><p>Close range: the camera would be positioned approximately 1 foot from the object.</p></li> <li><p>Indoor: I could have a dedicated light source attached to the camera (which might be fixed in a dark box)</p></li> </ul> <p>Here are the stereo cameras/sensors I was looking at (ordered by price):</p> <ul> <li><p>Two Logitech webcams (no model in particular)</p> <ul> <li><p>Cheap</p></li> <li><p>Harder to setup and calibrate</p></li> <li><p>Need to create your own API</p></li> <li><p>Built for: what you want to achieve</p></li> </ul></li> <li><p>Intel RealSense: <a href="http://click.intel.com/intel-realsense-developer-kit.html" rel="nofollow noreferrer">http://click.intel.com/intel-realsense-developer-kit.html</a></p> <ul> <li><p>$100</p></li> <li><p>High resolution: 1080p (maybe not for depth sensing)</p></li> <li><p>Workable minimum range: 0.2 m</p></li> <li><p>Unspecified FOV</p></li> <li><p>Built for: hands and fingers tracking</p></li> </ul></li> <li><p>Kinect 2.0: <a href="https://www.microsoft.com/en-us/kinectforwindows/" rel="nofollow noreferrer">https://www.microsoft.com/en-us/kinectforwindows/</a></p> <ul> <li><p>$150</p></li> <li><p>Low resolution (for depth sensing): 512 x 424</p></li> <li><p>Unworkable minimum range: 0.5 m</p></li> <li><p>Excellent FOV: 70° horizontal, 60° vertical</p></li> <li><p>Built for: body tracking</p></li> </ul></li> <li><p>Structure Sensor <a href="http://structure.io/developers" rel="nofollow noreferrer">http://structure.io/developers</a></p> <ul> <li><p>$380</p></li> <li><p>Normal resolution with high FPS capability: 640 x 480 @ 60 FPS</p></li> <li><p>Unspecified minimum range</p></li> <li><p>Good FOV: 58° horizontal, 45° vertical</p></li> <li><p>Built for: 3D scanning (tablets and mobile devices)</p></li> </ul></li> <li><p>ZED Camera: <a href="https://www.stereolabs.com/zed/specs/" rel="nofollow noreferrer">https://www.stereolabs.com/zed/specs/</a></p> <ul> <li><p>$450</p></li> <li><p>Extreme resolution with high FPS capability: 2.2K @ 15 FPS (even for depth sensing) and 720p @ 60 fps</p></li> <li><p>Unviable minimum range: 1.5 m</p></li> <li><p>Outstanding FOV: 110°</p></li> <li><p>Built for: human vision simulation</p></li> </ul></li> <li><p>DUO Mini LX: <a href="https://duo3d.com/product/duo-minilx-lv1" rel="nofollow noreferrer">https://duo3d.com/product/duo-minilx-lv1</a></p> <ul> <li><p>$595</p></li> <li><p>Normal resolution with high FPS capability: 640 x 480 @ 60 FPS</p></li> <li><p>Workable minimum range: 0.25 m (see <a href="https://stackoverflow.com/questions/27581142/duo-3d-mini-sensor-by-code-laboratories">https://stackoverflow.com/questions/27581142/duo-3d-mini-sensor-by-code-laboratories</a>)</p></li> <li><p>Phenomenal FOV: 170° (with low distortion)</p></li> <li><p>Built for: general engineering</p></li> </ul></li> <li><p>Bumblebee2: <a 
href="http://www.ptgrey.com/bumblebee2-firewire-stereo-vision-camera-systems" rel="nofollow noreferrer">http://www.ptgrey.com/bumblebee2-firewire-stereo-vision-camera-systems</a></p> <ul> <li>Too much expensive (not even worth mentioning)</li> </ul></li> </ul> <p><strong>Note: All prices are in date of April 18th 2015 and might change overtime.</strong></p> <p>As you can see, some have really goods pros, but none seems to be perfect for the task. In fact, the ZED seems to have the best specifications overall, but lacks of minimum range (since it is a large baselined camera designed for long range applications). Also, the DUO Mini LX seems to be the best for my situation, but unlike the ZED which generates really accurate depth maps, this one seems to lack of precision (lower resolution). It might be good for proximity detection, but not for 3D scanning (in my opinion). I could also try to build my own experimental stereo camera with two simple webcams, but I don't know where to start and I don't think I will have enough time to deal with all the problems I would face doing so. I am now stuck in a great dilemma.</p> <p>Here are my questions:</p> <ol> <li><p>What good resources on the internet give you a good introduction on 3D scanning concepts (theoretically and programmatically)? I will be using C++ and OpenCV (I already worked with both a lot) and/or the API provided with the chosen camera (if applies).</p></li> <li><p>Should you have a static camera capturing a moving object or a moving camera capturing a static object?</p></li> <li><p>Should I use something in conjunction with stereo camera (like lasers)?</p></li> <li><p>Is it profitable to use more than two cameras/sensors?</p></li> <li><p>Are resolution, FPS and global shuttering really important in 3D scanning?</p></li> <li><p>What camera should I get (it can also be something I didn't mention, in the range of $500 maximum if possible)? My main criteria is a camera that would be able to generate an accurate depth map from close range points.</p></li> </ol> <p>Thanks for your help!</p>
Questions regarding 3D scanning and camera choice
<p>First of all, assume that the sensors we have are ideal (e.g. no latency, perfect accuracy, infinite (continuous) data frequency).</p> <p>Then we know the rotation rate of each of the wheels, and how much they have turned from T_zero (the initial time) until time T.</p> <p>Making a few more assumptions, we can arrive at a computed X-Y location:</p> <ol> <li>wheel sizes do not change with time (because we are trusting the angular position of the wheels to determine the movement in that direction, and we have to assume a radius for the wheel)</li> <li>no slip in the wheels' turning direction.</li> </ol> <p>Given these assumptions, the calculation follows as simple mathematics.</p> <p>Depending on the operational scenario, the validity of the assumptions should be reconsidered. This might require correcting the coordinates using a secondary sensor type. For example, if there is an unnoticed slip of the wheels, the error would simply build up with time, and after a while the solution would be out of tolerance.</p> <p>I came across <a href="http://www.idsia.ch/~foerster/2006/1/omnidrive_kiart_preprint.pdf" rel="nofollow">this wonderful introduction to X-drive robots</a>, which gives the kinematics for their motion schemes, and so on.</p> <p>It describes another method for calculating the location, using external sensors, but it is definitely a good article worth reading.</p>
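<p>As a concrete sketch, the dead-reckoning loop for a mecanum platform looks roughly like the Python below; the wheel radius, the geometry and the signs in the mixing matrix are assumptions that must be checked against your own roller orientation:</p>
<pre><code>import math

R = 0.05              # wheel radius [m] - assumption
LX, LY = 0.20, 0.15   # half wheelbase / half track [m] - assumption

def body_velocity(w_fl, w_fr, w_rl, w_rr):
    """Forward kinematics for a standard mecanum layout
    (wheel order front-left, front-right, rear-left, rear-right;
    signs may need flipping to match your roller orientation)."""
    vx = R / 4.0 * ( w_fl + w_fr + w_rl + w_rr)                 # forward
    vy = R / 4.0 * (-w_fl + w_fr + w_rl - w_rr)                 # strafe, left positive
    wz = R / (4.0 * (LX + LY)) * (-w_fl + w_fr - w_rl + w_rr)   # yaw rate
    return vx, vy, wz

def integrate(pose, wheel_rates, dt):
    """One Euler step of dead reckoning from encoder-derived wheel rates."""
    x, y, th = pose
    vx, vy, wz = body_velocity(*wheel_rates)
    x += (vx * math.cos(th) - vy * math.sin(th)) * dt   # rotate into the world frame
    y += (vx * math.sin(th) + vy * math.cos(th)) * dt
    th += wz * dt
    return (x, y, th)

pose = (0.0, 0.0, 0.0)
pose = integrate(pose, (5.0, 5.0, 5.0, 5.0), 0.02)      # all wheels driving forward
print(pose)
</code></pre>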
7056
2015-04-19T16:07:27.550
|localization|
<p>I have an omni-directional robot, such as an X-drive or mecanum drive, that I need to track the position of. I can put encoders on the wheels, but that is all I can do in terms of sensors. I have no external beacons that I can link to. The issue is that I need to keep track of the X-Y position, including strafing, and my heading. Does anyone have any resources that could help me with this?</p>
Locating Omni-Directional Robot
<p>You will need to output a <a href="http://en.wikipedia.org/wiki/Servo_control" rel="nofollow">servo PWM signal</a> with a 20ms period and a pulse width generally between 1 and 2ms, depending on your ESC. <a href="http://beagleboard.org/support/BoneScript/analogWrite/" rel="nofollow">Here</a> is some information about using the PWM output on a BeagleBone, and <a href="https://www.linux.com/learn/tutorials/776799-servo-control-from-the-beaglebone-black" rel="nofollow">here</a> is an article describing how to control a servo with a BeagleBone; an ESC is driven the same way.</p>
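<p>For example, a minimal Python sketch using the Adafruit_BBIO library (written with the BeagleBone Black in mind, so the pin name and the library's availability on the original BeagleBone are assumptions; calibrate the ESC end points per its manual):</p>
<pre><code>import time
import Adafruit_BBIO.PWM as PWM

PIN = "P9_14"        # assumed PWM-capable header pin
FREQ = 50.0          # 50 Hz gives the 20 ms period

def set_pulse_ms(pulse_ms):
    """Convert a servo-style pulse width (1.0-2.0 ms) into a duty cycle %."""
    duty = pulse_ms / 20.0 * 100.0
    PWM.set_duty_cycle(PIN, duty)

PWM.start(PIN, 0.0, FREQ)
set_pulse_ms(1.0)    # arm at minimum throttle (ESC dependent)
time.sleep(3)
set_pulse_ms(1.2)    # a small amount of throttle
time.sleep(2)
set_pulse_ms(1.0)
PWM.stop(PIN)
PWM.cleanup()
</code></pre>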
7057
2015-04-20T05:08:29.827
|brushless-motor|beagle-bone|
<p>I can't really find <a href="https://www.google.co.id/?gws_rd=cr,ssl&amp;ei=mIg0VZv5K4-HuASa4oCgBg#q=beaglebone%20esc%20brushless%20motor" rel="nofollow">a really straightforward tutorial</a> for that. There are <a href="https://www.google.co.id/?gws_rd=cr,ssl&amp;ei=mIg0VZv5K4-HuASa4oCgBg#q=arduino%20esc%20brushless%20motor" rel="nofollow">a lot for Arduino</a>, but I only have an original BeagleBone, an ESC, and a brushless motor with me. Please help.</p>
How to control Brushless Motor+ESC with BeagleBone?
<p>I wasn't totally sure what you meant by "<em>draw and arc and check where the arc is on valid map position. Then move the robot there and calculate the angle to rotate the robot</em>". Perhaps you mean something like this:</p> <p><img src="https://i.stack.imgur.com/IBXKd.png" alt="Robot Arc Path"></p> <p>You draw an arc starting at the face, coming straight out of the wall (tangent to the face's normal vector), and ending at the robot.</p> <p>That's certainly one way to plan the path, and it works well if there's no obstacle in the way.</p> <p>However, my own favourite path planning method is more computationally intensive, but works very well in many cases, handles acceleration and cornering, initial and final direction, and in many cases handles obstacle avoidance.</p> <p><strong>Iterative Finite Element Path Planning</strong></p> <p>Start by defining a path as a sequence of points from the face to the robot. The first 2 points should be along the line of the robot's forward travel direction, and the last 2 points should be along the Face's surface normal vector. </p> <p><img src="https://i.stack.imgur.com/kNbpZ.png" alt="Robot Finite Element Path"></p> <p>Now we will iteratively adjust the path:</p> <pre><code>while(path_is_not_OK()) { n = number of points of path // Loop through all the blue points for(i=2; i&lt;n-2; i++) { // move this point in the direction which reduces acceleration prev_velocity = point[i] - point[i-1] next_velocity = point[i+1] - point[i] acceleration = next_velocity - prev_velocity point[i-1] -= acceleration * 0.01 point[i] += acceleration * 0.02 point[i+1] -= acceleration * 0.01 } } </code></pre> <p>I just wrote out that code from memory, so I don't guarantee its correctness.</p> <p>After some number of iterations (about 50 - 500), you should see a nice smooth path from the robot to the target.</p> <p>Feel free to add other adjustments into the loop to satisfy other constraints you may have on the path (for example wall or obstacle avoidance).</p> <p><img src="https://i.stack.imgur.com/7U7lu.png" alt="Robot Finite Element Path"></p>
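<p>A runnable Python version of that inner loop, under the same assumptions (a straight-line initial guess and a small relaxation step), would look something like this:</p>
<pre><code>import numpy as np

def smooth_path(points, iterations=300, step=0.01):
    """Iteratively relax the interior points to reduce acceleration along
    the path. points is an (n, 2) array; the first and last points stay
    fixed. Pin p[1] and p[-2] after the loop as well if the start/end
    directions must stay exact."""
    p = np.asarray(points, dtype=float).copy()
    n = len(p)
    for _ in range(iterations):
        for i in range(2, n - 2):
            prev_velocity = p[i] - p[i - 1]
            next_velocity = p[i + 1] - p[i]
            acceleration = next_velocity - prev_velocity
            p[i - 1] -= step * acceleration
            p[i]     += 2.0 * step * acceleration
            p[i + 1] -= step * acceleration
    return p

# Straight-line initial guess from the face (0,0) to the robot (4,3)
init = np.linspace([0.0, 0.0], [4.0, 3.0], 12)
print(smooth_path(init))
</code></pre>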
7059
2015-04-20T13:04:03.530
|mobile-robot|mapping|
<p>My question is more on a basic/conceptual level.</p> <p>I'm looking for a way to approach an object in the map that I have detected earlier. My robot is localized in a map using SLAM, and the object position is a 2D point that I receive from my algorithm. (The object is actually a face picture on a wall.) Is there a smart way to approach the point and "look" at it?</p>
How to approach an object
<p>As TomSchober said, you will need to send a PWM signal to your servo either way. Tom has already covered Arduino, so I'll make two other suggestions:</p> <ol> <li><p>Texas Instruments <a href="http://www.ti.com/tool/ez430-rf2500" rel="nofollow">MSP430 with a CC2500</a> radio (or maybe a newer version of this). It has a 2.4GHz radio with an antenna that you can configure to work as you like. The downside is it is harder to program than, say, an Arduino, but it is definitely doable. The kit is also kind of expensive. If you max out the signal strength, the range is fairly good. If you are interested, I can edit this post with a link to my github code for the radio.</p></li> <li><p><a href="http://www.st.com/web/catalog/tools/FM116/SC959/SS1532/PF254044" rel="nofollow">STM32 Discovery</a> board. You would need to get a bluetooth or a Wi-Fi dongle for it, but it has hardware PWM. It is fairly easy to program - there is lots of application code available through STM, and it is quite inexpensive at about 12-20 dollars per board. You will also get a fairly capable ARM processor that you can use to do lots of fun stuff. The F3 variant that I linked comes with a build in accelerometer and gyro.</p></li> </ol> <p><strong>The only warning I would have is:</strong> make sure the servo can get sufficient current from whatever you use to power it. For example, connecting the servo power line of a high-torque servo directly to the STM32 power pins blows the power circuit of the board (<em>cough cough</em>) - it is not a particularly hard fix, but still. If your camera is heavy enough and the servo is torque-y enough, I would recommend a separate power supply for the servo.</p>
7067
2015-04-21T02:11:00.790
|motor|raspberry-pi|wireless|
<p>I want to use a Raspberry Pi to pan a camera mounted on a servo wirelessly from ~100 feet away. What are some good servos and transceivers for this? To clarify, there should be no physical connection between the RasPi and servo. Do I need an additional RasPi on the servo end?</p>
How can I communicate wirelessly between a RasPi and servo?
<p>Welcome to the Robotics Stack Exchange! Your question is very broad, so I will try to give you an overview of what you need to do:</p> <ul> <li><p>First determine orientation based on information from both the gyroscope and accelerometer. The accelerometer is accurate in the long term while the gyroscope is accurate in the short term, so you will need some kind of sensor fusion algorithm to determine the "true" orientation of the quad. The most popular one is the <a href="http://en.wikipedia.org/wiki/Kalman_filter" rel="nofollow">Kalman Filter</a> but many others are available to choose from.</p></li> <li><p>The next thing you will want to do is build a simple rate controller where you try to match the rate of rotation on a certain axis to the rate commanded by the controller.</p></li> <li><p>After you have a rate controller you can build a simple stabilized controller, where instead of commanding the rate you command a certain angle. You take the desired angle and the current angle and based on those values calculate a desired rate to close that gap and then feed that into the rate controller (a minimal sketch of this angle-to-rate cascade is shown below).</p></li> <li><p>You will probably need an RC controller and receiver set; you will need to read the PWM values from the receiver in order to take command input.</p></li> </ul> <p><a href="https://ghowen.me/build-your-own-quadcopter-autopilot/" rel="nofollow">Here</a> is my favorite resource on the matter; it helped me when I was making my first flight controller from scratch.</p> <p>Good luck!</p>
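<p>A minimal sketch of that cascade for one axis, in Python. The gains, loop period and integral limit are illustrative placeholders that must be tuned on the real vehicle, and the fused angle / gyro rate arguments stand in for whatever your sensor-fusion code provides.</p> <pre><code>
# Cascaded attitude control for one axis (e.g. roll): the outer loop turns an
# angle error into a desired rotation rate, the inner loop turns the rate error
# into a motor correction. All numbers are illustrative, not tuned values.
class PID:
    def __init__(self, kp, ki, kd, i_limit=50.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.i_limit = i_limit
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        self.integral = max(-self.i_limit, min(self.i_limit, self.integral))
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

angle_pid = PID(kp=4.5, ki=0.0, kd=0.0)   # outer loop: angle error to desired rate
rate_pid  = PID(kp=0.7, ki=0.4, kd=0.02)  # inner loop: rate error to motor correction

def control_step(target_angle, fused_angle, gyro_rate, dt):
    desired_rate = angle_pid.update(target_angle - fused_angle, dt)
    return rate_pid.update(desired_rate - gyro_rate, dt)  # mix into the motor outputs

# one 4 ms control step with made-up sensor readings
print(control_step(target_angle=0.0, fused_angle=5.0, gyro_rate=0.0, dt=0.004))
</code></pre>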
7080
2015-04-21T23:16:46.810
|sensors|
<p>How do you calculate the PID values and stabilise the quadcopter using the on-board sensors (the gyro, accelerometer and magnetometer)?</p>
building a quadcopter using STM32F3 Discovery board
<p>You can link the Xbox controller to the FIRST dashboard software as a joystick; this would be the easiest solution. If you want to attach it directly, you could use the serial port on the cRIO with a <a href="http://www.arduino.cc/en/Main/ArduinoUSBHostShield" rel="nofollow">USB host shield</a> and an Arduino (similar to <a href="http://scuola.arduino.cc/courses/lessons/cover/wXd2go1" rel="nofollow">this</a>) and configure the Arduino to translate the data to serial and then read it. The second option is to connect the SPI lines on the shield directly to the GPIO on the digital sidecar and use the included SPI interface in the WPI library. Additional modules for the cRIO will not be supported in the FIRST configuration, requiring you to write your own libraries and FPGA implementation from the ground up to support more than just the official modules, along with the fact that they are very expensive.</p>
7097
2015-04-23T20:01:53.487
|control|industrial-robot|wireless|usb|first-robotics|
<p>I have a FIRST Robotics spec National Instruments cRIO. I would like to connect a USB wireless Xbox controller to it in order to control it from a distance with minimal extra hardware (which is why I am not using the more traditional WiFi radio method). To this point I have not been able to find either</p> <p>A. A sidecar for the cRIO which allows it to act as a USB host or</p> <p>B. A method that does not use NI specific hardware to connect the two together</p> <p>If someone who is knowledgeable on the subjects of industrial systems and robot control could provide some assistance that would be greatly appreciated, thanks!</p>
Connecting USB Xbox Controller to National Instruments cRIO
<p>Yes! There is something called the <a href="http://www.emlid.com/shop/navio-plus/" rel="nofollow">Navio+</a>; it runs a Linux kernel on the Raspberry Pi with the real-time extension, and runs ArduCopter on top of that.</p> <p>You can also just connect the serial pins directly to the MultiWii and let the RPi do high-level control while the MultiWii does low-level control.</p> <p>Edit: <a href="http://www.multiwii.com/wiki/index.php?title=Multiwii_Serial_Protocol" rel="nofollow">Here</a> is the reference for the MultiWii Serial Protocol (MSP); it is very powerful, allowing you to do everything from sending RC stick commands to receiving and injecting GPS values. A minimal sketch of building an MSP frame from Python is shown below.</p>
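<p>For illustration, here is how an MSP v1 frame is put together (Python 3). The frame layout (header, size, command, payload, XOR checksum) and the MSP_SET_RAW_RC command follow the MSP wiki linked above; the serial port name, baud rate and channel ordering are assumptions, so check them against your MultiWii configuration.</p> <pre><code>
import struct

def msp_frame(command, payload=b''):
    """Build an MSP v1 frame; '$M&lt;' marks a message going to the flight controller."""
    header = b'$M&lt;'
    size = len(payload)
    checksum = size ^ command
    for b in payload:
        checksum ^= b
    return header + bytes([size, command]) + payload + bytes([checksum])

MSP_SET_RAW_RC = 200                      # inject RC stick values
channels = [1500, 1500, 1500, 1000, 1000, 1000, 1000, 1000]  # roll, pitch, yaw, throttle, aux1-4
payload = struct.pack('&lt;8H', *channels)   # 8 unsigned 16-bit values, little-endian

frame = msp_frame(MSP_SET_RAW_RC, payload)
print(frame.hex())

# To actually send it from the RPi (port name and baud are assumptions):
# import serial
# with serial.Serial('/dev/ttyAMA0', 115200, timeout=1) as port:
#     port.write(frame)
</code></pre>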
7105
2015-04-25T17:48:37.403
|arduino|ros|quadcopter|
<p>I have a quadcopter using a MultiWii(Arduino Mega) controller. I am curious if there is any way to connect it with ROS capable RPI. (That I could add to the quad itself).</p>
Is it possible to use ROS on RPI with controller like MultiWii or similar?
<p>Looking at some of the docs I see that the startMotor function usually takes values like portN, as in:</p> <pre><code>startMotor(port8,-32); startMotor(port9,128); </code></pre> <p>I am also seeing that some code looks like this:</p> <pre><code>motor[port8] = -32; motor[port9] = 128; </code></pre> <p>So, since it is used as an index into an array, it must simply be an <code>int</code>.</p> <p><strong>Update:</strong> I can see on <a href="http://help.robotc.net/WebHelpCortex/index.htm#page=ROBOTC_Functions_New/VEX2_Functions_Motors_and_Servos.htm" rel="nofollow">this page</a> that within VEX it is actually a variable of type <code>tMotor</code>. But it can't work as an index in C unless tMotor is an alias for some sort of int. Ultimately, the best choice would be to use <code>tMotor</code> in your code.</p>
7112
2015-04-27T14:21:05.150
|robotc|vex|
<p>I'm trying to create a function that allows me to more easily start a motor, but I'm running into a problem, I don't know the type to use for the <code>motorName</code> argument. I'm using a VEX 269 Motor. Here's the function.</p> <pre><code>void runMotor(MotorTypeHere motorName, int speed, int time) { startMotor(motorName, speed); wait(time); } </code></pre> <p>I just don't know what type to put for the <code>motorName</code> argument. What type would it be?</p>
What is the RobotC type for motors?
<p>To get the firmware version: connect the robot to a serial terminal application (such as PuTTY) and turn the robot off and then on. The firmware version is printed in the startup message.</p> <p>New firmware says: r3_robot/tags/release-3.4.1:</p> <p>Old firmware says: r3_robot/tags/release-3.2.6:</p>
7121
2015-04-28T18:42:13.090
|irobot-create|roomba|
<p>I have been working on trying to get the angle of the Create 2. I am trying to use this angle as a heading, which I will eventually use to control the robot. I will explain my procedure to highlight my problem.</p> <p>I have the Create tethered to my computer.</p> <ol> <li>I reset the Create by sending Op code [7] using RealTerm. The output is: <blockquote> <p>bl-start<br> STR730<br> bootloader id: #x47175347 4C636FFF<br> bootloader info rev: #xF000<br> bootloader rev: #x0001<br> 2007-05-14-1715-L<br> Roomba by iRobot!<br> str730<br> 2012-03-22-1549-L<br> battery-current-zero 252</p> </blockquote> (The firmware version is somewhere in here, but I have no clue what to look for--let me know if you see it!)</li> </ol> <ol start="2"> <li>I mark the robot so that I will know what the true angle change has been.</li> <li>I then send the following codes [128 131 145 0x00 0x0B 0xFF 0xF5 142 6]. This code starts the robot spinning slowly in a circle and requests the sensor data from the sensors in the group with Packet ID 2. The output from the Create seen in RealTerm is 0x000000000000, which makes sense.</li> <li>I wait until the robot has rotated a known 360 degrees, then I send [142 2] to request the angle difference. The output is now 0x00000000005B.</li> </ol> <p>The OI specs say that the angle measurement is in degrees turned since the last time the angle was sent; converting 0x5B to decimal gives 91, which is certainly not 360 as expected.</p> <p><strong>What am I doing wrong here?</strong> Is the iRobot Create 2 angle measurement that atrocious, or is there some scaling factor that I am unaware of? Are there any better ways to get an angle measurement?</p>
iRobot Create 2: Angle Measurement
<h2>OpenROV Cape</h2> <p>If I understand things correctly, the OpenROV motors are connected to ESCs that are connected to Arduino Digital pins 9, 10, and 11. According to the <a href="http://www.arduino.cc/en/Hacking/PinMapping168" rel="nofollow noreferrer">"ATmega168/328-Arduino Pin Mapping"</a>, that corresponds to physical pins 15, 16, and 17 respectively on the 28-pin ATmega328 DIP.</p> <p><img src="https://i.stack.imgur.com/CyLW7.jpg" alt="enter image description here"> A nice diagram on the OpenROV wiki showing a photo of the OpenROV Cape and where to connect the Port, Vertical, and Starboard ESCs to the cape (diagram from <a href="https://forum.openrov.com/t/wiki-buildout/1585" rel="nofollow noreferrer">https://forum.openrov.com/t/wiki-buildout/1585</a> ).</p> <p><img src="https://i.stack.imgur.com/MD3Gh.jpg" alt="enter image description here"> The Arduino IDE is used to program the ATmega328 in a 28-pin DIP package on the other side of the OpenROV Cape, shown here. (photo from <a href="http://www.bluebird-electric.net/artificial_intelligence_autonomous_robotics/open_rov_source_underwater_robots_for_exploration_education.htm" rel="nofollow noreferrer">"Bluebird Marine Systems: OpenROV underwater robots for educational exploration"</a>)</p> <p>Why are there 2 not-quite-the-same Arduino firmware folders, <a href="https://github.com/OpenROV/openrov-software/tree/v2.5.0/arduino/OpenROV" rel="nofollow noreferrer">https://github.com/OpenROV/openrov-software/tree/v2.5.0/arduino/OpenROV</a> and <a href="https://github.com/OpenROV/openrov-software-arduino/tree/master/OpenROV" rel="nofollow noreferrer">https://github.com/OpenROV/openrov-software-arduino/tree/master/OpenROV</a> ? I'm guessing that the "openrov-software-arduino" version is the latest version.</p> <p>The <a href="https://github.com/OpenROV/openrov-software-arduino/blob/master/OpenROV/Cape.h" rel="nofollow noreferrer">OpenROV Cape.h file</a> defines those pins:</p> <pre><code>#define PORT_PIN 9 #define VERTICAL_PIN 10 #define STARBOARD_PIN 11 </code></pre> <p>Those definitions are used in the <a href="https://github.com/OpenROV/openrov-software-arduino/blob/master/OpenROV/Thrusters2X1.cpp" rel="nofollow noreferrer">OpenROV Thrusters2X1.cpp file</a></p> <pre><code>Motor port_motor(PORT_PIN); Motor vertical_motor(VERTICAL_PIN); Motor starboard_motor(STARBOARD_PIN); </code></pre> <p>The above lines pass the pin numbers to the Motor constructor declared in <a href="https://github.com/OpenROV/openrov-software-arduino/blob/master/OpenROV/Motor.h" rel="nofollow noreferrer">Motor.h</a>. Later the <a href="https://github.com/OpenROV/openrov-software-arduino/blob/master/OpenROV/Motor.cpp" rel="nofollow noreferrer">OpenROV Motor.cpp file</a> stores those pin numbers and passes them to functions in the <a href="https://github.com/OpenROV/openrov-software-arduino/blob/master/OpenROV/openrov_servo.cpp" rel="nofollow noreferrer">openrov_servo.cpp file</a> to control the ESCs.</p> <h2>BeagleBone PWM</h2> <p>Some of the pins on the BeagleBone are internally connected to a hardware PWM driver. 
Some people have several ESCs or other things controlled by a standard <a href="http://www.opencircuits.com/Servo_control" rel="nofollow noreferrer">RC control signal</a>, each one driven by one such pin on a BeagleBone.</p> <p>(FIXME: this would be a good place for a link to the part of the OpenROV code that runs on the BeagleBone and sends a message to the Arduino with the desired PWM; can that bit of code can be tweaked to directly drive that desired PWM out the BeagleBone pins?)</p> <ul> <li>Simon Monk. Adafruit. <a href="https://learn.adafruit.com/controlling-a-servo-with-a-beaglebone-black/overview" rel="nofollow noreferrer">"Controlling a Servo with a BeagleBone Black"</a></li> <li>Ben Martin. <a href="https://www.linux.com/learn/tutorials/776799-servo-control-from-the-beaglebone-black" rel="nofollow noreferrer">"How to Control a Servo Motor from a BeagleBone Black on Linux"</a>.</li> <li><a href="http://beagleboard.org/support/BoneScript/analogWrite/" rel="nofollow noreferrer">"BoneScript: analogWrite"</a></li> <li>Meng Cao. <a href="http://www.egr.msu.edu/classes/ece480/capstone/fall13/group05/meng.pdf" rel="nofollow noreferrer">"How to use all the GPIO on Beaglebone Black in Python"</a></li> <li>Akkana. <a href="http://shallowsky.com/blog/hardware/beaglebone-black-gpio.html" rel="nofollow noreferrer">"GPIO tutorial for the BeagleBone Black"</a> implies that the BeagleBone Black hardware PWM can drive up to 8 pins (?).</li> <li>Justin Cooper. Adafruit. <a href="https://learn.adafruit.com/setting-up-io-python-library-on-beaglebone-black/pwm" rel="nofollow noreferrer">"Setting up IO Python Library on BeagleBone Black: Using GPIO, PWM and more with Python"</a></li> <li>Babak Parvizi. <a href="http://digital-drive.com/?p=146" rel="nofollow noreferrer">"BeagleBone Black ... Controlling a Servo Using HTML5, JavaScript, and Node.js."</a></li> <li><a href="https://briancode.wordpress.com/2015/01/06/working-with-pwm-on-a-beaglebone-black/" rel="nofollow noreferrer">"Working with PWM on a BeagleBone Black"</a>.</li> <li>Saad Ahmad. <a href="https://github.com/SaadAhmad/beaglebone-black-cpp-PWM" rel="nofollow noreferrer">"beaglebone-black-cpp-PWM"</a>.</li> <li>eLinux. <a href="http://elinux.org/BeagleBoardPWM" rel="nofollow noreferrer">BeagleBoardPWM</a></li> <li>Hipstercircuits. <a href="http://hipstercircuits.com/enable-pwm-on-beaglebone-with-device-tree-overlays/" rel="nofollow noreferrer">"Enable PWM on BeagleBone with Device Tree overlays"</a>.</li> <li><a href="http://beaglebone.cameon.net/home/using-pwm-outputs" rel="nofollow noreferrer">"Using the BeagleBone‎: Using PWM outputs"</a> describes how to drive 6 servos from a white BeagleBone.</li> </ul>
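<p>For reference, here is a minimal sketch of generating a standard RC/ESC signal (50 Hz frame, 1-2 ms pulse) directly from a BeagleBone PWM pin with the Adafruit_BBIO Python library mentioned in the links above. The pin name "P9_14" and the 1.5 ms neutral pulse are assumptions; use whichever hardware-PWM-capable pin your cape wiring leaves free, and note that most ESCs expect an arming sequence before they will spin a motor.</p> <pre><code>
import time
import Adafruit_BBIO.PWM as PWM

PIN = "P9_14"
FREQ_HZ = 50.0                      # standard RC servo/ESC frame rate

def pulse_to_duty(pulse_ms):
    period_ms = 1000.0 / FREQ_HZ    # 20 ms frame
    return 100.0 * pulse_ms / period_ms

PWM.start(PIN, pulse_to_duty(1.5), FREQ_HZ)    # neutral / idle
time.sleep(2.0)
PWM.set_duty_cycle(PIN, pulse_to_duty(1.6))    # small forward command
time.sleep(2.0)
PWM.stop(PIN)
PWM.cleanup()
</code></pre>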
7138
2015-05-01T08:11:07.247
|mobile-robot|beagle-bone|
<p>I'm working on my own ROV project, but I found that OpenROV has a <a href="https://github.com/OpenROV/openrov-software/releases/tag/v2.5.0" rel="nofollow">ready to use image</a> for my BeagleBone, so I want to use that instead of writing my own program. I have already deployed the image, but I can't find which three pins send the PWM signals for the ESCs. Please help.</p>
Pins in OpenROV that control the motors?
<p>Imagine for a moment that you keep the input velocity fixed throughout the identification experiment, then you might inject into the system a sudden change in the final commanded position set-point while measuring as feedback the current position of your equipment (e.g. joint encoders). You will thus come up with a bunch of profiles of <em>commanded vs. feedback positions</em> for your identification goal. To this end you can profitably rely on the <code>ident</code> tool of the <strong>MATLAB System Identification Toolbox</strong>.</p> <p>Explore the system response against different input position steps and remember to validate any result over profiles sets that you did not use during identification.</p> <p>Finally, you could assume that varying the input velocity will have an impact on the internal controller responsivity, since of course what you're going to model is the whole apparatus made up of the internal actuators, controller, etc. In theory, you should repeat the identification experiment over a range of different input velocities.</p> <hr> <p>I'll expand hereinafter a little bit further, given the fresh info you provided.</p> <p>It's clear that there is an internal controller that converts your velocity input in a proper signal (usually voltage) actuating the motors. If you don't trust this internal loop, then you have to identify the plant and apply compensation as follows.</p> <p><strong>Setting</strong>: identification of a system controlled in velocity. Hence, input $=$ commanded velocity $v$; output $=$ encoder feedback $\theta$.</p> <p><strong>Procedure</strong>: you inject a chirp in velocity and you collect encoders. You can use <code>ident</code> to come up with a transfer function of your motor controlled in velocity at "high-level". This transfer function should resemble a pure integrator but it won't. What makes this difference needs to be compensated with the design of your velocity controller. This procedure has to be repeated for the two axes of the PTU. How to design a proper controller by putting its poles and zeros it's a matter of knowledge you should have; to do that of course you'll exploit the identified transfer function.</p> <p><strong>Note</strong>: you don't have vision in the loop yet, just position feedback from the encoders. This way you can refine the velocity control of your system, so that in the end, given a target angular position $\theta_d$ where you want to go, you know how to form the proper velocity commands $v$ to send to the device at run-time, while reading back the corresponding encoders $\theta$.</p> <p>Then <strong>vision kicks in</strong>. The vision processing will tell you where the face centroid $p_d$ is with respect to the image center; this information is refreshed continuously at run-time. Then, using the <strong>intrinsic parameters</strong> of the <a href="http://en.wikipedia.org/wiki/Pinhole_camera_model" rel="nofollow">pinhole model</a> of your camera, you'll have an estimate of which angular positions this pixel corresponds to.</p> <p>This is not that difficult to determine. Knowing the centroid coordinates $p_d$ and assuming that we know how far the face lies from the camera (let's say <code>1 m</code> but we don't care about the real distance), that is we know its $z$ component in the camera reference frame, the pinhole model gives us a way to find out the face $x$ and $y$ components in the camera frame. 
Finally, trigonometry provides you with the delta angles to add up to the current camera encoders that will in turn let you compute the absolute target angular positions. These latter values will represent the angular set-point for the above velocity controller.</p> <p><strong>Here comes the math</strong></p> <p>Given $p_d=\left(u,v\right)$ the face centroid and $z$ the distance of the face from the camera, it holds: $$ \left( \begin{array}{c} x \\ y \\ z \\ 1 \end{array} \right) = \Pi^\dagger \cdot \left( \begin{array}{c} z \cdot u \\ z \cdot v \\ z \end{array} \right), $$</p> <p>where $x,y,z$ are the Cartesian coordinates of the face in the camera frame and $\Pi^\dagger$ is the pseudoinverse of the matrix $\Pi \in \mathbb{R}^{3 \times 4}$ containing the intrinsic parameters of your camera (i.e. the focal length, the pixel ratio and the position of the principal point - browse the internet for that - there are standard procedures to estimate this matrix). We are not interested in $z$, so that you can put in the above equation whatever value for $z$ you want (say <code>1 m</code>), but remember to be consistent in the following. Given $u,v$ you get $x,y$ as output.</p> <p>Once you have $x,y$ you can compute the angular variations $\Delta\phi_p$ and $\Delta\phi_t$ for the <em>pan</em> and the <em>tilt</em>, respectively: $$ \Delta\phi_p=\arctan\frac{x}{z} \\ \Delta\phi_t=-\arctan\frac{y}{z} $$</p> <p>Finally, the absolute angular positions used as set-point will be: $$ \phi_p:=\phi_p+\Delta\phi_p \\ \phi_t:=\phi_t+\Delta\phi_t $$</p> <p>Alternatively, we could also identify the whole system with the visual feedback in place of the motor encoders (<em>visual servoing</em>). Here, the transfer function will tell us the impact of a velocity command directly on the displacement a pixel undergoes. Intuitively, this identification will be more difficult because we put everything together and it's likely that we won't achieve the same performance as the first method.</p>
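<p>A small numeric sketch of that pixel-to-angle step (Python). It uses the usual explicit intrinsics (focal lengths and principal point) rather than the pseudoinverse of $\Pi$, which is equivalent for a standard pinhole matrix; the intrinsic values and encoder readings below are made-up placeholders, so substitute the ones from your own calibration.</p> <pre><code>
import math

fx, fy = 800.0, 800.0      # focal lengths in pixels (assumed)
cx, cy = 320.0, 240.0      # principal point (assumed, for a 640x480 image)

def pan_tilt_increments(u, v, z=1.0):
    # inverse pinhole model: pixel coordinates to metric camera-frame coordinates
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    d_pan  =  math.atan2(x, z)
    d_tilt = -math.atan2(y, z)
    return d_pan, d_tilt

current_pan, current_tilt = 0.10, -0.05       # example encoder readings [rad]
d_pan, d_tilt = pan_tilt_increments(u=400.0, v=200.0)
pan_setpoint  = current_pan  + d_pan          # absolute angular set-points for the
tilt_setpoint = current_tilt + d_tilt         # velocity controller described above
print(pan_setpoint, tilt_setpoint)
</code></pre>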
7139
2015-05-01T10:38:24.860
|control|
<p>I have a PTU system whose transfer function I need to determine. The unit receives a velocity and position, and moves towards that position with the given velocity. What kind of test would one perform for determining the transfer function...</p> <p>I know Matlab provides a method. The problem, though, is that I am a bit confused on what kind of test I should perform, and how I should use Matlab to determine the transfer function.</p> <p>The unit which is being used is a Flir PTU D48E</p> <p>---> More about the system </p> <p>The input to the system is the pixel displacement of an object relative to the center of the frame. The controller I am using now converts pixel distances to angular distances multiplied by a gain $K_p$. This works fine. However, I can't seem to prove why it works so well; I mean, I know servo motors cannot be modeled like that.</p> <p>The controller is fed with the angular displacement and the current position; added together they give me the angular position I have to go to. The angular displacement is used as the speed it has to move with, since a huge displacement gives a huge velocity.</p> <p>By updating both elements at different frequencies I'm able to step down the velocity such that the overshoot gets minimized. </p> <p>The problem here is: if I have to prove that the transfer function I found fits the system, I have to do tests somehow using the <code>ident</code> function in Matlab, and I'm quite unsure how to do that. I'm also a bit unsure whether the PTU already has a controller within it, since it moves so well; it's just simple math, so it makes no sense that converting it like that would work so well on its own.</p>
Determining transfer function of a PTU for visual tracking
<p>Why did you switch to <strong>EKF</strong>? <strong>KF</strong> is standard for estimating the angular rate of an electrical motor, whose transfer function is indeed linear. You might also want to consider exploring the <strong>Savitzky-Golay</strong> filter as a valid alternative, which is not model based.</p> <p>Concerning the sample period of the controller, it is normally a good rule to take a frequency separation of one decade, at least, between the rate of the system and the bandwidth of the input signal. Therefore, the actual sample rate of the joystick input is overkill.</p> <p>A sample period for the PID of <code>5-10 ms</code> is usually fine, given the low mechanical cut-off frequency of the motor (higher rates are better, but not required). Then, acquire the joystick at <code>&gt;50 ms</code>.</p>
7146
2015-05-02T20:27:20.423
|pid|ekf|
<p>I am new to robotics, however I am designing a PID to control the angular velocity (azimuth, elevation) of a couple of Faulhaber motors. The input to the PID control is the actual angular velocity, which is not observed though, since it is derived from the position of each motor at time $t$.</p> <p>The PID sample period is aprox. <code>30 ms</code>, whereas the input data rate from the joystick is aprox. <code>300 samples/s</code>, corresponding to a sample period of <code>3.33 ms</code>. The joystick input gets transformed into the desired angle speed, that the PID will control.</p> <p>I was initially filtering position data using a normal 2D linear <strong>Kalman Filter</strong>, but the angular velocity is not linear (by formula), hence I switched to <strong>Extended Kalman</strong> filtering.</p> <p>My questions are the following:</p> <ol> <li>Is this latter approach that makes use of EKF correct?</li> <li>Which are the parameters that I have to check in order to properly set the update rate of the PID loop?</li> </ol> <p>Thx in advance!</p>
Estimating angular speed from position for control purpose
<p><a href="http://www.st.com/web/en/resource/technical/document/datasheet/DM00058181.pdf" rel="nofollow">http://www.st.com/web/en/resource/technical/document/datasheet/DM00058181.pdf</a></p> <p>Let's use this datasheet as an example. Page 18 holds what is known as the clock tree. It shows at what frequency every component ticks. So let's try following it. You have your crystal oscillator working at some frequency (or some internal clock). You multiply it by the PLL multiplication factor. The PLL is used to make a higher frequency signal from a lower one. You then divide by the AHB prescaler. You then divide by the APB1 or APB2 prescaler (depending on which timer you're using) and then multiply by 2 if the APB prescaler is something other than 1. What we have now is the clock the timer module "sees". The timer divides that clock by the timer prescaler and increments (or decrements) the counter at that frequency. Once it reaches a set value it will throw the interrupt.</p> <p>The AHB and APB registers are usually set in the project settings of your programming environment. For the timer prescaler register, refer to the datasheet of your MCU. A worked example of the arithmetic for your 72 MHz / 40 kHz case is sketched below.</p>
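<p>A small helper illustrating the arithmetic (Python). The timer update rate is f_update = f_timer / ((PSC + 1) * (ARR + 1)), so with a 72 MHz timer clock and a 40 kHz target you need (PSC + 1) * (ARR + 1) = 1800, for example PSC = 0 with ARR = 1799, or PSC = 71 with ARR = 24. The 72 MHz figure assumes the AHB/APB settings already deliver that clock to the timer; check the clock tree for your exact part.</p> <pre><code>
def timer_settings(f_timer_hz, f_target_hz, psc):
    ticks = f_timer_hz / float(f_target_hz)       # total counts per output period
    arr = int(round(ticks / (psc + 1))) - 1       # auto-reload register value
    f_actual = f_timer_hz / float((psc + 1) * (arr + 1))
    return arr, f_actual

for psc in (0, 17, 71):
    arr, f = timer_settings(72e6, 40e3, psc)
    print("PSC=%d  ARR=%d  gives %.1f Hz" % (psc, arr, f))
</code></pre>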
7156
2015-05-04T18:17:24.567
|microcontroller|
<p>I have an STM32F3 Discovery board. I want to go to the next step and I want to try to use timers in a few configurations. </p> <p>How can I calculate variables such as the prescaler and period? I looked in all the datasheets and manuals and didn't find anything that describes these values for input capture mode, OP, PWM, etc. </p> <p>I think the prescaler is for dividing a frequency by a factor of 1-65536. </p> <p>So if I have f<sub>cpu</sub>=72MHz and want to generate a signal of frequency=40kHz, am I supposed to do: 72MHz/40kHz=1800? </p> <p>Should I now subtract 1 from this value for the prescaler register?</p>
STM32F3 timers & computing
<p>I agree with Ben: your situation is eye-to-hand (camera outside the robot) and AX=XB, where X is the pose between the camera and the robot base (but you track the fiducial with the end-effector). In eye-in-hand (camera bound to the robot tip), X is the pose between the camera and the robot's end-effector. There is more Matlab code (Dual Quaternions) at <a href="http://math.loyola.edu/~mili/Calibration/index.html" rel="nofollow noreferrer">http://math.loyola.edu/~mili/Calibration/index.html</a></p>
7163
2015-05-05T01:48:05.680
|robotic-arm|stereo-vision|calibration|
<p>I'm trying to use a dual quaternion <a href="https://github.com/hengli/camodocal/blob/master/include/camodocal/calib/HandEyeCalibration.h" rel="noreferrer">Hand Eye Calibration Algorithm Header</a> and <a href="https://github.com/hengli/camodocal/blob/master/src/calib/HandEyeCalibration.cc" rel="noreferrer">Implementation</a>, and I'm getting values that are way off. I'm using a robot arm and an optical tracker, aka camera, plus a fiducial attached to the end effector. In my case the camera is not on the hand, but instead sitting off to the side looking at the arm.</p> <p>The transforms I have are:</p> <ul> <li>Robot Base -> End Effector</li> <li>Optical Tracker Base -> Fiducial</li> </ul> <p>The transform I need is:</p> <ul> <li>Fiducial -> End Effector</li> </ul> <p><img src="https://i.stack.imgur.com/0k7V9.jpg" alt="HandEyeCalibrationQuestion"></p> <p>I'm moving the arm to a series of 36 points on a path (blue line), and near each point I'm taking a position (xyz) and orientation (angle axis with theta magnitude) of Camera->Fiducial and Base->EndEffector, and putting them in the vectors required by the <a href="https://github.com/hengli/camodocal/blob/master/include/camodocal/calib/HandEyeCalibration.h" rel="noreferrer">HandEyeCalibration Algorithm</a>. I also make sure to vary the orientation by about +-30 degrees or so in roll pitch yaw.</p> <p>I then run estimateHandEyeScrew, and I get the following results, and as you can see the position is off by an order of magnitude. </p> <p>[-0.0583, 0.0387, -0.0373] Real [-0.185, -0.404, -0.59] Estimated with HandEyeCalib</p> <p>Here is the full transforms and debug output:</p> <pre><code># INFO: Before refinement: H_12 = -0.443021 -0.223478 -0.86821 0.321341 0.856051 -0.393099 -0.335633 0.470857 -0.266286 -0.891925 0.36546 2.07762 0 0 0 1 Ceres Solver Report: Iterations: 140, Initial cost: 2.128370e+03, Final cost: 6.715033e+00, Termination: FUNCTION_TOLERANCE. # INFO: After refinement: H_12 = 0.896005 0.154992 -0.416117 -0.185496 -0.436281 0.13281 -0.889955 -0.404254 -0.0826716 0.978948 0.186618 -0.590227 0 0 0 1 expected RobotTipToFiducial (simulation only): 0.168 -0.861 0.481 -0.0583 expected RobotTipToFiducial (simulation only): 0.461 -0.362 -0.81 0.0387 expected RobotTipToFiducial (simulation only): 0.871 0.358 0.336 -0.0373 expected RobotTipToFiducial (simulation only): 0 0 0 1 estimated RobotTipToFiducial: 0.896 0.155 -0.416 -0.185 estimated RobotTipToFiducial: -0.436 0.133 -0.89 -0.404 estimated RobotTipToFiducial: -0.0827 0.979 0.187 -0.59 estimated RobotTipToFiducial: 0 0 0 1 </code></pre> <p>Am I perhaps using it in the wrong way? Is there any advice you can give?</p>
Hand Eye Calibration
<p>Absolutely, I use this method all the time for small robots and actuators. I can find no information on the formal torque ratings, but I did find the following in a document called "<a href="http://www.expert4pvc.com/Documents/PVC_Piping_Systems-PPFA.pdf" rel="nofollow">PVC Piping Systems: Helpful tips for Avoiding Problems</a>": </p> <blockquote> <p>The recommended best practice is to use a thread sealant (not a thread lubricant) and to assemble the joint to finger tight plus one and one-half turns, two turns at the most. Finger tight can be defined as: tightened using the fingers, no tools, to a torque of about 1.2 to 1.7 foot-pounds (1.7 to 2.3Nm).</p> </blockquote> <p>I am assuming the pipe can withstand the torque of tightening the threads. I suspect that the pipe can withstand much more torque than that, though.</p>
7167
2015-05-05T13:57:07.377
|control|brushless-motor|
<p>I am working on a project that involves speed regulation of a BLDC motor under no-load and load conditions. I wish to use another machine operated as a generator, acting as a load on the motor, as shown in <a href="https://www.youtube.com/watch?v=L8gvQIgOiGo" rel="nofollow">this video</a>. </p> <p>The coupling used in this motor/generator arrangement looks handmade out of a rubber tube or something. I am considering using it as an alternative to a flexible coupling. Purchasing an actual flexible coupling is not an option for me. Moreover, I need the coupling on an urgent basis. </p> <p>My question is, can this arrangement (or something similar) be used to couple a 15W motor to a machine of similar rating, if the rated torque does not exceed 0.1 N.m?</p>
Is this rubber/PVC coupling a good enough for small torque (0.1 N.m)
<p>A Raspberry Pi 2 will give you a lot of options to play with in the robotics field, and an Arduino would be a nice add-on to it. Another nice option would be an ODROID, which you can set up with ROS. Regarding the language, C would be good.</p>
7175
2015-05-06T06:57:57.783
|artificial-intelligence|
<p>I am very new to robotics, but I will be writing an algorithm for my robot to move around, gather information from its surroundings and process it. It will also process audio-visual signals. I am confused about which micro-controller to use so that it is performance efficient and consumes less power.</p> <p>The controller should also be capable of communicating with a wireless network (internet through Wi-Fi) and should also support memory integration.</p> <p>Also, I know how to program in Java and C. Please suggest which would be the best language to use for programming.</p> <p>Thanks.</p> <p>P.S. I would really like to use a microprocessor as it is highly customizable. Please suggest the best one to use.</p>
Need suggestion about which microcontroller/processor and language to be used in my project
<p>"Save World as" is broken on Gazebo 5.1.0 . </p> <p>You can either install the experimental version of Gazebo 6 using the <code>gazebo6-prerelease</code> package (assuming you are on Ubuntu) or wait for the fix to be released in Gazebo 5.2.0 . </p> <p>The relevant bug on Gazebo issue tracker is: <a href="https://bitbucket.org/osrf/gazebo/issue/1593/world-save-as-broken-on-gazebo-510" rel="nofollow">https://bitbucket.org/osrf/gazebo/issue/1593/world-save-as-broken-on-gazebo-510</a> .</p>
7177
2015-05-06T08:16:40.143
|gazebo|
<p>How can I test whether my Gazebo installation works properly or not? I'm trying the "save my world" and "save as" options but no window is shown.</p>
How to test GAZEBO works properly. Save windows don't show any component
<p>Disclaimer: This answer is somehow a reaction to Shahbaz's post, so it has a pro-ROS bias. </p> <p>I do not think that ROS is mandatory, but it is a great starting point and worth the time to invest. It started within Willow Garage, but this company vanished and ROS is still alive, used and developed. Most of ROS is fully open source and also commercially usable so there is no way that ROS is just going to vanish if a company is no longer interested in it. The code quality of course differs between the core modules and implementations of cutting edge algorithms that some PhD student published with his paper. </p> <p>ROS is picking up more and more speed in industrial settings (I'd be surprised if there is a significant portion of robotics startups worldwide that do not use ROS). Some algorithms are going to be further maintained and developed by the ROS-Industrial consortium and if you have a look at the members, it's a good bet that ROS is going to become a standard in the industry:</p> <p><a href="http://rosindustrial.org/ric/current-members/">http://rosindustrial.org/ric/current-members/</a></p> <p>The distributed way of using ROS helps a lot to create and maintain new packages, especially within teams. The message and action definitions help a lot in defining interfaces so that hardware and algorithms can be exchanged quickly. It also helps to integrate new team members, as a new node will not bring down other nodes if it crashes (as long as it does not eat all the RAM..), so it's rather safe to integrate partially working nodes into the running system as their effect is limited. The communication uses TCP which is reliable and fast (on a local machine), so that message passing is very quick (several hundred Hz for a control loop is possible). </p> <p>Non-Real-Time</p> <p>ROS is currently not realtime as the vast majority of algorithms have no need for realtime. Sensing or planning does not have realtime constraints in most cases (how many people are building self driving high speed cars?). It's enough if the final control loop runs in realtime and this can in many cases be done directly on the motor (to which the final position is sent e.g. via CAN). Real Time however is one of the core goals of ROS2 (<a href="https://github.com/ros2/ros2/wiki/Real-Time-Programming" rel="nofollow noreferrer">https://github.com/ros2/ros2/wiki/Real-Time-Programming</a>) so even if you need this in the future for the whole system, ROS has you covered. </p> <p>If you really want to run stuff embedded, there is of course a connection to Arduino, so that you can write ROS messages directly on the Arduino which are then sent via a serial connection. </p> <p>Running ROS on Windows is currently rather a pain, but as Windows is moving closer to Linux (even starting to have something bash-like), it's only a matter of time until it's possible. (But who wants to run a robot with Windows anyway?)</p> <p>Hardware Interfaces and Algorithms:</p> <p>I think this really is a strong point for ROS. A lot of commercially available robots already come with a ROS interface or someone already invested some time to implement the interface. Most commercial arms can be used in MoveIt so much of the work to get an application to run with a specific arm can be reused with other hardware. </p> <p>Community:</p> <p>Another strong point for ROS. 
New algorithms get a ROS interface very quickly, and a lot of people have had the same problems as you, so you will find someone to help you.</p> <p><a href="http://download.ros.org/downloads/metrics/metrics-report-2016-07.pdf">http://download.ros.org/downloads/metrics/metrics-report-2016-07.pdf</a> </p>
7178
2015-05-06T08:22:26.253
|ros|simulator|
<p>Do we have to build ROS for robotic research/application? What is the main advantage? When or in which situations ROS is mandatory?</p>
Is ROS (Robot Operating System) mandatory?
<p>I don't know if there is a formal proof to this, but in general, no, the set of all possible joint configurations that correspond to a particular end-effector pose is not continuous. I think of the set as islands in joint space, where each island has some local continuous joint range, but is disconnected from the other islands. </p> <p>I think there are a few reasons for this: joint limits, kinematic limits, and physical construction. </p> <p>As for joint limits, I think a good example is the "standard" robot roll-pitch-roll wrist like the LBR has. If these joints have limits, it will not be continuous rotation, which affects the arm's ability to stay at the end-effector pose while the rest of the arm moves through its redundancy.</p> <p>As for physical construction, I think a good example is thinking of a planar 3 link arm which is redundant for end-effector 2D position (with no rotation). For example:</p> <p><img src="https://i.stack.imgur.com/FYhwi.png" alt="Planar 3 link arm image"></p> <p>For many points, you could find a way to move the joints to keep the end-effector at the same point. But if the point is too close to the base then, unless the links can pass through each other, the links will collide and prevent you from getting to the other configurations. </p> <p>Lastly, as for kinematic constraints: if you fix the end-effector at a point, you can think of the arm as a <a href="http://en.wikipedia.org/wiki/Four-bar_linkage" rel="nofollow noreferrer">four-bar linkage</a>. I can't post an animated gif, but the Wikipedia page has this nice example: <a href="http://en.wikipedia.org/wiki/File:4_bar_linkage_animated.gif" rel="nofollow noreferrer">http://en.wikipedia.org/wiki/File:4_bar_linkage_animated.gif</a>. You should consider the arm to be the 3 links: AD, DC, and CB. The base of the arm is A and the end-effector is B. (Ignore point E and the large triangle). Here the arm is exploring the entire local null space of redundant solutions that it can reach. However, there is another set of solutions that can be achieved if the arm is flipped horizontally (if point C is below the line between A and B). Clearly the arm cannot move there with the end-effector constrained to stay at B. But it is a valid arm configuration.</p>
7181
2015-05-06T10:40:32.930
|control|inverse-kinematics|
<p>Let's say my redundant robot is at an operational position $x$. Is the set of all possible joint configurations "continuous", which would mean that it is possible to explore all the possible configurations without moving the end effector? Is there a way to show that it is true or false? I am using a Kuka LBR robot with 7 dof so maybe there is a specific answer for this one.</p> <p>I have been searching and did not find any result, but I will gladly accept any link or answer that you may have.</p>
Is the geometric inverse problem's solution "continuous" for a redundant robot?
<p>Honestly it depends on what you want; I will cover some options from cheapest to most expensive:</p> <ul> <li><p>Ultrasonic sensor on a servo, 5-50$ depending on the model. It can be fairly accurate with around 1cm accuracy and 255 steps on a full circle, but it can have poor performance in dusty environments and can have poor results with curved items.</p></li> <li><p>IR distance sensor, 10-100$ depending on the model. More accurate than an ultrasonic sensor (around .5cm accuracy). Performs better with round objects, sensitive to dust, but can be placed in clear enclosures to protect them.</p></li> <li><p>1D lidar/laser rangefinder on a servo, 80-300$, <a href="http://pulsedlight3d.com/" rel="noreferrer">some</a> <a href="http://www.lightware.co.za/shop/en/rangefinders-and-altimeters/7-sf02f.html" rel="noreferrer">examples</a>. They work over longer ranges and collect data faster, potentially allowing two-axis collection to get a "3D" point cloud, but are more expensive and will require a more sophisticated controller to take advantage of them.</p></li> <li><p>Inexpensive 2D lidar sensor (possibly on a servo for 3D capture), <a href="http://www.robotshop.com/en/rplidar-360-laser-scanner.html" rel="noreferrer">example</a>. Allows for true 2D capture and can be moved up and down to allow for a 3D point cloud; more expensive and requires even more processing power, probably no less than a Raspberry Pi or Intel Edison.</p></li> <li><p>Stereoscopic cameras, 400$+; my favorite is the <a href="https://www.stereolabs.com/" rel="noreferrer">ZED</a>. These provide "true" 3D and can be fairly accurate at around 10 centimeters, but give you a real 3D point cloud in addition to being cameras that you can use to do object recognition and detection on. The only problem is that they require significant horsepower, no less than an Nvidia K1, which could restrict some mobile applications.</p></li> <li><p>True LIDAR sensors, 1000$+, allowing for very robust 2D point clouds, <a href="http://www.robotshop.com/en/hokuyo-urg-04lx-ug01-scanning-laser-rangefinder.html" rel="noreferrer">example</a>, and designed to be oscillated up and down to provide full 3D mapping. They can be very expensive, up to 10k+, and will require an embedded computer to run.</p></li> <li><p>True 3D LIDAR sensors, 50,000$+, are very accurate and can provide immense amounts of data, but are very expensive, heavy, and require lots of processing power to use.</p></li> </ul> <p>If you provide more specifics on your application I can give a more specific answer.</p>
7190
2015-05-07T17:54:34.690
|mobile-robot|sensors|swarm|
<p>I'm searching for a <strong>cheap (under 100$) and efficient</strong> 3D sensor, which detects obstacles and moving objects, for robot applications like quadrotor navigation, swarm robotics, etc. Can you suggest a sensor that can be either a commercial product or a "do it yourself" project?</p>
Cheap and efficient 3D sensor?
<p>For most applications, I think the calculation you describe is good enough when selecting hardware. You want your arm to be able to have some lift capacity at the worst case, which is when the arm is fully extended. Note that you should also take into account the weight of the arm itself which is typically non-negligible. </p> <p>That being said, yes, there are fancy ways to do this math. The arm's Jacobian is a matrix that relates joint velocities $\dot{q}$ to end-effector velocity $v_e$ . Note that the Jacobian itself is a function of the current joint angles $q$: $$ v_e = J(q)\dot{q} $$ It should come as no surprise that the concepts of force and velocity are linked. In fact, they are orthogonal. This is called the kineto-statics duality. This image shows a planar 2 link arm at a number of different configurations. At each configuration, the velocity manipulability ellipsoid is shown. The arm can move quickly where the ellipse is long. There is also a force manipulability ellipsoid (not shown) that is the same except rotated 90 degrees. In the force manipulability ellipsoid, the arm is strong (and slow) where the ellipse is long.</p> <p><img src="https://i.stack.imgur.com/xmpiI.jpg" alt="http://www.kyoto-u.ac.jp/en/research/forefronts/archives/images/rakuyu6-a2.jpg/image"></p> <p>The Jacobian can represent this too: $$ \tau=J^T(q)\gamma_e $$</p> <p>Where $\tau$ is the vector of joint torques, and $\gamma_e$ is the end-effector forces.</p> <p>Calculating the Jacobian is beyond the scope of this thread. You should get yourself a good textbook or robotics software library to help. (Searching around on this site should provide some good starting points). </p> <p>Also, <a href="https://engineerjau.wordpress.com/2013/05/04/advanced-robotics-manipulability-ellipsoids/" rel="nofollow noreferrer">this page</a> seems to have a good discussion of manipulability ellipsoids.</p>
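<p>A minimal numeric sketch of that duality for a planar 2-link arm (Python with NumPy). The link lengths, joint angles and payload below are arbitrary example values; J maps joint velocities to end-effector velocity, and J transposed maps an end-effector force to the joint torques needed to resist it, which is the quick way to check motor sizing at a given pose.</p> <pre><code>
import numpy as np

l1, l2 = 0.4, 0.3                      # link lengths [m] (assumed)
q1, q2 = np.radians([30.0, 45.0])      # joint angles (assumed)

s1, c1 = np.sin(q1), np.cos(q1)
s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)

# Planar 2-link Jacobian (end-effector linear velocity only)
J = np.array([[-l1*s1 - l2*s12, -l2*s12],
              [ l1*c1 + l2*c12,  l2*c12]])

F = np.array([0.0, -9.81 * 1.0])       # weight of a 1 kg payload at the end-effector
tau = J.T @ F                          # joint torques this load induces; the motors must match it
print("holding torques [Nm]:", tau)

# Singular values of J give the axis lengths of the velocity manipulability
# ellipsoid; the force ellipsoid is the same picture with the axes inverted.
U, S, Vt = np.linalg.svd(J)
print("velocity ellipsoid axis lengths:", S)
</code></pre>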
7194
2015-05-08T12:47:27.793
|robotic-arm|mechanism|software|
<p>I'm a complete beginner in robotics with a background in programming...</p> <p>I started thinking about a robot project yesterday and want to order some motors to test with. I saw the specs for the motors, torque etc., and I think I remember enough physics from high school to do simple calculations. For example, if the motor rotates an arm, then given the torque and the length of the arm, how much could it lift? Also, if it doesn't lift it straight up, but at an angle, I could with a bit of thinking tweak the calculations a bit... If there were several joints attached to each other, the calculations would be more complex, but I could then probably create a program in node.js, for example, to be able to experiment with different values.</p> <p>However, I would assume that these kinds of calculations are very common when designing robots. So I would assume there are already programs created for these types of calculations. I don't know exactly what I mean with "these types of calculations", because I don't know yet what I don't know, so I would like to ask: which programs do you use for making calculations when you design your robots?</p> <p>Preferably they should be open source...</p>
Beginner question about software for calculations
<p>For building a dynamic model from scratch of any differential drive mobile robot (i.e., a tank), the best resource I've found so far is a paper by Dhaouadi and Hatab (<a href="http://www.omicsgroup.org/journals/dynamic-modelling-of-differentialdrive-mobile-robots-using-lagrange-and-newtoneuler-methodologies-a-unified-framework-2168-9695.1000107.pdf" rel="nofollow">PDF link</a>) titled "Dynamic Modelling of Differential-Drive Mobile Robots using Lagrange and Newton-Euler Methodologies: A Unified Framework". It even includes some discussion of how to do the dynamic modeling of the actuators.</p>
7198
2015-05-08T15:23:38.450
|wheeled-robot|
<p>I am planning a tank-like robot for hobby purposes. I have a control engineering background; however, I have never applied it to robotics. I would like to test different control methods, namely MPC. I have seen a lot of publications regarding the kinematics and inverse kinematics of such a robot; however, I am wondering if somebody can point me to material on the dynamic modelling of such a system, taking into account the forces, masses, etc.?</p>
Dynamic model of a tank like robot
<p>So generally for quadcopter design you want the hover throttle (the point where the weight is balanced by the thrust produced) to be around 50%. So if you look at the product page for the motor you posted you will see some measured thrust values:</p> <blockquote> <p>10x4.7 - 11.1v - 21A - 1150g Thrust</p> <p>9050 - 11.1v - 18A - 900g Thrust</p> <p>1060 (3 blade) - 11.1v - 23A - 1200g Thrust</p> </blockquote> <p>With a 9x4.5 prop you're going to get a hair less than the rated thrust for a 9050 (9x5) prop. With four motors you will probably get around 3000g of thrust, or, based on an 800g frame, about a 26% hover throttle. As you can see the motor is capping out at 18A, meaning you could go with a 20A ESC to save weight. This data is based on a 3S battery, so with a hover current draw of 18A*4*0.26=18.7A you could get around 7 minutes of flight time with a 2.2Ah battery (2.2Ah/18.7A). The same sizing arithmetic is collected into a short script below.</p> <ul> <li><p>4 x motors 304g</p></li> <li><p>4 x props 36g</p></li> <li><p>4 x 30A esc 128g</p></li> <li><p>1 x <a href="http://www.hobbyking.com/hobbyking/store/uh_viewItem.asp?idProduct=21346" rel="nofollow">2.2Ah 3S battery</a> 163g</p></li> </ul> <p>This is already 632g, pretty close to 800g, and this is without a flight controller or misc connectors, receiver, frame, etc... Is it possible? Yes, but expect it to be a little heavier.</p>
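<p>The same sizing arithmetic, collected into a few lines (Python) so you can plug in your own numbers. The thrust, current and battery figures are the rough estimates quoted above, not measured values for your exact build.</p> <pre><code>
all_up_weight_g = 800.0        # frame + electronics + battery
max_thrust_g    = 4 * 750.0    # ~750 g per motor with a 9x4.5 prop (estimate)
max_current_a   = 4 * 18.0     # per-motor current at full throttle
battery_ah      = 2.2

hover_throttle  = all_up_weight_g / max_thrust_g
hover_current_a = max_current_a * hover_throttle        # crude linear estimate
flight_time_min = 60.0 * battery_ah / hover_current_a

print("hover throttle : %.0f %%" % (100 * hover_throttle))
print("hover current  : %.1f A" % hover_current_a)
print("flight time    : %.1f min (optimistic: current is not really linear "
      "with throttle, and you should only use about 80%% of the pack)" % flight_time_min)
</code></pre>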
7206
2015-05-09T12:03:26.127
|multi-rotor|
<p>I am trying to build a quadcopter from scratch. I have selected a few parts but I have no idea whether the quadcopter will come together and fly.</p> <p>I would appreciate your feedback on whether the parts I have selected are compatible (<a href="http://www.hobbyking.com/hobbyking/store/__15205__Hobby_King_30A_ESC_3A_UBEC.html" rel="nofollow">UBEC</a>, <a href="http://www.hobbyking.com/hobbyking/store/__15193__HobbyKing_Donkey_ST3007_1100kv_Brushless_Motor.html" rel="nofollow">Motor</a>). If not, I would appreciate suggestions.</p> <p>The frame for my quadcopter is in the X configuration and I am making my own. I am expecting the average weight of the quad to be around 800g. I hope the motors and prop combination can hover it well.</p>
Building my first quadcopter
<p>As it turns out, <strong>I was not loading the sensor values into the array properly.</strong> <strong>It seems that data is transmitted one byte at a time</strong>, so the sensorbytes array needed to be an array of 4 elements rather than two. Once this was corrected, the correct encoder counts could be determined by joining the upper and lower bytes of the encoder counts from each wheel using the following scheme:</p> <pre><code>// the high byte is shifted left eight bits and combined with the low byte // encoder_count = (highbyte &lt;&lt; 8) | lowbyte left_encoder = (sensorbytes[0] &lt;&lt; 8) | sensorbytes[1]; </code></pre> <p>Note the parentheses: in C, <code>+</code> binds tighter than <code>&lt;&lt;</code>, so writing <code>sensorbytes[0]&lt;&lt;8+sensorbytes[1]</code> would actually shift by <code>8+sensorbytes[1]</code>. This scheme made sure that the resulting left_encoder was a signed int rather than the unsigned int that would have resulted had the word() function been used. The revised updateSensors() code can be found in this post: <a href="https://robotics.stackexchange.com/questions/7229/irobot-create-2-encoder-counts/7246?iemail=1&amp;noredirect=1#7246">iRobot Create 2: Encoder Counts</a>. A quick Python cross-check of the byte combination is shown below.</p>
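<p>For reference, the same high-byte/low-byte combination done on a PC in Python 3; <code>struct</code> handles the two's-complement sign for you, which is easy to get wrong when shifting by hand. The example bytes are arbitrary, and the big-endian format matches the OI convention of sending the high byte first.</p> <pre><code>
import struct

raw = bytes([0xFF, 0x38, 0x00, 0xC8])      # [left hi, left lo, right hi, right lo]
left, right = struct.unpack('&gt;hh', raw)    # big-endian signed 16-bit pairs
print(left, right)                         # -200 200
</code></pre>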
7215
2015-05-10T18:31:48.040
|arduino|irobot-create|roomba|
<p>Over the past few weeks, I have been attempting to interface the iRobot Create 2 with an Arduino Uno. As of yet, I have been unable to read sensor values back to the Arduino. I will describe by hardware setup and my Arduino code, then ask several questions; hopefully, answers to these questions will be helpful for future work with the Create 2.</p> <p><strong>Hardware:</strong> The iRobot Create 2 is connected to the Arduino Uno according to <a href="http://www.irobot.com/~/media/MainSite/PDFs/About/STEM/Create/Arduino_Tutorial.pdf?la=en" rel="nofollow noreferrer">the suggestions given by iRobot</a>. Instead of the diodes, a DC buck converter is used, and the transistor is not used because a software serial port is used instead of the UART port.</p> <p><strong>Software:</strong> The following is the code that I am implementing on the Arduino. The overall function is to stop spinning the robot once the angle of the robot exceeds some threshold. A software serial port is used, which runs at the default Create 2 Baud rate.</p> <pre><code>#include &lt;SoftwareSerial.h&gt; int rxPin=3; int txPin=4; int ddPin=5; //device detect int sensorbytes[2]; //array to store encoder counts int angle; const float pi=3.1415926; #define left_encoder (sensorbytes[0]) #define right_encoder (sensorbytes[1]) SoftwareSerial Roomba(rxPin,txPin); void setup() { pinMode(3, INPUT); pinMode(4, OUTPUT); pinMode(5, OUTPUT); pinMode(ledPin, OUTPUT); Roomba.begin(19200); // wake up the robot digitalWrite(ddPin, HIGH); delay(100); digitalWrite(ddPin, LOW); delay(500); digitalWrite(ddPin, HIGH); delay(2000); Roomba.write(byte(128)); //Start Roomba.write(byte(131)); //Safe mode updateSensors(); // Spin slowly Roomba.write(byte(145)); Roomba.write(byte(0x00)); Roomba.write(byte(0x0B)); Roomba.write(byte(0xFF)); Roomba.write(byte(0xF5)); } void loop() { updateSensors(); // stop if angle is greater than 360 degrees if(abs(angle)&gt;2*pi){ Roomba.write(173); delay(100); } } void updateSensors() { // call for the left and right encoder counts Roomba.write(byte(148)); Roomba.write(byte(2)); Roomba.write(byte(43)); Roomba.write(byte(44)); delay(100); // load encoder counts into an array int i = 0; while(Roomba.available()) { int c = Roomba.read(); sensorbytes[i] = c; i++; } angle=((right_encoder*72*pi/508.8)-(left_encoder*72*pi/508.8))/235; } </code></pre> <p><strong>Questions:</strong></p> <ul> <li><strong>Am I loading the sensor values into the array correctly?</strong> This same code works when a bump and run program is implemented, but that requires knowing only one bit rather than two bytes.</li> <li><strong>How many bytes can be read over the serial connection at a time?</strong> A previous post (<a href="https://robotics.stackexchange.com/questions/2530/help-sending-serial-command-to-roomba/2537#2537">Help sending serial command to Roomba</a>) highlights that one byte can be sent at a time. Does this imply that the reverse is true? If so, would a solution be to use a char array to read the values instead and then to append two chars to form an signed int?</li> <li><strong>Is serial communication synchronization a problem?</strong> I am assuming that synchronization is not a problem, but is it possible for the bytes to be split on the nibble boundaries? This would present a problem because there is not a nibble datatype. </li> </ul>
Arduino-Create 2: Reading Sensor Values
<p>There are far too many to list them all here, but I can list the most common ones:</p> <ul> <li><p>As you said, base the movement off the motor RPM and the diameter of the wheels, and move for a period of time (the basic circumference arithmetic is sketched below). This is not normally very accurate and can result in the robot "leaning" one way or another.</p></li> <li><p>Use potentiometers to directly measure the wheel rotation and bring the robot to a halt when the desired distance is reached; this can be more accurate.</p></li> <li><p>Measure an external reference with something, e.g. an ultrasonic sensor, camera, lidar, etc.; this can have varying degrees of accuracy depending on the specific setup but can be very good.</p></li> <li><p>Use an optical flow sensor to measure the ground movement under you, e.g. a laser mouse sensor; this can have varying degrees of accuracy but is generally very good.</p></li> <li><p>Use a ball mouse in the same fashion; generally less accurate, and needs a very flat, smooth surface.</p></li> </ul> <p>What specific application do you have in mind?</p>
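<p>A quick sketch of the arithmetic behind the first two options (Python): convert a target distance into wheel revolutions or sensor counts and drive until that count is reached. The wheel diameter and counts-per-revolution below are example values, not figures for any particular robot.</p> <pre><code>
import math

WHEEL_DIAMETER_M = 0.065     # example wheel
COUNTS_PER_REV   = 360       # example encoder/potentiometer resolution

def counts_for_distance(distance_m):
    circumference = math.pi * WHEEL_DIAMETER_M
    return int(round(distance_m / circumference * COUNTS_PER_REV))

target = counts_for_distance(1.0)    # 1 metre
print("drive until the wheel sensor has advanced %d counts" % target)
</code></pre>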
7216
2015-05-10T19:59:33.537
|wheeled-robot|movement|
<p>I want to move a robot a certain distance, say 1 meter. What are the different ways that I can implement to do so? For example, I can measure the circumference of the wheel and assign a time of rotation to move it. What are other techniques to achieve this?</p>
What are the different ways to control distance to be covered by a robot?
<p>The program shown below illustrates two ways of determining result size, and a method that reads the right number of bytes to combine.</p> <p>The first method suggested in the program below is to define <code>singleSet</code>, a set containing the packet numbers of sensors that return single-byte results. [The function <code>resultSize()</code> as shown uses <code>return ord(sizesList[sensor]) - ord('0')</code>; a method depending on <code>singleSet</code> would instead say <code>return 1 if sensor in singleSet else 2</code>.</p> <p>The second method shown is slightly more adaptable than the first, and if extended would allow treatment of large packets #0-#6. With a bit more work it could accomodate packets #100-#107.</p> <p>Here is the output from the program:</p> <pre><code> 7 8 9 10 11 12 13 14 15 16 17 18 21 24 32 34 35 36 37 38 45 52 53 58 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 2 3 4 5 6 19 20 22 23 25 26 27 28 29 30 31 33 39 40 41 42 43 44 46 47 48 49 50 51 54 55 56 57 26 10 6 10 14 12 52 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 16 54 0x36 54.000 1 17 71 0x47 71.000 1 18 88 0x58 88.000 2 19 27002 0x697a 27002.000 2 20 35740 0x8b9c 35740.000 1 21 173 0xad 173.000 2 22 48847 0xbecf 48.847 2 23 57585 0xe0f1 57.585 1 24 3 0x3 3.000 2 25 5157 0x1425 5.157 2 26 13895 0x3647 13.895 2 27 22633 0x5869 22633.000 </code></pre> <p>Here is the program itself. Replace the call to dummy routine <code>connectionread()</code> with your <code>connection.read()</code> call.</p> <pre><code>#!/usr/bin/env python # re: Combining bytes from Create2 sensor inputs # http://robotics.stackexchange.com/questions/7219/issue-with-multiple-bytes-from-irobot-create-2 singleSet = set((7,8,9,10,11,12,13,14,15,16,17,18,21,24, 32,34,35,36,37,38,45,52,53,58)) sizesList = 'J:6:&gt;&lt;d1111111111112212212222222121111122222212222221122221' def resultSize(sensor): if sensor &lt; 59: # Sensor groups &gt; 58 not handled return ord(sizesList[sensor]) - ord('0') else: return -1 # The next 11 lines print debugging / demo output for i in singleSet: print '{:&gt;2}'.format(i), print for i in singleSet: print '{:&gt;2}'.format(sizesList[i]), print; print for i in range(len(sizesList)): if i not in singleSet: print '{:&gt;2}'.format(i), print for i in range(len(sizesList)): if i not in singleSet: print '{:&gt;2}'.format(resultSize(i)), print; print # Routine to scale a sensor result and return real result def scaleResult(sensor, r): if sensor in (22,23,25,26,54,55,56,57): return r/1000.0 else: return r # Dummy routine in place of connection.read() crv = 37 def connectionread(): global crv; crv = (crv+17)%255 return crv # Routine to query a sensor, read one or two bytes, and return result def readResult(sensor): if 6 &lt; sensor &lt; 59: # Work on basic sensors, not groups valu = 0 for i in range(resultSize(sensor)): valu = (valu&lt;&lt;8) | connectionread() return valu else: return -1 # Some test cases for i in range(16,28): r = readResult(i) print '{:&gt;2} {:&gt;3} {:&gt;7} {:8} {:9.3f}'.format(resultSize(i), i, r, hex(r), scaleResult(i, r)) </code></pre> <p>There are of course dozens of ways to address the issue, and the two shown above are two of the simpler ones. If you are going to create a large software suite, you might instead use a more sophisticated approach, in which you would set up a record for each kind of packet, with each record containing packet number and name, some brief id, size, scale, and units. 
For example, packet 25 would have a record like (25, 'Battery Charge', 'B_Ch', 2, 1000, 'Ah'). Records would be stored in a <code>dict</code> indexed by packet number. Records could also include group-packet membership information.</p>
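<p>Not part of the program above — just a minimal sketch of how such a record table might look in Python. The three entries, their divisors and units are assumptions for illustration and should be checked against the Create 2 Open Interface spec before use:</p> <pre><code># Hypothetical record table: packet id -> (name, short id, size, divisor, units)
PACKETS = {
    22: ('Voltage',        'Volt',  2, 1000.0, 'V'),
    25: ('Battery Charge', 'B_Ch',  2, 1000.0, 'Ah'),
    43: ('Left Encoder',   'L_Enc', 2, 1.0,    'counts'),
}

def read_packet(read_byte, packet_id):
    """Read 'size' bytes with read_byte(), combine them, then apply the divisor."""
    name, short_id, size, divisor, units = PACKETS[packet_id]
    value = 0
    for _ in range(size):
        value = (value &lt;&lt; 8) | read_byte()
    return name, value / divisor, units

# Stand-in for connection.read(): two raw bytes 0x3F, 0xD7 -> 16343 -> 16.343 V
fake_bytes = iter([0x3F, 0xD7])
print(read_packet(lambda: next(fake_bytes), 22))
</code></pre> <p>In a real program, <code>read_byte</code> would simply wrap the serial connection's read call.</p>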
7219
2015-05-11T03:42:25.527
|irobot-create|python|roomba|
<p>I was having problems reading sensor information from my Irobot Create 2 and sent an email asking for help from the Irobot staff. They were super helpful and gave me an answer(the next day!!!) that helped push along my project. I was requesting data from the create2 to print to the screen so I could figure out how to write a code that would read the data. I started with this section of code that was not working for me (I trimmed some of the code off that controlled other functions):</p> <pre><code>from Tkinter import * from subprocess import call import datetime import serial import ttk import struct import thread port = '/dev/ttyUSB0' #sets the com port for Atlas baud = 115200 #sets the baud rate connection = serial.Serial(port, baud) #starts the serial communication #program to read communication from create2 def program2(threadName): while True: x = connection.read() print x #program to write to create2 def program1(threadName): atlas = Tk() #starts a new GUI for atlas control atlas.geometry('1000x500') #sets the size of the control window atlas.title('Atlas Control Panal') #sets the name of the control window def sendCommandASCII(command):#used to send a command to the create2 cmd = "" for v in command.split(): cmd += chr(int(v)) sendCommandRaw(cmd) def sendCommandRaw(command):#used to send a command to the create2 global connection try: if connection is not None: connection.write(command) else: tkMessageBox.showerror('Not connected!', 'Not connected to a robot!') print "Not connected." except serial.SerialException: print "Lost connection" tkMessageBox.showinfo('Uh-oh', "Lost connection to the robot!") connection = None def test():#sets a test command up to check connection global buttonpress buttonpress='test' sendCommandASCII('142 7') #makes a button on the GUI that starts the test command button1 = Button(atlas, text = 'test mode', command=test) button1.place(x=600, y=400) button1.config(width=10, height=5) atlas.mainloop() #runs the GUI as a loop to keep the window open. #runs the read and the write program at the same time try: thread.start_new_thread(program1, ("program1",)) thread.start_new_thread(program2, ("program2",)) except Exception, e: print str(e) </code></pre> <p>They told me that the code was actually working fine but I was trying to print out the value of the sensor packet without parsing it in any way. They then recommended I change the code in program2 to this:</p> <pre><code>while True: def toHexFromByte(val): return hex(ord(val))[2:}.rjust(2, '0').upper() x = connection.read() for b in x: print toHexFromByte(b) </code></pre> <p>this works beautifully and prints to the screen if the bumper is pressed or a wheel drops. My question is how to deal with responses that are longer than one byte (ie Packet ID: 22 for voltage)? </p> <p>When I try Packet ID: 22 it prints to screen and it sends the high byte of 3F and a low byte of D7. I can manually combine them to get 3FD7 and convert it to a decimal of 16.343 Volts but I would like to figure a way to print to screen the the voltage by having my program do this for me. Many of the sensors send data in 2 bytes and I need a way to make sure that it is combined automatically. </p> <p>Robb </p>
Issue with multiple bytes from iRobot Create 2
<p>In my opinion, the main difference is:</p> <ul> <li><p>Positioning: gives information about the robot's coordinates. It gives raw data that you can use.</p></li> <li><p>Localization: the process of the robot (or another actor) locating itself (or the robot) on the map. Here a method is applied to locate the robot.</p></li> </ul> <p>Positioning gives you the coordinates. Localization is determining the place of those coordinates on the map.</p> <p>Most of the time the difference boils down to "do you have the position information directly available" (positioning) or "do you need to determine it on a map using other data you have" (localization).</p> <p>Bear in mind that the distinction is very often context dependent and that the two terms are nowadays used with very similar meanings. See <a href="https://gis.stackexchange.com/questions/106785/difference-between-positioning-and-localization">this link for more info</a>.</p>
7221
2015-05-11T10:24:33.383
|localization|slam|gps|
<p>I would like to know what the differences are between positioning and localization systems. In most review papers they are used interchangeably. Are they the same? For example, GPS (Global Positioning System) gives the coordinates of the receiver, while SLAM (Simultaneous Localization and Mapping) constructs or updates a map of an unknown environment.</p> <p>Is the difference:</p> <p><strong>Positioning:</strong> only gives information about the receiver's coordinates, with no information about the environment.</p> <p><strong>Localization:</strong> gives information about the receiver's coordinates and also the environment; positioning is a subtopic of localization.</p>
What is the difference between Positioning and Localization Systems?
<p>If you know how much force should be on the rope, you could simply mount the winch against a spring that takes that same amount of force to compress. Then, just use a limit switch to detect when the spring has been compressed. </p>
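<p>A minimal sketch of the detection side of this idea, assuming a Raspberry Pi with the limit switch wired between GPIO 17 and ground; the pin number and the winch start/stop stubs are placeholders for your own wiring and motor driver:</p> <pre><code>import time
import RPi.GPIO as GPIO

SWITCH_PIN = 17                  # hypothetical wiring: switch pulls the pin low

def start_winch():               # replace with your actual motor driver call
    print("winch on")

def stop_winch():
    print("winch off")

GPIO.setmode(GPIO.BCM)
GPIO.setup(SWITCH_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

try:
    start_winch()
    # The spring only compresses (and closes the switch) once the rope is taut.
    while GPIO.input(SWITCH_PIN) == GPIO.HIGH:
        time.sleep(0.01)
    stop_winch()
finally:
    GPIO.cleanup()
</code></pre>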
7225
2015-05-11T14:24:02.547
|motor|
<p>I have a requirement for a motor that pulls a piece of rope until the rope is taut. However, I'm at a loss as to how to achieve this. I'm sure it must have been done before, but I'm not sure how best to describe it in a way that would get me more search results. Are there any sensors or established methods for sensing resistance to motion in electric motors?</p>
Robotic winch force sensor
<p>This is true. The encoders on the Create are square-wave, not quadrature. Therefore, they rely on the commanded direction to figure out which way the wheel is spinning. When driving forward, they count up, and when driving backward they count down as expected. But if you move the wheel when the robot is not moving, or move the wheel opposite the commanded direction, you will get incorrect results. The latest OI Spec available at <a href="http://www.irobot.com/create" rel="nofollow">www.irobot.com/create</a> has a small discussion of this in the section for Opcode 43 on page 30. This was only recently updated, and contains a number of notes regarding gotchas like this. So you may want to re-download.</p>
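<p>Not from the OI spec — just a generic sketch of how the 16-bit counts are usually turned into a signed distance on the host side, using the wheel constants quoted in the question (72 mm diameter, 508.8 counts per revolution) and a two's-complement wrap-around on the deltas between consecutive readings:</p> <pre><code>import math

MM_PER_COUNT = math.pi * 72.0 / 508.8   # wheel constants from the question

def encoder_delta(prev, curr):
    """Signed change between two consecutive 16-bit encoder readings."""
    diff = (curr - prev) &amp; 0xFFFF
    if diff &gt;= 0x8000:                  # large positive jump -> actually negative
        diff -= 0x10000
    return diff

# Toy check of the wrap-around handling
print(encoder_delta(65530, 3))          # 9 counts forward across the rollover
print(encoder_delta(3, 65530))          # -9 counts (backward across the rollover)
print(encoder_delta(100, 160) * MM_PER_COUNT)   # ~26.7 mm of wheel travel
</code></pre>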
7229
2015-05-11T17:30:20.397
|irobot-create|roomba|
<p>This post is a follows from an earlier post (<a href="https://robotics.stackexchange.com/questions/7121/irobot-create-2-angle-measurement">iRobot Create 2: Angle Measurement</a>). I have been trying to use the wheel encoders to calculate the angle of the Create 2. I am using an Arduino Uno to interface with the robot.</p> <p>I use the following code to obtain the encoder values. A serial monitor is used to view the encoder counts.</p> <pre><code>void updateSensors() { Roomba.write(byte(149)); // request encoder counts Roomba.write(byte(2)); Roomba.write(byte(43)); Roomba.write(byte(44)); delay(100); // wait for sensors int i=0; while(Roomba.available()) { sensorbytes[i++] = Roomba.read(); //read values into signed char array } //merge upper and lower bytes right_encoder=(int)(sensorbytes[2] &lt;&lt; 8)|(int)(sensorbytes[3]&amp;0xFF); left_encoder=int((sensorbytes[0] &lt;&lt; 8))|(int(sensorbytes[1])&amp;0xFF); angle=((right_encoder*72*3.14/508.8)-(left_encoder*72*3.14/508.8))/235; } </code></pre> <p>The code above prints out the encoder counts; however, when the wheels are spun backwards, the count increases and will never decrement. Tethered connection to the Create 2 using RealTerm exhibits the same behavior; <strong>this suggests that the encoders do not keep track of the direction of the spin.</strong> Is this true? </p>
iRobot Create 2: Encoder Counts
<p>As an extension to this <a href="https://robotics.stackexchange.com/a/7296/6941">answer</a>, I'd like to share with the community a <strong><a href="http://www.mathworks.com/matlabcentral/fileexchange/51076-displacement-control-of-a-quadcopter" rel="nofollow noreferrer">Simulink model</a></strong> implementing the system described above. It does not represent a solution for the code reported by <a href="https://robotics.stackexchange.com/users/2155/croco">CroCo</a>, but it may give better insight and, as such, it might be helpful.</p> <p>The system is depicted below: <img src="https://i.stack.imgur.com/zyBuq.png" alt="system"></p> <p>In red we have the blocks running at $1\,\text{kHz}$, which deal with the <strong>attitude</strong> control, whereas the blocks shown in green handle tracking of the <strong>linear displacement</strong> and run at $100\,\text{Hz}$.</p> <p>As a side note, since we have to ensure that the "<em>small angles</em>" assumption remains valid throughout the simulation, the <strong>take-off</strong> phase (i.e. reaching the desired height $z_d$) needs to be almost complete before commencing the <strong>displacement tracking</strong> of the $x_d\left(t\right)$ and $y_d\left(t\right)$ time-varying linear coordinates (a circle of radius $5\,\text{m}$ in the model). Further, for the same reason, the desired circular trajectory must start from the quadcopter's initial position $\left(x\left(0\right),y\left(0\right)\right)$.</p> <p>Depending on the controllers' gains, we can get quite nice tracking results: <img src="https://i.stack.imgur.com/Nv0Df.png" alt="results"></p> <p>The top three diagrams show the $x$, $y$ and $z$ components (green), respectively, while following the targets (blue). The last plot reports the roll (blue), pitch (green) and yaw (red). Time is expressed in seconds, displacements are in meters and angles are in degrees.</p>
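<p>For readers without Simulink, here is a small sketch (in Python, not taken from the model above) of the same two-rate structure: the outer position loop runs once every ten steps of the inner attitude loop. The gains and the one-line "dynamics" are toy placeholders, only meant to show the scheduling:</p> <pre><code>DT = 0.001                      # inner (attitude) loop period: 1 kHz
OUTER_EVERY = 10                # outer (position) loop runs every 10th step: 100 Hz

def position_controller(pos, pos_des):
    kp = 0.05                   # toy gain: position error -> desired roll/pitch
    return [kp * (d - p) for p, d in zip(pos, pos_des)]

def attitude_controller(att, att_des):
    kp = 8.0                    # toy gain: attitude error -> torque command
    return [kp * (d - a) for a, d in zip(att, att_des)]

pos, att = [0.0, 0.0], [0.0, 0.0]
att_des = [0.0, 0.0]
for step in range(20000):                        # 20 s of simulated time
    if step % OUTER_EVERY == 0:                  # 100 Hz: update desired angles
        att_des = position_controller(pos, [1.0, 0.5])
    torque = attitude_controller(att, att_des)   # 1 kHz: attitude control
    att = [a + DT * t for a, t in zip(att, torque)]       # toy attitude "dynamics"
    pos = [p + DT * 9.81 * a for p, a in zip(pos, att)]   # toy translation "dynamics"

print([round(p, 2) for p in pos])                # approaches the target [1.0, 0.5]
</code></pre> <p>In the real model, of course, the two toy integration lines are replaced by the full quadrotor dynamics.</p>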
7235
2015-05-11T21:21:28.737
|quadcopter|matlab|microcontroller|
<p>The quadrotor system is multi-ODEs equations. The linearized model is usually used especially for position tracking, therefore one can determine the desired x-y positions based on the roll and pitch angles. As a result, one nested loop which has inner and outer controllers is needed for controlling the quadrotor. For implementation, do I have to put <code>while-loop</code> inside <code>ode45</code> for the inner attitude controller? I'm asking this because I've read in a paper that the inner attitude controller must run faster (i.e. 1kHz) than the position controller (i.e. 100-200 Hz). In my code, both loops run at 1kHz, therefore inside <code>ode45</code> there is no <code>while-loop</code>. Is this correct for position tracking? If not, do I have to insert <code>while-loop</code> inside <code>ode45</code> for running the inner loop? Could you please suggest me a pseudocode for position tracking?</p> <p>To be more thorough, the dynamics equations of the nonlinear model of the quadrotor is provided <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.149.4367&amp;rep=rep1&amp;type=pdf">here</a>, if we assume the small angles, the model is reduced to the following equations </p> <p>$$ \begin{align} \ddot{x} &amp;= \frac{U_{1}}{m} ( \theta \cos\psi + \phi \sin\psi) \\ \ddot{y} &amp;= \frac{U_{1}}{m} ( \theta \sin\psi - \phi \cos\psi) \\ \ddot{z} &amp;= \frac{U_{1}}{m} - g \\ \ddot{\phi} &amp;= \frac{l}{I_{x}} U_{2} \\ \ddot{\theta} &amp;= \frac{l}{I_{y}} U_{3} \\ \ddot{\psi} &amp;= \frac{1}{I_{z}} U_{4} \\ \end{align} $$</p> <p>The aforementioned equations are linear. For position tracking, we need to control $x,y,$ and $z$, therefore we choose the desired roll and pitch (i.e. $\phi^{d} \ \text{and} \ \theta^{d}$)</p> <p>$$ \begin{align} \ddot{x}^{d} &amp;= \frac{U_{1}}{m} ( \theta^{d} \cos\psi + \phi^{d} \sin\psi) \\ \ddot{y}^{d} &amp;= \frac{U_{1}}{m} ( \theta^{d} \sin\psi - \phi^{d} \cos\psi) \\ \end{align} $$</p> <p>Therefore, the closed form for the desired angles can be obtained as follows</p> <p>$$ \begin{bmatrix} \phi_{d} \\ \theta_{d} \end{bmatrix} = \begin{bmatrix} \sin\psi &amp; \cos\psi \\ -\cos\psi &amp; \sin\psi \end{bmatrix}^{-1} \left( \frac{m}{U_{1}}\right) \begin{bmatrix} \ddot{x}^{d} \\ \ddot{y}^{d} \end{bmatrix} $$</p> <p>My desired trajectory is shown below</p> <p><img src="https://i.stack.imgur.com/QDujZ.png" alt="enter image description here"></p> <p>The results are </p> <p><img src="https://i.stack.imgur.com/UKJKc.png" alt="enter image description here"></p> <p>And the actual trajectory vs the desired one is </p> <p><img src="https://i.stack.imgur.com/9hbl9.png" alt="enter image description here"></p> <p>My code for this experiment is </p> <pre><code>%% %######################( Position Controller )%%%%%%%%%%%%%%%%%%%%%%%%%%%%% clear all; clc; dt = 0.001; t = 0; % initial values of the system x = 0; dx = 0; y = 0; dy = 0; z = 0; dz = 0; Phi = 0; dPhi = 0; Theta = 0; dTheta = 0; Psi = pi/3; dPsi = 0; %System Parameters: m = 0.75; % mass (Kg) L = 0.25; % arm length (m) Jx = 0.019688; % inertia seen at the rotation axis. (Kg.m^2) Jy = 0.019688; % inertia seen at the rotation axis. (Kg.m^2) Jz = 0.039380; % inertia seen at the rotation axis. 
(Kg.m^2) g = 9.81; % acceleration due to gravity m/s^2 errorSumX = 0; errorSumY = 0; errorSumZ = 0; errorSumPhi = 0; errorSumTheta = 0; pose = load('xyTrajectory.txt'); DesiredX = pose(:,1); DesiredY = pose(:,2); DesiredZ = pose(:,3); dDesiredX = 0; dDesiredY = 0; dDesiredZ = 0; DesiredXpre = 0; DesiredYpre = 0; DesiredZpre = 0; dDesiredPhi = 0; dDesiredTheta = 0; DesiredPhipre = 0; DesiredThetapre = 0; for i = 1:6000 % torque input %&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;( Ux )&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp; Kpx = 50; Kdx = 8; Kix = 0; Ux = Kpx*( DesiredX(i) - x ) + Kdx*( dDesiredX - dx ) + Kix*errorSumX; errorSumX = errorSumX + ( DesiredX(i) - x ); dDesiredX = ( DesiredX(i) - DesiredXpre ) / dt; DesiredXpre = DesiredX(i); %&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;( Uy )&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp; Kpy = 100; Kdy = 10; Kiy = 0; Uy = Kpy*( DesiredY(i) - y ) + Kdy*( dDesiredY - dy ) + Kiy*errorSumY; errorSumY = errorSumY + ( DesiredY(i) - y ); dDesiredY = ( DesiredY(i) - DesiredYpre ) / dt; DesiredYpre = DesiredY(i); %&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;( U1 )&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp; Kpz = 100; Kdz = 20; Kiz = 0; U1 = Kpz*( DesiredZ(i) - z ) + Kdz*( dDesiredZ - dz ) + Kiz*errorSumZ; errorSumZ = errorSumZ + ( DesiredZ(i) - z ); dDesiredZ = ( DesiredZ(i) - DesiredZpre ) / dt; DesiredZpre = DesiredZ(i); %####################################################################### %####################################################################### %####################################################################### % Desired Phi and Theta R = [ sin(Psi),cos(Psi); -cos(Psi),sin(Psi)]; DAngles = R\( (m/U1)*[Ux; Uy]); %Wrap angles DesiredPhi = wrapToPi( DAngles(1) ) /2; DesiredTheta = wrapToPi( DAngles(2) ); %&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;( U2 )&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp; KpP = 100; KdP = 10; KiP = 0; U2 = KpP*( DesiredPhi - Phi ) + KdP*( dDesiredPhi - dPhi ) + KiP*errorSumPhi; errorSumPhi = errorSumPhi + ( DesiredPhi - Phi ); dDesiredPhi = ( DesiredPhi - DesiredPhipre ) / dt; DesiredPhipre = DesiredPhi; %-------------------------------------- %&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;( U3 )&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp; KpT = 100; KdT = 10; KiT = 0; U3 = KpT*( DesiredTheta - Theta ) + KdP*( dDesiredTheta - dTheta ) + KiT*errorSumTheta; errorSumTheta = errorSumTheta + ( DesiredTheta - Theta ); dDesiredTheta = ( DesiredTheta - DesiredThetapre ) / dt; DesiredThetapre = DesiredTheta; %-------------------------------------- %&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;( U4 )&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp; KpS = 80; KdS = 20.0; KiS = 0.08; U4 = KpS*( 0 - Psi ) + KdS*( 0 - dPsi ); %###################( ODE Equations of Quadrotor )################### %===================( X )===================== ddx = (U1/m)*( Theta*cos(Psi) + Phi*sin(Psi) ); dx = dx + ddx*dt; x = x + dx*dt; %===================( Y )===================== ddy = (U1/m)*( Theta*sin(Psi) - Phi*cos(Psi) ); dy = dy + ddy*dt; y = y + dy*dt; %===================( Z )===================== ddz = (U1/m) - g; dz = dz + ddz*dt; z = z + dz*dt; 
%===================( Phi )===================== ddPhi = ( L/Jx )*U2; dPhi = dPhi + ddPhi*dt; Phi = Phi + dPhi*dt; %===================( Theta )===================== ddTheta = ( L/Jy )*U3; dTheta = dTheta + ddTheta*dt; Theta = Theta + dTheta*dt; %===================( Psi )===================== ddPsi = (1/Jz)*U4; dPsi = dPsi + ddPsi*dt; Psi = Psi + dPsi*dt; %store the erro ErrorX(i) = ( x - DesiredX(i) ); ErrorY(i) = ( y - DesiredY(i) ); ErrorZ(i) = ( z - DesiredZ(i) ); % ErrorPhi(i) = ( Phi - pi/4 ); % ErrorTheta(i) = ( Theta - pi/4 ); ErrorPsi(i) = ( Psi - 0 ); X(i) = x; Y(i) = y; Z(i) = z; T(i) = t; % drawnow % plot3(DesiredX, DesiredY, DesiredZ, 'r') % hold on % plot3(X, Y, Z, 'b') t = t + dt; end Figure1 = figure(1); set(Figure1,'defaulttextinterpreter','latex'); %set(Figure1,'units','normalized','outerposition',[0 0 1 1]); subplot(2,2,1) plot(T, ErrorX, 'LineWidth', 2) title('Error in $x$-axis Position (m)') xlabel('time (sec)') ylabel('$x_{d}(t) - x(t)$', 'LineWidth', 2) subplot(2,2,2) plot(T, ErrorY, 'LineWidth', 2) title('Error in $y$-axis Position (m)') xlabel('time (sec)') ylabel('$y_{d}(t) - y(t)$', 'LineWidth', 2) subplot(2,2,3) plot(T, ErrorZ, 'LineWidth', 2) title('Error in $z$-axis Position (m)') xlabel('time (sec)') ylabel('$z_{d} - z(t)$', 'LineWidth', 2) subplot(2,2,4) plot(T, ErrorPsi, 'LineWidth', 2) title('Error in $\psi$ (m)') xlabel('time (sec)') ylabel('$\psi_{d} - \psi(t)$','FontSize',12); grid on Figure2 = figure(2); set(Figure2,'units','normalized','outerposition',[0 0 1 1]); figure(2) plot3(X,Y,Z, 'b') grid on hold on plot3(DesiredX, DesiredY, DesiredZ, 'r') pos = get(Figure2,'Position'); set(Figure2,'PaperPositionMode','Auto','PaperUnits','Inches','PaperSize',[pos(3),pos(4)]); print(Figure2,'output2','-dpdf','-r0'); legend('actual', 'desired') </code></pre> <p>The code of the desired trajectory is </p> <pre><code>clear all; clc; fileID = fopen('xyTrajectory.txt','w'); angle = -pi; radius = 5; z = 0; t = 0; for i = 1:6000 if ( z &lt; 2 ) z = z + 0.1; x = 0; y = 0; end if ( z &gt;= 2 ) angle = angle + 0.1; angle = wrapToPi(angle); x = radius * cos(angle); y = radius * sin(angle); z = 2; end X(i) = x; Y(i) = y; Z(i) = z; fprintf(fileID,'%f \t %f \t %f\n',x, y, z); end fclose(fileID); plot3(X,Y,Z) grid on </code></pre>
How to implement and code inner and outer PD controllers for quadrotor position tracking
<p>I just learned about <a href="http://www.cvlibs.net/software/libviso/" rel="nofollow">libviso</a>. I haven't checked it against all of your requirements, but it looks pretty good.</p>
7243
2015-05-13T07:45:45.023
|computer-vision|stereo-vision|
<p>I am looking for a library for disparity map / stereo vision computation. These are my requirements:</p> <ol> <li>C++</li> <li>Multi-platform (Linux, Windows, OSX)</li> <li>(preferrable but not mandatory) not CUDA based</li> <li>Suited for robotics (e.g. it should work even if the images are not perfectly rectified and the cameras are not perfectly calibrated)</li> <li>Suitable for tracking purposes (20fps or more)</li> <li>Performing even with low-res images (e.g. 320x240px)</li> <li>Open Source</li> </ol>
Fast C++ library for stereo vision/disparity computation
<p>Shutter speed is a <a href="https://photo.stackexchange.com/questions/27573/whats-the-difference-between-exposure-and-shutter-speed">synonym for exposure time</a>. Exposure time is linked to the frames per second (FPS):</p> <p>$t_{exposure} + t_{readout} \leq 1 / {FPS}$</p> <p>So the answer to your question is: no. FPS is usually the more limiting factor because of the readout speed and the bandwidth to transmit the data. The FPS you select provides an upper limit for the exposure time, but not a lower limit. You can make your exposure time much shorter. How short depends on your camera model, but usually much shorter than a millisecond.</p>
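<p>To make that concrete for the printing application in the question, here is a small back-of-envelope calculation; the blur tolerance and the readout time are assumed numbers, not camera specs:</p> <pre><code># How short must the exposure be so that paper moving at 17 m/s smears by
# less than an assumed 0.1 mm on the object side during the exposure?
v = 17.0                 # m/s, paper speed from the question
max_blur = 0.1e-3        # m, assumed acceptable motion blur
t_exposure = max_blur / v
print(t_exposure)        # ~5.9e-06 s, i.e. a few microseconds

# The frame rate only gives an upper bound on the exposure time:
fps = 120.0
t_readout = 2e-3         # s, assumed sensor readout time
print(1.0 / fps - t_readout)   # longest possible exposure at 120 fps (~6.3 ms)
</code></pre> <p>In other words, the frame rate is not what limits you here; what matters is the minimum exposure time the electronic shutter supports.</p>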
7244
2015-05-13T12:13:30.507
|computer-vision|cameras|
<p>I have been looking at CCD and CMOS sensors and cameras to decide which one to use for automatic control of a printing process. By now I am getting to grips with almost all the essential numbers and abbreviations, but one problem remains: shutters.</p> <p>I understand that there are different types of shutters, both mechanical and electronic, and I understand how they work. My problem concerns shutter speed. If I use a mechanical shutter, the maximum shutter speed depends on that particular element in the assembly, but how does it work for electronic shutters? I have never seen "max shutter speed" in any specs. The only thing I usually see floating around is frames per second, and that usually does not exceed a limit of about 120 fps. Depending on how the sensor is built, one could think that the maximum shutter speed is therefore 1/120, or 1/240 if it uses half frames.</p> <p>Can this be right? It seems really slow. I will be faced with the task of recording crisp and clear images of paper which moves at about 17 m/s. That is never possible with shutter speeds that slow. Will I be forced to use a mechanical shutter, or am I misunderstanding something?</p>
Conceptual problem regarding electronic shutters
<p>One.</p> <p>Here's my quadcopter lifting a burrito, and lowering it using an onboard winch, in 2013. Built from the cheap commercially available X-aircraft x650v4 kit, except I replaced the ESCs and the flight controller (I think that's an APM 1.0 on there).</p> <p>A rule of thumb in multirotor design is that you should be able to hover at 50% throttle. This means that a quadcopter which can fly well at 1kg total weight will actually be able to get off the ground at around 2kg, but you'll have lost almost any ability to control attitude or position.</p> <p>The winch is built from a servo and a fishing reel. Happy to provide more details.</p> <p><a href="https://youtu.be/wGwa2DG2Cs0" rel="nofollow">https://youtu.be/wGwa2DG2Cs0</a></p>
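<p>A quick sizing sketch based on the 50%-throttle rule of thumb above; every number except the burrito mass is an assumption, not a spec of the X650 or any other airframe:</p> <pre><code>n_motors = 4
thrust_per_motor = 1.0      # kg of thrust at full throttle (assumed)
craft_mass = 1.2            # kg: frame, battery, electronics, winch (assumed)
burrito_mass = 0.65         # kg, from the figure quoted in the question

max_thrust = n_motors * thrust_per_motor       # 4.0 kg total
comfortable_mass = max_thrust / 2.0            # hover at ~50% throttle -> 2.0 kg
payload = comfortable_mass - craft_mass        # 0.8 kg of "comfortable" payload

print(payload &gt;= burrito_mass)              # True: one quadcopter is enough
</code></pre>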
7245
2015-05-13T13:06:05.360
|quadcopter|distributed-systems|
<p>I am investigating a possible business opportunity in which quadcopters perform high-precision nutritional delivery via a burrito medium. I have never used a burrito, but I have read on the internet that they typically weigh 600-700 grams (1). This is much too heavy for commercially available platforms.</p> <p>How many quadcopters would it take to lift a single burrito?</p> <p>(1): <a href="https://www.facebook.com/chipotle/posts/390817319252" rel="nofollow">https://www.facebook.com/chipotle/posts/390817319252</a></p>
How many quadcopters would it take to lift a burrito?
<p>Using a smart-phone may have advantages, but anything it offers can also be accomplished with a traditional flight controller.</p> <p><strong>Some advantages</strong></p> <ul> <li>Easier to implement</li> <li>Relatively cheap</li> <li>Relatively user-friendly</li> <li>Usually a built-in way to recover the craft in the event of a flyaway (caused by whatever is doing the flight control)</li> </ul> <p><strong>Some disadvantages</strong></p> <ul> <li>Limited to the operating system's abilities and quirks</li> <li>Without rooting the device you may not be able to kill a process that is not responding</li> <li>Not as customizable as an open-source flight controller</li> <li>Latency caused by running a full operating system</li> <li>Smart-phones are fragile and can easily be damaged in a crash</li> </ul> <p><a href="http://oddcopter.com/flight-controllers/" rel="nofollow">Here</a> is a list of common flight controllers and their capabilities.</p>
7249
2015-05-13T16:15:00.170
|quadcopter|
<p>iPhone contains</p> <ul> <li>Gyroscope</li> <li>GPS</li> <li>Two photo and video cameras</li> <li>Self-sufficient battery that outlives the motor battery</li> <li>Wifi</li> <li>Backup connectivity (cellular, bluetooth)</li> <li>Programmable computer</li> <li>Real-time image processing capabilities and face detection</li> <li>General purpose IO (with something like <a href="http://redpark.com/lightning-serial-cable-l2-db9v/" rel="nofollow">this</a>)</li> </ul> <p>and old models are available very cheap.</p> <p>What is the main benefit of having a separate dedicated flight controller and camera on hobbyist rotorcraft rather than a general purpose device like the iPhone?</p>
Quadcopter - is iPhone the ultimate flight controller?
<p>Here are a few ideas:</p> <ul> <li><p>Insert two bare wires as probes and measure the resistance between them. The resistance should drop dramatically once the liquid touches both probes.</p></li> <li><p>Insert a sonar into one end so that it points directly down the tube. This method should give you a simple measurement of how far the surface of the fluid is from the sensor. I'm not sure how well a sonar works inside a tube, as there will be echoes and such to consider, but some experimentation might lead to good results.</p></li> <li><p>Seal the tube at one end with a cork and put a barometer on the inside. The rising level of fluid will compress the air and register different "altitudes" on the barometer.</p></li> <li><p>Maybe it's possible to use an IR beam through the tube if you can apply a dye to the fluid.</p></li> </ul>
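<p>Whichever of these sensors you pick, the detection logic on the Pi/Arduino side reduces to a threshold with a little hysteresis, so the output doesn't chatter when the reading sits near the trigger level. A hardware-agnostic sketch, with arbitrary ADC counts standing in for the real readings:</p> <pre><code>def make_level_detector(on_threshold, off_threshold):
    """Return an update(reading) function that latches above/below the level."""
    state = {'above': False}
    def update(reading):
        if not state['above'] and reading &gt;= on_threshold:
            state['above'] = True
        elif state['above'] and reading &lt;= off_threshold:
            state['above'] = False
        return state['above']
    return update

detect = make_level_detector(on_threshold=520, off_threshold=480)  # arbitrary counts
for reading in (100, 300, 510, 525, 500, 470):
    print(reading, detect(reading))
</code></pre>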
7256
2015-05-14T04:19:23.603
|arduino|sensors|raspberry-pi|laser|
<p>I'm looking to build a sensor which will detect the level of liquid in a tube.</p> <p>It does not have to be precisely accurate, just detect whether the level is approximately above a certain height.</p> <p><img src="https://i.stack.imgur.com/4USD9.jpg" alt="tube"></p> <p>The liquid level can be seen in the red oval.</p> <p><img src="https://i.stack.imgur.com/VwvhD.jpg" alt="level"></p> <p>I thought about monitoring this with a webcam and using OpenCV to detect the liquid level, but this seemed a bit overkill, especially if I have to have a dedicated PC to process the images.</p> <p>Surely there's a simpler solution.</p> <p>Perhaps a component I can attach to a Raspberry Pi or Arduino board ... </p> <p>I'm not very familiar with laser sensors, so I don't know what is suitable. </p> <p>As long as it's reliable ...</p> <p><strong>EDIT</strong></p> <p>I should add that the tube contains toluene, which is flammable, and it is vacuum sealed, so we can't just drill into it. Some kind of optical/laser sensor might be OK, as long as it can recognise a clear liquid. </p>
Sensing the level of a liquid in a tube
<p>It is not uncommon to have nested (or hierarchical) control loops in a controller.</p> <p>Usually the two loops control two different variables, not the same one. Furthermore, it is assumed that the inner loop is faster than the outer one. If you write the differential equations of the plant, you can easily understand why: </p> <p>$$ \dot{x} = f(x, \xi)$$ $$ \dot{\xi} = g(x, \xi, u)$$</p> <p>You have the control $u$ which (directly) influences only the evolution of $\xi$ but you want to control $x$; you can think of designing a two-loop control system: an outer loop controlling $x$ and a second (inner) loop controlling $\xi$. The assumption here is that $\xi$ will converge to the desired value faster than $x$.</p> <p>This should answer your questions 1, 2 and 3. As for the 4th one, it probably depends on how you implemented the Kalman filter.</p>
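<p>To see the time-scale separation numerically, here is a toy cascade (not from the original answer) where the plant is a double integrator, $\dot{x}=\xi$ and $\dot{\xi}=u$: the outer loop turns the error in $x$ into a desired $\xi$, and the faster inner loop drives $\xi$ to that value. Gains and loop rates are arbitrary:</p> <pre><code>dt = 0.001
x, xi = 0.0, 0.0                # plant state: xdot = xi, xidot = u
x_des = 1.0
k_outer, k_inner = 2.0, 40.0    # inner loop deliberately much faster
xi_des = 0.0

for step in range(5000):        # 5 s of simulated time
    if step % 10 == 0:          # outer loop at 100 Hz: x error -> desired xi
        xi_des = k_outer * (x_des - x)
    u = k_inner * (xi_des - xi) # inner loop at 1 kHz: xi error -> control u
    xi += dt * u
    x += dt * xi

print(round(x, 3))              # settles close to the target of 1.0
</code></pre> <p>If the inner gain is made too small (i.e. the inner loop is no longer faster than the outer one), the cascade degrades — which is the intuition behind running the attitude loop faster than the position loop.</p>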
7265
2015-05-15T00:09:24.500
|pid|
<p>I'm in charge of a module that controls the smoothness with which a platform moves; the platform already implements a closed-loop control on its own, but that firmware is closed and I do not have access to the source code. I am therefore asked to implement a closed-loop control on top of that PID, in a higher layer, above a module that already implements a closed-loop control, so I have several questions:</p> <ol> <li>Is it conceptually correct to implement a PID control in an upper layer, on top of a module that already implements a closed-loop control?</li> <li>What features may be lost in the lower closed loop?</li> <li>Could the lower closed-loop control be negatively influenced by the PID implemented in the top layer?</li> <li>Could estimating the angular speed, yaw and pitch from the motor positions using Kalman filters produce values too far from the actual reported values?</li> </ol>
PID over another module that implements a PID control?