<p>If your number <em>n</em> is small enough then there are simpler and more durable ways to hook things up.</p> <p>Car steering wheel style: these wheels turn about 5 times from steering lock to lock. I understand they use coiled flat flex cable that coils around the steering shaft, with 90-degree bends/turns on the ends to mate with connectors on the steering wheel and on the static part of the steering column. If you have the volume, this is neat, tidy and simple. Try scavenging flex cables from inkjet printer heads for some low-friction stuff.</p> <p>If you have a long distance between the rotating part and the farthest non-rotating part, try stuffing excess cable in a tube between the two places. No special connectors necessary. Some cables can take a lot of turns before kinking to the point of damage if set up this way. I have heard this is the easiest way to get power from a wind generator down a pole to the ground; it works because random wind patterns usually cancel the twists out, and regular maintenance takes care of biases.</p> <p>Both of these only work for small <em>n</em> with little construction fuss.</p> <p>Have fun building your toys!</p> <p>Albert.</p>
4857
2014-10-29T03:48:43.470
|motor|chassis|
<p>A motor needs to spin n*360 degrees. On top of the motor there is a distance sensor which scans the room, i.e. a lidar. What options do I have for implementing continuous rotation while having cables which are in the way?</p>
Continuous rotation with cables
<p>Motors stalling can draw many amps, as you have found out. In the electrical industry three types of overload cut-out are available: fuse, trip, and auto-resetting trip.</p> <p>A more subtle way is to monitor the motor amps and, if an overload happens, limit the power to the motor, for example with current limiters.</p> <p>If the motor windings have them (very rare), thermistors are incorporated into the motor windings. If the windings get hot the trip operates, but when the windings cool the motor runs again. Nice if your saw jams and gets free: try the motor, and try again. When the windings cool down the saw can cut again. Thermistors can also be fastened to the battery to give feedback data on temperature.</p> <p>There may be a way to give a few seconds of power boost: a very large capacitor on the main power bus, say a full farad electrolytic. However, the motor control transistors can fry/burn if the motor is jammed. Big power audio amplifiers in cars use these large caps.</p>
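<p>To illustrate the monitor-and-limit idea in software, here is a minimal Python sketch; <code>read_motor_current()</code> and <code>set_motor_power()</code> are hypothetical placeholders for whatever current sensor and motor driver interface you actually have.</p> <pre><code># Minimal software current limiter (a sketch, not a tested design).
# read_motor_current() and set_motor_power() are hypothetical helpers.
import time

CURRENT_LIMIT = 24.0   # amps, from the motor datasheet
TRIP_COOLDOWN = 2.0    # seconds to wait before retrying, like an auto-resetting trip

power = 1.0            # commanded power, 0.0 .. 1.0

while True:
    if read_motor_current() &gt; CURRENT_LIMIT:
        set_motor_power(0.0)       # cut power on overload
        time.sleep(TRIP_COOLDOWN)  # let the motor cool, then retry
    else:
        set_motor_power(power)
    time.sleep(0.01)               # ~100 Hz monitoring loop
</code></pre>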
4866
2014-10-29T13:26:35.813
|brushless-motor|esc|
<p>Ok, apologies to those who think my questions are not direct enough, as I got warned about this. I am really new to this and I will try to keep this one direct enough for the forum.</p> <p>For obvious reasons I cannot test this out without damaging something, so I would prefer to learn from the experience of others.</p> <p>I have a "Turnigy Trackstar 1/10 17.0T 2400KV Brushless" motor which I will be using for my weapon (a spinning disk).</p> <p>Relevant specs of the motor are:</p> <ul> <li>Kv: 2400</li> <li>Max voltage: 21V</li> <li>Max current: 24A</li> <li>Watts: 550</li> <li>Resistance: 0.0442 Ohm</li> <li>Max RPM: 50000</li> </ul> <p>I will use this with an ESC with the following specs:</p> <ul> <li>Constant current: 30A</li> <li>Burst current: 40A</li> <li>Battery: 2-4S LiPoly / 5-12S NiXX</li> <li>BEC: 5V / 3A</li> <li>Motor type: Sensorless brushless</li> <li>Size: 54 x 26 x 11mm</li> <li>Weight: 32g</li> <li>Programming functions: Battery type: LiPo / NiXX; Brake: On / Off; Voltage protection: Low / Mid / High; Protection mode: Reduce power / Cut off power; Timing: Auto / High / Low; Startup: Fast / Normal / Soft; PWM frequency: 8k / 16k; Helicopter mode: Off / 5sec / 15sec (start-up delay)</li> </ul> <p>If the motor stalls, I know the current draw will increase drastically. So my questions are:</p> <ol> <li><p>In the case that the motor stalls (my disk gets stuck in the opponent, etc.), what gets damaged? The motor, the ESC, or both? And how long before this happens?</p></li> <li><p>Would I have time to turn the R/C switch off before irreversible damage occurs (given that I am obviously observing the action)? Notes: I will be using an on/off switch on the R/C to just turn the motor on and off (so no proportional speed increase, etc.), plus I will be using an 11.1-volt battery even though the motor is rated for a 21-volt maximum.</p></li> </ol> <p>Thanks.</p>
What happens when a brushless motor stalls?
<p>For the most common definition of roll, pitch, yaw used in UAVs, when pitch=90deg you encounter what's called <a href="http://en.wikipedia.org/wiki/Gimbal_lock" rel="nofollow">gimbal lock</a>. Basically, when pitch=90deg you have infinitely many combinations of roll and yaw that will give you the same physical result.</p>
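<p>A quick numerical illustration (a sketch using numpy, assuming the common aerospace Z-Y-X yaw-pitch-roll convention, which is an assumption about the asker's setup): at pitch = 90deg the rotation matrix depends only on the difference between roll and yaw, so for example (roll=30, yaw=10) and (roll=20, yaw=0) produce the identical attitude.</p> <pre><code># Numerically demonstrate gimbal lock at pitch = 90 deg for the
# Z-Y-X (yaw-pitch-roll) convention: R = Rz(yaw) * Ry(pitch) * Rx(roll).
import numpy as np

def rot(yaw, pitch, roll):
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz.dot(Ry).dot(Rx)

d = np.deg2rad
R1 = rot(d(10), d(90), d(30))  # yaw=10, pitch=90, roll=30
R2 = rot(d(0), d(90), d(20))   # yaw=0,  pitch=90, roll=20
print(np.allclose(R1, R2))     # True: only (roll - yaw) is observable
</code></pre>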
4872
2014-10-30T03:44:56.003
|quadcopter|
<p>My problem is that when I hold my sensor (MPU9150) so that the +y axis is downward and the y axis is in the horizontal plane, I expect pitch = 90 degrees and roll = 0 degrees, but actually pitch = 90 degrees and roll = 160 degrees. However, when roll = 90 degrees, pitch = 0 degrees (that is what I expect). Do you know what causes my problem? Thanks</p>
Roll, Pitch Calculation Problem!
<p>There are many things that humans can do that computers have never been able to do. Every year people manage to get computers and robots to do something they have never done before. Those builders and programmers are usually surprised at how much code and hardware it required to get it to work, and how long it took to build and debug that code and hardware.</p> <blockquote> <p>is it theoretically possible for a human brain ... to comprehend itself in its deepest details and produce an artificial intelligence which is exactly identical?</p> </blockquote> <p>If you are asking whether it's possible for a single human brain, simply through introspection, to figure out enough about what's going on inside to build an artificial intelligence that is exactly like that same single human brain (at some point in time), the answer is no. We now know that human brains do a lot of work in the visual system and other areas that is completely inaccessible to conscious introspection.</p> <p>However, the reason we know this is that there are other ways to learn something about what is going on in there.</p> <blockquote> <p>If not then, how close can we get?</p> </blockquote> <p>As of 2014, no one knows.</p> <p>Many of the tasks that were once thought of as "higher-level" thinking can now be done by computers -- symbolic mathematics, chess, music composition, writing stories <a href="http://homes.chass.utoronto.ca/~sousa/BRUTUS_rev.html" rel="nofollow noreferrer">(a)</a> <a href="http://www.sciencedaily.com/releases/1998/03/980312075430.htm" rel="nofollow noreferrer">(b)</a>, etc.</p> <p>And yet -- many other tasks that were once thought of as "simple" still cannot be reliably done by computers -- distinguishing photos of cats from photos of dogs <a href="http://en.wikipedia.org/wiki/CAPTCHA" rel="nofollow noreferrer">(c)</a>, much less remembering a human's name when seeing what that human's face looks like today (slightly older than at any other time that human has ever been seen), etc.</p> <p>I personally suspect that it is unlikely humans will build a machine that "thinks like a human", and even less likely that humans will build a machine that thinks like some particular human.</p> <p>And yet I think it is very important for researchers to continue to <em>try</em> to do that, because -- whether they eventually discover some insurmountable barrier, or else (against my expectations) build a machine that actually does think like a human -- either way, I think we will find out some very interesting things about what it means to be human.</p> <p>As a secondary benefit, it is a goal that is highly likely to, as a side effect, produce machines that will do other very useful things, long before the question is resolved one way or the other.</p> <p><img src="https://i.stack.imgur.com/s74yQ.jpg" alt="enter image description here"></p> <p>a small part of <a href="http://debategraph.org/Stream.aspx?nid=75&amp;vt=bubble&amp;dc=focus" rel="nofollow noreferrer">"Navigating 50 years of philosophical debate -- Robert Horn's seven 'Can Computers Think?' debate maps"</a></p>
4890
2014-11-02T14:16:46.433
|artificial-intelligence|
<p>I'm reading a book about a hypothetical economy in which robots work for us because they eventually became able to do everything we do ("Our work here is done: visions of a robot economy" by Nesta).</p> <p>I was wondering though: is it theoretically possible for a human brain (an extremely complex artificial intelligence, the way I and many others see it) to comprehend itself in its deepest details and produce an artificial intelligence which is exactly identical? It sounds unlikely. If not, then how close can we get?</p> <p>This is not a philosophical question; the ideal answer would be a rigorous demonstration based on metrics simplifying the problem enough to be able to answer it. However, objective answers with valid arguments are always interesting, too.</p>
Will AIs ever be as advanced as the human brain?
<p>The paper <a href="http://conf-scoop.org/ACE-2013/13_Reza2_ACE.pdf" rel="nofollow noreferrer"><em>Controlling of Quadrotor UAV Using a Fuzzy System for Tuning the PID Gains in Hovering Mode</em> by E. Abbasi, M. J. Mahjoob</a> explains how to tune PID gains with fuzzy logic. You can find many papers about singleton tuning, but this paper shows a fully fuzzy controller:</p> <ol> <li>Find PID gains with Ziegler-Nichols (or another technique).</li> <li>Create a fuzzy PID gain changer which has as inputs the error (e) and the change in error (de).</li> <li><p>Define fuzzification graphs for the inputs and outputs. Define the limits (you can also change the shape) like</p> <p><code>name [min,peak,max]</code></p> <p><code>very small [-1,-1,-0.6], small [-1,-0.6,0], medium [-0.6,0,0.6], big [0,0.6,1], very big [0.6,1,1]</code></p></li> <li><p>Create rules like</p> <p><code>if **e** and/or **de** is *fuzzyname* (small, big, etc.) then KI is fuzzyname (small, big, etc.)</code></p></li> <li><p>Defuzzify the result.</p></li> </ol> <p>You can use tools like the MATLAB Fuzzy Toolbox or Python's scikit-fuzzy.</p> <p>The tipping problem can be reused as a fuzzy PID: just treat quality as e and service as de, and change the tip output to KP/KI/KD (there is an example of the tipping problem: <em>python scikit fuzzy - Fuzzy Control Systems: The Tipping Problem</em>).</p> <p>Note 1: The error ranges should be well defined, so you must log the error and the change in error. The limits must span the max and min of these values.</p> <p>Note 2: An output value range between -1 and 1 works well.</p> <p><strong>An example code for Fuzzy-PID in Python (Python 2) is here:</strong></p> <pre><code># -*- coding: utf-8 -*-
"""
@author: acs
"""
import skfuzzy as fuzz
from skfuzzy import control as ctrl
import acspid
import numpy as np
from matplotlib import pyplot as plt

plt.ion()
fig = plt.figure()

ferr = ctrl.Antecedent(np.arange(-150, 150, 1), 'ferr')
fder = ctrl.Antecedent(np.arange(-150, 150, 1), 'fder')
fout = ctrl.Consequent(np.arange(-1, 1, 0.01), 'fout')

ferr.automf(5)
fder.automf(5)
fout.automf(5)

fout['poor'] = fuzz.trimf(fout.universe, [-1, -1, -0.5])
fout['mediocre'] = fuzz.trimf(fout.universe, [-1, -0.5, 0])
fout['average'] = fuzz.trimf(fout.universe, [-0.1, 0, 0.1])
fout['decent'] = fuzz.trimf(fout.universe, [0, 0.5, 2])
fout['good'] = fuzz.trimf(fout.universe, [0.5, 1, 1])

fout.view()
ferr.view()
fder.view()
plt.show()
plt.pause(0.0001)

# membership names generated by automf(5): 'poor', 'mediocre', 'average', 'decent', 'good'
rules = []
rules.append(ctrl.Rule(ferr['average'] | fder['average'], fout['average']))
rules.append(ctrl.Rule(ferr['decent'] | fder['decent'], fout['decent']))
rules.append(ctrl.Rule(ferr['good'] | fder['good'], fout['good']))
rules.append(ctrl.Rule(ferr['mediocre'] | fder['mediocre'], fout['mediocre']))
rules.append(ctrl.Rule(ferr['poor'] | fder['poor'], fout['poor']))

fctrl = ctrl.ControlSystem(rules)
fpid = ctrl.ControlSystemSimulation(fctrl)

pid = acspid.pidcont(1.2, 0.02, 0.01, 5, -5)    # fuzzy-tuned PID
pid2 = acspid.pidcont(1.2, 0.02, 0.01, 5, -5)   # plain PID for comparison

# build a piecewise-constant random target signal
d = np.zeros(10)
for i in range(10):
    d = np.append(d, np.ones(10) * np.random.uniform(-100, 100, 1))
print len(d)

m = [0.0]    # fuzzy-tuned PID output history
m2 = [0.0]   # plain PID output history
e = []
de = []
e2 = []
de2 = []

kp = pid.kp
kd = pid.kd
ki = pid.ki

for i in range(len(d)):
    pid.setDesired(d[i])
    # the change-in-error attribute is called eder in the pidcont class below
    print "e:", pid.error, "\t de:", pid.eder
    fpid.input['ferr'] = pid.error
    fpid.input['fder'] = pid.eder
    fpid.compute()
    newpid = np.abs(fpid.output['fout'])
    print "PID:", newpid * pid.kp, "\t", newpid * pid.ki, "\t", newpid * pid.kd
    pid.setGains(newpid * kp, newpid * ki, newpid * kd)
    newm = m[-1] + pid.update(m[-1])
    print i, m[-1], newm
    m.append(newm)
    e.append(pid.error)
    de.append(pid.eder)

    pid2.setDesired(d[i])
    newm2 = m2[-1] + pid2.update(m2[-1])
    m2.append(newm2)
    e2.append(pid2.error)
    de2.append(pid2.eder)

    ax1 = plt.subplot(2, 1, 1)
    ax1.set_xlim([0, len(d)])
    ax1.set_ylim([-200, 200])
    plt.grid()
    plt.plot(range(len(m)), m, linewidth=5.0)
    plt.plot(range(len(m2)), m2, linewidth=2.0)
    plt.plot(range(len(d)), d, 'g--')
    plt.title('Status')
    ax2 = plt.subplot(2, 1, 2)
    ax2.set_xlim([0, 50])
    ax2.set_ylim([-100, 100])
    plt.plot(range(len(e)), e, 'r-', range(len(de)), de, 'g-')
    plt.grid()
    plt.title('e and ed')
    # plt.draw()
    plt.show()
    plt.pause(0.0001)
</code></pre> <p><strong>Fuzzy input membership functions:</strong> <a href="https://i.stack.imgur.com/Kbo0E.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Kbo0E.png" alt="enter image description here"></a></p> <p><strong>Fuzzy output membership function:</strong> <a href="https://i.stack.imgur.com/HLtbr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HLtbr.png" alt="enter image description here"></a></p> <p><strong>Status:</strong> In the status plot, the dashed line is the target value, the red line is <strong>PID</strong> and the green line is <strong>Fuzzy-PID</strong>.</p> <p>Here is the acspid class:</p> <pre><code>class pidcont():
    def __init__(self, P, I, D, pmax, pmin):
        self.kp = P
        self.kd = D
        self.ki = I
        self.pidmax = pmax
        self.pidmin = pmin
        self.desired = 0.0
        self.error = 0.0
        self.elast = 0.0
        self.esum = 0.0
        self.eder = 0.0

    def update(self, current):
        self.error = self.desired - current
        self.eder = self.error - self.elast
        self.elast = self.error
        self.esum = self.esum + self.error
        # anti-windup: clamp the integral accumulator
        if self.esum &gt; self.pidmax:
            self.esum = self.pidmax
        elif self.esum &lt; self.pidmin:
            self.esum = self.pidmin
        self.P = self.kp * self.error
        self.D = self.kd * self.eder
        self.I = self.ki * self.esum
        pid = self.P + self.I + self.D
        return pid

    def setDesired(self, d):
        self.desired = d

    def setGains(self, P, I, D):
        self.kp = P
        self.kd = D
        self.ki = I

    def setLimits(self, pmax, pmin):
        self.pidmax = pmax
        self.pidmin = pmin
</code></pre> <p><a href="https://i.stack.imgur.com/SN3pq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SN3pq.png" alt="enter image description here"></a></p>
4901
2014-11-04T07:22:30.183
|control|pid|tuning|
<p>I previously used the Ziegler-Nichols method to tune the parameters of my PID controller to control my robot's position. I then implemented fuzzy logic for self-tuning the parameters. I have two inputs to the fuzzy logic controller: one is the position error and the other is the error rate.</p> <p>I know that my problem might be due to not understanding the effect of each parameter very well.</p> <p>The problem is that I am confused about setting up the fuzzy rules. When do I need to use high and low values for Kp, Kd and Ki to achieve the best tuning? Is it that Kp must be very low when the error is almost zero (hence, the robot is at the desired position)? The same question applies to all three parameters.</p>
How to tune the PID parameters using Fuzzy Logic?
<p>I suggest the worm gear. In our projects we use worm gears for their reliable locking characteristics without draining the batteries. If your only concern is the operation time, you can simply add one more gear to reduce the torque and increase the speed. In that case, you can have the benefits of a reliable locking system and faster operation. It will come at the expense of needing a bigger motor and battery, though. Keep in mind that vibrations can have a very bad effect on your friction-based lock.</p> <p>To detect the actual position of your derailleur, I suggest using a <a href="http://en.wikipedia.org/wiki/Rotary_encoder" rel="nofollow">rotary encoder</a>. I don't think you need a second feedback sensor on your gear; it would just overcomplicate the problem.</p>
4903
2014-11-04T12:40:48.647
|brushless-motor|stepper-motor|
<p>I am planning to control my bicycle derailleur with an electronic circuit, the advantages being multiple but not the topic of this question.</p> <p>The actuation would be performed by placing the assembly very close to the derailleur (but not on it), winding the usual steel cable around a spool placed on the axis of a gear, and using a motor to turn the gear.</p> <p>This question concerns the alternatives for the spool's self-locking mechanism (and eventually the kind of motor to use).</p> <p>In the literature I found <a href="http://nabilt.github.io/diy_eshift/" rel="nofollow">http://nabilt.github.io/diy_eshift/</a> and other similar projects that directly modify the derailleur with a stepper motor, and are then forced to keep the stepper motor powered up all the time to maintain some torque on the shaft. I consider this inefficient, therefore I require a self-locking system to be able to remove power.</p> <p>I came up with two ideas:</p> <ul> <li>a worm gear operated by a DC motor, where the steel cable is wound around the gear. This system is self-locking or almost self-locking, according to the gear ratio: the gear cannot (easily) drive the worm.</li> <li>a motor driving normal gears with an appropriate reduction factor, but with an additional friction element, whose friction force is greater than the strength of the spring mounted on the derailleur (sorry if I mixed up the technical terms). This is what normal bicycles already have: the friction along the cable and in the element placed on the handle is high and keeps the derailleur in place.</li> </ul> <p>Both systems would be assisted by a position-sensitive element (a trimmer?) to detect the actual position of the gear and/or of the derailleur, all configured in a closed feedback loop.</p> <p>I don't consider additional options for the gear such as this one: <a href="http://www.agma.org/assets/uploads/publications/10FTM17_Kapelevich.pdf" rel="nofollow">http://www.agma.org/assets/uploads/publications/10FTM17_Kapelevich.pdf</a>, which consists of parallel-axis gears whose teeth are shaped in a manner that achieves self-locking without the need for a low-efficiency worm system.</p> <p>From my point of view, I cannot see any clear advantage of a worm gear versus friction except for:</p> <ul> <li>the worm gear may allow me to build a more compact assembly, thanks to the two axes being perpendicular;</li> <li>speed vs torque: worm gears reduce the torque requirements, but the motor has to spin a lot, and I cannot wait 3 seconds for each gear change.</li> </ul> <p>Concerning the choice of the motor type (this is not the main question though), I think that:</p> <ul> <li>a worm gear allows me to easily use a DC motor, since torque requirements are low and I don't need to detect the position of the shaft. Moreover, DC motors increase torque with decreasing speed, while stepper motors have their maximum torque defined by the stepping frequency;</li> <li>DC motors are more compact and cheaper, important if I decide to offer this assembly as a kit and not as a unique, personal prototype;</li> <li>I am working with a 5V supply, and I fear most easy-to-get DC motors (old printers, scrap electronics) work on 12V, with a significant reduction of the available torque when operated at 5V.</li> </ul> <p>I was looking for a "mechanics" section on Stack Exchange but I couldn't find one, so I opted for Robotics instead of Electronics.</p>
Self-locking actuator: Friction versus worm gear
<p>If I understand the problem correctly, this is very straightforward. Given a set of grid points that can be either filled or unfilled, a point of origin, and a set of scan angles, you want to find the distance of the closest filled point for any scan angle.</p> <p>To do this optimally, you will need to find an algorithm that can give you the full set of points that would be covered by your scan angles, up to distance n. For now, we can skip that -- the rest of the algorithm is the same.</p> <pre><code>import math

x_o, y_o = get_origin_point()
theta_o = get_center_angle_of_desired_scan_direction()  # degrees
n = get_max_sensor_distance()

sweep_angle = 130
beam_width = 2
num_bins = sweep_angle // beam_width

# initialize the array of scanned lines to the maximum sensor distance
scan_results = [n] * num_bins

# iterate over all points in the grid
for (x, y) in all_points:
    if not point_is_filled(x, y):
        continue

    # distance from scanner
    r = math.hypot(x - x_o, y - y_o)
    if n &lt; r:
        continue

    # the angle of this point relative to the scanner, in degrees
    theta = math.degrees(math.atan2(y - y_o, x - x_o))

    # the corresponding scan-line bin this angle falls in
    bin_index = int((theta - (theta_o - sweep_angle / 2)) / beam_width)
    if bin_index &lt; 0 or num_bins &lt;= bin_index:
        continue

    # update the value if this point is closer than what was already sensed
    scan_results[bin_index] = min(scan_results[bin_index], r)
</code></pre>
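<p>For the per-beam "covered points" computation that was skipped above, one simple option is to march along each beam until it hits a filled pixel. A minimal sketch, reusing the same hypothetical <code>point_is_filled()</code> helper and the variables defined above:</p> <pre><code>import math

def cast_beam(x_o, y_o, angle_deg, n, step=1.0):
    # march along one beam; return the range to the first filled cell
    theta = math.radians(angle_deg)
    r = 0.0
    while r &lt; n:
        x = x_o + r * math.cos(theta)
        y = y_o + r * math.sin(theta)
        if point_is_filled(int(round(x)), int(round(y))):
            return r
        r += step
    return n

# one beam every 2 degrees across the 130-degree sweep
scan = [cast_beam(x_o, y_o, theta_o - 64 + 2 * b, n) for b in range(65)]
</code></pre>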
4914
2014-11-05T08:30:21.340
|python|mapping|simulation|
<p>I am a complete beginner at this.</p> <p>I have an arbitrary map as a PNG file (black and white only). I'm supposed to draw a robot at a position (x, y), and then I'm supposed to simulate the robot taking a laser-scan measurement with an opening angle of 130 degrees, where the separation between scan lines is 2 degrees. So, obviously, each laser scan line is supposed to be obstructed by black pixels in the PNG map, while the other lines just keep on going for some distance n.</p> <p>I've attempted drawing scan lines around the object, but I'm running into issues when it comes to getting each line to be obstructed by the black lines. To my mind, it requires a completely different approach.</p> <p>Any helpful advice on how to get started would be greatly appreciated.</p>
Robotics simulation from PNG map
<p>Since you don't need any position control (i.e., it needs to be either latched or unlatched), the simplest option that will serve your needs is a solenoid. </p> <p>All of the other mechanisms you listed would require some degree of position control, not to mention the extra complexity.</p>
4928
2014-11-07T00:07:12.160
|mechanism|
<p>A friend and I are hacking together a Nespresso coffee pod dispenser. We have our hearts set on a particular design after thinking up countless ways of dispensing a single pod.</p> <p>The design has a single flavour of pods in a vertical tube, so they tend to fall down. One or more latches around the base of the tube stop any pods from falling out. Releasing the latch for 45ms will allow one pod through (a 10mm fall, well past the lip of the pod) while catching the next one.</p> <p>The latch is the problem component. I haven't yet found a suitable product off the shelf. Ideally, the solution would be compact and cheap (&lt; $5).</p> <p>Here are some of the latch ideas to date (most of which are considered linear in motion):</p> <ol> <li>Solenoid - seems overkill, and they tend to be &gt; $5 each</li> <li>Ultrasonic motor - can't find any</li> <li>Linear actuator - usually around $50 and quite bulky</li> <li>Piezoelectric actuator - mostly tuned for nm-scale precision, and hard to come by</li> <li>Rotating disk with a release notch, driven by a stepper motor - still &gt; $5 and moderately bulky</li> <li>Rotating latch with a string attached to a rack and pinion powered by an electric motor - I don't think it's a simple enough solution</li> <li>Rotating cam - how a gumball machine works (I suspect). (This was also suggested in an answer, but would involve both a mechanical and an electric motor component, so it is not as simple as option [5])</li> </ol> <p>I have a 3D printer, so I am open to mechanical solutions - a custom latch with a crude electromagnet, for example.</p> <p><img src="https://i.stack.imgur.com/Jo6qG.png" alt="enter image description here"></p> <p>Note the desired size of the latch (yellow), holding pods (orange) in a tube (black). Yes, motors can work, but they would be quite bulky. I'm not after the obvious solution, but a clever one, or one which finds a suitable product.</p> <p>(I understand that with only one latch on one side, the pods will not sit perfectly vertical, and the latch would need to be higher up.)</p>
Simplest and cheapest way to create a spring back latch
<p>First, regarding why “the same switch can be rated for different amperages”, the current-rating of a switch depends on switched voltage and current because switching a high voltage or current typically produces a more energetic arc than does switching a low voltage or current.</p> <p>The switch you referred to, with its “Legion EPS21 10A 250VAC TV-5 8A/128A 250V μT85” markings, is rated at switching a maximum of 10A in a 250VAC circuit, and is rated at 8A normal current, 128A surge, and is rated for use at temperatures no higher than 85° C. </p> <p>See eg <a href="http://www.alps.com/prod/info/E/HTML/Switch/Power/SDKZ/SDKZ_common.html" rel="nofollow">alps.com</a> for a table of UL, CSA, and IEC Safety Standards markings. The 8A/128A mark is a VDE, SEMKO, or BS (ie, German, Swedish, or British) Load Life Test rating, comparable to the UL or CSA (ie, American or Canadian) TV8 Load Life Test rating for DPST switches.</p> <p>If you used that switch in a 12VDC circuit carrying 25A, you'd have no guarantees about how it would work. It might fail open or closed, after working for 1 or 1000 operations; it might catch fire; it might work ok.</p> <p>If you operated it during a motor-stall event to switch a hundred or so amps, I'd expect it to fail after one or two such uses, with contacts either welded together or melted off.</p> <p>The 18 AWG wires attached to it are rated at 7 to 10A. They wouldn't burn out carrying 25A, but would run warm. If you consult a wire-size table (eg at <a href="http://www.powerstream.com/Wire_Size.htm" rel="nofollow">powerstream.com</a> or <a href="http://www.stayonline.com/reference-circuit-ampacity.aspx" rel="nofollow">stayonline.com</a>) you will see that 12 AWG is a workable size of wire for 25A, while 10 AWG would provide a bigger safety margin.</p> <p>Using a 12V contactor or automotive relay (controlled by a small switch like the example) is an alternative to using a big switch. (Eg, see <a href="https://www.google.com/search?hl=en&amp;site=imghp&amp;tbm=isch&amp;source=hp&amp;biw=1169&amp;bih=612&amp;oq=+12V+contactor+or+automotive+relay+&amp;gs_l=img.12...1826.1826.0.3103.1.1.0.0.0.0.75.75.1.1.0....0...1ac..58.img..1.0.0.qfanuMsiQpo&amp;q=12V%20contactor%20or%20automotive%20relay" rel="nofollow">contactor and relay pictures</a> at Google Images.)</p> <p><em>Edit:</em> If a surge or stall is brief (eg a few hundredths of a second; I don't know the actual UL specs on surge duration) and the switch is not operated during it, the switch should survive. But if the switch is opened during a motor stall, arcing will occur. Typically, most of the energy inductively stored in the motor windings will dump into the arc, and the switch contacts will vaporize.</p>
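<p>As a rough worked example of why 18 AWG runs warm at 25A (assuming the standard figure of about 0.021 ohms per metre for 18 AWG copper):</p> <pre><code># Back-of-envelope heating of 18 AWG wire at 25 A.
I = 25.0         # amps
R_per_m = 0.021  # ohms per metre, standard value for 18 AWG copper
P = I ** 2 * R_per_m
print(P)         # ~13 W dissipated per metre of wire: warm, as noted above
</code></pre>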
4933
2014-11-08T02:29:03.630
|current|
<p>I scavenged a 4 terminal power switch (Legion EPS 21) from some electronic device (don't remember what it was) and it has the following markings on it:</p> <p>Legion EPS21 10A 250VAC TV-5 8A/128A 250V &micro;T85 </p> <p><img src="https://i.imgur.com/TNYkzqQ.jpg" alt="Legion EPS21"></p> <p>I would like to use this as the main power switch for my robot which will have nothing higher than 12 volts and a "normal" total Amperage (i.e. with all the motors in it running) of around 25 Amps, but of course if a motor stalls then the current will rise much higher. </p> <p>First of all I cannot understand how the same switch can be rated for different amperages. I cannot find any datasheets for this switch that might help nor any reference to the standards (TV-5 and &micro;T85). So I would like to know if this can handle 128 Amps at 12 Volts. If it helps at all, the wires currently connected to the terminals are quite thick and have "18 AWG 600 volts" written on them.</p> <p>Secondly I would like to ask whether I need to cater for normal running amperage levels or for stall current levels, which are obviously much higher. Although people talk about stall currents running over 100 Amps in some cases - which quite frankly cause me some concern - I cannot seem to be able to find any such switches on any robot component websites, so I am starting to think that it is the "normal" level of current I should plan for and not the "stall" one. Am I right in this?</p>
Power switch standards and suitability for purpose
<p>Sure it can. The NXT provides a bluetooth serial port, so you don't actually have to do anything with bluetooth in Qt. All of the Bluetooth stuff will be handled by your OS.</p> <p>Instead, to communicate to the NXT from Qt, you will need to open the serial port for reading and writing, then send commands to the NXT over this connection. You can find out more about the available commands using the <a href="http://www.lego.com/en-us/mindstorms/downloads/software/nxt-hdk/" rel="nofollow">NXT Hardware Developer Kit</a>.</p>
4937
2014-11-09T03:21:32.053
|nxt|robotc|
<p>Qt has native Bluetooth support, but can it be used to communicate with the Lego NXT robot?</p>
Can I use Qt to communicate with a Lego NXT robot?
<p>Ideally, when you raise the collective all the way up, all the ESCs put out their max power and the quadcopter goes straight up.</p> <p>Different ESCs will end up producing different maximum thrusts; and also will ramp up (and down) differently with sudden changes in control signal.</p> <p>As you probably already know, if one rotor of a multirotor vehicle has more (or less) thrust than the others, the vehicle will flip over.</p> <p>The <a href="http://en.wikipedia.org/wiki/Ardupilot" rel="nofollow">ArduPilotMega (APM)</a> or other <a href="http://www.opencircuits.com/multi-rotor" rel="nofollow">electronic stabilization system</a> will try to compensate for differences in thrust to keep it right side up.</p> <p>The ESCs don't need to be exactly the same.</p> <p>The more different the ESCs are from each other, the longer it will take to tune the stabilization system. The more different the ESCs are from each other, the harder the stabilization system has to work in flight to keep the multirotor vehicle right-side up; and the more the vehicle will wobble in flight.</p>
4939
2014-11-09T06:59:39.330
|quadcopter|microcontroller|electronics|esc|multi-rotor|
<p>I'm working on a DIY quadcopter built from scratch and have bought a 4-pack of ESCs from Castle Creations. While I currently have my quad up and running (sort of), from what I've read in various sources and forums on the internet, I am not able to (or not recommended to) use different ESCs together on the same quad.</p> <p>As I bought my ESCs together as a 4-pack, and am not able to buy any replacements unless I were to switch out all 4 of them, this has me worried in the eventual case of a broken ESC in the future.</p> <p>From what I can glean from various posts on the internet, it seems to have something to do with the rate at which ESCs communicate with the flight controller. If so, can I not simply buy an ESC programmer and program all of them to communicate at the same rate?</p> <p>I've asked the dude at my local hobby shop, and he said that I cannot/should not be using different ESCs from different brands, or even the same brand but different models (i.e. 35V &amp; 20V), together.</p> <p>I would really appreciate it if someone were to clarify what exactly is the issue with using different ESCs together on the same quadcopter.</p> <p>P.S. If it helps, I'm currently using the APM 2.6 as my flight controller on a WFLY transmitter and an F450 frame.</p>
Why can't I use different ESCs together on a multirotor?
<p>A variety of techniques are used for propeller shaft sealing. One of the most common is a "packing gland" or "stuffing box", as in illustration below from a <a href="http://www.dieselduck.info/machine/04%20auxiliary/stuffing_box.htm" rel="nofollow noreferrer">dieselduck.info</a> web page which has details of using such seals. </p> <p><img src="https://i.stack.imgur.com/qEeY4.jpg" alt="packing gland"></p> <p>Also see <a href="http://www.boatus.com/boattech/casey/stuffing-box.asp" rel="nofollow noreferrer">boatus.com</a>, which describes how much leakage to allow, and when to repack. Two or three drops of leakage per minute is recommended; the leakage helps lubricate the shaft where it passes through the seal. Leakage gets pumped out of the vessel as necessary. For some submarines, high leakage rates occur. For example, <a href="http://en.wikipedia.org/wiki/Collins-class_submarine#Propulsion_system" rel="nofollow noreferrer">wikipedia </a> notes that</p> <blockquote> <p>Propeller shaft seals were a significant problem on Collins and Farncomb. Although designed to allow for a leak of 10 litres per hour, during trials it was found that the seals would regularly misalign and allow hundreds of litres per hour into the boat – during one deep diving test the flow rate was measured at approximately 1,000 litres a minute.</p> </blockquote> <p>Other kinds of propeller shaft seals (oil or grease filled, gas filled, with carbon plates, o-rings, etc) may leak less or not at all. (<a href="http://www.shaftseal.com/en/categories" rel="nofollow noreferrer">1</a>, <a href="https://www.physicsforums.com/threads/how-do-you-seal-a-rotor-underwater.323819/" rel="nofollow noreferrer">2</a>). </p>
4948
2014-11-10T11:44:53.050
|motor|design|
<p>How does the submarine prevent water flowing through the screw mechanism? The mechanism rotates the screw, so there should be a little gap; how come the water doesn't come through this gap into the submarine?</p>
Submarine screw and the isolation from the water
<p>HRL has been embodied in a robot in multiple cases.</p> <ul> <li><a href="http://perso.ensta-paristech.fr/~stulp/publications/pdfs/stulp11hierarchical.pdf" rel="nofollow">In a reaching, shelving robot</a>.</li> <li><a href="http://pdf.aminer.org/001/049/971/acquisition_of_stand_up_behavior_by_a_real_robot_using.pdf" rel="nofollow">In a robot learning how to stand up</a>.</li> <li><a href="https://www.elen.ucl.ac.be/Proceedings/esann/esannpdf/es2013-19.pdf" rel="nofollow">In robot navigation.</a></li> </ul> <p>However, how HRL is applied in each of these cases varies. The first uses HRL to manipulate Dynamic Movement Primitives, while the second, older method focuses more on learning state-space values.</p>
4949
2014-11-10T14:18:39.563
|machine-learning|reinforcement-learning|reference-request|
<p>I've been reading about hierarchical reinforcement learning (HRL) and its applications. A well-written literature review on the subject can be found <a href="http://people.cs.umass.edu/~mahadeva/papers/hrl.pdf">here</a>. However, I was wondering if research has ever been done where an HRL system has been implemented on an individual robot? <a href="https://mindmodeling.org/cogsci2014/papers/221/paper221.pdf">This paper</a> seems to imply that it has been, by saying that the delivery task that it models "is commonly used in HRL, both in computational and experimental settings". However, my Google Scholar searches haven't turned up anything as to what this real-world experimental setting might be. Help would be appreciated in finding either a model-based or model-free implementation of hierarchical reinforcement learning in a robot.</p>
Has hierarchical learning been embodied in a robot before?
<p>After reading your other question, I reason that the one quantity you can look at is the rpm of your motor at different input values.</p> <p>To measure this, you may use a tachometer, give your ESC an input from your transmitter, and then record your tachometer readings for several inputs.</p> <p>I myself use different ESCs on one of my tricopters, the reason being that the ones I purchased earlier are no longer available. It flies fine, as the flight controller compensates for any differences, but you may notice erratic behaviour while flying aggressively.</p> <p>On a different note, you could also open up the ESCs and look at the hardware used. Almost all ESCs use an AVR running at the same speed, so if you flash the same firmware on them, they will operate the same, given that the rest of the hardware is identical.</p>
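<p>A minimal logging sketch of that measurement procedure; <code>set_throttle()</code> and <code>read_tachometer_rpm()</code> are hypothetical stand-ins for your transmitter input and tachometer readout:</p> <pre><code># Step the throttle and record steady-state rpm at each input (sketch).
# set_throttle() and read_tachometer_rpm() are hypothetical helpers.
import time

results = []
for throttle in [0.2, 0.4, 0.6, 0.8, 1.0]:
    set_throttle(throttle)
    time.sleep(2.0)                # let the motor settle
    rpm = read_tachometer_rpm()
    results.append((throttle, rpm))
    print(throttle, rpm)

set_throttle(0.0)                  # repeat per ESC and compare the curves
</code></pre>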
4953
2014-11-11T17:14:04.983
|sensors|quadcopter|electronics|esc|multi-rotor|
<p>How would I go about measuring and quantifying the performance of an ESC? I am looking to measure the response rate (in Hz) of an ESC, as well as its performance (i.e. how fast it starts and stops, as well as how much it increases/decreases the motor's speed per tick).</p> <p>I'm assuming that I can manually program an ESC and its response rate with a programming card/module that is compatible with said ESC, but I would still not be able to measure the exact performance of the ESC.</p> <p>I would appreciate any and all inputs and suggestions.</p> <p>P.S. This question is linked/asked in conjunction with a previous question of mine here on the Robotics StackExchange: <a href="https://robotics.stackexchange.com/questions/4939/why-cant-i-use-different-escs-together-on-a-multirotor">Why can&#39;t i use different ESCs together on a multirotor?</a></p>
Measuring the performance and response rate of ESCs
<h2>Drivers</h2> <p>The only link they have to download drivers is on <a href="http://www.inovatic-ict.com/didactic-equipment/download.html" rel="nofollow">their download page</a>. Unfortunately the link to the EMoRo USB drivers returns a 404 error. I have attempted to send a message to the company to let them know.</p> <p>As for the version of the driver, the only way you'll know if the v1.0 driver will work is if you try it. Since nothing else is available at the moment, you should try working with the driver you have and see if you have any problems relating to that.</p> <h2>Programming Tools</h2> <p>According to the page you linked in your question:</p> <blockquote> <p>The kit is designed for users who wish to program the robot with a flowchart or in the Basic programming language... Advanced users can program the robot in Bascom-AVR or AVR C and they can build their own autonomous robot. </p> </blockquote> <p>So you can either use their flow-chart programming environment, you can use Bascom-AVR (which is BASIC for AVR microcontrollers), or you can program it in C. It looks like Inovatic's <a href="http://www.inovatic-ict.com/didactic-equipment/download.html" rel="nofollow">download page</a> has IDEs and compilers available to download for free to use any of these options.</p>
4958
2014-11-12T14:15:19.460
|microcontroller|
<p>In my possession I have an <a href="http://www.inovatic-ict.com/didactic-equipment/emoro-kits-overview/emoro-usb-classic.html" rel="nofollow noreferrer">Inovatic USB Interface</a> (in detail: UI-8x8 v1.1 2009). I would like to program it to do some simple things. I am familiar with C# programming, but from what I have heard it's not possible to program this interface with C#.</p> <p>What it looks like:</p> <p><a href="https://i.stack.imgur.com/ZsJAf.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZsJAfs.jpg" alt="Interface from the top"></a> <a href="https://i.stack.imgur.com/X1OZD.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/X1OZDs.jpg" alt="The USB port on the Interface"></a> <a href="https://i.stack.imgur.com/zbern.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zberns.jpg" alt="The USB Cable used to connect the Interface to the computer"></a></p> <ul> <li>Where can I find the drivers for this interface? I have checked the Inovatic website, but they only have the v1.0 version of the drivers and I'm pretty sure that I need v1.1!</li> <li>How can I program it? What language do I use?</li> </ul>
How to program an Inovatic USB interface?
<p>The guideline is quite simple math. You are right that the sync precision requirement is a function of speed: the faster you go, the better the sync has to be to give you the same distance difference between the two cameras. Example: 100 km/h is 27.7 m/s, so a 100ms sync will give you a difference of 2.7 meters of distance from one camera to the other; 10ms will give you 27cm, 1ms 2.7cm, etc. If you have a very slow moving vehicle, 100ms can be OK. But I think no stereo algorithm (3D reconstruction, visual odometry) can produce good results with bad input data, so you want to do your math before starting, to see where the sync should be.</p> <p>If you can analyze your stereo in post, you can fix the sync in post-processing (more FPS will give you better sync possibilities). If you need real-time, then you cannot fix it in post and your hardware requirements are higher.</p> <p>For real-time stereo processing I think you want the stereo sync to be as good as possible within your budget (of course depending on vehicle speed). The best is two identical cameras with a wired external trigger (remote shutter): you wire both cameras to the same HW trigger button. If you don't need high FPS you can use commodity (relatively cheap) consumer cameras with a remote shutter (a few point&amp;shoot cameras may have it; ultrazooms or DSLRs will too), but these usually save to an SD card, which is not very convenient for real-time processing.</p> <p>A USB camera is better for real time as you can stream the images directly into your code (C++ etc.). "Industrial" cameras are usually USB; they have quite high FPS and an external trigger. <a href="http://www.e-consystems.com/1MP-USB3-Globalshutter-Camera.asp" rel="nofollow">See3CAM_10CUG by e-con Systems</a> is probably the cheapest one; other more expensive brands are <a href="http://www.theimagingsource.com/en_US/products/cameras/" rel="nofollow">The Imaging Source</a> or <a href="http://www.ptgrey.com/" rel="nofollow">Point Grey</a>. These can usually do from about 10 to many FPS with an external trigger, depending on the model.</p> <blockquote> <p>I use 2 separate cams. The first camera is software triggered, but when its sensor is exposed it hardware triggers the 2nd camera.</p> </blockquote> <p>This is a wrong solution: you want to trigger both the same way in order for them to behave the same; see the remote shutter above.</p> <p><strong>EDIT:</strong></p> <p>Going back to the root of your question:</p> <blockquote> <p>what kind of delay between the cameras is tolerable?</p> </blockquote> <p>As I said above, this translates into "what distance difference between left and right stereo frames is tolerable for stereo vision?" To be honest, even though I did some stereo matching, I have no experience with sync precision here (I used synced datasets), but as I said, I can imagine the results will get worse linearly with the distance delta. It would be interesting to do some tests in this field...</p> <p>Something like "spherical resolution" (the number of horizontal/vertical image pixels divided by your lens field-of-view) will probably play a big role: the lower the spherical resolution, the lower the sync precision needs to be, because you could not recognize the details anyway. There will be some (probably not very difficult) math behind this.</p>
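<p>The arithmetic above is easy to invert when sizing your trigger hardware; a small sketch (you pick the tolerable left/right distance difference yourself, the numbers here are just examples):</p> <pre><code># How good does the stereo sync need to be for a given speed? (sketch)
speed_kmh = 100.0
tolerable_delta_m = 0.05        # allowed left/right distance difference

speed_ms = speed_kmh / 3.6      # 100 km/h = 27.7 m/s
max_sync_error_s = tolerable_delta_m / speed_ms
print(max_sync_error_s * 1000)  # ~1.8 ms of allowed sync error
</code></pre>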
4960
2014-11-12T18:45:01.713
|stereo-vision|
<p>When I put a stereo camera on a moving platform and sync the cameras, what kind of delay between the cameras is tolerable? This probably depends on the motion, but are there some guidelines?</p> <p>Are there any practitioner guides on how to build a moving stereo system?</p>
Stereo vision on a moving vehicle
<p>This motor is based on a simple DC motor. Don't be mistaken: it will draw much more than 50mA when running with even a small load; just look at its datasheet. In case you want to control the motor's speed, you will need some kind of DC motor driver, which you could build (look up "H-bridge") or buy, and then use the PWM technique to control it. In case you just want to turn it on and off, the simplest solution would be a relay controlled by an Arduino digital output and connected in series with your additional power supply and the motor.</p>
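<p>For the PWM speed-control route, here is a minimal sketch using the pyfirmata library from a PC, with the Arduino running the standard Firmata sketch and the driver input on pin 9 (the port name and pin number are assumptions; adjust them for your setup, and remember the PWM pin drives the H-bridge or transistor, never the motor directly):</p> <pre><code># PWM speed control of a small DC motor through a driver/transistor.
# Assumes the Arduino runs the StandardFirmata sketch.
import time
from pyfirmata import Arduino

board = Arduino('/dev/ttyUSB0')  # assumed port name
motor = board.get_pin('d:9:p')   # digital pin 9 in PWM mode (assumed wiring)

for duty in [0.2, 0.5, 0.8]:     # ramp the motor speed up
    motor.write(duty)            # duty cycle 0.0 .. 1.0
    time.sleep(2)

motor.write(0)                   # stop
board.exit()
</code></pre>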
4965
2014-11-13T17:15:47.467
|arduino|motor|
<p>I am new to robotics. I want to make a robot using an Arduino Uno R3. I need to use Gear Motor 9 for that; <a href="https://solarbotics.com/product/gm9/" rel="nofollow">here is the link</a>.</p> <p>The problem is that the motor needs 50mA of current, but the Arduino outputs only 40mA.</p> <p>I want to supply the motors with another power source and use a switch to connect both circuits. Can you please tell me what type of switch I can use?</p> <p>Thanks in advance. P.S. Sorry if I used any wrong technical terms.</p>
How to use gear motor 9 with Arduino
<p>What I suggest is to use a PI or PID regulator for each motor, like pericynthion said, but I'm afraid that alone is not sufficient to keep your robot straight; you also need to keep the two wheels at the same speed and synchronized, for example by implementing odometry on your microcontroller.</p>
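<p>To make the wall-following part concrete, here is a minimal proportional-control sketch; <code>read_ir_distance()</code> and <code>set_motor()</code> are hypothetical hardware helpers, and with op-amps the same difference-and-gain idea can be done in analogue:</p> <pre><code># Hold a constant distance from the wall with a proportional controller (sketch).
# read_ir_distance() and set_motor() are hypothetical helpers.
import time

TARGET = 15.0   # desired wall distance, cm
KP = 0.04       # proportional gain, tune experimentally
BASE = 0.6      # nominal forward command, 0.0 .. 1.0

while True:
    error = TARGET - read_ir_distance()   # positive: too close to the wall
    # steer by speeding up one wheel and slowing the other
    # (swap the signs if the wall is on the other side)
    set_motor('left', BASE + KP * error)
    set_motor('right', BASE - KP * error)
    time.sleep(0.02)
</code></pre>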
4967
2014-11-13T18:03:27.313
|motor|
<p>I have a two-wheeled (two DC motors) robot that needs to follow the wall beside the robot.</p> <p>The issue is that the DC motors spin at different rates (because they are not identical, of course), so the robot does not go straight when the same voltage is provided to each motor.</p> <p>How can I use IR distance sensors (and op-amps) to keep the distance from the wall constant?</p> <p>(the robot must travel parallel to the wall)</p>
Keeping two wheeled wall following robot straight
<p>There's a lovely theorem in mathematics: given three points $A, B, C$ on a circle, the angle $BAC$ subtended at $A$ by the arc $BC$ is exactly half the angle subtended by the arc $BC$ from the circle's center (i.e., the "angle between $B$ and $C$, as seen from the circle center"). </p> <p>Thus as @Zeph observes, knowing the angle between any pair of towers places you on a circle of a particular radius that contains these towers at a certain angular distance from each other on the circle... but there are two such circles. If you know the <em>signed</em> angle (i.e., "As I sweep counterclockwise, I see $B$ and then $C$, with the angle between them less than 180"), then you actually know which of the two circles you're on. </p> <p>If you look in Bowditch's <em>American Practical Navigator</em>, you'll find that this technique has been known for some time. (Closely related: "doubling the angle on the bow"). </p> <p>The sensitivity of the computed position to angular noise is not pretty. If your three towers are very far apart (hundreds of miles), but distributed at 120 degrees angles around the center of your region of interest, then the angles measured from any point in your region will be very close to 120...so determining the exact location will be tough. If, at the other extreme, your three beacons are very near the center of your region, then the three angles will again tend to be almost equal from all points of view, so the computed position will be very sensitive to angle again. In the ideal case -- which is probably to have the beacons placed on a circle that's perhaps twice the size of your region of interest -- you can probably do OK...but you'll do hugely better if you put 15 beacons out there, and use something like a Kalman filter to update your position estimate at each moment. Greg Welch at UNC did something like this for tracking in virtual reality systems, back in about 1991 or 1992, I believe, but I could be off by a year or two. </p>
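<p>For a concrete feel of the computation, here is a minimal sketch that recovers a position from the signed angles between three beacons using scipy's least-squares solver; the beacon layout and the "measured" angles are made-up numbers for illustration, and a single initial guess like this can in principle land in a local minimum:</p> <pre><code># Recover (x, y) from the signed angles between three known beacons (sketch).
import numpy as np
from scipy.optimize import least_squares

beacons = np.array([[0.0, 100.0], [90.0, -50.0], [-90.0, -50.0]])

def inter_angles(p):
    # signed angles between consecutive beacons as seen from p, wrapped to [-pi, pi)
    bearings = [np.arctan2(b[1] - p[1], b[0] - p[0]) for b in beacons]
    diffs = np.diff(bearings + bearings[:1])
    return (diffs + np.pi) % (2 * np.pi) - np.pi

true_pos = np.array([12.0, 7.0])
measured = inter_angles(true_pos)   # pretend these came from the antenna sweep

fit = least_squares(lambda p: inter_angles(p) - measured, [0.0, 0.0])
print(fit.x)                        # ~[12, 7]
</code></pre>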
4972
2014-11-14T04:54:13.397
|localization|
<p>To determine an outdoor location, I think I need to measure the angles between at least 3 beacons and take into account the order in which I sweep the beacons. Is this a workable solution to get positional accuracy of about 30cm in a house-block-sized area?</p> <p>Rewrite of the question; note that no distance measurement is suggested, only angle measurements. I am proposing that it might be possible to have a minimum of 3 local RF beacons and a robot device that sweeps an antenna through a circle, identifying the angles between these beacons, and to use this information and the order in which the beacons are seen to find an absolute location. I tried to prove this geometrically, and it seems that with 3 beacons there are 2 solutions without knowing the order, and 1 unique solution if the order is known. There would (I believe) be no need to try to find the distance to the beacons. My question is: could this be implemented for a reasonable cost with some nRF24L01-based transceivers and some sort of rotating antenna?</p>
Can triangulation, by measuring angles to 3 beacons, be used to find location over a large outdoor area?
<p>Check out this video: <a href="https://www.youtube.com/watch?v=csfAOCdkCdU" rel="nofollow">https://www.youtube.com/watch?v=csfAOCdkCdU</a> and this relevant one: <a href="https://www.youtube.com/watch?v=t369aSInq-E" rel="nofollow">https://www.youtube.com/watch?v=t369aSInq-E</a></p>
4983
2014-11-16T16:06:20.860
|quadcopter|multi-rotor|
<p>I am a newbie in this drone field. I am curious to know what types of rotation and translation a dualcopter can achieve. By rotation and translation I mean: can it roll, pitch and yaw like quadcopters do? If not, what makes any copter roll, pitch and yaw? Furthermore, are there any dualcopter designs with movable wings that rotate the rotors themselves, or that move up and down while flying?</p>
DualCopter Degree Of Freedom
<p>From what I see, you have a reference problem.</p> <p>I suggest installing an AHRS (Attitude and Heading Reference System) on your robot; this will indicate the direction your robot is heading and will sense the rotation of the body.</p> <p>To keep it simpler, you can use a 1-axis magnetometer that will act as a compass, showing only the body heading.</p> <p>If you use a 6-DOF (degrees of freedom) system, it will show both heading and body rotation.</p> <p>Check this: <a href="http://www.youtube.com/watch?v=lwlOZzNJqJc" rel="nofollow">6 DOF AHRS using MPU 6050, a low cost</a></p>
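<p>The compass part is one line of math. A tiny sketch (with a hypothetical <code>read_magnetometer()</code> helper; it also assumes the sensor is held level, since tilt must be compensated with an accelerometer):</p> <pre><code># Heading from a level magnetometer (sketch).
# read_magnetometer() is a hypothetical helper returning (mx, my, mz).
import math

mx, my, mz = read_magnetometer()
heading = math.degrees(math.atan2(my, mx)) % 360.0  # 0 along the sensor x axis
print(heading)
</code></pre>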
4987
2014-11-17T03:14:44.307
|wheeled-robot|
<p>I have a robot simulation that simulates the Mars Exploration Rover with six steerable wheels.</p> <p>In the case of the following steering configuration</p> <pre><code>// ---
// front wheels
//
// middle wheels
// ---
// rear wheels
</code></pre> <p>I'd say the heading of the rover with respect to the rover's body is about 45 degrees to the right. <strong>My question is: what is the right approach to calculating the heading with respect to the rover body?</strong></p> <p>Do I simply sum the steering angles of the steering actuators and divide by the total number of steering actuators?</p> <p><strong>Additional note:</strong> Assume no slippage on a perfectly flat plane.</p>
How do I determine the heading of a six wheeled robot?
<p>I believe this should tick all your boxes:</p> <p><a href="http://wiki.ros.org/robot_localization" rel="nofollow">http://wiki.ros.org/robot_localization</a></p> <p>It's a ROS node for 6D pose estimation that has the following features:</p> <ul> <li>Fusion of an arbitrary number of sensors. The nodes do not restrict the number of input sources. If, for example, your robot has multiple IMUs or multiple sources of odometry information, the state estimation nodes within robot_localization can support all of them.</li> <li>Support for multiple ROS message types. All state estimation nodes in robot_localization can take in nav_msgs/Odometry, sensor_msgs/Imu, geometry_msgs/PoseWithCovarianceStamped, or geometry_msgs/TwistWithCovarianceStamped messages.</li> <li>Per-sensor input customization. If a given sensor message contains data that you don't want to include in your state estimate, the state estimation nodes in robot_localization allow you to exclude that data on a per-sensor basis.</li> <li>Continuous estimation. Each state estimation node in robot_localization begins estimating the vehicle's state as soon as it receives a single measurement. If there is a holiday in the sensor data (i.e., a long period in which no data is received), the filter will continue to estimate the robot's state via an internal motion model.</li> </ul> <p>It has both an EKF implementation (ekf_localization_node) and a UKF (ukf_localization_node). Feel free to ask questions on the <a href="http://answers.ros.org/questions/" rel="nofollow">ROS Answers</a> site.</p>
4990
2014-11-17T08:45:51.183
|kalman-filter|ekf|pose|
<p>I am looking for open source implementations of an EKF for 6D pose estimation (Inertial Navigation System) using at minimum an IMU (accelerometer, gyroscope) + absolute position (or pose) sensor.</p> <p>This seems to be such a recurring and important problem in robotics that I am surprised I cannot find a few reference implementations. Does everyone just quickly hack together his own EKF and move on to more interesting things? Is that not rather error-prone?</p> <p>I would ideally like a well-tested implementation that can serve as a reference for fair evaluation of possible improvements. </p>
Open source implementations of EKF for 6D pose estimation
<blockquote> <p>what is the best way to command these motors simultaneously?</p> </blockquote> <p>Like all engineering questions, the answer is <strong>it depends</strong>.</p> <p>First, for some general things to keep in mind, read <a href="https://robotics.stackexchange.com/q/3110/158">this answer</a>; even though it's about ROS, some of the same principles apply.</p> <p>In short, one particular factor deciding between one-micro-handles-all and dividing motors among slave micros is how powerful each micro is. If one micro <em>could</em> handle all, it's kind of overkill to use many micros. On the other hand, distributing the control means you could use cheaper micros and probably be more energy efficient. You may also save on wiring, depending on where the motors are located. Note that the slave micros don't necessarily have to control <em>one</em> motor; they could each control a couple of related motors.</p> <p>If you go with a master-slave architecture, first you need to understand which motors are meaningless without the others, i.e. which motors need to be perfectly coordinated. The motors driving the fingers come to mind.</p> <p>That said, it would make sense to have one micro driving the finger motors, and one micro for each other joint. That means a failure in one micro disables a joint or the whole hand. In the former case, redundancy in the arm compensates for the lost degree of freedom, and in the latter case, well, if you have two fingers and one of them is not working or both are not working, then there's really no difference. If you have more fingers, then of course you need to assess the possibilities and decide between more micros or better coordination.</p> <p>The master micro is of course connected to the slaves through bus(es): SPI, I2C or otherwise. The master would have to query the slaves periodically for information (such as the current state of the motors), do the control calculations and send instructions to the slaves to drive the motors. What needs to be transferred and at what frequency really depends on the type of the motor, the maximum speed of your robot, the internal safety features of the robot and various other things (a minimal polling sketch follows at the end of this answer).</p> <p>That is probably what you would want for the lower-level control. On the PC, what you would probably want is a higher-level control. So you tell the robot arm to go to some <code>(Position, Orientation)</code> and the master micro takes care of the details of how much and how fast each motor exactly has to move in every period.</p> <blockquote> <p>If this is the best way to go how does the master micro command slave ones simultaneously</p> </blockquote> <p>Of course, you cannot achieve perfect simultaneity. However, with a <em>fast-enough</em> bus, the master micro may be able to tell all the slaves what to do with a delay that is acceptable. If you have multiple buses, by driving the buses inside interrupt service routines you could send commands on different buses in parallel, which further reduces the delay.</p> <p>However, don't fret it so much. You would likely have to make your robot slow anyway, because you cannot guarantee you can stop in time in a dangerous situation. There are actually two main reasons for this.</p> <ol> <li>You most likely don't have enough information from the robot to decide between safe and unsafe situations. If you cover your robot with robot skin and are able to process all the data on the slave or master micros, you may be able to gather enough information, but chances are that you don't do this.</li> <li>Your PC connection and processes are probably not real-time and depend on other non-real-time sensors and processes (such as cameras and image processing). That means that you don't have any guarantee that you can in fact respond in a timely manner.</li> </ol> <p>Since you have to make your robot slow (to minimize the damage in case of collision, both to the robot and to nearby humans), your control loop likely has a much larger period than your bus communications have. Therefore, in comparison, the delay between controlling different motors (through different slaves) is likely to be negligible.</p> <blockquote> <p>Another question I have is what is the best micro for the robot to go with between arm and pic?</p> </blockquote> <p>I have to say that this is a bad question. Both ARM and PIC cover a wide variety of microcontrollers with varying features and power. That question is like asking "should I buy a Mazda or a Honda?" You really need to look for what features you want and how much you want to pay for them. You need to take into account what compilers you would have access to and what libraries are available. You need to look at what speed the micro can run at, whether it has an FPU, whether it has interrupts on the events you are interested in, whether it has enough pins to drive all you need to drive, whether it can support as many buses as you would like, etc.</p> <p>So the answer is to look at as many datasheets as you can and see for yourself what matches your ideals best.</p>
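<p>To make the query/command cycle concrete, here is a minimal master-side polling sketch in Python using the smbus I2C library (as you would have on, say, a Raspberry Pi master); the slave addresses, the register layout and the 4-byte payloads are all invented for illustration, and <code>compute_command()</code> stands in for your control law:</p> <pre><code># Master control loop polling motor-slave micros over I2C (sketch).
# Addresses, registers and payload sizes are invented for illustration.
import time
import smbus

bus = smbus.SMBus(1)
SLAVES = [0x10, 0x11, 0x12]   # one address per joint controller

def control_step():
    for addr in SLAVES:
        state = bus.read_i2c_block_data(addr, 0x00, 4)  # e.g. position/velocity
        command = compute_command(addr, state)          # hypothetical control law
        bus.write_i2c_block_data(addr, 0x01, command)   # 4-byte setpoint

while True:
    t0 = time.time()
    control_step()
    time.sleep(max(0.0, 0.01 - (time.time() - t0)))     # ~100 Hz loop
</code></pre>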
5014
2014-11-19T19:12:06.347
|control|microcontroller|communication|
<p>My friend and I are building the upper body of a humanoid robot for our M.Sc thesis project. There are 24 DC motors in this robot. The thing I want to know is <strong>what is the best way to command these motors simultaneously?</strong></p> <p>The design I had in mind is for each motor to have its own micro for position and velocity control and then one master micro to command and control the slave micros. <strong>If this is the best way to go how does the master micro command slave ones simultaneously?</strong></p> <p>Another question I have is what is the best micro for the robot to go with between ARM and PIC? I want the master micro to receive its command from a PC. Any help would be appreciated.</p>
control circuit of a humanoid robot (something like iCub or Asimo)
<p>It sounds like all you need is a single beam distance measurement. There are lots of devices like the <a href="http://www.boschtools.com/products/tools/Pages/BoschProductDetail.aspx?pid=DLR130K" rel="nofollow">Bosch laser distance meter</a>. But I don't think they have the range you want, and they probably have no computer interface.</p> <p>SICK makes a 2D laser scanner that used to be standard in robotics. (You can see many of these mounted atop the DARPA Grand Challenge cars for example). I think the standard used to be the LMS200 series, but it looks like that has been replaced by the <a href="https://www.mysick.com/eCat.aspx?go=FinderSearch&amp;Cat=Row&amp;At=Fa&amp;Cult=English&amp;FamilyID=344&amp;Category=Produktfinder&amp;Selections=50696,34243" rel="nofollow">LMS500 series</a>. These work outdoors, have ranges up to 80 meters, and are easily interfaced with a computer.</p>
5016
2014-11-19T21:53:17.480
|laser|
<p>I am looking to buy a laser distance meter and to connect it to a motor and a 3G cellular modem, so that I can both control the motor and measure the distance remotely. I would appreciate your advice on how to do so. Thanks.</p>
remote control laser meter
<p>I haven't seen anything specific to your specifications, but it seems like you'd want to look into servo motors that meet your needs: <a href="http://www.jameco.com/jameco/workshop/howitworks/how-servo-motors-work.html" rel="nofollow">http://www.jameco.com/jameco/workshop/howitworks/how-servo-motors-work.html</a></p> <p>and look for inspiration from automated storage and retrieval systems, or perhaps a gripper arm that can extend horizontally but fold down vertically to avoid protruding. This vertical lift example from a robotics competition may not be the best match for what you want, but maybe it'll get you headed in the right direction: <a href="http://www.youtube.com/watch?v=ioilEy1I8lY" rel="nofollow">http://www.youtube.com/watch?v=ioilEy1I8lY</a></p>
5028
2014-11-22T11:22:41.293
|mechanism|
<p>I'm trying to devise a system to lift a 10kg weight a distance of 1.4m vertically, and allow it to move in/out a distance of 30cm. I'd like the motions to be able to occur simultaneously if possible. </p> <p>I'm thinking for the vertical motion I can use a suspended climber system. However, I am unsure how to devise a system for the horizontal motion: in the horizontal plane nothing may protrude except when the device is told to extend, so a horizontal suspended climber system isn't a possible solution.</p> <p>I'm thinking I will need to use 2 electric motors.</p> <p>Also - I'd like to mount it to the side of a car - so lightweight and low power draw is a must.</p> <p>Does anyone know if there is anything available that will do this? Or suggest how I could combine a couple of systems to make this work?</p> <p>Any information is appreciated.</p>
Trying to design a mechanical system with vertical and horizontal movement
<p>I think this should be possible. You could still run nc on the Pi side, and use a socket connection in your Java application.</p> <p>One important aspect is the encoding of the stream. It would probably be easiest to use MJPEG here. MJPEG is just a concatenation of single JPEG images, and decoding JPEGs is much easier than H.264 (the other option in raspivid, as far as I recall).</p> <p>The server could e.g. be</p> <pre><code>raspivid -t 0 -w 300 -h 300 -hf -fps 20 -codec MJPEG -o - | nc -l 4000 </code></pre> <p>On the client side you could look for example at this <a href="https://github.com/gskjold/mjpeg-viewer" rel="nofollow">mjpeg-viewer</a> to get you started.</p> <p>The support for MJPEG in raspivid seems to be recent. I don't know if this will work.</p>
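<p>If you want to do the frame splitting yourself, here is a rough Python sketch of the idea (scan for the JPEG start/end markers); the same byte-level logic ports directly to a Java <code>InputStream</code> for your application. The host name and port are assumptions, so match them to whatever <code>nc -l</code> is using on the Pi:</p>

<pre><code>import socket

sock = socket.create_connection(("raspberrypi.local", 4000))
buf = b""
while True:
    chunk = sock.recv(4096)
    if not chunk:
        break
    buf += chunk
    start = buf.find(b"\xff\xd8")   # JPEG start-of-image marker
    end = buf.find(b"\xff\xd9")     # JPEG end-of-image marker
    if start != -1 and end != -1 and end &gt; start:
        jpeg = buf[start:end + 2]   # one complete frame
        buf = buf[end + 2:]
        # decode/process `jpeg` here (e.g. hand it to OpenCV or ImageIO)
</code></pre>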
5031
2014-11-23T02:52:06.307
|quadcopter|raspberry-pi|cameras|linux|
<p>I'm building a quadcopter using Raspberry Pi. There is the Pi Camera connected to the Raspberry Pi which is streaming the captured video. I can connect to this stream via Wi-Fi on my notebook (Linux) by using the console command "nc" and then show it by "mplayer".</p> <p>What I want to do though is avoid the console commands and connect to this stream directly through my Java application. The reason is I want to do some image processing operations with this video so I need to have it in my application.</p> <p>Is there anyone able to help me?</p>
Connect to video stream with Java app instead of console and mplayer
<p>ROS is good for when you start writing code to run on a PC. It lets you write independent modules that interface through standard messages and service calls. Furthermore, it has a large user base with lots of off-the-shelf tools, drivers, and higher-level behaviors.</p> <p>What ROS is not good for is inter-microcontroller communication. (ROS's communication is Ethernet based). The overhead required for a microcontroller to "talk" ROS is not worth it in my opinion.</p> <p>So what I would recommend (what I have done numerous times in the past) is to write your own inter-microcontroller communication using whatever you are comfortable with. Then the micros will need to interface with a computer. This can be some embedded thing like a Gumstix, RaspberryPi, etc. Or it can be over a serial or bluetooth link to a standard PC. In either case, you define that protocol as well. Now on the PC, you write a ROS node to expose the functionality of your hardware. For a very custom robot, you will probably have to define your own messages for control as well as sensor feedback. But then you can start to interface with all of the ROS ecosystem.</p> <p>One of the first things I do is to create a <a href="http://wiki.ros.org/urdf" rel="nofollow">URDF</a> file. Then write a node to publish the <a href="http://docs.ros.org/api/sensor_msgs/html/msg/JointState.html" rel="nofollow">joint_state message</a> to interface with the <a href="http://wiki.ros.org/robot_state_publisher" rel="nofollow">robot_state_publisher</a>. Now you have an excellent 3d visualization of your robot in <a href="http://wiki.ros.org/rviz" rel="nofollow">rviz</a>. From there, you can <a href="http://wiki.ros.org/rviz/DisplayTypes/Map" rel="nofollow">overlay maps</a>, display sensor data, and <a href="http://wiki.ros.org/rviz/DisplayTypes/InteractiveMarker" rel="nofollow">interact</a> with the robot in powerful ways.</p>
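<p>For a flavor of how little code that last step takes, here is a minimal sketch of a <code>joint_states</code> publisher in Python (rospy, ROS 1). The joint names and the stubbed-out readings are placeholders; they would come from your URDF and from your micro interface:</p>

<pre><code>#!/usr/bin/env python
import rospy
from sensor_msgs.msg import JointState

rospy.init_node('arm_joint_state_publisher')
pub = rospy.Publisher('joint_states', JointState, queue_size=10)
rate = rospy.Rate(50)  # Hz

while not rospy.is_shutdown():
    msg = JointState()
    msg.header.stamp = rospy.Time.now()
    msg.name = ['shoulder_pan', 'shoulder_lift', 'elbow']  # from your URDF
    msg.position = [0.0, 0.0, 0.0]  # replace with values read from your micros
    pub.publish(msg)
    rate.sleep()
</code></pre>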
5034
2014-11-23T13:53:04.547
|microcontroller|ros|humanoid|
<p>I am building the upper body of a humanoid robot for my M.Sc thesis project, with 24 DC motors and multiple sensors (something like iCub or Nao). I have basic knowledge of communication protocols and I have worked with micros before, but I have no knowledge of or experience with ROS. The question I have is whether it is worthwhile and practical for me to learn ROS and use it for my robot, or should I stick with what I already know?</p>
Control circuit of humanoid robot: is it worth to learn and use ROS?
<p>Looking at your hardware, only the wheel encoders are suitable for environment-independent localization, which is not enough in most cases. The localization method using the wheel encoders is called "<b>odometry</b>". The other sensors you have are suited more for known environments and map-based navigation. The camera could be used for marker detection and mapping, and the distance sensor can be used for matching the location to a map (a <b>Particle Filter</b> is the usual tool). In any case, I would add some inertial sensors (gyro, accelerometer or an integrated IMU) to your system, to improve the localization performance.</p>
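<p>For reference, the core of wheel-encoder odometry is only a few lines. A sketch for a differential drive (the per-update wheel distances are computed from your encoder ticks, and the wheel-base value is specific to your robot):</p>

<pre><code>import math

def odometry_step(x, y, theta, d_left, d_right, wheel_base):
    # d_left/d_right: distance traveled by each wheel since the last update
    d = (d_left + d_right) / 2.0
    dtheta = (d_right - d_left) / wheel_base
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta
</code></pre>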
5039
2014-11-24T03:27:09.467
|arduino|mobile-robot|localization|
<p>I am making a robot that is supposed to roam inside my house and pick up trash using OpenCV. I plan to send information from my Arduino Mega to my Arduino Nano connected to a Windows PC using radio transceivers. I also plan to send the video feed from the Raspberry Pi camera over WiFi to the Windows PC. The Windows PC then uses OpenCV, processes other information from the sensors, and sends commands back to the Arduino Mega.</p> <p>I have right now:</p> <ul> <li>Arduino Mega</li> <li>Raspberry Pi + USB camera + WiFi dongle</li> <li>Xbox 360 Kinect</li> <li>wheel encoders</li> <li>sonar distance sensor</li> <li>Arduino Nano</li> <li>Windows PC</li> </ul> <p>I want to know how to keep track of the robot, e.g. which room it is in. I think what I am trying to do is SLAM, but I just need to make the map once because the rooms don't change much. I am open to ideas. Cost is a factor.</p>
Best localization method?
<p>I suspect what you really want is a complete MCU Bluetooth module, something like the one pictured below:</p> <p><img src="https://i.stack.imgur.com/Sepk3.jpg" alt="enter image description here"></p> <p>(via Erich Styger <a href="http://mcuoneclipse.com/2014/03/30/getting-bluetooth-working-with-jy-mcu-bt_board-v1-06/" rel="nofollow noreferrer">"Getting Bluetooth Working with JY-MCU BT_BOARD V1.06"</a>)</p> <p>When I look carefully at the above complete MCU Bluetooth module, I see 2 circuit boards soldered together:</p> <ul> <li>the RF module, a green PCB with a gold meandering Bluetooth antenna on it and the RF integrated circuits (ICs), and</li> <li>the interface board, a blue PCB with convenient header pins and a handful of small parts.</li> </ul> <p>In the original question, I only see the blue interface board, without an RF module or even an antenna. As far as I can tell, that interface board is useless without an RF module attached to it.</p> <p>My understanding is that it is impossible for Bluetooth or any other radio communication system to work without an antenna.</p>
5041
2014-11-25T07:05:32.580
|arduino|
<p>I am trying to integrate Bluetooth in a project with an MSP430, so as to be able to communicate between it and my PC. Doing a search on eBay I found the following item:</p> <p>HC-05 06 Transceiver Bluetooth Module Backboard Interface Base Board Serial</p> <p><img src="https://i.stack.imgur.com/vQfu3.png" alt="enter image description here"></p> <p>There are also a lot of other Bluetooth modules that appear to be a lot more expensive, and their boards are populated with ICs that this one doesn't have. So I am wondering whether this is what I need, or whether it has another use.</p>
Bluetooth integration with MSP430
<p>Assuming the angular velocity is constant (i.e. $\dot{\theta}_{k+1} = \dot{\theta}_{k}$), it seems to me the state-space model should look like this </p> <p>$$ \underbrace{ \begin{bmatrix} \theta_{k+1} \\ \dot{\theta}_{k+1} \end{bmatrix}}_{\textbf{x}_{k+1}} = \underbrace{ \begin{bmatrix} 1 &amp; \Delta t \\ 0 &amp; 1 \end{bmatrix} }_{F} \begin{bmatrix} \theta_{k} \\ \dot{\theta}_{k} \end{bmatrix} + w_{k+1} $$</p> <p>$$ \begin{align*} y_{k+1} &amp;= h(\textbf{x}_{k+1}) + z_{k+1} \\ &amp;= \underbrace{ \begin{bmatrix} 0 &amp; 1 \end{bmatrix}}_{H} \begin{bmatrix} \theta_{k+1} \\ \dot{\theta}_{k+1} \end{bmatrix} + z_{k+1} \end{align*} $$</p> <p>where $w_{k+1}$ is the process noise, not the sensor noise (be careful). You could assume it to be zero if you think there is no noise affecting your state vector and the noise only comes from the sensor (i.e. $z_{k+1}$), which must be Gaussian noise with zero mean and known variance (i.e. $\mathcal{N}(0, \sigma^{2})$). Once you have the matrices and the noise parameters, applying the Kalman filter is straightforward. Hope this helps.</p>
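<p>To make "straightforward" concrete, here is a small numpy sketch of the predict/update cycle for this 2-state model. The sample time and the process-noise covariance <code>Q</code> are assumed values you would tune; <code>R</code> uses the 0.0002 variance measured in the question:</p>

<pre><code>import numpy as np

dt = 0.01
F = np.array([[1.0, dt],
              [0.0, 1.0]])      # state transition
H = np.array([[0.0, 1.0]])      # the gyro measures the angular rate
Q = np.eye(2) * 1e-5            # process noise covariance (to tune)
R = np.array([[0.0002]])        # measurement noise variance

def kf_step(x, P, y):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the gyro reading y
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([[y]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros((2, 1)), np.eye(2)
x, P = kf_step(x, P, 0.01)      # feed one gyro sample
</code></pre>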
5045
2014-11-26T19:22:13.987
|kalman-filter|gyroscope|noise|
<p>I am using <a href="http://www.pololu.com/file/0J563/L3GD20.pdf" rel="nofollow">this gyroscope</a> in order to measure the rotation of my robot around the z axis. I want to implement a Kalman filter in order to improve the values. What I came up with so far is this state-space model: $$ θ(k+1)=θ(k)+dt*θ'(k)+w(k) $$ $$ y(k)=θ(k)+z(k) $$ where $θ$ is the angle, $θ'$ is the angular rate given by the gyro and $w$ is the noise. (I held up my gyro, measured 50 values while it was steady, and found that the variance is equal to 0.0002). What I want to ask:</p> <ol> <li>Is what I did correct?</li> <li>How can I find out $z(k)$? According to the datasheet the noise density is equal to 0.03 dps/sqrt(Hz); how can I use this information to find out $z(k)$, and correct $w(k)$ if it is wrong?</li> </ol>
How to choose the state-space model for a 1-axis gyroscope to implement a good Kalman filter
<p>First you need to understand the math, then you need to know how to program it.</p> <p>So let's begin with the math. This is high school material, so I will go through it quickly.</p> <h2>The math</h2> <p>Any two points you take on a circle make a line segment (of course). The <a href="https://en.wikipedia.org/wiki/Bisection#Line_segment_bisector" rel="nofollow noreferrer">line bisector</a> of that segment passes through the center of the circle. If you have two such line bisectors that are not parallel, they collide at the center of the circle.</p> <p>The reason this happens is quite trivial, but if you don't know it, just accept it at this point.</p> <p><img src="https://i.stack.imgur.com/X5UzT.png" alt="Theory"></p> <p>That said, the line segments don't need to be far apart for their line bisectors to collide at the center of the circle. They could cross each other or they could share a point as one of their ends. In short, any pair of segments is OK as long as they are not parallel. Sharing one end point and having the other end points different ensures that this is the case.</p> <h2>The programming</h2> <p>To do the programming, first you need three points on the circle (arc).</p> <p>These three points don't need to be spaced out evenly, but the more points you have and the farther apart they are, the better your precision. Let's stick to three points for the moment.</p> <p>How can you find three points? That's quite simple. Run three horizontal lines through your image and record where you cross the arc. There will be at least three intersections, but there could also be more.</p> <p><img src="https://i.stack.imgur.com/aQIeg.png" alt="Choosing points on the arc"></p> <p>Now choose three as you like, preferably the two farthest points and one that is closest to the middle. Then find their bisectors and see where they collide.</p> <p><img src="https://i.stack.imgur.com/ahs3Q.png" alt="enter image description here"></p> <p>And that's it! You have your center.</p> <p>As a bonus, if you find more points and have the time, you can increase your precision by calculating more candidate points for the center and simply averaging them. That would give you a smaller computation error. In the figure below, that would be the average of the points identified by the arrows:</p> <p><img src="https://i.stack.imgur.com/eI6Bc.png" alt="enter image description here"></p> <p>Note that these points should theoretically have coincided, but due to computational error they are a bit offset.</p>
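<p>Since you mentioned you're working in Python, here is a quick sketch of the same construction in closed form: the intersection of the two bisectors reduces to the standard circumcenter formula. The three sample points below are placeholders; feed in pixel coordinates sampled from your arc:</p>

<pre><code>def circle_center(a, b, c):
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) &lt; 1e-12:
        raise ValueError("points are (nearly) collinear")
    ux = ((ax**2 + ay**2) * (by - cy) +
          (bx**2 + by**2) * (cy - ay) +
          (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) +
          (bx**2 + by**2) * (ax - cx) +
          (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy

print(circle_center((0, 1), (1, 0), (0, -1)))  # -&gt; (0.0, 0.0)
</code></pre>

<p>Averaging the results of this function over many point triples gives the bonus precision improvement mentioned above.</p>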
5047
2014-11-27T10:49:49.730
|computer-vision|navigation|
<p>I originally asked this <a href="https://math.stackexchange.com/questions/891093/work-out-center-of-a-partial-circle">here</a> but I am struggling to take the concepts, particularly "Circumcenter", discussed there and apply them in my context. So here it is in my context...</p> <p>I have a small robot that can "see" (via a camera) a partial arc (from a bird's-eye view)</p> <p>Therefore I know the height and the width of the square, and therefore the width and height of the arc. I am sure there must be a way of approximating the circle's centre?</p> <p>What would be the quickest (not necessarily the most accurate) way of helping my robot find the centre?</p> <p>In my head these are the steps of the problem:</p> <ol> <li><strong>Break up the arc into evenly spaced vectors ?</strong></li> <li>Work out the angle between the vectors</li> <li><strong>Use this information to complete the circle ?</strong></li> <li>Work out radius of circle </li> <li><strong>Find the center of the circle</strong></li> </ol> <p>Basically I would love some input on this problem, because I think I know:</p> <ol> <li>?</li> <li>Dot Product</li> <li>?</li> <li>Add up the lengths of all the vectors to get circumference and then divide by pi, then divide by 2 (or divide by Tau : ) ) </li> <li>I <strong>think</strong> this is where circumcentre comes in</li> </ol> <p>Basically I feel I have some pieces of the puzzle but I am not sure how they fit together. </p> <p>I am currently using Python with OpenCV and, as you may have guessed, I am not great at understanding math unless it's expressed as algebraic formulas or code. </p> <p>Here are some illustrative pictures to reference: <img src="https://i.stack.imgur.com/4Fmyj.png" alt="enter image description here"><img src="https://i.stack.imgur.com/ts26N.png" alt="enter image description here"></p>
Find Centre Of Circle, when robot can "see" a partial arc
<blockquote> <p>Why someone would choose to use 2 different brands, I am not sure. </p> </blockquote> <p>It is VERY common, and indeed common sense, to use different brands of a critical component for safety.</p> <p>This is because components of the same brand are likely to share the same point of failure. Therefore two different brands are used, as they are unlikely to have the same point of failure. In aerospace systems this is the MINIMUM requirement, i.e. several IMUs, all of different brands.</p>
5053
2014-11-27T23:44:16.430
|uav|
<p>I was looking at the Pixhawk specs and noticed that it has 2 different IMUs- Invensense and STM. Is it for redundancy or does it have any other higher purpose?</p>
Why does the Pixhawk have 2 IMUs
<p>Check out this paper by Oussama Khatib, Peter Thaulad, Taizo Yoshikawa and Jaeheung Park from the Stanford Artificial Intelligence Laboratory:</p> <ul> <li><a href="https://ieeexplore.ieee.org/document/4543450" rel="nofollow noreferrer">Torque-position transformer for task control of position controlled robots</a></li> </ul>
5059
2014-12-01T06:33:03.253
|robotic-arm|industrial-robot|manipulator|
<p>I am working with a position-controlled manipulator. However, I want to implement a torque-controlled method on this robot. Is there any way to convert a torque command to a position command?</p> <p>I have tried to find research papers on this, but I have no idea where I should start or which keywords I should use when searching. Do you have any suggestions?</p>
Implementing a torque-controlled method on a position-controlled robot
<p>I found the files for the analog inputs on the Beaglebone Black here:</p> <p>/sys/devices/platform/ocp/44e0d000.tscadc/TI-am335x-adc.0.auto/iio:device0/in_voltage#_raw (where # is the pin number, 0 through 7)</p> <p>Didn't downgrade my kernel. No enabling or setup needed. Those files are just continuously populated with the voltage readings from those pins. Values are 12 bit (0 to 4095, which corresponds to 0.0 volts to 1.8 volts).</p> <p>Or at least this is how my Beaglebone Black works.</p>
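<p>A quick sketch of reading one of these files from Python; the path is the one quoted above, but it can shift between kernel versions, so adjust it to whatever you actually find under <code>/sys</code>:</p>

<pre><code>ADC_PATH = ("/sys/devices/platform/ocp/44e0d000.tscadc/"
            "TI-am335x-adc.0.auto/iio:device0/in_voltage{}_raw")

def read_adc(pin):
    with open(ADC_PATH.format(pin)) as f:
        raw = int(f.read())          # 12-bit value, 0..4095
    return raw * 1.8 / 4095.0        # convert to volts

print(read_adc(0))
</code></pre>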
5074
2014-12-02T08:21:54.363
|beagle-bone|linux|
<p>I'm trying to pull analog input from a beaglebone black using <a href="http://www.linux.com/learn/tutorials/787511-how-to-get-analog-input-on-the-beaglebone-black" rel="nofollow">this tutorial</a>. However when I go to <code>/sys/devices</code> there is no <code>cape-bone-iio</code>. I have spoken with several other programmers and one of them suggested that the cape-bone does not work with the newer versions of Linux. However downgrading could have negative impact on the rest of the project. Is there any other solution?</p>
Ubuntu ARM lacking /sys/devices/cape-bone-iio
<p>Thank you for directly linking to the manual -- that makes answering questions like this much quicker.</p> <blockquote> <p>I connected the instrument to my computer using a D-sub/USB conversion dongle.</p> </blockquote> <p>I'm assuming that you've already figured out that the thing that <em>looks</em> like a standard 9-pin D-sub connector on the Harvard Model 33 is completely incompatible with standard RS-232, and you have (or can make) something that looks like the proper cable between a PC and the pump.</p> <p>The first thing I would do is "beep out" the cable: The cable should have a DB9 female on one end, and an RJ11 plug on the other end, with connections as shown in Appendix D and Appendix F (p. 24-25) of the manual.</p> <p>As far as I can tell, for your application, you don't want <em>anything</em> plugged into the "I/O" D-sub connector on the back of the pump. Instead, you want the RJ-11 plug end of your cable plugged into the "Port #1" RJ-11 socket on the back of the pump; and the DB9 female end of your cable plugged into the male DB-9 connector on the USB-to-RS232 dongle.</p> <p>Throwback1986 has many good tips.</p> <blockquote> <p>Requesting data (read(100)) returns only a couple of \x00. ... Could anyone explain to me why my method does not work?</p> </blockquote> <p>These pumps are designed to be connected in a chain, and they are designed to <em>ignore</em> commands sent to other pumps in a chain. Also, they ignore commands sent while the pump is in setting mode (see p. 16).</p> <p>I might run a script like this to see which (if any) address the pump is responding to:</p> <pre><code># communicate with a Harvard 33 syringe pump
# http://robotics.stackexchange.com/questions/5079/communicating-with-syringe-pump-using-pyserial
# pump website: http://www.harvardapparatus.com/webapp/wcs/stores/servlet/product_11051_10001_44004_-1_HAI_ProductDetail___
# pump manual: http://www.instechlabs.com/Support/manuals/HA33Manual.pdf
# WARNING: untested code
import serial  # PySerial module: http://pyserial.sf.net/

# open the connection
ser = serial.Serial(port="/dev/ttyUSB0", baudrate=9600, stopbits=serial.STOPBITS_TWO, timeout=0.5)
print ser

# see if the connection is truly open
print ser.isOpen()

# See if the pump is at address 0:
# try to get the software version of that pump
# using the VER command from p. 19 of the manual
command_string = "VER\r"
ser.write( command_string )
response = ser.read(100)
print "version: %r." % response

# guess every possible address 0 to 99
for guessed_address in range(0, 100):
    # request prompt
    prompt_request = "%d\r" % guessed_address
    ser.write( prompt_request )
    response = ser.read(100)
    print "prompt: %r." % response
    # try to get the software version of that pump
    # using the VER command from p. 19 of the manual
    command_string = "%dVER\r" % guessed_address
    ser.write( command_string )
    response = ser.read(100)
    print "version: %r." % response

print "All done."
</code></pre> <p>Does this script show any "interesting" responses?</p> <p>If I had this pump in front of me, I would follow the "Configuring the Pump Chain" process on p. 14 to figure out the current pump address and baud rate.</p> <p>Does the pump display "b:9600" on the pump's front LED display when I do that? Does the pump display "Adr:00" on the pump's front LED display when I do that? If not, I would go through the "Configuring the Pump Chain" on p.
14 to get that set.</p> <p>I might try getting pyserial to communicate through that USB dongle to and from some other device, perhaps a simple loop-back wire, just to make sure that pyserial works with that USB dongle.</p> <p>Good luck!</p>
5079
2014-12-02T15:23:05.973
|control|usb|rs232|
<p>Let's first start off by explaining that I do not have a decent background in electronics and interfacing with them, so maybe this is a stupid question. I am currently trying to connect an old Harvard 33 syringe pump (<a href="http://www.harvardapparatus.com/webapp/wcs/stores/servlet/product_11051_10001_44004_-1_HAI_ProductDetail___" rel="nofollow noreferrer">website</a>, <a href="http://www.instechlabs.com/Support/manuals/HA33Manual.pdf" rel="nofollow noreferrer">PDF manual</a>) to a computer, with the goal of controlling things like pump rates and direction. For this purpose, I connected the instrument to my computer using a D-sub/USB conversion dongle. I then connected to the dongle with PySerial without issues. However, whenever I try to send commands or request the instrument's output, for example <code>write("RUN\r")</code>, the instrument does not do anything at all. Requesting data (<code>read(100)</code>) returns only a couple of <code>\x00</code>. I suspect I am communicating with the dongle itself rather than the pump. When the pump is turned off or unplugged, I get exactly the same results!</p> <p>Could anyone explain to me why my method does not work?</p> <p>My Python code for reference:</p> <pre><code>import serial  # PySerial module

# open the connection
ser = serial.Serial("/dev/ttyUSB0", baudrate=9600, bytesize=8, stopbits=2, timeout=1)
print ser
# returns [Serial&lt;id=0x1cc7610, open=True&gt;(port='/dev/ttyUSB0', baudrate=9600,
#          bytesize=8, parity='N', stopbits=2, timeout=1, xonxoff=False,
#          rtscts=False, dsrdtr=False)]

# see if the connection is truly open
print ser.isOpen()  # prints True

# run the pump motor
ser.write("RUN\r")
</code></pre> <p><strong>Additional observations:</strong> when the instrument is plugged in but the above code is not running, the pump does all sorts of things at random (move one way, stop, move the other way, etc.). This behaviour is much less pronounced but still present when the code runs (and 'locks' the channel or something?). This seems to suggest that the reference voltages (logical high and low) are not properly set at 2-5V and 0-0.5V respectively.</p>
Communicating with syringe pump using PySerial
<p>I believe EN 61508 is the most important one from the software point of view. But it is not only valid for robots; it is one of the most important standards for industrial development in general. It defines the safety of a system: that it has to be free of "unjustifiable risks".</p> <p>It contains all the requirements for:</p> <ul> <li>RTS</li> <li>redundancy (including redundant software)</li> <li>SIL</li> <li>the software development process and documentation</li> </ul> <p>In more detail: <br> 61508-1 is pretty general <br> 61508-2 is a lot about electronics, including programmable electronic parts <br> 61508-3 is the main part about software <br> 61508-4 has nothing to do with software, just some acronyms and the rest of the overhead you need to understand the rest <br> 61508-5 is how to calculate SIL <br> 61508-6 gives examples for parts 2 and 3 <br> 61508-7 is another general part about application</p> <p>From this standard a lot of other standards are derived: <br> 61511 for the process industry <br> 60335 for home appliances <br> 60601 for medical applications <br> 62061 for industrial tools <br></p> <p>As sources I have some university scripts; unfortunately they are in German:<br> <a href="http://ext02.fh-kaernten.at/rts/intern/downloads/RTS/RTS1/Sicherheitsnormen.pdf" rel="nofollow">about 61508</a> <br> <a href="http://ext02.fh-kaernten.at/rts/intern/downloads/RTS/RTS1/SIL.pdf" rel="nofollow">general info about SIL</a> <br></p>
5099
2014-12-04T12:41:32.373
|software|
<p>I am looking for possible ISO standards for robot safety, specifically for software.</p> <p>I have come across <a href="http://www.eu-robotics.net/cms/upload/euRobotics_Forum/ERF2014_presentations/day_2/Industrial_HRC_-_ERF2014.pdf" rel="nofollow">this presentation</a> that mentions many ISO standards, and it's not very clear which exactly applies to software. The most probable ones are:</p> <ul> <li>ISO 10218-1</li> <li>ISO 13849-1 / IEC 62061</li> <li>IEC 61508</li> <li>ISO/TS 15066</li> </ul> <p>The safety related to software seems to be categorized as Level 4 and Level 6 in the presentation above.</p> <p>I would appreciate it if anyone with knowledge in this area could point me to the right standard. They are quite expensive, so I can't simply go through them all to see which one applies.</p> <p>As a side note, some standards like C have their standard "draft" freely available. Could there be free copies of drafts for those standards too?</p>
Robot safety standards for software
<p>Because these pictures were saved in SRAM rather than in flash memory, they took up so much space that the Arduino ran out of memory, causing various types of noise to appear (alongside a few other problems such as random crashes/reboots). By moving them out of SRAM and into flash memory (typically, on AVR Arduinos, by declaring the arrays with the <code>PROGMEM</code> keyword and reading them back with <code>pgm_read_byte</code>) the problem was solved.</p>
5103
2014-12-05T09:37:54.860
|arduino|
<p>I'm connecting an arduino to a <a href="http://www.dfrobot.com/wiki/index.php?title=SPI_LCD_Module_%28SKU:DFR0091%29" rel="nofollow">lcd screen</a> using the following <a href="http://www.dfrobot.com/image/data/DFR0091/LCD12864RSPI%20v1.0.zip" rel="nofollow">library</a>. For the display code I have written a simple piece of code that should display 2 picture os eyes (1 angry one friendly) and switching at regular intervals, however the display keeps showing me weird pixels around the borders that shouldn't be there, by making it show the same eyes twice this can be fixed however as long as I have both eyes being used it runs into trouble. Here is my code: </p> <pre><code> void setup() { LCDA.initDriverPin(2,7,10); LCDA.Initialise(); // INIT SCREEN delay(100); } void loop(){ LCDA.CLEAR(); //empty screen delay(100);//wait for command to finish LCDA.DrawFullScreen(eyes); delay(5000); //wait 5 seconds with eyes on the screen LCDA.CLEAR();//clear screen delay(100);//wait for command to finish LCDA.DrawFullScreen(angry); //show me the angry face delay(5000);//wait 5 seconds with angry face on screen } /*------------------------------------------------------------------------------ ; DFrobot bitmap - size 128x64 pixels, black/white image ------------------------------------------------------------------------------*/ unsigned char eyes[]={ 0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x02 ,0xE0 ,0x00 ,0x00 ,0x00 ,0xF8 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x1E ,0xF8 ,0x00 ,0x00 ,0x00 ,0xFE ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x3E ,0xF8 ,0x00 ,0x00 ,0x02 ,0xFE ,0x80 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0xFE ,0xFC ,0x00 ,0x00 ,0x06 ,0xFE ,0xC0 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0xFE ,0xFE ,0x00 ,0x00 ,0x0E ,0xFE ,0xF0 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x02 ,0xFE ,0xFE ,0x00 ,0x00 ,0x0E ,0xFE ,0xF8 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x0E ,0xFE ,0x7E ,0x00 ,0x00 ,0x1E ,0xC6 ,0xFC ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x1E ,0xF8 ,0x3E ,0x00 ,0x00 ,0x1E ,0x80 ,0xFE ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x3E ,0xF0 ,0x1E ,0x80 ,0x00 ,0x3E ,0x80 ,0xFE ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x3E ,0xC0 ,0x1E ,0x80 ,0x00 ,0x3E ,0x00 ,0x7E ,0xC0 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x7E ,0x80 ,0x1E ,0x80 ,0x00 ,0x3E ,0x00 ,0x3E ,0xC0 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0xFE ,0x00 ,0x0E ,0xC0 ,0x00 ,0x3E ,0x00 ,0x0E ,0xE0 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0xFE ,0x00 ,0x0E ,0xC0 ,0x00 ,0x7E ,0x00 ,0x0E ,0xE0 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0xFC ,0x00 ,0x06 ,0xC0 ,0x00 ,0x7E ,0x00 ,0x06 ,0xF0 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x02 ,0xFC ,0x00 ,0x06 ,0xE0 ,0x00 ,0xFE ,0x00 ,0x02 ,0xF8 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x02 ,0xF8 ,0x00 ,0x06 ,0xE0 ,0x00 ,0xFE ,0x00 ,0x02 ,0xF8 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x06 ,0xF0 ,0x00 ,0x06 ,0xE0 ,0x00 ,0xFC ,0x00 ,0x00 ,0xFC ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x0E ,0xE0 ,0x00 ,0x02 ,0xE0 ,0x00 ,0xFC ,0x00 ,0x00 ,0xFE ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x0E ,0xE0 ,0x00 ,0x02 ,0xE0 ,0x00 ,0xFC ,0x00 ,0x00 ,0xFE ,0x00 
,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x1E ,0xC0 ,0x00 ,0x02 ,0xE0 ,0x00 ,0xF8 ,0x00 ,0x00 ,0x3E ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x1E ,0xC0 ,0x00 ,0x02 ,0xE0 ,0x00 ,0xF8 ,0x00 ,0x00 ,0x3E ,0x80 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x1E ,0x80 ,0xFE ,0xC2 ,0xE0 ,0x00 ,0xF8 ,0x7E ,0xF0 ,0x1E ,0x80 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x3E ,0x00 ,0xFE ,0xF2 ,0xE0 ,0x00 ,0xF8 ,0xFE ,0xF8 ,0x1E ,0x80 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x3E ,0x02 ,0xFE ,0xFA ,0xE0 ,0x00 ,0xF8 ,0xFE ,0xF8 ,0x1E ,0xC0 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x3E ,0x06 ,0xF2 ,0xFE ,0xF0 ,0x00 ,0xFA ,0xFC ,0xFC ,0x1E ,0xC0 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x7E ,0x0E ,0xC0 ,0xFE ,0xF0 ,0x00 ,0xFE ,0xC0 ,0x7E ,0x0E ,0xC0 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x7E ,0x0E ,0x80 ,0x7E ,0xF0 ,0x00 ,0xFE ,0xC0 ,0x3E ,0x0E ,0xC0 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x7E ,0x1E ,0xFE ,0x3E ,0xF0 ,0x00 ,0xFE ,0x8E ,0xFE ,0x0E ,0xC0 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0xFC ,0x1E ,0xEE ,0x9E ,0xF0 ,0x00 ,0xFE ,0xBE ,0xFE ,0x0E ,0xE0 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0xFC ,0x1E ,0x82 ,0xDE ,0xF0 ,0x00 ,0xFE ,0xBC ,0x6E ,0x86 ,0xE0 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0xFC ,0x3E ,0x82 ,0xDE ,0xF0 ,0x00 ,0xFE ,0xB8 ,0x2E ,0xC6 ,0xE0 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0xFC ,0x3E ,0x80 ,0xDE ,0xF0 ,0x00 ,0xFE ,0xB0 ,0x2E ,0xC6 ,0xF0 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0xF8 ,0x3C ,0x3E ,0x5E ,0xE0 ,0x00 ,0xFE ,0x9E ,0x8E ,0xC2 ,0xF0 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0xF8 ,0x3C ,0x7E ,0x9E ,0xE0 ,0x00 ,0xFE ,0xBE ,0xD6 ,0xC2 ,0xF0 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0xF0 ,0x7C ,0x7E ,0xDE ,0xE0 ,0x00 ,0xFE ,0xBE ,0xC6 ,0xE2 ,0xF8 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0xF0 ,0x7C ,0xFE ,0xDE ,0xE0 ,0x00 ,0xFE ,0xFE ,0xF6 ,0xE2 ,0xFC ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0xF0 ,0x7A ,0xFE ,0xDE ,0xE0 ,0x00 ,0xFE ,0xFE ,0xF6 ,0xE2 ,0xFC ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0xF0 ,0x7A ,0xFE ,0xDE ,0xE0 ,0x00 ,0xFE ,0xFE ,0xF2 ,0xE0 ,0xFC ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0xF0 ,0x7A ,0xFE ,0xDE ,0xE0 ,0x00 ,0xFE ,0xFE ,0xF2 ,0xE0 ,0xFC ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0xF0 ,0x7A ,0xFE ,0xDE ,0xE0 ,0x00 ,0xFE ,0xFE ,0xF2 ,0xE0 ,0xFC ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0xF0 ,0x7A ,0xFE ,0xDE ,0xE0 ,0x00 ,0xFE ,0xFE ,0xF2 ,0xE0 ,0xFC ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0xF0 ,0x7A ,0xFE ,0xDE ,0xE0 ,0x00 ,0xFE ,0xFE ,0xF2 ,0xE0 ,0xFC ,0x00 ,0x00 ,0x00 ,0x00 ,0x02 ,0xF0 ,0x7A ,0xFE ,0xFE ,0xE0 ,0x00 ,0xFE ,0xFE ,0xFA ,0xE0 ,0xFC ,0x00 ,0x00 ,0x00 ,0x00 ,0x02 ,0xF0 ,0x7A ,0xFE ,0xFE ,0xE0 ,0x00 ,0xFE ,0xFE ,0xFA ,0xE0 ,0xFC ,0x00 ,0x00 ,0x00 ,0x00 ,0x02 ,0xF0 ,0x7A ,0xFE ,0xFE ,0xE0 ,0x00 ,0xFE ,0xFE ,0xFA ,0xE0 ,0xFC ,0x00 ,0x00 ,0x00 ,0x00 ,0x02 ,0xE0 ,0x7A ,0xFE ,0xFE ,0xE0 ,0x00 ,0xFE ,0xFE ,0xFA ,0xE0 ,0xFC ,0x00 ,0x00 ,0x00 ,0x00 ,0x02 ,0xE0 ,0x7E ,0xFE ,0xFE ,0xE0 ,0x00 ,0xFE ,0xFE ,0xFA ,0xE0 ,0xFC ,0x00 ,0x00 ,0x00 ,0x00 ,0x02 ,0xFC ,0x7E ,0xFE ,0xFE ,0xE0 ,0x00 ,0xFE ,0xFE ,0xFE ,0xC0 ,0xFC ,0x00 ,0x00 ,0x00 ,0x00 ,0x02 ,0xFE ,0x3E ,0xFE ,0xFE ,0xE0 ,0x00 ,0xFE ,0xFE ,0xFE ,0xCE ,0xFC ,0x00 ,0x00 ,0x00 ,0x00 ,0x02 ,0xFE ,0xFE ,0xFE ,0xFE ,0xC0 ,0x00 ,0x7E ,0xFE ,0xFE ,0xFE ,0xFC ,0x00 ,0x00 ,0x00 ,0x00 ,0x02 ,0xFE ,0xFE ,0xFE ,0xFE ,0xC0 ,0x00 ,0x7E ,0xFE ,0xFE ,0xFE ,0xFC ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0xFE ,0xFE ,0xFE ,0xFE ,0xC0 ,0x00 ,0x7E ,0xFE ,0xFE ,0xFE ,0xF8 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x3E ,0xFE ,0xFE ,0xFE ,0xC0 ,0x00 ,0x3E ,0xFE ,0xFE ,0xFE ,0x80 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x06 ,0xFE ,0xFE ,0xFE ,0x80 ,0x00 ,0x3E ,0xFE ,0xFE ,0xFC ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x3E ,0xFE ,0xE0 ,0x00 ,0x00 ,0x00 ,0xFE ,0xFE ,0x80 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 
,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 }; unsigned char angry[]={ 0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x30 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x7C ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x02 ,0x00 ,0x00 ,0x00 ,0x7E ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x0E ,0x80 ,0x00 ,0x00 ,0x7E ,0xC0 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x3E ,0x80 ,0x00 ,0x00 ,0x1E ,0xE0 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0xFE ,0x80 ,0x00 ,0x00 ,0x06 ,0xF8 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0xFE ,0x00 ,0x00 ,0x00 ,0x00 ,0xFE ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x06 ,0xF8 ,0x00 ,0x00 ,0x00 ,0x00 ,0xFE ,0x80 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x1E ,0xE0 ,0x00 ,0x00 ,0x00 ,0x00 ,0x3E ,0xC0 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x7E ,0xE0 ,0x00 ,0x00 ,0x00 ,0x00 ,0xFE ,0xF0 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0xFE ,0xF8 ,0x00 ,0x00 ,0x00 ,0x00 ,0xFE ,0xFC ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x02 ,0xFE ,0xFC ,0x00 ,0x00 ,0x00 ,0x02 ,0xFC ,0xFE ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x0E ,0xF6 ,0xFE ,0x00 ,0x00 ,0x00 ,0x06 ,0xF0 ,0x7E ,0x80 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x3E ,0xE0 ,0xFE ,0x00 ,0x00 ,0x00 ,0x0E ,0xE0 ,0x1E ,0xE0 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x7E ,0x80 ,0x7E ,0x80 ,0x00 ,0x00 ,0x1E ,0xC0 ,0x1E ,0xF8 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0xFE ,0x00 ,0x3E ,0xC0 ,0x00 ,0x00 ,0x3E ,0xE0 ,0x1E ,0xFE ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x06 ,0xFE ,0x00 ,0x7E ,0xE0 ,0x00 ,0x00 ,0x3E ,0xF0 ,0x0C ,0xFE ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x1E ,0xF2 ,0x00 ,0xE6 ,0xE0 ,0x00 ,0x00 ,0x7E ,0x30 ,0x04 ,0x3E ,0xC0 ,0x00 ,0x00 ,0x00 ,0x00 ,0x3E ,0xC2 ,0x8E ,0xE2 ,0xF0 ,0x00 ,0x00 ,0x7C ,0x30 ,0x0C ,0x0E ,0xF0 ,0x00 ,0x00 ,0x00 ,0x00 ,0xFE ,0x00 ,0x9E ,0x82 ,0xF0 ,0x00 ,0x00 ,0xFC ,0x3E ,0x1C ,0x06 ,0xFC ,0x00 ,0x00 ,0x00 ,0x02 ,0xFE ,0x02 ,0x98 ,0x00 ,0xF8 ,0x00 ,0x00 ,0xF8 ,0x3E ,0x18 ,0x00 ,0xFE ,0x00 ,0x00 ,0x00 ,0x06 ,0xF8 ,0x02 ,0x18 ,0x00 ,0xF8 ,0x00 ,0x00 ,0xF8 ,0x06 ,0x18 ,0x02 ,0xFE ,0x80 ,0x00 ,0x00 ,0x1E ,0xE0 ,0x06 ,0x18 ,0x00 ,0xF8 ,0x00 ,0x00 ,0xF0 ,0x02 ,0x18 ,0xFE ,0x1E ,0xE0 ,0x00 ,0x00 ,0x7E ,0xC0 ,0x06 ,0x78 ,0x00 ,0xFC ,0x00 ,0x00 ,0xF0 ,0x02 ,0x18 ,0xFE ,0x0E ,0xF8 ,0x00 ,0x00 ,0xFE ,0x66 ,0x8C ,0xF0 ,0x00 ,0x7C ,0x00 ,0x00 ,0xF0 ,0x02 ,0xFE ,0x80 ,0x02 ,0xFC ,0x00 ,0x02 ,0xFC 
,0x7E ,0xC6 ,0xC0 ,0x00 ,0x7C ,0x00 ,0x00 ,0xF0 ,0x00 ,0xFE ,0x00 ,0x00 ,0xFE ,0x00 ,0x0E ,0xFC ,0x38 ,0xE6 ,0xC0 ,0x00 ,0x7C ,0x00 ,0x00 ,0xF0 ,0xE0 ,0xFE ,0x80 ,0x00 ,0xFE ,0xC0 ,0x3E ,0xFC ,0x00 ,0x76 ,0xE0 ,0x00 ,0x7C ,0x00 ,0x00 ,0xF0 ,0xFA ,0xFE ,0x80 ,0x00 ,0xFE ,0xF0 ,0xFE ,0xFC ,0x00 ,0x3E ,0xF6 ,0x80 ,0x7C ,0x00 ,0x00 ,0xF0 ,0xBA ,0xFE ,0xF0 ,0x00 ,0xF6 ,0xF0 ,0xFE ,0x78 ,0x00 ,0x3E ,0xFE ,0xE0 ,0x3C ,0x00 ,0x00 ,0xF0 ,0x8E ,0xFE ,0xF8 ,0x00 ,0xF0 ,0xF0 ,0xF8 ,0x7C ,0x00 ,0x1E ,0xFC ,0xF8 ,0xFC ,0x00 ,0x00 ,0xFC ,0x86 ,0xFE ,0x98 ,0x00 ,0xF0 ,0x60 ,0x60 ,0x7C ,0x70 ,0x1E ,0xF8 ,0x3E ,0xFC ,0x00 ,0x00 ,0xFE ,0x82 ,0xFE ,0x9C ,0x00 ,0xF0 ,0x00 ,0x00 ,0x7C ,0x7C ,0xFE ,0xF8 ,0x06 ,0x7C ,0x00 ,0x00 ,0xFE ,0x80 ,0xFE ,0x8E ,0xE0 ,0xF0 ,0x00 ,0x00 ,0x7C ,0x6E ,0xFE ,0xF0 ,0x00 ,0x7C ,0x00 ,0x00 ,0xF0 ,0x00 ,0xFE ,0x06 ,0xF0 ,0xF0 ,0x00 ,0x00 ,0x7C ,0x76 ,0x9E ,0xF8 ,0x00 ,0x7C ,0x00 ,0x00 ,0xF8 ,0x00 ,0xFE ,0x80 ,0x3A ,0xF0 ,0x00 ,0x00 ,0x7E ,0x30 ,0x0E ,0xFC ,0x00 ,0xFC ,0x00 ,0x00 ,0xF8 ,0x02 ,0x82 ,0x00 ,0x1E ,0xE0 ,0x00 ,0x00 ,0x3E ,0xF0 ,0x06 ,0xEC ,0x00 ,0xF8 ,0x00 ,0x00 ,0xFC ,0x06 ,0x02 ,0x00 ,0x0E ,0xE0 ,0x00 ,0x00 ,0x3E ,0xE0 ,0x06 ,0x0C ,0x00 ,0xF8 ,0x00 ,0x00 ,0xFC ,0x06 ,0x02 ,0x80 ,0x06 ,0xE0 ,0x00 ,0x00 ,0x3E ,0xC0 ,0x02 ,0x0C ,0x00 ,0xF8 ,0x00 ,0x00 ,0x7E ,0x06 ,0x00 ,0x80 ,0x0E ,0xC0 ,0x00 ,0x00 ,0x1E ,0x80 ,0x06 ,0x0C ,0x02 ,0xF0 ,0x00 ,0x00 ,0x7E ,0x06 ,0x00 ,0x80 ,0x1E ,0xC0 ,0x00 ,0x00 ,0x1E ,0x80 ,0x0E ,0x0C ,0x02 ,0xF0 ,0x00 ,0x00 ,0x3E ,0x3E ,0x00 ,0xC0 ,0x1E ,0x80 ,0x00 ,0x00 ,0x0E ,0xC0 ,0x1C ,0x0C ,0x06 ,0xE0 ,0x00 ,0x00 ,0x3E ,0xBC ,0x00 ,0xC0 ,0x3E ,0x80 ,0x00 ,0x00 ,0x0E ,0xE0 ,0x38 ,0x0C ,0x0E ,0xE0 ,0x00 ,0x00 ,0x1E ,0xE0 ,0x00 ,0xC0 ,0xFE ,0x00 ,0x00 ,0x00 ,0x06 ,0xF0 ,0x70 ,0x0E ,0x1E ,0xC0 ,0x00 ,0x00 ,0x0E ,0xF0 ,0x00 ,0xC0 ,0xFE ,0x00 ,0x00 ,0x00 ,0x02 ,0xFC ,0x60 ,0x06 ,0xFE ,0x80 ,0x00 ,0x00 ,0x06 ,0xFC ,0x00 ,0xF6 ,0xFC ,0x00 ,0x00 ,0x00 ,0x00 ,0xFE ,0x60 ,0x00 ,0xFE ,0x00 ,0x00 ,0x00 ,0x02 ,0xFE ,0x00 ,0xFE ,0xF8 ,0x00 ,0x00 ,0x00 ,0x00 ,0xFE ,0xE0 ,0x06 ,0xFE ,0x00 ,0x00 ,0x00 ,0x00 ,0xFE ,0xFA ,0xFE ,0xF0 ,0x00 ,0x00 ,0x00 ,0x00 ,0x7E ,0xFE ,0xFE ,0xFC ,0x00 ,0x00 ,0x00 ,0x00 ,0xFE ,0xFE ,0xFE ,0xE0 ,0x00 ,0x00 ,0x00 ,0x00 ,0x3E ,0xFE ,0xFE ,0xF8 ,0x00 ,0x00 ,0x00 ,0x00 ,0x3E ,0xFE ,0xFE ,0x80 ,0x00 ,0x00 ,0x00 ,0x00 ,0x0E ,0xFE ,0xFE ,0xE0 ,0x00 ,0x00 ,0x00 ,0x00 ,0x0E ,0xFE ,0xFE ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x02 ,0xFE ,0xFE ,0x80 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0xFE ,0xF0 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x7E ,0xFC ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 ,0x00 }; </code></pre> <p>each time I re upload the image it changes the way the noise patterns look suggesting this is some kind of overflow problem. 
However, changing the last byte of the bitmaps creates lines at the bottom of the screen, right where the noise is on one of the images. Note that with different images this noise can very much "cut" into the images, even creating inactive pixels (0's) rather than just set ones, suggesting that the images themselves do at least fit the display. </p>
Arduino LCD screen has weird noise with multiple pictures
<p>Try a <a href="http://www.pololu.com/product/975" rel="nofollow noreferrer">Pololu 3Pi</a>. They were going for 50 USD during Black Friday; normally $99. You can use the Arduino IDE to program it using Wiring, a C/C++-like language.</p> <p>You would probably have to add IR sensors yourself, either via the expansion board, or by re-wiring the existing line detectors.</p> <p>At the moment, hobby robots are still only reachable by those with soldering skills. But that will be changing in the next few years.</p> <p><img src="https://i.stack.imgur.com/K3Emz.jpg" alt="3Pi"></p>
5108
2014-12-06T17:48:15.140
|arduino|mobile-robot|wheeled-robot|micromouse|
<p>I'm looking to buy a micromouse (i.e. a small single-board unit with wheels and IR sensors that can move around freely). I've done a lot of searching but have only found resources relating to building one from components bought separately. However, I'm more of a programmer than an electrician so I fear I would struggle with this.</p> <p>Anybody know where to buy one of these in the UK? (PICAXE does some suitable stuff but they're BASIC only unfortunately). My budget is about £60.</p>
Looking for a cheap(ish) micromouse that I can program with C/C++
<p>This sounds like a question for the Electronics Stack Exchange.</p> <ul> <li><a href="https://electronics.stackexchange.com/questions/75325/long-distance-wireless-communication">"Long distance wireless communication"</a></li> <li><a href="https://electronics.stackexchange.com/questions/113188/long-distance-communication">"Long distance communication"</a></li> <li><a href="https://electronics.stackexchange.com/questions/1149/wireless-communication-over-long-distances">"Wireless communication over long distances"</a></li> <li><a href="https://electronics.stackexchange.com/questions/102408/zigbee-communication-outdoors-at-long-distance-around-1-1-5-km">"Zigbee communication outdoors at long distance ( around 1 - 1.5 km)"</a></li> </ul>
5109
2014-12-06T18:13:58.557
|wireless|
<p>I have a requirement to transmit some sensor data through wireless to a distance of 2 kilometers. I am a newbie to these technologies and concepts. Can anyone help me by providing some pointers to start with this. </p>
Regarding Long distance wireless communication
<p>We always need a reference to which we calibrate our sensor. For example, in the case of a 3D accelerometer we use gravity as a reference, which is assumed to point vertically downwards at any given place.</p> <p>For magnetometer calibration we use the Earth's magnetic field as a reference. But we don't know the direction of the resultant magnetic field vector at a given place, unlike gravity in the case of the accelerometer. Hence we need to implement a sphere-fitting calibration method in order to calibrate our magnetometer.</p> <p>This method is explained in this <a href="https://chionophilous.wordpress.com/2012/09/01/implementing-the-gauss-newton-algorithm-for-sphere-fitting-1-of-3/" rel="nofollow">blogpost</a> as well as this <a href="http://www.sailboatinstruments.blogspot.in/2011/08/improved-magnetometer-calibration.html" rel="nofollow">blogpost</a>. You can also download a <a href="https://sites.google.com/site/sailboatinstruments1/home" rel="nofollow">tool</a> which can calibrate if you feed it the data. You can also have a look at an implementation in C <a href="https://sites.google.com/site/sailboatinstruments1/c-language-implementation" rel="nofollow">here</a>.</p>
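<p>To give a feel for the simplest variant of the idea, here is a rough numpy sketch of a linear least-squares sphere fit that recovers the hard-iron offset (the sphere centre) from raw readings. Real tools like the ones linked above solve a fuller ellipsoid fit; <code>samples</code> is an assumed (N, 3) array of raw magnetometer readings:</p>

<pre><code>import numpy as np

def fit_sphere(samples):
    x, y, z = samples[:, 0], samples[:, 1], samples[:, 2]
    A = np.column_stack((2 * x, 2 * y, 2 * z, np.ones_like(x)))
    b = x**2 + y**2 + z**2
    (cx, cy, cz, k), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(k + cx**2 + cy**2 + cz**2)
    return np.array([cx, cy, cz]), r   # centre (hard-iron offset), radius

# calibrated reading = raw reading - centre
</code></pre>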
5110
2014-12-06T18:50:48.653
|calibration|compass|magnetometer|
<p>I want to calibrate my compass, which is installed on a board which includes a GPS module. Because the GPS antenna is upside-down, the compass is 180° inverted. The easiest way to correct the problem would be to invert the rotation matrix by 180°. </p> <p>However, I got interested in how a general approach to calibrating a compass would look. I found some approaches like <a href="http://www.varesano.net/blog/fabio/freeimu-magnetometer-and-accelerometer-calibration-gui-alpha-version-out" rel="nofollow">this</a>. They seem to collect magnetometer readings and project them onto a sphere. But what is actually the point of this? </p> <p>Does someone know what a general calibration algorithm for a 3D magnetometer looks like?</p>
3D Magnetometer calibration algorithm
<p>The forces used to control the quad act around its center of mass, causing it to pitch, roll or yaw. Changes in the roll angle are induced by contrary variation of the speeds of propellers A and C. The same effect with motors B and D produces a change in the pitch angle, and with it lateral translation. </p> <ul> <li><p>First of all, make sure that your IMU gives you an <strong>accurate estimated angle</strong>. Have you validated these angles while the motors were spinning?</p> <p>Then, define the control error as the difference between the desired angular position, <code>y_des = 0°</code> for example, and the actual measurement at time t, <code>y(t)</code>. The <code>y(t)</code> value is the one computed by your IMU, so that <code>y(t) ∈ [-1;1]</code>. The error equation for your roll angle will be:</p></li> </ul> <blockquote> <p>E_roll(t) = roll_des - roll(t)</p> </blockquote> <ul> <li>The input to your roll PID controller should be the error <code>E_roll(t)</code>. Use the output, <code>rollpid</code>, in your equations to adjust the motor angular velocities as shown below. These speed alterations contribute to the generation of torques about each of the three axes.</li> </ul> <blockquote> <p>a = throttle + rollpid - yawpid; </p> <p>b = throttle + pitchpid + yawpid;</p> <p>c = throttle - rollpid - yawpid; </p> <p>d = throttle - pitchpid + yawpid;</p> </blockquote> <p>How do you feed your PID, and what is <code>steering.roll - gyro.roll</code>? </p> <p><strong>From the information provided by your model description, I don't understand how your axes are oriented in space. Use the equations above if you are using the following reference system.</strong></p> <p><img src="https://i.stack.imgur.com/fqRpm.png" alt="Body reference system"></p> <p>For more details about the PID action see this <a href="https://robotics.stackexchange.com/questions/4721/how-to-convert-pid-outputs-to-appropriate-motor-speeds-for-a-quad-copter/5005#5005">link</a>, and <a href="https://robotics.stackexchange.com/questions/2964/quadcopter-pid-output?rq=1">this</a>.</p> <p>Hope this helps.</p>
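<p>As a concrete companion to the equations above, here is a bare-bones Python sketch of one control cycle, from angle errors through the PID loops to the motor mixer. The gains and the 0-100 clamping range are assumptions to be tuned on your rig:</p>

<pre><code>from collections import namedtuple

Attitude = namedtuple('Attitude', 'roll pitch yaw')

class Pid:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.last_error = 0.0

    def step(self, error):
        self.integral += error
        d_error = error - self.last_error
        self.last_error = error
        return self.kp * error + self.ki * self.integral + self.kd * d_error

roll_pid = Pid(1.0, 0.0, 0.0)
pitch_pid = Pid(1.0, 0.0, 0.0)
yaw_pid = Pid(1.0, 0.0, 0.0)

def control_cycle(throttle, desired, measured):
    rollpid = roll_pid.step(desired.roll - measured.roll)
    pitchpid = pitch_pid.step(desired.pitch - measured.pitch)
    yawpid = yaw_pid.step(desired.yaw - measured.yaw)
    a = throttle + rollpid - yawpid
    b = throttle + pitchpid + yawpid
    c = throttle - rollpid - yawpid
    d = throttle - pitchpid + yawpid
    return [max(0, min(100, m)) for m in (a, b, c, d)]  # clamp to motor range

# e.g. hover command while the IMU reports a slight right roll:
print(control_cycle(50, Attitude(0, 0, 0), Attitude(0.1, 0, 0)))
</code></pre>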
5116
2014-12-08T03:31:19.333
|quadcopter|pid|design|
<p>I've built a quadcopter and a rig to safely test it on. I'm working on the PID for controlling the roll, pitch and yaw. I understand how a PID works on a simpler plant, like say a robot with wheels, and I'm just really in the dark (I believe) with controlling and stabilizing a quad.</p> <p>My question: how do I make these sensor readings effectively alter the motors' throttle? </p> <p>Firstly, my approach is based on this model, </p> <pre><code>CW motors A,C
CCW motors D,B

            Front
+1 pitch
   C            D    -1 roll
         \-/
+1 roll right    /-\
   B            A
-1 pitch
</code></pre> <p>My IMU calculates the roll and pitch as a value between +-1.0, where perfect balance reads as 0.0. Now a value of +-1.0 means approximately 90 degrees from the original axis.<br> A normal input to the pitch to go forward would be something like 0.33, meaning tilt 30 degrees forward.</p> <p>Now my motors take some value between 0 and 100. Originally I thought this would mean I would have to modify my motor values like so. </p> <pre><code>c = throttle - roll + pitch + yaw
d = throttle + roll + pitch - yaw
b = throttle - roll - pitch - yaw
a = throttle + roll - pitch + yaw
</code></pre> <p>Finally, I'm taking those floating point numbers from the IMU and computing them with this method, which appears to be the normal way as far as I've found.</p> <pre><code>RollPId.Compute( steering.roll - gyro.roll );

// pid_t is either #define pid_t float or double, I know its a reserved type
// but, a pre-processor definition will change that before it would matter.
pid_t Compute(pid_t input)
{
    uint64_t now = millis();
    if( ( now - last_time ) &gt;= sample_time )
    {
        pid_t error = set_point - input;
        error_sum += error;
        pid_t d_error = error - error_last;
        *output = kp * error + ki * error_sum + kd * d_error;
        error_last = error;
        last_time = now;
    }
}
</code></pre> <p>I don't know where to go from here. Also, I have the angular rate calculated from my IMU; I just haven't encountered a solution that called for it. </p> <p>EDIT. Below is a graph of roughly 300 readings (20 ms apart), so roughly six seconds, where I hold it in one hand and roll it roughly 45 degrees right, with kp=1 ki=0 kd=0<br> <img src="https://i.stack.imgur.com/8TKkr.png" alt="IMU and PID output"> </p>
How to find a solution for quadcopter PID control
<p>Servos are, by definition, in the position you told them to go to (or in a fault condition). This is the difference between a servo and a stepper: you tell a stepper to move one step at a time and do not know if it did or not. A servo, on the other hand, will continue trying to get to the position you asked for. For hobby servos, you are setting a pulse width / duty cycle in some part of the code. You can assume that IS the servo position for all intents and purposes. The standard servo's low value is 1000 µs, the high value 2000 µs, and the center position is accordingly 1500 µs.</p> <p>Taken from <a href="https://www.ini.uzh.ch/~tobi/wiki/doku.php?id=dig:servos" rel="nofollow noreferrer">Servo motors and controlling them with PWM</a></p> <p>They have one really great feature: the interface to control them is super simple, just the pulse width of a digital pulse sent to the motor at a rate from 50-200Hz. The pulse rate is not that important. The important thing is the pulse width. The range of the pulse width is from 1-2ms. 1.5ms is the middle servo position and 1 or 2 ms are the end positions.</p> <p>You cannot control the torque, rate, or anything else; as soon as the servo's electronics detects the pulse width (which can require several pulses) it tries to get the servo to the new position as quickly as possible. You also cannot sense the servo's position, current or anything else; control is open-loop as far as the servo is concerned; all control is handled inside the servo.</p> <p><img src="https://i.stack.imgur.com/nKcpq.gif" alt="Servo PWM control"></p> <hr> <p>Just as a warning, most modern servos DO NOT use potentiometers. Today, most use optical encoders for positional sensing, so reading an analog signal is going to be difficult (impossible). </p> <p>High end servos such as the <a href="http://hitecrcd.com/products/servos/discontinued-servos-servo-accessories/hsr-5990tg-hmi-ultra-premium-robot-servo/product" rel="nofollow noreferrer">HiTech</a> do provide feedback (positional) information.</p>
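<p>Since the commanded pulse width effectively is the position, here is a tiny sketch of the angle-to-pulse mapping, assuming the standard 1000-2000 µs range (your servo's actual travel may differ). For the pan-and-tilt question: whatever angle you last commanded is the angle to record when your stop condition is met.</p>

<pre><code>def angle_to_pulse_us(angle_deg, min_us=1000, max_us=2000, travel_deg=180):
    # linear map: 0 deg -&gt; 1000 us, 90 deg -&gt; 1500 us, 180 deg -&gt; 2000 us
    return min_us + (max_us - min_us) * angle_deg / travel_deg

print(angle_to_pulse_us(90))   # 1500.0, the centre position
</code></pre>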
5118
2014-12-08T10:46:27.147
|mobile-robot|sensors|rcservo|
<p>I am building a collision avoidance system for my robot. As a part of this system I am using a pan and tilt kit </p> <p>(<a href="http://www.robotshop.com/en/lynxmotion-pan-and-tilt-kit-aluminium2.html" rel="nofollow">http://www.robotshop.com/en/lynxmotion-pan-and-tilt-kit-aluminium2.html</a>)</p> <p>My aim is to pan the sensor attached to this kit, and thus plan the route the robot should take.</p> <p>In order to pan this sensor, I need to know the angle the kit has panned, and need to be able to call upon that angle at point in time. </p> <p>Basically the sensor keeps panning, and at a point in time when certain conditions are met, it stops panning. When those conditions are met, I need to be able to extract the angle the sensor is panned at.</p> <p>The servo being used on is: <a href="http://www.robotshop.com/en/hitec-hs422-servo-motor.html" rel="nofollow">http://www.robotshop.com/en/hitec-hs422-servo-motor.html</a></p> <p>If someone could help me find a way to extract this information that would be helpful. I did read somewhere that the servo could be hacked and changed into a closed loop system where the effective angle would be shown, but that option is not viable.</p> <p>Thanks</p>
Finding the position of a servo
<h2>Beaglebone USB</h2> <p>When you plug a USB cable between the USB B plug of the Arduino Uno and the USB A plug of the Beaglebone Black, the Beaglebone should be able to talk and listen to it the same way any other Linux distribution talks and listens to any other CDC class USB peripheral.</p> <p>On the Beaglebone, a CDC class USB peripheral can be read and written to through "/dev/ttyUSB0" or "/dev/ttyACM0" or something similar.</p> <pre><code># warning: untested code
import serial

serial_port = serial.Serial( port="/dev/ttyUSB0", baudrate=9600, timeout=1 )
while True:
    serial_port.write("Hey.")
    print hex(ord(serial_port.read()))
</code></pre> <ul> <li><a href="http://recursive-labs.com/blog/2012/06/20/ez430chronos-beaglebone/" rel="nofollow noreferrer">"EZ430 Chronos and the BeagleBone"</a></li> <li><a href="http://rimuboddy.blogspot.com/2013/03/usb-serial-gps-reader-with-pyserial.html" rel="nofollow noreferrer">"USB (serial) GPS Reader with pyserial, Redis, Ubuntu 12.10 on Beaglebone"</a></li> </ul> <p>The Arduino uses Serial.read() and Serial.write(). Any of the <a href="http://arduino.cc/en/reference/serial" rel="nofollow noreferrer">Arduino serial demo scripts</a> that read and write to the Arduino serial monitor when the USB cable is plugged into a "big" computer should work just fine when that same cable is plugged into a Beaglebone instead.</p> <h2>Beaglebone UARTs</h2> <p>p.s.: You may find it easier to wire one of <a href="http://beaglebone.cameon.net/home/serial-ports-uart" rel="nofollow noreferrer">the six BeagleBone UARTs</a> directly to the Arduino UART ("<a href="http://hipstercircuits.com/enable-serialuarttty-on-beaglebone-black/" rel="nofollow noreferrer">Enable serial/UART/tty on BeagleBone Black</a>"; "<a href="https://stackoverflow.com/questions/21148655/python-pyserial-with-auto-rts-through-half-duplex-rs-485-breakout-board-using-be">Python PySerial ... using Beaglebone Black</a>"), rather than trying to stick the more complicated USB in the middle somewhere.</p> <ul> <li><a href="http://bradsmc.blogspot.com/2013/05/example-serial-communication-between.html" rel="nofollow noreferrer">"Serial Communication between an Arduino &amp; a Beaglebone Black"</a>.</li> <li><a href="https://learn.adafruit.com/setting-up-io-python-library-on-beaglebone-black/uart" rel="nofollow noreferrer">"Adafruit: setting up IO Python Library on BeagleBone Black: UART"</a> </li> <li><a href="http://www.gigamegablog.com/2012/01/22/beaglebone-coding-101-using-the-serial-and-analog-pins/" rel="nofollow noreferrer">"Beaglebone Coding 101: Using the Serial and Analog Pins"</a></li> </ul>
5138
2014-12-11T15:21:00.867
|arduino|serial|usb|c++|beagle-bone|
<p>I have an Arduino Uno and a BeagleBone Black and would like them to talk to each other. Both the BeagleBone and the Arduino have their own 5V power supplies, so there is no need for power to be transferred over USB. The communication line we would like to use is the direct serial line, preferably in such a way that the Arduino can just call Serial.read() in order to get the bytes it needs (there won't be many). How do I get this to work on the BeagleBone Black? I suspect we can call serial.write somewhere inside the BBB (we mainly program in C++). But how do we achieve this? </p>
Connecting an Arduino uno with a beaglebone black over USB
<p><a href="http://www.datplot.com/" rel="nofollow noreferrer" title="DatPlot">DatPlot</a> is an absolutely fantastic tool for plotting data and browsing through it, and I use it heavily when I want to quickly visualise and browse some data. I think it perfectly fills in a gap that exists between spreadsheets (such as excel) and general purpose computing packages (such as MATLAB, Scilab etc), in that it's great for exploring the data (zooming in and out, panning, selecting - which is something that spreadsheets utterly lack in my experience), and at the same time very simple and dedicated to one task (unlike MATLAB, scilab) and hence is quick to download install and use.</p> <p><img src="https://i.stack.imgur.com/utMtt.png" alt="DatPlot"></p> <p>Obviously, you're still going to have to get the temperature data into your PC. If you have an off the shelf data logger, then great. Otherwise, in my opinion, an Arduino (or other microcontroller) would be a very simple solution - just get it to regularly sample your sensor and then send temperature (and optionally time) via the serial port. You can then use a program such as <a href="http://realterm.sourceforge.net/" rel="nofollow noreferrer" title="RealTerm">RealTerm</a> (don't be overwhelmed by the UI) to automatically capture what's being sent via the serial port and save it to a file that you can then directly open in DatPlot. In RealTerm, the Capture tab allows you to just dump whatever is coming from the serial port directly to a file.</p> <p><img src="https://i.stack.imgur.com/uSHxX.png" alt="RealTerm"></p>
5150
2014-12-14T06:52:43.117
|arduino|sensors|microcontroller|electronics|
<p>After a lot of learning, I'm launching a reballing business and I feel the need to have a realtime plot of the temperatures involved (ideally 3 or 4). I have an Arduino Uno and a few K-type thermocouples. While researching the subject I saw a lot of different approaches: most use Arduinos to send serial data to a PC port and then process it with Python, others use MATLAB, some use MS Excel plus a free VBA add-on, etcetera. Now, after some reading, I feel overwhelmed by all the different methods, so I wonder: perhaps I'm already losing perspective here? Maybe there is a simple, KISS-style method I can use to get it done? Thank you.</p>
what is the easiest method to plot a temperature in my pc?
<p>You have the same exact problem that all astronomical telescope drives have. The easiest solution is to align your axes so that one of them is EXACTLY parallel to the Earth's pole. Now you only need one motor to track the sun as it moves.</p> <p>The second motor needs only a small range of motion and only needs to move a tiny bit every day, and then it can "put on the brakes", literally, to keep this axis from moving. We call this the declination axis.</p> <p>This type of drive moves in something like a latitude and longitude coordinate system. The coordinates are spherical, not Cartesian, and that makes tracking the sun very simple. So simple you don't need computer control.</p> <p>The large motor needs to rotate about one revolution per day. The smaller motor makes a few degrees of motion per year and then reverses. It is mostly stationary.</p> <p>You are in fact building a CLOCK; indeed, these drives were called "clock drives" because SPEED is the thing you need to control, not position. The sun does not move randomly. A hundred years ago, before electronics were available, they used springs and pendulums to regulate the speed, just like you'd see in a large clock.</p> <p>Typically the axis that is parallel with the Earth's pole, called the "polar axis" and tilted to match your latitude, is fitted with a large worm gear. Get the largest and best one you can afford. The pinion that drives this worm uses a motor that can be very accurately speed controlled. It will need an encoder on the motor shaft to measure speed. You will need a feedback loop to keep the speed constant as the mechanical load changes.</p> <p>In any case, think more about controlling exact SPEED, as this drive will move very slowly and must not do so in steps or you will have huge vibration problems. If the mass of the load is high you MUST move it continuously, not in anything like small steps.</p> <p>One other thing: you seem to want to move this "open loop". But why not try to track the sun with some kind of sensor? All you need is a small tube with both ends closed. Face one end approximately at the sun and make a tiny pinhole in the sun-facing cover. This will project a tiny dot of light on the other cover. Control the speed of your polar axis drive motor so as to keep the projected dot from moving. Someone suggested placing an encoder on the shaft; there is no need when your target is so bright and easy to track.</p> <p>Problems you will face: (1) your axis is not perfectly aligned with the Earth's axis. This means that for sure you cannot use an open loop drive and WILL need to track the Sun optically. This is another reason not to depend on a shaft-mounted encoder. Also, I lied about the declination motor being able to clamp on the brakes. That only works if the axes are perfect, and they will not be perfect.</p> <p>You will have these problems, but even worse ones, if you try to build an "X, Y" type drive where one axis is vertical and the other horizontal. You will find that your two axes are not at exactly 90 degrees and they are not pointing where you think they are. You will have to track the sun with a sun sensor unless you spend a LOT of money building a very rigid and precise mechanical system.</p> <p>The good news is that this is a well known problem and there are many ways to go. Computers make it cheaper than in the last century because we can build a cheap sun tracker to compensate for the faults of a less-than-perfect mechanical system.</p>
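<p>To get a feel for the speeds involved, here is a back-of-envelope sketch in Python; the 360:1 worm and 100:1 gearhead ratios are assumptions chosen to illustrate the sizing, not recommendations:</p>

<pre><code># Polar-axis drive sizing -- a rough sketch with assumed gear ratios.
SOLAR_DAY_MIN = 24 * 60            # the polar axis turns once per solar day

worm_ratio = 360.0                 # assumed worm gear ratio
gearhead_ratio = 100.0             # assumed motor gearbox ratio

axis_rpm = 1.0 / SOLAR_DAY_MIN                      # ~0.0007 rpm
motor_rpm = axis_rpm * worm_ratio * gearhead_ratio  # ~25 rpm: easy to regulate
deg_per_hour = 360.0 / 24.0                         # 15 deg/h apparent motion

print("polar axis: %.6f rpm (%.1f deg/hour)" % (axis_rpm, deg_per_hour))
print("motor: %.1f rpm" % motor_rpm)
</code></pre>

<p>The point of the gearing is exactly this: it turns an impossibly slow axis speed into a motor speed that is comfortable to regulate with an encoder and a feedback loop.</p>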
5156
2014-12-14T20:35:26.207
|motor|brushless-motor|stepper-motor|servomotor|
<p>For a school project I am looking to track the sun with +/- 0.1 degree accuracy for use with a parabolic dish reflector. Say we need a final output torque of about 20Nm, what kind of gearing/motors and feedback would you guys use to accomplish this? The sun position will be found with a solar positioning algorithm.</p> <p>I am pretty new to this all but from my research stepper motors seem the easiest but brushless DC motors, from what I have read, can yield better results with smoother tracking. I am confused how you even use regular brushless dc motors to achieve precision positioning. I am aware of encoders but I don't really understand why the BLDC are preferred for this particular application, and how one would implement them.. Any ideas that can help kick start my researching?</p>
Sun tracking with +/- 0.1degree accuracy?
<p>FYI, most people (including myself) can't see that paper because we don't have IEEE memberships.</p> <p>Based on your question, though, the error is often considered the state because that is what you really care about in control/estimation algorithms: you want to control the robot so the error is zero. This is true whether the reference trajectory is static or dynamic.</p> <p>For 2), it's hard to say without looking at the paper, but the reference trajectory is a logical point of linearization because you assume that your controller is good and the robot is close to the reference trajectory. This way you can linearize and design controllers in advance, i.e. when you don't know the true state, so the reference trajectory is the only information you can use.</p>
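<p>To make the error-state idea concrete (this is the generic construction, not necessarily the paper's exact notation): with state $x$, reference $x_r(t)$ and error $e = x - x_r$, the error dynamics follow from $\dot{x} = f(x,u)$ as</p> $$ \dot{e} = f(x,u) - \dot{x}_r \approx A(t)\,e + B(t)\,(u - u_r), \qquad A(t) = \left.\frac{\partial f}{\partial x}\right|_{(x_r,u_r)}, \quad B(t) = \left.\frac{\partial f}{\partial u}\right|_{(x_r,u_r)} $$ <p>so the Jacobians are evaluated along the known reference rather than the unknown true state, and regulating $e \to 0$ is exactly the tracking objective.</p>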
5169
2014-12-16T17:41:21.373
|mobile-robot|wheeled-robot|differential-drive|
<p>I am reading the following research paper regarding trajectory tracking of mobile robots.</p> <p><a href="http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;arnumber=5415188" rel="nofollow">http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;arnumber=5415188</a></p> <p>There are two things at the start of the paper that I do not understand.</p> <p>1) The author derives equation (14) as the state space model of the system, in which he considers the error as the state. Can anyone please elaborate on why he is using the error as the state of the system and not the Vx, Vy, and w (omega, angular speed) of the robot?</p> <p>2) Why does the author linearize the system around the reference trajectory?</p>
Error as a State space
<p>Steps one should take to eventually write flight controller software:</p> <h2>Step 1</h2> <p>Learn the mathematics and physics laws behind airplanes, helicopters, multi-rotors, etc.</p> <h2>Step 2</h2> <p>Learn electronics.</p> <p>Electronics is a huge field, and learning the robotics-related stuff would be enough.</p> <h2>Step 3</h2> <p>Learn how computers work.</p> <p>The electronics behind a computer. How does a computer store information, how does a CPU or micro-controller process that information, how would you send that information to another system through a wire or the air, etc. The concepts are the same from an Arduino Uno to a bad-ass gaming PC.</p> <h2>Step 4</h2> <p>Learn a programming language.</p> <p>Choosing a language depends on the platform you want to target. Are you targeting a specific micro-controller? Then you need to learn that micro-controller's specific language (which is an assembly language or maybe a C-like language). Are you targeting an Operating System? For example, your goal may be: "I want to write a flight controller for the Linux OS which requires the real-time patch and will work on any Linux-enabled SBC with the specific MPU sensors that I support." In that case you need to learn C (because Linux's native APIs are in C) and you also need to master the Linux kernel. There are wrappers around the native APIs in many languages, but none of them cover all of the APIs and they may give you less control over how you can call them.</p> <h2>Step 5</h2> <p>Put it all together!</p> <p>At this point you can start writing a flight controller.</p> <p>Depending on how much time and energy you have, this may take from a few years to a few decades! :D</p> <p>This is why these things are usually created through team work.</p>
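<p>To give a feel for what the end product looks like, here is a deliberately tiny sketch of the loop every flight controller runs. It is Python for readability only (real firmware would be C/C++ on a microcontroller); read_imu(), read_rc() and write_motors() are hypothetical stand-ins for hardware drivers, and the mixer signs depend on your frame geometry and propeller rotation:</p>

<pre><code>import time

def read_imu():  return {"roll": 0.0, "pitch": 0.0, "yaw": 0.0}  # stub: attitude estimate
def read_rc():   return {"roll": 0.0, "pitch": 0.0, "yaw": 0.0, "throttle": 0.5}  # stub
def write_motors(m):  pass   # stub: send the four values to the ESCs

KP = 4.5            # a lone P gain; real controllers use a full PID per axis
DT = 1.0 / 200.0    # 200 Hz is a typical attitude-loop rate

while True:
    att, cmd = read_imu(), read_rc()
    e_r = cmd["roll"] - att["roll"]      # attitude errors
    e_p = cmd["pitch"] - att["pitch"]
    e_y = cmd["yaw"] - att["yaw"]
    t = cmd["throttle"]
    # mix the corrections into the four motors; the signs assume an X quad
    write_motors([t + KP * (+e_r + e_p - e_y),
                  t + KP * (-e_r + e_p + e_y),
                  t + KP * (-e_r - e_p - e_y),
                  t + KP * (+e_r - e_p + e_y)])
    time.sleep(DT)
</code></pre>

<p>Everything in the five steps above (physics, electronics, state estimation, programming) ultimately feeds this one loop: estimate, compare, correct, mix, repeat.</p>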
5171
2014-12-16T19:07:39.473
|arduino|quadcopter|microcontroller|multi-rotor|
<p>I'm interested in quadcopters/multi-rotors and want to eventually code my own flight controller a la the APM and/or Pixhawk. I've got a little experience in programming (i.e. I know about if/else/else if conditionals), and have done a little programming with PHP, though it was procedural code.</p> <p>I currently have a quadcopter that I built/assembled myself, running on an F450 frame and using an APM 2.6 flight controller, so I have a reasonable grasp of how a quad works, and I would like to take it a step further and make my own adjustments to the code base, with the eventual aim of coding my own flight controller.</p> <p>I've had a look at the code base, but am still unable to get a grasp of what the code is actually doing... yet. How would I go about learning how to code a flight controller?</p> <p>I'm thinking that I would have to learn C++ &amp; OOP first, but how familiar/proficient would I have to be with C++ before I can reasonably attempt to edit the code base? Also, what else would I need to learn apart from C++ &amp; OOP? I am looking at setting a 6 month timeframe/deadline for myself to do this; would it be possible?</p>
How would I go about learning to code a flight controller?
<p>A low power laser shining across the floor surface at a glancing angle will show a bright glitter of light off of any specks of dirt. A photo-sensor could be arranged to see the sparkle and order a 'redo' of any area that glittered more than a given threshold (a threshold that would have to be determined experimentally, but should work for most floors). Alternately, the same idea could be applied to the air flow just inside the vacuum nozzle, with the vacuum only moving on after the dirt particle pickup dropped below a certain threshold. This might make for a very slow moving machine, though.</p>
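<p>The decision logic itself is tiny. A sketch, assuming the sparkle (or nozzle air-flow) sensor is read through a 10-bit ADC; read_sensor() is a hypothetical stub, and the threshold must be tuned per floor as noted above:</p>

<pre><code>def read_sensor():
    return 0          # stub: current glitter / pickup sensor reading (0-1023)

GLITTER_THRESHOLD = 400   # tune experimentally for each floor type
MAX_PASSES = 5            # give up eventually so the robot keeps moving

passes = 0
while read_sensor() > GLITTER_THRESHOLD and passes < MAX_PASSES:
    # drive back over the same patch, then re-measure
    passes += 1
</code></pre>

<p>The pass cap is the guard against the slow-machine problem: a stubbornly sparkly patch (e.g. a reflective floor seam) won't trap the robot forever.</p>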
5192
2014-12-21T04:51:41.787
|sensors|
<p>When I use a standard manual vacuum, I often notice that I have to pass over a spot several times because a single pass does not necessarily catch all the dirt. My eyes/brain can easily perceive this information visually, but I don't know how an autonomous robot vacuum can detect whether a pass over a patch of dirt was successful or not. What kind of sensor/action can I use to determine if the robot vacuum successfully picked up the dirt from a particular patch? </p> <p>I would prefer to avoid a visual camera if at all possible because it would necessarily have to be mounted above the robot and thereby limit the range of reachable locations. Is there some other low-cost sensor that can accomplish the same task that can be placed low to the ground?</p>
Alternatives to cameras for a dirt sensor for an autonomous vacuum robot?
<p>As kamek said, you can do mapping without localization, and it would be much easier if you have the locations and just need to build a map.</p> <p>Note: dead reckoning alone is divergent even if you have very good models; however, its use in conjunction with other sensors is very helpful.</p>
5193
2014-12-21T05:14:58.733
|slam|mapping|occupancygrid|
<p>I know that Occupancy Grid Mapping requires the assumption that the robot's pose is always known when generating a map. However, I also know that in reality, position and orientation are usually treated as uncertain in practical applications. Assuming that my target mapping environment is inside a home, is there a way I can overcome inherent robot pose uncertainty in a real world application of Occupancy Grid Mapping without resorting to implementing SLAM? That is, what is a low-cost way to increase the certainty about my pose? </p> <p>Or is Occupancy Grid Mapping only useful in theory and not in practice? </p> <p><strong><em>Update</em></strong>:</p> <p>It is clear to me, from the responses given, that occupancy grid mapping is just one possible way to represent a map, not a method in and of itself. The heart of what I really want to know is: <strong><em>Can mapping be done without also solving the localization problem at the same time (i.e. SLAM) in real life applications?</em></strong> </p>
Can mapping be done in real life applications without also solving the localization problem at the same time (i.e. SLAM)?
<p>You can use a magnetic clutch. When the strain on the horn passes a particular mark, it overpowers the pairs of magnets holding the clutch plates, causing it to slip without damaging the gears. You can easily make them, and here is a company selling them: <a href="http://robosavvy.com/store/dagu-magnetic-servo-clutches-for-miniature-servos.html" rel="nofollow">http://robosavvy.com/store/dagu-magnetic-servo-clutches-for-miniature-servos.html</a></p> <p>If you want to use an off-the-shelf one, there are clutches designed for R/C cars that you may be able to use. There is a description of one here <a href="http://nitrorc.com/articles/clutch.html" rel="nofollow">http://nitrorc.com/articles/clutch.html</a> and here is a company that sells them <a href="http://www.bukupower.com/clutch.aspx" rel="nofollow">http://www.bukupower.com/clutch.aspx</a>. These are designed for use with gas vehicles, so some modification to your design may be required. </p> <p>It may also not be too hard to make a gear train disengagement mechanism you can control using a servo.</p>
5201
2014-12-23T01:50:38.637
|motor|power|rcservo|radio-control|
<p>How can I power a wheel but let it spin freely when not under power?</p> <p>I saw the question <a href="https://robotics.stackexchange.com/questions/2/how-can-i-modify-a-low-cost-hobby-servo-to-run-freely">How can I modify a low cost hobby servo to run &#39;freely&#39;?</a> but I'm more interested in knowing if there is some sort of gearbox that disengages (moves to 'neutral') when no torque is being applied to it.</p> <p>Two ideas that come to mind are:</p> <ol> <li><p>A drive gear on a spring-loaded arm with a nominal amount of resistance before engaging. Perhaps when under power it would first use power to move in one direction, then it would engage with another gear, but without power the spring would return it to a non-engaged position</p></li> <li><p>A centrifugal clutch - although I'd like something that works at low RPMs as well</p></li> </ol> <p>The idea is to create a small bot that can move up and down a track, but if someone interacts with it when not under power it will just roll and won't damage the gearbox.</p>
How can I power a wheel but let it spin freely when not under power?
<p>As far as I can tell, you are asking how a <a href="http://en.wikipedia.org/wiki/split-flap_display" rel="nofollow">"split-flap display"</a> on things like the <a href="http://en.wikipedia.org/wiki/Flapit" rel="nofollow">Flapit</a>, the <a href="http://www.wired.com/2013/08/lets-make-sure-everyone-knows-how-popular-you-are-with-the-smiirl-fliike/" rel="nofollow">Smiirl Fliike</a>, etc. works.</p> <h2>flip clocks</h2> <p>Some <a href="http://en.wikipedia.org/wiki/flip_clock" rel="nofollow">flip clocks</a> -- <a href="http://www.instructables.com/id/Cool-Exposed-Retro-Flip-Clock/" rel="nofollow">"Retro Flip Clock"</a>, etc. -- use a single motor and a <a href="https://www.youtube.com/watch?v=rjWfIiaOFR4" rel="nofollow">"carry gear mechanism"</a> so they can only count forward and backward (or in some cases only forward) in sequential order.</p> <h2>general flip displays</h2> <p>The video you show seems to spin the 100s digit, the 10s digit, and the 1 digit independently of the other digits, like split-flap displays indicating arrivals and departures at airports and railway stations.</p> <p>I suspect it is possible to have a mechanism with a total of one or two motors to independently select one at a time of a large number of letter/digit flip sections, and then flip that section an appropriate number of times, perhaps something like the way the <a href="https://en.wikipedia.org/wiki/Armatron" rel="nofollow">Armatron</a> somehow managed to control a bunch of joints with a total of one single motor.</p> <p>However, in practice, my understanding is that such displays typically have a dedicated motor for each letter/digit. For example, <a href="http://www.jave.de/blog2/?p=111" rel="nofollow">Markus' split-flap display</a> (via <a href="http://hackaday.com/2011/01/03/driving-an-8-digit-split-flap-display/" rel="nofollow">"Driving an 8-digit split flap display"</a> ), <a href="http://unknowndomain.co.uk/category/design/split-flap-display/" rel="nofollow">Tom Lynch's split-flap display</a> (via <a href="http://hackaday.com/2014/10/04/split-flap-display-if-cant-find-it-built-it/" rel="nofollow">"Split flap display: if you can’t find it, built it"</a> ), the <a href="http://www.wired.com/2013/12/this-exhibit-turns-old-school-travel-schedule-boards-into-art/#slideid-317641" rel="nofollow">"Signal-to-noise" mechanism</a>, etc. each use a dedicated motor for each letter/digit section.</p>
5214
2014-12-25T14:08:15.743
|mechanism|stepper-motor|
<p>First of all, please see this video: <a href="http://www.youtube.com/watch?v=n0dkn4ZIQVg" rel="nofollow">http://www.youtube.com/watch?v=n0dkn4ZIQVg</a></p> <p>I think there is only one stepper motor - or servo - working in the mechanism. But as you can see, each flip counter works alone and separately.</p> <p>It is not like a classical counter mechanism like this: <a href="http://www.youtube.com/watch?v=rjWfIiaOFR4" rel="nofollow">http://www.youtube.com/watch?v=rjWfIiaOFR4</a></p> <p>How does it work?</p>
How does Fliike Smiirl counter mechanism work
<p>Memory of the past is required whenever failures and/or inadequacy arise in the perception layer of the robot thus affecting significantly its current representation of the world, forcing eventually to apply some sort of backtracking strategies.</p> <p>Quoting <em>S.D Whitehead and Long-Ji Lin</em> in their paper &quot;<a href="http://www.sciencedirect.com/science/article/pii/000437029400012P" rel="nofollow noreferrer">Reinforcement learning of non-Markov decision processes</a>&quot;:</p> <blockquote> <p>These non-Markov tasks are commonly referred to as <em>hidden state tasks</em>, since they occur whenever it is possible for a relevant piece of information to be <em>hidden</em> (or missing) from the agent’s representation of the current situation.</p> <p>Hidden state tasks arise naturally in the context of autonomous learning robots. The simplest example of a hidden state task is one which occurs when the agent’s sensors are inadequate for the task at hand. Suppose a robot is charged with the task of sorting blocks into bins according to their color, say Bin-l for red, Bin-2 for blue. If the robot’s sensors are unable to distinguish red from blue, then for any given block it can do no better than guess a classification. If there are an equal number of blocks of each color, then guessing can do no better than chance. On the other hand, if the robot can detect color, it can easily learn to achieve 100% performance. The former case corresponds to a non-Markov decision problem, since relevant information is missing from the agent’s representation. The latter case is Markov since once a color sense is available the information needed to achieve optimal performance is always available. In general, if a robot’s internal representation is defined only by its immediate sensor readings, and if there are circumstances in which the sensors do not provide all the information needed to uniquely identify the state of the environment with respect to the task, then the decision problem is non-Markov.</p> </blockquote>
5223
2014-12-27T15:51:52.693
|probability|
<p>As far as I can tell, the Markov assumption is quite ubiquitous in probabilistic methods for robotics, and I can see why. The notion that you can summarize all of your robot's previous poses with its current pose makes many methods computationally tractable.</p> <p>I'm just wondering if there are any classic examples of problems in robotics where the Markov assumption cannot be used at all. Under what circumstances is the future state of the robot necessarily dependent on the current and at least some past states? In such non-Markovian cases, what can be done to alleviate the computational expense? Is there a way to minimize the dependence on previous states to the previous $k$ states, where $k$ can be chosen as small as desired?</p>
Non-markovian problems/approaches in robotics
<p>You are probably new to the business. Richard's book follows the standard approach of mechanics textbooks, with an emphasis on coordinate invariance (a notion that commonly pops up in Lie group theory). The two most important references of this book are Greenwood's classical dynamics and John McCarthy's Introduction to theoretical kinematics.</p> <p>The advantage of Richard's approach is that you can use any coordinate system, inertial or moving. In fact, if you really understand the book up to a certain level, you will realize that instantaneously there is no difference between the two at all: when you use a body-fixed coordinate frame to compute the kinematics or dynamics, you are actually referring to an inertial frame which at that moment coincides with the body frame (we usually say, a copy). So there is no so-called reverse order whatsoever. The true reversal happens only when you switch the roles of the base and the end-effector (a notion usually only used in mechanism synthesis).</p> <p>Later, when you move on to dynamics, you will see everything computed in the body frame. The tricky part is that in classical dynamics, the laws of physics only respect inertial frames, or in other words, are invariant under the 10-dimensional Galilean transformation. So the restatement of Newton's laws, for example, has to be made first in an inertial frame, and then transformed to the body frame. So on the one hand, the equations of motion should start within an inertial frame, but on the other hand, they are eventually transformed into the body frame. You may ask why?</p> <p>A 'High-Big-Up' answer may proceed as follows. It is common knowledge that there is no bi-invariant Riemannian metric on $SE(3)$. This means the notion of "error" on $SE(3)$, for example, is not consistent under arbitrary changes of inertial and/or body frames (and so it also affects the curvature and hence the dynamics). For the past 40 or so years, people have satisfied themselves with left- or right-invariant metrics instead. The philosophy behind this is that if they can choose some preferred inertial or body frame, then only right- or left-invariance is good enough (in terms of consistency) for engineering applications. The inertia tensor written in the body frame, for example, serves as a left-invariant (i.e. invariant under change of inertial frame) metric for a single rigid body, and is chosen based on the fact that there is actually a preferred body frame, namely the one located at the center of mass with axes aligned with the principal axes.</p> <p>Nevertheless, the equations of motion have a coordinate-invariant form (something like $M\dot V-ad_V^T(MV)=F$), though the Riemannian metric might differ under change of coordinate frames. The reason is that Newton's second law is geometric in nature: $M\nabla_VV=F$, i.e. essentially a geodesic equation. This is the beauty of the matrix Lie group machinery: elegance in analytic derivation. But it is not necessarily computationally efficient. These days people are moving on to geometric algebra. But Lie theory is inevitable anyway, and is equivalent to Richard's matrix approach under proper linear representations.</p> <p>Hope my mumbo jumbo didn't kill the fun of the book.</p>
5227
2014-12-28T03:52:07.077
|control|screw-theory|
<p>I am reading some theories related to rigid body motion from the book "A Mathematical Introduction to Robotic Manipulation" by Prof. Richard Murray. </p> <p>I am focusing on chapter 2, Sec 4 to derive some formulations. According to his introduction to the chapter, "we present a modern treatment of the theory of screws based on linear algebra and matrix groups". I find this approach quite understandable and the explanation comprehensive. </p> <p>However, his scope in this chapter is limited to the convention where the inertial coordinate frame is what he refers to as the spatial frame and the moving frame is the body frame. Are there any other references that treat the topic in the reversed order, with the spatial frame as the moving/non-inertial frame and the other one as the inertial frame?</p> <p>Thank you!</p>
theory on rigid body motion in robotics book
<p>Suppose you have three measurements (1, 2, and 3) and four landmarks (a, b, c, d). The <em>joint compatibility</em> is a measure of how well a subset of the measurements associates with a subset of the landmarks. </p> <p>For example, what is the joint compatibility of (1b, 2d, 3c)? First we construct the <em>implicit measurement functions</em> $f_{ij_i}$ for each correspondence ($f_{1b_1}$, $f_{2d_2}$, $f_{3c_3}$). The <em>joint implicit function</em> $f_{\mathcal{H}_i}$ is simply the vector of the implicit measurement functions; i.e.,</p> <p>$$ f_{\mathcal{H}_i} := \begin{bmatrix} f_{1b_1} \\ f_{2d_2} \\ f_{3c_3}\end{bmatrix} $$</p> <p>This function is linearized in (5) in the linked paper (this requires the Jacobians of $f_{\mathcal{H}_i}$ with respect to the state and measurement). Equation (9) calculates the covariance of $f_{\mathcal{H}_i}$ and (10) uses this covariance, along with expected value of $f_{\mathcal{H}_i}$ (that is, $h_{\mathcal{H}_i}$) to calculate the joint compatibility (or more specifically, the joint Mahalanobis distance) of this particular set of associations. The Mahalanobis distance forms a chi-square distribution, and the confidence of the association is checked against a threshold (which is dependent on the dimension of the distribution; in this case, it is three).</p> <p>What I described above is how you check a single set of associations. The real trick is how to check "all" (you don't usually need to check all of them) of the possible associations and pick the one that (a) has the maximum likelihood, AND (b) maximizes the number of associations. The reason why you want to maximize the number of associations is because (from the paper):</p> <p><em>"The probability that a spurious pairing is jointly compatible with all the pairings of a given hypothesis decreases as the number of pairings in the hypothesis increases."</em></p> <p>The "branch and bound" part of JCBB is how you efficiently traverse through the search space to find the best set of associations.</p>
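<p>A small numeric sketch of that final gating step: the joint Mahalanobis distance of (10) tested against a chi-square threshold. It assumes you have already built the stacked innovation vector and its covariance for the hypothesis; the numbers below are placeholders:</p>

<pre><code>import numpy as np
from scipy.stats import chi2

def jointly_compatible(innovation, S, alpha=0.95):
    """innovation: stacked residual (length d); S: its d x d covariance."""
    d2 = innovation @ np.linalg.solve(S, innovation)  # joint Mahalanobis distance
    return d2 < chi2.ppf(alpha, df=len(innovation))   # chi-square gate, d dof

# e.g. three 2-D laser pairings give a 6-dimensional joint test
nu = np.array([0.02, -0.01, 0.03, 0.00, -0.02, 0.01])
S = np.eye(6) * 0.01
print(jointly_compatible(nu, S))   # True: residuals well inside the gate
</code></pre>

<p>The branch and bound search then amounts to calling a test like this on progressively larger hypotheses, pruning any branch that fails.</p>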
5238
2014-12-29T13:29:54.537
|slam|ekf|mapping|data-association|
<p>I would like to implement the joint compatibility branch and bound technique <a href="http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=976019&amp;tag=1" rel="nofollow">in this link</a> as a method to carry out data association. I've read the paper but am still confused about this function $f_{H_{i}} (x,y)$. I don't know exactly what they are trying to do. They compared their approach with the individual compatibility nearest neighbor (ICNN). In the aforementioned method we have this function $f_{ij_{i}} (x,y)$. This function is simply the inverse measurement function, or what they call in their paper the implicit measurement function. With a laser sensor, given the observations in polar coordinates, we seek via the inverse measurement function to acquire their Cartesian coordinates. In ICNN, everything is clear because we have this function $f_{ij_{i}} (x,y)$, so it is easy to acquire the Jacobian $H_{ij_{i}}$, which is </p> <p>$$ H_{ij_{i}} = \frac{\partial f_{ij_{i}}}{\partial \textbf{x}} $$ </p> <p>For example, in the 2D case with a 2D laser sensor, $\textbf{x} = [x \ y \ \theta]$ and the inverse measurement function is $$ m_{x} = x + r\cos(\phi + \theta) \\ m_{y} = y + r\sin( \phi + \theta ) $$</p> <p>where $m_{x}$ and $m_{y}$ are the location of a landmark and $$ r = \sqrt{ (m_{x}-x)^{2} + (m_{y}-y)^{2}} \\ \phi = atan2\left( m_{y}-y, \; m_{x}-x \right) - \theta $$</p> <p>Using <code>jacobian()</code> in Matlab, we can get $H_{ij_{i}}$. Any suggestions?</p>
joint compatibility branch and bound (JCBB) data association implementation
<p>XBee sleep mode is supposed to be less than 10uA, so (if you can get it working) then use that. </p> <p>A P-FET on the high side can be used to turn off anything, you'll need to drive it with a signal transistor to switch it from the mcu - e.g. <a href="http://forum.arduino.cc/index.php?topic=112521.msg845816#msg845816" rel="nofollow">here</a> or google for <em>FET high side switch</em>.</p>
5250
2014-12-31T07:05:23.643
|power|
<p>I have an application that needs an XBee and another module to be turned on and off digitally via a microcontroller.</p> <p>The setup is 2 XBees and an application board connected to the microcontroller. On power-on I need 1 XBee and the microcontroller to come on and do their routines. After the uC gets the signal from the XBee (wirelessly from a base station), the board has to turn on the other XBee and the application board. And when the operation is over, the XBee and board are to be powered back down. I don't want to put them in a sleep or low power state, just power both those devices off. </p> <p>I was thinking of using a relay, but I cannot find a 3.3V 1A SMD equivalent system. I am looking for an SMD type of footprint to go on a very compact board. </p> <p>What options do I have?</p> <p>The XBee needs around 1A of power and the application board 500mA.</p>
Need a Relay Alternative
<p>In short, there is no particular need for synchronization of inner and outer loops. The inner loop should run "fast enough" to control the attitude dynamics and the outer loop should run "fast enough" to control the translation dynamics. In my last quad the inner loop ran at 200Hz and the outer loop ran at a variable rate from 50Hz-100Hz.</p> <p>Typically, you shouldn't be trying to "tune" the loop frequencies. Once they are fast enough there is no further benefit to be had. In your case, I would carefully check the code to see what the inner loop controller is doing between outer loop controller updates. In most setups, it will just remember the last command from the outer loop and control to that.</p> <p>To elaborate on the rationale for separate inner/outer loop controllers ...</p> <p>As you noticed in papers, for quadrotors it is very common to have a separate inner loop (attitude) and outer loop (translation). One strong motivation for this has been the relative ease of attitude control and state estimation. The attitude subsystem is fully actuated and can be aggressively controlled for tight command tracking. Equally important, state estimation is generally quite accurate using a small 9-axis IMU (gyro + accel + mag).</p> <p>When implemented correctly, this allows the outer loop to assume that its attitude commands are tracked very tightly. The system now <strong>looks</strong> like a system where attitude acts like a traditional input. In this case, the translation subsystem also becomes fully actuated. For this to work, the inner loop typically needs to run much faster than the translation dynamics. Conversely, this means the translation dynamics are relatively slow, so the outer loop controller doesn't need to run very fast.</p> <p>The biggest limitation for the outer loop is how quickly and accurately you can estimate the translation state. For fully autonomous systems, this is typically slow and not very accurate, so you can't act aggressively.</p> <p>One final note, from a control theory point of view: most of the inner+outer loop controllers don't have stability guarantees; I've seen only one paper where they went through the effort for a similar control architecture. There are also several papers where they create advanced algorithms with stability analysis, but those typically aren't worth the effort to implement.</p>
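<p>In code, "remember the last command" usually means a single timed loop where the outer controller only runs every N-th tick. A sketch (Python for readability only; the estimator and controller functions are hypothetical stubs):</p>

<pre><code>import time

def estimate_attitude():           return (0.0, 0.0, 0.0)  # stub: IMU fusion
def estimate_position():           return (0.0, 0.0, 0.0)  # stub: GPS/vision
def position_controller(pos):      return (0.0, 0.0, 0.0)  # stub: outer loop -> attitude cmd
def attitude_controller(att, cmd): return [0.5] * 4        # stub: inner PID + motor mixer
def write_motors(motors):          pass                    # stub: ESC output

INNER_DT = 1.0 / 200.0   # inner (attitude) loop at 200 Hz
OUTER_DIV = 4            # outer (position) loop every 4th tick -> 50 Hz

att_cmd = (0.0, 0.0, 0.0)  # the inner loop controls to the last command given
tick = 0
while True:
    if tick % OUTER_DIV == 0:
        att_cmd = position_controller(estimate_position())
    write_motors(attitude_controller(estimate_attitude(), att_cmd))
    tick += 1
    time.sleep(INNER_DT)   # a real loop would use a proper timer, not sleep
</code></pre>

<p>Between outer-loop updates, att_cmd simply holds its last value, which is exactly the "remember and control to it" behavior described above.</p>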
5251
2014-12-31T13:08:57.283
|control|quadcopter|
<p>I've developed a quadrotor (simulation only, on my PC using ROS) and the feedback controller has more or less the following structure:</p> <p><img src="https://i.stack.imgur.com/9FyeR.png" alt="Picture 1"></p> <p>where you can think of the process as the dynamic movement of the quadrotor (motion equations), the inner loop as the <em>attitude controller</em> (it just sets the orientation about all 3 axes) and the outer loop as the <em>position controller</em> that takes care of where the quadrotor actually is. Why are they separated? Because in many papers I found that the attitude controller (pitch, roll, yaw) needs to run at a higher frequency than any other controller in the system. The <em>position controller</em> instead needs to run at a lower frequency.</p> <p>The following picture is a better explanation of my description. Don't be scared... it is simpler than one might think:</p> <p><img src="https://i.stack.imgur.com/Qf7Pd.png" alt="Picture 2"></p> <p>Now I did it as in the paper. <strong>BUT</strong> I discovered that my quadrotor was really unstable, and I spent days and days trying to correct the gains of the controller without getting a stable system. My intuition told me that maybe they were running at the wrong frequencies, so I chose a different frequency for the position controller, being sure it was not a multiple of the main frequency (something like 1000Hz and 355Hz, for example).</p> <p>Lately I removed the timer in my program (C++) and let the <em>position controller</em> run at the same frequency as the <em>attitude controller</em>, just because I ran out of ideas, and suddenly everything worked nicely.</p> <p>So here is my question: what should I consider when my system has outer/inner controllers? How can I be aware of this issue?</p> <p>Regards and happy new year!!!</p>
Feedback controller: Is there any influence between outer and inner loop when running at different frequencies?
<p>Macros (_MAV_RETURN_??) for uint8_t, int8_t and char are defined at <a href="https://github.com/diydrones/ardupilot/blob/master/libraries/GCS_MAVLink/include/mavlink/v1.0/protocol.h#L236" rel="nofollow noreferrer">https://github.com/diydrones/ardupilot/blob/master/libraries/GCS_MAVLink/include/mavlink/v1.0/protocol.h#L236</a> onwards. </p> <p>However, for datatypes with more than 8 bits, they have written an inline function _MAV_RETURN_## using the 'token pasting operator', which handles the remaining 16-bit, 32-bit and 64-bit datatypes handled by MAVLink, including float. This function is written at <a href="https://github.com/diydrones/ardupilot/blob/master/libraries/GCS_MAVLink/include/mavlink/v1.0/protocol.h#L242" rel="nofollow noreferrer">https://github.com/diydrones/ardupilot/blob/master/libraries/GCS_MAVLink/include/mavlink/v1.0/protocol.h#L242</a> onwards.</p> <p>You can read more about the token pasting operator (##) in the following links: <a href="https://stackoverflow.com/questions/216875/in-macros">https://stackoverflow.com/questions/216875/in-macros</a> <a href="https://gcc.gnu.org/onlinedocs/cpp/Concatenation.html#Concatenation" rel="nofollow noreferrer">https://gcc.gnu.org/onlinedocs/cpp/Concatenation.html#Concatenation</a></p>
5255
2014-12-31T20:48:44.860
|ardupilot|
<p>I'm trying to understand the source code of ArduPlane. The MAVLink message is decoded using a set of <code>_MAV_RETURN_????</code> functions, e.g. <code>_MAV_RETURN_float</code></p> <p>When I grep recursively for <code>_MAV_RETURN_float</code>, I could not find where it is defined. I wonder if I'm missing anything. </p> <hr> <p><strong>UPDATE</strong><br> Here is the source code of Ardupilot, including ArduPlane.<br> <a href="https://github.com/diydrones/ardupilot" rel="nofollow">https://github.com/diydrones/ardupilot</a></p>
Where is the function _MAV_RETURN_ defined?
<p>Both the solutions you proposed suffer from unwanted interaction between the two PIDs. You're basically trying to assign two simultaneous goals - i.e. final relative position along with terminal non-null speed - when the system has only one input variable, let's say the "<em>thrust</em>" driving the UAV dynamics.</p> <p>The correct scheme should be the one depicted below: <img src="https://i.stack.imgur.com/yQ0xT.png" alt="enter image description here"></p> <p>There exists only one PID providing the driving force to the system, which is fed by the error between the modeled dynamics of the Tanker (plus the relative offset) and the current UAV position.</p> <p>A typical outcome is plotted here for the plant $m\ddot{x}=-d\dot{x}+u$, where $m=10$, $d=0.1$, $x(0)=(0,0)'$, and the controller's parameters are $P=22.608$, $I=0.874$, $D=51.891$, $N=10.413$ (PID in parallel form). <img src="https://i.stack.imgur.com/DDlzP.png" alt="enter image description here"></p> <p>As is visible in the graph, the sole goal of the UAV is to catch up with the increasing position of the flying Tanker; the fact that the terminal speed eventually also turns out to be equal to the desired one comes exclusively as a positive consequence of the good position-tracking performance. When you track a position very well, then you also track how it changes over time (i.e. its velocity).</p> <p>Take-home message about hierarchical control loops: when the designer opts for a combination of inner and outer loops with two nested PIDs, he can only assign the final position as the set-point of the outer loop, whereas the inner PID is tuned to achieve some given dynamic behavior, not to reach any particular steady-state velocity.</p>
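<p>For readers who want to reproduce the idea numerically, here is a rough Python sketch of the single-PID scheme: one controller fed by the error between the modelled tanker position (plus offset) and the UAV position. The plant matches the example above, but the gains, time step and unit conversion (350 kn is roughly 180 m/s) are illustrative only:</p>

<pre><code>m, d = 10.0, 0.1                  # UAV mass and drag, as in the example plant
Kp, Ki, Kd = 22.0, 0.9, 52.0      # PID gains (parallel form, illustrative)
DT = 0.01                         # integration step [s]
tanker_v, offset = 180.0, -5.0    # 350 kn ~ 180 m/s; stay 5 m behind

x = v = integral = 0.0
prev_err = offset                 # avoids a derivative kick at t = 0

for k in range(int(300 / DT)):    # simulate 300 s
    ref = tanker_v * (k * DT) + offset   # modelled tanker position + offset
    err = ref - x
    integral += err * DT
    u = Kp * err + Ki * integral + Kd * (err - prev_err) / DT
    prev_err = err
    v += (u - d * v) / m * DT     # plant: m*x'' = -d*x' + u
    x += v * DT

print("final error %.3f m, final speed %.1f m/s" % (err, v))
</code></pre>

<p>The position error decays toward zero, and the UAV speed settles at the tanker's speed as a by-product, which is exactly the point made above.</p>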
5260
2015-01-01T09:39:37.983
|control|pid|uav|
<p>First of all, happy new 2015!!!</p> <p>I'm looking for my next simulator development: a Tanker is flying at constant speed (350 knots), with no acceleration and no change of altitude or direction. The Tanker is approached from behind by a UAV which needs to refuel or transfer data through a wire. The UAV knows the direction, the speed and <em>relative position</em> of the tanker in order to approach it smoothly. It knows that at about 5 m from the tanker the contact is successful. Here is a picture I found on the internet, and it says more than a thousand words:</p> <p><img src="https://i.stack.imgur.com/XR2Rv.jpg" alt="Picture 1"></p> <p>To achieve the task I thought to implement a "simple" PID which controls the position and the velocity, but for this I have two different design approaches in mind:</p> <ol> <li><strong>Solution one</strong>: the motion equations of my system provide the position $x,y,z$ and velocity $Vx, Vy, Vz$ of the UAV (to simplify things I will consider just $x$, but of course $y,z$ must eventually be considered too). These are fed back with the desired position (5m) and velocity (350 knots) of the tanker. The feedback line is separate for each state and the PIDs work quite independently, as in the following picture:</li> </ol> <p><img src="https://i.stack.imgur.com/q6VbZ.png" alt="PID 1"></p> <p>Please note that to simplify things I never considered the acceleration.</p> <ol start="2"> <li><strong>Solution two</strong>: this is the trickier one and I was thinking about it all day yesterday. In this case <strong>only one state</strong> vector is going to be fed back to the desired setpoints. I would feed back only the velocity, then integrate it and feed the result into the second PID. Maybe the following picture is clearer:</li> </ol> <p><img src="https://i.stack.imgur.com/OEcFF.png" alt="PID 2"></p> <p>But here I'm not really sure whether the second idea is conceptually wrong or could be workable. I'm pretty sure that the first one works and leads to good results, but I was wondering whether the second one is feasible or not recommended for a control design.</p> <p>Regards</p>
The aerial refueling problem: sketch of a feedback controller
<p>As you noted, there is only one output (the motor's desired PWM) when all is said and done. All three loops may contribute to this output, but how should this be managed? The simplest method is to average all 3 commanded inputs to form a single PWM value, $PWM=(heading+speed+position)/3$.</p> <p>Of course, this method assumes that all command forms are equally valued. Is heading more important? Then weight it more: $(alpha * heading)+(beta * speed)+(gamma * position)$, where $alpha+beta+gamma = 1$ and $alpha &gt; beta$ and $alpha &gt; gamma$. </p>
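<p>In code, the weighted mix is a one-liner. A sketch with placeholder PID outputs and weights:</p>

<pre><code># A sketch of the weighted-average mix described above; the three PID
# outputs and the weights are placeholders to be tuned for your robot.
def mix(heading_out, speed_out, position_out,
        alpha=0.5, beta=0.3, gamma=0.2):
    assert abs(alpha + beta + gamma - 1.0) < 1e-9   # weights must sum to 1
    return alpha * heading_out + beta * speed_out + gamma * position_out

pwm = mix(heading_out=30.0, speed_out=10.0, position_out=-5.0)
print(pwm)   # 17.0 -> heading dominates because alpha is the largest weight
</code></pre>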
5261
2015-01-01T13:28:36.387
|motor|sensors|control|pid|
<p>So I'm in the process of building my robot and it has encoders on every wheel measuring speed and position, and a compass sensor measuring heading.</p> <p>I have 3 separate PID loops at the moment: I can either control the robot's speed, or I can control the robot's position, or I can make it follow a heading using a line-following type algorithm.</p> <p>Now I am completely stuck on how to merge these control loops - how do I control the robot's heading AND its speed, so I can say to it, "go at 20 degrees at 3m/s" or "go at 45 degrees for 5 metres then stop"?</p> <p>Really I would like to be able to control all 3 - heading, speed and position - so I could say "go at 20 degrees for 10 metres at a speed of 5m/s". However, I have no idea how to merge the loops.</p> <p>At the end of the day there are 3 inputs (heading, speed and position) and 1 output (PWM to motors), so do I need to use some kind of MISO control scheme? If so, which one? </p> <p>I've looked at cascaded control but that only accepts 1 set point, where I want to set 3 different set points. Maybe some kind of state machine? I'm so stuck!!</p>
How to program parallel PID control loops? So I can give my robot multiple set points to follow
<p>I doubt you will be able to simply buy something and attach it. How to make an encoder will depend on how this motor is used. Are you using a chain, gears? You will need to design the rotation sensor into the rest of the power transmission system.</p> <p>What you do is place a disk of some kind, plastic or metal, and then you make a hole or slot in the disk and your sensor detects the slot. The simplest sensors to use are these IR LED/photo-transistor pairs. They are cheap and very reliable, even in bright light. I use them with a black plastic disk. But test your disk material first: some material that looks black is transparent in IR. Metal is best.<br> <a href="http://rads.stackoverflow.com/amzn/click/B00843JUJM" rel="nofollow">IR sensor</a> To use them you need to add two resistors and 5V power: 330R as a current limiter on the LED side and a 10K load resistor on the transistor side. Those values are not critical. I connect this to an interrupt pin on an Arduino.</p> <p>In your case you might drill a hole in the sprocket gear and mount an IR sensor where it can "see" the hole. The sensors come with different slot widths. I'd pick a width about twice the thickness of the disk.</p>
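<p>Once the interrupt pin is counting slots, converting counts to speed is simple arithmetic. A sketch in Python for clarity (the one-slot-per-revolution disk and the 100 ms sampling window are assumptions; more slots give a smoother reading at low speed):</p>

<pre><code>SLOTS_PER_REV = 1    # holes/slots in the disk (assumed)
SAMPLE_S = 0.1       # how often you read and reset the interrupt counter

def rpm_from_counts(counts):
    revs = counts / float(SLOTS_PER_REV)
    return revs / SAMPLE_S * 60.0

print(rpm_from_counts(4))   # 4 counts in 100 ms -> 2400 rpm
</code></pre>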
5279
2015-01-03T15:03:36.820
|motor|quadrature-encoder|
<p>I would like to control the position and velocity of a DC motor like this <a href="http://www.ebay.co.uk/itm/ZY-MY-1016-Electric-Motor-24v-200w-Brushed-E-Bike-Scooter-24-Volt-200-Watt-Chain-/171573828523?pt=LH_DefaultDomain_0&amp;hash=item27f298dbab" rel="nofollow">one</a> (ZY MY 1016 - 24V - 10A - 200W). The motor comes with a 13 TOOTH SPROCKET and the diameter of the shaft is 6mm. It's not possible to apply an encoder to the back of the motor (see <a href="http://i.ebayimg.com/00/s/NDIzWDU1MA==/z/0NYAAOxyDo1TjvN6/$_12.JPG?set_id=880000500F" rel="nofollow">picture</a>)</p> <p>The angular velocity in the description is 2750 RPM, which encoder do you recommend? </p>
Which encoder should I use with a 24V Dc motor and 6mm shaft?
<p>You say "I have little programming experience to speak of." I'd fix this problem before going further. You questions tell be you don't know how to use even the basic Arduino IDE.</p> <p>Have you gotten the example sketches in the IDE to work? Staring with the blinking LED example? Sorry if this is an insulting question butI don't know, you don't say what you CAN do</p> <p>Now to your question. "Uploaded Succeeded, but verify failed" means that when the code was loaded and the read back to verify the loading as accurate. This failed. So you can't be sure the correct code is loaded in the Arduino. </p> <p>We don't know why this failed. I would put the copter and related software aside and verify you can load code into the Aduino and verify it. Just write some big sketch that nearly fills memory.</p>
5282
2015-01-03T15:52:44.317
|arduino|quadcopter|microcontroller|ardupilot|
<p>I'm attempting to customise some code for my DIY pentacopter frame.</p> <p>To that end, I've modified some existing code and saved it under AP_MotorPenta.cpp and AP_MotorsPenta.h. I'm currently trying to upload the code onto my flight controller, but am currently unable to do so due to the following problems.</p> <p>Problems</p> <p>Unable to upload to my APM 2.6 (#1). Unable to select my pentacopter frame (#2).</p> <p>Problem #1</p> <p>I've saved my customised files in the AP_Motors library, and have compiled the ArduCopter 3.2 code in ArduPilot-Arduino-1.0.3-gcc-4.8.2-windows, after which I upload it using Mission Planner. However, when I am uploading the hex file, I get the following error: </p> <blockquote> <p>"Uploaded Succeeded, but verify failed : exp E2 got 60 at 245760"</p> </blockquote> <p>However, when I try uploading it directly from the modified Arduino IDE, I get a series of warnings, followed by the messages</p> <blockquote> <p>avrdude: verification error, first mismatch at byte 0x3c000 0x60 != 0xe2 avrdude: verification error; content mismatch</p> </blockquote> <p>followed by the message </p> <blockquote> <p>" avrdude done. Thank you. "</p> </blockquote> <p>Does this mean that the uploading of the firmware to my flight controller was successful? Also, is there any difference between uploading via Mission Planner and the modified Arduino IDE?</p> <p>Problem #2</p> <p>In Mission Planner, originally there is the option to choose one of several frames (i.e. Quad/Hexa/Octo, etc.). After uploading my firmware, how would I go about selecting my penta frame for use? Also, is there anything further I would have to do?</p> <p>Apologies in advance if the questions are rather inane, as I have little programming experience to speak of.</p> <p>I would really appreciate any help I can get.</p> <p>Thanks in advance!</p>
Uploading edited code for Arducopter
<p><a href="http://vision.in.tum.de/data/datasets/rgbd-dataset" rel="nofollow noreferrer">http://vision.in.tum.de/data/datasets/rgbd-dataset</a></p> <p>This is a set of recordings for the Kinect and Asus Xtion pro, which are all indoors (in offices and a hangar). It comes with precise ground truth from a motion capturing system.</p> <p>The data is available as ROS bag files, but also as a tarball with png images with a text file for the trajectory. </p> <p>There are also some scripts to benchmark your results against the ground truth.</p>
5283
2015-01-03T16:36:27.340
|computer-vision|odometry|
<p>I am planning to develop a monocular visual odometry system. Is there any indoor dataset available to the public, along with the ground truth, on which I can test my approach?</p> <p>Note: I am already aware of the KITTI Vision Benchmark Suite which provides stereo data for outdoor vehicles.</p> <p>If anyone has access to the datasets used in the following paper [SVO14], it would be even more great: <a href="http://rpg.ifi.uzh.ch/docs/ICRA14_Forster.pdf" rel="nofollow">http://rpg.ifi.uzh.ch/docs/ICRA14_Forster.pdf</a></p>
Public dataset for monocular visual odometry
<p>The simplest way? Every crate has its own wheels and motor - in other words, one robot per crate. I bet that in some cases this is lower cost. It certainly lowers the design complexity.</p> <p>The other design is a "travel lift". These are used in boat and lumber yards. You would build a smaller version. It is the shape of an inverted "U" with four wheels. It looks like an overhead crane, the kind that runs on an overhead I-beam track, but the I-beam is on wheels. Then you place three lifts on your mobile I-beam track. The robot could store three crates; if the I-beam were longer it could store more.</p> <p>Your crate needs a handle that a lift can capture. A magnet might help to initially align the parts and then close a mechanical latch. </p> <p>I like the one-robot-per-crate idea as being foolproof simple. </p>
5292
2015-01-03T21:50:03.293
|design|wheeled-robot|mechanism|
<p>I am trying to design a robot to lift tote-crates and transport them around in a localized area. I want to be able to carry 3 tote-crates at a time. This robot needs to be able to pick up the crates. I only want the robot to carry three at a time to keep it small and mobile. I was thinking of a design with a central lift that could carry the crates. What would you suggest as a simple, ingenious way to create this robot? </p>
Lifting Robot To Lift Small Crates
<p>A linear regulator has a major drawback: it dissipates the power it doesn't deliver. Using the well-known 7805, the output voltage is 5V and your input must be at least 7V to allow good regulation (2V dropout voltage); if your load requires 1A, your 7805 dissipates 1A * 2V = 2W. This makes your linear regulator hot. The best efficiency you may expect from a 7805 is 70% (and, to the best of my knowledge, a linear regulator doesn't exceed 80%).</p> <p>Switching regulators are non-linear. For a switching regulator, 80% is a poor efficiency (but sometimes acceptable for very cheap designs); <a href="http://en.wikipedia.org/wiki/Voltage_regulator" rel="nofollow">power efficiency is usually around 90%</a> under full load, and may come up to 99% when large amounts of energy are involved, with an expensive design. For light loads, efficiency may drop to 60%, due to the power consumption of the regulator itself.</p> <p>So, in my opinion, you should use switching regulators, for better power efficiency and lower heat dissipation. If you need a single voltage, try to use only one regulator: it's cheaper, and avoids the light-load situation. If you want to design a low power application, you'll probably have to choose a more sophisticated design, but that's probably not the case here, as you use a Beaglebone, which prevents you from managing each processor power supply independently.</p>
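<p>For reference, the arithmetic behind those numbers, plus the same 5V/1A load on a 90% efficient switcher (a rough comparison that ignores the regulators' own quiescent current):</p>

<pre><code>v_in, v_out, i_load = 7.0, 5.0, 1.0

# Linear: dissipates the full voltage drop times the load current.
p_linear_loss = (v_in - v_out) * i_load          # 2.0 W of heat
eff_linear = v_out / v_in                        # ~71 %

# Switcher: loss follows from the quoted efficiency figure instead.
eff_switch = 0.90
p_switch_loss = v_out * i_load * (1 / eff_switch - 1)   # ~0.56 W

print("linear:   %.2f W lost (%.0f%% efficient)" % (p_linear_loss, eff_linear * 100))
print("switcher: %.2f W lost (%.0f%% efficient)" % (p_switch_loss, eff_switch * 100))
</code></pre>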
5300
2015-01-05T15:10:16.597
|power|battery|
<p>Power block designing noob here.</p> <p>I have a Beaglebone, 2x XBee Pros and another 500mA device connected to the board I am building a PCB around.</p> <p>I need some advice on whether to use linear voltage regulation or switching mode regulation. </p> <p>Secondly, if I am using a linear voltage regulation setup, do I need multiple regulators for the different devices? </p> <p>My plan is to use a 2S 1000mAh battery -> fuse -> 2x 1.5A LM1084s in parallel with the output feeding the Beaglebone, and an LM3940 for both the XBees. Or is it better to have each XBee on its own LM3940 drawing power from a separate LM1084?</p> <p>Linear regulators tend to get hot at full load; how is the performance of switching mode regulators?</p>
Linear or Switching Power Supply for a Embedded Project
<p>You don't need to do the soldering for the Cypress processor; there are always SDK boards like <a href="http://www.cypress.com/?rID=58321" rel="nofollow">http://www.cypress.com/?rID=58321</a></p> <p>Or really, just get something like <a href="http://www.e-consystems.com/UltraHD-USB-Camera.asp" rel="nofollow">http://www.e-consystems.com/UltraHD-USB-Camera.asp</a></p> <p>Browse the e-con Systems site and you will find something useful.</p>
5310
2015-01-06T19:09:02.560
|cameras|usb|stereo-vision|
<p>I'm working on a project requiring HD (stereo) video processing. Most high-resolution (5MP+) sensors use a MIPI-CSI interface. </p> <p>I managed to get a board with an Exynos5 SoC. The SoC itself has 2 MIPI-CSI2 interfaces; the problem is that the pins to those interfaces are not exposed and it's (almost) impossible to reach them. So I decided to use the USB3.0 buses.</p> <p>The problem is that at significant bandwidth (~5.36 Gibibits/s per sensor), I don't think USB3.0 will work out. <code>Bandwidth = Colordepth*ColorChannels*PixelCount*FPS</code>, but this could be solved with a compressed stream (via a coprocessor).</p> <p>I was thinking that Cypress' CYUSB306X chip was a good candidate for the job, but one of the problems is that I can't do BGA soldering by hand, nor have I been able to find a BGA soldering service in Switzerland.</p> <p>Any ideas on other interfaces I could implement, or other coprocessors with a MIPI-CSI2 interface?</p> <p>Just a final remark: space and weight are important, as this is supposed to be mounted on a drone.</p>
Interfacing high-resolution image sensors with ARM Board
<p>Circular polarization is intended to reduce unwanted reflections. The only concern is that you use the same polarization on everything. </p> <p>On quads, cloverleaf patterns were very common: 3-leaf on the transmitters, 4-leaf on the receiver, but there was little consequence to using the opposite, or both the same - just a tad bit of performance enhancement. The 5-leaf (aka mad mushroom) was noticeably the best receiver.</p> <p>This was my experience around 1.5km; it should be good to 2km.</p> <p><img src="https://i.stack.imgur.com/E9Miz.jpg" alt="mad mushroom"></p> <p><a href="http://www.readymaderc.com/store/index.php?main_page=product_info&amp;products_id=1535" rel="nofollow noreferrer">ReadyMade RC</a></p> <p>BTW, that is a 1.5dB antenna and was adequate; more gain = more noise. It also is not RP, so you would need an adapter (RP to non-RP) of some sort. (RP means reverse polarity; it indicates whether the sender's or receiver's connector is male or female.) </p> <p>Are those XBees frequency hopping? Or do they depend on non-point-to-point for mesh networks? I'm not familiar with that model, nor have I used XBees for swarms. I assume you already looked into all that.</p>
6310
2015-01-07T09:22:25.490
|battery|wireless|
<p>I am planning to use 2.4GHz XBee Pro 63mW devices for a project that requires a coverage area of around 1.5-2km. </p> <p>When I go to select an antenna there are various options like circular, vertical, horizontal polarization, etc.</p> <p>Which antenna would give coverage for a field? I can't have it directional (one point to another point); my devices will be moving around on a field. </p> <p>What type of polarization is recommended for this kind of a setup? My base XBee will be at an elevation of around 40m from the ground, so I have a clear line of sight to all the moving modules. </p> <p>There are going to be around 20-30 moving modules streaming data at around 2-5 readings per second. </p> <p>Would a +12dBi antenna suffice for the application? And what about polarization? </p>
What type of Antennas to use for XBeePRo 2.4Ghz
<p>You can calculate the moment of inertia of a pendulum by measuring the period of oscillation for small amplitudes. Suspend the quad by one arm, give it a little push, and time the period. It does work better for larger aircraft; measuring the period of a quad-pendulum will be tricky. Maybe take a video of the aircraft at a high frame rate so you can get a more accurate measurement than just a stopwatch.</p> <p><a href="https://www.boundless.com/physics/textbooks/boundless-physics-textbook/waves-and-vibrations-15/periodic-motion-123/the-physical-pendulum-432-1488/" rel="nofollow">https://www.boundless.com/physics/textbooks/boundless-physics-textbook/waves-and-vibrations-15/periodic-motion-123/the-physical-pendulum-432-1488/</a></p> <p>Also, an accurate SolidWorks or CAD model should list the moments of inertia in its mass properties.</p>
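<p>To make the pendulum method concrete, here is a minimal sketch of the calculation; the mass, pivot distance and measured period are made-up example values:</p> <pre><code>import math

m = 1.2    # total mass in kg (example value)
d = 0.25   # pivot-to-center-of-gravity distance in m (example value)
T = 1.1    # measured period of small oscillations in s (example value)
g = 9.81

# Physical pendulum: T = 2*pi*sqrt(I_pivot / (m*g*d))
I_pivot = m * g * d * (T / (2 * math.pi))**2
# Parallel-axis theorem recovers the moment of inertia about the CG
I_cg = I_pivot - m * d**2
print(I_cg, "kg m^2")
</code></pre>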
6313
2015-01-07T22:36:33.790
|quadcopter|kinematics|
<p>I'm building a quadcopter for my final year project. I have a set of equations describing attitude and altitude, but they involve $I_{xx}$, $I_{yy}$ and $I_{zz}$. None of the papers I have read describe how these are calculated; they simply choose values before their simulations. Can anyone help?</p>
How do you calculate the moment of inertia of a Quadcopter?
<p>In this answer, I will outline some techniques for diagnosing <a href="http://en.wikipedia.org/wiki/RS-232" rel="nofollow">RS232</a>-style serial communication problems.</p> <p>For a two-way serial communication link between two devices to work, at the most basic level we need a connection from TxD on device 1 to RxD on device 2 and a connection from TxD on device 2 to RxD on device 1.</p> <p>Voltage or current level conversions may be needed; for example, if device 1 uses -12 V for mark and +12 V for space while device 2 uses +3.3 V for mark and 0 V for space, voltage inversion/scaling/shifting is an obvious requirement.</p> <p>For the OP's case, 3.3 V at the Beaglebone needs to be translated to or from 5.0 V at the Arduino. Note that Arduinos with an 8 MHz crystal can operate at 3.3 V and may have a 3.3 V regulator rather than a 5 V regulator. Use of 3.3 V on the Arduino would allow dropping out the voltage converter board shown in the OP, ie, would remove one possible problem source.</p> <p>If you have a 3.3 V power supply, you can attach that to Arduino Vcc, but don't attach more than one power source at once. Ie, if you attach 3.3 V to Vcc, don't plug in a USB cord, and don't hook +RAW or the power jack to anything.</p> <p>The RS232 standard provides several control lines [eg, request to send (RTS), clear to send (CTS), data terminal ready (DTR), and data set ready (DSR)] which could be discussed in a comprehensive treatise on diagnosing RS232 problems but won't be discussed at any length here. A <a href="http://beaglebone.cameon.net/home/serial-ports-uart" rel="nofollow">Beaglebone Serial ports / UART</a> webpage on beaglebone.cameon.net shows pin numbers for control lines, eg:</p> <pre><code>        RX     TX     CTS    RTS    Device
UART4   P9_11  P9_13  P8_35  P8_33  /dev/ttyO4
</code></pre> <p>It may be necessary to jumper CTS high or low.</p> <p>To recap: we need to connect TxD, RxD on device 1 to RxD, TxD on device 2. To make that connection, we need to identify the TxD and RxD pins on each device. Beaglebone and Arduino devices may have several sets of TxD, RxD pairs. One of the best, quickest, and most certain ways to check that you are using the correct pair (eg, TX0,RX0 instead of TX1,RX1 on an Arduino Mega, or UART4 instead of UART3 or UART5 on a Beaglebone) and have properly enabled it is to run a program that continuously sends a pattern of data to TxD, and verify with an oscilloscope, logic analyzer, logic probe, or LED that the expected data pattern appears at the expected pin.</p> <p>If you are using a scope, the program should send a simple and steady stream of data at a reasonably high datarate, like 9600 bps. Eg, run a program like <code>while True: port.write(b'U')</code> and check that your TxD pin is toggling at the right rate and other pins aren't.</p> <p>If you are using an LED, make it not too bright (eg, a 2 kΩ-20 kΩ dropping resistor) so you can look closely at the light; make it red or amber so it will light up ok on 3.3 V; use a low datarate (300 bps) so you can see flickering bits; and use a repetition pattern obvious to the eye, eg (in Python with pyserial; the port name is an assumption, use the device for the TxD under test):</p> <pre><code>import serial, time
port = serial.Serial('/dev/ttyO4', 300)
while True:
    port.write(b'@' * 60)   # ie, send a string of 60 @ signs
    time.sleep(1)
</code></pre> <p>On the Beaglebone, if you don't find any output on Tx4, revise your test program to output on Tx0 (which I think is automatically enabled and available in userspace) and use that to verify that your test program and probe are working.
If it works ok on Tx0 but not on Tx4, look for additional web information about enabling Beaglebone serial ports and making them available to the user.</p> <p>After you have verified that output works on some TxD pins with no voltage shifter attached, attach the shifter and repeat the test. If successful, also test the outputs of the voltage shifter. Then attach RxD of the other device to shifter output and verify that the desired data pattern is arriving at that RxD pin.</p> <p>If you are using a scope, you'll have seen whether voltages are correct (ie 0 and +3.3 or +5.0) at each stage of each connection. If you are using an LED probe, you may be able to tell if voltage at one point is more than at another; but you might also try running data patterns with more zeroes (eg long strings of <code>\0</code>), or more ones (long strings of <code>0x7F</code> or <code>0xFF</code> characters) and using a voltmeter to estimate voltages.</p>
6314
2015-01-08T15:02:47.410
|arduino|control|serial|communication|beagle-bone|
<p>I'm trying to get an Arduino to talk with a BeagleBone Black. I have followed <a href="http://hipstercircuits.com/enable-serialuarttty-on-beaglebone-black/" rel="nofollow">this</a> tutorial for getting ttyO4 open on the BBB, and used the following command to set the serial line correctly: </p> <pre><code>stty -F /dev/ttyO4 cs8 9600 ignbrk -brkint -imaxbel -opost -onlcr -isig -icanon -iexten -echo -echoe -echok -echoctl -echoke noflsh -ixon -crtscts
</code></pre> <p>Wiring is set up according to <a href="http://www.instructables.com/id/How-to-make-a-BeagleBone-and-an-Arduino-communicat/" rel="nofollow">this</a> tutorial.</p> <p>Next, data is sent using the following method: </p> <pre><code>echo s123 &gt; /dev/ttyO4
</code></pre> <p>The Arduino uses the following code to check for serial communication:</p> <pre><code>#include &lt;SPI.h&gt;

void setup(){
  //////////////SETUP///////////////////////
  Serial.begin(9600);
  pinMode(12, OUTPUT);
}

void loop(){
  if(Serial.available() &gt;= 4){
    digitalWrite(12, HIGH);   // turn the LED on
    delay(1000);              // wait for a second
    digitalWrite(12, LOW);    // turn the LED off
    delay(1000);              // wait for a second
    digitalWrite(12, HIGH);
    byte b1, b2, b3, b4;
    b1 = Serial.read();
  }
}
</code></pre> <p>However it seems no message is received. It does not give any error either. </p> <p>As an alternative I have also tried a variant of the code suggested in the wiring tutorial, resulting in the following code:</p> <pre><code>import sys
from bbio import *

Serial2.begin(9600)
for arg in sys.argv:
    print arg
    Serial2.write(arg)
    delay(5)
</code></pre> <p>Called with <code>python test s123</code>, this printed s123, but the Arduino remained silent.</p> <p>Edit: I have now also tried to exactly follow the wiring tutorial, which gave me the following sketch: </p> <pre><code>char inData[20];    // Allocate some space for the string
char inChar = -1;   // Where to store the character read
byte index = 0;     // Index into array; where to store the character

void setup() {
  Serial.begin(9600);
  pinMode(13, OUTPUT);
  digitalWrite(13, HIGH);
  delay(2000);
  digitalWrite(13, LOW);
  delay(500);
}

void loop() {
  Serial.write("A");
  digitalWrite(13, HIGH);
  delay(100);
  digitalWrite(13, LOW);
  delay(100);
  if (Comp("A") == 0) {
    digitalWrite(13, HIGH);
    delay(1000);
    digitalWrite(13, LOW);
    delay(500);
  }
}

char Comp(char* This) {
  while (Serial.available() &gt; 0)  // Don't read unless you know there is data
  {
    if (index &lt; 19)               // One less than the size of the array
    {
      inChar = Serial.read();     // Read a character
      inData[index] = inChar;     // Store it
      index++;                    // Increment where to write next
      inData[index] = '\0';       // Null terminate the string
    }
  }
  if (strcmp(inData, This) == 0) {
    for (int i = 0; i &lt; 19; i++) {
      inData[i] = 0;
    }
    index = 0;
    return(0);
  }
  else {
    return(1);
  }
}
</code></pre> <p>And on the BBB we turn on the echo script with</p> <pre><code>/PyBBIO/examples$ sudo python serial_echo.py
</code></pre> <p>The effect remains that there is no error, but also no data delivery. </p>
Communicating between a beaglebone black and an arduino using ttyO4
<p>First some terminology, finding camera pose is known in computer vision as external camera calibration. Since you mentioned Kalman filter, I assume you need to need to "update" the so-called external parameters of the camera continously (i.e. the camera is moving). </p> <p>If the camera moves in front of the reference frame you might use the usual chessboard pattern as a calibration rig. If however the camera moves about the reference frame, you might want to use other object and test robust feature detectors (like SIFT) but you have to make sure your object doesn't have symmetrics that might lead to ambiguities..</p> <p>In case you don't have control over the envionment, there are the so-called automatic calibraiton methods (see reference). The problem with this approach is that you have the assumption that the features you find in your envionment belongs to rigid bodies (on Mars you can make such assumption safely ;) </p> <p>References:</p> <ul> <li>SIFT <a href="http://en.wikipedia.org/wiki/Scale-invariant_feature_transform" rel="nofollow">http://en.wikipedia.org/wiki/Scale-invariant_feature_transform</a></li> <li>OpenCV Camera Calibration <a href="http://docs.opencv.org/trunk/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html" rel="nofollow">http://docs.opencv.org/trunk/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html</a></li> <li>Carrera et al, "SLAM-Based Automatic Extrinsic Calibration of a Multi-Camera Rig", 2011</li> <li>Miksch et al, "Automatic Extrinsic Camera Self-Calibraiton Based on Homography and Epipolar Geometry", 2010</li> </ul>
6315
2015-01-08T16:31:38.217
|kalman-filter|computer-vision|pose|
<p>I am building an estimator that solves for the camera pose relative to a reference frame which contains a known set of features and edges. Currently, the system works with an unscented Kalman filter and four known points (red LEDs) in the reference frame. I am now hoping to improve robustness by adding edges to the model, as well as robust features. I would like to add additional points that are uncovered by some OpenCV feature-finding function (FAST, cornerHarris, ...).</p> <p>So far I have found the papers "Fusing Points and Lines for High Performance Tracking" and "Robust Extended Kalman Filtering For Camera Pose Tracking Using 2D to 3D Lines Correspondences", which seem to detail how to fuse edge and feature matching for pose estimation.</p> <p>Is there a strategy to populate the known set of edges and features when it is impractical to measure them with a ruler/tape measure? My first thought is to start with a small known set of features, my red LEDs, then run some SLAM algorithm and keep all features/edges that have some minimum certainty.</p> <p>Thanks a bunch!</p> <p>Update: I have misunderstood the RANSAC algorithm. This is not appropriate for my application. </p> <p>For those interested, I am hoping to use a similar approach to the one presented in the following paper.</p> <p>Youngrock Yoon, Akio Kosaka, Jae Byung Park and Avinash C. Kak. "A New Approach to the Use of Edge Extremities for Model-based Object Tracking." International Conference on Robotics and Automation, 2005.</p>
Pose estimation, how to populate set of known edges and points?
<p>I would recommend looking into ROS. It's robotics middleware that allows for easy communication between separate pieces of code. It works with Python and C++ and handles all the building and compiling. There are a ton of ROS libraries, called packages, which the community has put together and maintains. </p> <p>It looks like ROS Indigo has been successfully built on the Jetson TK1:</p> <p><a href="http://wiki.ros.org/indigo/Installation/UbuntuARM" rel="nofollow">http://wiki.ros.org/indigo/Installation/UbuntuARM</a></p> <p>Here are some ROS tutorials that cover all of the basics:</p> <p><a href="http://wiki.ros.org/ROS/Tutorials" rel="nofollow">http://wiki.ros.org/ROS/Tutorials</a> </p>
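<p>To give a feel for how little code a ROS node needs, here is a minimal rospy publisher sketch; the node name, topic name and rate are arbitrary choices:</p> <pre><code>import rospy
from std_msgs.msg import String

rospy.init_node('talker')
pub = rospy.Publisher('chatter', String, queue_size=10)
rate = rospy.Rate(10)   # 10 Hz
while not rospy.is_shutdown():
    pub.publish(String(data='hello'))   # any node subscribed to 'chatter' sees this
    rate.sleep()
</code></pre>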
6322
2015-01-10T08:32:31.583
|linux|
<p>After working for a long time with my Arduino Due, I needed a better and more powerful prototyping platform for my future projects, so I have placed an order for an NVIDIA Jetson Tegra K1 board, which runs Linux and supports CUDA-based development. Being a newbie to Linux, I have no idea where to start or what to do to get started with code execution on the Jetson board. Please suggest the initial steps required and where I can get familiar with the Linux environment.</p> <p>Thank you</p>
Getting started with Jetson Tegra K1
<p>18 servos is a lot of power no matter their size, but you are going to need to be more specific about what kind of servos you are talking about. That said, <a href="https://robotics.stackexchange.com/questions/416/what-is-the-best-way-to-power-a-large-number-27-servos-at-5-v?rq=1">What is the best way to power a large number (27) servos at 5 V?</a> is a good answer to your question. </p>
6327
2015-01-10T15:09:06.450
|power|otherservos|beagle-bone|hexapod|
<p>I am trying to build a hexapod with camera interfacing using a <strong>BeagleBone Black</strong> for a college project. I'm not sure what <strong>power supply</strong> to use to power the bot, bearing in mind that it should be <strong>portable (mobile)</strong> and that it should power about <em>18 servo motors</em> along with the <em>camera</em>, <em>WiFi</em> and the <em>processor</em>. Your help is needed very badly, as I'm nearing the <em>deadline</em> for the project.</p>
beaglebone black power supply for hexapod
<p>Ubuntu is a pretty common distribution, so it should be pretty easy to work with. However, Ubuntu is targeted at desktops, so I'd recommend using a light-weight flavor for the BeagleBone, for example <a href="http://xubuntu.org/" rel="nofollow">Xubuntu</a> or <a href="http://lubuntu.net/" rel="nofollow">Lubuntu</a>. But I also know that installation on a board like this can be a pain, so I'd try to find a standard image if possible.</p>
6328
2015-01-10T15:15:58.930
|control|programming-languages|beagle-bone|linux|
<p>I'm trying to build a hexapod with a BeagleBone in the Linux environment (I'm thinking of using Ubuntu). What is the <strong>best language</strong> to use for <strong>coding</strong> the <em>robot controls, camera and WiFi integration</em>, etc.?</p>
language to code Beaglebone
<p>The device you are looking for is broadly known as a slip ring, but since you are trying to power other motors through it you should be aware that a slip ring is not electrically identical to a wire. </p> <p>Depending on the amount of current you are trying to send, the slight variations in impedance that your slip ring will produce as it spins can damage some motor controllers. There is no real way (that I'm aware of) to find out whether this will happen to you... until it happens.</p>
6331
2015-01-11T15:25:34.317
|motor|stepper-motor|power|
<p>How would one go about passing power through a motor?</p> <p>Let's say we have some basic robot which has a motor that slowly spins a limb; on each end of that limb, there is a motor which again spins a limb. Because the first motor is always going to be spinning, any wires would twist and eventually break, so a wired approach wouldn't work. The same goes for the subsequent motors.</p> <p>I know that DC motors use brushes to get past this, but how is this generally solved in engineering/robotics? This must be a problem that has come up before, and there must be a solution to it.</p> <p>Any ideas? :)</p>
Passing power through a motor
<p>In addition to the reasons mentioned in other answers about the numerical conditioning of inverse depth, a major reason for this term to appear specifically in the visual odometry literature is the way that depths are computed from stereo vision: after rectification, 3D information is inferred from the distance in x between where a point appears in the two cameras' images.</p> <p>Depth, $Z$, is then computed from disparity, $d$, as $Z=\frac{fB}{d}$, where $f$ and $B$ are the focal length (in pixels) and camera baseline (in meters) respectively. So working in the space of inverse depths puts you in the space of disparities, the quantity directly being estimated, and it becomes easier to work with distributions or errors in that quantity.</p>
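<p>To spell that out: with inverse depth $\rho \equiv 1/Z$, the stereo model becomes $\rho = d/(fB)$, i.e. inverse depth is <em>linear</em> in the measured disparity. A roughly Gaussian error on $d$ therefore maps to a roughly Gaussian error on $\rho$, whereas the error on $Z$ itself grows quadratically with depth and becomes badly skewed for distant points.</p>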
6334
2015-01-12T02:54:50.400
|slam|computer-vision|odometry|
<p>Reading some papers about visual odometry, I see that many use inverse depth. Is it just the mathematical inverse of the depth (meaning $1/d$), or does it represent something else? And what are the advantages of using it?</p>
What is inverse depth (in odometry) and why would I use it?
<p>I assume you are talking about a Bode plot of a <a href="http://en.wikipedia.org/wiki/Transfer_function" rel="nofollow">transfer function</a> response, which relates the input of a system to the output in the frequency domain. One way of measuring such a response would be to input a sinusoidal signal of one particular frequency, and measure the amplitude and phase shift of the output. So the input will have the following form,</p> <p>$$ x_{in}(t) = A \sin(\omega t). $$</p> <p>The output will always also be a sinusoidal signal of the same frequency, assuming that the dynamics of the system are linear (so no $\dot{x}=x^2$), but possibly with a different amplitude and phase. So the output will have the following form,</p> <p>$$ x_{out}(t) = B \sin(\omega t + \phi). $$</p> <p>The amplitude of the Bode plot would be the ratio $B/A$, and $\phi$ would be the phase shift. There are more efficient ways of measuring a transfer function, however I think this helps explain the meaning of the Bode plot of a transfer function in the frequency domain.</p> <p>Usually a logarithmic scale is used for the frequency axis of a Bode plot, as well as for the amplitude (usually in decibels). A linear scale is usually used for the phase information. A transfer function is usually expressed in complex values instead of amplitude and phase, since that information can also be extracted from complex numbers. When you want to find the response of a system, you can just multiply the transfer function by the frequency domain of the input signal (when you have amplitude and phase separate, you can multiply the amplitudes and add the phases).</p> <p>The frequency domain of an impulse has constant amplitude and phase over all frequencies; if you multiply this by the transfer function, you basically get the transfer function back (possibly scaled) as the frequency domain of the output. A step is the integral of an impulse, so its spectrum is $1/(j\omega)$, and the steady-state value of the step response is given by the transfer function's amplitude at 0 Hz, the DC gain. That is where you can read off the offset from zero (I assume this is what you mean by static error).</p>
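<p>If you want to experiment numerically, SciPy can generate Bode data for a transfer function; here is a minimal sketch with an arbitrary first-order example system $1/(s+1)$:</p> <pre><code>import numpy as np
from scipy import signal

sys = signal.TransferFunction([1], [1, 1])   # example system 1/(s + 1)
w, mag, phase = signal.bode(sys)             # mag in dB, phase in degrees

# The amplitude at the lowest frequency approximates the DC gain,
# which gives the steady-state value of the step response.
print(mag[0], "dB at", w[0], "rad/s")
</code></pre>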
6339
2015-01-12T10:41:26.060
|control|automatic|
<p>I'm not sure if this is the correct forum for this question about automatic control, but I'll try to post it anyway.</p> <p>So, I've just started to learn about control systems and I have some trouble understanding Bode plots. My textbook is really unstructured and the information I find on the Internet always seems a little bit too advanced for a beginner like me to grasp, so I hope you can help me to understand this. </p> <p>I would like to know what the information we find in the Bode plot can tell us about how the corresponding step responses behave. I know that a low phase margin will give an oscillatory step response and that the crossover frequency decides the rise time. But how can we see in the Bode plot whether the step response has a static error, and what does the phase graph of the Bode plot actually mean?</p>
Understanding the Bode Plot
<p>The answer is on page 2 of the datasheet. The minimum "Input High Voltage" is listed as "70% × VS". Therefore, if you supply voltage (VS) is 5V, the minimum specified input voltage would be 3.5V. This value is for operation over the entire specified temperature range.</p> <p>It MAY work with 3.3V inputs (probably unreliably), but it would be better to convert the inputs to 5V logic. If your application will allow you to use a low-side driver, you could power the LT1157 from 3.3V and use a low-side driver to switch power for your 5V load (similar to the "Mixed 3.3V and 12V High- and Low-Side Switching" example in the datasheet.</p>
6341
2015-01-12T11:53:02.410
|circuit|
<p>I plan to use the <a href="http://cds.linear.com/docs/en/datasheet/lt1157.pdf" rel="nofollow">LT1157</a> in my application PCB, controlled from a microcontroller, to switch the on/off state of 2 module boards which will be connected to the PCB. </p> <ul> <li>1st load is 5 V, 1 A.</li> <li>2nd load is 3.3 V, 500 mA.</li> </ul> <p>The LT1157 will get a 5 V input at the V<sub>s</sub> terminal. </p> <p>Does anyone know how much voltage is required at the IN1 and IN2 pins? I couldn't find where the datasheet says how much voltage can be used here. I am guessing it will be 5 V, but can it do logic levels with 3.3 V? My microcontroller board gives an output of 3.3 V and not 5 V, so I'll have to add a logic-level converter before feeding the pins IN1 and IN2 if they're not 3.3 V tolerant. </p> <p>Please confirm if anyone has used this IC before. </p>
LT1157 Logic Level Question
<p>The text is saying that system perturbations and measurement noise look identical to the controller. That is, a controller will react in the same fashion to both system perturbations and noise in the measurement. By increasing perturbation rejection, you also make the system more susceptible to noise from your sensors. </p> <p>The sensitivity function is the ability to reject system perturbations. The complementary sensitivity function is the ability to filter noise.</p> <p><a href="http://lorien.ncl.ac.uk/ming/robust/sensfunc.pdf" rel="nofollow">http://lorien.ncl.ac.uk/ming/robust/sensfunc.pdf</a></p>
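<p>To see where $S+T=1$ comes from: with loop transfer function $L=PC$, the standard definitions are</p> <p>$$ S = \frac{1}{1+L}, \qquad T = \frac{L}{1+L}, \qquad S+T = \frac{1+L}{1+L} = 1, $$</p> <p>where $S$ maps output disturbances to the output and $T$ maps measurement noise to the output; pushing one down at a given frequency necessarily pushes the other up.</p>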
6348
2015-01-12T19:54:05.230
|control|noise|automatic|
<p>I'm studying for a test in automatic control and I have some trouble understanding sensitivity functions and complementary sensitivity functions. </p> <p>There's one assignment from an old exam that says "Someone suggests that you should reduce perturbations and measurement noise simultaneously. Explain why this is not possible."</p> <p>The correct answer says: "Since the sensitivity and complementary sensitivity transfer functions add up to 1, i.e. $S+T=1$, one cannot improve both the output disturbance and measurement error suppression at the same time."</p> <p>I don't really understand this answer and my textbook is not too much help either, so I would appreciate it a lot if someone could explain how they got to this answer. Also, does the sensitivity function always represent the perturbations in the system and the complementary sensitivity function the measurement noise? My textbook seems to imply this, but I'm really not sure if this is always true.</p>
What Does the Sensitivity Function Mean?
<p>"Is there a way to account for possible error in the speeds of the motors so that the robot can end up in a very precise location?"</p> <p>The other answers describe the approximate solution (encoders). It depends on what you define 'precise' as. For an Arduino budget project, it is probably precise enough. But typically, say you direct your Arduino robot to move in a 1 meter square, you can expect it to be off by several cm when it returns to the start point. That is usually fine for an Arduino robot, but imagine something 100 times bigger (a car) being off by 100 times as much, now you have problems.</p> <p>This problem is overcome by adding additional sensors. Each one can add additional accuracy until required precision is reached.</p> <p>A gyro helps by giving more accurate short term turning information, as encoders depends on mechanical solutions that have slippage.</p> <p>A compass works by giving more long term heading information, but is subject to disturbances.</p> <p>A GPS, if available, can provide periodic 'corrections' to overall location information.</p> <p>A SLAM algorithm, often laser range finder based, can triangulate on detected objects to help improve accuracy.</p> <p>Lately the use of images, especially stereo, can also provide triangulation information. This is often referred to as optical odometry.</p> <p>All of these additional measurements, separate or in some combination, is 'fused'. Very often with some form of a Kalman filter. The filter helps prioritize which signals to use dynamically.</p>
6351
2015-01-12T21:36:46.953
|arduino|mobile-robot|two-wheeled|
<p>Our goal is to drive an autonomous robot with a differential locomotion system using two identical electric motors (one for each wheel) and an Arduino Uno. From our understanding, over time the motors can lose accuracy and one motor may end up rotating more than the other. Is there a way to account for possible error in the speeds of the motors so that the robot can end up in a very precise location?</p> <p>Our thought was to have an indicator which would allow us to count the number of rotations of each motor and compensate if a noticeable difference began to appear.</p>
Accounting for error in multiple electric motors
<p>Update: After more debugging I have traced the variance in the potentiometers to a shaky ground in the RoboClaw 2x60A motor controller. When I connect the ground to the RoboClaw, the potentiometers vary by ±4 to ±6 degrees; when the ground is not connected to the RoboClaw, the pots vary by ±1, rarely ±2. I have tried using the Arduino as the logic battery and it doesn't help.</p>
6355
2015-01-13T01:49:07.980
|arduino|motor|pid|software|avr|
<p>Background: I am new to PID. For my first PID project I am using a simple P loop and 300 degree linear potentiometers for position feedback. I am using the RoboClaw 2x60A motor controller, which has 64 speed steps. Sometimes the potentiometers can vary by as much as ±4 degrees when not in motion. I am using an Arduino Mega with a 10-bit ADC to control the motors. </p> <p>My Question: How can I filter or reduce the variance in the potentiometers? In addition, it takes a certain amount of time for the motors to react to a command, and it seems to throw off the P loop. How do I account for the latency in my program?</p> <p>Example: For this example the P loop was run every 33-36 milliseconds. I will tell the motor to go to 250 deg/sec and it will go to 275 deg/sec; the P loop then reacts by lowering the value sent to the motor, but the speed then increases to 400 deg/sec, and when the P loop lowers the value again, the speed drops to 34 deg/sec.</p> <p>Thanks so much for any help, Joel</p>
Dealing with position inaccuracy and latency in PID Loop
<p>You may want to consider mastering the art of controlling an LED matrix first. Then, replace the LEDs with some sort of actuator (motor, servo, or other mechanism) at each node of the matrix that performs the up and down motion that you seek.</p> <p>It sounds simple, but there are other details to consider. Particularly, you'll probably need more power to drive the actuators than you would for LEDs. This, of course, depends on the actuators that you choose. </p> <p>There are probably more factors to consider beyond what I've mentioned (e.g. cost, precision, sensor feedback, etc...), but the first step would best be learning how to control an LED matrix. I might even advise looking into using shift registers rather than multiplexing.</p> <p>As for achieving vertical motion of the dowels with an actuator, there are a number of ways to do it. The easiest may be using a rack and pinion or a closed-loop pulley with an actuator attached at one end. </p>
6356
2015-01-13T02:10:59.060
|arduino|mechanism|servomotor|
<p>I am completely new to this site and robotics, but I have experience in programming, and programming microcontrollers. </p> <p>I would like to create a grid of "pixels", where each "pixel" is a metal or wooden dowel that is programmed to push in and out, like a piston. </p> <p>I'm imagining a lot of pixels, maybe 40x40, where each could be quite small in diameter (1/4"). The Arduino would have control over the linear movement - up and down - of each pixel.</p> <p>Could anyone point me in the right direction for accomplishing this?</p>
How do I achieve this? Grid of dowels powered by piston-like movement
<p>It depends on whether you're talking point-to-point or just operating in a WiFi field. According to the <a href="http://cdn.sparkfun.com/datasheets/Wireless/WiFi/ds_xbeewifis6b.pdf" rel="nofollow">datasheet</a>, the transmit power is +16 dBm (40 mW) and the maximum sensitivity is -93 dBm. According to <a href="http://www.afar.net/rf-link-budget-calculator/" rel="nofollow">my favorite link budget calculator</a>, with two of these modules, both with 22 dBi gain antennas, you will get around 6 miles (RF line of sight) out of them with a respectable 33 dB fade margin. But I suspect you are asking about accessing a standard WiFi router: with a receiving antenna gain of 0 dBi (perfectly omnidirectional) and a sensitivity of -70 dBm (a poor access point), you will have a fade margin of around 13 dB, which would work, and I suspect it would be higher in practice. So those modules should work for your application, but the best way to find out is to test them. </p>
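<p>For reference, the arithmetic behind such a calculator is a standard free-space link budget; here is a minimal sketch that reproduces the point-to-point numbers above (real-world range will be below free space):</p> <pre><code>import math

def fspl_db(distance_m, freq_hz):
    # Free-space path loss: 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

tx_dbm, tx_gain, rx_gain, sensitivity_dbm = 16, 22, 22, -93
distance = 9600   # about 6 miles, in metres
rx_dbm = tx_dbm + tx_gain + rx_gain - fspl_db(distance, 2.4e9)
print("fade margin:", rx_dbm - sensitivity_dbm, "dB")   # roughly 33 dB
</code></pre>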
6366
2015-01-13T18:11:37.700
|wifi|
<p>Has anyone used the XBee WiFi modules? Done a range check on them?</p> <p>With my laptop I get a range of around 400 m from industrial-grade access points on a football field. How good are these devices? If I get an SMA connector version and use a higher-gain antenna, am I looking at ranges of 250-500 m? (Talking 18-22 dBi gains here.)</p>
How can I improve the range of an XBee S6B?
<p>This thread might also be helpful: <a href="http://answers.ros.org/question/9665/test-for-when-a-rospy-publisher-become-available/" rel="nofollow">http://answers.ros.org/question/9665/test-for-when-a-rospy-publisher-become-available/</a></p> <p>The question there is specifically about Python, but I think the root cause is ROS-specific, not language-specific.</p>
6374
2015-01-14T11:54:54.653
|ros|c++|
<p>I have the following code: </p> <pre><code>void NewCore::spin(){
  ros::Rate rate(10);
  int first = 1;
  while(ros::ok()){
    if (first){ askMath(); first = 0; }
    ros::spinOnce();
    rate.sleep();
  }
}

int main(int argc, char **argv){
  ros::init(argc, argv, "newCore");
  NewCore nc;
  nc.init();
  nc.spin();
}

void NewCore::init(){
  mngrSub = handle.subscribe&lt;std_msgs::String&gt;("/tawi/core/launch", 10, &amp;NewCore::launchCallback, this);
  mngrPub = handle.advertise&lt;std_msgs::String&gt;("/tawi/core/launch", 100);
  mathSub = handle.subscribe&lt;std_msgs::String&gt;("/display", 10, &amp;NewCore::launchCallback, this);
  serSub  = handle.subscribe&lt;std_msgs::String&gt;("/tawi/arduino/serial", 100, &amp;NewCore::serialCallback, this);
  mathPub = handle.advertise&lt;std_msgs::String&gt;("/questions", 100);
  ballPub = handle.advertise&lt;std_msgs::Int16&gt;("/tawi/core/ballcount", 100);
  nmbrPub = handle.advertise&lt;std_msgs::Int16&gt;("/tawi/core/number", 100);
}

void NewCore::askMath(){
  ROS_INFO("addition1d&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;");
  std_msgs::String question;
  question.data = "1digitAddition";
  mathPub.publish(question);
}
</code></pre> <p>(Code that isn't interesting has been removed.)</p> <p>Running this produces the following output: </p> <pre><code>$ rosrun glados newCore
[ INFO] [1421236273.617723131]: addition1d&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;
</code></pre> <p>However, if I have the following command running during launch:</p> <pre><code>$ rostopic echo /questions
</code></pre> <p>then it does not show an initial message being sent.</p> <p>Changing</p> <pre><code>if (first){ askMath(); first = 0; }
</code></pre> <p>into </p> <pre><code>askMath(); first = 0;
</code></pre> <p>does appear to work, but it then sends a message every cycle rather than just the one at the start. </p> <p>Does anybody know what is wrong here? </p>
ros send message on startup doesn't seem to work
<p>Yes, the Jacobian relates the joint velocities to end-effector velocity through this equation:</p> <p>$$ \mathbf{v}_e = \mathbf{J}(\mathbf{q}) \dot{\mathbf{q}} $$</p> <p>Where $\mathbf{q}$ is the joint angles, $\dot{\mathbf{q}}$ is the joint velocities, and $\mathbf{v}_e$ is the end-effector velocity. As you can see, the Jacobian, $\mathbf{J}$, is configuration dependent. So plug in some joint angles and velocities and you will get the velocity of the end-effector. </p> <p>To get the Cartesian position of the end-effector, given the joint angles, you use the <em>direct kinematics function</em>, also called <em>forward kinematics</em>. There are various methods to do this. Geometric analysis works if your arm is simple enough, e.g. a planar 2-link arm. Product-of-exponentials is another method. But the Denavit-Hartenberg method is probably the most widely used. I am not going to go into the details of this here. But basically you will get a transformation matrix for each joint, $\mathbf{A}_i^{i-1}(q_i)$, such that when you plug in the joint angle, you get the pose of joint $i$ relative to joint $i-1$. These can be combined in a recursive fashion to get the pose of the end-effector relative to the base of the arm:</p> <p>$$ \mathbf{T}_n^0(\mathbf{q})=\mathbf{A}_1^{0}(q_1) \mathbf{A}_2^{1}(q_2) ... \mathbf{A}_n^{n-1}(q_n) $$</p> <p>Note that you can differentiate $\mathbf{T}_n^0(\mathbf{q})$ to get the <em>analytical Jacobian</em>. But people typically use the <em>geometric Jacobian</em>, which is not as hard to compute.</p> <p>Now to compute the required joint velocities to achieve a desired end-effector velocity, you must invert the Jacobian. But this only works if the number of DOFs equals the number of dimensions of your space:</p> <p>$$ \dot{\mathbf{q}} = \mathbf{J}^{-1}(\mathbf{q}) \mathbf{v}_e $$</p> <p>(Note that there are some arm configurations (such as singularities) where the Jacobian will not be invertible.) If you have more DOFs, you are under-constrained (i.e. there is more than one solution). Typically, people use the <em>right pseudo-inverse</em> of the Jacobian, which locally minimizes the norm of joint velocities. </p> <p>$$ \dot{\mathbf{q}}=\mathbf{J}^{\dagger}\mathbf{v}_{e} $$ where: $$ \mathbf{J}^{\dagger} = \mathbf{J}^T(\mathbf{J}\mathbf{J}^T)^{-1} $$</p> <p>Note that J is still dependent on q, but (q) is dropped for clarity.</p>
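<p>As a concrete sketch, here is the (standard textbook) geometric Jacobian of a planar 2-link arm used forwards, and its pseudo-inverse used backwards; the link lengths, angles and velocities are example values:</p> <pre><code>import numpy as np

l1, l2 = 1.0, 0.8            # link lengths (example values)
q = np.array([0.3, 0.7])     # joint angles in rad (example values)

s1, c1 = np.sin(q[0]), np.cos(q[0])
s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])

# Jacobian of a planar 2R arm: rows are end-effector x and y velocity
J = np.array([[-l1*s1 - l2*s12, -l2*s12],
              [ l1*c1 + l2*c12,  l2*c12]])

qdot = np.array([0.1, -0.2])
v = J @ qdot                          # forward: joint rates to end-effector velocity
qdot_needed = np.linalg.pinv(J) @ v   # inverse: desired velocity to joint rates
</code></pre>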
6382
2015-01-15T15:12:04.873
|kinematics|inverse-kinematics|forward-kinematics|manipulator|jacobian|
<p>I'm in the early stages of working with a simple robot arm, and learning about the Jacobian and inverse kinematics.</p> <p>From my understanding, the Jacobian can be used to determine the linear and angular velocity of the end effector, given the angular velocity of all joints in the arm. Can it also be used to determine the Cartesian position of the end effector, given the angles and/or positions of the joints?</p> <p>Furthermore, suppose that I want to determine the required angular velocities of the joints, in order to bring about a desired linear velocity of the end effector. Can this be done by simply inverting the Jacobian and plugging in the desired parameters?</p>
Can a Jacobian be used to determine required joint angles for end effector velocity/position?
<p>A slightly better method for home-made PCBs is to <a href="https://en.wikipedia.org/wiki/Printed_circuit_board_milling" rel="nofollow">mill the copper off</a> of a two sided PCB. Google pcb milling for many machines and services.</p> <p>Another crazy idea that will probably get more affordable in the coming years is to use a 3D printer that can extrude conductive and non-conductive materials. </p>
6383
2015-01-15T19:46:06.627
|design|
<p>I've made many PCBs at home, but there are still some mistakes. I have tried the ironing (toner transfer) and hand-drawing methods, but they don't work very well. I use Eagle CAD to design my PCBs. Please help me. </p>
PCB making at home
<p>The alternative method is to move the pen as you suggested (or the board, if you have room) instead of the robot. This is called an <a href="http://www.instructables.com/id/Low-Cost-Hobby-Servo-XY-Table/" rel="nofollow">XY table.</a></p> <p>To do this, you would need a set of stepper motors, some sort of linear rail system for movement, and a hardware controller board. The software will need to include acceleration and deceleration (and perhaps motion planning) to achieve smooth results. Coarse results can be achieved with simple instructions, as you mentioned. If you do not wish to write your own software, a <a href="https://www.google.com/webhp?#q=3d%20print%20control%20board" rel="nofollow">board designed for 3D printing</a> would work as is, and you would just need to learn <a href="https://code.google.com/p/dxf2gcode/" rel="nofollow">G-code</a> to send to it. </p> <p>Overall, I think the cost typically exceeds that of the turtle-bot approach, but the program can be simpler. Just be aware that steppers can and do fail to step if the mechanics bind in some way, so precision of construction is critical.</p> <p>An alternative to this alternative is inverse kinematics, where the pen is moved by non-XY motion. Again, the math can be complicated, but success has been <a href="https://www.youtube.com/watch?v=G3GtdstjT7U" rel="nofollow">accomplished</a> by those younger than high school (well, sort of).</p>
6386
2015-01-16T13:15:34.337
|motor|mechanism|
<p>For a high school project I will be building a robot that will draw an image on a whiteboard for you based on the instructions you give it. To accomplish this, a motor will move the pen on each axis, similar to how a 3D printer moves but without the Z axis. As far as code goes I'm fine, but I was wondering if anyone could give me some insight on how to go about building the robot (i.e. what motors, the best system for moving along the axes, etc.). All help is appreciated, thanks.</p>
Robotics advice needed
<p>I bought <a href="https://www.adafruit.com/products/1083" rel="nofollow">this ADC breakout board</a>. I will use a battery for it and put optocouplers on the I2C lines to completely separate the ADC circuit from the rest of the circuits. It also gives me a little better resolution on the potentiometers.</p>
6398
2015-01-17T21:38:32.953
|arduino|motor|electronics|noise|driver|
<p>Background:</p> <p>I am using an Arduino Mega connected to a RoboClaw 2x60A motor driver. I asked <a href="https://robotics.stackexchange.com/questions/6355/dealing-with-position-inaccuracy-and-latency-in-pid-loop">this question</a> about the system, but I have since narrowed the scope of the problem. I tried adding a bunch of different-size capacitors between 5 V and GND: when the RoboClaw is switched off, a 470 microfarad capacitor seems to eliminate all noise, but when I turn on the RoboClaw, no capacitance value I tried (4.7, 10, 100, 220, 320, 470, 540, 690, 1000, 1100 microfarads) seems to eliminate the noise. I even tried hooking up a 12 V battery with a 5 V regulator to the logic battery input on the RoboClaw and connecting it to the ground on the Arduino. Then I tried using a separate battery for the pots and connecting AREF to the +5 V of that battery.<br> No matter what I try, when the RoboClaw is on, the potentiometer value will vary by as much as ±6 degrees. I found the degrees using:</p> <pre><code>map(analogRead(A0), 0, 1023, 0, 300)
</code></pre> <p>In addition, I took a bunch of data and graphed it, and found that if I took 25 instantaneous data points and averaged them together it would significantly reduce the variance. I chose 25 because it takes 2.9 ms; 100 worked really well but took 11 ms. To help explain the averaging of the analog reads, here is my code:</p> <pre><code>unsigned int num = 0;
for (int i = 0; i &lt; 25; i++){
  num += analogRead(A0);
}
potReading = num / 25;
</code></pre> <p>My Question:</p> <p>What is my next step in eliminating this noise? Is there a formula I can use to find a better capacitance value? Should I try putting capacitors on each potentiometer between 5 V and GND? Any other IC I should try to help with this? On my previous question someone mentioned optocouplers; what type would work best and where in the circuit do they go? Is there code I can write to help reduce the size of the variance beyond what I have written?</p> <p>Thanks so much for any help, Joel</p>
Eliminating Electrical Noise from my motor driver
<p>Kinematic constraints involve only constraints on the motion (kinematics means the study of motion without considering the forces that cause it), which may involve configuration variables and their time derivatives (including higher-order derivatives). In particular, no inertial parameters are involved in the constraint. The constraint is geometric in nature, since it restricts the free motion in your configuration space (or state space if there is some constraint on the velocity) to a submanifold defined by the constraints. </p> <p>If the constraints involve no time derivative of the configuration variables, they are sometimes called holonomic constraints. Your robot is constrained to move on a submanifold (or a hypersurface if it pleases you). In this case, you have a reduced number of independent configuration variables; that is to say, the robot has a reduced number of degrees of freedom.</p> <p>If the constraints involve time derivatives of the configuration variables, they are called differential constraints. It is very tricky in this situation, since some differential constraints might actually come from time differentiation of holonomic constraints, in which case we say they are integrable. Integrable differential constraints are in fact holonomic constraints and should be treated as such. Non-integrable differential constraints are also called nonholonomic constraints. They may or may not reduce the robot's degrees of freedom, depending on certain reachability conditions (similar to a controllability condition); a standard example follows below.</p> <p>A dynamic constraint, conceivably, is one involving not only kinematics but also inertias, forces, damping, stiffness, etc. Usually up to second-order configuration variables are involved (third order may be involved in the case of vibration analysis, trajectory generation, etc.).</p>
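<p>The standard example of a nonholonomic constraint is a differential-drive (unicycle-like) robot with configuration $(x, y, \theta)$ and the rolling-without-slipping constraint</p> <p>$$ \dot{x}\sin\theta - \dot{y}\cos\theta = 0, $$</p> <p>which involves velocities and is not the time derivative of any function of $(x, y, \theta)$ alone, hence nonholonomic: the robot cannot slide sideways instantaneously, yet it can still reach any pose in the plane, so its degrees of freedom are not reduced.</p>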
6405
2015-01-20T08:57:02.873
|motion-planning|
<p>I would like to know the simple difference between kinematic, dynamic and differential constraints in robotic motion planning.</p>
Difference between kinematic, dynamic and differential constraints
<p>The state space is the space of possible values that the state can take. For any given system it depends on which variables you are taking into account. For instance, it is common to consider the 2D position $(x, y)$ and the orientation $\theta$ of the system. For this example the state space consists of three dimensions, where $x$ and $y$ could conceivably come from either $\mathbb{R}$ or some bounded subset, depending on the problem, and $\theta \in [0, 2\pi)$.</p> <p>Similarly, the control space consists of the possible control values that can be applied to the system. For example, if you are using torque control on a pendulum then it would be the torques your motor can produce: usually a bounded, and in practice a discrete, subset of $\mathbb{R}$.</p>
6406
2015-01-20T09:04:22.627
|motion-planning|
<p>I would like to know the difference between state space and control space in relation to motion planning. I would like a simpler explanation.</p>
State space and control space
<blockquote> <p>These programs don't include explicit support for the RAMBo as far as I can see</p> </blockquote> <p>It turns out the reason for that is that host software has evolved to declare support for specific firmware interfaces. The industry has evolved so that <a href="http://reprap.org/wiki/List_of_Firmware" rel="nofollow">there are several popular firmwares available</a>. These firmwares tend to support various motherboard designs, including the RAMBo and RAMPS specs (RAMBo is a refinement of RAMPS). Some even support very early RepRap boards, such as the Gen3 through Gen6 boards found on early MakerBots, for example.</p> <hr> <h1>The Answer</h1> <p>YES, MatterControl will drive a RAMBo board, provided it's flashed with a supported firmware. <a href="http://reprap.org/wiki/List_of_Firmware#Marlin" rel="nofollow">Marlin</a>, for example, is a firmware that MatterControl supports.</p>
6408
2015-01-20T18:41:46.057
|3d-printing|reprap|
<p>I'm fairly new to 3D printing. I'm considering motherboards I might use to control my printer. I'm hoping to find a board that I can easily control using either:</p> <ul> <li>ReplicatorG</li> <li>MatterControl</li> </ul> <p>I like these programs because they seem reasonably current, mature and straight-forward for beginners.</p> <p>My question is <strong>can I control a Rambo V1.2 board from either of these programs?</strong> These programs don't include explicit support for the RAMBo as far as I can see, but maybe I'm missing how printing software works at this point?</p> <hr> <h1>What is a RAMBo?</h1> <p>The RAMBo V1.2 board is a creative-commons/open source design. It integrates an Arduino, Stepper-Motor drivers, Heater &amp; Fan controls, Power Management and more.</p> <p>An example implementation looks like this: <img src="https://i.stack.imgur.com/bJ6KR.jpg" alt="RepRap Rambo V1.2G"></p> <p>For more background info on what a RAMBo board is, you may read about it on the <a href="http://reprap.org/wiki/Rambo" rel="nofollow noreferrer">RepRap community wiki</a>.</p>
Can ReplicatorG or MatterControl drive a RepRap RAMBo motherboard?
<p>Perhaps a hall effect sensor would work for your application? <a href="http://reprap.org/wiki/MagServo" rel="nofollow">(a)</a> <a href="http://www.eeweb.com/company-blog/melexis/360-through-shaft-position-sensor-solution" rel="nofollow">(b)</a> <a href="http://archives.sensorsmag.com/articles/0306/20/main.shtml" rel="nofollow">(c)</a></p>
6428
2015-01-23T15:38:08.377
|sensors|
<p>Is there a sensor that will produce a sinusoidal signal phase locked to a high RPM (7000 RPM) shaft? I am attempting to build a coaxial helicopter based on the architecture described in <a href="http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;arnumber=6696528" rel="nofollow">this paper</a> which requires increasing and decreasing drive torque once per revolution, and I would like to do this modulation in hardware.</p>
Sensor that will produce a sinusoid phase locked to a high RPM Shaft
<p>These types of low-cost 2-speed transmissions are almost always based on some centrifugal effect acting on a simple gear train.</p> <p>You are right: even knowing how they work, I could not find any described on the internet. Very surprising.</p> <p>Look at this image: </p> <p><img src="https://i.stack.imgur.com/stRmW.gif" alt="enter image description here"></p> <p>Imagine that, as the speed increases, the lower block rises.</p> <p>So now you have to imagine it attached to a shifting lever (or to the gears themselves).</p> <p><img src="https://i.stack.imgur.com/i68qA.gif" alt="enter image description here"></p> <p>Combine the 2 concepts and you have automatic 2-speed gear shifting. </p> <p>In practice, it is usually less costly to use infinitely variable transmissions for the same effect on anything of decent size/value.</p> <p><img src="https://i.stack.imgur.com/hPMGc.jpg" alt="enter image description here"></p>
6430
2015-01-23T17:35:22.643
|mechanism|
<p>This is not a robotics question, but this Stack Exchange is the closest I could find to mechanical engineering. Please refer me to a better place to ask this, if one exists. Hopefully someone might just know this.</p> <hr> <p>I got a pull-back car for my boy at McDonalds, and it has two gears. It starts slow, then speeds up after about two seconds. It's impressive to me, especially given the inherent cheapness of toys sold by McDonalds. It feels solidly built as well.</p> <p>I couldn't find anything related to this concept. The <a href="http://en.wikipedia.org/wiki/Pullback_motor" rel="nofollow">wiki on pullback motors</a> does not include any information on multiple gears.</p> <p>Any ideas on how this works? </p>
How does a two-gear pull-back car toy work?
<p>There are indeed many ways to implement an (Extended) Kalman Filter for IMU data. You may or may not include the bias in the state, depending on whether you also want to calibrate the sensor within the KF framework itself. $w(t)$ is the process noise; you do not choose it directly, it is just there to model the uncertainty, though you might use it to generate fake sensor data to test your algorithm. One piece of general advice: if you wrap the angles to $[-\pi, \pi)$ (or any other convention), remember to always compute the minimum difference between the two angles when computing the innovation vector, or you will get some nasty surprises.</p>
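<p>For readers who want a starting point, here is a minimal sketch of such a filter with state $[\psi, b]$ (yaw and gyro bias); the noise matrices are placeholders to tune for your sensors, and note the wrapped innovation discussed above:</p> <pre><code>import numpy as np

def wrap(a):
    # minimum signed angular difference, keeps angles in [-pi, pi)
    return (a + np.pi) % (2 * np.pi) - np.pi

x = np.zeros(2)                 # state: [yaw, gyro_bias]
P = np.eye(2)
Q = np.diag([1e-4, 1e-6])       # process noise (placeholder values)
R = np.array([[0.05]])          # magnetometer noise (placeholder value)
H = np.array([[1.0, 0.0]])      # we measure yaw only

def step(gyro_z, mag_yaw, dt):
    global x, P
    # predict: integrate bias-corrected gyro; bias modeled as constant
    F = np.array([[1.0, -dt], [0.0, 1.0]])
    x[0] = wrap(x[0] + (gyro_z - x[1]) * dt)
    P = F @ P @ F.T + Q
    # update with magnetometer heading, wrapping the innovation
    y = wrap(mag_yaw - x[0])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x += K.ravel() * y
    x[0] = wrap(x[0])
    P = (np.eye(2) - K @ H) @ P
</code></pre>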
6434
2015-01-24T13:25:51.940
|kalman-filter|imu|
<p>Hello, I am building a differential drive robot which is equipped with quadrature encoders on both of the motors. My aim is to be able to predict the heading (yaw angle) of the robot using a Kalman filter. I am using an MPU-9150 IMU. As of now I'm just interested in the yaw angle, not the roll/pitch. As I understand it, I will need to fuse the z-axis of the gyro with the magnetometer data in order to properly estimate the heading angle. My problem is how to set up the bias and covariances required for the Kalman filter to work. The gyroscope would be my prediction step and the magnetometer data would be my update step, right? From the datasheet I have found the angular random walk of my gyroscope to be 0.3 degrees/second for a 10 Hz motion bandwidth, and a constant bias of 20 degrees/second at room temperature. If I am not mistaken, I should include the bias in my state prediction equation? Also, how do I get the process covariance matrix Q? </p>
differential robot yaw estimation using kalman filter
<blockquote> <p>What parameters should I watch when selecting an antenna?</p> </blockquote> <p>The biggest one is going to be the application: does it need to operate outside? In pouring rain? Does it need to be omnidirectional? </p> <blockquote> <p>Is it ok to use indoor stick antenna and seal the mounting point with hot glue?</p> </blockquote> <p>Generally this will work just fine, depending on how harsh the environment is. Outside on a sunny day? It will probably be just fine. Outside 24/7 in all weather? You will probably need something rated for ingress and UV exposure.</p> <blockquote> <p>Does it change anything if the antenna will be sticking out of largish sheet of aluminum?</p> </blockquote> <p>No. It might cause minor interference; just be sure the antenna is isolated from the enclosure.</p> <blockquote> <p>The robot will also have GPS, is it possible that the two will interact badly under some circumstances?</p> </blockquote> <p>Nope, they use different signal bands, so they will not interfere. However, placing the GPS receiver inside a metal box could cause many issues.</p>
6436
2015-01-24T22:02:07.737
|mobile-robot|wifi|
<p>The time has come for my robot to get a more permanent computer than my laptop balanced on top of it. I have selected a mini-ITX board that can be powered directly from a battery, and some components that go with it, including a WiFi card; now I'm thinking about the antenna I will need.</p> <p>Constraints I have identified so far:</p> <ul> <li>The robot's body is a closed aluminium box, so I think this rules out keeping the antenna inside.</li> <li>The robot is intended to work outdoors, so it needs to be waterproof.</li> <li>Vibrations might be an issue.</li> </ul> <p>And the questions:</p> <ul> <li>What parameters should I watch when selecting an antenna?</li> <li>Is it ok to use an indoor stick antenna and seal the mounting point with hot glue?</li> <li>Does it change anything if the antenna will be sticking out of a largish sheet of aluminium?</li> <li>The robot will also have GPS; is it possible that the two will interact badly under some circumstances?</li> </ul>
Choosing a WiFi antenna for outdoor robot
<p>Yes, the Turnigy 9X and just about every RC transmitter can reverse channels. Look for servo setup, reverse; see step 9.1 in the <a href="http://www.hobbyking.com/hobbyking/store/uploads/529892926x11510x49.pdf" rel="nofollow">9X manual</a>.</p> <p>This assumes your left and right motors are on different channels. If you used a V-tail mixer to obtain tank-style drive, you will have to experiment with its setup instead (9.17).</p>
6445
2015-01-25T21:24:17.763
|motor|
<p>My setup consists of a brushed motor (ex-cordless-drill type) connected to a motor controller, which is in turn connected to a LiPo battery and an R/C receiver. All my cables are fitted with XT60 connectors except for the cable that goes to the receiver, which is a 3-wire pin connector (the usual white, red and black).</p> <p>The above setup is one of a pair which I am using in my battle robot. The motors are connected to drive wheels, left and right respectively. The problem is that the motors are turning in opposing directions.</p> <p>For some reason I neglected to switch the polarity of the wires of one motor at the time I attached the XT60 connectors, and I really am not looking forward to re-soldering. </p> <p>So my question is whether there is any fast way of reversing the direction of rotation without soldering? For instance, can the R/C transmitter (a Turnigy 9X without any modding) be programmed to switch up for down (hence forward for reverse)? </p> <p>Or can I maybe switch the pin connector going into the receiver? (I don't think so, because the ground is probably common, but worth asking just in case, I guess.) </p> <p>Any ideas, or should I just get soldering?</p>
Ways of reversing motor direction easily
<p>Microcontrollers, and software in general, do not operate in continuous time but in discrete time. So you need to use the discretized version of the PID controller and run it in the discrete-time (iteration) domain.</p> <p>If you are curious about quadrotors, I would suggest checking the Coursera or edX websites. Some time ago there was a nice quadrotor course made by the Technical University of Munich, if I remember right ;).</p>
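<p>Here is a minimal sketch of such a discretized PID, shown in Python to stay language-neutral; the gains and sample time are placeholders to tune:</p> <pre><code>class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, desired, measured):
        error = desired - measured
        self.integral += error * self.dt                  # rectangular integration
        derivative = (error - self.prev_error) / self.dt  # backward difference
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# called once per control iteration, e.g. every 4 ms:
# torque_phi = pid_phi.update(phi_desired, phi_measured)
</code></pre>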
6446
2015-01-25T21:45:11.597
|arduino|control|pid|
<p>Just to give a bit of context, here are the equations I'm using for the angular accelerations:</p> <p>$$ \ddot{\phi} = \frac{1}{J_x}\tau_\phi \qquad \text{and} \qquad \ddot{\theta} = \frac{1}{J_y}\tau_\theta $$</p> <p>So my plant gains would be $\ddot{\phi}/\tau_\phi = 1/J_x$ about the x axis and $\ddot{\theta}/\tau_\theta = 1/J_y$ about the y axis.</p> <p>The basic PID structure is</p> <p>$$ u = K_p(\text{desired}-\text{measured}) + K_i\int(\text{desired}-\text{measured})\,dt + K_d\,\frac{d}{dt}(\text{desired}-\text{measured}) $$</p> <p>Let's just say my plant gain for angular acceleration around the x axis is $\phi_g$ and my PID gain is $P_g$. To obtain a controller, do I compute $\phi_g P_g = L$ (the open-loop gain), and for the closed loop, $L/(1+L)$? </p> <p>My question is: am I right about what I'm doing, and do I implement the algorithm in time-domain or frequency-domain form? (Silly question, as the frequency domain is for analysis, but my only control experience is purely theoretical and entirely focused on analysis using root locus and Nyquist.)</p>
How do you design Quadcopter PID algorithm?
<p>Specifically, the <a href="https://en.wikipedia.org/wiki/Chi-squared_distribution" rel="noreferrer">Chi-Square Distribution</a> (or Chi2, $\chi^2$, or equivalently $\chi^2_1$) is used to model the probability of the absolute value of the deviation of the measurement from its expected value. This calculation is vital to tackle the <a href="https://en.wikipedia.org/wiki/Measurement_uncertainty" rel="noreferrer">measurement origin uncertainty</a> problem. It can also be used to determine the "correctness" of a multi-hypothesis estimate using a similar idea, but I won't touch on that specifically.</p> <p>Note that the $\chi^2_k$ distribution has $k$ degrees of freedom. Why $\chi^2_k$? Any time you assume <a href="https://en.wikipedia.org/wiki/Normal_distribution" rel="noreferrer">Gaussian</a> (e.g., $\mathcal{N}(0,1)$) measurement noise, you will encounter the $\chi^2$ distribution, because the <a href="https://en.wikipedia.org/wiki/Norm_%28mathematics%29" rel="noreferrer">two-norm</a> squared of an $n$-dimensional vector of $\mathcal{N}(0,1)$ variables follows a $\chi_{n}^2$ distribution [see: <a href="https://en.wikipedia.org/wiki/Chi-squared_distribution#Applications" rel="noreferrer">Applications of Chi-squared</a>].</p> <p>In SLAM it is commonly used to model the probability of a certain landmark providing a certain measurement. </p>
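<p>A common concrete use is the validation gate in data association: with innovation $\nu = z - \hat{z}$ and innovation covariance $S$, the squared Mahalanobis distance</p> <p>$$ d^2 = \nu^{\top} S^{-1} \nu \sim \chi^2_m $$</p> <p>for an $m$-dimensional measurement, so a landmark is accepted as the origin of measurement $z$ only if $d^2$ falls below the chosen $\chi^2_m$ quantile (e.g. about $5.99$ for $m=2$ at the $95\%$ level).</p>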
6447
2015-01-26T02:33:23.200
|slam|theory|
<p>Almost all SLAM back-end implementations compute chi2 estimates. The chi2 computation is usually used to score the goodness of fit of a model to the data. How is it related to the optimization framework for SLAM?</p> <p>Rgds, Nitin</p>
role of chi2 in SLAM back-end optimization
<p>For “Something that would prevent gears or linkages from breaking when the load gets too high”, look for <em><a href="https://www.google.com/search?hl=en&amp;site=imghp&amp;tbm=isch&amp;source=hp&amp;biw=1075&amp;bih=600&amp;q=shear+pin&amp;oq=shear+pin&amp;gs_l=img.3..0l10.2373.3993.0.4430.9.9.0.0.0.0.112.852.7j2.9.0.msedr...0...1ac.1.61.img..0.9.847.lIztnQY_HeA" rel="nofollow">shear pins</a></em> and <em>sacrificial <a href="http://www.amazon.co.uk/JR-PROPO-Sacrificial-Servo-Gear/dp/B00AJMII1C#productDetails" rel="nofollow">gears</a>, links</em>, <em><a href="http://www.flpflightteam.co.uk/~flpfligh/?p=2280" rel="nofollow">arms</a></em>, etc. Typically a shear key, shear pin, etc. is far less expensive than the components it protects.</p> <p>A shear pin (i.e., a transverse pin linking a gear to a shaft) is typically made of a softer material (e.g., plastic, aluminum, or mild steel) than the drive-train parts it protects. Or it may have shear zones that are cut down to a smaller diameter than the rest of the pin.</p> <p>Drive <a href="https://en.wikipedia.org/wiki/Key_%28engineering%29" rel="nofollow">keys</a> (e.g., square keys, Woodruff keys, and less frequently Scotch keys) are also sometimes used as shear elements.</p>
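<p>As a back-of-the-envelope illustration of sizing such a pin (my own numbers and a single-shear assumption, purely for illustration): the pin gives way when the load divided by its cross-sectional area exceeds the material's ultimate shear strength.</p>
<pre><code>import math

breakaway_force = 500.0    # N, load at which the pin should shear (assumed)
shear_strength = 40e6      # Pa, ballpark for a soft aluminum alloy (assumed)

area = breakaway_force / shear_strength       # required cross-section, m^2
diameter = 2.0 * math.sqrt(area / math.pi)    # solid round pin, m
print(f"pin diameter ~ {diameter * 1e3:.2f} mm")   # ~4 mm for these numbers
</code></pre>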
6452
2015-01-26T14:25:51.703
|mechanism|
<p>** If there's a better stack exchange site for this, let me know--the mechanical engineering one is closed so I'm not sure where else is appropriate **</p> <p>Do mechanical systems have fuses?</p> <p>In an electrical system, like a charging circuit for example, we prevent large loads from damaging the system with fuses and circuit breakers. These will disconnect the inputs from the outputs if the load gets too high.</p> <p>Do mechanical systems have a similar mechanism? Something that would prevent gears or linkages from breaking when the load gets too high by disconnecting the input from the output?</p>
Fuses for mechanical systems
<p>The wheels simply pivot. In terms of the mechanism, the designers have simply placed the steering axis of rotation through the center point of the wheel itself. There is no special name for this; <a href="https://robotics.stackexchange.com/q/4442/350">the practice of steering just means that all the wheels are oriented toward a single point</a>.</p> <p>The actuation is done by <a href="http://mars.nasa.gov/msl/mission/rover/wheelslegs/" rel="nofollow noreferrer">10 servo motors</a>: one per wheel for driving and one per corner wheel for steering.</p>
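<p>For intuition, here is a small geometric sketch of that "single point" rule: each steerable corner wheel is turned so its rolling direction is tangent to a circle about the chosen turn center. The wheel positions below are hypothetical, not Curiosity's actual dimensions:</p>
<pre><code>import math

# Corner-wheel positions (x forward, y left), in meters -- illustrative only.
wheels = {"front_left": (1.1, 0.8), "front_right": (1.1, -0.8),
          "rear_left": (-1.1, 0.8), "rear_right": (-1.1, -0.8)}

def steering_angles(turn_center_y):
    """Steering angle (rad, from the forward axis) for each corner wheel
    so that its axle line passes through the turn center (0, turn_center_y)."""
    return {name: math.atan2(x, turn_center_y - y)
            for name, (x, y) in wheels.items()}
</code></pre>
<p>Setting <code>turn_center_y</code> to 0 reproduces a turn-in-place maneuver, with each corner wheel angled tangentially around the rover's center.</p>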
6469
2015-01-29T05:54:56.640
|wheeled-robot|
<p>I read up on the wheels of Curiosity, and also about the suspension. But is there a name for the steering? It looks similar in nature to the front landing gear on an airplane, but searching those terms didn't turn up and answer. I've attached a picture with the area of interest highlighted. </p> <p>(Image: Gene Blevins/Reuters)</p> <p><img src="https://i.stack.imgur.com/hD8ca.jpg" alt="enter image description here"></p>
Is there a name for the steering style/wheel actuation used on Curiosity?
<p>Other possible sources of electronics for such applications are <a href="https://www.pololu.com/" rel="nofollow">pololu</a>, <a href="http://www.adafruit.com/" rel="nofollow">adafruit</a>, and <a href="https://www.sparkfun.com/" rel="nofollow">sparkfun</a>. All three of them offer all sorts of motor controllers, servo controllers, sensor interface boards, etc.</p>
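<p>Many of these boards enumerate on the laptop as a virtual serial port over USB, so driving a motor from your image-processing code can be as simple as the sketch below. This is a generic illustration: the port name, baud rate, and command bytes are made up and must be replaced with whatever protocol your particular controller documents.</p>
<pre><code>import serial  # pyserial

# Hypothetical command frame: header byte, motor id, 16-bit speed.
with serial.Serial("/dev/ttyACM0", 115200, timeout=1) as port:
    motor_id, speed = 3, 1500
    port.write(bytes([0xAA, motor_id, (speed >> 8) & 0xFF, speed & 0xFF]))
</code></pre>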
6476
2015-01-30T05:57:22.960
|motor|usb|
<p>I am looking to build some custom hardware (nothing too fancy, just some motors, cameras and the like), and I need it to be controlled by my laptop (it's going to have to do a non-trivial amount of image processing).</p> <p>Is there a way to attach $n$ motors to a laptop, where $n&lt;10$, via USB/e-SATA? It seems like something that should be very easy to solve, but I can't seem to find it anywhere.</p> <p>I am not looking to get an Arduino/Raspberry Pi; I really just want to connect the motors and be able to control them individually. I am comfortable adding more power from a second source to supplement the USB power.</p> <p>Ideas?</p>
How to run custom hardware from a laptop
<p>As Gouda mentioned, you cannot get any reasonable accuracy without knowing exactly how the control inputs translate to wheel movement.</p> <p>This is a very hard problem: imagine knowing only that approximately 6 volts are being applied to each motor, but not knowing anything about the efficiency of the motors in converting this to torque, the friction of the gears which translate that torque to the wheel, or the friction between the wheels and the floor in translating that torque to movement. This goes on and on. I have <em>never</em> heard of this working, or even being attempted.</p> <p>Instead, encoders are used to measure the effect of the control inputs on the wheels, by counting how much each wheel rotates. This is called using odometry to estimate the path of the robot, and was answered <a href="https://robotics.stackexchange.com/a/134/163">here</a></p>
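<p>For reference, the encoder-based approach boils down to a few lines per update. This is a generic differential-drive odometry sketch; the wheel radius, track width, and ticks-per-revolution are placeholders, not the actual Pololu specs:</p>
<pre><code>import math

R, TRACK, TICKS_PER_REV = 0.03, 0.15, 3600   # m, m, counts/rev (assumed)

def odometry_step(x, y, theta, d_left_ticks, d_right_ticks):
    dl = 2 * math.pi * R * d_left_ticks / TICKS_PER_REV    # left wheel travel
    dr = 2 * math.pi * R * d_right_ticks / TICKS_PER_REV   # right wheel travel
    dc = (dl + dr) / 2.0                                   # chassis center travel
    dtheta = (dr - dl) / TRACK                             # heading change
    x += dc * math.cos(theta + dtheta / 2.0)
    y += dc * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta
</code></pre>
<p>Even then, wheel slip accumulates, so the 1 mm error target in the question is unrealistic over any long run without an external reference.</p>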
6493
2015-01-31T22:16:57.810
|pid|wheeled-robot|odometry|
<p>I decided to work on a 2-wheeled robot position-mapping problem. For that I have 2 DC motors with encoders from Pololu. I have the following two questions:</p> <ol> <li><p>Do I need to know the model of each motor in order to tune a controller?</p></li> <li><p>What are the steps/approach used to get good mapping (let's say 1 mm maximum error)?</p></li> </ol>
2 wheeled, 2 motor robot control