Columns: source_id (int64), question (string), response (string), metadata (dict)
10
I'm new to A.I. and I'd like to know in simple words, what is the fuzzy logic concept? How does it help, and when is it used?
As complexity rises, precise statements lose meaning and meaningful statements lose precision. (Lotfi Zadeh). Fuzzy logic deals with reasoning that is approximate rather than fixed and exact. This may make the reasoning more meaningful for a human. Fuzzy logic is an extension of Boolean logic by Lotfi Zadeh in 1965, based on the mathematical theory of fuzzy sets, which is a generalization of classical set theory. By introducing the notion of degree in the verification of a condition, thus enabling a condition to be in a state other than true or false, fuzzy logic provides very valuable flexibility for reasoning, which makes it possible to take inaccuracies and uncertainties into account. One advantage of fuzzy logic for formalizing human reasoning is that the rules are stated in natural language, for example the rules of conduct that a driver follows, assuming that he does not want to lose his driver's licence. Intuitively, it seems that input variables like these are appreciated only approximately by the brain, much like the degree to which a condition is verified in fuzzy logic. I've written a short introduction to fuzzy logic that goes into a bit more detail but should be very accessible.
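As an illustration (not from the original answer), here is a minimal Python sketch of a hypothetical membership function that assigns a degree of truth between 0 and 1 to the fuzzy condition "the car is going fast"; the 60 and 120 km/h thresholds are arbitrary assumptions chosen for the example.

```python
def fast_membership(speed_kmh):
    """Degree (0..1) to which a speed counts as 'fast'.

    Hypothetical triangular-style membership function: below 60 km/h is
    not fast at all, above 120 km/h is fully fast, linear in between.
    """
    if speed_kmh <= 60:
        return 0.0
    if speed_kmh >= 120:
        return 1.0
    return (speed_kmh - 60) / 60.0

for s in (50, 80, 100, 130):
    print(s, "km/h -> degree of 'fast':", round(fast_membership(s), 2))
```

A fuzzy rule such as "if the speed is fast, brake firmly" would then fire to the degree returned here, rather than being simply true or false.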
{ "source": [ "https://ai.stackexchange.com/questions/10", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/8/" ] }
17
I've heard the idea of the technological singularity, what is it and how does it relate to Artificial Intelligence? Is this the theoretical point where Artificial Intelligence machines have progressed to the point where they grow and learn on their own beyond what humans can do and their growth takes off? How would we know when we reach this point?
The technological singularity is a theoretical point in time at which a self-improving artificial general intelligence becomes able to understand and manipulate concepts outside of the human brain's range, that is, the moment when it can understand things humans, by biological design, can't. The fuzziness about the singularity comes from the fact that, from the singularity onwards, history is effectively unpredictable. Humankind would be unable to predict any future events, or explain any present events, as science itself would become incapable of describing machine-triggered events. Essentially, machines would think of us the same way we think of ants. Thus, we can make no predictions past the singularity. Furthermore, as a logical consequence, we'd be unable to define the point at which the singularity may occur at all, or even recognize it when it happens. However, in order for the singularity to take place, AGI needs to be developed, and whether that is possible is quite a hot debate right now. Moreover, an algorithm that creates superhuman intelligence (or superintelligence) out of bits and bytes would have to be designed. By definition, a human programmer wouldn't be able to do such a thing, as his/her brain would need to be able to comprehend concepts beyond its range. There is also the argument that an intelligence explosion (the mechanism by which a technological singularity would theoretically be formed) would be impossible, because the difficulty of designing a yet more intelligent system grows in proportion to that intelligence, and this design difficulty may outpace the intelligence available to solve it. Also, there are related theories involving machines taking over humankind and all of that sci-fi narrative. However, that's unlikely to happen if Asimov's laws are followed appropriately. Even if Asimov's laws were not enough, a series of constraints would still be necessary in order to avoid the misuse of AGI by ill-intentioned individuals, and Asimov's laws are the nearest we have to that.
{ "source": [ "https://ai.stackexchange.com/questions/17", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/55/" ] }
35
These two terms seem to be related, especially in their application in computer science and software engineering. Is one a subset of another? Is one a tool used to build a system for the other? What are their differences and why are they significant?
Machine learning has been defined by many people in multiple (often similar) ways [1, 2]. One definition says that machine learning (ML) is the field of study that gives computers the ability to learn without being explicitly programmed. Given the above definition, we might say that machine learning is geared towards problems for which we have (lots of) data (experience), from which a program can learn and get better at a task. Artificial intelligence has many more aspects, where machines may not get better at tasks by learning from data, but may exhibit intelligence through rules (e.g. expert systems like Mycin), logic, or algorithms (e.g. path-finding). The book Artificial Intelligence: A Modern Approach shows more research fields of AI, like Constraint Satisfaction Problems, Probabilistic Reasoning or Philosophical Foundations.
{ "source": [ "https://ai.stackexchange.com/questions/35", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/69/" ] }
36
What aspects of quantum computers, if any, can help to further develop Artificial Intelligence?
Quantum computers are super awesome at matrix multiplication, with some limitations. Quantum superposition allows each quantum bit (qubit) to be in many more states than just zero or one, and quantum gates can manipulate those bits in many different ways. Because of that, a quantum computer can process a lot of information at once for certain applications. One of those applications is the Fourier transform, which is useful in a lot of problems, like signal analysis and array processing. There's also Grover's quantum search algorithm, which finds the single input for which a given function returns a different value. If an AI problem can be expressed in a mathematical form amenable to quantum computing, it can receive great speedups. Sufficient speedups could transform an AI idea from "theoretically interesting but insanely slow" to "quite practical once we get a good handle on quantum computing."
{ "source": [ "https://ai.stackexchange.com/questions/36", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/29/" ] }
74
I've heard the terms strong-AI and weak-AI used. Are these well defined terms or subjective ones? How are they generally defined?
The terms strong and weak don't actually refer to processing or optimization power, or any interpretation that makes "strong AI" stronger than "weak AI". That holds conveniently in practice, but the terms come from elsewhere. In 1980, John Searle formulated the following hypotheses: AI hypothesis, strong form: an AI system can think and have a mind (in the philosophical sense of the term); AI hypothesis, weak form: an AI system can only act like it thinks and has a mind. So strong AI is a shortcut for an AI system that satisfies the strong AI hypothesis; similarly for the weak form. The terms have since evolved: strong AI refers to AI that performs as well as humans (who have minds), weak AI refers to AI that doesn't. The problem with these definitions is that they're fuzzy. For example, AlphaGo is an example of weak AI, but is "strong" by Go-playing standards. A hypothetical AI replicating a human baby would be a strong AI, while being "weak" at most tasks. Other terms exist: Artificial General Intelligence (AGI), which has cross-domain capability (like humans) and can learn from a wide range of experiences (like humans), among other features. Artificial Narrow Intelligence refers to systems bound to a certain range of tasks (where they may nevertheless have superhuman ability), lacking the capacity to significantly improve themselves. Beyond AGI, we find Artificial Superintelligence (ASI), based on the idea that a system with the capabilities of an AGI, but without the physical limitations of humans, would learn and improve far beyond human level.
{ "source": [ "https://ai.stackexchange.com/questions/74", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/55/" ] }
86
What actually distinguishes a neural network described as "deep" from other, similar networks?
The difference is mostly in the number of layers. For a long time, it was believed that "1-2 hidden layers are enough for most tasks" and it was impractical to use more than that, because training neural networks can be very computationally demanding. Nowadays, computers are capable of much more, so people have started to use networks with more layers and found that they work very well for some tasks. The word "deep" is there simply to distinguish these networks from the traditional, "more shallow" ones.
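To make the "just more layers" point concrete, here is a minimal sketch (assuming PyTorch; the layer sizes are arbitrary) contrasting a shallow network with a deeper one built from the same ingredients.

```python
import torch.nn as nn

# "Shallow" network: a single hidden layer.
shallow = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

# "Deep" network: the same ingredients, just more hidden layers stacked.
deep = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 10),
)
```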
{ "source": [ "https://ai.stackexchange.com/questions/86", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/8/" ] }
92
The following page / study demonstrates that deep neural networks are easily fooled, giving high-confidence predictions for unrecognisable images. How is this possible? Can you please explain, ideally in plain English?
First up, those images (even the first few) aren't complete trash despite being junk to humans; they're actually finely tuned with various advanced techniques, including another neural network. The deep neural network is the pre-trained network modeled on AlexNet provided by Caffe . To evolve images, both the directly encoded and indirectly encoded images, we use the Sferes evolutionary framework. The entire code base to conduct the evolutionary experiments can be download [sic] here . The code for the images produced by gradient ascent is available here . Images that are actually random junk were correctly recognized as nothing meaningful: In response to an unrecognizable image, the networks could have output a low confidence for each of the 1000 classes, instead of an extremely high confidence value for one of the classes. In fact, they do just that for randomly generated images (e.g. those in generation 0 of the evolutionary run) The original goal of the researchers was to use the neural networks to automatically generate images that look like the real things (by getting the recognizer's feedback and trying to change the image to get a more confident result), but they ended up creating the above art. Notice how even in the static-like images there are little splotches - usually near the center - which, it's fair to say, are triggering the recognition. We were not trying to produce adversarial, unrecognizable images. Instead, we were trying to produce recognizable images, but these unrecognizable images emerged. Evidently, these images had just the right distinguishing features to match what the AI looked for in pictures. The "paddle" image does have a paddle-like shape, the "bagel" is round and the right color, the "projector" image is a camera-lens-like thing, the "computer keyboard" is a bunch of rectangles (like the individual keys), and the "chainlink fence" legitimately looks like a chain-link fence to me. Figure 8. Evolving images to match DNN classes produces a tremendous diversity of images. Shown are images selected to showcase diversity from 5 evolutionary runs. The diversity suggests that the images are non-random, but that instead evolutions producing [sic] discriminative features of each target class. Further reading: the original paper (large PDF)
{ "source": [ "https://ai.stackexchange.com/questions/92", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/8/" ] }
111
Obviously, self-driving cars aren't perfect, so imagine that the Google car (as an example) got into a difficult situation. Here are a few examples of unfortunate situations caused by a set of events: The car is heading toward a crowd of 10 people crossing the road, so it cannot stop in time, but it can avoid killing the 10 people by hitting the wall (killing the passengers); avoiding killing the rider of a motorcycle, considering that the probability of survival is greater for the passenger of the car; killing an animal on the street in favour of a human being; purposely changing lanes to crash into another car to avoid killing a dog. And here are a few dilemmas: Does the algorithm recognize the difference between a human being and an animal? Does the size of the human being or animal matter? Does it count how many passengers it has vs. people in front? Does it "know" when babies/children are on board? Does it take into account the age (e.g. killing the older first)? How would an algorithm decide what it should do from a technical perspective? Is it aware of the above (counting the probability of kills), or not (killing people just to avoid its own destruction)? Related articles: Why Self-Driving Cars Must Be Programmed to Kill How to Help Self-Driving Cars Make Ethical Decisions
How could self-driving cars make ethical decisions about who to kill? It shouldn't. Self-driving cars are not moral agents. Cars fail in predictable ways. Horses fail in predictable ways. The car is heading toward a crowd of 10 people crossing the road, so it cannot stop in time, but it can avoid killing 10 people by hitting the wall (killing the passengers). In this case, the car should slam on the brakes. If the 10 people die, that's just unfortunate. We simply cannot trust all of our beliefs about what is taking place outside the car. What if those 10 people are really robots made to look like people? What if they're trying to kill you? Avoiding killing the rider of the motorcycle considering that the probability of survival is greater for the passenger of the car. Again, hard-coding these kinds of sentiments into a vehicle opens the rider of the vehicle up to all kinds of attacks, including "fake" motorcyclists. Humans are barely equipped to make these decisions on their own, if at all. When in doubt, just slam on the brakes. Killing an animal on the street in favour of a human being. Again, just hit the brakes. What if it was a baby? What if it was a bomb? Changing lanes to crash into another car to avoid killing a dog. Nope. The dog was in the wrong place at the wrong time. The other car wasn't. Just slam on the brakes, as safely as possible. Does the algorithm recognize the difference between a human being and an animal? Does a human? Not always. What if the human has a gun? What if the animal has large teeth? Is there no context? Does the size of the human being or animal matter? Does it count how many passengers it has vs. people in front? Does it "know" when babies/children are on board? Does it take into account the age (e.g. killing the older first)? Humans can't agree on these things. If you ask a cop what to do in any of these situations, the answer won't be, "You should have swerved left, weighed all the relevant parties in your head, assessed the relevant ages between all parties, then veered slightly right, and you would have saved 8% more lives." No, the cop will just say, "You should have brought the vehicle to a stop, as quickly and safely as possible." Why? Because cops know people normally aren't equipped to deal with high-speed crash scenarios. Our target for a "self-driving car" should not be 'a moral agent on par with a human.' It should be an agent with the reactive complexity of a cockroach, which fails predictably.
{ "source": [ "https://ai.stackexchange.com/questions/111", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/8/" ] }
154
I'm aware that neural networks are probably not designed to do that; however, asking hypothetically, is it possible to train a deep neural network (or similar) to solve math equations? So, given 3 inputs: the 1st number, the operator sign represented by a number (1 = +, 2 = -, 3 = /, 4 = *, and so on), and the 2nd number, after training the network should give me valid results. Example 1 (2+2): Input 1: 2; Input 2: 1 (+); Input 3: 2; Expected output: 4. Example 2 (10-10): Input 1: 10; Input 2: 2 (-); Input 3: 10; Expected output: 0. Example 3 (5*5): Input 1: 5; Input 2: 4 (*); Input 3: 5; Expected output: 25. And so on. The above can be extended to more sophisticated examples. Is that possible? If so, what kind of network can learn/achieve that?
Yes, it has been done! However, the applications aren't to replace calculators or anything like that. The lab I'm associated with develops neural network models of equational reasoning to better understand how humans might solve these problems. This is a part of the field known as Mathematical Cognition . Unfortunately, our website isn't terribly informative, but here's a link to an example of such work. Apart from that, recent work on extending neural networks to include external memory stores (e.g. Neural Turing Machines) was used to solve math problems as a good proof of concept. This is because many arithmetic problems involve long procedures with stored intermediate results. See the sections of this paper on long binary addition and multiplication.
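As a rough, hedged sketch of the idea in the question (not the models used by the lab or papers mentioned above), one can fit a plain regression network on (number, operator code, number) triples with scikit-learn; the result will only be approximate, and multiplication in particular is learned poorly by such a small model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
a = rng.uniform(-10, 10, 20000)
b = rng.uniform(-10, 10, 20000)
op = rng.integers(1, 4, 20000)                       # 1: +, 2: -, 3: *
y = np.where(op == 1, a + b, np.where(op == 2, a - b, a * b))
X = np.column_stack([a, op, b])

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
model.fit(X, y)
print(model.predict([[2, 1, 2], [10, 2, 10], [5, 3, 5]]))  # roughly [4, 0, 25]
```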
{ "source": [ "https://ai.stackexchange.com/questions/154", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/8/" ] }
1,294
Geoffrey Hinton has been researching something he calls "capsules theory" in neural networks. What is it? How do capsule neural networks work?
It appears not to have been published yet; the best available online are these slides for this talk. (Several people reference an earlier talk with this link, but sadly it's broken at the time of writing this answer.) My impression is that it's an attempt to formalize and abstract the creation of subnetworks inside a neural network. That is, if you look at a standard neural network, layers are fully connected (that is, every neuron in layer 1 has access to every neuron in layer 0, and is itself accessed by every neuron in layer 2). But this isn't obviously useful; one might instead have, say, n parallel stacks of layers (the 'capsules') that each specialize in some separate task (which may itself require more than one layer to complete successfully). If I'm imagining its results correctly, this more sophisticated graph topology seems like something that could easily increase both the effectiveness and the interpretability of the resulting network.
{ "source": [ "https://ai.stackexchange.com/questions/1294", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/144/" ] }
1,396
On the Wikipedia page about AI, we can read: Optical character recognition is no longer perceived as an exemplar of "artificial intelligence", having become a routine technology. On the other hand, the MNIST database of handwritten digits is specially designed for training and testing neural networks and their error rates (see: Classifiers). So, why does the above quote state that OCR is no longer an example of AI?
Whenever a problem becomes solvable by a computer, people start arguing that it does not require intelligence. John McCarthy is often quoted: "As soon as it works, no one calls it AI anymore" (referenced in CACM). One of my teachers in college said that in the 1950s, a professor was asked what he thought would count as intelligence for a machine. The professor reputedly answered that if a vending machine gave him the right change, that would be intelligent. Later, playing chess was considered intelligent. However, computers can now defeat grandmasters at chess, and people no longer say that this is a form of intelligence. Now we have OCR. It's already stated in another answer that our methods do not have the recognition abilities of a 5-year-old. As soon as this is achieved, people will say "meh, that's not intelligence, a 5-year-old can do that!" A psychological bias, a need to state that we are somehow superior to machines, is at the root of this.
{ "source": [ "https://ai.stackexchange.com/questions/1396", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/8/" ] }
1,420
Are there any research teams that have attempted to create, or have already created, an AI robot that is as close to intelligent as those found in the Ex Machina or I, Robot movies? I'm not talking about full awareness, but an artificial being that can make its own decisions and perform the physical and intellectual tasks that a human being can do.
We are absolutely nowhere near, nor do we have any idea how to bridge the gap between what we can currently do and what is depicted in these films. The current trend for DL approaches (coupled with the emergence of data science as a mainstream discipline) has led to a lot of popular interest in AI. However, researchers and practitioners would do well to learn the lessons of the 'AI Winter' and not engage in hubris or read too much into current successes. For example: Success in transfer learning is very limited. The 'hard problem' (i.e. presenting the 'raw, unwashed environment' to the machine and having it come up with a solution from scratch) is not being addressed by DL to the extent that it is popularly portrayed: expert human knowledge is still required to help decide how the input should be framed, tune parameters, interpret output etc. Someone who has enthusiasm for AGI would hopefully agree that the 'hard problem' is actually the only one that matters. Some years ago, a famous cognitive scientist said "We have yet to successfully represent even a single concept on a computer". In my opinion, recent research trends have done little to change this. All of this perhaps sounds pessimistic - it's not intended to. None of us want another AI Winter, so we should challenge (and be honest about) the limits of our current techniques rather than mythologizing them.
{ "source": [ "https://ai.stackexchange.com/questions/1420", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/8/" ] }
1,479
Do scientists or research experts know, behind the scenes, what is happening inside a complex "deep" neural network with at least millions of connections firing at any instant? Do they understand the process behind this (e.g. what is happening inside and how it works exactly), or is it a subject of debate? For example, this study says: However there is no clear understanding of why they perform so well, or how they might be improved. So does this mean that scientists actually don't know how complex convolutional network models work?
There are many approaches that aim to make a trained neural network more interpretable and less like a "black box", specifically for convolutional neural networks like those you've mentioned. Visualizing the activations and layer weights Activation visualization is the first obvious and straightforward one. For ReLU networks, the activations usually start out looking relatively blobby and dense, but as the training progresses the activations usually become more sparse (most values are zero) and localized. This sometimes shows what exactly a particular layer is focused on when it sees an image. Another great work on activations that I'd like to mention is deepvis, which shows the reaction of every neuron at each layer, including pooling and normalization layers. Here's how they describe it: In short, we've gathered a few different methods that allow you to "triangulate" what feature a neuron has learned, which can help you better understand how DNNs work. The second common strategy is to visualize the weights (filters). These are usually most interpretable on the first CONV layer, which is looking directly at the raw pixel data, but it is possible to also show the filter weights deeper in the network. For example, the first layer usually learns Gabor-like filters that basically detect edges and blobs. Occlusion experiments Here's the idea. Suppose that a ConvNet classifies an image as a dog. How can we be certain that it's actually picking up on the dog in the image, as opposed to some contextual cues from the background or some other miscellaneous object? One way of investigating which part of the image a classification prediction is coming from is by plotting the probability of the class of interest (e.g. the dog class) as a function of the position of an occluder object. If we iterate over regions of the image, replace each with all zeros and check the classification result, we can build a 2-dimensional heat map of what's most important to the network for a particular image. This approach has been used in Matthew Zeiler's Visualizing and Understanding Convolutional Networks (which you refer to in your question): Deconvolution Another approach is to synthesize an image that causes a particular neuron to fire, i.e. basically what the neuron is looking for. The idea is to compute the gradient with respect to the image, instead of the usual gradient with respect to the weights. So you pick a layer, set the gradient there to be all zero except for one neuron, and backprop to the image. Deconv actually does something called guided backpropagation to make a nicer-looking image, but it's just a detail. Similar approaches to other neural networks I highly recommend this post by Andrej Karpathy, in which he plays a lot with Recurrent Neural Networks (RNN). In the end, he applies a similar technique to see what the neurons actually learn: The neuron highlighted in this image seems to get very excited about URLs and turns off outside of the URLs. The LSTM is likely using this neuron to remember if it is inside a URL or not. Conclusion I've mentioned only a small fraction of the results in this area of research. It's pretty active, and new methods that shed light on the inner workings of neural networks appear each year. To answer your question, there's always something that scientists don't know yet, but in many cases they have a good picture (literally) of what's going on inside and can answer many particular questions.
To me, the quote from your question simply highlights the importance of researching not only accuracy improvements but also the inner structure of the network. As Matt Zeiler says in this talk, sometimes a good visualization can lead, in turn, to better accuracy.
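To make the occlusion-experiment idea above concrete, here is a minimal sketch; `predict_proba` is a hypothetical callable standing in for whatever trained classifier you are probing, and the patch and stride sizes are arbitrary.

```python
import numpy as np

def occlusion_heatmap(image, predict_proba, target_class, patch=32, stride=16):
    """Slide a zeroed-out patch over the image and record how the probability
    of `target_class` changes; low values mark regions the classifier relies on.
    `predict_proba` takes an HxWxC array and returns class probabilities."""
    h, w, _ = image.shape
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            occluded = image.copy()
            y, x = i * stride, j * stride
            occluded[y:y + patch, x:x + patch, :] = 0   # zero out one region
            heat[i, j] = predict_proba(occluded)[target_class]
    return heat
```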
{ "source": [ "https://ai.stackexchange.com/questions/1479", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/8/" ] }
1,507
I believe the term artificial intelligence (AI) is overused nowadays. For example, people see that something is self-moving and they call it AI, even if it's on autopilot (like cars or planes) or there is some simple algorithm behind it. What are the minimum general requirements so that we can say something is AI?
It's true that the term has become a buzzword, and is now widely used to the point of confusion. However, if you look at the definition provided by Stuart Russell and Peter Norvig, they write it as follows: We define AI as the study of agents that receive percepts from the environment and perform actions. Each such agent implements a function that maps percept sequences to actions, and we cover different ways to represent these functions, such as reactive agents, real-time planners, and decision-theoretic systems. We explain the role of learning as extending the reach of the designer into unknown environments, and we show how that role constrains agent design, favoring explicit knowledge representation and reasoning. Artificial Intelligence: A Modern Approach - Stuart Russell and Peter Norvig So the example you cite, "autopilot for cars/planes", is actually a (famous) form of AI, as it has to use a form of knowledge representation to deal with unknown environments and circumstances. Ultimately, these systems also collect data so that the knowledge representation can be updated to deal with the new inputs that they have encountered; they do this with autopilot for cars all the time. So, directly to your question: for something to be considered as "having AI", it needs to be able to deal with unknown environments/circumstances in order to achieve its objective/goal, and to represent knowledge in a manner that allows new learning/information to be added easily. There are many different types of well-defined knowledge representation methods, ranging from the popular neural net through to probabilistic models like Bayesian networks (belief networks), but fundamentally the actions of the system must be derived from whichever representation of knowledge you choose for it to be considered AI.
{ "source": [ "https://ai.stackexchange.com/questions/1507", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/8/" ] }
1,508
I believe normally you can use genetic programming for sorting, however I'd like to check whether it's possible using ANN. Given the unsorted text data from input, which neural network is suitable for doing sorting tasks?
{ "source": [ "https://ai.stackexchange.com/questions/1508", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/8/" ] }
1,768
In Portal 2 we see that AIs can be "killed" by thinking about a paradox. I assume this works by forcing the AI into an infinite loop which would essentially "freeze" the computer's consciousness. Questions: Would this confuse the AI technology we have today to the point of destroying it? If so, why? And if not, could it be possible in the future?
This classic problem exhibits a basic misunderstanding of what an artificial general intelligence would likely entail. First, consider this programmer's joke: The programmer's wife couldn't take it anymore. Every discussion with her husband turned into an argument over semantics, picking over every piece of trivial detail. One day she sent him to the grocery store to pick up some eggs. On his way out the door, she said, "While you are there, pick up milk." And he never returned. It's a cute play on words, but it isn't terribly realistic. You are assuming that, because AI is executed by a computer, it must exhibit the same level of linear, unwavering pedantry outlined in this joke. But AI isn't simply some long-winded computer program hard-coded with enough if-statements and while-loops to account for every possible input and follow the prescribed results. while (command not completed): find_solution() This would not be strong AI. In any classic definition of artificial general intelligence, you are creating a system that mimics some form of cognition that exhibits problem solving and adaptive learning (note this phrase here). I would suggest that any AI that could get stuck in such an "infinite loop" isn't a learning AI at all. It's just a buggy inference engine. Essentially, you are endowing a program of currently-unreachable sophistication with an inability to decide whether there is a solution to a simple problem at all. I can just as easily say "walk through that closed door" or "pick yourself up off the ground" or even "turn on that pencil", and present a similar conundrum. "Everything I say is false." - The Liar's Paradox
{ "source": [ "https://ai.stackexchange.com/questions/1768", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/1812/" ] }
2,008
As far as I can tell, neural networks have a fixed number of neurons in the input layer. If neural networks are used in a context like NLP, sentences or blocks of text of varying sizes are fed to a network. How is the varying input size reconciled with the fixed size of the input layer of the network? In other words, how is such a network made flexible enough to deal with an input that might be anywhere from one word to multiple pages of text? If my assumption of a fixed number of input neurons is wrong and new input neurons are added to/removed from the network to match the input size I don't see how these can ever be trained. I give the example of NLP, but lots of problems have an inherently unpredictable input size. I'm interested in the general approach for dealing with this. For images, it's clear you can up/downsample to a fixed size, but, for text, this seems to be an impossible approach since adding/removing text changes the meaning of the original input.
Three possibilities come to mind. The easiest is zero-padding. Basically, you take a rather big input size and just add zeroes if your concrete input is too small. Of course, this is pretty limited and certainly not useful if your input ranges from a few words to full texts. Recurrent NNs (RNNs) are a very natural choice if you have texts of varying size as input. You input words as word vectors (or embeddings) one after another, and the internal state of the RNN is supposed to encode the meaning of the full string of words. This is one of the earlier papers. Another possibility is using recursive NNs. This is basically a form of preprocessing in which a text is recursively reduced to a smaller number of word vectors until only one is left - your input, which is supposed to encode the whole text. This makes a lot of sense from a linguistic point of view if your input consists of sentences (which can vary a lot in size), because sentences are structured recursively. For example, the word vector for "the man" should be similar to the word vector for "the man who mistook his wife for a hat", because noun phrases act like nouns, etc. Often, you can use linguistic information to guide your recursion on the sentence. If you want to go way beyond the Wikipedia article, this is probably a good start.
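To illustrate the zero-padding option mentioned first, here is a minimal sketch (plain Python, with a hypothetical padding id of 0) that brings variable-length token-id lists to a fixed length.

```python
def pad_sequences(token_id_lists, max_len, pad_id=0):
    """Pad (or truncate) variable-length token-id lists to a fixed length."""
    padded = []
    for ids in token_id_lists:
        ids = ids[:max_len]                              # truncate if too long
        padded.append(ids + [pad_id] * (max_len - len(ids)))
    return padded

print(pad_sequences([[5, 7], [1, 2, 3, 4, 5, 6]], max_len=4))
# [[5, 7, 0, 0], [1, 2, 3, 4]]
```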
{ "source": [ "https://ai.stackexchange.com/questions/2008", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/2522/" ] }
2,236
I've heard before from computer scientists and from researchers in the area of AI that Lisp is a good language for research and development in artificial intelligence. Does this still apply, with the proliferation of neural networks and deep learning? What was their reasoning for this? What languages are current deep-learning systems built in?
First, I guess that you mean Common Lisp (which is a standard language specification, see its HyperSpec) with efficient implementations (à la SBCL). But some recent implementations of Scheme could also be relevant (with good implementations such as Bigloo or Chicken/Scheme). Both Common Lisp and Scheme (and even Clojure) are from the same Lisp family. And as a scripting language driving big data or machine learning applications, Guile might be a useful replacement for Python and is also a Lisp dialect. BTW, I do recommend reading SICP, an excellent introduction to programming using Scheme. Common Lisp (and other dialects of Lisp) is great for symbolic AI. However, many recent machine learning libraries are coded in more mainstream languages; for example, TensorFlow is coded in C++ and Python. Deep learning libraries are mostly coded in C++ or Python or C (and sometimes use OpenCL or CUDA for the GPU computing parts). Common Lisp is great for symbolic artificial intelligence because: it has very good implementations (e.g. SBCL, which compiles every expression given to the REPL to machine code); it is homoiconic, so it is easy to deal with programs as data, and in particular it is easy to generate [sub-]programs, that is, to use meta-programming techniques; it has a Read-Eval-Print Loop to ease interactive programming; it provides very powerful macro machinery (essentially, you define your own domain-specific sublanguage for your problem), much more powerful than in other languages like C; it mandates a garbage collector (even code can be garbage collected); it provides many container abstract data types, and can easily handle symbols; you can write both high-level (dynamically typed) and low-level (more or less statically typed) code, through appropriate annotations. However, most machine learning and neural network libraries are not coded in CL. Notice that neither neural networks nor deep learning are in the symbolic artificial intelligence field. See also this question. Several symbolic AI systems like Eurisko or CyC have been developed in CL (actually, in some DSL built above CL). Notice that the programming language might not be very important. In the Artificial General Intelligence research topic, some people work on the idea of an AI system which would generate all its own code (so they are designing it with a bootstrapping approach). Then, the code generated by such a system can even be in a low-level programming language like C. See J. Pitrat's blog, which has inspired the RefPerSys project.
{ "source": [ "https://ai.stackexchange.com/questions/2236", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/3323/" ] }
3,494
First of all, I'm a beginner studying AI, and this is not an opinion-oriented question or one meant to compare programming languages. I'm not implying that Python is the best language. But the fact is that most of the famous AI frameworks have primary support for Python. They may even support multiple languages; for example, TensorFlow supports Python and C++, and CNTK from Microsoft supports C# and C++, but the most used is Python (I mean more documentation, examples, a bigger community, support, etc.). Even if you choose C# (developed by Microsoft and my primary programming language), you must have the Python environment set up. I read in other forums that Python is preferred for AI because the code is simplified and cleaner, good for fast prototyping. I was watching a movie with an AI theme (Ex Machina). In some scenes, the main character hacks the interface of the house automation. Guess which language was on the scene? Python. So, what is the big deal with Python? Why is there a growing association between Python and AI?
Python has a huge number of libraries readily available. Many of them are for artificial intelligence and machine learning. Some of the libraries are TensorFlow (a high-level neural network library), scikit-learn (for data mining, data analysis and machine learning), pylearn2 (more flexible than scikit-learn), etc. The list keeps going and never ends. You can find some libraries here. Python also has easy bindings for OpenCV. What makes Python a favorite for everyone is that it is powerful yet easy to use. For other languages, students and researchers need to get to know the language before getting into ML or AI with it. This is not the case with Python. Even a programmer with very basic knowledge can easily handle Python. Apart from that, the time someone spends writing and debugging code in Python is far less than in C, C++ or Java. This is exactly what students of AI and ML want: they don't want to spend time debugging code for syntax errors, they want to spend more time on their algorithms and heuristics related to AI and ML. Not only the libraries, but also their tutorials and documentation for their interfaces, are easily available online. People build their own libraries and upload them to GitHub or elsewhere to be used by others. All these features make Python suitable for AI and ML work.
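As a small illustration of the conciseness argument (not part of the original answer), a complete train-and-evaluate script with scikit-learn fits in a handful of lines; the dataset and classifier are arbitrary choices.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier().fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```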
{ "source": [ "https://ai.stackexchange.com/questions/3494", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/7268/" ] }
3,938
Suppose that I have 10K images of size $2400 \times 2400$ to train a CNN. How do I handle such large image sizes without downsampling? Here are a few more specific questions. Are there any techniques to handle such large images which are to be trained? What batch size is reasonable to use? Are there any precautions to take, or any increases or decreases in hardware resources that I can make? Here are the system specs: Ubuntu 16.04 64-bit, RAM 16 GB, GPU 8 GB, HDD 500 GB.
How do I handle such large image sizes without downsampling? I assume that by downsampling you mean scaling down the input before passing it into the CNN. A convolutional layer allows you to downsample the image within the network, by picking a large stride, which saves resources for the next layers. In fact, that's what it has to do, otherwise your model won't fit in the GPU. Are there any techniques to handle such large images which are to be trained? Commonly, researchers scale the images down to a reasonable size. But if that's not an option for you, you'll need to restrict your CNN. In addition to downsampling in the early layers, I would recommend getting rid of the FC layer (which normally takes most of the parameters) in favor of a convolutional layer. You will also have to stream your data in each epoch, because it won't fit into your GPU. Note that none of this will prevent a heavy computational load in the early layers, exactly because the input is so large: convolution is an expensive operation and the first layers will perform a lot of them in each forward and backward pass. In short, training will be slow. What batch size is reasonable to use? Here's another problem. A single image takes 2400x2400x3x4 bytes (3 channels and 4 bytes per pixel), which is ~70 MB, so you can hardly afford even a batch size of 10. More realistic would be 5. Note that much of the memory will also be taken by the CNN parameters and intermediate activations. I think in this case it makes sense to reduce the size by using 16-bit values rather than 32-bit; this way you'll be able to double the batch size. Are there any precautions to take, or any increase and decrease in hardware resources that I can do? Your bottleneck is GPU memory. If you can afford another GPU, get it and split the network across them. Everything else is insignificant compared to GPU memory.
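As a back-of-the-envelope check of the memory figures above, here is a small sketch; the 2 GiB "input budget" is an arbitrary assumption, and real usage is dominated by parameters and intermediate activations, so the feasible batch size is smaller than this bound suggests.

```python
bytes_per_pixel = 3 * 4                      # 3 channels, 4 bytes (float32) each
image_bytes = 2400 * 2400 * bytes_per_pixel
print("one input image: %.1f MiB" % (image_bytes / 1024**2))   # ~65.9 MiB

gpu_budget_bytes = 2 * 1024**3               # assume ~2 GiB left for input tensors
print("rough upper bound on batch size:", gpu_budget_bytes // image_bytes)
```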
{ "source": [ "https://ai.stackexchange.com/questions/3938", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/6382/" ] }
4,456
What's the difference between model-free and model-based reinforcement learning? It seems to me that any model-free learner, learning through trial and error, could be reframed as model-based. In that case, when would model-free learners be appropriate?
What's the difference between model-free and model-based reinforcement learning? In Reinforcement Learning, the terms "model-based" and "model-free" do not refer to the use of a neural network or other statistical learning model to predict values, or even to predict the next state (although the latter may be used as part of a model-based algorithm and be called a "model" regardless of whether the algorithm is model-based or model-free). Instead, the term refers strictly to whether, during learning or acting, the agent uses predictions of the environment's response. The agent can use a single prediction from the model of the next reward and next state (a sample), or it can ask the model for the expected next reward, or for the full distribution of next states and next rewards. These predictions can be provided entirely outside of the learning agent - e.g. by computer code that understands the rules of a dice or board game. Or they can be learned by the agent, in which case they will be approximate. Just because there is a model of the environment implemented does not mean that an RL agent is "model-based". To qualify as "model-based", the learning algorithms have to explicitly reference the model: Algorithms that purely sample from experience, such as Monte Carlo Control, SARSA, Q-learning and Actor-Critic, are "model-free" RL algorithms. They rely on real samples from the environment and never use generated predictions of the next state and next reward to alter behaviour (although they might sample from experience memory, which is close to being a model). The archetypical model-based algorithms are Dynamic Programming (Policy Iteration and Value Iteration) - these all use the model's predictions or distributions of next state and reward in order to calculate optimal actions. Specifically in Dynamic Programming, the model must provide state transition probabilities and the expected reward from any state, action pair. Note this is rarely a learned model. Basic TD learning, using state values only, must also be model-based in order to work as a control system and pick actions. In order to pick the best action, it needs to query a model that predicts what will happen on each action, and implement a policy like $\pi(s) = \text{argmax}_a \sum_{s',r} p(s',r|s,a)(r + v(s'))$, where $p(s',r|s,a)$ is the probability of receiving reward $r$ and next state $s'$ when taking action $a$ in state $s$. That function $p(s',r|s,a)$ is essentially the model. The RL literature differentiates between "model" as a model of the environment for "model-based" and "model-free" learning, and the use of statistical learners, such as neural networks. In RL, neural networks are often employed to learn and generalise value functions, such as the Q value which predicts total return (the sum of discounted rewards) given a state and action pair. Such a trained neural network is often called a "model" in e.g. supervised learning. However, in the RL literature, you will see the term "function approximator" used for such a network to avoid ambiguity. It seems to me that any model-free learner, learning through trial and error, could be reframed as model-based. I think here you are using the general understanding of the word "model" to include any structure that makes useful predictions. That would apply to e.g. the table of Q values in SARSA. However, as explained above, that's not how the term is used in RL.
So although your understanding that RL builds useful internal representations is correct, you are not technically correct that "model-free" can be re-framed as "model-based", because those terms have a very specific meaning in RL. In that case, when would model-free learners be appropriate? Generally, with the current state of the art in RL, if you don't have an accurate model provided as part of the problem definition, then model-free approaches are often superior. There is lots of interest in agents that build predictive models of the environment, and doing so as a "side effect" (whilst still being a model-free algorithm) can still be useful - it may regularise a neural network or help discover key predictive features that can also be used in policy or value networks. However, model-based agents that learn their own models for planning have a problem that inaccuracy in these models can cause instability (the inaccuracies multiply the further into the future the agent looks). Some promising inroads are being made using imagination-based agents and/or mechanisms for deciding when and how much to trust the learned model during planning. Right now (in 2018), if you have a real-world problem in an environment without an explicit known model at the start, then the safest bet is to use a model-free approach such as DQN or A3C. That may change, as the field is moving fast and new, more complex architectures could well be the norm in a few years.
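To make the distinction concrete, here is a minimal sketch (toy sizes, dummy model arrays) contrasting a model-free Q-learning update, which uses only one sampled transition, with a model-based value-iteration backup, which needs explicit access to transition probabilities and expected rewards.

```python
import numpy as np

n_states, n_actions, gamma, alpha = 5, 2, 0.9, 0.1
Q = np.zeros((n_states, n_actions))
V = np.zeros(n_states)

def q_learning_update(s, a, r, s2):
    """Model-free: only the sampled transition (s, a, r, s2) is used."""
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])

# Model-based backup: requires p(s'|s,a) and expected reward R(s,a),
# here filled with dummy values standing in for a real model.
p = np.full((n_states, n_actions, n_states), 1.0 / n_states)
R = np.ones((n_states, n_actions))

def value_iteration_backup(s):
    V[s] = max(R[s, a] + gamma * (p[s, a] @ V) for a in range(n_actions))
```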
{ "source": [ "https://ai.stackexchange.com/questions/4456", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/10720/" ] }
4,864
What are "bottlenecks" in the context of neural networks? This term is mentioned, for example, in this TensorFlow article , which also uses the term "bottleneck values". How does one calculate bottleneck values? How do these values help image classification? Please explain in simple words.
The bottleneck in a neural network is just a layer with fewer neurons than the layer below or above it. Having such a layer encourages the network to compress feature representations (of the salient features for the target variable) to best fit in the available space. Improvements to compression occur due to the goal of reducing the cost function, as for all weight updates. In a CNN (such as Google's Inception network), bottleneck layers are added to reduce the number of feature maps (aka channels) in the network, which otherwise tends to increase in each layer. This is achieved by using 1x1 convolutions with fewer output channels than input channels. You don't usually calculate weights for bottleneck layers directly; the training process handles that, as for all other weights. Selecting a good size for a bottleneck layer is something you have to guess, and then experiment with, in order to find network architectures that work well. The goal here is usually finding a network that generalises well to new images, and bottleneck layers help by reducing the number of parameters in the network whilst still allowing it to be deep and represent many feature maps.
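As a minimal sketch of the 1x1-convolution bottleneck described above (assuming PyTorch; the channel counts are arbitrary), note how only the channel dimension shrinks while the spatial size is unchanged.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 256, 28, 28)                   # 256 feature maps
bottleneck = nn.Conv2d(256, 64, kernel_size=1)    # 1x1 conv: 256 -> 64 channels
print(bottleneck(x).shape)                        # torch.Size([1, 64, 28, 28])
```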
{ "source": [ "https://ai.stackexchange.com/questions/4864", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/11837/" ] }
5,246
For instance, the title of this paper reads: "Sample Efficient Actor-Critic with Experience Replay". What is sample efficiency , and how can importance sampling be used to achieve it?
An algorithm is sample efficient if it can get the most out of every sample. Imagine yourself playing PONG for the first time. As a human, it would take you only seconds to learn how to play the game based on very few samples. This makes you very "sample efficient". Modern RL algorithms would have to see $100$ thousand times more data than you, so they are, relatively, sample inefficient. In the case of off-policy learning, not all samples are useful, in that they are not part of the distribution that we are interested in. Importance sampling is a technique for weighting these samples. Its original use was to understand one distribution while only being able to take samples from a different but related distribution. In RL, this often comes up when trying to learn off-policy. Namely, your samples are produced by some behaviour policy, but you want to learn a target policy. Thus one needs to measure how important/similar the generated samples are to samples that the target policy may have produced. Thus, one is sampling from a weighted distribution which favours these "important" samples. There are many methods, however, for characterizing what is important, and their effectiveness may differ depending on the application. The most common approach to this off-policy style of importance sampling is finding a ratio of how likely a sample is to be generated by the target policy. The paper On a Connection between Importance Sampling and the Likelihood Ratio Policy Gradient (2010) by Tang and Abbeel covers this topic.
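Here is a minimal sketch of the ratio idea (toy numbers, ordinary importance sampling without discounting): each step's weight is the probability of the taken action under the target policy divided by its probability under the behaviour policy.

```python
import numpy as np

# Probability of the action actually taken at each step, under both policies.
behaviour_probs = np.array([0.5, 0.25, 0.5])   # policy that generated the data
target_probs = np.array([0.9, 0.05, 0.7])      # policy we want to evaluate
rewards = np.array([1.0, 0.0, 2.0])

ratios = target_probs / behaviour_probs        # per-step importance ratios
weight = np.prod(ratios)                       # weight for the whole trajectory
print("trajectory weight:", weight, "weighted return:", weight * rewards.sum())
```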
{ "source": [ "https://ai.stackexchange.com/questions/5246", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/12574/" ] }
5,493
It is said that activation functions in neural networks help introduce non-linearity . What does this mean? What does non-linearity mean in this context? How does the introduction of this non-linearity help? Are there any other purposes of activation functions ?
Almost all of the functionality provided by non-linear activation functions has been covered by other answers. Let me sum them up: First, what does non-linearity mean? It means something (a function, in this case) which is not linear with respect to a given variable or variables, i.e. $f(c_1 x_1 + c_2 x_2 + \dots + c_n x_n + b) \neq c_1 f(x_1) + c_2 f(x_2) + \dots + c_n f(x_n) + f(b)$. NOTE: There is some ambiguity about how one might define linearity. In polynomial equations we define linearity in a somewhat different way than in vector spaces or in systems which take an input $x$ and give an output $f(x)$. See the second answer. What does non-linearity mean in this context? It means that the neural network can successfully approximate functions (up to a certain error $e$ decided by the user) which do not follow linearity, or it can successfully predict the class of a function that is divided by a decision boundary which is not linear. Why does it help? I can hardly think of any physical-world phenomenon which follows linearity straightforwardly, so you need a non-linear function that can approximate the non-linear phenomenon. Also, a good intuition is that any decision boundary or function is a linear combination of polynomial combinations of the input features (so ultimately non-linear). Purposes of activation function? In addition to introducing non-linearity, every activation function has its own features. Sigmoid $\frac{1}{1 + e^{-(w_1 x_1 + \dots + w_n x_n + b)}}$ This is one of the most common activation functions and is monotonically increasing everywhere. It is generally used at the final output node, as it squashes values between 0 and 1 (if the output is required to be 0 or 1). Thus, above 0.5 is considered 1 while below 0.5 is considered 0, although a different threshold (not 0.5) may be set. Its main advantage is that its derivative is easy to compute and reuses already calculated values, and supposedly horseshoe crab neurons have this activation function in their neurons. Tanh $\frac{e^{(w_1 x_1 + \dots + w_n x_n + b)} - e^{-(w_1 x_1 + \dots + w_n x_n + b)}}{e^{(w_1 x_1 + \dots + w_n x_n + b)} + e^{-(w_1 x_1 + \dots + w_n x_n + b)}}$ This has an advantage over the sigmoid activation function as it tends to centre the output at 0, which has the effect of better learning in the subsequent layers (it acts as a feature normaliser). A nice explanation here. Negative and positive output values may be considered as 0 and 1 respectively. Used mostly in RNNs. ReLU activation function - This is another very common, simple non-linear activation function (linear in the positive range and in the negative range, exclusive of each other) that has the advantage of removing the vanishing gradient problem faced by the above two (i.e. the gradient tends to 0 as x tends to +infinity or -infinity). Here is an answer about ReLU's approximation power in spite of its apparent linearity. ReLUs have the disadvantage of dead neurons, which result in larger NNs. Also, you can design your own activation functions depending on your specialized problem. You may have a quadratic activation function which will approximate quadratic functions much better. But then, you have to design a cost function that should be somewhat convex in nature, so that you can optimise it using first-order differentials and the NN actually converges to a decent result. This is the main reason why standard activation functions are used. But I believe with proper mathematical tools, there is huge potential for new and eccentric activation functions.
For example, say you are trying to approximate a single-variable quadratic function, say $a x^2 + c$. This will be best approximated by a quadratic activation $w_1 x^2 + b$, where $w_1$ and $b$ will be the trainable parameters. But designing a loss function that follows the conventional first-order derivative method (gradient descent) can be quite tough for a non-monotonically increasing function. For mathematicians: In the sigmoid activation function $\frac{1}{1 + e^{-(w_1 x_1 + \dots + w_n x_n + b)}}$, we see that $e^{-(w_1 x_1 + \dots + w_n x_n + b)}$ is always $< 1$. By binomial expansion, or by reverse calculation of the infinite geometric series, we get $\text{sigmoid}(y) = 1 - y + y^2 - y^3 + \cdots$, where, in a NN, $y = e^{-(w_1 x_1 + \dots + w_n x_n + b)}$. Thus we get all the powers of $y$, and each power of $y$ can be thought of as a product of several decaying exponentials based on a feature $x$; for example, $y^2 = e^{-2 w_1 x_1} \cdot e^{-2 w_2 x_2} \cdot e^{-2 w_3 x_3} \cdots e^{-2 b}$. Thus each feature has a say in the scaling of the graph of $y^2$. Another way of thinking about it is to expand the exponentials according to the Taylor series: $$e^{x}=1+\frac{x}{1 !}+\frac{x^{2}}{2 !}+\frac{x^{3}}{3 !}+\cdots$$ So we get a very complex combination, with all the possible polynomial combinations of the input variables present. I believe that if a neural network is structured correctly, the NN can fine-tune these polynomial combinations by just modifying the connection weights, selecting the polynomial terms that are most useful, and rejecting terms by subtracting the output of 2 nodes weighted properly. The $\tanh$ activation can work in the same way since $|\tanh| < 1$. I am not sure how ReLUs work though, but due to their rigid structure and the problem of dead neurons, we require larger networks with ReLUs for a good approximation. But for a formal mathematical proof, one has to look at the Universal Approximation Theorem. A visual proof that neural nets can compute any function The Universal Approximation Theorem For Neural Networks - An Elegant Proof For non-mathematicians, some better insights are available via these links: Activation Functions by Andrew Ng - for a more formal and scientific answer How does a neural network classifier classify from just drawing a decision plane? Differentiable activation function A visual proof that neural nets can compute any function
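A quick numerical sketch of the non-linearity point above: for a linear function, $f(a + b) = f(a) + f(b)$ would hold, but for these activations it does not (the inputs -1 and 2 are arbitrary).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    return np.maximum(0.0, x)

a, b = -1.0, 2.0
for name, f in [("sigmoid", sigmoid), ("tanh", np.tanh), ("relu", relu)]:
    print(f"{name}: f(a+b) = {f(a + b):.3f}, f(a) + f(b) = {f(a) + f(b):.3f}")
```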
{ "source": [ "https://ai.stackexchange.com/questions/5493", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/12957/" ] }
5,728
Suppose that a NN contains $n$ hidden layers, $m$ training examples, $x$ features, and $n_i$ nodes in each layer. What is the time complexity to train this NN using back-propagation? I have a basic idea of how the time complexity of algorithms is found, but here there are 4 different factors to consider, i.e. iterations, layers, nodes in each layer, and training examples, and maybe more. I found an answer here but it was not clear enough. Are there other factors, apart from those I mentioned above, that influence the time complexity of the training algorithm of a NN?
I haven't seen an answer from a trusted source, but I'll try to answer this myself, with a simple example (with my current knowledge). In general, note that training an MLP using back-propagation is usually implemented with matrices. Time complexity of matrix multiplication The time complexity of matrix multiplication for $M_{ij} * M_{jk}$ is simply $\mathcal{O}(i*j*k)$ . Notice that we are assuming the simplest multiplication algorithm here: there exist some other algorithms with somewhat better time complexity. Feedforward pass algorithm The feedforward propagation algorithm is as follows. First, to go from layer $i$ to $j$ , you do $$S_j = W_{ji}*Z_i$$ Then you apply the activation function $$Z_j = f(S_j)$$ If we have $N$ layers (including input and output layer), this will run $N-1$ times. Example As an example, let's compute the time complexity for the forward pass algorithm for an MLP with $4$ layers, where $i$ denotes the number of nodes of the input layer, $j$ the number of nodes in the second layer, $k$ the number of nodes in the third layer and $l$ the number of nodes in the output layer. Since there are $4$ layers, you need $3$ matrices to represent weights between these layers. Let's denote them by $W_{ji}$ , $W_{kj}$ and $W_{lk}$ , where $W_{ji}$ is a matrix with $j$ rows and $i$ columns ( $W_{ji}$ thus contains the weights going from layer $i$ to layer $j$ ). Assume you have $t$ training examples. For propagating from layer $i$ to $j$ , we have first $$S_{jt} = W_{ji} * Z_{it}$$ and this operation (i.e. matrix multiplication) has $\mathcal{O}(j*i*t)$ time complexity. Then we apply the activation function $$ Z_{jt} = f(S_{jt}) $$ and this has $\mathcal{O}(j*t)$ time complexity, because it is an element-wise operation. So, in total, we have $$\mathcal{O}(j*i*t + j*t) = \mathcal{O}(j*t*(i + 1)) = \mathcal{O}(j*i*t)$$ Using same logic, for going $j \to k$ , we have $\mathcal{O}(k*j*t)$ , and, for $k \to l$ , we have $\mathcal{O}(l*k*t)$ . In total, the time complexity for feedforward propagation will be $$\mathcal{O}(j*i*t + k*j*t + l*k*t) = \mathcal{O}(t*(ij + jk + kl))$$ I'm not sure if this can be simplified further or not. Maybe it's just $\mathcal{O}(t*i*j*k*l)$ , but I'm not sure. Back-propagation algorithm The back-propagation algorithm proceeds as follows. Starting from the output layer $l \to k$ , we compute the error signal, $E_{lt}$ , a matrix containing the error signals for nodes at layer $l$ $$ E_{lt} = f'(S_{lt}) \odot {(Z_{lt} - O_{lt})} $$ where $\odot$ means element-wise multiplication. Note that $E_{lt}$ has $l$ rows and $t$ columns: it simply means each column is the error signal for training example $t$ . We then compute the "delta weights", $D_{lk} \in \mathbb{R}^{l \times k}$ (between layer $l$ and layer $k$ ) $$ D_{lk} = E_{lt} * Z_{tk} $$ where $Z_{tk}$ is the transpose of $Z_{kt}$ . We then adjust the weights $$ W_{lk} = W_{lk} - D_{lk} $$ For $l \to k$ , we thus have the time complexity $\mathcal{O}(lt + lt + ltk + lk) = \mathcal{O}(l*t*k)$ . Now, going back from $k \to j$ . We first have $$ E_{kt} = f'(S_{kt}) \odot (W_{kl} * E_{lt}) $$ Then $$ D_{kj} = E_{kt} * Z_{tj} $$ And then $$W_{kj} = W_{kj} - D_{kj}$$ where $W_{kl}$ is the transpose of $W_{lk}$ . For $k \to j$ , we have the time complexity $\mathcal{O}(kt + klt + ktj + kj) = \mathcal{O}(k*t(l+j))$ . And finally, for $j \to i$ , we have $\mathcal{O}(j*t(k+i))$ . 
In total, we have $$\mathcal{O}(ltk + tk(l + j) + tj (k + i)) = \mathcal{O}(t*(lk + kj + ji))$$ which is the same as for the feedforward pass algorithm. Since they are the same, the total time complexity for one epoch will be $$O(t*(ij + jk + kl)).$$ This time complexity is then multiplied by the number of iterations (epochs). So, we have $$O(n*t*(ij + jk + kl)),$$ where $n$ is the number of iterations.

Notes

Note that these matrix operations can be greatly parallelized by GPUs.

Conclusion

We tried to find the time complexity for training a neural network that has 4 layers with respectively $i$, $j$, $k$ and $l$ nodes, with $t$ training examples and $n$ epochs. The result was $\mathcal{O}(nt*(ij + jk + kl))$.

We assumed the simplest form of matrix multiplication, which has cubic time complexity. We used the batch gradient descent algorithm. The results for stochastic and mini-batch gradient descent should be the same. (Let me know if you think otherwise: note that batch gradient descent is the general form; with little modification, it becomes stochastic or mini-batch.)

Also, if you use momentum optimization, you will have the same time complexity, because the extra matrix operations required are all element-wise operations, hence they will not affect the time complexity of the algorithm. I'm not sure what the results would be using other optimizers, such as RMSprop.

Sources

The following article http://briandolhansky.com/blog/2014/10/30/artificial-neural-networks-matrix-form-part-5 describes an implementation using matrices. Although this implementation is using "row major", the time complexity is not affected by this.

If you're not familiar with back-propagation, check this article: http://briandolhansky.com/blog/2013/9/27/artificial-neural-networks-backpropagation-part-4
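As a rough illustration of the analysis above, the following NumPy sketch (illustrative only; the layer sizes, activation and learning rate are arbitrary choices) implements one epoch of the forward and backward passes as matrix products, with the cost of the dominant multiplication noted next to each line.

```python
import numpy as np

# One epoch of batch gradient descent for a 4-layer MLP with layer
# sizes i, j, k, l and t training examples.
i, j, k, l, t = 5, 8, 6, 3, 100
rng = np.random.default_rng(0)
W_ji = rng.normal(size=(j, i))
W_kj = rng.normal(size=(k, j))
W_lk = rng.normal(size=(l, k))
Z_i = rng.normal(size=(i, t))            # inputs, one column per example
O_l = rng.normal(size=(l, t))            # targets

f = np.tanh
f_prime = lambda s: 1.0 - np.tanh(s) ** 2
lr = 0.01

# ---- forward pass: O(t*(i*j + j*k + k*l)) ----
S_j = W_ji @ Z_i; Z_j = f(S_j)           # O(j*i*t)
S_k = W_kj @ Z_j; Z_k = f(S_k)           # O(k*j*t)
S_l = W_lk @ Z_k; Z_l = f(S_l)           # O(l*k*t)

# ---- backward pass: same order of cost ----
E_l = f_prime(S_l) * (Z_l - O_l)         # O(l*t), element-wise
D_lk = E_l @ Z_k.T                       # O(l*t*k)
E_k = f_prime(S_k) * (W_lk.T @ E_l)      # O(k*l*t)
D_kj = E_k @ Z_j.T                       # O(k*t*j)
E_j = f_prime(S_j) * (W_kj.T @ E_k)      # O(j*k*t)
D_ji = E_j @ Z_i.T                       # O(j*t*i)

# weight updates are only O(lk + kj + ji)
W_lk -= lr * D_lk; W_kj -= lr * D_kj; W_ji -= lr * D_ji
```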
{ "source": [ "https://ai.stackexchange.com/questions/5728", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/-1/" ] }
6,196
As far as I understand, Q-learning and policy gradients (PG) are the two major approaches used to solve RL problems. While Q-learning aims to predict the reward of a certain action taken in a certain state, policy gradients directly predict the action itself. However, both approaches appear identical to me, i.e. predicting the maximum reward for an action (Q-learning) is equivalent to predicting the probability of taking the action directly (PG). Is the difference in the way the loss is back-propagated?
However, both approaches appear identical to me i.e. predicting the maximum reward for an action (Q-learning) is equivalent to predicting the probability of taking the action directly (PG).

Both methods are theoretically driven by the Markov Decision Process construct, and as a result use similar notation and concepts. In addition, in simple solvable environments you should expect both methods to result in the same - or at least equivalent - optimal policies. However, they are actually different internally. The most fundamental difference between the approaches is in how they approach action selection, both whilst learning and as the output (the learned policy). In Q-learning, the goal is to learn a single deterministic action from a discrete set of actions by finding the maximum value. With policy gradients, and other direct policy searches, the goal is to learn a map from state to action, which can be stochastic, and which works in continuous action spaces.

As a result, policy gradient methods can solve problems that value-based methods cannot:

- Large and continuous action spaces. However, with value-based methods, this can still be approximated with discretisation - and this is not a bad choice, since the mapping function in policy gradient has to be some kind of approximator in practice.

- Stochastic policies. A value-based method cannot solve an environment where the optimal policy is stochastic and requires specific probabilities, such as Scissors/Paper/Stone. That is because there are no trainable parameters in Q-learning that control the probabilities of actions; the problem formulation in TD learning assumes that a deterministic agent can be optimal.

However, value-based methods like Q-learning have some advantages too:

- Simplicity. You can implement Q functions as simple discrete tables, and this gives some guarantees of convergence. There are no tabular versions of policy gradient, because you need a mapping function $p(a \mid s, \theta)$ which also must have a smooth gradient with respect to $\theta$.

- Speed. TD learning methods that bootstrap are often much faster to learn a policy than methods which must purely sample from the environment in order to evaluate progress.

There are other reasons why you might care to use one or the other approach:

- You may want to know the predicted return whilst the process is running, to help other planning processes associated with the agent.

- The state representation of the problem lends itself more easily to either a value function or a policy function. A value function may turn out to have a very simple relationship to the state while the policy function is very complex and hard to learn, or vice-versa.

Some state-of-the-art RL solvers actually use both approaches together, such as Actor-Critic. This combines the strengths of value and policy gradient methods.
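To make the contrast concrete, here is a minimal sketch (illustrative only; the state/action counts, hyperparameters and episode format are placeholders) of a tabular Q-learning update next to a REINFORCE-style policy-gradient update. Note how one method stores action values and acts greedily on them, while the other stores policy parameters and samples actions from them.

```python
import numpy as np

n_states, n_actions, alpha, gamma = 10, 4, 0.1, 0.99
rng = np.random.default_rng(0)

# --- Q-learning: learn action *values*, act greedily on them ---
Q = np.zeros((n_states, n_actions))

def q_update(s, a, r, s_next):
    td_target = r + gamma * Q[s_next].max()      # bootstrapped target
    Q[s, a] += alpha * (td_target - Q[s, a])     # move the value toward it

def q_policy(s):
    return int(Q[s].argmax())                    # deterministic choice

# --- REINFORCE: learn *policy parameters*, sample actions from them ---
theta = np.zeros((n_states, n_actions))          # softmax preferences

def pi(s):
    p = np.exp(theta[s] - theta[s].max())
    return p / p.sum()

def pg_update(episode):
    # episode is a list of (state, action, return G) tuples
    for s, a, G in episode:
        grad_log = -pi(s)                        # d log pi(a|s) / d theta[s]
        grad_log[a] += 1.0
        theta[s] += alpha * G * grad_log         # stochastic policy gradient step

def pg_policy(s):
    return int(rng.choice(n_actions, p=pi(s)))   # stochastic choice
```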
{ "source": [ "https://ai.stackexchange.com/questions/6196", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/15298/" ] }
7,763
I am studying reinforcement learning and the variants of it. I am starting to get an understanding of how the algorithms work and how they apply to an MDP. What I don't understand is the process of defining the states of the MDP. In most examples and tutorials, they represent something simple like a square in a grid or similar. For more complex problems, like a robot learning to walk, etc., How do you go about defining those states? Can you use learning or classification algorithms to "learn" those states?
The problem of state representation in Reinforcement Learning (RL) is similar to problems of feature representation, feature selection and feature engineering in supervised or unsupervised learning. Literature that teaches the basics of RL tends to use very simple environments so that all states can be enumerated. This simplifies value estimates into basic rolling averages in a table, which are easier to understand and implement. Tabular learning algorithms also have reasonable theoretical guarantees of convergence, which means if you can simplify your problem so that it has, say, less than a few million states, then this is worth trying. Most interesting control problems will not fit into that number of states, even if you discretise them. This is due to the " curse of dimensionality ". For those problems, you will typically represent your state as a vector of different features - e.g. for a robot, various positions, angles, velocities of mechanical parts. As with supervised learning, you may want to treat these for use with a specific learning process. For instance, typically you will want them all to be numeric, and if you want to use a neural network you should also normalise them to a standard range (e.g. -1 to 1). In addition to the above concerns which apply for other machine learning, for RL, you also need to be concerned with the Markov Property - that the state provides enough information, so that you can accurately predict expected next rewards and next states given an action, without the need for any additional information. This does not need to be perfect, small differences due to e.g. variations in air density or temperature for a wheeled robot will not usually have a large impact on its navigation, and can be ignored. Any factor which is essentially random can also be ignored whilst sticking to RL theory - it may make the agent less optimal overall, but the theory will still work. If there are consistent unknown factors that influence result, and could logically be deduced - maybe from history of state or actions - but you have excluded them from the state representation, then you may have a more serious problem, and the agent may fail to learn. It is worth noting the difference here between observation and state . An observation is some data that you can collect. E.g. you may have sensors on your robot that feed back the positions of its joints. Because the state should possess the Markov Property, a single raw observation might not be enough data to make a suitable state. If that is the case, you can either apply your domain knowledge in order to construct a better state from available data, or you can try to use techniques designed for partially observable MDPs (POMDPs) - these effectively try to build missing parts of state data statistically. You could use a RNN or hidden markov model (also called a "belief state") for this, and in some way this is using a " learning or classification algorithms to "learn" those states " as you asked. Finally, you need to consider the type of approximation model you want to use. A similar approach applies here as for supervised learning: A simple linear regression with features engineered based on domain knowledge can do very well. You may need to work hard on trying different state representations so that the linear approximation works. The advantage is that this simpler approach is more robust against stability issues than non-linear approximation A more complex non-linear function approximator, such as a multi-layer neural network. 
You can feed in a more "raw" state vector and hope that the hidden layers will find some structure or representation that leads to good estimates. In some ways, this too is " learning or classification algorithms to "learn" those states " , but in a different way to a RNN or HMM. This might be a sensible approach if your state was expressed naturally as a screen image - figuring out the feature engineering for image data by hand is very hard. The Atari DQN work by DeepMind team used a combination of feature engineering and relying on deep neural network to achieve its results. The feature engineering included downsampling the image, reducing it to grey-scale and - importantly for the Markov Property - using four consecutive frames to represent a single state, so that information about velocity of objects was present in the state representation. The DNN then processed the images into higher-level features that could be used to make predictions about state values.
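As a small illustration of the above, here is a sketch (illustrative only; the frame sizes, bounds and preprocessing are assumptions, not a reference implementation) of two common ways of building a state from raw observations: normalising a feature vector, and stacking the last four greyscale frames so that velocity information is present in the state.

```python
import numpy as np
from collections import deque

# Normalise a feature-vector observation to roughly [-1, 1],
# given (assumed) known lower/upper bounds per feature.
def normalise(obs, low, high):
    return 2.0 * (obs - low) / (high - low) - 1.0

# Stack the last 4 preprocessed frames, in the spirit of the Atari DQN work.
class FrameStack:
    def __init__(self, k=4):
        self.frames = deque(maxlen=k)

    def reset(self, first_frame):
        for _ in range(self.frames.maxlen):
            self.frames.append(self._preprocess(first_frame))
        return self.state()

    def step(self, frame):
        self.frames.append(self._preprocess(frame))
        return self.state()

    def _preprocess(self, rgb):
        grey = rgb.mean(axis=-1)                 # crude greyscale conversion
        # real implementations also downsample/crop the frame here
        return grey.astype(np.float32) / 255.0

    def state(self):
        return np.stack(self.frames, axis=0)     # shape (4, H, W)

stack = FrameStack()
s = stack.reset(np.zeros((210, 160, 3), dtype=np.uint8))
print(s.shape)   # (4, 210, 160) in this simplified sketch
```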
{ "source": [ "https://ai.stackexchange.com/questions/7763", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/17853/" ] }
9,141
I am a new learner in NLP. I am interested in the sentence generating task. As far as I am concerned, one state-of-the-art method is the CharRNN , which uses RNN to generate a sequence of words. However, BERT has come out several weeks ago and is very powerful. Therefore, I am wondering whether this task can also be done with the help of BERT? I am a new learner in this field, and thank you for any advice!
For newbies, NO. Sentence generation requires sampling from a language model, which gives the probability distribution of the next word given previous contexts. But BERT can't do this due to its bidirectional nature. For advanced researchers, YES. You can start with a sentence of all [MASK] tokens, and generate words one by one in arbitrary order (instead of the common left-to-right chain decomposition). Though the text generation quality is hard to control. Here's the technical report BERT has a Mouth, and It Must Speak: BERT as a Markov Random Field Language Model , its errata and the source code . In summary: If you would like to do some research in the area of decoding with BERT, there is a huge space to explore If you would like to generate high quality texts, personally I recommend you to check GPT-2 .
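For the "advanced researchers" route, the following sketch gives the flavour of the "start from all [MASK] tokens and fill them in one at a time" idea. It is a toy illustration, not the procedure of the paper above: it fills the masks left to right, assumes the Hugging Face transformers library (version 4.x style API) with a pretrained bert-base-uncased model, and will produce low-quality text.

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

n_tokens = 8
ids = [tokenizer.cls_token_id] + [tokenizer.mask_token_id] * n_tokens + [tokenizer.sep_token_id]
input_ids = torch.tensor([ids])

with torch.no_grad():
    # Fill one [MASK] per step, left to right here for simplicity
    # (the paper explores arbitrary orders and repeated resampling).
    for pos in range(1, n_tokens + 1):
        logits = model(input_ids).logits               # (1, seq_len, vocab)
        probs = torch.softmax(logits[0, pos], dim=-1)
        input_ids[0, pos] = torch.multinomial(probs, 1).item()

print(tokenizer.decode(input_ids[0], skip_special_tokens=True))
```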
{ "source": [ "https://ai.stackexchange.com/questions/9141", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/20170/" ] }
10,623
What is self-supervised learning in machine learning? How is it different from supervised learning?
Introduction The term self-supervised learning (SSL) has been used (sometimes differently) in different contexts and fields, such as representation learning [ 1 ], neural networks, robotics [ 2 ], natural language processing, and reinforcement learning. In all cases, the basic idea is to automatically generate some kind of supervisory signal to solve some task (typically, to learn representations of the data or to automatically label a dataset). I will describe what SSL means more specifically in three contexts: representation learning, neural networks and robotics. Representation learning The term self-supervised learning has been widely used to refer to techniques that do not use human-annotated datasets to learn (visual) representations of the data (i.e. representation learning). Example In [ 1 ], two patches are randomly selected and cropped from an unlabelled image and the goal is to predict the relative position of the two patches. Of course, we have the relative position of the two patches once you have chosen them (i.e. we can keep track of their centers), so, in this case, this is the automatically generated supervisory signal. The idea is that, to solve this task (known as a pretext or auxiliary task in the literature [ 3 , 4 , 5 , 6 ]), the neural network needs to learn features in the images. These learned representations can then be used to solve the so-called downstream tasks, i.e. the tasks you are interested in (e.g. object detection or semantic segmentation). So, you first learn representations of the data (by SSL pre-training), then you can transfer these learned representations to solve a task that you actually want to solve, and you can do this by fine-tuning the neural network that contains the learned representations on a labeled (but smaller dataset), i.e. you can use SSL for transfer learning. This example is similar to the example given in this other answer . Neural networks Some neural networks, for example, autoencoders (AE) [ 7 ] are sometimes called self-supervised learning tools. In fact, you can train AEs without images that have been manually labeled by a human. More concretely, consider a de-noising AE, whose goal is to reconstruct the original image when given a noisy version of it. During training, you actually have the original image, given that you have a dataset of uncorrupted images and you just corrupt these images with some noise, so you can calculate some kind of distance between the original image and the noisy one, where the original image is the supervisory signal. In this sense, AEs are self-supervised learning tools, but it's more common to say that AEs are unsupervised learning tools, so SSL has also been used to refer to unsupervised learning techniques. Robotics In [ 2 ], the training data is automatically but approximately labeled by finding and exploiting the relations or correlations between inputs coming from different sensor modalities (and this technique is called SSL by the authors). So, as opposed to representation learning or auto-encoders, in this case, an actual labeled dataset is produced automatically. Example Consider a robot that is equipped with a proximity sensor (which is a short-range sensor capable of detecting objects in front of the robot at short distances) and a camera (which is long-range sensor, but which does not provide a direct way of detecting objects). You can also assume that this robot is capable of performing odometry . An example of such a robot is Mighty Thymio . 
Consider now the task of detecting objects in front of the robot at longer ranges than the range the proximity sensor allows. In general, we could train a CNN to achieve that. However, to train such CNN, in supervised learning, we would first need a labelled dataset, which contains labelled images (or videos), where the labels could e.g. be "object in the image" or "no object in the image". In supervised learning, this dataset would need to be manually labelled by a human, which clearly would require a lot of work. To overcome this issue, we can use a self-supervised learning approach. In this example, the basic idea is to associate the output of the proximity sensors at a time step $t' > t$ with the output of the camera at time step $t$ (a smaller time step than $t'$ ). More specifically, suppose that the robot is initially at coordinates $(x, y)$ (on the plane), at time step $t$ . At this point, we still do not have enough info to label the output of the camera (at the same time step $t$ ). Suppose now that, at time $t'$ , the robot is at position $(x', y')$ . At time step $t'$ , the output of the proximity sensor will e.g. be "object in front of the robot" or "no object in front of the robot". Without loss of generality, suppose that the output of the proximity sensor at $t' > t$ is "no object in front of the robot", then the label associated with the output of the camera (an image frame) at time $t$ will be "no object in front of the robot".
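A toy sketch of this automatic labelling scheme might look as follows (illustrative only; the record fields, the lookahead distance and the use of planar odometry are assumptions made for the example).

```python
# `log` is assumed to be a list of records, one per time step, each with
# the robot's pose, the camera frame and the proximity-sensor reading.

def auto_label(log, lookahead_distance=1.0):
    """Label each camera frame using a *future* proximity reading."""
    dataset = []
    for t, rec in enumerate(log):
        # find the first later time step t' at which the robot has
        # travelled roughly `lookahead_distance` from its pose at t
        for future in log[t + 1:]:
            if distance(rec["pose"], future["pose"]) >= lookahead_distance:
                label = 1 if future["proximity_hit"] else 0   # object / no object
                dataset.append((rec["camera_frame"], label))
                break
    return dataset

def distance(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

# The resulting (image, label) pairs can then be used to train an ordinary
# supervised classifier (e.g. a CNN), with no human labelling involved.
```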
{ "source": [ "https://ai.stackexchange.com/questions/10623", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/2444/" ] }
10,812
I came across these 2 algorithms, but I cannot understand the difference between these 2, both in terms of implementation as well as intuitionally. So, what difference does the second point in both the slides refer to?
The first-visit and the every-visit Monte-Carlo (MC) algorithms are both used to solve the prediction problem (or, also called, "evaluation problem"), that is, the problem of estimating the value function associated with a given (as input to the algorithms) fixed (that is, it does not change during the execution of the algorithm) policy, denoted by $\pi$ . In general, even if we are given the policy $\pi$ , we are not necessarily able to find the exact corresponding value function, so these two algorithms are used to estimate the value function associated with $\pi$ . Intuitively, we care about the value function associated with $\pi$ because we might want or need to know "how good it is to be in a certain state", if the agent behaves in the environment according to the policy $\pi$ . For simplicity, assume that the value function is the state value function (but it could also be e.g. the state-action value function), denoted by $v_\pi(s)$ , where $v_\pi(s)$ is the expected return (or, in other words, expected cumulative future discounted reward ), starting from state $s$ (at some time step $t$ ) and then following (after time step $t$ ) the given policy $\pi$ . Formally, $v_\pi(s) = \mathbb{E}_\pi [ G_t \mid S_t = s ]$ , where $G_t = \sum_{k=0}^\infty \gamma^k R_{t+k+1}$ is the return (after time step $t$ ). In the case of MC algorithms, $G_t$ is often defined as $\sum_{k=0}^T R_{t+k+1}$ , where $T \in \mathbb{N}^+$ is the last time step of the episode, that is, the sum goes up to the final time step of the episode, $T$ . This is because MC algorithms, in this context, often assume that the problem can be naturally split into episodes and each episode proceeds in a discrete number of time steps (from $t=0$ to $t=T$ ). As I defined it here, the return, in the case of MC algorithms, is only associated with a single episode (that is, it is the return of one episode). However, in general, the expected return can be different from one episode to the other, but, for simplicity, we will assume that the expected return (of all states) is the same for all episodes. To recapitulate, the first-visit and every-visit MC (prediction) algorithms are used to estimate $v_\pi(s)$ , for all states $s \in \mathcal{S}$ . To do that, at every episode, these two algorithms use $\pi$ to behave in the environment, so that to obtain some knowledge of the environment in the form of sequences of states, actions and rewards. This knowledge is then used to estimate $v_\pi(s)$ . How is this knowledge used in order to estimate $v_\pi$ ? Let us have a look at the pseudocode of these two algorithms. $N(s)$ is a "counter" variable that counts the number of times we visit state $s$ throughout the entire algorithm (i.e. from episode one to $num\_episodes$ ). $\text{Returns(s)}$ is a list of (undiscounted) returns for state $s$ . I think it is more useful for you to read the pseudocode (which should be easily translatable to actual code) and understand what it does rather than explaining it with words. Anyway, the basic idea (of both algorithms) is to generate trajectories (of states, actions and rewards) at each episode, keep track of the returns (for each state) and number of visits (of each state), and then, at the end of all episodes, average these returns (for all states). This average of returns should be an approximation of the expected return (which is what we wanted to estimate). The differences of the two algorithms are highlighted in $\color{red}{\text{red}}$ . 
The part " If state $S_t$ is not in the sequence $S_0, S_1, \dots, S_{t-1}$ " means that the associated block of code will be executed only if $S_t$ is not part of the sequence of states that were visited (in the episode sequence generated with $\pi$ ) before the time step $t$ . In other words, that block of code will be executed only if it is the first time we encounter $S_t$ in the sequence of states, action and rewards: $S_0, A_0, R_1, S_1, A_1, R_2 \ldots, S_{T-1}, A_{T-1}, R_T$ (which can be collectively be called "episode sequence"), with respect to the time step and not the way the episode sequence is processed. Note that a certain state $s$ might appear more than once in $S_0, A_0, R_1, S_1, A_1, R_2 \ldots, S_{T-1}, A_{T-1}, R_T$ : for example, $S_3 = s$ and $S_5 = s$ . Do not get confused by the fact that, within each episode, we proceed from the time step $T-1$ to time step $t = 0$ , that is, we process the "episode sequence" backwards. We are doing that only to more conveniently compute the returns (given that the returns are iteratively computed as follows $G \leftarrow G + R_{t+1}$ ). So, intuitively, in the first-visit MC, we only update the $\text{Returns}(S_t)$ (that is, the list of returns for state $S_t$ , that is, the state of the episode at time step $t$ ) the first time we encounter $S_t$ in that same episode (or trajectory). In the every-visit MC, we update the list of returns for the state $S_t$ every time we encounter $S_t$ in that same episode. For more info regarding these two algorithms (for example, the convergence properties), have a look at section 5.1 (on page 92) of the book " Reinforcement Learning: An Introduction " (2nd edition), by Andrew Barto and Richard S. Sutton.
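A compact sketch of both prediction algorithms might look as follows (illustrative only; `generate_episode` is a placeholder that should return the list of (state, reward) pairs of one episode generated with the given policy $\pi$). Note that the only difference between the two algorithms is the single `if` check.

```python
from collections import defaultdict

def mc_prediction(generate_episode, num_episodes, gamma=1.0, first_visit=True):
    returns_sum = defaultdict(float)   # sum of returns observed for each state
    N = defaultdict(int)               # visit counter for each state

    for _ in range(num_episodes):
        episode = generate_episode()   # [(S_0, R_1), (S_1, R_2), ..., (S_{T-1}, R_T)]
        states = [s for s, _ in episode]
        G = 0.0
        # process the episode backwards so G can be updated incrementally
        for t in range(len(episode) - 1, -1, -1):
            s, r = episode[t]
            G = gamma * G + r
            # the ONLY difference between first-visit and every-visit MC:
            if first_visit and s in states[:t]:
                continue               # s was already visited earlier in this episode
            returns_sum[s] += G
            N[s] += 1

    # average of the recorded returns approximates v_pi(s)
    return {s: returns_sum[s] / N[s] for s in N}
```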
{ "source": [ "https://ai.stackexchange.com/questions/10812", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/-1/" ] }
11,285
In general, the word "latent" means "hidden" and "to embed" means "to incorporate". In machine learning, the expressions "hidden (or latent) space" and "embedding space" occur in several contexts. More specifically, an embedding can refer to a vector representation of a word. An embedding space can refer to a subspace of a bigger space, so we say that the subspace is embedded in the bigger space. The word "latent" comes up in contexts like hidden Markov models (HMMs) or auto-encoders . What is the difference between these spaces? In some contexts, do these two expressions refer to the same concept?
Embedding vs Latent Space

Due to Machine Learning's recent and rapid renaissance, and the fact that it draws from many distinct areas of mathematics, statistics, and computer science, it often has a number of different terms for the same or similar concepts. "Latent space" and "embedding" both refer to an (often lower-dimensional) representation of high-dimensional data:

- Latent space refers specifically to the space from which the low-dimensional representation is drawn.
- Embedding refers to the way the low-dimensional data is mapped to ("embedded in") the original higher-dimensional space.

For example, in this "Swiss roll" data, the 3d data on the left is sensibly modelled as a 2d manifold 'embedded' in 3d space. The function mapping the 'latent' 2d data to its 3d representation is the embedding, and the underlying 2d space itself is the latent space (or embedded space):

Synonyms

Depending on the specific impression you wish to give, "embedding" often goes by different terms:

- dimensionality reduction: combating the "curse of dimensionality"
- feature extraction, feature projection, feature embedding, feature learning, representation learning: extracting 'meaningful' features from raw data
- embedding, manifold learning, latent feature representation: understanding the underlying topology of the data

However this is not a hard-and-fast rule, and they are often completely interchangeable.
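As a small illustration in code (using scikit-learn; the choice of Isomap and the sample size are arbitrary), here is the Swiss-roll example: the observations live in a 3-D embedding space, while the recovered latent space is 2-D.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# 3-D observations that actually live on a 2-D manifold ("Swiss roll")
X, _ = make_swiss_roll(n_samples=1000, noise=0.05, random_state=0)
print(X.shape)                 # (1000, 3)  -> the embedding space is 3-D

# Recover (an approximation of) the 2-D latent space
Z = Isomap(n_components=2).fit_transform(X)
print(Z.shape)                 # (1000, 2)  -> the latent space is 2-D
```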
{ "source": [ "https://ai.stackexchange.com/questions/11285", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/2444/" ] }
12,870
Explainable artificial intelligence (XAI) is concerned with the development of techniques that can enhance the interpretability, accountability, and transparency of artificial intelligence and, in particular, machine learning algorithms and models, especially black-box ones, such as artificial neural networks, so that these can also be adopted in areas, like healthcare, where the interpretability and understanding of the results (e.g. classifications) are required. Which XAI techniques are there? If there are many, to avoid making this question too broad, you can just provide a few examples (the most famous or effective ones), and, for people interested in more techniques and details, you can also provide one or more references/surveys/books that go into the details of XAI. The idea of this question is that people could easily find one technique that they could study to understand what XAI really is or how it can be approached.
Explainable AI and model interpretability are hyper-active and hyper-hot areas of current research (think of holy grail, or something), which have been brought forward lately not least due to the (often tremendous) success of deep learning models in various tasks, plus the necessity of algorithmic fairness & accountability. Here are some state of the art algorithms and approaches, together with implementations and frameworks. Model-agnostic approaches LIME: Local Interpretable Model-agnostic Explanations ( paper , code , blog post , R port ) SHAP: A Unified Approach to Interpreting Model Predictions ( paper , Python package , R package ). GPU implementation for tree models by NVIDIA using RAPIDS - GPUTreeShap ( paper , code , blog post ) Anchors: High-Precision Model-Agnostic Explanations ( paper , authors' Python code , Java implementation) Diverse Counterfactual Explanations (DiCE) by Microsoft ( paper , code , blog post ) Black Box Auditing and Certifying and Removing Disparate Impact (authors' Python code ) FairML: Auditing Black-Box Predictive Models, by Cloudera Fast Forward Labs ( blog post , paper , code ) SHAP seems to enjoy high popularity among practitioners; the method has firm theoretical foundations on co-operational game theory (Shapley values), and it has in a great degree integrated the LIME approach under a common framework. Although model-agnostic, specific & efficient implementations are available for neural networks ( DeepExplainer ) and tree ensembles ( TreeExplainer , paper ). Neural network approaches (mostly, but not exclusively, for computer vision models) The Layer-wise Relevance Propagation (LRP) toolbox for neural networks ( 2015 paper @ PLoS ONE , 2016 paper @ JMLR , project page , code , TF Slim wrapper ) Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization ( paper , authors' Torch code , Tensorflow code , PyTorch code , yet another Pytorch implementation , Keras example notebook , Coursera Guided Project ) Axiom-based Grad-CAM (XGrad-CAM): Towards Accurate Visualization and Explanation of CNNs, a refinement of the existing Grad-CAM method ( paper , code ) SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability ( paper , code , Google blog post ) TCAV: Testing with Concept Activation Vectors ( ICML 2018 paper , Tensorflow code ) Integrated Gradients ( paper , code , Tensorflow tutorial , independent implementations ) Network Dissection: Quantifying Interpretability of Deep Visual Representations, by MIT CSAIL ( project page , Caffe code , PyTorch port ) GAN Dissection: Visualizing and Understanding Generative Adversarial Networks, by MIT CSAIL ( project page , with links to paper & code) Explain to Fix: A Framework to Interpret and Correct DNN Object Detector Predictions ( paper , code ) Transparecy-by-Design (TbD) networks ( paper , code , demo ) Distilling a Neural Network Into a Soft Decision Tree , a 2017 paper by Geoff Hinton, with various independent PyTorch implementations Understanding Deep Networks via Extremal Perturbations and Smooth Masks ( paper ), implemented in TorchRay (see below) Understanding the Role of Individual Units in a Deep Neural Network ( preprint , 2020 paper @ PNAS , code , project page ) GNNExplainer: Generating Explanations for Graph Neural Networks ( paper , code ) Benchmarking Deep Learning Interpretability in Time Series Predictions ( paper @ NeurIPS 2020, code utilizing Captum ) Concept Whitening for Interpretable Image Recognition ( paper , preprint , 
code ) Libraries & frameworks As interpretability moves toward the mainstream, there are already frameworks and toolboxes that incorporate more than one of the algorithms and techniques mentioned and linked above; here is a partial list: The ELI5 Python library ( code , documentation ) DALEX - moDel Agnostic Language for Exploration and eXplanation ( homepage , code , JMLR paper ), part of the DrWhy.AI project The What-If tool by Google, a feature of the open-source TensorBoard web application, which let users analyze an ML model without writing code ( project page , code , blog post ) The Language Interpretability Tool (LIT) by Google, a visual, interactive model-understanding tool for NLP models ( project page , code , blog post ) Lucid, a collection of infrastructure and tools for research in neural network interpretability by Google ( code ; papers: Feature Visualization , The Building Blocks of Interpretability ) TorchRay by Facebook, a PyTorch package implementing several visualization methods for deep CNNs iNNvestigate Neural Networks ( code , JMLR paper ) tf-explain - interpretability methods as Tensorflow 2.0 callbacks ( code , docs , blog post ) InterpretML by Microsoft ( homepage , code still in alpha, paper ) Captum by Facebook AI - model interpetability for Pytorch ( homepage , code , intro blog post ) Skater, by Oracle ( code , docs ) Alibi, by SeldonIO ( code , docs ) AI Explainability 360, commenced by IBM and moved to the Linux Foundation ( homepage , code , docs , IBM Bluemix , blog post ) Ecco: explaining transformer-based NLP models using interactive visualizations ( homepage , code , article ). Recipes for Machine Learning Interpretability in H2O Driverless AI ( repo ) Reviews & general papers A Survey of Methods for Explaining Black Box Models (2018, ACM Computing Surveys) Definitions, methods, and applications in interpretable machine learning (2019, PNAS) Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead (2019, Nature Machine Intelligence, preprint ) Machine Learning Interpretability: A Survey on Methods and Metrics (2019, Electronics) Principles and Practice of Explainable Machine Learning (2020, preprint) Interpretable Machine Learning -- A Brief History, State-of-the-Art and Challenges (keynote at 2020 ECML XKDD workshop by Christoph Molnar, video & slides ) Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI (2020, Information Fusion) Counterfactual Explanations for Machine Learning: A Review (2020, preprint, critique by Judea Pearl) Interpretability 2020 , an applied research report by Cloudera Fast Forward, updated regularly Interpreting Predictions of NLP Models (EMNLP 2020 tutorial) Explainable NLP Datasets ( site , preprint , highlights ) Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges eBooks (available online) Interpretable Machine Learning , by Christoph Molnar, with R code available Explanatory Model Analysis , by DALEX creators Przemyslaw Biecek and Tomasz Burzykowski, with both R & Python code snippets An Introduction to Machine Learning Interpretability (2nd ed. 
2019), by H2O Online courses & tutorials Machine Learning Explainability , Kaggle tutorial Explainable AI: Scene Classification and GradCam Visualization , Coursera guided project Explainable Machine Learning with LIME and H2O in R , Coursera guided project Interpretability and Explainability in Machine Learning , Harvard COMPSCI 282BR Other resources explained.ai blog A Twitter thread , linking to several interpretation tools available for R A whole bunch of resources in the Awesome Machine Learning Interpretability repo The online comic book (!) The Hitchhiker's Guide to Responsible Machine Learning , by the team behind the textbook Explanatory Model Analysis and the DALEX package mentioned above ( blog post and backstage )
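As a taste of how one of these tools is typically used, here is a minimal SHAP sketch for a tree ensemble (illustrative only; the dataset and model are arbitrary, and the API shown is the commonly documented one, so check the current SHAP documentation for your version).

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)     # model-specific, fast for tree ensembles
shap_values = explainer.shap_values(X)    # one attribution per feature per sample

# Global view: which features drive the model's predictions overall
shap.summary_plot(shap_values, X)
```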
{ "source": [ "https://ai.stackexchange.com/questions/12870", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/2444/" ] }
13,289
Imagine you show a neural network a picture of a lion 100 times and label it with "dangerous", so it learns that lions are dangerous. Now imagine that previously you have shown it millions of images of lions and alternatively labeled it as "dangerous" and "not dangerous", such that the probability of a lion being dangerous is 50%. But those last 100 times have pushed the neural network into being very positive about regarding the lion as "dangerous", thus ignoring the last million lessons. Therefore, it seems there is a flaw in neural networks, in that they can change their mind too quickly based on recent evidence. Especially if that previous evidence was in the middle. Is there a neural network model that keeps track of how much evidence it has seen? (Or would this be equivalent to letting the learning rate decrease by $1/T$ where $T$ is the number of trials?)
Yes, indeed, neural networks are very prone to catastrophic forgetting (or interference) . Currently, this problem is often ignored because neural networks are mainly trained offline (sometimes called batch training ), where this problem does not often arise, and not online or incrementally , which is fundamental to the development of artificial general intelligence . There are some people that work on continual lifelong learning in neural networks, which attempts to adapt neural networks to continual lifelong learning, which is the ability of a model to learn from a stream of data continually, so that they do not completely forget previously acquired knowledge while learning new information. See, for example, the paper Continual lifelong learning with neural networks: A review (2019), by German I. Parisi et al., which summarises the problems and existing solutions related to catastrophic forgetting of neural networks.
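The effect is easy to reproduce on a small scale. The following sketch (illustrative only; the split into two "tasks", the network size and the number of passes are arbitrary) trains a small network on half of the digit classes, then continues training on the other half, and measures how much performance on the first half degrades.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X = X / 16.0
task_a, task_b = (y < 5), (y >= 5)           # two disjoint "tasks"

clf = MLPClassifier(hidden_layer_sizes=(64,), random_state=0)
classes = np.arange(10)

# Phase 1: learn task A only
for _ in range(30):
    clf.partial_fit(X[task_a], y[task_a], classes=classes)
acc_a_before = clf.score(X[task_a], y[task_a])

# Phase 2: keep training, but now only on task B
for _ in range(30):
    clf.partial_fit(X[task_b], y[task_b])
acc_a_after = clf.score(X[task_a], y[task_a])

print(f"accuracy on task A: {acc_a_before:.2f} -> {acc_a_after:.2f}")
# The accuracy on task A typically drops sharply: the network has
# largely "forgotten" it while learning task B.
```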
{ "source": [ "https://ai.stackexchange.com/questions/13289", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/4199/" ] }
13,317
The Wikipedia article for the universal approximation theorem cites a version of the universal approximation theorem for Lebesgue-measurable functions from this conference paper . However, the paper does not include the proofs of the theorem. Does anybody know where the proof can be found?
There are multiple papers on the topic because there have been multiple attempts to prove that neural networks are universal (i.e. they can approximate any continuous function) from slightly different perspectives and using slightly different assumptions (e.g. assuming that certain activation functions are used). Note that these proofs tell you that neural networks can approximate any continuous function, but they do not tell you exactly how you need to train your neural network so that it approximates your desired function. Moreover, most papers on the topic are quite technical and mathematical, so, if you do not have a solid knowledge of approximation theory and related fields, they may be difficult to read and understand. Nonetheless, below there are some links to some possibly useful articles and papers. The article A visual proof that neural nets can compute any function (by Michael Nielsen) should give you some intuition behind the universality of neural networks, so this is probably the first article you should read. Then you should probably read the paper Approximation by Superpositions of a Sigmoidal Function (1989), by G. Cybenko, who proves that multi-layer perceptrons (i.e. feed-forward neural networks with at least one hidden layer) can approximate any continuous function . However, he assumes that the neural network uses sigmoid activations functions, which, nowadays, have been replaced in many scenarios by ReLU activation functions. Other works (e.g. [1] , [2] ) showed that you don't necessarily need sigmoid activation functions, but only certain classes of activation functions do not make neural networks universal. The universality property (i.e. the ability to approximate any continuous function) has also been proved in the case of convolutional neural networks . For example, see Universality of Deep Convolutional Neural Networks (2020), by Ding-Xuan Zhou, which shows that convolutional neural networks can approximate any continuous function to an arbitrary accuracy when the depth of the neural network is large enough. See also Refinement and Universal Approximation via Sparsely Connected ReLU Convolution Nets (by A. Heinecke et al., 2020) See also page 632 of Recurrent Neural Networks Are Universal Approximators (2006), by Schäfer et al., which shows that recurrent neural networks are universal function approximators. See also On the computational power of neural nets (1992, COLT) by Siegelmann and Sontag. This answer could also be useful. For graph neural networks , see Universal Function Approximation on Graphs (by Rickard Brüel Gabrielsson, 2020, NeurIPS)
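None of these theorems tell you how to find the approximation, but the statement itself is easy to illustrate empirically. The following NumPy sketch (illustrative only; the target function, the number of hidden units and the learning rate are arbitrary) fits a one-hidden-layer sigmoid network to a continuous function by plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x) + 0.5 * np.cos(2 * x)          # the continuous target function

H, lr = 50, 0.05                             # hidden units, learning rate
W1, b1 = rng.normal(scale=0.5, size=(1, H)), np.zeros(H)
W2, b2 = rng.normal(scale=0.5, size=(H, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):                    # plain full-batch gradient descent
    h = sigmoid(x @ W1 + b1)                 # hidden layer
    y_hat = h @ W2 + b2                      # linear output
    err = y_hat - y
    # backpropagation
    dW2 = h.T @ err / len(x); db2 = err.mean(axis=0)
    dh = err @ W2.T * h * (1 - h)
    dW1 = x.T @ dh / len(x); db1 = dh.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2; W1 -= lr * dW1; b1 -= lr * db1

# the error should shrink substantially from its initial value
print("mean squared error:", float(np.mean(err ** 2)))
```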
{ "source": [ "https://ai.stackexchange.com/questions/13317", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/27047/" ] }
13,775
I just finished a 1-year Data Science master's program where we were taught R. I found that Python is more popular and has a larger community in AI. What are the advantages that Python may have over R in terms of features applicable to the field of Data Science and AI (other than popularity and larger community)? What positions in Data Science and AI would be more Python-heavy than R-heavy (especially comparing industry, academic, and government job positions)? In short, is Python worthwhile in all job situations or can I get by with only R in some positions?
I want to reframe your question. Don't think about switching, think about adding. In data science you'll be able to go very far with either Python or R, but you'll go farthest with both. Python and R integrate very well, thanks to the reticulate package. I often tidy data in R because it is easier for me, train a model in Python to benefit from its superior speed, and visualize the outcomes in R in beautiful ggplot, all in one notebook! If you already know R, there is no sense in abandoning it; use it where it is sensible and easy for you. But it is 100% a good idea to add Python for many uses. Once you feel comfortable in both, you'll have a workflow that fits you best, dominated by your favorite language.
{ "source": [ "https://ai.stackexchange.com/questions/13775", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/27652/" ] }
14,224
If the original purpose for developing AI was to help humans in some tasks and that purpose still holds, why should we care about its explainability? For example, in deep learning, as long as the intelligence helps us to the best of their abilities and carefully arrives at its decisions, why would we need to know how its intelligence works?
As argued by Selvaraju et al., there are three stages of AI evolution in which interpretability is helpful.

In the early stages of AI development, when AI is weaker than human performance, transparency can help us build better models. It can give a better understanding of how a model works and helps us answer several key questions: for example, why a model works in some cases and doesn't in others, why some examples confuse the model more than others, why these types of models work and the others don't, etc.

When AI is on par with human performance and ML models are starting to be deployed in several industries, it can help build trust for these models. I'll elaborate a bit on this later, because I think that it is the most important reason.

When AI significantly outperforms humans (e.g. AI playing chess or Go), it can help with machine teaching (i.e. learning from the machine how to improve human performance on that specific task).

Why is trust so important?

First, let me give you a couple of examples of industries where trust is paramount:

In healthcare, imagine a Deep Neural Net performing diagnosis for a specific disease. A classic black box NN would just output a binary "yes" or "no". Even if it could outperform humans in sheer predictive accuracy, it would be utterly useless in practice. What if the doctor disagreed with the model's assessment? Shouldn't he know why the model made that prediction? Maybe it saw something the doctor missed. Furthermore, if it made a misdiagnosis (e.g. a sick person was classified as healthy and didn't get the proper treatment), who would take responsibility: the model's user? the hospital? the company that designed the model? The legal framework surrounding this is a bit blurry.

Another example is self-driving cars. The same questions arise: if a car crashes, whose fault is it: the driver's? the car manufacturer's? the company that designed the AI? Legal accountability is key for the development of this industry.

In fact, according to many, this lack of trust has hindered the adoption of AI in many fields (sources: [1], [2], [3]). Meanwhile, there is a running hypothesis that, with more transparent, interpretable or explainable systems, users will be better equipped to understand and therefore trust the intelligent agents (sources: [4], [5], [6]). In several real-world applications, you can't just say "it works 94% of the time". You might also need to provide a justification...

Government regulations

Several governments are slowly proceeding to regulate AI, and transparency seems to be at the center of all of this. The first to move in this direction is the EU, which has set several guidelines stating that AI should be transparent (sources: [7], [8], [9]). For instance, the GDPR states that if a person's data has been subject to "automated decision-making" or "profiling" systems, then he has a right to access "meaningful information about the logic involved" (Article 15, EU GDPR). Now, this is a bit blurry, but there is clearly the intent of requiring some form of explainability from these systems. The general idea the EU is trying to convey is that "if you have an automated decision-making system affecting people's lives, then they have a right to know why a certain decision has been made." For example, if a bank has an AI accepting and declining loan applications, then the applicants have a right to know why their application was rejected.

To sum up...
Explainable AIs are necessary because:

- They give us a better understanding, which helps us improve them.
- In some cases, we can learn from AI how to make better decisions in some tasks.
- They help users trust AI, which leads to a wider adoption of AI.
- Deployed AIs in the (not too distant) future might be required to be more "transparent".
{ "source": [ "https://ai.stackexchange.com/questions/14224", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/16565/" ] }
15,449
We often hear that artificial intelligence may harm or even kill humans, so it might prove dangerous. How could artificial intelligence harm us?
tl;dr There are many valid reasons why people might fear (or better, be concerned about) AI, and not all of them involve robots and apocalyptic scenarios. To better illustrate these concerns, I'll try to split them into three categories.

Conscious AI

This is the type of AI that your question is referring to: a super-intelligent conscious AI that will destroy/enslave humanity. This is mostly brought to us by science-fiction. Some notable Hollywood examples are "The Terminator", "The Matrix", "Age of Ultron". The most influential novels were written by Isaac Asimov and are referred to as the "Robot series" (which includes "I, Robot", which was also adapted as a movie). The basic premise of most of these works is that AI will evolve to a point where it becomes conscious and will surpass humans in intelligence. While Hollywood movies mainly focus on the robots and the battle between them and humans, not enough emphasis is given to the actual AI (i.e. the "brain" controlling them). As a side note, because of the narrative, this AI is usually portrayed as a supercomputer controlling everything (so that the protagonists have a specific target). Not enough exploration has been made of "ambiguous intelligence" (which I think is more realistic).

In the real world, AI is focused on solving specific tasks! An AI agent that is capable of solving problems from different domains (e.g. understanding speech and processing images and driving and ... - like humans are) is referred to as General Artificial Intelligence and is required for AI to be able to "think" and become conscious. Realistically, we are a loooooooong way from General Artificial Intelligence! That being said, there is no evidence of why this can't be achieved in the future. So currently, even if we are still in the infancy of AI, we have no reason to believe that AI won't evolve to a point where it is more intelligent than humans.

Using AI with malicious intent

Even though an AI conquering the world is a long way from happening, there are several reasons to be concerned with AI today that don't involve robots! The second category I want to focus a bit more on is several malicious uses of today's AI. I'll focus only on AI applications that are available today. Some examples of AI that can be used for malicious intent:

- DeepFake: a technique for superimposing someone's face onto an image or video of another person. This has gained popularity recently with celebrity porn and can be used to generate fake news and hoaxes. Sources: 1, 2, 3
- Mass surveillance: with the use of mass surveillance systems and facial recognition software capable of recognizing millions of faces per second, AI can be used for mass surveillance. Even though when we think of mass surveillance we think of China, many western cities like London, Atlanta and Berlin are among the most-surveilled cities in the world. China has taken things a step further by adopting the social credit system, an evaluation system for civilians which seems to be taken straight out of the pages of George Orwell's 1984.
- Influencing people through social media. Aside from recognizing users' tastes with the goal of targeted marketing and ad placements (a common practice by many internet companies), AI can be used maliciously to influence people's voting (among other things). Sources: 1, 2, 3.
- Hacking.
- Military applications, e.g. drone attacks, missile targeting systems.

Adverse effects of AI

This category is pretty subjective, but the development of AI might carry some adverse side-effects.
The distinction between this category and the previous one is that these effects, while harmful, aren't done intentionally; rather, they occur with the development of AI. Some examples are:

- Jobs becoming redundant. As AI becomes better, many jobs will be replaced by AI. Unfortunately, there are not many things that can be done about this, as most technological developments have this side-effect (e.g. agricultural machinery caused many farmers to lose their jobs, automation replaced many factory workers, computers did the same).
- Reinforcing the bias in our data. This is a very interesting category, as AI (and especially Neural Networks) are only as good as the data they are trained on and have a tendency of perpetuating and even enhancing different forms of social biases already existing in the data. There are many examples of networks exhibiting racist and sexist behavior. Sources: 1, 2, 3, 4.
{ "source": [ "https://ai.stackexchange.com/questions/15449", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/29713/" ] }
15,730
As human beings, we can think about infinity. In principle, if we have enough resources (time, etc.), we can count infinitely many things (whether abstract, like numbers, or real). For example, at the very least, we can take into account the integers. We can think about, and in principle "understand", infinitely many numbers that are displayed on a screen. Nowadays, we are trying to design artificial intelligence which is at least as capable as a human being. However, I am stuck with infinity. I am trying to find a way to teach a model (deep or not) to understand infinity. I define "understanding" in a functional way. For example, if a computer can differentiate 10 different numbers or things, it means that it really understands these different things somehow. This is the basic, straightforward approach to "understanding". As I mentioned before, humans understand infinity because they are capable, at least in principle, of counting infinitely many integers. From this point of view, if I want to create a model, the model is actually a function in an abstract sense, and this model must differentiate infinitely many numbers. Since computers are digital machines which have a limited capacity to model such an infinite function, how can I create a model that differentiates infinitely many integers? For example, we can take a deep learning vision model that recognizes numbers on a card. This model must assign a number to each different card in order to differentiate each integer. Since there exist infinitely many integers, how can the model assign a different number to each integer, like a human being can, on a digital computer? If it cannot differentiate infinitely many things, how does it understand infinity? If I take into account real numbers, the problem becomes even harder. What is the point that I am missing? Are there any resources that focus on this subject?
I think this is a fairly common misconception about AI and computers, especially among laypeople. There are several things to unpack here. Let's suppose that there's something special about infinity (or about continuous concepts) that makes them especially difficult for AI. For this to be true, it must both be the case that humans can understand these concepts while they remain alien to machines, and that there exist other concepts that are not like infinity that both humans and machines can understand. What I'm going to show in this answer is that wanting both of these things leads to a contradiction. The root of this misunderstanding is the problem of what it means to understand . Understanding is a vague term in everyday life, and that vague nature contributes to this misconception. If by understanding, we mean that a computer has the conscious experience of a concept, then we quickly become trapped in metaphysics. There is a long running , and essentially open debate about whether computers can "understand" anything in this sense, and even at times, about whether humans can! You might as well ask whether a computer can "understand" that 2+2=4. Therefore, if there's something special about understanding infinity, it cannot be related to "understanding" in the sense of subjective experience. So, let's suppose that by "understand", we have some more specific definition in mind. Something that would make a concept like infinity more complicated for a computer to "understand" than a concept like arithmetic. Our more concrete definition for "understanding" must relate to some objectively measurable capacity or ability related to the concept (otherwise, we're back in the land of subjective experience). Let's consider what capacity or ability might we pick that would make infinity a special concept, understood by humans and not machines, unlike say, arithmetic. We might say that a computer (or a person) understands a concept if it can provide a correct definition of that concept. However, if even one human understands infinity by this definition, then it should be easy for them to write down the definition. Once the definition is written down, a computer program can output it. Now the computer "understands" infinity too. This definition doesn't work for our purposes. We might say that an entity understands a concept if it can apply the concept correctly. Again, if even the one person understands how to apply the concept of infinity correctly, then we only need to record the rules they are using to reason about the concept, and we can write a program that reproduces the behavior of this system of rules. Infinity is actually very well characterized as a concept, captured in ideas like Aleph Numbers . It is not impractical to encode these systems of rules in a computer, at least up to the level that any human understands them. Therefore, computers can "understand" infinity up to the same level of understanding as humans by this definition as well. So this definition doesn't work for our purposes. We might say that an entity "understands" a concept if it can logically relate that concept to arbitrary new ideas. This is probably the strongest definition, but we would need to be pretty careful here: very few humans (proportionately) have a deep understanding of a concept like infinity. Even fewer can readily relate it to arbitrary new concepts. Further, algorithms like the General Problem Solver can, in principal, derive any logical consequences from a given body of facts, given enough time. 
Perhaps under this definition computers understand infinity better than most humans, and there is certainly no reason to suppose that our existing algorithms will not further improve this capability over time. This definition does not seem to meet our requirements either. Finally, we might say that an entity "understands" a concept if it can generate examples of it. For example, I can generate examples of problems in arithmetic, and their solutions. Under this definition, I probably do not "understand" infinity, because I cannot actually point to or create any concrete thing in the real world that is definitely infinite. I cannot, for instance, actually write down an infinitely long list of numbers, merely formulas that express ways to create ever longer lists by investing ever more effort in writing them out. A computer ought to be at least as good as me at this. This definition also does not work. This is not an exhaustive list of possible definitions of "understands", but we have covered "understands" as I understand it pretty well. Under every definition of understanding, there isn't anything special about infinity that separates it from other mathematical concepts. So the upshot is that, either you decide a computer doesn't "understand" anything at all, or there's no particularly good reason to suppose that infinity is harder to understand than other logical concepts. If you disagree, you need to provide a concrete definition of "understanding" that does separate understanding of infinity from other concepts, and that doesn't depend on subjective experiences (unless you want to claim your particular metaphysical views are universally correct, but that's a hard argument to make). Infinity has a sort of semi-mystical status among the lay public, but it's really just like any other mathematical system of rules: if we can write down the rules by which infinity operates, a computer can do them as well as a human can (or better).
{ "source": [ "https://ai.stackexchange.com/questions/15730", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/19102/" ] }
15,820
Is there any research on the development of attacks against artificial intelligence systems? For example, is there a way to generate a letter "A", which every human being in this world can recognize but, if it is shown to the state-of-the-art character recognition system, this system will fail to recognize it? Or spoken audio which can be easily recognized by everyone but will fail on the state-of-the-art speech recognition system. If there exists such a thing, is this technology a theory-based science (mathematics proved) or an experimental science (randomly add different types of noise and feed into the AI system and see how it works)? Where can I find such material?
Yes, there is some research on this topic, which can be called adversarial machine learning, which is more of an experimental field. An adversarial example is an input similar to the ones used to train the model, but that leads the model to produce an unexpected outcome. For example, consider an artificial neural network (ANN) trained to distinguish between oranges and apples. You are then given an image of an apple similar to another image used to train the ANN, but that is slightly blurred. Then you pass it to the ANN, which unexpectedly predicts the object to be an orange. Several machine learning and optimization methods have been used to detect the boundary behaviour of machine learning models, that is, the unexpected behaviour of the model that produces different outcomes given two slightly different inputs (but that correspond to the same object). For example, evolutionary algorithms have been used to develop tests for self-driving cars. See, for example, Automatically testing self-driving cars with search-based procedural content generation (2019) by Alessio Gambi et al.
{ "source": [ "https://ai.stackexchange.com/questions/15820", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/30335/" ] }
15,857
Is the gradient at a layer (of a feed-forward neural network) independent of the activations of the previous layers? I read this in a paper titled Mean Field Residual Networks: On the Edge of Chaos (2017). I am not sure how far this is true, because the error depends on those activations.
{ "source": [ "https://ai.stackexchange.com/questions/15857", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/30384/" ] }
17,721
I trained a simple CNN on the MNIST database of handwritten digits to 99% accuracy. I'm feeding in a bunch of handwritten digits, and non-digits from a document. I want the CNN to report errors, so I set a threshold of 90% certainty below which my algorithm assumes that what it's looking at is not a digit. My problem is that the CNN is 100% certain of many incorrect guesses. In the example below, the CNN reports 100% certainty that it's a 0. How do I make it report failure? My thoughts on this : Maybe the CNN is not really 100% certain that this is a zero. Maybe it just thinks that it can't be anything else, and it's being forced to choose (because of normalisation on the output vector). Is there any way I can get insight into what the CNN "thought" before I forced it to choose? PS: I'm using Keras on Tensorflow with Python. Edit Because someone asked. Here is the context of my problem: This came from me applying a heuristic algorithm for segmentation of sequences of connected digits. In the image above, the left part is actually a 4, and the right is the curve bit of a 2 without the base. The algorithm is supposed to step through segment cuts, and when it finds a confident match, remove that cut and continue moving along the sequence. It works really well for some cases, but of course it's totally reliant on being able to tell if what it's looking at is not a good match for a digit. Here's an example of where it kind of did okay. My next best option is to do inference on all permutations and maximise combined score. That's more expensive.
The concept you are looking for is called epistemic uncertainty, also known as model uncertainty. You want the model to produce meaningful calibrated probabilities that quantify the real confidence of the model. This is generally not possible with simple neural networks, as they simply do not have this property; for this you need a Bayesian Neural Network (BNN). This kind of network learns a distribution of weights instead of scalar or point-wise weights, which then allows model uncertainty to be encoded, as then the distribution of the output is calibrated and has the properties you want. This problem is also called out-of-distribution (OOD) detection, and again it can be done with BNNs, but unfortunately training a full BNN is intractable, so we use approximations. As a reference, one of these approximations is Deep Ensembles, which trains several instances of a model on the same dataset and then averages the softmax probabilities, and this has good out-of-distribution detection properties. Check the paper here, in particular section 3.5, which shows results for OOD detection based on the entropy of the ensemble probabilities.
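To make the ensemble idea concrete, here is a minimal, self-contained C++ sketch (toy numbers, not a real model) that averages the softmax outputs of a few hypothetical ensemble members and computes the entropy of the averaged distribution, which is the quantity used as an out-of-distribution score: the higher the entropy, the less confident the ensemble, and the more likely the input is not a digit at all. The member probabilities and any threshold you pick are made up for illustration.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Toy sketch: average the softmax outputs of an ensemble and use the entropy
// of the averaged distribution as an out-of-distribution score.
int main() {
    // Hypothetical softmax outputs of 3 ensemble members for one input, 10 classes each.
    std::vector<std::vector<double>> members = {
        {0.70, 0.05, 0.05, 0.05, 0.03, 0.03, 0.03, 0.02, 0.02, 0.02},
        {0.10, 0.60, 0.05, 0.05, 0.05, 0.05, 0.03, 0.03, 0.02, 0.02},
        {0.05, 0.05, 0.65, 0.05, 0.05, 0.05, 0.03, 0.03, 0.02, 0.02},
    };

    const size_t numClasses = members[0].size();
    std::vector<double> mean(numClasses, 0.0);

    // Average the member probabilities class by class.
    for (const auto& p : members)
        for (size_t c = 0; c < numClasses; ++c)
            mean[c] += p[c] / members.size();

    // Predictive entropy of the averaged distribution.
    double entropy = 0.0;
    for (double p : mean)
        if (p > 0.0) entropy -= p * std::log(p);

    std::printf("Predictive entropy: %.3f (flag as OOD above a chosen threshold)\n", entropy);
    return 0;
}
```

Here the members disagree strongly, so the averaged distribution is flat and the entropy is high, which is exactly the signal you would use to report "not a digit".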
{ "source": [ "https://ai.stackexchange.com/questions/17721", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/16871/" ] }
17,732
In the famous Nvidia paper Progressive Growing of GANs for Improved Quality, Stability, and Variation, the GAN can generate hyperrealistic human faces. But, in the very same paper, images of other categories are rather disappointing, and there don't seem to have been any improvements since then. Why is that the case? Is it because they didn't have enough training data for other categories? Or is it due to some fundamental limitation of GANs? I have come across a paper talking about the limitations of GANs: Seeing What a GAN Cannot Generate. Is anybody using GANs for image synthesis other than human faces? Any success stories?
{ "source": [ "https://ai.stackexchange.com/questions/17732", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/33082/" ] }
17,755
We have hundreds of thousands of customer records, and we need to take advantage of our data to train a model that will recognize fake or unrealistic entries for our platform, where customers are asked to enter their names, phone number and zip code. So, our attributes to train the model with are name, phone number, zip code and IP address. We have only data associated with real users. Can we train a model provided with only positive labels (as we do not have a negative dataset to train the model with)?
{ "source": [ "https://ai.stackexchange.com/questions/17755", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/33135/" ] }
18,576
Background: It's well-known that neural networks offer great performance across a large number of tasks, and this is largely a consequence of their universal approximation capabilities . However, in this post I'm curious about the opposite : Question: Namely, what are some well-known cases, problems or real-world applications where neural networks don't do very well? Specification: I'm looking for specific regression tasks (with accessible data-sets) where neural networks are not the state-of-the-art. The regression task should be "naturally suitable", so no sequential or time-dependent data (in which case an RNN or reservoir computer would be more natural).
Here's a snippet from an article by Gary Marcus In particular, they showed that standard deep learning nets often fall apart when confronted with common stimuli rotated in three dimensional space into unusual positions, like the top right corner of this figure, in which a schoolbus is mistaken for a snowplow: . . . Mistaking an overturned schoolbus is not just a mistake, it’s a revealing mistake: it that shows not only that deep learning systems can get confused, but they are challenged in making a fundamental distinction known to all philosophers: the distinction between features that are merely contingent associations (snow is often present when there are snowplows, but not necessary) and features that are inherent properties of the category itself (snowplows ought other things being equal have plows, unless eg they have been dismantled). We’d already seen similar examples with contrived stimuli, like Anish Athalye’s carefully designed, 3-d printed foam covered dimensional baseball that was mistaken for an espresso Alcorn’s results — some from real photos from the natural world — should have pushed worry about this sort of anomaly to the top of the stack. Please note that the opinions of the author are his alone and I do not necessarily share all of them with him. Edit: Some more fun stuff 1) DeepMind's neural network that could play Breakout and Starcraft saw a dramatic dip in performance when the paddle was moved up by a few pixels. See: General Game Playing With Schema Networks While in the latter, it performed well with one race of the character but not on a different map and with different characters. Source 2) AlphaZero searches just 80,000 positions per second in chess and 40,000 in shogi, compared to 70 million for Stockfish and 35 million for elmo. What the team at Deepmind did was to build a very good search algorithm. A search algorithm that includes the capability to remember facets of previous searches to apply better results to new searches. This is very clever; it undoubtedly has immense value in many areas, but it cannot be considered general intelligence. See: AlphaZero: How Intuition Demolished Logic (Medium)
{ "source": [ "https://ai.stackexchange.com/questions/18576", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/31649/" ] }
18,587
It's an idea I heard a while back but couldn't remember the name of. It involves the existence and development of an AI that will eventually rule the world and that if you don't fund or progress the AI then it will see you as "hostile" and kill you. Also, by knowing about this concept, it essentially makes you a candidate for such consideration, as people who didn't know about it won't understand to progress such an AI. From my understanding, this idea isn't taken that seriously, but I'm curious to know the name nonetheless.
If I'm not mistaken you're looking for Roko's Basilisk , in which an otherwise benevolent future AI system tortures simulations of those who did not work to bring the system into existence
{ "source": [ "https://ai.stackexchange.com/questions/18587", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/34205/" ] }
20,075
I am reading the article How Transformers Work where the author writes Another problem with RNNs, and LSTMs, is that it’s hard to parallelize the work for processing sentences, since you have to process word by word. Not only that but there is no model of long and short-range dependencies . Why exactly does the transformer do better than RNN and LSTM in long-range context dependencies ?
I'll list some bullet points of the main innovations introduced by transformers, followed by bullet points of the main characteristics of the other architectures you mentioned, so we can then compare them. Transformers Transformers (Attention is all you need) were introduced in the context of machine translation with the purpose of avoiding recursion in order to allow parallel computation (to reduce training time) and also to reduce drops in performance due to long dependencies. The main characteristics are: Non-sequential: sentences are processed as a whole rather than word by word. Self Attention: this is the newly introduced 'unit' used to compute similarity scores between words in a sentence. Positional embeddings: another innovation introduced to replace recurrence. The idea is to use fixed or learned weights which encode information related to a specific position of a token in a sentence. The first point is the main reason why transformers do not suffer from long dependency issues. The original transformers do not rely on past hidden states to capture dependencies with previous words. They instead process a sentence as a whole. That is why there is no risk of losing (or "forgetting") past information. Moreover, multi-head attention and positional embeddings both provide information about the relationship between different words. RNN / LSTM Recurrent neural networks and Long Short-Term Memory models, as far as this question is concerned, are almost identical in their core properties: Sequential processing: sentences must be processed word by word. Past information retained through past hidden states: sequence-to-sequence models follow the Markov property: each state is assumed to be dependent only on the previously seen state. The first property is the reason why RNNs and LSTMs can't be trained in parallel. In order to encode the second word in a sentence I need the previously computed hidden states of the first word, therefore I need to compute that first. The second property is a bit more subtle, but not hard to grasp conceptually. Information in RNNs and LSTMs is retained thanks to previously computed hidden states. The point is that the encoding of a specific word is retained only for the next time step, which means that the encoding of a word strongly affects only the representation of the next word, so its influence is quickly lost after a few time steps. LSTMs (and also GRUs) can boost the dependency range they can learn a bit, thanks to a deeper processing of the hidden states through specific units (which comes with an increased number of parameters to train), but nevertheless the problem is inherently related to recursion. Another way in which people mitigated this problem is to use bi-directional models. These encode the same sentence from the start to end, and from the end to the start, allowing words at the end of a sentence to have a stronger influence on the creation of the hidden representation. However, this is just a workaround rather than a real solution for very long dependencies. CNN Convolutional neural networks are also widely used in NLP since they are quite fast to train and effective with short texts. The way they tackle dependencies is by applying different kernels to the same sentence, and indeed since their first application to text (Convolutional Neural Networks for Sentence Classification) they were implemented as multichannel CNNs. Why do different kernels allow dependencies to be learned?

Because a kernel of size 2, for example, would learn relationships between pairs of words, a kernel of size 3 would capture relationships between triplets of words, and so on. The evident problem here is that the number of different kernels required to capture dependencies among all possible combinations of words in a sentence would be enormous and impractical because of the exponentially growing number of combinations when increasing the maximum length of input sentences. To summarize, Transformers are better than all the other architectures because they totally avoid recursion, by processing sentences as a whole and by learning relationships between words thanks to multi-head attention mechanisms and positional embeddings. Nevertheless, it must be pointed out that transformers too can capture only dependencies within the fixed input size used to train them, i.e. if I use a maximum sentence size of 50, the model will not be able to capture dependencies between the first word of a sentence and words that occur more than 50 words later, like in another paragraph. Newer transformers like Transformer-XL try to overcome exactly this issue, by re-introducing a form of recursion: they store hidden states of already encoded sentences to leverage them in the subsequent encoding of the next sentences.
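To make the self-attention point above more concrete, here is a toy, self-contained C++ sketch of scaled dot-product attention over three tiny token vectors. It is not a trained transformer and the vectors are made up; it only shows that every token scores itself against every other token in one parallel step, with no recurrence involved.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Toy illustration (not a trained model): scaled dot-product self-attention
// over three 2-dimensional token vectors, using the embeddings directly as
// queries, keys and values for simplicity.
int main() {
    std::vector<std::vector<double>> x = {{1.0, 0.0}, {0.0, 1.0}, {1.0, 1.0}};
    const size_t n = x.size(), d = x[0].size();

    for (size_t i = 0; i < n; ++i) {
        // Scores of token i against every token j, scaled by sqrt(d).
        std::vector<double> score(n);
        double maxScore = -1e9;
        for (size_t j = 0; j < n; ++j) {
            score[j] = 0.0;
            for (size_t k = 0; k < d; ++k) score[j] += x[i][k] * x[j][k];
            score[j] /= std::sqrt(static_cast<double>(d));
            if (score[j] > maxScore) maxScore = score[j];
        }
        // Softmax over the scores (subtract the max for numerical stability).
        double sum = 0.0;
        for (size_t j = 0; j < n; ++j) { score[j] = std::exp(score[j] - maxScore); sum += score[j]; }
        for (size_t j = 0; j < n; ++j) score[j] /= sum;

        // The output for token i would be the attention-weighted sum of the value vectors.
        std::printf("token %zu attends with weights:", i);
        for (size_t j = 0; j < n; ++j) std::printf(" %.2f", score[j]);
        std::printf("\n");
    }
    return 0;
}
```

All three rows of weights are computed independently of each other, which is why this step can be parallelized across tokens, unlike the step-by-step hidden-state updates of an RNN or LSTM.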
{ "source": [ "https://ai.stackexchange.com/questions/20075", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/9863/" ] }
22,957
The transformer, introduced in the paper Attention Is All You Need , is a popular new neural network architecture that is commonly viewed as an alternative to recurrent neural networks, like LSTMs and GRUs. However, having gone through the paper, as well as several online explanations, I still have trouble wrapping my head around how they work. How can a non-recurrent structure be able to deal with inputs of arbitrary length?
Actually, there is usually an upper bound on the input length of transformers, due to the difficulty of handling very long sequences. At the current stage, the value is usually set to 512 or 1024. However, if you are asking about handling variable input sizes, adding a padding token such as [PAD], as in the BERT model, is a common solution. The positions of the [PAD] tokens can be masked in self-attention and therefore have no influence. Let's say we use a transformer model with a sequence-length limit of 512, and we pass it an input sequence of 103 tokens. We pad it to 512 tokens. In the attention layer, positions 104 to 512 are all masked, that is, they neither attend to other positions nor are attended to.
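As a rough illustration of the masking described above, here is a minimal C++ sketch that builds an additive attention bias for the 103-real-tokens / 512-length example: padded positions get a very large negative value so that their post-softmax attention weight is effectively zero. The exact representation of masks varies between implementations; this only shows the idea.

```cpp
#include <cstdio>
#include <vector>

// Minimal sketch of the masking idea: a sequence of 103 real tokens padded to
// a fixed length of 512. Padded positions receive a large negative additive
// bias so that, after softmax, their attention weight is effectively zero.
int main() {
    const int maxLen = 512;
    const int realLen = 103;
    const double negInf = -1e9;

    std::vector<double> attentionBias(maxLen, 0.0);
    for (int pos = realLen; pos < maxLen; ++pos)
        attentionBias[pos] = negInf;   // [PAD] positions: never attended to

    std::printf("bias at position 50 (real token): %.0f\n", attentionBias[50]);
    std::printf("bias at position 300 ([PAD]):     %.0f\n", attentionBias[300]);
    return 0;
}
```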
{ "source": [ "https://ai.stackexchange.com/questions/22957", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/12201/" ] }
22,959
I recently read some introductions to AI alignment, AIXI and decision theory. As far as I understood, one of the main problems in AI alignment is how to define a utility function well, so as not to cause something like the paperclip apocalypse. Then a question came to my mind: whatever the utility function is, we need a computer to compute the utility and reward, so there seems to be no way to prevent an AGI from seeking out and manipulating the utility computation so that it always gives the maximum reward. Just like we humans know that we can give ourselves happiness in chemical ways, and some people actually do so. Is there any way to prevent this from happening? Not just protecting the utility calculator physically from the AGI (how can we be sure that works forever?), but preventing the AGI from even thinking of it?
{ "source": [ "https://ai.stackexchange.com/questions/22959", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/40196/" ] }
38,270
After seeing StackOverflow's banning of ChatGPT, I explored it out of curiosity. It's marvellous, as it can write code by itself! Later, to check whether it knows chess as well, like Google DeepMind's AlphaZero AI, I asked the questions below: Me: Does openai know how to play chess? OpenAI: It is not clear whether OpenAI knows how to play chess. As an artificial intelligence research organization, OpenAI has developed a number of advanced machine learning algorithms and techniques that allow AI systems to learn and adapt to new tasks and situations. ... Me: In a chess if white has a king & bishop and if black has only a king, then who can win with a perfect play? OpenAI: In a chess game where white has a king and a bishop and black has only a king, it is possible for either player to win with perfect play. This type of endgame is known as a king and bishop versus king endgame, and it is considered one of the most challenging in chess. ... Now, this answer is entirely wrong, because as soon as only a bishop (or a knight) remains apart from the kings, the game is instantly declared a draw! Question: Shouldn't the AI reject my question on a subject that is not within what it knows? It does so for many other subjects. Note: if we replace the bishop with a rook, then ChatGPT gives exactly the same answer with those pieces swapped. However, in that case the answer happens to be true.
ChatGPT is a large language model. That means it's very good at stringing together words in ways that humans tend to use them. It's able to construct sentences that are grammatically correct and sound natural, for the most part, because it's been trained on language. Because it's good at stringing together words, it's able to take your prompt and generate words in a grammatically correct way that's similar to what it's seen before. But that's all that it's doing: generating words and making sure it sounds natural. It doesn't have any built-in fact checking capabilities, and the manual limitations that OpenAI placed can be fairly easily worked around. Someone in the OpenAI Discord server a few days ago shared a screenshot of the question "What mammal lays the largest eggs?" ChatGPT confidently declared that the elephant lays the largest eggs of any mammal. While much of the information that ChatGPT was trained on is accurate, always keep in mind that it's just stringing together words with no way to check if what it's saying is accurate. Its sources may have been accurate, but just writing in the style of your sources doesn't mean that the results will themselves be true.
{ "source": [ "https://ai.stackexchange.com/questions/38270", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/64749/" ] }
39,293
Sorry if this question makes no sense. I'm a software developer but know very little about AI. Quite a while ago, I read about the Chinese room, and the person inside who has had a lot of training/instructions how to combine symbols, and, as a result, is very good at combining symbols in a "correct" way, for whatever definition of correct. I said "training/instructions" because, for the purpose of this question, it doesn't really make a difference if the "knowledge" was acquired by parsing many many examples and getting a "feeling" for what's right and what's wrong (AI/learning), or by a very detailed set of instructions (algorithmic). So, the person responds with perfectly reasonable sentences, without ever understanding Chinese, or the content of its input. Now, as far as I understand ChatGPT (and I might be completely wrong here), that's exactly what ChatGPT does. It has been trained on a huge corpus of text, and thus has a very good feeling which words go together well and which don't, and, given a sentence, what's the most likely continuation of this sentence. But that doesn't really mean it understands the content of the sentence, it only knows how to chose words based on what it has seen. And because it doesn't really understand any content, it mostly gives answers that are correct, but sometimes it's completely off because it "doesn't really understand Chinese" and doesn't know what it's talking about. So, my question: is this "juggling of Chinese symbols without understanding their meaning" an adequate explanation of how ChatGPT works, and if not, where's the difference? And if yes, how far is AI from models that can actually understand (for some definition of "understand") textual content?
Yes, the Chinese Room argument by John Searle essentially demonstrates that, at the very least, it is hard to locate intelligence in a system based on its inputs and outputs. And the ChatGPT system is built very much as a machine for manipulating symbols according to opaque rules, without any grounding provided for what those symbols mean. The large language models are trained without ever getting to see, touch, or gain any experiential reference for any of their language components, other than yet more written language. It is much like trying to learn the meaning of a word by looking up its dictionary definition and finding it composed of other words that you don't know the meaning of, recursively, without any way of resolving it. If you possessed such a dictionary and no knowledge of the words defined, you would still be able to repeat those definitions, and if they were received by someone who did understand some of the words, the result would look like reasoning and "understanding". But this understanding is not yours; you are simply able to retrieve it on demand from where someone else stored it. This is also related to the symbol grounding problem in cognitive science. It is possible to argue that pragmatically the "intelligence" shown by the overall system is still real and resides somehow in the rules of how to manipulate the symbols. This argument and other similar ones try to side-step or dismiss some proposed hard problems in AI - for instance, by focusing on the behaviour of the whole system and not trying to address the currently impossible task of asking whether any system has subjective experience. This is beyond the scope of this answer (and not really what the question is about), but it is worth noting that the Chinese Room argument has attracted some criticism, and is not the only way to think about issues with AI systems based on language and symbols. I would agree with you that the latest language models, including ChatGPT, are good examples of the Chinese Room made real. The room part, that is: there is no pretend human in the middle, but actually that's not hugely important - the role of the human in the Chinese room is to demonstrate that, from the perspective of an entity inside the room processing a database of rules, nothing needs to possess any understanding or subjective experience that is relevant to the text. Now that next-symbol predictors (which all Large Language Models are to date) are demonstrating quite sophisticated, even surprising behaviour, it may lead to some better insights into the role that symbol-to-symbol references can take in more generally intelligent systems.
{ "source": [ "https://ai.stackexchange.com/questions/39293", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/68600/" ] }
1
I'm making a simple Arduino web server and I want to keep it turned on all the time, so it must be able to keep working continuously. I'm using an Arduino Uno with an Ethernet Shield. It's powered by a simple 5V @ 1A wall power supply. My Questions: Will I have any problems leaving the Arduino turned on all the time? Is there some other Arduino board better recommended for this? Are there any precautions that I need to heed regarding this?
You shouldn't have any issues keeping it on all the time; however, something to consider is any counters you may have, like the use of millis(). From the Arduino docs on millis: This number will overflow (go back to zero), after approximately 50 days. So for projects that are on for long periods of time, you may not see an issue immediately, but something like this could pop up and cause errors down the road.
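If you do use millis() in a long-running sketch like this, a common pattern that keeps working across the rollover is to compare elapsed time with unsigned subtraction rather than comparing absolute timestamps. A minimal sketch of the idea (the 1-second interval and the LED are just placeholders for whatever periodic work your server does):

```cpp
// Overflow-safe timing pattern: comparing elapsed time with unsigned
// subtraction keeps working across the ~50-day millis() rollover.
const unsigned long interval = 1000UL;   // 1 second
unsigned long previousMillis = 0;

void setup() {
  pinMode(LED_BUILTIN, OUTPUT);
}

void loop() {
  unsigned long now = millis();
  // (now - previousMillis) is well defined even after millis() wraps to zero.
  if (now - previousMillis >= interval) {
    previousMillis = now;
    digitalWrite(LED_BUILTIN, !digitalRead(LED_BUILTIN));  // toggle the LED
  }
}
```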
{ "source": [ "https://arduino.stackexchange.com/questions/1", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/3/" ] }
6
For solar & battery powered projects, decreasing power consumption is a necessity. I know a little bit about using a timer and interrupt to put the microcontroller into sleep when it isn't doing anything. I have also read that you can disable some peripherals to further save power. Here is a page explaining these: Power Saving Techniques My question: What other techniques are there to save power? Are there any libraries that make using these AVR features easier?
Note: I wanted to point out that, even though my answer was accepted, the reader (you) should really read Anindo's answer first if you really want to save energy with any Arduino board. Once you address those items in your design, then you should read my answer and worry about how to set the MCU into low power mode. Having said that, there are several techniques to make an Arduino save power. I'll list a few and then point you to a page that explains them all in more detail. While the controller isn't doing anything important (between one read of a sensor and the next, for example), you can put the controller into one of the sleep modes below, with the command set_sleep_mode(SLEEP_MODE_PWR_DOWN). Next to each mode is the approximate power consumption of each mode. SLEEP_MODE_IDLE: 15 mA SLEEP_MODE_ADC: 6.5 mA SLEEP_MODE_PWR_SAVE: 1.62 mA SLEEP_MODE_EXT_STANDBY: 1.62 mA SLEEP_MODE_STANDBY: 0.84 mA SLEEP_MODE_PWR_DOWN: 0.36 mA Disable brown-out detection (the circuitry that turns off the controller when low voltage is detected). Turn off the ADC (analog-to-digital conversion). Use the internal clock. Then, when you put the controller to sleep, you need to use one or more of the mechanisms below to wake up the controller and do something with it: Wake up with a signal Wake up with a timer This is a summary I made from Nick Gammon's article: Power saving techniques for microprocessors . That article applies mostly to the ATmega328P, but the techniques apply to other Arduino-compatible controllers as well. As TheDoctor said, you will need to check the datasheet to make sure your controller supports any of those techniques and to see how to do it more precisely.
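As a minimal sketch of the sleep technique (not taken from Nick Gammon's article, just an illustration of the same idea for an ATmega328P-based board such as an Uno), the code below turns off the ADC, enters SLEEP_MODE_PWR_DOWN and wakes when pin 2 is pulled LOW by a button to ground. Note that in power-down only a LOW level (not an edge) on INT0/INT1 can wake the chip; the pin number and timings are arbitrary choices for the example.

```cpp
#include <avr/sleep.h>

// Wake-up interrupt handler: stop the level interrupt from re-firing while
// the button is held, then let loop() continue.
void wakeISR() {
  sleep_disable();                             // cancel sleep as a precaution
  detachInterrupt(digitalPinToInterrupt(2));   // stop repeated triggering
}

void setup() {
  pinMode(2, INPUT_PULLUP);          // wake button between pin 2 and GND
  pinMode(LED_BUILTIN, OUTPUT);
}

void loop() {
  digitalWrite(LED_BUILTIN, HIGH);   // show that we are awake
  delay(500);
  digitalWrite(LED_BUILTIN, LOW);

  ADCSRA = 0;                        // turn off the ADC to save power
  set_sleep_mode(SLEEP_MODE_PWR_DOWN);
  noInterrupts();                    // avoid a race between attach and sleep
  sleep_enable();
  attachInterrupt(digitalPinToInterrupt(2), wakeISR, LOW);
  interrupts();                      // the instruction after sei() still runs first
  sleep_cpu();                       // the MCU sleeps here until pin 2 goes LOW
  sleep_disable();                   // execution resumes here after waking
}
```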
{ "source": [ "https://arduino.stackexchange.com/questions/6", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/11/" ] }
17
I made an awesome program the other day, and I wanted to upload it to my Arduino. After clicking the upload button, some mean dude named avr came along and stopped me, saying: avrdude: stk500_getsync(): not in sync: resp=0x00 All I want to do is just upload my program, but avr won't let me. He's even unintelligible, so can someone tell me what the heck he's trying to say and how to get rid of him? i.e.: Whenever I try to upload a program to my Arduino, I get this error message: avrdude: stk500_getsync(): not in sync: resp=0x00 What does this mean, and how can I fix it?
This is caused by a generic connection error between your computer and the Arduino, and can result from many different specific problems. Here are some easy things that can often fix this error: Disconnect and reconnect the USB cable. Press the reset button on the board. Restart the Arduino IDE. Make sure you select the right board in Tools ► Board ► , e.g. If you are using the Duemilanove 328, select that instead of Duemilanove 128. The board should say what version it is on the microchip. Make sure you selected the right port in Tools ► Serial Port ► . One way to figure out which port it is on is by following these steps: Disconnect the USB cable. Go to Tools ► Serial Port ► and see which ports are listed (e.g. COM4 COM5 COM14). Reconnect the USB cable. Go back to Tools ► Serial Port ► , and see which port appeared that wasn't there before. Make sure digital pins 0 and 1 do not have any parts connected, including any shields. If none of those work, you will want to try to isolate the issue by replacing things: try a different computer on the same arduino, try a different arduino on the same computer, and try using a different USB cable. If the issue is with the computer: Double-check all computer-related issues in the "easy fixes" list above. Reinstall the IDE. Reinstall the drivers. If the issue is with the Arduino: Double-check all board-related issues in the "easy fixes" list above. Make sure the microcontroller is seated correctly. You may need to burn the bootloader . Replace the microcontroller if you have another one handy nearby. You may have bricked your Arduino. Sorry :(
{ "source": [ "https://arduino.stackexchange.com/questions/17", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/37/" ] }
40
The basic Arduino IDE lacks a lot of the sophistication present in other IDEs such as code completion, code collapsing, folder organisation, etc. Are there other IDEs that allow programming in C or C++ and improve on these aspects?
There is an Arduino Eclipse plugin named Sloeber! And Eclipse is an awesome cross-platform open-source IDE! Stino is good. It requires Sublime Text 2, which has an indefinite free trial. Visual Micro provides a full build system with a debugger for Arduino in Microsoft Visual Studio. For advanced users it also allows the underlying Arduino source code to be viewed or modified, and enables projects and/or libraries to be edited from any location and shared in multiple projects, alongside true cross-platform IntelliSense. For more, go to the official Arduino site. For development on Windows, there is a special edition of the official Arduino IDE called arduino-erw. This edition is much better than the last one because it fixed a lot of lagging and stability issues!
{ "source": [ "https://arduino.stackexchange.com/questions/40", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/76/" ] }
61
I would like to start the development of some basic Arduino projects but I don't own an Arduino board yet. Is there a way I can write my code and emulate/test it using a desktop computer so after my board arrives I just have to upload and run my project on it?
There are a whole slew of Arduino simulators out there, many free, and some paid products as well. The CodeBlocks Arduino development environment includes a free Arduino simulator, still under development but functional. Simuino simulates the Arduino Uno and Mega pins - not a pretty-looking realistic simulator, but it works. The Python based Arduino Simulator is another option, that plays well with the official IDE Virtronics Simulator for Arduino looks promising, but I don't see why I would pay $14.99 for it, when I could buy one or more actual Arduino clones for that price Many other Arduino simulators are out there if you search, and new ones are being announced, even crowdfunded, all the time.
{ "source": [ "https://arduino.stackexchange.com/questions/61", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/87/" ] }
85
I wanted to make a fairly simple circuit which would flash a series of LEDs in sequence, using my Arduino Uno (more specifically, a SainSmart clone). I wrote my sketch and it compiled fine. After that, I connected 8 LEDS+resistors to pins 0 through 7, and then connected the Uno to my computer via USB. I've uploaded sketches successfully in the past, so I'm sure my settings and drivers etc. are correct. However, when I tried to upload my sketch this time, it didn't work. I tried removing everything I'd connected to the Arduino's pins, and suddenly the upload worked again. Why does this happen? Does it mean I have to disconnect everything from the board every time I upload a sketch?
The problem is specifically pins 0 and 1. Although they can be used as regular digital IO pins, they also serve as the RX and TX pins for the Uno's serial port. The USB connection (for uploading sketches etc.) is routed to the same pins internally. Unfortunately that means anything connected on pins 0 and 1 can interfere with the serial connection, preventing communication via USB. In short, it's not necessary to disconnect everything when uploading a sketch. It should only be necessary to disconnect anything from pins 0 and 1. Rather than going through that hassle every time a sketch is uploaded though, it may be best just to avoid using those pins unless necessary (e.g. you run out of other pins, or your project needs a serial connection to another device).
{ "source": [ "https://arduino.stackexchange.com/questions/85", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/42/" ] }
105
I am not very skilled with the C language and I was wondering if there is a way in which Python could be used to program an Arduino. This would most likely require a different IDE in order to be able to debug the scripts themselves.
It's going to be extremely difficult to get any kind of Python script running directly on the Arduino. The reason is that it's an interpreted language, so you would need the interpreter on-board in addition to the plain text script. There's probably not going to be enough memory for all of that. Your best bet would probably be finding a way to compile a Python script to native machine code (which is how C/C++ works). I believe there are projects around to do something like that for other platforms, but (as far as I know) none which does it successfully for Arduino yet. You might find some more useful information on this question at Stack Overflow: Is there a way to "compile" Python code onto an Arduino (Uno) .
{ "source": [ "https://arduino.stackexchange.com/questions/105", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/24/" ] }
117
Is it possible to have more than 14 output pins on the Arduino? I am working on a project in which I need to light up several LEDs individually. I only have an Arduino Uno, and I don't want to get a Mega.
A common way to expand the set of available output pins on the Arduino is to use shift registers like the 74HC595 IC (link to datasheet). You need 3 pins to control these chips: Clock Latch Data In a program, you pass the data one bit at a time to the shift register using the shiftOut() command, like so: shiftOut(dataPin, clockPin, MSBFIRST, data); With that command, you set each of the 8 outputs on the 595 IC with the 8 bits in the data variable. With one 595, you gain 5 pins (8 on the IC, but you spend 3 to talk to it). To get more outputs, you can daisy-chain a series of 595s together, by connecting each one's serial-out pin to the data pin of the next one. You also must connect together the clock and latch pins of all of the 595 ICs. The resulting circuit (using one 595) would look like this: The figure above was taken from this codeproject.com webpage: Arduino Platform - Working with Shift Registers The latch pin is used to keep the 595 outputs steady while you are shifting out data into it, like so: digitalWrite(latchPin, LOW); shiftOut(dataPin, clockPin, MSBFIRST, data); digitalWrite(latchPin, HIGH);
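Putting the pieces above together, here is a complete example sketch (the pin numbers are an arbitrary choice for illustration; adjust them to your wiring) that walks a single lit LED across the 8 outputs of one 74HC595:

```cpp
// Hypothetical pin assignment for one 74HC595: adjust to your wiring.
const int latchPin = 8;   // ST_CP
const int clockPin = 12;  // SH_CP
const int dataPin  = 11;  // DS

void setup() {
  pinMode(latchPin, OUTPUT);
  pinMode(clockPin, OUTPUT);
  pinMode(dataPin, OUTPUT);
}

void loop() {
  // Walk a single lit LED across the 8 outputs of the shift register.
  for (int i = 0; i < 8; i++) {
    byte data = 1 << i;              // one bit set at a time
    digitalWrite(latchPin, LOW);     // hold the outputs steady while shifting
    shiftOut(dataPin, clockPin, MSBFIRST, data);
    digitalWrite(latchPin, HIGH);    // update all 8 outputs at once
    delay(200);
  }
}
```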
{ "source": [ "https://arduino.stackexchange.com/questions/117", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/24/" ] }
132
There are some pins on the Arduino which I haven't been able to find out anything about: IOREF AREF An unlabeled one next to IOREF What are they?
AREF: This is the voltage reference for the analog-to-digital converter (ADC). It can be used instead of the standard 5V reference for the top end of the analog spectrum – for example, if you wanted to use the ADC to monitor a signal that had a 0-1.5 volt range, you could get the full scale of the ADC by connecting AREF to a 1.5V signal. DO NOT CONNECT A SIGNAL OUTSIDE THE 0V TO 5V RANGE! Note that in order for this to work, you must run analogReference(EXTERNAL); before using analogRead(). Also: After changing the analog reference, the first few readings from analogRead() may not be accurate. For more information, see AnalogReference. IOREF: This is a voltage corresponding to the I/O of that board; for example, an Uno would supply 5V to this pin, but a Due would supply 3.3V. Sending a signal to this pin does nothing. Unlabeled pin: This pin is unused, but is there to ensure compatibility with future products. It is not connected to anything on any R3 boards I have seen.
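As a small illustration of the AREF usage described above (the 1.5V reference and pin A0 are just the example values, not a requirement), a sketch might look like this; note the couple of throw-away reads to allow for the possibly inaccurate readings right after changing the reference:

```cpp
// Hedged example: measuring a 0-1.5 V signal on A0 against an external
// 1.5 V reference wired to the AREF pin. Do NOT apply anything outside 0-5 V.
void setup() {
  Serial.begin(9600);
  analogReference(EXTERNAL);   // must be set before the first analogRead()
  analogRead(A0);              // throw away a couple of readings while the
  analogRead(A0);              // new reference settles
}

void loop() {
  int raw = analogRead(A0);               // 0..1023 over the 0..1.5 V span
  float volts = raw * (1.5 / 1023.0);     // convert the reading back to volts
  Serial.println(volts);
  delay(500);
}
```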
{ "source": [ "https://arduino.stackexchange.com/questions/132", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/144/" ] }
176
Say I have some variables that I want to print out to the terminal, what's the easiest way to print them in a string? Currently I do something like this: Serial.print("Var 1:");Serial.println(var1); Serial.print(" Var 2:");Serial.println(var2); Serial.print(" Var 3:");Serial.println(var3); Is there a better way to do this?
ardprintf is a function that I hacked together which simulates printf over the serial connection. This function (given at the bottom) can be pasted in the beginning of the files where the function is needed. It should not create any conflicts. It can be called similar to printf . See it in action in this example: void setup() { Serial.begin(9600); } void loop() { int l=2; char *j = "test"; long k = 123456789; char s = 'g'; float f = 2.3; ardprintf("test %d %l %c %s %f", l, k, s, j, f); delay(5000); } The output as expected is: test 2 123456789 g test 2.30 The function prototype is: int ardprintf(char *, ...); It returns the number of arguments detected in the function call. This is the function definition: #ifndef ARDPRINTF #define ARDPRINTF #define ARDBUFFER 16 #include <stdarg.h> #include <Arduino.h> int ardprintf(char *str, ...) { int i, count=0, j=0, flag=0; char temp[ARDBUFFER+1]; for(i=0; str[i]!='\0';i++) if(str[i]=='%') count++; va_list argv; va_start(argv, count); for(i=0,j=0; str[i]!='\0';i++) { if(str[i]=='%') { temp[j] = '\0'; Serial.print(temp); j=0; temp[0] = '\0'; switch(str[++i]) { case 'd': Serial.print(va_arg(argv, int)); break; case 'l': Serial.print(va_arg(argv, long)); break; case 'f': Serial.print(va_arg(argv, double)); break; case 'c': Serial.print((char)va_arg(argv, int)); break; case 's': Serial.print(va_arg(argv, char *)); break; default: ; }; } else { temp[j] = str[i]; j = (j+1)%ARDBUFFER; if(j==0) { temp[ARDBUFFER] = '\0'; Serial.print(temp); temp[0]='\0'; } } }; Serial.println(); return count + 1; } #undef ARDBUFFER #endif **To print the % character, use %% .* Now, available on Github gists .
{ "source": [ "https://arduino.stackexchange.com/questions/176", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/11/" ] }
179
I made a sketch, but then I lost it. However, I uploaded it to the Arduino before losing it. Is there any way I can get it back?
It should be possible as long as the security bit isn't set. This question was asked on EE a while back. Is it possible to extract code from an arduino board? But you won't get the Arduino code you wrote back. The code is compiled into assembly and you'll have to convert that back to C yourself.
{ "source": [ "https://arduino.stackexchange.com/questions/179", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/37/" ] }
210
According to the Arduino reference for analogWrite() , the PWM frequency on most pins is ~490 Hz. However, it's ~980 Hz for pins 5 and 6 on the Uno, and for pins 3 and 11 on the Leonardo. Why are these different? Is it a deliberate design feature, or is it somehow dictated by the hardware?
Those aren't the only frequencies available for the PWM signals. However, they are the frequencies as determined by the applied prescaler (which you can readily change as detailed below). Each of the 3 pairs of PWM pins is tied to one timer, each of which has its own base frequency, as follows: Pins 5 and 6 are paired on timer0, with base frequency of 62500Hz Pins 9 and 10 are paired on timer1, with base frequency of 31250Hz Pins 3 and 11 are paired on timer2, with base frequency of 31250Hz Then each set of pins has a number of prescaler values that can be chosen, that will divide the base frequency of that pair of pins. The prescaler values available are: Pins 5 and 6 have prescaler values of 1, 8, 64, 256, and 1024 Pins 9 and 10 have prescaler values of 1, 8, 64, 256, and 1024 Pins 3 and 11 have prescaler values of 1, 8, 32, 64, 128, 256, and 1024 The different combinations yield different frequencies on a given PWM pin. Notice that timer 2 (tied to pins 3 and 11) has more prescaler values available, resulting in more frequencies available. Now, why timer 2 is different, that's a separate question. Edit: Here's a list of possible PWM frequencies per pin (from this article): For pins 6 and 5 (OC0A and OC0B): If TCCR0B = xxxxx001, frequency is 64kHz If TCCR0B = xxxxx010, frequency is 8 kHz If TCCR0B = xxxxx011, frequency is 1kHz (this is the default from the Diecimila bootloader) If TCCR0B = xxxxx100, frequency is 250Hz If TCCR0B = xxxxx101, frequency is 62.5 Hz For pins 9, 10, 11 and 3 (OC1A, OC1B, OC2A, OC2B): If TCCRnB = xxxxx001, frequency is 32kHz If TCCRnB = xxxxx010, frequency is 4 kHz If TCCRnB = xxxxx011, frequency is 500Hz (this is the default from the Diecimila bootloader) If TCCRnB = xxxxx100, frequency is 125Hz If TCCRnB = xxxxx101, frequency is 31.25 Hz TCCRnB is where you set the prescaler bits for timer n, replacing n by 0, 1 or 2, depending on the timer you want to set. If you are still unsure about bitwise operations, read this bit math tutorial. My sources: http://playground.arduino.cc/Code/PwmFrequency http://arduino.cc/en/Tutorial/SecretsOfArduinoPWM http://arduino.cc/en/Tutorial/PWM http://arduino-info.wikispaces.com/Arduino-PWM-Frequency Note that there seems to be divergence in those sources about whether pins 9 and 10 have the same behavior as 5 and 6 or 3 and 11, but you get the idea anyway. I'm reading the datasheet to try and figure out which is correct, or whether this is a difference between boards.
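As a hedged example of how the prescaler bits are typically changed (this only touches the three clock-select bits of TCCR1B and leaves the rest alone), the sketch below sets timer1 to a prescaler of 1, so pins 9 and 10 run at roughly 31-32 kHz instead of the default ~490 Hz. Be aware that changing timer0 instead would also affect millis() and delay(), and changing timer1 affects libraries that use it (e.g. Servo).

```cpp
// Example (Uno): raise the PWM frequency on pins 9 and 10 by changing the
// timer1 prescaler. Only the three clock-select bits of TCCR1B are modified.
void setup() {
  pinMode(9, OUTPUT);
  TCCR1B = (TCCR1B & 0b11111000) | 0x01;  // prescaler = 1 -> ~31-32 kHz PWM
  analogWrite(9, 128);                    // 50% duty cycle at the new frequency
}

void loop() {
  // Nothing to do; the PWM hardware keeps running on its own.
}
```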
{ "source": [ "https://arduino.stackexchange.com/questions/210", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/42/" ] }
221
According to the Arduino documentation, the ATmega328 has 32KB of Flash memory for the bootloader + uploaded sketch, and only 2KB SRAM for runtime data. The ATmega2560 has quite a bit more, totalling 256KB and 8KB respectively. In either case, those limits seem rather small, especially when compared to similarly sized consumer devices, such as smartphones. What can you do if you run out? E.g. if your sketch is too big, or you need to process a lot of data (such as strings) at runtime? Is there any way to expand the Flash or SRAM?
Optimisation Low-level programming for embedded systems is quite different from programming for general purpose devices, such as computers and cell phones. Efficiency (in terms of speed and space) is far more important because resources are at a premium. That means the very first thing to do if you run out of space is to look at what parts of your code you can optimise. In terms of reducing program space (Flash) usage, the code size can be quite difficult to optimise if you're inexperienced, or if you're more used to programming for desktop computers which don't tend to need that skill. Unfortunately, there's no 'magic bullet' approach which will work for all situations, although it helps if you consider seriously what your sketch really needs to have. If a feature isn't needed, take it out. Sometimes it's also helpful to identify where multiple parts of your code are the same (or very similar). You may be able to condense them into reusable functions which can be called from multiple places. However, be aware that sometimes trying to make code too reusable actually ends up making it more verbose. It's a tricky balance to strike that tends to come with practice. Spending some time looking at how code changes affect the compiler output can help. Runtime data (SRAM) optimisation tends to be a bit easier when you're used to it. A very common pitfall for beginner programmers is using too much global data. Anything declared at global scope will exist for the entire lifetime of the sketch, and that isn't always necessary. If a variable is only used inside one function, and it doesn't need to persist between calls, then make it a local variable. If a value needs to be shared between functions, consider if you can pass it as a parameter instead of making it global. That way you'll only use SRAM for those variables when you actually need it. Another killer for SRAM usage is text processing (e.g. using the String class). Generally speaking, you should avoid doing String operations if possible. They are massive memory hogs. For example, if you're outputting lots of text to serial, use multiple calls to Serial.print() instead of using string concatenation. Also try to reduce the number of string literals in your code if possible. Avoid recursion if possible as well. Each time a recursive call is made, it takes the stack a level deeper. Refactor your recursive functions to be iterative instead. Use EEPROM EEPROM is used for long-term storage of things that only change occasionally. If you need to use large lists or look-up tables of fixed data, then consider storing it in EEPROM in advance, and only pulling out what you need when necessary. Obviously EEPROM is quite limited in size and speed though, and has a limited number of write cycles. It's not a great solution to data limitations, but it might be enough to ease the burden on Flash or SRAM. It's also quite possible to interface with similar external storage, such as an SD card. Expansion If you've exhausted all other options, then expansion may be a possibility. Unfortunately, expanding Flash memory to increase program space isn't possible. However, it is possible to expand SRAM. This means you may be able to refactor your sketch to reduce code size at the expense of increasing data size. Getting more SRAM is actually fairly straightforward. One option is to use one or more 23K256 chips. They are accessed via SPI, and there is the SpiRAM library to help you use them. Just beware that they operate at 3.3V not 5V! 
If you're using the Mega, you could alternatively get SRAM expansion shields from Lagrangian Point or Rugged Circuits .
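As a small sketch of two of the points above (keep temporary data local, and prefer several Serial.print() calls over String concatenation) - the sensor-averaging function here is just an illustrative placeholder, not part of any particular project:

```cpp
// Keep temporaries local so their SRAM is only used while the function runs.
int readSensorAverage() {
  long sum = 0;                      // local: freed when the function returns
  for (int i = 0; i < 16; i++) {
    sum += analogRead(A0);
  }
  return sum / 16;
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  int value = readSensorAverage();

  // Avoid: Serial.println(String("value = ") + value);  // heap churn in SRAM
  Serial.print("value = ");          // print the literal and the value separately
  Serial.println(value);

  delay(1000);
}
```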
{ "source": [ "https://arduino.stackexchange.com/questions/221", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/42/" ] }
226
ATMEL says the cell lifetime of an EEPROM cell is about 100,000 write cycle/ cell. Is this actually how the EEPROM performs in the wild? If I do not change the value of a cell, does this stress the lifetime? For example, if I write the value 0xFF to the same cell again and again, is this any different to writing 0x00 , 0xFF , 0x00 etc.
As you state, the internal EEPROM has a lifetime of 100,000 write cycles. This isn't a guess - a very significant proportion of ATmega328s will reach this number with no issues. I have tested three processors before, and all reached 150,000 cycles with no issues. It is important to note the failure mode of EEPROM. Most "EEPROM destroyer" projects repeatedly read/write until the data is not written at all. Before this point is reached, the EEPROM will already be damaged. This would be manifested by data not being retained for a reasonable period. It is unwise to rely on anything more than 100,000 write cycles for this reason. EEPROM is different to the RAM on an ATmega. Writing to it is not simple or quick, but it is wrapped up in a friendly Arduino library, hiding this complexity from the user. The first level of indirection is the EEPROM library, which is trivially simple, just calling two other functions for read and write. This calls eeprom_write_byte, found here. This function uses inline assembly, so might not be easily understood. There is a comment that is easily understood though: Set programming mode: erase and write This hints at one of the complexities of dealing with EEPROM - to write to it, you first need to erase it. This means that if you call EEPROM.write(), it will perform a write cycle regardless of the value you are writing. This means that repeatedly writing 0xFF will likely have the same effect as writing 0xFF,0x00,0xFF,0x00 etc. There are ways to work around this - you can try calling EEPROM.read() before EEPROM.write() to see if the value is already the same, but this takes additional time. There are other techniques to avoid excessive EEPROM wear, but their use depends on your application.
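A tiny sketch of the read-before-write idea described above (newer versions of the Arduino EEPROM library also ship an EEPROM.update() that does the same thing):

```cpp
#include <EEPROM.h>

// Only erase/write the cell when the value actually changes, so writing the
// same value again costs no write cycles (just one read).
void writeIfChanged(int address, byte value) {
  if (EEPROM.read(address) != value) {
    EEPROM.write(address, value);
  }
}

void setup() {
  writeIfChanged(0, 0xFF);   // first call may perform a write...
  writeIfChanged(0, 0xFF);   // ...second call reads, sees a match, and skips it
}

void loop() {
}
```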
{ "source": [ "https://arduino.stackexchange.com/questions/226", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/190/" ] }
286
Is there a way I can have multiple parts of the program running together without doing multiple things in the same code block? One thread waiting for an external device while also blinking a LED in another thread.
There is no multi-process, nor multi-threading, support on the Arduino. You can do something close to multiple threads with some software though. You want to look at Protothreads : Protothreads are extremely lightweight stackless threads designed for severely memory constrained systems, such as small embedded systems or wireless sensor network nodes. Protothreads provide linear code execution for event-driven systems implemented in C. Protothreads can be used with or without an underlying operating system to provide blocking event-handlers. Protothreads provide sequential flow of control without complex state machines or full multi-threading. Of course, there is an Arduino example here with example code . This SO question might be useful, too. ArduinoThread is a good one too.
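For a flavour of what this looks like in practice, here is a hedged sketch built on the standard Protothreads macros (PT_BEGIN / PT_WAIT_UNTIL / PT_END); the header name, pin and timings are assumptions that may need adjusting for the particular port of the library you install:

#include <pt.h>   // Protothreads; header name may vary between distributions

static struct pt ptBlink, ptPoll;

// Thread 1: toggle the LED on pin 13 every 500 ms without blocking.
static PT_THREAD(blinkThread(struct pt *pt)) {
  static unsigned long last = 0;   // must be static: locals don't survive a yield
  PT_BEGIN(pt);
  while (1) {
    PT_WAIT_UNTIL(pt, millis() - last >= 500);
    last = millis();
    digitalWrite(13, !digitalRead(13));
  }
  PT_END(pt);
}

// Thread 2: wait for the "external device" (here just a byte on Serial) and react.
static PT_THREAD(pollThread(struct pt *pt)) {
  PT_BEGIN(pt);
  while (1) {
    PT_WAIT_UNTIL(pt, Serial.available() > 0);
    Serial.write(Serial.read());   // echo, standing in for real handling
  }
  PT_END(pt);
}

void setup() {
  pinMode(13, OUTPUT);
  Serial.begin(9600);
  PT_INIT(&ptBlink);
  PT_INIT(&ptPoll);
}

void loop() {
  blinkThread(&ptBlink);   // each pass gives every protothread a chance to run
  pollThread(&ptPoll);
}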
{ "source": [ "https://arduino.stackexchange.com/questions/286", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/193/" ] }
296
The standard is 9600 baud. That's just the standard . Using a Arduino Uno SMD R2, what is the highest practical baud rate I can achieve? Bonus points for the audacious: How would you go about creating an error checking mechanism and then increasing the baud rate ridiculous high to get high transfer rates?
There are several factors here:

How high of a baud-rate can the ATmega328P MCU achieve?
How high of a baud-rate can the USB-serial interface achieve?
What is the oscillator frequency on the ATmega328P?
What is the oscillator frequency on the USB-serial interface (if it has one)?
How tolerant is the USB-serial interface of baud-rate mismatch?

All of these factors are relevant to determining the maximum achievable baud rate. The ATmega328P uses a hardware divisor from its clock-rate to generate the base-clock for the serial interface. If there is no integer ratio from the main clock to the bit-time of the desired baud rate, the MCU will not be able to exactly produce the desired rate. This can lead to potential issues, as some devices are much more sensitive to baud-rate mismatch than others. FTDI-based interfaces are quite tolerant of baud-rate mismatch, up to several percent error. However, I have worked with specialized embedded GPS modules that were unable to handle even a 0.5% baud rate error. General serial interfaces are tolerant of ~5% baud-rate error. However, since each end can be off, a more common spec is +-2.5%. This way, if one end is 2.5% fast, and the other is 2.5% slow, your overall error is still only 5%.

Anyways. The Uno uses an ATmega328P as the primary MCU, and an ATmega16U2 as the USB-serial interface. We're also fortunate here in that both these MCUs use similar hardware USARTs, as well as 16 MHz clocks. Since both MCUs have the same hardware and clock-rate, they'll both have the same baud-rate error in the same direction, so we can functionally ignore the baud error issue.

Anyways, the "proper" answer to this question would involve digging up the source for the ATmega16U2, and working out the possible baud-rates from there, but since I'm lazy, I figure simple, empirical testing will work. A quick glance at the ATmega328P datasheet produces the following table (not reproduced here; the maximum listed rate is 2 Mbps). So given the max stated baud-rate of 2 Mbps, I wrote a quick test program:

void setup(){};

void loop() {
  delay(1000);
  Serial.begin(57600);
  Serial.println("\r\rBaud-rate = 57600");
  delay(1000);
  Serial.begin(76800);
  Serial.println("\r\rBaud-rate = 76800");
  delay(1000);
  Serial.begin(115200);
  Serial.println("\r\rBaud-rate = 115200");
  delay(1000);
  Serial.begin(230400);
  Serial.println("\r\rBaud-rate = 230400");
  delay(1000);
  Serial.begin(250000);
  Serial.println("\r\rBaud-rate = 250000");
  delay(1000);
  Serial.begin(500000);
  Serial.println("\r\rBaud-rate = 500000");
  delay(1000);
  Serial.begin(1000000);
  Serial.println("\r\rBaud-rate = 1000000");
  delay(1000);
  Serial.begin(2000000);
  Serial.println("\r\rBaud-rate = 2000000");
};

And then looking at the relevant serial port with a serial terminal: So it appears the hardware can run at 2,000,000 baud without problems. Note that this baud rate only gives the MCU 80 clock-cycles per byte, so it would be very challenging to keep the serial interface busy. While the individual bytes may be transferred very rapidly, there is likely to be lots of time when the interface is simply idle.

Edit: Actual Testing! The 2 Mbps is real: each bit-time is 500 ns, which matches exactly with what is expected. Performance issues! (The scope captures of the overall packet length at 500 Kbaud, 1 Mbaud and 2 Mbaud are not reproduced here.) Note: The noticeable overshoot is due to poor scope probe grounding practices, and is probably not real. I'm using the ground-clip-lead that's part of my scope probe, and the lead-inductance is likely the cause of the majority of the overshoot.
As you can see, the overall transmission length is the same for 0.5, 1 and 2 Mbaud. This is because the code that is placing the bytes in the serial buffer is poorly optimized. As such, you will never achieve anything better than an effective 500 Kbaud, unless you write your own serial libraries. The Arduino libraries are very poorly optimized, so it probably wouldn't be too hard to get a proper 2 Mbaud, at least for burst transmissions, if you spent a bit of time on it.
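To make the integer-divisor point above concrete, here is a small hedged sketch that computes the baud-rate error of a 16 MHz AVR USART, assuming the double-speed (U2X) mode that the Arduino core typically selects; the formula is the UBRR relation from the datasheet:

void setup() {
  Serial.begin(9600);
  const float fcpu = 16000000.0;
  const long rates[] = {9600, 57600, 115200, 250000, 500000, 1000000, 2000000};
  for (int i = 0; i < 7; i++) {
    // U2X mode: UBRR = round(F_CPU / (8 * baud)) - 1, actual = F_CPU / (8 * (UBRR + 1))
    long ubrr = (long)(fcpu / (8.0 * rates[i]) + 0.5) - 1;
    float actual = fcpu / (8.0 * (ubrr + 1));
    float errorPct = 100.0 * (actual - rates[i]) / rates[i];
    Serial.print(rates[i]); Serial.print(" baud -> UBRR=");
    Serial.print(ubrr);     Serial.print(", actual=");
    Serial.print(actual);   Serial.print(", error=");
    Serial.print(errorPct); Serial.println("%");
  }
}

void loop() {}

The rates that divide 16 MHz exactly (250 k, 500 k, 1 M and 2 M) come out with 0% error, while 115200 lands around +2.1%, which is part of why the divisor-friendly rates are the safest choices at high speed.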
{ "source": [ "https://arduino.stackexchange.com/questions/296", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/220/" ] }
316
Currently, my sketch is checking an input pin every time round the main loop. If it detects a change, it calls a custom function to respond to it. Here's the code (trimmed down to the essentials): int pinValue = LOW; void pinChanged() { //... } void setup() { pinMode(2, INPUT); } void loop() { // Read current input int newValue = digitalRead(2); // Has the input changed? if (newValue != pinValue) { pinValue = newValue; pinChanged(); } } Unfortunately, this doesn't always work properly for very short changes on the input (e.g. brief pulses), especially if loop() is running a bit slowly. Is there a way to make the Arduino detect the input change and call my function automatically?
You can do this using external interrupts. Most Arduinos only support this on a limited number of pins though. For full details, see the documentation on attachInterrupt() . Assuming you're using an Uno, you could do it like this: void pinChanged() { //... } void setup() { pinMode(2, INPUT); attachInterrupt(0, pinChanged, CHANGE); } void loop() { } This will call pinChanged() whenever a change is detected on external interrupt 0. On the Uno, that corresponds to GPIO pin 2. The external interrupt numbering is different on other boards, so it's important to check the relevant documentation. There are limitations to this approach though. The custom pinChanged() function is being used as an Interrupt Service Routine (ISR). That means the rest of the code (everything in loop() ) is temporarily stopped while the call is executing. In order to prevent disrupting any important timing, you should aim to make ISRs as fast as possible. It's also important to note that no other interrupts will run during your ISR. That means anything relying on interrupts (such as the core delay() and millis() functions) may not work properly inside it. Lastly, if your ISR needs to change any global variables in the sketch, they should usually be declared as volatile , e.g.: volatile int someNumber; That's important because it tells the compiler that the value could change unexpectedly, so it should be careful not to use any out-of-date copies/caches of it.
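If the work triggered by the pin change is too long or too timing-sensitive to live inside the ISR, a common pattern (sketched below with the same pin numbering as above) is to have the ISR only set a volatile flag and let loop() do the real work:

volatile bool pinChangeSeen = false;   // shared between the ISR and loop()

void pinChangedISR() {
  pinChangeSeen = true;                // keep the ISR as short as possible
}

void setup() {
  pinMode(2, INPUT);
  attachInterrupt(0, pinChangedISR, CHANGE);   // external interrupt 0 = pin 2 on the Uno
}

void loop() {
  if (pinChangeSeen) {
    pinChangeSeen = false;
    // Do the slower work here, where delay(), Serial, millis() etc. all behave normally.
  }
}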
{ "source": [ "https://arduino.stackexchange.com/questions/316", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/42/" ] }
432
I'm working on building a solar powered, Arduino based weather station. The weather station consists of a temperature sensor and a photoresistor, and I plan to add an anemometer in the future. I would like to connect the weather station to my wireless network so that I can retrieve the sensor data from my computer without having to run wires (I live in a rental). What are the different options for connecting the Arduino to WiFi? I've looked at ethernet shields, WiFi shields, and something called Xbee, but I don't understand what each of them are for. I also have a wireless home router that I could use. Is it possible to connect my Arduino Uno to the router via the routers ethernet or USB port and then receive data from and send commands to the Arduino wirelessly over my home network? If so, how would this be accomplished? I currently have a bare Arduino Uno.
You have a few options for connecting your Arduino to the network/Internet.

Ethernet

Something like the Arduino Ethernet Shield allows you to plug in an Ethernet cable from the wall or router into your Arduino. Obviously, the main limitation is that your device is now tethered by the cable. For outdoor use, I wouldn't do this.

WiFi

The Arduino WiFi Shield allows you to connect to your home WiFi network. This is just like the Ethernet except it's now wireless. The ESP8266 is a cheaper alternative that, with the default firmware, has the same functionality as the WiFi Shield. Be careful that you power it with 3.3V and not 5V as the rest of the Arduino. It also uses 3.3V logic levels so don't connect the Arduino's TX pin directly to the ESP's RX pin; use a voltage divider.

RF

If you have a lot of sensors or other devices that need to communicate with each other, the best option is usually an RF module. You have many options here, XBee being one of them. Check out the Sparkfun XBee Buying Guide to look at all the options available. And that's just XBee. There are many other wireless options available, at all sorts of prices. The thing with RF is that none of these will connect to the Internet. You will have all your devices communicate with each other or a base station, which will then be connected to the network by either a WiFi or Ethernet module.

Wireless Router Serial

Depending on what kind of wireless router you use, you can have the Arduino communicate directly with it and use that as your connection to a network. See, for example: Arduino - Cheap wifi connectivity and Converting your Ethernet Shield to a wireless shield.
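As a rough starting point for the ESP8266 route mentioned above (pin choices and baud rate are assumptions; check your module's defaults), a simple passthrough sketch lets you type AT commands to the module from the serial monitor:

#include <SoftwareSerial.h>

// RX, TX as seen from the Arduino: ESP TX -> pin 10, ESP RX <- pin 11
// through a voltage divider (the ESP8266 is 3.3 V only).
SoftwareSerial esp(10, 11);

void setup() {
  Serial.begin(9600);
  esp.begin(9600);   // many modules default to 115200; adjust to match yours
}

void loop() {
  // Relay bytes both ways so you can experiment with AT commands manually.
  if (esp.available())    Serial.write(esp.read());
  if (Serial.available()) esp.write(Serial.read());
}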
{ "source": [ "https://arduino.stackexchange.com/questions/432", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/225/" ] }
439
If I upload any sketch that sends serial data, I immediately see the TX/RX LEDs flash once the sketch is uploaded. If I then start the serial monitor, the sketch appears to restart. A bare minimum sketch that shows this behaviour: void setup() { Serial.begin(9600); Serial.println("Setup"); } void loop() { Serial.println("Loop"); delay(1000); } Tested with several boards and Mac and Windows versions of the IDE. Example output - it goes back to "Setup" when I open the serial monitor: Why is this?
The Arduino uses the RTS (Request To Send) (and I think DTR (Data Terminal Ready) ) signals to auto-reset. If you get a serial terminal that allows you to change the flow control settings you can change this functionality. The Arduino terminal doesn't give you a lot of options and that's the default. Others will allow you to configure a lot more. Setting the flow control to none will allow you to connect/disconnect from the serial without resetting your board. It's quite useful for debugging when you want to be able to just plug in the connector and see the output without having to start the sketch over. Another way to disable the auto reset is to put a pull-up resistor on the reset pin. See: Disabling Auto Reset On Serial Connection
{ "source": [ "https://arduino.stackexchange.com/questions/439", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/136/" ] }
452
I've traditionally used a text editor with avr-gcc and makefiles for working with Arduino boards. I'm now trying to develop projects for the wider Arduino user-base, so I am trying to use the Arduino libraries and common IDEs for ease of use. I started using Stino, but then found out that the Arduino IDE has some toolchain "quirks" that mean I will need to test everything in Arduino IDE expressly. Since starting to use it more, I have found it frequently hangs or crashes. The triggers for this are: Creating a new sketch takes about 15s, and shows me the OS X beach ball. Saving a sketch takes about 15s. Going to File->Examples frequently crashes the IDE - I need to force quit. Going to File->Sketchbook always crashes the IDE. It sometimes randomly hangs. I don't have any other stability issues and other Java based IDEs like Pycharm work flawlessly. I have installed numerous libraries, including the entire Teensyduino suite (which is a lot of libraries!). Outside of that, no changes have been made. Is this a common issue? Is there any logging or anyway I can try and work out what is causing this issue? This is happening with: - Arduino 1.05 - Arduino 1.55 I have tried uninstalling and reinstalling.
{ "source": [ "https://arduino.stackexchange.com/questions/452", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/136/" ] }
506
Arduino is an odd hybrid, where some C++ functionality is used in the embedded world—traditionally a C environment. Indeed, a lot of Arduino code is very C like though. C has traditionally used #define s for constants. There are a number of reasons for this: You can't set array sizes using const int . You can't use const int as case statement labels (though this does work in some compilers) You can't initialize a const with another const . You can check this question on StackOverflow for more reasoning. So, what should we use for Arduino? I tend towards #define , but I see some code using const and some using a blend.
It's important to note that const int does not behave identically in C and in C++, so in fact several of the objections against it that have been alluded to in the original question and in Peter Bloomfield's extensive answer are not valid: In C++, const int constants are compile time values and can be used to set array limits, as case labels, etc. const int constants do not necessarily occupy any storage. Unless you take their address or declare them extern, they will generally just have a compile time existence. However, for integer constants, it might often be preferable to use a (named or anonymous) enum . I often like this because: It's backward compatible with C. It's nearly as type safe as const int (every bit as type safe in C++11). It provides a natural way of grouping related constants. You can even use them for some amount of namespace control. So in an idiomatic C++ program, there is no reason whatsoever to use #define to define an integer constant. Even if you want to remain C compatible (because of technical requirements, because you're kickin' it old school, or because people you work with prefer it that way), you can still use enum and should do so, rather than use #define .
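A small illustration of the options discussed above (the names and values are arbitrary):

// Preprocessor macro: plain text substitution, no type, no scope.
#define LED_COUNT_MACRO 10

// Typed constant: in C++ this is a true compile-time constant,
// so it can size arrays and appear in case labels.
const int LED_COUNT = 10;
int brightness[LED_COUNT];      // fine, since a sketch is compiled as C++

// Named enum: groups related constants and stays C-compatible.
enum Pins {
  PIN_LED    = 13,
  PIN_BUTTON = 2,
  PIN_BUZZER = 8
};

void setup() {
  pinMode(PIN_LED, OUTPUT);
  pinMode(PIN_BUTTON, INPUT);
}

void loop() {}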
{ "source": [ "https://arduino.stackexchange.com/questions/506", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/136/" ] }
564
Anyone with kids knows they never help with the toilet paper. Anyone know how to track when it's low or out and sound an audible alarm? I just don't know what sensor to use that may help. Some that came to mind are: by weight, by reflection (the color of the paper) or some laser tripwire - all right on the spool. I don't mind building it, it's just I don't know which sensor. Anyone know which to use?
Bring up several rolls at a time and hang one for use. Put the other two on a short vertical pole within reach of the sitter. Sitter can take another roll when needed. Mechanically sense lack of weight on the shelf at the bottom of the pole. Alarm triggers when the last roll is removed. No one has to get caught short. To sense the weight use a force sensitive resistor such as the FSR 400 (see datasheet ). Alternatively, you could use a lightweight coil spring to rest the toilet paper on with a micro switch that is released when both rolls are removed. Another option would be an IR beam break detector where the circuit is completed when the last roll is removed.
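A minimal sketch of the FSR approach, assuming the FSR is wired with a fixed resistor as a voltage divider into A0 and a buzzer or LED on pin 8 (the threshold is a placeholder you would calibrate with rolls on the shelf):

const int FSR_PIN = A0;            // FSR + fixed resistor as a voltage divider
const int ALARM_PIN = 8;           // buzzer or LED
const int EMPTY_THRESHOLD = 100;   // calibrate: reading with no rolls on the shelf

void setup() {
  pinMode(ALARM_PIN, OUTPUT);
}

void loop() {
  int reading = analogRead(FSR_PIN);   // higher reading = more force, with typical wiring
  digitalWrite(ALARM_PIN, reading < EMPTY_THRESHOLD ? HIGH : LOW);
  delay(500);
}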
{ "source": [ "https://arduino.stackexchange.com/questions/564", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/381/" ] }
682
The use of malloc() and free() seems pretty rare in the Arduino world. It is used in pure AVR C much more often, but still with caution. Is it a really bad idea to use malloc() and free() with Arduino?
My general rule for embedded systems is to only malloc() large buffers and only once, at the start of the program, e.g., in setup() . The trouble comes when you allocate and de-allocate memory. Over a long run session, memory becomes fragmented and eventually an allocation fails due to lack of a sufficiently large free area, even though the total free memory is more than adequate for the request. (Historical perspective, skip if not interested: Depending on the loader implementation, the only advantage of run-time allocation vs. compile-time allocation (initialized globals) is the size of the hex file. When embedded systems were built with off the shelf computers having all volatile memory, the program was often uploaded to the embedded system from a network or an instrumentation computer and the upload time was sometimes an issue. Leaving out buffers full of zeros from the image could shorten the time considerably.) If I need dynamic memory allocation in an embedded system, I generally malloc() , or preferably, statically allocate, a large pool and divide it into fixed-size buffers (or one pool each of small and large buffers, respectively) and do my own allocation/de-allocation from that pool. Then every request for any amount of memory up to the fixed buffer size is honored with one of those buffers. The calling function doesn't need to know whether it's larger than requested, and by avoiding splitting and re-combining blocks we solve fragmentation. Of course memory leaks can still occur if the program has allocate/de-allocate bugs. Update 02/16/23: I'm curious why the Arduino library's malloc implementation doesn't implement some coalescing of the free blocks like a full OS would. That's interesting to think about, but let's first be clear that in C/C++, malloc() and free() are implemented as library functions at the application level, not the OS level, even in major OSes. And they are based on fragmenting the heap, just as the Arduinos' malloc() functions do. Embedded systems are relatively new, at least as an influential force in the design of OSes, languages, and libraries. Which is to say, programs that had to run "forever" without failing (other than, perhaps, the OS itself) were outliers. Most programs we run on our desktops or (once upon a time) ran on mainframes were started, processed a batch of data, and exited. Secondly, the malloc()/free() method of memory allocation/deallocation was simple, where anticipating the sizes of memory requests was not; and some allocations are used briefly and returned, where some are kept for the duration of the run. How is a library designer to provide for coalescing currently free and newly deallocated memory with no knowledge of the allocate/deallocate usage and sizes of requests? The fixed-size buffer scheme I described above doesn't suffer the fragmentation problem that the "malloc() an arbitrary number of bytes" scheme does. It is still limited to available memory but, within that limitation, could meet the "run forever" requirement. There may be algorithms that split and coalesce memory, but probably only for a few - statistically predictable - patterns of allocation and deallocation. Update 02/18/23: @edgarbonet points out that Arduino's free() does try to coalesce free blocks. This is a good example of an algorithm that works for certain cases: properly done, it would work for "last-out / first-in" cases, and to some extent for some other cases.
The trouble is, most of our use patterns are not very regular, if they're regular at all. An algorithm that does no splitting, but merely allocates & deallocates fixed-size pieces (rather like loaning library books), can still run out of memory but won't end up with fragmented resources. In fact, because it doesn't split, it will need more memory to meet the same random-sized requests than even a coalescing malloc() and free() .
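As a hedged sketch of the fixed-size buffer pool described above (sizes and counts are arbitrary; a real implementation would add error handling and perhaps interrupt safety):

// Simple fixed-size block pool: all storage is allocated statically,
// so the heap never fragments. Blocks are handed out and returned whole.
const uint8_t BLOCK_SIZE  = 32;   // every request gets exactly this much
const uint8_t BLOCK_COUNT = 8;

static uint8_t pool[BLOCK_COUNT][BLOCK_SIZE];
static bool    inUse[BLOCK_COUNT];   // zero-initialised: all blocks start free

void* poolAlloc() {
  for (uint8_t i = 0; i < BLOCK_COUNT; i++) {
    if (!inUse[i]) {
      inUse[i] = true;
      return pool[i];
    }
  }
  return nullptr;   // pool exhausted
}

void poolFree(void* p) {
  for (uint8_t i = 0; i < BLOCK_COUNT; i++) {
    if (p == pool[i]) {
      inUse[i] = false;
      return;
    }
  }
}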
{ "source": [ "https://arduino.stackexchange.com/questions/682", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/136/" ] }
804
I have a Arduino Nano (Sainsmart) that I'm trying to upload a sketch to. Under the Arduino IDE, the device selected was Arduino Nano w/ ATmega328 . However uploading the sketch gives me the error avrdude: stk500_recv(): programmer is not responding I tried both USB ports ( /dev/tty.usbserial & /dev/cu.usbserial ) but the same error persist. The Arduino is connected to a Macbook Air via the USB cable, and the PWR LED indicator light on the Arduino is turned on and the L indicator LED blinks. There was no problem uploading to a Arduino Uno. Retried after installing the latest FTDI drivers (MAC OSX, x64, v2.2.18, FTDIUSBSerialDriver_10_4_10_5_10_6_10_7.mpkg) from http://www.ftdichip.com/Drivers/VCP.htm . However that did not help. What could have gone wrong?
Know this is old but I ran into it during my search for Nano (V3)s not uploading, so thought it might help someone else. The problem is the bootloader / Arduino IDE, but I found an easy solution (right under my nose). I realized that my Nanos had been uploading just fine until I finally updated the Arduino AVR Boards from 1.6.20 to 1.6.21. I didn't think there were any problems because it still showed my Nano and ATmega328 etc. in the board manager after the change. But the new boards manager has a new ATmega328 processor choice for the Nano. I changed processor: In the Arduino IDE select TOOLS > PROCESSOR > pulldown menu from ATmega328P to "ATmega328P (Old Bootloader)" . Since then, I have uploaded many programs to several different Nano V3s (Prolific interface chipset) without issue.
{ "source": [ "https://arduino.stackexchange.com/questions/804", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/639/" ] }
816
What are the advantages of each language when using the Arduino? I'm thinking this is a good general question, but I'll add a bit about why I'm asking if anyone wants to give me a tip. I'm experienced in preprocessed languages like JavaScript, PHP, and have fiddled with languages like Java and Visual Basic. In other words I know programming techniques and both classical and prototypal object orientation, but nothing about communicating directly with hardware. I'm making an octocopter, and am thinking that an object oriented approach will be the easiest. (The software will have very many features...) However I have never written in C++. Since this is a Q&A site that's supposed to help others, only the general question presented at the beginning is of much importance, but I'd appreciate any comments on my situation.
My personal experience as a professor (programming, mechatronics) is that if you have previous programming experience and you are aware of concepts such as OOP, it is better to go for C/C++. The Arduino language is really great for beginners, but has some limitations (e.g. you must have all your files in the same folder). And it is basically a simplification of C/C++ (you can practically copy&paste Arduino code to a C/C++ file, and it will work). Also it makes sense that you can go and use a full, well-known IDE such as Eclipse: http://playground.arduino.cc/Code/Eclipse Initially a bit more setup and configuration of your dev environment is required, but IMHO it is worth it for programmers with experience in any other language. In any case, it won't harm you to start using the Arduino language and the Arduino IDE for a few days to get familiar with the Arduino hardware and then move to C/C++ with Eclipse for really developing your project.
{ "source": [ "https://arduino.stackexchange.com/questions/816", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/693/" ] }
821
An XBee Series 2 is set to Router AT whose TX and RX pins are connected to an Arduino Nano's Rx and Tx pins respectively. The Arduino is connected to a Mac OSX via USB. A second XBee Series 2 is connected to a Windows system via USB. It is set to Coordinator API mode. Using the sketch below on the Arduino, a packet is sent from the Router AT XBee to the Coordinator API XBee, which is seen by XCTU as a Explicit RX frame. However the Arduino LED should blink once if it received a reply packet (should it?) On another test, I wrote a script to send a frame for the Coordinator API XBee to send to the Router AT XBee. Once again the Arduino LED does not blink, and nothing is seen using Arduino's Serial Monitor. Testing the Coordinator API XBee Using the same script to send a packet from Coordinator API XBee to itself, the packet was received as well as a delivery confirmation packet. This shows that both the Coordinator API XBee and the script are working. // Delivery confirmation received: { type: 144, remote64: '0013a20040a74613', remote16: '0000', receiveOptions: 1, data: [ 116 ] } // Received the packet sent to itself received: { type: 139, id: 1, remote16: '0000', transmitRetryCount: 0, deliveryStatus: 0, discoveryStatus: 0 } Testing the Arduino Sketch Code Using the same Arduino sketch which continuously sends API frames to the Coordinator , I connected the Router AT XBee's RX pin to Arduino's RX pin, so the frames the Arduino are sending out are going back into its RX pin. This causes the Arduino's LED to light up! So there is nothing wrong with the code. Problem: Does this mean the Router AT XBee is not configured properly? I do not think its TX pin is damaged because XCTU can still read the settings off this XBee. Any ideas on how we can troubleshoot this? Arduino Sketch (Connected to Router Xbee) #include <XBee.h> XBee xbee = XBee(); uint8_t payload[] = { 0, 0 }; // SH + SL Address of receiving XBee XBeeAddress64 addr64 = XBeeAddress64(0x0013a200, 0x40a74613); ZBTxRequest zbTx = ZBTxRequest(addr64, payload, sizeof(payload)); ZBTxStatusResponse txStatus = ZBTxStatusResponse(); void setup() { pinMode(13, OUTPUT); Serial.begin(9600); xbee.setSerial(Serial); } void loop() { xbee.send(zbTx); delay(1000); xbee.readPacket(); if (xbee.getResponse().isAvailable()) { // Response received, blink LED once Serial.println('resposne!'); digitalWrite(13, HIGH); delay(1000); digitalWrite(13, LOW); delay(1000); } }
{ "source": [ "https://arduino.stackexchange.com/questions/821", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/639/" ] }
893
I am a new Arduino user. I am not sure whether I can connect both USB and external supply through power adaptor to Arduino simultaneously. Would this burn the controller? Basically, I want to automatically shut down the computer after saving all open documents before my UPS battery goes out whenever I am not there to manually shut it down. I am trying to notify the computer about the power failures of main supply - AC 220V 50 Hz (notification/signal sent when power goes out) via USB using Arduino. I am thinking that this can be done by using the external voltage power supply pin. Whenever power goes out in the mains, the external voltage power supply pin will drop to 0 V from 7 V. The value of the voltage can be read using software ( I am unclear about this: Can it be done? If yes, how?). The Arduino will still be powered up as it is connected to USB of computer and thereby communicates to the computer about the power failure.
I am a novice user of Arduino. I am not sure whether I can connect both USB and external supply through power adaptor to Arduino simultaneously. Would this burn the controller? Let's study the schematic of the Arduino UNO R3. The input from the power supply plug (PWRIN, the power jack) goes through a diode D1 (to prevent reverse polarity), and feeds an NCP1117 regulator that down-converts it to the 5V supply that feeds the 5V parts. The alternative supply comes from the USB plug (USBVCC). The relevant circuit is shown below. The USB power line goes through a P-mosfet (T1) that operates as a switch and then goes to the +5V node (that is the +5V regulator output as shown in the first schematic). The mosfet is controlled by an LMV358 operational amplifier (OPAMP) that operates as a comparator. The negative input of the opamp is tied to 3.3V and the positive one is fed through a voltage divider with half the Vin supply level. When Vin > 6.6V, the + input of the opamp becomes higher than the - input and the opamp turns the mosfet off. When Vin < 6.6V, the + input of the opamp becomes lower than the - input and the opamp turns the mosfet on. Note that Vin is after the input diode so it's about 0.6V lower than the externally connected power supply level. So when there is a power supply connected to the power input that is higher than 6.6V+0.6V (where 0.6V is the diode D1 voltage drop), the USB supply line is cut off (because the mosfet turns off) and the power is provided from the power plug. Connecting or disconnecting the USB supply in this case will not make a difference, so you can have both power supplies connected simultaneously; only when the power input drops below the specified level (about 6.6V+0.6V=7.2V) will the USB start powering the board.
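As for reading the external supply in software (the approach described in the question), one common way is a resistor divider from VIN into an analog pin; the sketch below is only an illustration with assumed divider values (e.g. 10k/10k halving a ~7-9 V VIN so it stays under 5 V at A0):

// VIN --- 10k ---+--- 10k --- GND
//                |
//               A0  (divider halves VIN so it stays below 5 V)
const float DIVIDER_RATIO = 2.0;    // depends on your resistor values
const float FAIL_THRESHOLD = 6.0;   // volts: below this, assume mains is gone

void setup() {
  Serial.begin(9600);
}

void loop() {
  float vin = analogRead(A0) * (5.0 / 1023.0) * DIVIDER_RATIO;
  if (vin < FAIL_THRESHOLD) {
    Serial.println("POWERFAIL");    // a PC-side script can watch for this and shut down
  }
  delay(1000);
}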
{ "source": [ "https://arduino.stackexchange.com/questions/893", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/785/" ] }
1,013
I am sending a list of servo positions via the serial connection to the arduino in the following format 1:90&2:80&3:180 Which would be parsed as: servoId : Position & servoId : Position & servoId : Position How would I split these values up, and convert them to an integer?
Contrarily to other answers, I'd rather stay away from String for the following reasons: dynamic memory usage (that may quickly lead to heap fragmentation and memory exhaustion ) quite slow due to construction/destruction/assignment operators In an embedded environment like Arduino (even for a Mega that has more SRAM), I'd rather use standard C functions : strchr() : search for a character in a C string (i.e. char * ) strtok() : splits a C string into substrings, based on a separator character atoi() : converts a C string to an int That would lead to the following code sample: // Calculate based on max input size expected for one command #define INPUT_SIZE 30 ... // Get next command from Serial (add 1 for final 0) char input[INPUT_SIZE + 1]; byte size = Serial.readBytes(input, INPUT_SIZE); // Add the final 0 to end the C string input[size] = 0; // Read each command pair char* command = strtok(input, "&"); while (command != 0) { // Split the command in two values char* separator = strchr(command, ':'); if (separator != 0) { // Actually split the string in 2: replace ':' with 0 *separator = 0; int servoId = atoi(command); ++separator; int position = atoi(separator); // Do something with servoId and position } // Find the next command in input string command = strtok(0, "&"); } The advantage here is that no dynamic memory allocation takes place; you can even declare input as a local variable inside a function that would read the commands and execute them; once the function is returned the size occupied by input (in the stack) is recovered.
{ "source": [ "https://arduino.stackexchange.com/questions/1013", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/878/" ] }
1,477
Is it possible to reset an Arduino (i.e., to reboot it) from code (i.e from the sketch itself)? I know that is possible with a special circuit but is there a chance to make it just with code? Below is my code and the comment //reset is where I want to force a reset. #include <TrueRandom.h> int i; int randSeed; long randNumber; void setup(){ Serial.begin(9600); Serial.println("20 pseudo Zufallszahlen:"); for (i=1;i<=20;i++) Serial.print(random(10)); Serial.println(); Serial.println(); //randomSeed(TrueRandom.random()); randSeed = analogRead (A0); randomSeed(randSeed); Serial.print("Der 'seed' Wert: "); Serial.println(randSeed); Serial.println(); Serial.println("20 Zufallszahlen mit analogem 'seed' Wert:"); for (i=1;i<=20;i++) Serial.print(random(10)); Serial.println(); Serial.println("---------------------------"); Serial.println(); delay(500); //reset } void loop() { } I want to reset the micro-controller at the end of the setup function to show the effect of random numbers with and without a seed.
There are three ways to accomplish this (the last is my favorite).

1) Jumper an unused IO pin to the RESET pin. Leave it as INPUT for the normal run, as it is externally pulled high. When you want to reset, set it LOW and make it an OUTPUT. (Bang, it's rebooting.)

void setup() {
  ...
  pinMode(PINtoRESET, INPUT);    // Just to be clear, as default is INPUT. Not really needed.
  digitalWrite(PINtoRESET, LOW); // Prime it, but does not actually set output.
  ...                            // Does disable the 10K pull-up, but who cares.

then when desired...

  ...
  pinMode(PINtoRESET, OUTPUT); // Lights out. Assuming it is jumpered correctly.
  while(1);                    // Never gets here.

2) Jump to the beginning of the code.

void(* resetFunc) (void) = 0; // declare reset function at address 0
...
resetFunc(); // call reset

But be careful: this does not perform a true reset, in that all the registers ARE NOT DEFAULTED. Rather, they and the IO are left as is. Some things from the bootloader and then the heap will be initialized, but the rest are not!

3) Use the watchdog. The SoftReset library makes it easy, although it is not difficult to implement directly. Shown below:

#include <avr/wdt.h>
...
void setup() {
  ...
  MCUSR = 0; // clear out any flags of prior resets.
  ...

then when desired...

  ...
  wdt_enable(WDTO_15MS); // turn on the WatchDog and don't stroke it.
  for(;;) {
    // do nothing and wait for the eventual...
  }
  ...
{ "source": [ "https://arduino.stackexchange.com/questions/1477", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/233/" ] }
1,569
I know that this is to initialize something: Serial.begin(9600); But I want to know what it really means?
Serial.begin(9600) doesn't actually print anything. For that you'd want to use Serial.print("Hello world!") to print the text "Hello world!" to the serial console. Rather it initializes the serial connection at 9600 bits per second. Both sides of the serial connection (i.e. the Arduino and your computer) need to be set to use the same speed serial connection in order to get any sort of intelligible data. If there's a mismatch between what the two systems think the speed is then the data will be garbled. 9600 bits per second is the default for the Arduino, and is perfectly adequate for the majority of users, but you could change it to other speeds: Serial.begin(57600) would set the Arduino to transmit at 57600 bits per second. You'd need to set whatever software you're using on your computer (like the Arduino IDE's serial monitor) to the same speed in order to see the data being sent.
{ "source": [ "https://arduino.stackexchange.com/questions/1569", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/670/" ] }
3,774
I'd like to make three arrays of the same length. According to the documentation , Arrays must be defined as int myArray[10]; where 10 can be substituted for a known length (another integer), or filled with an array {2, 3, 5, 6, 7} . However, when I attempted to declare a value int arrSize = 10; and then an array based on that size int myArray[arrSize]; , I get the following: error: array bound is not an integer constant . Is there a way to variably determine array sizes, or do I just need to hardcode them? (I was taught hardcoding is bad and something to avoid at all costs.)
Your question has 2 parts actually. 1/ How can I declare the constant size of an array outside the array? You can either use a macro #define ARRAY_SIZE 10 ... int myArray[ARRAY_SIZE]; or use a constant const int ARRAY_SIZE = 10; ... int myArray[ARRAY_SIZE]; if you initialized the array and you need to know its size then you can do: int myArray[] = {1, 2, 3, 4, 5}; const int ARRAY_SIZE = sizeof(myArray) / sizeof(int); the second sizeof is on the type of each element of your array, here int . 2/ How can I have an array which size is dynamic (i.e. not known until runtime)? For that you will need dynamic allocation, which works on Arduino, but is generally not advised as this can cause the "heap" to become fragmented. You can do (C way): // Declaration int* myArray = 0; int myArraySize = 0; // Allocation (let's suppose size contains some value discovered at runtime, // e.g. obtained from some external source) if (myArray != 0) { myArray = (int*) realloc(myArray, size * sizeof(int)); } else { myArray = (int*) malloc(size * sizeof(int)); } Or (C++ way): // Declaration int* myArray = 0; int myArraySize = 0; // Allocation (let's suppose size contains some value discovered at runtime, // e.g. obtained from some external source or through other program logic) if (myArray != 0) { delete [] myArray; } myArray = new int [size]; For more about problems with heap fragmentation, you can refer to this question .
{ "source": [ "https://arduino.stackexchange.com/questions/3774", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/3559/" ] }
4,071
In a lot of the sample code online people add the line Serial.begin(9600) in the setup block. When I look up what Serial.begin() is on the official documentation, it says that it controls the bit per second data transfer. So the obvious question is, why not use 28800, the highest transfer rate? Why do people settle for 9600? What is the limitation here?
Why do people settle? People settle because it is more than fast enough. The most common use is just to print some stuff on a terminal for debuggin. 9600 baud is 960 characters per second, or 12 x 80 character lines per second. How fast can you read? :) If your program is using the serial port for bulk data transfer, you would choose not to settle. What is the limitation... The limits on serial are high. Directly you can use 115200 baud in your programs and it will just work. The Arduino terminal will allow a max of 115200, but other programs such as RealTerm would let you run higher. Hardware serial will run to 1 M baud. If you read around you will see people have used up to 1 M by directly controlling the UART. You might get benefit of high baud rates for uses such as transmitting via a bluetooth chip. If you are using the hardware serial interface to exchange from chip to chip with just a short distance, then 1 M baud is completely feasible. Think of all the SPI and I2C devices that operate just fine at 1 MHz clock rate. Over larger distances, you will start to have problems with noise when using logic level (plain 0 to 5V) signalling. To use larger distances, you would add a transceiver to provide robust signalling, commonly RS-232 and less commonly RS-485. With RS-232 you could run a mega bit at distances of 10's of feet. The microprocessor clock speed will be the real limit. With a hardware UART, the processor must load one byte to the UART every 10 bits (for N81). So when you get to 1 M baud it will be a challenge for the 16 MHz processor to keep the UART supplied with data. A new byte will be sent every 160 clock ticks, which is very few lines of code. For a short burst of data, you might achieve that rate. The message is, the processor will run out of speed before the UART is the limit. Note, this all applies to HardwareSerial , software serial is very different.
{ "source": [ "https://arduino.stackexchange.com/questions/4071", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/3786/" ] }
8,511
I have many Arduino Mini Pros, each from a different seller. A few are 3.3 V and most are 5 V. I had to clean the table for Xmas and now I have no idea how to identify the 3.3 V Arduinos. They do not have any marks. I bought them on eBay. I know the 3.3 V version has an 8 MHz clock, but only one of my Arduinos has a big crystal marked 16.000-30.
The regulator should be marked K850 (5.0V) or K833 (3.3V). A 5 volt part has a 16 MHz resonator, which may be marked with "A1" or "A'N". A 3.3 volt part has an 8 MHz resonator, which may be marked with "80'0". As others have indicated, you can apply up to 12V at the RAW pin and measure the output of the regulator.
{ "source": [ "https://arduino.stackexchange.com/questions/8511", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/7216/" ] }
11,824
Recently I have noticed that there are two arduino sites, arduino.cc and arduino.org. They both have the Arduino logo and both sell what seems to be official Arduino boards. Also, arduino.org came out with the Arduino Zero board first. What is the deal here? Has Arduino partnered with another site? Any ideas appreciated.
The short of it is that there was a falling out within the Arduino people and now there are two groups laying claim to the "Arduino" name. Arduino LLC runs arduino.cc. They are the steward of the Arduino IDE and libraries and own the "Arduino" trademark in the United States, and also owns the "Genuino" trademark outside of the United States. Arduino SRL (fka Smart Projects SRL) is the company that assembles(assembled) the majority of Arduino boards for Arduino LLC, runs arduino.org, and owns the "Arduino" trademark inside Italy and all of the other countries they have registered the "Arduino" trademark. Arduino SRL recently decided that they are no longer beholden to Arduino LLC and has stopped paying licensing fees for using the Arduino name. Whether this is justified or not has not been fully tested in court yet.
{ "source": [ "https://arduino.stackexchange.com/questions/11824", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/8046/" ] }
11,840
I am interested in writing a simple Arduino program that will turn a servo back and forth (180 degrees each time) continuously. I am looking at these servos and am planning on following this example . Being so new to Arduino, I first wanted to confirm that the SoftwareServo library in that example will drive those particular servos. If it will not, then can someone begin by explaining to me why these servos are incompatible with the lib? Assuming they are compatible, I am looking at the main example on that page (comments stripped out for brevity): #include <SoftwareServo.h> SoftwareServo myservo; int potpin = 0; int val; void setup() { myservo.attach(2); } void loop() { val = analogRead(potpin); val = map(val, 0, 1023, 0, 179); myservo.write(val); delay(15); SoftwareServo::refresh(); } My concerns: There seem to be 3 pins involved here: a potentiometer pin ( 0 ), an analog pin ( val ) and a servo pin ( 2 ). What is the purpose of each of these pins, and is this 3-pin wiring common across all/most servos? Is there anything "wrong" with defining myservo.attach(2); up above the setup function, and leave the setup function an empty no-op function? Where does the map(...) function come from? If it was defined inside SoftwareServo I would have expected its usage to be something like SoftwareServo::map(...) , etc. Why do we need the delay after calling myservo.write(val); ? What would happen if this delay wasn't in there? According to that link, calling SoftwareServo::refresh() once every 50ms is necessary in order to: " keep your servos updating. " But what does this mean, really? Updating to what?!?
{ "source": [ "https://arduino.stackexchange.com/questions/11840", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/9842/" ] }
12,587
I need to read a sensor every five minutes, but since my sketch also has other tasks to do, I cannot just delay() between the readings. There is the Blink without delay tutorial suggesting I code along these lines: void loop() { unsigned long currentMillis = millis(); // Read the sensor when needed. if (currentMillis - previousMillis >= interval) { previousMillis = currentMillis; readSensor(); } // Do other stuff... } The problem is that millis() is going to roll over back to zero after roughly 49.7 days. Since my sketch is intended to run for longer than that, I need to make sure the rollover does not make my sketch fail. I can easily detect the rollover condition ( currentMillis < previousMillis ), but I am not sure what to do then. Thus my question: what would be the proper/simplest way to handle the millis() rollover?
Short answer: do not try to “handle” the millis rollover, write rollover-safe code instead. Your example code from the tutorial is fine. If you try to detect the rollover in order to implement corrective measures, chances are you are doing something wrong. Most Arduino programs only have to manage events that span relatively short durations, like debouncing a button for 50 ms, or turning a heater on for 12 hours... Then, and even if the program is meant to run for years at a time, the millis rollover should not be a concern. The correct way to manage (or rather, avoid having to manage) the rollover problem is to think of the unsigned long number returned by millis() in terms of modular arithmetics . For the mathematically inclined, some familiarity with this concept is very useful when programming. You can see the math in action in Nick Gammon's article millis() overflow ... a bad thing? . For the problem at hand, what's important to know is that in modular arithmetics the numbers “wrap around” when reaching a certain value – the modulus – so that 1 − modulus is not a negative number but 1 (think of a 12 hour clock where the modulus is 12: here 1 − 12 = 1). For those who do not want to go through the computational details, I offer here an alternative (hopefully simpler) way of thinking about it. It is based on the simple distinction between instants and durations . As long as your tests only involve comparing durations, you should be fine. Note on micros() : Everything said here about millis() applies equally to micros() , except for the fact that micros() rolls over every 71.6 minutes, and the setMillis() function provided below does not affect micros() . Instants, timestamps and durations When dealing with time, we have to make the distinction between at least two different concepts: instants and durations . An instant is a point on the time axis. A duration is the length of a time interval, i.e. the distance in time between the instants that define the start and the end of the interval. The distinction between these concepts is not always very sharp in everyday language. For example, if I say “ I will be back in five minutes ”, then “ five minutes ” is the estimated duration of my absence, whereas “ in five minutes ” is the instant of my predicted coming back. Keeping the distinction in mind is important, because it is the simplest way to entirely avoid the rollover problem. The return value of millis() could be interpreted as a duration: the time elapsed from the start of the program until now. This interpretation, however, breaks down as soon as millis overflows. It is generally far more useful to think of millis() as returning a timestamp , i.e. a “label” identifying a particular instant. It could be argued that this interpretation suffers from these labels being ambiguous, as they are reused every 49.7 days. This is, however, seldom a problem: in most embedded applications, anything that happened 49.7 days ago is ancient history we do not care about. Thus, recycling the old labels should not be an issue. Do not compare timestamps Trying to find out which among two timestamps is greater than the other does not make sense. Example: unsigned long t1 = millis(); delay(3000); unsigned long t2 = millis(); if (t2 > t1) { ... } Naively, one would expect the condition of the if () to be always true. But it will actually be false if millis overflows during delay(3000) . 
Thinking of t1 and t2 as recyclable labels is the simplest way to avoid the error: the label t1 has clearly been assigned to an instant prior to t2, but in 49.7 days it will be reassigned to a future instant. Thus, t1 happens both before and after t2. This should make clear that the expression t2 > t1 makes no sense. But, if these are mere labels, the obvious question is: how can we do any useful time calculations with them? The answer is: by restricting ourselves to the only two calculations that make sense for timestamps: later_timestamp - earlier_timestamp yields a duration, namely the amount of time elapsed between the earlier instant and the later instant. This is the most useful arithmetic operation involving timestamps. timestamp ± duration yields a timestamp which is some time after (if using +) or before (if −) the initial timestamp. Not as useful as it sounds, since the resulting timestamp can be used in only two kinds of calculations... Thanks to modular arithmetics, both of these are guaranteed to work fine across the millis rollover, at least as long as the delays involved are shorter than 49.7 days. Comparing durations is fine A duration is just the amount of milliseconds elapsed during some time interval. As long as we do not need to handle durations longer than 49.7 days, any operation that physically makes sense should also make sense computationally. We can, for example, multiply a duration by a frequency to get a number of periods. Or we can compare two durations to know which one is longer. For example, here are two alternative implementations of delay() . First, the buggy one: void myDelay(unsigned long ms) { // ms: duration unsigned long start = millis(); // start: timestamp unsigned long finished = start + ms; // finished: timestamp for (;;) { unsigned long now = millis(); // now: timestamp if (now >= finished) // comparing timestamps: BUG! return; } } And here is the correct one: void myDelay(unsigned long ms) { // ms: duration unsigned long start = millis(); // start: timestamp for (;;) { unsigned long now = millis(); // now: timestamp unsigned long elapsed = now - start; // elapsed: duration if (elapsed >= ms) // comparing durations: OK return; } } Most C programmers would write the above loops in a terser form, like while (millis() < start + ms) ; // BUGGY version and while (millis() - start < ms) ; // CORRECT version Although they look deceptively similar, the timestamp/duration distinction should make clear which one is buggy and which one is correct. What if I really need to compare timestamps? Better try to avoid the situation. If it is unavoidable, there is still hope if it is known that the respective instants are close enough: closer than 24.85 days. Yes, our maximum manageable delay of 49.7 days just got cut in half. The obvious solution is to convert our timestamp comparison problem into a duration comparison problem. Say we need to know whether instant t1 is before or after t2. We choose some reference instant in their common past, and compare the durations from this reference until both t1 and t2. 
The reference instant is obtained by subtracting a long enough duration from either t1 or t2: unsigned long reference_instant = t2 - LONG_ENOUGH_DURATION; unsigned long from_reference_until_t1 = t1 - reference_instant; unsigned long from_reference_until_t2 = t2 - reference_instant; if (from_reference_until_t1 < from_reference_until_t2) // t1 is before t2 This can be simplified as: if (t1 - t2 + LONG_ENOUGH_DURATION < LONG_ENOUGH_DURATION) // t1 is before t2 It is tempting to simplify further into if (t1 - t2 < 0) . Obviously, this does not work, because t1 - t2 , being computed as an unsigned number, cannot be negative. This, however, although not portable, does work: if ((signed long)(t1 - t2) < 0) // works with gcc // t1 is before t2 The keyword signed above is redundant (a plain long is always signed), but it helps make the intent clear. Converting to a signed long is equivalent to setting LONG_ENOUGH_DURATION equal to 24.85 days. The trick is not portable because, according to the C standard, the result is implementation defined . But since the gcc compiler promises to do the right thing , it works reliably on Arduino. If we wish to avoid implementation defined behavior, the above signed comparison is mathematically equivalent to this: #include <limits.h> if (t1 - t2 > LONG_MAX) // too big to be believed // t1 is before t2 with the only problem that the comparison looks backwards. It is also equivalent, as long as longs are 32-bits, to this single-bit test: if ((t1 - t2) & 0x80000000) // test the "sign" bit // t1 is before t2 The last three tests are actually compiled by gcc into the exact same machine code. How do I test my sketch against the millis rollover If you follow the precepts above, you should be all good. If you nevertheless want to test, add this function to your sketch: #include <util/atomic.h> void setMillis(unsigned long ms) { extern unsigned long timer0_millis; ATOMIC_BLOCK (ATOMIC_RESTORESTATE) { timer0_millis = ms; } } and you can now time-travel your program by calling setMillis(destination) . If you want it to go through the millis overflow over and over again, like Phil Connors reliving Groundhog Day, you can put this inside loop() : // 6-second time loop starting at rollover - 3 seconds if (millis() - (-3000) >= 6000) setMillis(-3000); The negative timestamp above (-3000) is implicitly converted by the compiler to an unsigned long corresponding to 3000 milliseconds before the rollover (it is converted to 4294964296). What if I really need to track very long durations? If you need to turn a relay on and turn it off three months later, then you really need to track the millis overflows. There are many ways to do so. The most straightforward solution may be to simply extend millis() to 64 bits: uint64_t millis64() { static uint32_t low32, high32; uint32_t new_low32 = millis(); if (new_low32 < low32) high32++; low32 = new_low32; return (uint64_t) high32 << 32 | low32; } This is essentially counting the rollover events, and using this count as the 32 most significant bits of a 64 bit millisecond count. For this counting to work properly, the function needs to be called at least once every 49.7 days. However, if it is only called once per 49.7 days, for some cases it is possible that the check (new_low32 < low32) fails and the code misses a count of high32 . Using millis() to decide when to make the only call to this code in a single "wrap" of millis (a specific 49.7 day window) could be very hazardous, depending on how the time frames line up. 
For safety, if using millis() to determine when to call millis64(), make sure there are at least two calls in every 49.7-day window. Keep in mind, though, that 64-bit arithmetic is expensive on the Arduino. It may be worth reducing the time resolution in order to stay at 32 bits.
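As an illustration of that last point, here is a minimal sketch of one way to stay at 32 bits by tracking whole seconds instead of milliseconds. The name seconds32() and the update strategy are just an example, not a standard library function; a 32-bit seconds counter is good for roughly 136 years:

// Must be called reasonably often (at least once every 49.7 days),
// e.g. from loop().
uint32_t seconds32() {
  static uint32_t seconds = 0;
  static uint32_t last_tick = 0;            // timestamp of the last counted second
  uint32_t elapsed = millis() - last_tick;  // duration: safe across the rollover
  while (elapsed >= 1000) {                 // count every full second that has passed
    seconds++;
    last_tick += 1000;
    elapsed -= 1000;
  }
  return seconds;
}

All the arithmetic stays in unsigned 32-bit values and only ever uses the "later timestamp minus earlier timestamp gives a duration" operation, so it survives the millis rollover as long as the function is called often enough.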
{ "source": [ "https://arduino.stackexchange.com/questions/12587", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/7508/" ] }
13,178
I have no previous experience with C++ or C, but know how to program C# and am learning Arduino. I just want to organize my sketches and am quite comfortable with the Arduino language even with its limitations, but I really would like to have an object-oriented approach to my Arduino programming. So I have seen that you can have the following ways (not exhaustive list) to organize code: A single .ino file; Multiple .ino files in the same folder (what the IDE calls and displays like "tabs"); An .ino file with an included .h and .cpp file in the same folder; Same as above, but the files are an installed library inside Arduino program folder. I have also heard of the following ways, but have not got them working yet: Declaring a C++-style class in the same, single .ino file (have heard of, but never seen working - is that even possible?); [preferred approach] Including a .cpp file where a class is declared, but without using a .h file (would like this approach, should that work?); Note that I only want to use classes so that code is more partitioned, my applications should be very simple, only involving buttons, leds and buzzers mostly.
How the IDE organizes things First thing, this is how the IDE organizes your "sketch": The main .ino file is the one of the same name as the folder it is in. So, for foobar.ino in foobar folder - the main file is foobar.ino. Any other .ino files in that folder are concatenated together, in alphabetic order, at the end of the main file (regardless of where the main file is, alphabetically). This concatenated file becomes a .cpp file (eg. foobar.cpp ) - it is placed in a temporary compilation folder. The preprocessor "helpfully" generates function prototypes for functions it finds in that file. The main file is scanned for #include <libraryname> directives. This triggers the IDE to also copy all relevant files from each (mentioned) library into the temporary folder, and generate instructions to compile them. Any .c , .cpp or .asm files in the sketch folder are added to the build process as separate compilation units (that is, they are compiled in the usual way as separate files) Any .h files are also copied into the temporary compilation folder, so they can be referred to by your .c or .cpp files. The compiler adds into the build process standard files (like main.cpp ) The build process then compiles all the above files into object files. If the compilation phase succeeds they are linked together along with the AVR standard libraries (eg. giving you strcpy etc.) A side-effect of all this is that you can consider the main sketch (the .ino files) to be C++ to all intents and purposes. The function prototype generation however can lead to obscure error messages if you are not careful. Avoiding the pre-processor quirks The simplest way of avoiding these idiosyncrasies is to leave your main sketch blank (and not use any other .ino files). Then make another tab (a .cpp file) and put your stuff into it like this: #include <Arduino.h> // put your sketch here ... void setup () { } // end of setup void loop () { } // end of loop Note that you need to include Arduino.h . The IDE does that automatically for the main sketch, but for other compilation units, you have to do it. Otherwise it won't know about things like String, the hardware registers, etc. Avoiding the setup/main paradigm You don't have to run with the setup/loop concept. For example, your .cpp file can be: #include <Arduino.h> int main () { init (); // initialize timers Serial.begin (115200); Serial.println ("Hello, world"); Serial.flush (); // let serial printing finish } // end of main Force library inclusion If you run with the "empty sketch" concept you still need to include libraries used elsewhere in the project, for example in your main .ino file: #include <Wire.h> #include <SPI.h> #include <EEPROM.h> This is because the IDE only scans the main file for library usage. Effectively you can consider the main file as a "project" file which nominates which external libraries are in use. Naming issues Don't name your main sketch "main.cpp" - the IDE includes its own main.cpp so you will have a duplicate if you do that. Don't name your .cpp file with the same name as your main .ino file. Since the .ino file effectively becomes a .cpp file this also would give you a name clash. 
Declaring a C++-style class in the same, single .ino file (have heard of, but never seen working - is that even possible?); Yes, this compiles OK: class foo { public: }; foo bar; void setup () { } void loop () { } However you are probably best off to follow normal practice: Put your declarations in .h files and your definitions (implementations) in .cpp (or .c ) files. Why "probably"? As my example shows you can put everything together in one file. For larger projects it is better to be more organized. Eventually you get to the stage in a medium to large-size project where you want to separate out things into "black boxes" - that is, a class that does one thing, does it well, is tested, and is self-contained (as far as possible). If this class is then used in multiple other files in your project this is where the separate .h and .cpp files come into play. The .h file declares the class - that is, it provides enough detail for other files to know what it does, what functions it has, and how they are called. The .cpp file defines (implements) the class - that is, it actually provides the functions, and static class members, that make the class do its thing. Since you only want to implement it once, this is in a separate file. The .h file is what gets included into other files. The .cpp file is compiled once by the IDE to implement the class functions. Libraries If you follow this paradigm, then you are ready to move the entire class (the .h and .cpp files) into a library very easily. Then it can be shared between multiple projects. All that is required is to make a folder (eg. myLibrary ) and put the .h and .cpp files into it (eg. myLibrary.h and myLibrary.cpp ) and then put this folder inside your libraries folder in the folder where your sketches are kept (the sketchbook folder). Restart the IDE and it now knows about this library. This is really trivially simple, and now you can share this library over multiple projects. I do this a lot. A bit more detail here .
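To make the .h/.cpp split concrete, here is a minimal sketch of how a tiny class might be laid out across the two files. The class name (Blinker) and its members are made up purely for illustration; it is not part of any library:

// Blinker.h -- the declaration, included wherever the class is used
#ifndef BLINKER_H
#define BLINKER_H

#include <Arduino.h>

class Blinker {
  public:
    Blinker(byte pin, unsigned long intervalMs);
    void begin();   // set up the pin
    void update();  // call from loop(); toggles the LED when the interval elapses
  private:
    byte pin_;
    unsigned long interval_;
    unsigned long lastToggle_;
    bool state_;
};

#endif

// Blinker.cpp -- the definition (implementation), compiled once
#include "Blinker.h"

Blinker::Blinker(byte pin, unsigned long intervalMs)
  : pin_(pin), interval_(intervalMs), lastToggle_(0), state_(false) { }

void Blinker::begin() {
  pinMode(pin_, OUTPUT);
}

void Blinker::update() {
  if (millis() - lastToggle_ >= interval_) {  // duration comparison, rollover-safe
    lastToggle_ = millis();
    state_ = !state_;
    digitalWrite(pin_, state_);
  }
}

The sketch itself then only needs #include "Blinker.h", a global Blinker led(13, 500);, led.begin(); in setup() and led.update(); in loop(). Moving the pair of files into a folder under your sketchbook's libraries folder turns it into a shared library, exactly as described above.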
{ "source": [ "https://arduino.stackexchange.com/questions/13178", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/8044/" ] }
13,292
I can't upload sketches to my Arduino Uno. Have I "bricked" it? What steps can I take to work out what is wrong? What can I do to fix it?
It probably isn't bricked I've got quite a few Arduinos, and over the last few years have only ever "bricked" one, and I think that was by zapping it with static electricity. Unfortunately that particular one had a SMD (surface mounted) processor chip, so it isn't easy to try swapping it with another chip. Stay calm, and try the following steps ... Example board An "Arduino Uno" is not just one thing that might fail. It has multiple major components, and possibly only one has failed (if any). See this reference photograph: Major components are: Atmega16U2 processor - this handles the interface to the USB connection Atmega328P processor - this is the "main" processor which has your sketch on it Voltage regulator - this converts incoming power from the power jack to 5 V Power LED (green) - marked "On" Indicator LED (yellow) marked "L" - connected via an op-amp to digital pin 13 Rx and Tx LEDs (yellow) - these indicate if the USB chip (Atmega16U2) is receiving or transmitting Note that the Rx and Tx LEDs are not connected directly to digital pins 0 and 1 on the board (marked Rx and Tx). They only illuminate if you are doing serial communications via USB , not if you have something (like a GPS) plugged directly into digital pins 0 and 1. Also note that since the "L" LED is connected via an op-amp, it may illuminate if pin 13 is set to an input in your sketch. This is normal. It doesn't mean that something is erroneously sending data. Check the power USB power Plug the board into your computer with the USB cable and check that the green "On" LED lights up. Use a multimeter and a couple of jumper leads to test between the 5V pin and the GND pin (arrowed at the bottom). You should get a reading of around 5.0 V (I got 5.04 V on mine). (You can buy a cheap multimeter for around $10 if you don't have one, but you are better off getting a better one for around $50 - check all the electronics web sites and stores.) Also test between the 3.3 V pin and the GND pin - you should get 3.3 V. If you do not get 5 V with the USB cable plugged in make sure the other end is connected to your computer. Also try a different cable. Power jack If you are using, or planning to use, the power jack (marked "power in" on the photo) disconnect the USB, and plug in a power supply - which should be 7 to 12 V DC with positive on the center pin. Measure the 5 V and 3.3 V pins as above. You should still see the same voltages on them. If you get 5 V with the USB connected, but not with the power supply then the voltage regulator (marked on the photo) is probably damaged. Or, possibly the power supply has failed. Try a different power supply to confirm which it is. Check the power-on LED flash If you have the Optiboot bootloader (the Uno normally ships with that) then if you press and release the Reset button, or unplug and plug the USB or power cable back in, the "L" LED should flash quickly 3 times. The "on" and "off" times are 50 ms each, the three flashes should be over within about 1/3 of a second. If it doesn't, you may have a problem with the bootloader, or the main processor chip (Atmega328P). Try uploading a sketch Important: If you are having trouble uploading sketches remove any connected devices (like shields). Also remove jumper wires plugged into the board sockets. In particular, there should be nothing plugged into digital pins 0 and 1 (Rx and Tx) because that will interfere with communicating with the uploading computer. Choose one of the simple example sketches (eg. Blink) and try to upload it. 
This is what you should see: The "L" LED should flash 3 times. This is because the main chip is being reset by a command from the uploading process. The "Rx" LED should flash quickly. This is the instructions from the uploading process trying to activate the bootloader. The "Tx" LED should flash quickly. This is the processor acknowledging the uploaded data. You may see the above, even if the uploading process fails. This could be because the wrong board type is selected. If only the "Rx" LED flashes, it could be because of a problem with the bootloader, or the main processor chip (Atmega328P). Someone is knocking, but no-one is at home! Check the board type If the LEDs flash, but you get a message like this: avrdude: stk500_recv(): programmer is not responding Check the board type: If you have the wrong board type selected it will probably send the wrong uploading instructions, and time-out or otherwise fail. If you are like me and have different boards lying around it is easy to forget that the last upload you did was for a different board type. Check the comm port If the LEDs don't flash at all, you may have the wrong comm port selected. Check the comm port: Try a different PC / Mac if possible Try your Arduino on a different PC/Mac if you have one to hand. This can narrow down whether or not you have an issue with the particular computer you have plugged it into, or computers in general. Do a loopback test Disconnect all shields and other wires Remove the board from the power Connect a jumper wire from RESET to GND (orange wire in photo) Connect a jumper wire from Rx to Tx (white wire in photo) Wiring: Plug in the USB cable, and start up a terminal program - such as the Terminal Monitor in the Arduino IDE. Type something and send it (eg. hit Enter in the Terminal Monitor). Everything you type should be echoed back. If everything is echoed back: That confirms you have the right comm port, the USB cable is OK and the USB interface chip (Atmega16U2) is probably OK. If nothing is echoed back, check: You have the correct comm port. Try a different cable. Some cheap USB cables only have power wires and not data wires. Check the device driver for the Arduino is installed. You probably don't need to do this if that board worked previously on this computer, but it can be worth doing if this is the first time you plugged this board into this computer. Test the Atmega16U2 chip If your board fails the loop-back test, and you are certain the USB cable is OK, then you can test the Atmega16U2 chip itself. There is an ICSP (In Circuit Serial Programming) header on the board, adjacent to the Atmega16U2 chip, and near the USB socket. Disconnect the power first (unplug the USB cable and any power cable). Then you can connect the ICSP header via 6 jumper wires to a known good Uno, as shown in the photo: The pin-outs for the ICSP header are (from the top): Pin 1 on the ICSP header near the Atmega16U2 chip is marked with a small white dot, near the "F" in "AREF". Pin 1 on the ICSP header near the ATmega328P chip is marked with a small white dot, below the "N" in "ON". Connect up: Good board Target Uno MISO MISO (pin with dot - pin 1) VCC VCC SCK SCK MOSI MOSI D10 /RESET GND GND Double-check your wiring. Then on the "known good" board install the "Atmega_Board_Detector" sketch as described on the Atmega bootloader programmer page. The code is at GitHub - nickgammon/arduino_sketches . If you click the Download button on that page you will get a number of useful sketches. 
The one you want is called "Atmega_Board_Detector". Once installed, open the serial monitor, set it to 115200 baud, and you should see something like this: Atmega chip detector. Written by Nick Gammon. Version 1.17 Compiled on Jul 9 2015 at 08:36:24 with Arduino IDE 10604. Attempting to enter ICSP programming mode ... Entered programming mode OK. Signature = 0x1E 0x94 0x89 Processor = ATmega16U2 Flash memory size = 16384 bytes. LFuse = 0xEF HFuse = 0xD9 EFuse = 0xF4 Lock byte = 0xCF Clock calibration = 0x51 Bootloader in use: No EEPROM preserved through erase: No Watchdog timer always on: No Bootloader is 4096 bytes starting at 3000 Bootloader: 3000: 0x4B 0xC0 0x00 0x00 0x64 0xC0 0x00 0x00 0x62 0xC0 0x00 0x00 0x60 0xC0 0x00 0x00 3010: 0x5E 0xC0 0x00 0x00 0x5C 0xC0 0x00 0x00 0x5A 0xC0 0x00 0x00 0x58 0xC0 0x00 0x00 3020: 0x56 0xC0 0x00 0x00 0x54 0xC0 0x00 0x00 0x52 0xC0 0x00 0x00 0xEE 0xC4 0x00 0x00 ... 3FE0: 0xFF 0xFF 0xFF 0xFF 0xFF 0xFF 0xFF 0xFF 0xFF 0xFF 0xFF 0xFF 0xFF 0xFF 0xFF 0xFF 3FF0: 0xFF 0xFF 0xFF 0xFF 0xFF 0xFF 0xFF 0xFF 0xFF 0xFF 0xFF 0xFF 0xFF 0xFF 0xFF 0xFF MD5 sum of bootloader = 0xD8 0x8C 0x70 0x6D 0xFE 0x1F 0xDC 0x38 0x82 0x1E 0xCE 0xAE 0x23 0xB2 0xE6 0xE7 Bootloader name: Arduino-dfu-usbserial-atmega16u2-Uno-Rev3 First 256 bytes of program memory: 0: 0x90 0xC0 0x00 0x00 0xA9 0xC0 0x00 0x00 0xA7 0xC0 0x00 0x00 0xA5 0xC0 0x00 0x00 10: 0xA3 0xC0 0x00 0x00 0xA1 0xC0 0x00 0x00 0x9F 0xC0 0x00 0x00 0x9D 0xC0 0x00 0x00 20: 0x9B 0xC0 0x00 0x00 0x99 0xC0 0x00 0x00 0x97 0xC0 0x00 0x00 0x48 0xC4 0x00 0x00 30: 0x0C 0xC4 0x00 0x00 0x91 0xC0 0x00 0x00 0x8F 0xC0 0x00 0x00 0x8D 0xC0 0x00 0x00 ... However if you get a message like this: "Failed to enter programming mode. Double-check wiring!" That would appear to indicate that your ATmega16U2 is not working. Test the ATmega328P chip Disconnect the power from the "known good" Arduino Uno and rewire the ICSP jumpers as per this photo, to connect them to the "main" processor on your Uno: The pin-outs for the ICSP header are (from the top): Pin 1 on the ICSP header near the ATmega328P chip is marked with a small white dot, below the "N" in "ON". The wiring is the same as before, except you are connecting to the other ICSP header - the one at the end of the board, furthest from the USB socket. Good board Target Uno MISO MISO (pin with dot - pin 1) VCC VCC SCK SCK MOSI MOSI D10 /RESET GND GND Once connected, open the serial monitor, set it to 115200 baud, and you should see something like this: Atmega chip detector. Written by Nick Gammon. Version 1.17 Compiled on Jul 9 2015 at 08:36:24 with Arduino IDE 10604. Attempting to enter ICSP programming mode ... Entered programming mode OK. Signature = 0x1E 0x95 0x0F Processor = ATmega328P Flash memory size = 32768 bytes. LFuse = 0xFF HFuse = 0xDE EFuse = 0xFD Lock byte = 0xEF Clock calibration = 0x83 Bootloader in use: Yes EEPROM preserved through erase: No Watchdog timer always on: No Bootloader is 512 bytes starting at 7E00 Bootloader: 7E00: 0x11 0x24 0x84 0xB7 0x14 0xBE 0x81 0xFF 0xF0 0xD0 0x85 0xE0 0x80 0x93 0x81 0x00 7E10: 0x82 0xE0 0x80 0x93 0xC0 0x00 0x88 0xE1 0x80 0x93 0xC1 0x00 0x86 0xE0 0x80 0x93 ... 
MD5 sum of bootloader = 0xFB 0xF4 0x9B 0x7B 0x59 0x73 0x7F 0x65 0xE8 0xD0 0xF8 0xA5 0x08 0x12 0xE7 0x9F Bootloader name: optiboot_atmega328 First 256 bytes of program memory: 0: 0x0C 0x94 0x35 0x00 0x0C 0x94 0x5D 0x00 0x0C 0x94 0x5D 0x00 0x0C 0x94 0x5D 0x00 10: 0x0C 0x94 0x5D 0x00 0x0C 0x94 0x5D 0x00 0x0C 0x94 0x5D 0x00 0x0C 0x94 0x5D 0x00 20: 0x0C 0x94 0x5D 0x00 0x0C 0x94 0x5D 0x00 0x0C 0x94 0x5D 0x00 0x0C 0x94 0x5D 0x00 30: 0x0C 0x94 0x5D 0x00 0x0C 0x94 0x5D 0x00 0x0C 0x94 0x5D 0x00 0x0C 0x94 0x5D 0x00 ... In this case it confirms that the main processor is working, and has the Optiboot bootloader on it. Things you can fix Failed voltage regulator This isn't easy to replace, but it is only needed if you use the power jack. If you run from USB then it is not required. Alternatively you could arrange for a 4 to 5 V supply (eg. 3 x AA batteries) and connect them to the 5 V socket on the board directly. Failed ATmega16U2 chip This is only required for uploading sketches via the USB port, and serial debugging. It isn't particularly easy to replace because it is a SMD (surface mounted device). However you can manage without it. You can upload sketches via the ICSP header, if you purchase a ICSP programming device. Examples of such devices plugged into the ICSP socket: (Those photos were taken of a Ruggeduino, but the concept is the same). You can also get an FTDI cable, like this: Connect it to your board's serial ports like this: FTDI Arduino Uno GND GND (black wire on FTDI cable, blue jumper wire) CTS not connected VCC 5V TxD D0 (RX) RxD D1 (TX) RTS To RESET with a 0.1 µF capacitor in series with it (green wire) Now you can upload sketches directly to the main processor, bypassing the USB chip. You can also use my Atmega chip stand-alone programmer to upload .hex files - this lets you copy the .hex file for a sketch onto an SD card, and then program the board via the ICSP header. Failed ATmega328P chip The main processor can be replaced fairly easily if it is mounted in a socket. Get a replacement chip from somewhere like Adafruit for around $US 6. Alternatively, try eBay. Try to get a chip which has the Optiboot bootloader already on it, to save hassle. Carefully prise the existing chip out of the socket, and install the new one, taking note of the location of pin 1. Pin 1 has a notch on the chip, and its correct orientation is noted on the first photo in this post with a yellow dot (closest to the edge of the board). You will probably need to straighten the legs slightly. Hold the chip by the ends and gently push down onto a flat surface, like a desk, until the are pushed inwards a bit. Try to not touch the metal pins, you may zap them with static electricity. ATmega328P responds but has no bootloader I have a sketch at Atmega bootloader programmer which will replace the Optiboot bootloader. The wiring is the same as for the chip detector sketch. The code is at GitHub - nickgammon/arduino_sketches . If you click the Download button on that page you will get a number of useful sketches. The one you want is called "Atmega_Board_Programmer". Install the sketch on your "known good" Uno, and connect it to the target board with the wiring shown earlier. Open the Serial Monitor on your "good" Uno and you should see this: Atmega chip programmer. Written by Nick Gammon. Version 1.35 Compiled on Jul 9 2015 at 15:06:58 with Arduino IDE 10604. Attempting to enter ICSP programming mode ... Entered programming mode OK. Signature = 0x1E 0x95 0x0F Processor = ATmega328P Flash memory size = 32768 bytes. 
LFuse = 0xFF HFuse = 0xDE EFuse = 0xFD Lock byte = 0xEF Clock calibration = 0x83 Type 'L' to use Lilypad (8 MHz) loader, or 'U' for Uno (16 MHz) loader ... Type "U" for the Uno (Optiboot) loader. Using Uno Optiboot 16 MHz loader. Bootloader address = 0x7E00 Bootloader length = 512 bytes. Type 'Q' to quit, 'V' to verify, or 'G' to program the chip with the bootloader ... Type "G" to program the chip. You should see: Erasing chip ... Writing bootloader ... Committing page starting at 0x7E00 Committing page starting at 0x7E80 Committing page starting at 0x7F00 Committing page starting at 0x7F80 Written. Verifying ... No errors found. Writing fuses ... LFuse = 0xFF HFuse = 0xDE EFuse = 0xFD Lock byte = 0xEF Clock calibration = 0x83 Done. Programming mode off. Type 'C' when ready to continue with another chip ... This takes about one second. Now the bootloader is installed. Watchdog timer problems The watchdog timer (off by default) can be configured to reset the processor after a certain time. The intention is to recover from a "hang" for a processor deployed in the field. However if the timer is set for a short period (like 16 ms) then the processor can reset again before the bootloader has a chance to do anything. The symptoms are that you cannot upload any new sketches. Some modern bootloaders (like Optiboot) take steps to stop this problem as one of the first things they do. However others do not. This can be difficult to recover from, because once the sketch runs, you have the problem of it resetting, and if you have the problem you can't replace the sketch. People often report that they have to burn a new bootloader to recover. However that is only because, as a side-effect, burning a bootloader erases the current sketch. There is a way of recovering. Take these steps: Power off the board completely (remove the USB cable). Hold down the Reset button, and keep it held down (or, run a jumper wire from the RESET pin to the GND pin). This stops the problem sketch from starting, and thus activating the watchdog timer Still holding down Reset, reconnect the USB cable. Start uploading a sketch that does not have this problem (eg. Blink) Once the IDE reports "Uploading" release the Reset button (or remove the jumper wire). It should now upload OK - as the sketch which activated the watchdog timer never started. Problems with the Mega2560 upload I mention this here, even though this post is really targetting the Uno board, because it is quite common. Some versions of the Mega2560 bootloader look for "!!!" in the incoming upload from the PC, and if they see that, drop into debugging mode. This causes the upload to fail. Example code: Serial.println ("Furnace overheating!!!"); Solutions: Install a more recent bootloader. My "bootloader uploader" sketch mentioned earlier in this reply should install a bootloader that does not have that issue. Or (more simply): Do not use "!!!" in your sketch. Problems uploading to the Leonardo / Micro / Esplora etc. Boards with the ATmega32u4 as their main (and only) processor can be trickier to upload to. This is because the same chip has to handle uploads and also run your code. There is a small window of opportunity, after the board is reset, when it looks for a new sketch to be uploaded. The technique for uploading to these boards is: Compile your sketch without errors. Start the upload As soon as the IDE reports "Uploading" press and release the Reset button. You only have a second or so to do this, before the old sketch starts running. 
Don't be discouraged if you have to repeat this process a couple of times. That is normal.
References
- Solving problems with uploading programs to your Arduino
- Sketch to detect Atmega chip types
- GitHub site with chip detector / bootloader programmer
- Atmega bootloader programmer
- Atmega chip stand-alone programmer to upload .hex files
- Engbedded Atmel AVR® Fuse Calculator
- Arduino Uno Rev3 pinouts photo
{ "source": [ "https://arduino.stackexchange.com/questions/13292", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/10794/" ] }
13,429
What does this error mean? I haven't been able to solve it. warning: deprecated conversion from string constant to 'char*' [-Wwrite-strings]
As is my wont, I'm going to provide a bit of background technical information into the whys and wherefores of this this error. I'm going to inspect four different ways of initializing C strings and see what the differences between them are. These are the four ways in question: char *text = "This is some text"; char text[] = "This is some text"; const char *text = "This is some text"; const char text[] = "This is some text"; Now for this I am going to want to change the third letter "i" into an "o" to make it "Thos is some text". That could, in all cases (you would think), be achieved by: text[2] = 'o'; Now let's look at what each way of declaring the string does and how that text[2] = 'o'; statement would affect things. First the most commonly seen way: char *text = "This is some text"; . What does this literally mean? Well, in C, it literally means "Create a variable called text which is a read-write pointer to this string literal which is held in read-only (code) space.". If you have the option -Wwrite-strings turned on then you get a warning as seen in the question above. Basically that means "Warning: You have tried to make a variable that is read-write point to an area you can't write to". If you try and then set the third character to "o" you would in fact be trying to write to a read-only area and things won't be nice. On a traditional PC with Linux that results in: Segmentation Fault Now the second one: char text[] = "This is some text"; . Literally, in C, that means "Create an array of type "char" and initialize it with the data "This is some text\0". The size of the array will be big enough to store the data". So that actually allocates RAM and copies the value "This is some text\0" into it at runtime. No warnings, no errors, perfectly valid. And the right way to do it if you want to be able to edit the data . Let's try running the command text[2] = 'o' : Thos is some text It worked, perfectly. Good. Now the third way: const char *text = "This is some text"; . Again the literal meaning: "Create a variable called "text" that is a read only pointer to this data in read only memory.". Note that both the pointer and the data are now read-only. No errors, no warnings. What happens if we try and run our test command? Well, we can't. The compiler is now intelligent and knows that we are trying to do something bad: error: assignment of read-only location ‘*(text + 2u)’ It won't even compile. Trying to write to read-only memory is now protected because we have told the compiler that our pointer is to read-only memory. Of course, it doesn't have to be pointing to read-only memory, but if you point it to read-write memory (RAM) that memory will still be protected from being written to by the compiler. Finally the last form: const char text[] = "This is some text"; . Again, like before with [] it allocates an array in RAM and copies the data into it. However, now this is a read-only array. You can't write to it because the pointer to it is tagged as const . Attempting to write to it results in: error: assignment of read-only location ‘*(text + 2u)’ So, a quick summary of where we are: This form is completely invalid and should be avoided at all costs. 
It opens the door to all sorts of bad things happening: char *text = "This is some text"; This form is the right form if you want to make the data editable: char text[] = "This is some text"; This form is the right form if you want strings that won't be edited: const char *text = "This is some text"; This form seems wasteful of RAM, but it does have its uses. Best forget it for now, though. const char text[] = "This is some text";
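In day-to-day Arduino code, this warning most often shows up when a string literal is passed to a function whose parameter is declared as a plain char*. A minimal sketch of the usual fix, promising via const that the function will not modify the text, could look like this (the function name printLabel is made up for illustration):

// Before: triggers the warning, because a string literal (read-only)
// is passed to a parameter declared as writable.
//   void printLabel(char *label) { Serial.println(label); }

// After: the const parameter accepts string literals without complaint.
void printLabel(const char *label) {
  Serial.println(label);
}

void setup() {
  Serial.begin(9600);
  printLabel("This is some text");  // no warning
}

void loop() { }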
{ "source": [ "https://arduino.stackexchange.com/questions/13429", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/11152/" ] }
38,477
Trying a simple LED blinking program, I could not get the built-in LED on a LoLin NodeMCU v3 working. The LED_BUILTIN constant is set to pin 16 / GPIO16 / D0. From reading several articles and Q&As, I gather that NodeMCU boards are supposed to have their on-board LED on pin 16. However, if I address this port nothing happens. With the same code I can blink the data LED, which is on an RX pin, pin 2. Is the built-in LED missing on the LoLin NodeMCU v3, or could it be that the LED on my board is broken?
The ESP8266 has a built-in LED that is attached to D4 (as labeled on LoLin boards), which maps to GPIO2. One thing to note is that this LED is active low. In other words, setting pin 2 to '0' will turn the LED ON and setting pin 2 to '1' will turn the LED OFF. [Photo: LoLin board built-in LED] This is the only LED on the LoLin boards, which differs from other devkits that have an LED on GPIO16.
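Putting that together, a minimal active-low blink sketch for these boards might look like this; it addresses GPIO2 / D4 directly rather than relying on LED_BUILTIN, whose mapping can differ between board definitions, and the constant name is just illustrative:

const int LOLIN_LED = 2;  // GPIO2 = D4 on LoLin boards; the LED is active low

void setup() {
  pinMode(LOLIN_LED, OUTPUT);
}

void loop() {
  digitalWrite(LOLIN_LED, LOW);   // LOW turns the LED on
  delay(500);
  digitalWrite(LOLIN_LED, HIGH);  // HIGH turns it off
  delay(500);
}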
{ "source": [ "https://arduino.stackexchange.com/questions/38477", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/14441/" ] }
45,347
I use Visual Studio Code to develop for Arduino; the extension uses the files installed by the Arduino IDE and needs that installation to work. This works very well. What is annoying is that verifying a sketch takes longer than in the Arduino IDE. I suspect the following warning is the cause: [Warning] Output path is not specified. Unable to reuse previously compiled files. Verify could be slow. See README. I would like to get rid of the warning. I searched through all the README files in the Arduino installation folder and also searched Google, but haven't found out what it's supposed to mean or how to fix it. Either no README file mentions it, or I overlooked it. Is there documentation on how to fix this anywhere?
Thanks to @Majenko I looked somewhere new: as documented in the VS Code Arduino extension (the arduino plugin), there is an option to set an output directory. Note, though, that according to that documentation it should not be in the workspace or any of its subfolders. So in the arduino.json settings file add: "output": "../ArduinoOutput"
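For context, a complete arduino.json might then look roughly like the following; the board, port and sketch values are placeholders for whatever your own project uses, and only the output entry is the part this answer is about:

{
    "board": "arduino:avr:uno",
    "port": "COM3",
    "sketch": "MySketch.ino",
    "output": "../ArduinoOutput"
}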
{ "source": [ "https://arduino.stackexchange.com/questions/45347", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/32240/" ] }
51,873
I know that Vin can be used to power the board but have also been reading that it can be used as a 5V output. Is it possible to assign Vin as an output as I would any other GPIO? If I want to power an LED from GPIO 12, I would assign GPIO: const int LEDpin_0 = 12; // D6, LED power pin ...and in the setup: pinMode(LEDpin_0, OUTPUT); I can then turn on and off the LED based on whether or not GPIO 12 goes HIGH or LOW. Can I do the same with Vin without resorting to relays or other hardware?
There is confusion about what is and what isn't possible with this board. This is because there are different versions with different power arrangements. NodeMCU 0.9 In this board the USB's 5V and the 5V pin are directly connected together. The combined result is then fed through a diode before entering the 3.3V voltage regulator. With this arrangement the 5V pin will provide the exact same voltage that the USB port feeds the board. However, it is dangerous to connect that pin to any power source - it may kill (or at the very least disable) the USB port in your computer - when the board is also connected to a computer through the USB port. NodeMCU 1.0 and 1.1 On this version the USB's power is first fed through the diode and then to the 5V pin and the 3.3V regulator together. This means that the 5V pin will show about 0.5V below whatever voltage is fed in through USB. This isolates the USB from the 5V pin, so it becomes safe to provide power through the 5V pin whilst at the same time having the board plugged into the computer - at the cost of having a slightly lower output from the 5V pin. Original answer: The VIN pin is directly connected to the USB's 5V supply (at least on the LoLin v3 board). This means the pin can provide 5V, but not as a switchable output: you cannot control that voltage. It is always on, and always 5V (or whatever your USB port happens to provide - 4.75 V to 5.25 V). You must never ever connect VIN to a power source while the board is also connected via the USB socket. That can destroy the USB port in your computer. There is zero back-powering protection on that board. Drawing more than 500 mA from the VIN pin could cause the USB port of your computer to be shut down.
{ "source": [ "https://arduino.stackexchange.com/questions/51873", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/13355/" ] }
69,632
I have a 5 meter strip of 12V digital RGB LEDs. The chip is WS2811 (see photo of IC). There are 50 addresses on the strip. Using the FastLED library , I am able to run the "FirstLight" example that chases full white with a single address at a time, up and down the length. That works fine. So I know they individually perform properly. However, if I set all LEDs to full white (255), the first few addresses look white but then the higher the address, the more red it is. See photo below. The strip starts in the center of the reel. #include <FastLED.h> // How many leds are in the strip? #define NUM_LEDS 50 // Data pin that led data will be written out over #define DATA_PIN 5 // This is an array of leds. One item for each led in your strip. CRGB leds[NUM_LEDS]; // This function sets up the ledsand tells the controller about them void setup() { // sanity check delay - allows reprogramming if accidentally blowing power w/leds delay(2000); Serial.begin(115200); Serial.print("### SETUP ###"); //Both strips are ordered BRG FastLED.addLeds<WS2811, DATA_PIN, BRG>(leds, NUM_LEDS); //FastLED.addLeds<UCS1903, DATA_PIN, BRG>(leds, NUM_LEDS); } void loop() { for (int i = 30; i < NUM_LEDS; i++) { leds[i] = CRGB::White; } FastLED.show(); } To debug, I tried turning on just the last half. The result was still problematic: for (int i = 30; i < NUM_LEDS; i++) { leds[i] = CRGB::White; } I then tried setting all LEDs to white at half brightness, and that looks much better, but the LEDs at the outside are still off-color compared the inside: I've tried setting the whole strip to full blue, and that works fine. I also have another nearly identical LED strip that instead uses the 1903 chip. The same code (initialized for 1903 instead of WS2811) works just fine on the 1903 strip! Other things I've ruled out: I'm using a bench power supply capable of 5A, this strip pulls less than 1.5A on full white. I have verified the supply holds at 12V I have not let the LEDs heat up while coiled in the reel. I make sure to unplug them after observing their color. What could cause this on the WS2811 strip while the 1903 strip works perfectly?
I would suspect that it is a voltage drop in the power rails caused by the current draw. Probably cheap construction with copper tracks that are just too thin and so have too high a resistance. To combat it you will need to inject power into the strip at various points along it. Initially to prove the theory you can try connecting the power to both ends of the strip and see if the "dull" spot is half way along the strip. If it is, then it's certainly a voltage drop. (incidentally, a voltage drop will cause the blue LEDs to fade out first making it go yellow-red. Then the green ones would probably be next, making it go red). Then, once proved, you will need to add extra power connections into the strip at the middle of any dim sections and feed the voltage directly in at those points.
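Injecting power is the real fix, but while testing you can also reduce how much current full white demands from the far end of the strip, for instance with FastLED's global brightness setting (the question already observed that half brightness looks much better). A sketch of the idea, reusing the pin and colour ordering from the question:

#include <FastLED.h>

#define NUM_LEDS 50
#define DATA_PIN 5

CRGB leds[NUM_LEDS];

void setup() {
  FastLED.addLeds<WS2811, DATA_PIN, BRG>(leds, NUM_LEDS);
  FastLED.setBrightness(128);  // roughly halves the current drawn on full white
}

void loop() {
  fill_solid(leds, NUM_LEDS, CRGB::White);
  FastLED.show();
}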
{ "source": [ "https://arduino.stackexchange.com/questions/69632", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/4455/" ] }
323
What are the best astronomy jokes you've ever heard? I'm looking for images; however, you can share whatever you like.
{ "source": [ "https://astronomy.meta.stackexchange.com/questions/323", "https://astronomy.meta.stackexchange.com", "https://astronomy.meta.stackexchange.com/users/10298/" ] }
4
Sunspots, such as this one, appear dark: Why?
Typical sunspots have a dark region (umbra) surrounded by a lighter region, the penumbra. While sunspots have a temperature of about 6300 °F (3482.2 °C), the surface of the Sun that surrounds them has a temperature of 10,000 °F (5537.8 °C). From this NASA resource: Sunspots are actually regions of the solar surface where the magnetic field of the Sun becomes concentrated over 1000-fold. Scientists do not yet know how this happens. Magnetic fields produce pressure, and this pressure can cause gas inside the sunspot to be in balance with the gas outside the sunspot...but at a lower temperature. Sunspots are actually several thousand degrees cooler than the 5,770 K (5496.8 °C) surface of the Sun, and contain gases at temperature of 3000 to 4000 K (2726.9 - 3726.8 °C). They are dark only by contrast with the much hotter solar surface. If you were to put a sunspot in the night sky, it would glow brighter than the Full Moon with a crimson-orange color! Sunspots are areas of intense magnetic activity, as is apparent in this image: You can see the material kind of getting stretched into strands. As for the reason it is cooler than the rest of the surface: Although the details of sunspot generation are still a matter of research, it appears that sunspots are the visible counterparts of magnetic flux tubes in the Sun's convective zone that get "wound up" by differential rotation. If the stress on the tubes reaches a certain limit, they curl up like a rubber band and puncture the Sun's surface. Convection is inhibited at the puncture points; the energy flux from the Sun's interior decreases; and with it the surface temperature. All in all, sunspots appear dark because they are darker than the surrounding surface. They're darker because they are cooler, and they're cooler because of the intense magnetic fields in them.
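To put a rough number on that contrast: treating both regions as blackbodies, the radiated intensity scales with the fourth power of temperature (the Stefan–Boltzmann law), so using the ~4000 K umbra and ~5770 K photosphere figures quoted above, $$\frac{I_{\text{spot}}}{I_{\text{surface}}} \approx \left(\frac{4000\ \text{K}}{5770\ \text{K}}\right)^{4} \approx 0.23.$$ A sunspot therefore emits only about a quarter as much light per unit area as its surroundings: bright in absolute terms, but dark by contrast.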
{ "source": [ "https://astronomy.stackexchange.com/questions/4", "https://astronomy.stackexchange.com", "https://astronomy.stackexchange.com/users/19/" ] }
15
On Venus, there is really inhospitable weather, as well as within the gas giants in our solar system. Are there examples of even more extreme weather on planets found in other solar systems than ours?
I'd say that HD 189733b is a good candidate for the most extreme known weather on another planet (outside our Solar System). According to some recent news accounts , the atmospheric temperature is believed to be over 1000° C, with 7000 kph winds. (For comparison to the data in Rory Alsop's answer, that's about 1900 meters per second.) And it rains molten glass. Sideways. UPDATE : As Guillochon points out in a comment, HD 80606 b likely has even higher winds, though they're not continuous. It's a Jovian with an extremely eccentric orbit. Quoting the Wikipedia article: Computer models predict the planet heats up 555 °C (1,000 °F) in just a matter of hours triggering "shock wave storms" with winds that move faster than the speed of sound, at 3 miles per second. which, in civilized units, is about 4800 meters/second. Probably no molten glass rain, though, so it's not clear that it's more "extreme".
{ "source": [ "https://astronomy.stackexchange.com/questions/15", "https://astronomy.stackexchange.com", "https://astronomy.stackexchange.com/users/36/" ] }
16
Why do we only ever see the same side of the Moon? If this is to do with gravity, are there any variables which mean we might one day see more of it than we have before?
The reason for this is what we call tidal locking : Tidal locking (or captured rotation) occurs when the gravitational gradient makes one side of an astronomical body always face another, an effect known as synchronous rotation. For example, the same side of the Earth's Moon always faces the Earth. A tidally locked body takes just as long to rotate around its own axis as it does to revolve around its partner. This causes one hemisphere constantly to face the partner body. Usually, at any given time only the satellite is tidally locked around the larger body, but if the difference in mass between the two bodies and their physical separation is small, each may be tidally locked to the other, as is the case between Pluto and Charon . This effect is employed to stabilize some artificial satellites. Fig. 1 : Tidal locking results in the Moon rotating about its axis in about the same time it takes to orbit the Earth. (Source: Wikipedia ) Fig. 1, cont. : Except for libration effects, this results in the Moon keeping the same face turned towards the Earth, as seen in the figure on the left. (The Moon is shown in polar view, and is not drawn to scale.) If the Moon were not spinning at all, it would alternately show its near and far sides to the Earth while moving around our planet in orbit, as shown in the figure on the right. Fig. 2 : Lunar librations in latitude and longitude over a period of one month (Source: Wikipedia ) Libration is manifested as a slow rocking back and forth of the Moon as viewed from Earth, permitting an observer to see slightly different halves of the surface at different times. There are three types of lunar libration: Libration in longitude results from the eccentricity of the Moon's orbit around Earth; the Moon's rotation sometimes leads and sometimes lags its orbital position. Libration in latitude results from a slight inclination between the Moon's axis of rotation and the normal to the plane of its orbit around Earth. Its origin is analogous to how the seasons arise from Earth's revolution about the Sun. Diurnal libration is a small daily oscillation due to the Earth's rotation, which carries an observer first to one side and then to the other side of the straight line joining Earth's and the Moon's centers, allowing the observer to look first around one side of the Moon and then around the other—because the observer is on the surface of the Earth, not at its center. All quotes and images from Wikipedia on Tidal locking and Wikipedia on Libration .
{ "source": [ "https://astronomy.stackexchange.com/questions/16", "https://astronomy.stackexchange.com", "https://astronomy.stackexchange.com/users/41/" ] }
24
Black holes have so much gravity that even light can't escape from them. If we can't see them, and they suck up all electromagnetic radiation, then how can we find them?
To add to John Conde's answer: according to the NASA web page "Black Holes", black holes obviously cannot be detected by picking up any form of electromagnetic radiation coming directly from them (hence, they cannot be 'seen'). A black hole's presence is instead inferred by observing its interaction with surrounding matter. From the webpage: We can, however, infer the presence of black holes and study them by detecting their effect on other matter nearby. This also includes detection of X-ray radiation that radiates from matter accelerating towards the black hole. Although this seems contradictory to my first paragraph, it needs to be noted that this radiation does not come directly from the black hole, but rather from its interaction with the matter accelerating towards it.
{ "source": [ "https://astronomy.stackexchange.com/questions/24", "https://astronomy.stackexchange.com", "https://astronomy.stackexchange.com/users/19/" ] }
138
The planets rotate as an after-effect of their creation: the dust clouds that compressed to form them were spinning as they did so, and inertia has kept them rotating ever since. It's fairly easy to prove that planetary bodies are rotating just by watching their features move across their visible disks. This seems less easy for amateur astronomers to discern in the case of the Sun, though. Does the Sun also rotate as a by-product of its creation? What evidence is there to support this? Does the Sun have any discernible features that make it evident it is rotating?
Yes. It does not rotate uniformly, though: different portions have different angular velocities (as a body made of plasma, it can get away with this). Measuring this is, in theory, pretty easy: we just need to track the motion of the sunspots. This isn't as simple as calculating the changes in the relative positions of the sunspots, though, as the Earth is rotating and revolving, which makes the calculations harder. The measurement can be done using the celestial sphere (the field of stars that we see) as a "fixed" reference point and seeing how the Earth and the sunspots move relative to that. Almost everything in the universe rotates/revolves, at least a little bit, because angular momentum is hard to get rid of. It can be transferred from body to body, but for a body to end up with zero angular momentum, it needs to meet another body with equal and opposite angular momentum and collide with it in just the right way. Given that this is pretty rare, all celestial bodies rotate. In addition to that, a non-rotating body that is revolving will eventually start spinning due to tidal forces.
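As a small worked example of why the Earth's own motion matters: a sunspot near the solar equator takes roughly 25 days to go once around the Sun relative to the fixed stars (the sidereal period), but because the Earth is simultaneously revolving around the Sun in the same direction, the spot appears to take about 27 days to return to the same apparent position as seen from Earth (the synodic period). The two are related by $$\frac{1}{P_{\text{synodic}}} = \frac{1}{P_{\text{sidereal}}} - \frac{1}{P_{\text{Earth orbit}}} \approx \frac{1}{25.4\ \text{d}} - \frac{1}{365.25\ \text{d}} \;\Rightarrow\; P_{\text{synodic}} \approx 27\ \text{d},$$ which is the kind of correction one has to make when measuring the rotation against the celestial sphere.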
{ "source": [ "https://astronomy.stackexchange.com/questions/138", "https://astronomy.stackexchange.com", "https://astronomy.stackexchange.com/users/-1/" ] }
270
Each new star we find is generally considered to be part of the constellation it is nearest to. Our Sun is obviously a star, just much closer. Is our Sun part of any constellation? If so, which constellation is it a part of?
Constellations are human constructs to make sense of the night sky. When you are trying to find your way around, it helps to " chunk " stars into patterns and assign those groupings names. When I want to point out a particular object in the sky (say Polaris, the North Star), I start by pointing out a familiar constellation (say Ursa Major , the Big Dipper). From there, I can tell my friend to follow this or that line to get them to look where I'm looking: With the advent of computerized telescopes and large data sets, constellations are less important for professional astronomers. However, many stellar databases use Flamsteed or Bayer designations, which assign stars to constellations. In order to include all stars, the sky is divided into irregular regions that encompass the familiar constellations. So, which constellations is the Sun assigned to? Well, from the perspective of someone on the Earth, the Sun moves through the constellations throughout the course of the year. Or rather, Sol moves through the region of the sky where some of the constellations would be seen if its light did not drown out distant stars. Our moon and the rest of the planets move through those same constellations. (The Greek phrase which gives us the word "planet" means "wandering star".) The current position of the sun against the background of distant stars changes over the course of the year. (This is important for astrology .) It's a little easier to make sense of with a diagram: So perhaps a better question is: What constellation does the Sun belong to today ? Presumably an observer on an exoplanet would assign Sol to some constellation that is convenient from her perspective. But from our perspective within the Solar system our sun, moon, and planets are not part of any constellation.
{ "source": [ "https://astronomy.stackexchange.com/questions/270", "https://astronomy.stackexchange.com", "https://astronomy.stackexchange.com/users/-1/" ] }