Dataset columns: question (string, 11 to 28.2k chars) · answer (string, 26 to 27.7k chars) · tag (string, 130 classes) · question_id (int64, 935 to 78.4M) · score (int64, 10 to 5.49k)
I would like to create a vector in which each element is the i+6th element of another vector. For example, in a vector of length 120 I want to create another vector of length 20 in which each element is value i, i+6, i+12, i+18... of the initial vector, i.e. I want to extract every 6th element of the original.
a <- 1:120
b <- a[seq(1, length(a), 6)]
Vector
5,237,557
162
I want to create a vector out of a row of a data frame, but without the row and column names. I tried several things... but had no luck. This is my data frame:

> df <- data.frame(a=c(1,2,4,2), b=c(2,6,2,1), c=c(2.6,8.2,7.5,3))
> df
  a b   c
1 1 2 2.6
2 2 6 8.2
3 4 2 7.5
4 2 1 3.0

I tried:

> newV <- as.vector(df[1,])
> newV
  a b   c
1 1 2 2.6

But I really want something looking like this:

> newV <- c(1, 2, 2.6)
> newV
[1] 1.0 2.0 2.6
When you extract a single row from a data frame you get a one-row data frame. Convert it to a numeric vector:

as.numeric(df[1,])

As @Roland suggests, unlist(df[1,]) will convert the one-row data frame to a numeric vector without dropping the names. Therefore unname(unlist(df[1,])) is another, slightly more explicit way to get to the same result.

As @Josh comments below, if you have a not-completely-numeric (alphabetic, factor, mixed ...) data frame, you need as.character(df[1,]) instead.
Vector
14,484,728
162
I have an array of values that is passed to my function from a different part of the program that I need to store for later processing. Since I don't know how many times my function will be called before it is time to process the data, I need a dynamic storage structure, so I chose a std::vector. I don't want to have to do the standard loop to push_back all the values individually, it would be nice if I could just copy it all using something similar to memcpy.
There have been many answers here and just about all of them will get the job done. However there is some misleading advice! Here are the options:

vector<int> dataVec;

int dataArray[] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
unsigned dataArraySize = sizeof(dataArray) / sizeof(int);

// Method 1: Copy the array to the vector using back_inserter.
{
    copy(&dataArray[0], &dataArray[dataArraySize], back_inserter(dataVec));
}

// Method 2: Same as 1 but pre-extend the vector by the size of the array using reserve
{
    dataVec.reserve(dataVec.size() + dataArraySize);
    copy(&dataArray[0], &dataArray[dataArraySize], back_inserter(dataVec));
}

// Method 3: Memcpy
{
    dataVec.resize(dataVec.size() + dataArraySize);
    memcpy(&dataVec[dataVec.size() - dataArraySize], &dataArray[0],
           dataArraySize * sizeof(int));
}

// Method 4: vector::insert
{
    dataVec.insert(dataVec.end(), &dataArray[0], &dataArray[dataArraySize]);
}

// Method 5: vector + vector
{
    vector<int> dataVec2(&dataArray[0], &dataArray[dataArraySize]);
    dataVec.insert(dataVec.end(), dataVec2.begin(), dataVec2.end());
}

To cut a long story short, Method 4, using vector::insert, is the best for bsruth's scenario. Here are some gory details:

Method 1 is probably the easiest to understand. Just copy each element from the array and push it into the back of the vector. Alas, it's slow. Because there's a loop (implied with the copy function), each element must be treated individually; no performance improvements can be made based on the fact that we know the array and vectors are contiguous blocks.

Method 2 is a suggested performance improvement to Method 1; just pre-reserve the size of the array before adding it. For large arrays this might help. However the best advice here is never to use reserve unless profiling suggests you may be able to get an improvement (or you need to ensure your iterators are not going to be invalidated). Bjarne agrees. Incidentally, I found that this method performed the slowest most of the time, though I'm struggling to comprehensively explain why it was regularly significantly slower than Method 1...

Method 3 is the old school solution - throw some C at the problem! Works fine and fast for POD types. In this case resize is required to be called since memcpy works outside the bounds of vector and there is no way to tell a vector that its size has changed. Apart from being an ugly solution (byte copying!) remember that this can only be used for POD types. I would never use this solution.

Method 4 is the best way to go. Its meaning is clear, it's (usually) the fastest and it works for any objects. There is no downside to using this method for this application.

Method 5 is a tweak on Method 4 - copy the array into a vector and then append it. Good option - generally fast-ish and clear.

Finally, you are aware that you can use vectors in place of arrays, right? Even when a function expects c-style arrays you can use vectors:

vector<char> v(50); // Ensure there's enough space
strcpy(&v[0], "prefer vectors to c arrays");
Vector
259,297
159
I cannot find within the documentation of Vec<T> how to retrieve a slice from a specified range. Is there something like this in the standard library:

let a = vec![1, 2, 3, 4];
let suba = a.subvector(0, 2); // Contains [1, 2];
The documentation for Vec covers this in the section titled "slicing". You can create a slice of a Vec or array by indexing it with a Range (or RangeInclusive, RangeFrom, RangeTo, RangeToInclusive, or RangeFull), for example:

fn main() {
    let a = vec![1, 2, 3, 4, 5];

    // With a start and an end
    println!("{:?}", &a[1..4]);

    // With a start and an end, inclusive
    println!("{:?}", &a[1..=3]);

    // With just a start
    println!("{:?}", &a[2..]);

    // With just an end
    println!("{:?}", &a[..3]);

    // With just an end, inclusive
    println!("{:?}", &a[..=2]);

    // All elements
    println!("{:?}", &a[..]);
}
Vector
39,785,597
157
My question is simple: are std::vector elements guaranteed to be contiguous? In other words, can I use the pointer to the first element of a std::vector as a C-array? If my memory serves me well, the C++ standard did not make such a guarantee. However, the std::vector requirements were such that it was virtually impossible to meet them if the elements were not contiguous. Can somebody clarify this? Example:

std::vector<int> values;
// ... fill up values

if( !values.empty() )
{
    int *array = &values[0];
    for( int i = 0; i < values.size(); ++i )
    {
        int v = array[i];
        // do something with 'v'
    }
}
This was missed from the C++98 standard proper but later added as part of a TR. The forthcoming C++0x standard will of course contain this as a requirement. From n2798 (draft of C++0x):

23.2.6 Class template vector [vector]

1 A vector is a sequence container that supports random access iterators. In addition, it supports (amortized) constant time insert and erase operations at the end; insert and erase in the middle take linear time. Storage management is handled automatically, though hints can be given to improve efficiency. The elements of a vector are stored contiguously, meaning that if v is a vector<T, Allocator> where T is some type other than bool, then it obeys the identity &v[n] == &v[0] + n for all 0 <= n < v.size().
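As a quick illustration of what the guarantee buys you (a minimal sketch; data() is the C++11 spelling, &v[0] works as in the question's example):

#include <cstddef>
#include <cstdio>
#include <vector>

// A C-style API expecting a pointer and a length.
void c_api(const int* arr, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) std::printf("%d ", arr[i]);
}

int main() {
    std::vector<int> v{1, 2, 3};
    if (!v.empty())
        c_api(v.data(), v.size()); // or &v[0] pre-C++11
}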
Vector
849,168
142
I'm coding in C++. If I have some function void foo(vector<int> test) and I call it in my program, will the vector be passed by value or reference? I'm unsure because I know vectors and arrays are similar and that a function like void bar(int test[]) would pass test in by reference (pointer?) instead of by value. My guess is that I would need to pass the vector by pointer/reference explicitly if I wanted to avoid passing by value but I'm not sure.
In C++, things are passed by value unless you specify otherwise using the &-operator (note that this operator is also used as the 'address-of' operator, but in a different context). This is all well documented, but I'll re-iterate anyway:

void foo(vector<int> bar);        // by value
void foo(vector<int> &bar);       // by reference (non-const, so modifiable inside foo)
void foo(vector<int> const &bar); // by const-reference

You can also choose to pass a pointer to a vector (void foo(vector<int> *bar)), but unless you know what you're doing and you feel that this really is the way to go, don't do this.

Also, vectors are not the same as arrays! Internally, the vector keeps track of an array of which it handles the memory management for you, but so do many other STL containers. You can't pass a vector to a function expecting a pointer or array or vice versa (you can get access to (a pointer to) the underlying array and use this though). Vectors are classes offering a lot of functionality through their member functions, whereas pointers and arrays are built-in types. Also, vectors are dynamically allocated (which means that the size may be determined and changed at runtime) whereas the C-style arrays are statically allocated (their size is constant and must be known at compile-time), limiting their use.

I suggest you read some more about C++ in general (specifically array decay), and then have a look at the following program which illustrates the difference between arrays and pointers:

void foo1(int *arr)      { cout << sizeof(arr) << '\n'; }
void foo2(int arr[])     { cout << sizeof(arr) << '\n'; }
void foo3(int arr[10])   { cout << sizeof(arr) << '\n'; }
void foo4(int (&arr)[10]){ cout << sizeof(arr) << '\n'; }

int main()
{
    int arr[10] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    foo1(arr);
    foo2(arr);
    foo3(arr);
    foo4(arr);
}
Vector
26,647,152
141
What are the differences between an array and a vector in C++? An example of the differences might be included libraries, symbolism, abilities, etc.

Array

Arrays contain a specific number of elements of a particular type. So that the compiler can reserve the required amount of space when the program is compiled, you must specify the type and number of elements that the array will contain when it is defined. The compiler must be able to determine this value when the program is compiled. Once an array has been defined, you use the identifier for the array along with an index to access specific elements of the array. [...] arrays are zero-indexed; that is, the first element is at index 0. This indexing scheme is indicative of the close relationship in C++ between pointers and arrays and the rules that the language defines for pointer arithmetic. — C++ Pocket Reference

Vector

A vector is a dynamically-sized sequence of objects that provides array-style operator[] random access. The member function push_back copies its argument via the copy constructor, adds that copy as the last item in the vector, and increments its size by one. pop_back does the exact opposite, by removing the last element. Inserting or deleting items from the end of a vector takes amortized constant time, and inserting or deleting from any other location takes linear time. These are the basics of vectors. There is a lot more to them. In most cases, a vector should be your first choice over a C-style array. First of all, they are dynamically sized, which means they can grow as needed. You don't have to do all sorts of research to figure out an optimal static size, as in the case of C arrays; a vector grows as needed, and it can be resized larger or smaller manually if you need to. Second, vectors offer bounds checking with the at member function (but not with operator[]), so that you can do something if you reference a nonexistent index instead of simply watching your program crash or, worse, continuing execution with corrupt data. — C++ Cookbook
arrays:

- are a builtin language construct;
- come almost unmodified from C89;
- provide just a contiguous, indexable sequence of elements; no bells and whistles;
- are of fixed size; you can't resize an array in C++ (unless it's an array of POD and it's allocated with malloc);
- their size must be a compile-time constant unless they are allocated dynamically;
- they take their storage space depending on the scope where you declare them;
- if dynamically allocated, you must explicitly deallocate them;
- if they are dynamically allocated, you just get a pointer, and you can't determine their size; otherwise, you can use sizeof (hence the common idiom sizeof(arr)/sizeof(*arr), which however fails silently when used inadvertently on a pointer);
- automatically decay to pointers in most situations; in particular, this happens when passing them to a function, which usually requires passing a separate parameter for their size;
- can't be returned from a function;
- (unless it is std::array) can't be copied/assigned directly;
- dynamic arrays of objects require a default constructor, since all their elements must be constructed first;

std::vector:

- is a template class;
- is a C++-only construct;
- is implemented as a dynamic array;
- grows and shrinks dynamically;
- automatically manages its memory, which is freed on destruction;
- can be passed to/returned from functions (by value);
- can be copied/assigned (this performs a deep copy of all the stored elements);
- doesn't decay to a pointer, but you can explicitly get a pointer to its data (&vec[0] is guaranteed to work as expected);
- always brings along with the internal dynamic array its size (how many elements are currently stored) and capacity (how many elements can be stored in the currently allocated block);
- the internal dynamic array is not allocated inside the object itself (which just contains a few "bookkeeping" fields), but is allocated dynamically by the allocator specified in the relevant template parameter; the default one gets the memory from the freestore (the so-called heap), independently from where the actual object is allocated; for this reason, it may be less efficient than "regular" arrays for small, short-lived, local arrays;
- when reallocating, the objects are copied (moved, in C++11);
- does not require a default constructor for the objects being stored;
- is better integrated with the rest of the so-called STL (it provides the begin()/end() methods, the usual STL typedefs, ...).

Also consider the "modern alternative" to arrays - std::array; I already described in another answer the difference between std::vector and std::array, you may want to have a look at it.
Vector
15,079,057
140
The goal is to access the "nth" element of a vector of strings using iterators, instead of the [] operator or the "at" method. From what I understand, iterators can be used to navigate through containers, but I've never used iterators before, and what I'm reading is confusing. If anyone could give me some information on how to achieve this, I would appreciate it. Thank you.
You need to make use of the begin and end methods of the vector class, which return iterators referring to the first element and one past the last element respectively.

using namespace std;

vector<string> myvector; // a vector of strings.

// push some strings into the vector.
myvector.push_back("a");
myvector.push_back("b");
myvector.push_back("c");
myvector.push_back("d");

vector<string>::iterator it; // declare an iterator to a vector of strings
int n = 3; // nth element to be found.
int i = 0; // counter.

// now start from the beginning
// and keep iterating over the elements till you find
// the nth element... or reach the end of the vector.
for(it = myvector.begin(); it != myvector.end(); it++, i++)
{
    // found nth element.. print and break.
    if(i == n)
    {
        cout << *it << endl; // prints d.
        break;
    }
}

// other easier ways of doing the same.
// using operator[]
cout << myvector[n] << endl; // prints d.

// using the at method
cout << myvector.at(n) << endl; // prints d.
Vector
2,395,275
137
vector<int> v;
v.push_back(1);
v.push_back(v[0]);

If the second push_back causes a reallocation, the reference to the first integer in the vector will no longer be valid. So this isn't safe?

vector<int> v;
v.push_back(1);
v.reserve(v.size() + 1);
v.push_back(v[0]);

This makes it safe?
It looks like http://www.open-std.org/jtc1/sc22/wg21/docs/lwg-closed.html#526 addressed this problem (or something very similar to it) as a potential defect in the standard:

1) Parameters taken by const reference can be changed during execution of the function. Examples: Given std::vector v:

v.insert(v.begin(), v[2]);

v[2] can be changed by moving elements of vector.

The proposed resolution was that this was not a defect: vector::insert(iter, value) is required to work because the standard doesn't give permission for it not to work.
Vector
18,788,780
137
I have two vectors u and v. Is there a way of finding a quaternion representing the rotation from u to v?
Quaternion q;
vector a = crossproduct(v1, v2);
q.xyz = a;
q.w = sqrt((v1.Length ^ 2) * (v2.Length ^ 2)) + dotproduct(v1, v2);

Don't forget to normalize q. Richard is right about there not being a unique rotation, but the above should give the "shortest arc," which is probably what you need.
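For concreteness, here is one way the pseudocode might translate into real code; a minimal sketch where Vec3 and Quat are hypothetical stand-ins for your math types, and sqrt(|v1|² · |v2|²) is computed exactly as written above. Note that the antiparallel case (normalization denominator near zero) still needs the special handling Richard's point implies:

#include <cmath>

struct Vec3 { float x, y, z; };
struct Quat { float w, x, y, z; };

Quat shortest_arc(const Vec3& a, const Vec3& b) {
    Vec3 c{ a.y*b.z - a.z*b.y,   // cross product a x b
            a.z*b.x - a.x*b.z,
            a.x*b.y - a.y*b.x };
    float dot = a.x*b.x + a.y*b.y + a.z*b.z;
    float lenAB = std::sqrt((a.x*a.x + a.y*a.y + a.z*a.z) *
                            (b.x*b.x + b.y*b.y + b.z*b.z)); // = |a| * |b|
    Quat q{ lenAB + dot, c.x, c.y, c.z };
    float n = std::sqrt(q.w*q.w + q.x*q.x + q.y*q.y + q.z*q.z);
    return Quat{ q.w/n, q.x/n, q.y/n, q.z/n }; // the "don't forget to normalize" step
}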
Vector
1,171,849
136
I'm trying to test whether all elements of a vector are equal to one another. The solutions I have come up with seem somewhat roundabout, both involving checking length().

x <- c(1, 2, 3, 4, 5, 6, 1) # FALSE
y <- rep(2, times = 7)      # TRUE

With unique():

length(unique(x)) == 1
length(unique(y)) == 1

With rle():

length(rle(x)$values) == 1
length(rle(y)$values) == 1

A solution that would let me include a tolerance value for assessing 'equality' among elements would be ideal to avoid FAQ 7.31 issues. Is there a built-in function for the type of test that I have completely overlooked? identical() and all.equal() compare two R objects, so they won't work here.

Edit 1

Here are some benchmarking results. Using the code:

library(rbenchmark)

John <- function() all( abs(x - mean(x)) < .Machine$double.eps ^ 0.5 )
DWin <- function() {diff(range(x)) < .Machine$double.eps ^ 0.5}
zero_range <- function() {
  if (length(x) == 1) return(TRUE)
  x <- range(x) / mean(x)
  isTRUE(all.equal(x[1], x[2], tolerance = .Machine$double.eps ^ 0.5))
}

x <- runif(500000)

benchmark(John(), DWin(), zero_range(),
          columns = c("test", "replications", "elapsed", "relative"),
          order = "relative", replications = 10000)

With the results:

          test replications elapsed relative
2       DWin()        10000 109.415 1.000000
3 zero_range()        10000 126.912 1.159914
1       John()        10000 208.463 1.905251

So it looks like diff(range(x)) < .Machine$double.eps ^ 0.5 is fastest.
Why not simply use the variance:

var(x) == 0

If all the elements of x are equal, you will get a variance of 0. This works only for doubles and integers though.

Edit based on the comments below: A more generic option would be to check for the length of unique elements in the vector, which must be 1 in this case. This has the advantage that it works with all classes, beyond just double and integer from which variance can be calculated:

length(unique(x)) == 1
Vector
4,752,275
130
What is a good clean way to convert a std::vector<int> intVec to std::vector<double> doubleVec. Or, more generally, to convert two vectors of convertible types?
Use std::vector's range constructor:

std::vector<int> intVec;
std::vector<double> doubleVec(intVec.begin(), intVec.end());
Vector
6,399,090
125
I want to clear an element from a vector using the erase method. But the problem here is that the element is not guaranteed to occur only once in the vector. It may be present multiple times and I need to clear all of them. My code is something like this:

void erase(std::vector<int>& myNumbers_in, int number_in)
{
    std::vector<int>::iterator iter = myNumbers_in.begin();
    std::vector<int>::iterator endIter = myNumbers_in.end();
    for(; iter != endIter; ++iter)
    {
        if(*iter == number_in)
        {
            myNumbers_in.erase(iter);
        }
    }
}

int main(int argc, char* argv[])
{
    std::vector<int> myNmbers;
    for(int i = 0; i < 2; ++i)
    {
        myNmbers.push_back(i);
        myNmbers.push_back(i);
    }

    erase(myNmbers, 1);

    return 0;
}

This code obviously crashes because I am changing the end of the vector while iterating through it. What is the best way to achieve this? I.e. is there any way to do this without iterating through the vector multiple times or creating one more copy of the vector?
Since C++20, there are freestanding std::erase and std::erase_if functions that work on containers and simplify things considerably:

std::erase(myNumbers, number_in);
// or
std::erase_if(myNumbers, [&](int x) { return x == number_in; });

Prior to C++20, use the erase-remove idiom:

std::vector<int>& vec = myNumbers; // use shorter name
vec.erase(std::remove(vec.begin(), vec.end(), number_in), vec.end());
// or
vec.erase(std::remove_if(vec.begin(), vec.end(),
                         [&](int x) { return x == number_in; }),
          vec.end());

What happens is that std::remove compacts the elements that differ from the value to be removed (number_in) in the beginning of the vector and returns the iterator to the first element after that range. Then erase removes these elements (whose value is unspecified).
Vector
347,441
124
I need to determine the angle(s) between two n-dimensional vectors in Python. For example, the input can be two lists like the following: [1,2,3,4] and [6,7,8,9].
Note: all of the other answers here will fail if the two vectors have either the same direction (ex, (1, 0, 0), (1, 0, 0)) or opposite directions (ex, (-1, 0, 0), (1, 0, 0)). Here is a function which will correctly handle these cases:

import numpy as np

def unit_vector(vector):
    """ Returns the unit vector of the vector. """
    return vector / np.linalg.norm(vector)

def angle_between(v1, v2):
    """ Returns the angle in radians between vectors 'v1' and 'v2'::

        >>> angle_between((1, 0, 0), (0, 1, 0))
        1.5707963267948966
        >>> angle_between((1, 0, 0), (1, 0, 0))
        0.0
        >>> angle_between((1, 0, 0), (-1, 0, 0))
        3.141592653589793
    """
    v1_u = unit_vector(v1)
    v2_u = unit_vector(v2)
    return np.arccos(np.clip(np.dot(v1_u, v2_u), -1.0, 1.0))
Vector
2,827,393
124
I have a vector of IInventory*, and I am looping through the list using C++11 range for, to do stuff with each one. After doing some stuff with one, I may want to remove it from the list and delete the object. I know I can call delete on the pointer any time to clean it up, but what is the proper way to remove it from the vector, while in the range for loop? And if I remove it from the list will my loop be invalidated?

std::vector<IInventory*> inv;
inv.push_back(new Foo());
inv.push_back(new Bar());

for (IInventory* index : inv)
{
    // Do some stuff
    // OK, I decided I need to remove this object from 'inv'...
}
No, you can't. Range-based for is for when you need to access each element of a container once. You should use the normal for loop or one of its cousins if you need to modify the container as you go along, access an element more than once, or otherwise iterate in a non-linear fashion through the container. For example:

auto i = std::begin(inv);
while (i != std::end(inv)) {
    // Do some stuff
    if (blah)
        i = inv.erase(i);
    else
        ++i;
}
Vector
10,360,461
123
What is the difference between std::array and std::vector? When do you use one over the other? I have always used and considered std::vector as a C++ way of using C arrays, so what is the difference?
std::array is just a class version of the classic C array. That means its size is fixed at compile time and it will be allocated as a single chunk (e.g. taking space on the stack). The advantage it has is slightly better performance because there is no indirection between the object and the arrayed data.

std::vector is a small class containing pointers into the heap. (So when you allocate a std::vector, it always calls new.) They are slightly slower to access because those pointers have to be chased to get to the arrayed data... But in exchange for that, they can be resized and they only take a trivial amount of stack space no matter how large they are.

[edit]

As for when to use one over the other, honestly std::vector is almost always what you want. Creating large objects on the stack is generally frowned upon, and the extra level of indirection is usually irrelevant. (For example, if you iterate through all of the elements, the extra memory access only happens once at the start of the loop.)

The vector's elements are guaranteed to be contiguous, so you can pass &vec[0] to any function expecting a pointer to an array; e.g., C library routines. (As an aside, std::vector<char> buf(8192); is a great way to allocate a local buffer for calls to read/write or similar without directly invoking new.)

That said, the lack of that extra level of indirection, plus the compile-time constant size, can make std::array significantly faster for a very small array that gets created/destroyed/accessed a lot.

So my advice would be: use std::vector unless (a) your profiler tells you that you have a problem and (b) the array is tiny.
Vector
6,632,971
121
How can I list the distinct values in a vector where some values are repeated? I mean, similarly to the following SQL statement:

SELECT DISTINCT product_code FROM data
Do you mean unique?

R> x = c(1,1,2,3,4,4,4)
R> x
[1] 1 1 2 3 4 4 4
R> unique(x)
[1] 1 2 3 4
Vector
7,755,240
121
What is the capacity() of an std::vector which is created using the default constructor? I know that the size() is zero. Can we state that a default constructed vector does not call heap memory allocation? This way it would be possible to create an array with an arbitrary reserve using a single allocation, like std::vector<int> iv; iv.reserve(2345);. Let's say that for some reason, I do not want to start the size() at 2345. For example, on Linux (g++ 4.4.5, kernel 2.6.32 amd64):

#include <iostream>
#include <vector>

int main()
{
    using namespace std;
    cout << vector<int>().capacity() << "," << vector<int>(10).capacity() << endl;
    return 0;
}

printed 0,10. Is it a rule, or is it STL vendor dependent?
The standard doesn't specify what the initial capacity of a container should be, so you're relying on the implementation. A common implementation will start the capacity at zero, but there's no guarantee. On the other hand there's no way to better your strategy of std::vector<int> iv; iv.reserve(2345); so stick with it.
Vector
12,271,017
120
Goal: from a list of vectors of equal length, create a matrix where each vector becomes a row.

Example:

> a <- list()
> for (i in 1:10) a[[i]] <- c(i, 1:5)
> a
[[1]]
[1] 1 1 2 3 4 5

[[2]]
[1] 2 1 2 3 4 5

[[3]]
[1] 3 1 2 3 4 5

[[4]]
[1] 4 1 2 3 4 5

[[5]]
[1] 5 1 2 3 4 5

[[6]]
[1] 6 1 2 3 4 5

[[7]]
[1] 7 1 2 3 4 5

[[8]]
[1] 8 1 2 3 4 5

[[9]]
[1] 9 1 2 3 4 5

[[10]]
[1] 10 1 2 3 4 5

I want:

      [,1] [,2] [,3] [,4] [,5] [,6]
 [1,]    1    1    2    3    4    5
 [2,]    2    1    2    3    4    5
 [3,]    3    1    2    3    4    5
 [4,]    4    1    2    3    4    5
 [5,]    5    1    2    3    4    5
 [6,]    6    1    2    3    4    5
 [7,]    7    1    2    3    4    5
 [8,]    8    1    2    3    4    5
 [9,]    9    1    2    3    4    5
[10,]   10    1    2    3    4    5
One option is to use do.call():

> do.call(rbind, a)
      [,1] [,2] [,3] [,4] [,5] [,6]
 [1,]    1    1    2    3    4    5
 [2,]    2    1    2    3    4    5
 [3,]    3    1    2    3    4    5
 [4,]    4    1    2    3    4    5
 [5,]    5    1    2    3    4    5
 [6,]    6    1    2    3    4    5
 [7,]    7    1    2    3    4    5
 [8,]    8    1    2    3    4    5
 [9,]    9    1    2    3    4    5
[10,]   10    1    2    3    4    5
Vector
1,329,940
119
I am looking for just the value of the B1 (newx) linear model coefficient, not the name. I just want the 0.5 value. I do not want the name "newx".

newx <- c(0.5, 1.5, 2.5)
newy <- c(2, 3, 4)
out <- lm(newy ~ newx)

out looks like:

Call:
lm(formula = newy ~ newx)

Coefficients:
(Intercept)         newx
        1.5          1.0

I arrived here, but now I am stuck:

out$coefficients["newx"]
newx
 1.0
For a single element like this, use [[ rather than [. Compare:

coefficients(out)["newx"]
# newx
#    1
coefficients(out)[["newx"]]
# [1] 1

More generally, use unname():

unname(coefficients(out)[c("newx", "(Intercept)")])
# [1] 1.0 1.5

head(unname(mtcars))
#                     NA NA  NA  NA   NA    NA    NA NA NA NA NA
# Mazda RX4         21.0  6 160 110 3.90 2.620 16.46  0  1  4  4
# Mazda RX4 Wag     21.0  6 160 110 3.90 2.875 17.02  0  1  4  4
# Datsun 710        22.8  4 108  93 3.85 2.320 18.61  1  1  4  1
# Hornet 4 Drive    21.4  6 258 110 3.08 3.215 19.44  1  0  3  1
# Hornet Sportabout 18.7  8 360 175 3.15 3.440 17.02  0  0  3  2
# Valiant           18.1  6 225 105 2.76 3.460 20.22  1  0  3  1
# etc.
Vector
15,736,719
119
From Wikipedia: the cross product is a binary operation on two vectors in a three-dimensional Euclidean space that results in another vector which is perpendicular to the plane containing the two input vectors. Given that the definition is only defined in three (or seven, one and zero) dimensions, how does one calculate the cross product of two 2d vectors? I have seen two implementations. One returns a new vector (but only accepts a single vector), the other returns a scalar (but is a calculation between two vectors).

Implementation 1 (returns a scalar):

float CrossProduct(const Vector2D & v1, const Vector2D & v2) const
{
    return (v1.X*v2.Y) - (v1.Y*v2.X);
}

Implementation 2 (returns a vector):

Vector2D CrossProduct(const Vector2D & v) const
{
    return Vector2D(v.Y, -v.X);
}

Why the varying implementations? What would I use the scalar implementation for? What would I use the vector implementation for? The reason I ask is because I'm writing a Vector2D class myself and don't know which method to use.
Implementation 1 returns the magnitude of the vector that would result from a regular 3D cross product of the input vectors, taking their Z values implicitly as 0 (i.e. treating the 2D space as a plane in the 3D space). The 3D cross product will be perpendicular to that plane, and thus have 0 X & Y components (thus the scalar returned is the Z value of the 3D cross product vector).

Note that the magnitude of the vector resulting from a 3D cross product is also equal to the area of the parallelogram between the two vectors, which gives Implementation 1 another purpose. In addition, this area is signed and can be used to determine whether rotating from V1 to V2 moves in a counterclockwise or clockwise direction. It should also be noted that Implementation 1 is the determinant of the 2x2 matrix built from these two vectors.

Implementation 2 returns a vector perpendicular to the input vector, still in the same 2D plane. Not a cross product in the classical sense, but consistent in the "give me a perpendicular vector" sense.

Note that 3D Euclidean space is closed under the cross product operation--that is, a cross product of two 3D vectors returns another 3D vector. Both of the above 2D implementations are inconsistent with that in one way or another.
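To make the signed-area/orientation point concrete, here is a minimal sketch (a free function rather than the question's member function):

#include <cstdio>

struct Vector2D { float X, Y; };

// Z component of the implicit 3D cross product; also twice the signed
// triangle area, and the determinant of the 2x2 matrix [v1 v2].
float Cross(const Vector2D& v1, const Vector2D& v2)
{
    return v1.X * v2.Y - v1.Y * v2.X;
}

int main()
{
    Vector2D a{1, 0}, b{0, 1};
    float c = Cross(a, b);
    // c > 0: rotating from a to b is counterclockwise;
    // c < 0: clockwise; c == 0: parallel or antiparallel.
    std::printf("%s\n", c > 0 ? "ccw" : c < 0 ? "cw" : "collinear");
}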
Vector
243,945
117
I am attempting to use vector drawables in my Android app. From http://developer.android.com/training/material/drawables.html (emphasis mine):

In Android 5.0 (API Level 21) and above, you can define vector drawables, which scale without losing definition.

Using this drawable:

<vector xmlns:android="http://schemas.android.com/apk/res/android"
    android:height="24dp"
    android:width="24dp"
    android:viewportWidth="24"
    android:viewportHeight="24">
    <path
        android:fillColor="@color/colorPrimary"
        android:pathData="M14,20A2,2 0 0,1 12,22A2,2 0 0,1 10,20H14M12,2A1,1 0 0,1 13,3V4.08C15.84,4.56 18,7.03 18,10V16L21,19H3L6,16V10C6,7.03 8.16,4.56 11,4.08V3A1,1 0 0,1 12,2Z" />
</vector>

and this ImageView:

<ImageView
    android:layout_width="400dp"
    android:layout_height="400dp"
    android:src="@drawable/icon_bell"/>

produces this blurry image when attempting to display the icon at 400dp (on a largish high-res circa 2015 mobile device running Lollipop). Changing the width and height in the definition of the vector drawable to 200dp significantly improves the situation at the 400dp rendered size. However, setting this as a drawable for a TextView element (i.e. icon to the left of the text) now creates a huge icon.

My questions:

1) Why is there a width/height specification in the vector drawable? I thought the entire point of these is that they scale up and down losslessly, making width and height meaningless in its definition?

2) Is it possible to use a single vector drawable which works as a 24dp drawable on a TextView but scales up well to use as much larger images too? E.g. how do I avoid creating multiple vector drawables of different sizes and instead use one which scales to my rendered requirements?

3) How do I effectively use the width/height attributes and what is the difference with viewportWidth/Height?

Additional details: Device is running API 22. Using Android Studio v1.5.1 with Gradle version 1.5.0. Manifest is compile and target level 23, min level 15. I've also tried moving min level to 21, but this made no difference. Decompiling the APK (with min level set to 21) shows a single XML resource in the drawable folder. No rasterized images are produced.
There is new info about this issue here: https://code.google.com/p/android/issues/detail?id=202019

It looks like using android:scaleType="fitXY" will make it scale correctly on Lollipop.

From a Google engineer:

Hi, let me know if scaleType='fitXY' can be a workaround for you, in order to get the image to look sharp. The Marshmallow vs Lollipop difference is due to a special scaling treatment added in Marshmallow.

Also, for your comments: "Correct behavior: The vector drawable should scale without quality loss. So if we want to use the same asset in 3 different sizes in our application, we don't have to duplicate vector_drawable.xml 3 times with different hardcoded sizes."

Even though I totally agree this should be the case, in reality, the Android platform has performance concerns such that we have not reached the ideal world yet. So it is actually recommended to use 3 different vector_drawable.xml files for better performance if you are sure you want to draw 3 different sizes on the screen at the same time. The technical detail is basically that we are using a bitmap under the hood to cache the complex path rendering, such that we can get the best redrawing performance, on a par with redrawing a bitmap drawable.
Vector
34,936,590
117
C++ has std::vector and Java has ArrayList, and many other languages have their own form of dynamically allocated array. When a dynamic array runs out of space, it gets reallocated into a larger area and the old values are copied into the new array. A question central to the performance of such an array is how fast the array grows in size. If you always only grow it large enough to fit the current push, you'll end up reallocating every time. So it makes sense to double the array size, or multiply it by say 1.5. Is there an ideal growth factor? 2x? 1.5x? By ideal I mean mathematically justified, best balancing performance and wasted memory. I realize that theoretically, given that your application could have any potential distribution of pushes that this is somewhat application dependent. But I'm curious to know if there's a value that's "usually" best, or is considered best within some rigorous constraint. I've heard there's a paper on this somewhere, but I've been unable to find it.
I remember reading many years ago why 1.5 is preferred over two, at least as applied to C++ (this probably doesn't apply to managed languages, where the runtime system can relocate objects at will). The reasoning is this:

- Say you start with a 16-byte allocation.
- When you need more, you allocate 32 bytes, then free up 16 bytes. This leaves a 16-byte hole in memory.
- When you need more, you allocate 64 bytes, freeing up the 32 bytes. This leaves a 48-byte hole (if the 16 and 32 were adjacent).
- When you need more, you allocate 128 bytes, freeing up the 64 bytes. This leaves a 112-byte hole (assuming all previous allocations are adjacent).
- And so on and so forth.

The idea is that, with a 2x expansion, there is no point in time that the resulting hole is ever going to be large enough to reuse for the next allocation. Using a 1.5x allocation, we have this instead:

- Start with 16 bytes.
- When you need more, allocate 24 bytes, then free up the 16, leaving a 16-byte hole.
- When you need more, allocate 36 bytes, then free up the 24, leaving a 40-byte hole.
- When you need more, allocate 54 bytes, then free up the 36, leaving a 76-byte hole.
- When you need more, allocate 81 bytes, then free up the 54, leaving a 130-byte hole.
- When you need more, use 122 bytes (rounding up) from the 130-byte hole.
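This argument is easy to check with a quick simulation; a minimal sketch, where the 16-byte starting size and the ceil rounding are assumptions in the spirit of the numbers above:

#include <cmath>
#include <cstdio>

// Simulate repeated reallocation: track the cumulative hole left by freed
// blocks and report when the next request first fits inside that hole.
void simulate(double factor, int steps) {
    double size = 16.0, hole = 0.0;
    for (int i = 0; i < steps; ++i) {
        double next = std::ceil(size * factor);
        if (next <= hole) {
            std::printf("factor %.1f: request %.0f fits the %.0f-byte hole\n",
                        factor, next, hole);
            return;
        }
        hole += size; // the old block is freed and joins the hole
        size = next;
    }
    std::printf("factor %.1f: never fits in %d steps\n", factor, steps);
}

int main() {
    simulate(1.5, 20); // fits after a few steps, as in the list above
    simulate(2.0, 20); // never fits: the hole is always size - 16 < 2 * size
}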
Vector
1,100,311
115
I have a named character vector returned from xmlAttrs like this:

testVect <- structure(c("11.2.0.3.0", "12.89", "12.71"),
                      .Names = c("db_version", "elapsed_time", "cpu_time"))

I would like to convert it to a data frame that looks like this:

testDF <- data.frame("db_version"="11.2.0.3.0", "elapsed_time"=12.89, "cpu_time"=12.71)
head(testDF)
  db_version elapsed_time cpu_time
1 11.2.0.3.0        12.89    12.71
It's as simple as data.frame(as.list(testVect)). Or if you want sensible data types for your columns, data.frame(lapply(testVect, type.convert), stringsAsFactors=FALSE).
Vector
16,816,032
115
Please consider this code. I have seen this type of code several times. words is a local vector. How is it possible to return it from a function? Can we guarantee it will not die?

std::vector<std::string> read_file(const std::string& path)
{
    std::ifstream file("E:\\names.txt");
    if (!file.is_open())
    {
        std::cerr << "Unable to open file" << "\n";
        std::exit(-1);
    }

    std::vector<std::string> words; // this vector will be returned
    std::string token;
    while (std::getline(file, token, ','))
    {
        words.push_back(token);
    }
    return words;
}
Pre-C++11: The function will not return the local variable, but rather a copy of it. Your compiler might however perform an optimization where no actual copy action is made. See this question & answer for further details.

C++11: The function will move the value. See this answer for further details.
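A small way to observe this is to instrument a type and watch what actually happens on return; a minimal sketch, where Tracer is just a probe and not anything from the question:

#include <cstdio>
#include <vector>

struct Tracer {
    Tracer() { std::puts("construct"); }
    Tracer(const Tracer&) { std::puts("copy"); }
    Tracer(Tracer&&) noexcept { std::puts("move"); }
};

std::vector<Tracer> make() {
    std::vector<Tracer> v;
    v.emplace_back();   // prints "construct"
    return v;           // NRVO usually elides even the vector move; otherwise the
                        // vector is moved (its heap buffer is stolen, so the
                        // Tracer element itself is neither copied nor moved)
}

int main() {
    std::vector<Tracer> out = make(); // prints "construct" only: no element copy
    return out.empty();
}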
Vector
22,655,059
115
Right now I have vector3 values represented as lists. is there a way to subtract 2 of these like vector3 values, like [2,2,2] - [1,1,1] = [1,1,1] Should I use tuples? If none of them defines these operands on these types, can I define it instead? If not, should I create a new vector3 class?
If this is something you end up doing frequently, and with different operations, you should probably create a class to handle cases like this, or better use some library like Numpy. Otherwise, look for list comprehensions used with the zip builtin function: [a_i - b_i for a_i, b_i in zip(a, b)]
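Since the answer points at Numpy, this is what that route looks like (a minimal sketch):

import numpy as np

a = np.array([2, 2, 2])
b = np.array([1, 1, 1])
print(a - b)  # [1 1 1] -- arithmetic operators work elementwise on arrays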
Vector
534,855
114
I have a vector<int> container that has integers (e.g. {1,2,3,4}) and I would like to convert it to a string of the form "1,2,3,4". What is the cleanest way to do that in C++? In Python this is how I would do it:

>>> array = [1, 2, 3, 4]
>>> ",".join(map(str, array))
'1,2,3,4'
Definitely not as elegant as Python, but nothing quite is as elegant as Python in C++. You could use a stringstream...

#include <sstream>
//...

std::stringstream ss;
for(size_t i = 0; i < v.size(); ++i)
{
    if(i != 0)
        ss << ",";
    ss << v[i];
}
std::string s = ss.str();

You could also make use of std::for_each instead.
Vector
1,430,757
112
What is the simplest way to convert an array to a vector?

void test(vector<int> _array)
{
    ...
}

int x[3] = {1, 2, 3};
test(x); // Syntax error.

I want to convert x from an int array to a vector in the simplest way.
Use the vector constructor that takes two iterators; note that pointers are valid iterators, and use the implicit conversion from arrays to pointers:

int x[3] = {1, 2, 3};
std::vector<int> v(x, x + sizeof x / sizeof x[0]);
test(v);

or

test(std::vector<int>(x, x + sizeof x / sizeof x[0]));

where sizeof x / sizeof x[0] is obviously 3 in this context; it's the generic way of getting the number of elements in an array. Note that x + sizeof x / sizeof x[0] points one element beyond the last element.
Vector
8,777,603
111
Is there a good way of differentiating between row and column vectors in numpy? If I were to give one a vector, say:

from numpy import *
v = array([1, 2, 3])

they wouldn't be able to say whether I mean a row or a column vector. Moreover:

>>> array([1,2,3]) == array([1,2,3]).transpose()
array([ True,  True,  True])

which compares the vectors element-wise. I realize that most of the functions on vectors from the mentioned modules don't need the differentiation, for example outer(a,b) or a.dot(b), but I'd like to differentiate for my own convenience.
You can make the distinction explicit by adding another dimension to the array.

>>> a = np.array([1, 2, 3])
>>> a
array([1, 2, 3])
>>> a.transpose()
array([1, 2, 3])
>>> a.dot(a.transpose())
14

Now force it to be a column vector:

>>> a.shape = (3, 1)
>>> a
array([[1],
       [2],
       [3]])
>>> a.transpose()
array([[1, 2, 3]])
>>> a.dot(a.transpose())
array([[1, 2, 3],
       [2, 4, 6],
       [3, 6, 9]])

Another option is to use np.newaxis when you want to make the distinction:

>>> a = np.array([1, 2, 3])
>>> a
array([1, 2, 3])
>>> a[:, np.newaxis]
array([[1],
       [2],
       [3]])
>>> a[np.newaxis, :]
array([[1, 2, 3]])
Vector
17,428,621
111
Is there any built-in function which tells me whether my vector contains a certain element or not, e.g.:

std::vector<string> v;
v.push_back("abc");
v.push_back("xyz");

if (v.contains("abc")) // I am looking for one such feature; is there any
                       // such function, or do I need to loop through the whole vector?
You can use std::find as follows:

if (std::find(v.begin(), v.end(), "abc") != v.end())
{
    // Element in vector.
}

To be able to use std::find, include <algorithm>.
Vector
6,277,646
110
As (hopefully) we all know, vector<bool> is totally broken and can't be treated as a C array. What is the best way to get this functionality? So far, the ideas I have thought of are:

- Use a vector<char> instead, or
- Use a wrapper class and have vector<bool_wrapper>

How do you guys handle this problem? I need the c_array() functionality. As a side question, if I don't need the c_array() method, what is the best way to approach this problem if I need random access? Should I use a deque or something else?

Edit: I do need dynamic sizing. For those who don't know, vector<bool> is specialized so that each bool takes 1 bit. Thus you can't convert it to a C-style array. I guess "wrapper" is a bit of a misnomer. I was thinking something like this (of course, then I have to read into a my_bool due to possible alignment issues :( ):

struct my_bool
{
    bool the_bool;
};
vector<my_bool> haha_i_tricked_you;
Use std::deque if you don't need the array, yes. Otherwise use an alternative vector that doesn't specialize on bool, such as the one in Boost Container.
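For illustration, here is what the vector<char> workaround from the question looks like in practice; a minimal sketch, with c_api standing in for any C-style function you need to hand a real array to:

#include <cstddef>
#include <cstdio>
#include <vector>

// Stand-in for a hypothetical C API that wants a plain array of flags.
void c_api(const char* flags, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        if (flags[i]) std::printf("flag %zu set\n", i);
}

int main() {
    std::vector<char> flags(8, 0);     // one byte per flag, contiguous storage
    flags[3] = true;                   // bool converts to char (1)
    c_api(flags.data(), flags.size()); // data() works; no vector<bool> proxy objects
}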
Vector
670,308
107
Lately I've been asked to write a function that reads a binary file into a std::vector<BYTE>, where BYTE is an unsigned char. Quite quickly I came up with something like this:

#include <fstream>
#include <vector>

typedef unsigned char BYTE;

std::vector<BYTE> readFile(const char* filename)
{
    // open the file:
    std::streampos fileSize;
    std::ifstream file(filename, std::ios::binary);

    // get its size:
    file.seekg(0, std::ios::end);
    fileSize = file.tellg();
    file.seekg(0, std::ios::beg);

    // read the data:
    std::vector<BYTE> fileData(fileSize);
    file.read((char*) &fileData[0], fileSize);
    return fileData;
}

which seems to be unnecessarily complicated, and the explicit cast to char* that I was forced to use while calling file.read doesn't make me feel any better about it.

Another option is to use std::istreambuf_iterator:

std::vector<BYTE> readFile(const char* filename)
{
    // open the file:
    std::ifstream file(filename, std::ios::binary);

    // read the data:
    return std::vector<BYTE>((std::istreambuf_iterator<char>(file)),
                              std::istreambuf_iterator<char>());
}

which is pretty simple and short, but still I have to use std::istreambuf_iterator<char> even when I'm reading into a std::vector<unsigned char>.

The last option that seems to be perfectly straightforward is to use std::basic_ifstream<BYTE>, which kind of expresses it explicitly that "I want an input file stream and I want to use it to read BYTEs":

std::vector<BYTE> readFile(const char* filename)
{
    // open the file:
    std::basic_ifstream<BYTE> file(filename, std::ios::binary);

    // read the data:
    return std::vector<BYTE>((std::istreambuf_iterator<BYTE>(file)),
                              std::istreambuf_iterator<BYTE>());
}

but I'm not sure whether basic_ifstream is an appropriate choice in this case.

What is the best way of reading a binary file into the vector? I'd also like to know what's happening "behind the scenes" and what are the possible problems I might encounter (apart from the stream not being opened properly, which might be avoided by a simple is_open check). Is there any good reason why one would prefer to use std::istreambuf_iterator here? (The only advantage that I can see is simplicity.)
When testing for performance, I would include a test case for:

std::vector<BYTE> readFile(const char* filename)
{
    // open the file:
    std::ifstream file(filename, std::ios::binary);

    // Stop eating new lines in binary mode!!!
    file.unsetf(std::ios::skipws);

    // get its size:
    std::streampos fileSize;

    file.seekg(0, std::ios::end);
    fileSize = file.tellg();
    file.seekg(0, std::ios::beg);

    // reserve capacity
    std::vector<BYTE> vec;
    vec.reserve(fileSize);

    // read the data:
    vec.insert(vec.begin(),
               std::istream_iterator<BYTE>(file),
               std::istream_iterator<BYTE>());

    return vec;
}

My thinking is that the constructor of Method 1 touches the elements in the vector, and then the read touches each element again. Method 2 and Method 3 look most promising, but could suffer one or more resizes. Hence the reason to reserve before reading or inserting.

I would also test with std::copy:

...
std::vector<BYTE> vec;
vec.reserve(fileSize);

std::copy(std::istream_iterator<BYTE>(file),
          std::istream_iterator<BYTE>(),
          std::back_inserter(vec));

In the end, I think the best solution will avoid operator>> from istream_iterator (and all the overhead and goodness from operator>> trying to interpret binary data). But I don't know what to use that allows you to directly copy the data into the vector.

Finally, my testing with binary data is showing ios::binary is not being honored. Hence the reason for unsetting std::ios::skipws above.
Vector
15,138,353
107
Container requirements have changed from C++03 to C++11. While C++03 had blanket requirements (e.g. copy constructibility and assignability for vector), C++11 defines fine-grained requirements on each container operation (section 23.2). As a result, you can e.g. store a type that is copy-constructible but not assignable - such as a structure with a const member - in a vector as long as you only perform certain operations that do not require assignment (construction and push_back are such operations; insert is not). What I'm wondering is: does this mean the standard now allows vector<const T>? I don't see any reason it shouldn't - const T, just like a structure with a const member, is a type that is copy constructible but not assignable - but I may have missed something. (Part of what makes me think I may have missed something, is that gcc trunk crashes and burns if you try to instantiate vector<const T>, but it's fine with vector<T> where T has a const member).
No, I believe the allocator requirements say that T can be a "non-const, non-reference object type".

You wouldn't be able to do much with a vector of constant objects. And a const vector<T> would be almost the same anyway.

Many years later this quick-and-dirty answer still seems to be attracting comments and votes. Not always up. :-) So to add some proper references:

For the C++03 standard, which I have on paper, Table 31 in section [lib.allocator.requirements] says:

T, U  any type

Not that any type actually worked. So, the next standard, C++11, says in a close draft in [allocator.requirements], and now Table 27:

T, U, C  any non-const, non-reference object type

which is extremely close to what I originally wrote above from memory. This is also what the question was about. However, in C++14 (draft N4296) Table 27 now says:

T, U, C  any non-const object type

Possibly because a reference perhaps isn't an object type after all? And now in C++17 (draft N4659) it is Table 30 that says:

T, U, C  any cv-unqualified object type (6.9)

So not only is const ruled out, but also volatile. Probably old news anyway, and just a clarification. Please also see Howard Hinnant's first-hand info, currently right below.
Vector
6,954,906
105
There is a thread in the comments section in this post about using std::vector::reserve() vs. std::vector::resize(). Here is the original code:

void MyClass::my_method()
{
    my_member.reserve(n_dim);
    for(int k = 0; k < n_dim; k++)
        my_member[k] = k;
}

I believe that to write elements in the vector, the correct thing to do is to call std::vector::resize(), not std::vector::reserve(). In fact, the following test code "crashes" in debug builds in VS2010 SP1:

#include <vector>

using namespace std;

int main()
{
    vector<int> v;
    v.reserve(10);
    v[5] = 2;
    return 0;
}

Am I right, or am I wrong? And is VS2010 SP1 right, or is it wrong?
There are two different methods for a reason:

std::vector::reserve will allocate the memory but will not resize your vector, which will have a logical size the same as it was before.

std::vector::resize will actually modify the size of your vector and will fill any space with objects in their default state. If they are ints, they will all be zero.

After reserve, in your case, you will need a lot of push_backs to write to element 5. If you don't wish to do that, then in your case you should use resize.

One thing about reserve: if you then add elements with push_back, until you reach the capacity you have reserved, any existing references, iterators or pointers to data in your vector will remain valid. So if I reserve 1000 and my size is 5, &vec[4] will remain the same until the vector has 1000 elements. After that, I can call push_back() and it will work, but the stored pointer of &vec[4] earlier may no longer be valid.
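A minimal sketch that encodes the distinction in code (the commented-out line is the crash from the question):

#include <cassert>
#include <vector>

int main() {
    std::vector<int> a, b;

    a.reserve(10);            // capacity >= 10, but size is still 0
    assert(a.size() == 0);
    // a[5] = 2;              // undefined behavior: element 5 doesn't exist yet
    a.push_back(42);          // the only way to grow the logical size here

    b.resize(10);             // size == 10, elements value-initialized to 0
    assert(b.size() == 10 && b[5] == 0);
    b[5] = 2;                 // fine: element 5 exists
}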
Vector
13,029,299
103
Currently, when I have to use vector.push_back() multiple times, the code I'm using is:

std::vector<int> TestVector;
TestVector.push_back(2);
TestVector.push_back(5);
TestVector.push_back(8);
TestVector.push_back(11);
TestVector.push_back(14);

Is there a way to only use vector.push_back() once and just pass multiple values into the vector?
You can do it with an initializer list:

std::vector<unsigned int> array;

// First argument is an iterator to the element BEFORE which you will insert:
// In this case, you will insert before the end() iterator, which means appending
// values at the end of the vector.
array.insert(array.end(), { 1, 2, 3, 4, 5, 6 });
Vector
14,561,941
103
I would like to randomly reorganize the order of the numbers in a vector, in a simple one-line command? My particular vector V has 150 entries for each value from 1 to 10: V <- rep(1:10, each=150)
Yes:

sample(V)

From ?sample:

For ‘sample’ the default for ‘size’ is the number of items inferred from the first argument, so that ‘sample(x)’ generates a random permutation of the elements of ‘x’ (or ‘1:x’).
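A quick demonstration on a small vector (set.seed only makes the shuffle reproducible; the exact output depends on the seed):

set.seed(42)
V <- rep(1:3, each = 2)
sample(V)   # e.g. 1 3 2 2 3 1 -- same elements, shuffled order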
Vector
13,765,972
102
I need to convert a multi-row two-column data.frame to a named character vector. My data.frame would be something like:

dd = data.frame(crit = c("a","b","c","d"),
                name = c("Alpha", "Beta", "Caesar", "Doris"))

and what I actually need would be:

whatiwant = c("a" = "Alpha", "b" = "Beta", "c" = "Caesar", "d" = "Doris")
Use the names function:

whatyouwant <- as.character(dd$name)
names(whatyouwant) <- dd$crit

as.character is necessary because data.frame and read.table turn characters into factors with default settings. If you want a one-liner:

whatyouwant <- setNames(as.character(dd$name), dd$crit)
Vector
19,265,172
102
I'm using an external library which at some point gives me a raw pointer to an array of integers and a size. Now I'd like to use std::vector to access and modify these values in place, rather than accessing them with raw pointers. Here is an artificial example that explains the point:

size_t size = 0;
int * data = get_data_from_library(size); // raw data from library {5,3,2,1,4}, size gets filled in

std::vector<int> v = ????; // pseudo vector to be used to access the raw data

std::sort(v.begin(), v.end()); // sort raw data in place

for (int i = 0; i < 5; i++)
{
    std::cout << data[i] << "\n"; // display sorted raw data
}

Expected output:

1
2
3
4
5

The reason is that I need to apply algorithms from <algorithm> (sorting, swapping elements, etc.) on that data. On the other hand, the size of that vector would never change, so push_back, erase and insert are not required to work on that vector. I could construct a vector based on the data from the library, modify that vector, and copy the data back to the library, but that would be two complete copies that I'd like to avoid, as the data set could be really big.
C++20's std::span

If you are able to use C++20, you could use std::span, which is a pointer-length pair that gives the user a view into a contiguous sequence of elements. It is some sort of a std::string_view, and while both std::span and std::string_view are non-owning views, std::string_view is a read-only view.

From the docs:

The class template span describes an object that can refer to a contiguous sequence of objects with the first element of the sequence at position zero. A span can either have a static extent, in which case the number of elements in the sequence is known and encoded in the type, or a dynamic extent.

So the following would work:

#include <span>
#include <iostream>
#include <algorithm>

int main() {
    int data[] = { 5, 3, 2, 1, 4 };
    std::span<int> s{data, 5};

    std::sort(s.begin(), s.end());

    for (auto const i : s) {
        std::cout << i << "\n";
    }

    return 0;
}

Check it out live.

Since std::span is basically a pointer-length pair, you can use it in the following manner too:

size_t size = 0;
int *data = get_data_from_library(size);
std::span<int> s{data, size};

Note: Not all compilers support std::span. Check compiler support here.

UPDATE

If you are not able to use C++20, you could use gsl::span, which is basically the base version of the C++ standard's std::span.

C++11 solution

If you are limited to the C++11 standard, you can try implementing your own simple span class:

template<typename T>
class span {
    T* ptr_;
    std::size_t len_;

public:
    span(T* ptr, std::size_t len) noexcept
        : ptr_{ptr}, len_{len}
    {}

    T& operator[](int i) noexcept {
        return ptr_[i];
    }

    T const& operator[](int i) const noexcept {
        return ptr_[i];
    }

    std::size_t size() const noexcept {
        return len_;
    }

    T* begin() noexcept {
        return ptr_;
    }

    T* end() noexcept {
        return ptr_ + len_;
    }
};

Check out the C++11 version live.
Vector
60,151,514
102
In MATLAB there is a way to find the values in one vector but not in the other. For example:

x <- c(1,2,3,4)
y <- c(2,3,4)

Is there any function that would tell me that the value in x that's not in y is 1?
You can use the setdiff() (set difference) function:

> setdiff(x, y)
[1] 1
Vector
1,837,968
101
What's the C# equivalent of C++ vector? I am searching for this feature: to have a dynamic array of contiguously stored memory that has no performance penalty for access vs. standard arrays. I was searching and they say the .NET equivalent to the vector in C++ is the ArrayList, so: do ArrayLists have that contiguous memory feature?
You could use a List<T>; when T is a value type it will be allocated in contiguous memory, which would not be the case if T is a reference type.

Example:

List<int> integers = new List<int>();
integers.Add(1);
integers.Add(4);
integers.Add(7);

int someElement = integers[1];
Vector
6,943,229
101
In C++ one can create an array of predefined size, such as 20, with int myarray[20]. However, the online documentation on vectors doesn't show a similar way of initialising vectors: instead, a vector should be initialised with, for example, std::vector<int> myvector (4, 100);. This gives a vector of size 4 with all elements being the value 100. How can a vector be initialised with only a predefined size and no predefined value, like with arrays?
With the constructor:

// create a vector with 20 integer elements
std::vector<int> arr(20);

for(int x = 0; x < 20; ++x)
    arr[x] = x;
Vector
10,559,283
101
I was trying to create a vector of lambdas, but failed:

auto ignore = [&]() { return 10; };  //1
std::vector<decltype(ignore)> v;     //2
v.push_back([&]() { return 100; });  //3

Up to line #2, it compiles fine. But line #3 gives the compilation error:

error: no matching function for call to 'std::vector<main()::<lambda()>>::push_back(main()::<lambda()>)'

I don't want a vector of function pointers or a vector of function objects. However, a vector of function objects which encapsulate real lambda expressions would work for me. Is this possible?
Every lambda has a different type, even if they have the same signature. You must use a run-time encapsulating container such as std::function if you want to do something like that. e.g.:

std::vector<std::function<int()>> functors;
functors.push_back([&] { return 100; });
functors.push_back([&] { return 10; });
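Invoking the stored closures then looks like this (a usage sketch):

#include <functional>
#include <iostream>
#include <vector>

int main() {
    std::vector<std::function<int()>> functors;
    functors.push_back([] { return 100; });
    functors.push_back([] { return 10; });

    for (auto& f : functors)      // invoke each type-erased closure
        std::cout << f() << "\n"; // prints 100, then 10
}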
Vector
7,477,310
100
I want to know how I can add values to my vector of structs using the push_back method:

struct subject {
    string name;
    int marks;
    int credits;
};

vector<subject> sub;

So now how can I add elements to it? I have a function that initializes the string name (the subject name) in it:

void setName(string s1, string s2, ...... string s6)
{
    // how can I set the names too, e.g. sub[0].name = "english", sub[1].name = "math", etc.?
    sub[0].name = s1; // gives segmentation fault; so how do I use the push_back method?

    sub.name.push_back(s1);
    sub.name.push_back(s2);
    sub.name.push_back(s3);
    sub.name.push_back(s4);
    sub.name.push_back(s6);
}

Function call:

setName("english", "math", "physics" ... "economics");
Create the vector, push_back an element, then modify it as so:
struct subject {
    string name;
    int marks;
    int credits;
};

int main() {
    vector<subject> sub;

    //Push back new subject created with default constructor.
    sub.push_back(subject());
    //Vector now has 1 element @ index 0, so modify it.
    sub[0].name = "english";
    //Add a new element if you want another:
    sub.push_back(subject());
    //Modify its name and marks.
    sub[1].name = "math";
    sub[1].marks = 90;
}
You can't access a vector with [#] until an element exists in the vector at that index. This example populates the [#] and then modifies it afterward.
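If your compiler supports C++11, you can also construct each element in one step with aggregate initialization (this assumes subject remains a plain aggregate with no user-declared constructors):
#include <string>
#include <vector>
using namespace std;

struct subject {
    string name;
    int marks;
    int credits;
};

int main() {
    vector<subject> sub;
    // Brace-initialize the aggregate directly inside push_back.
    sub.push_back({"english", 75, 3});
    sub.push_back({"math", 90, 4});
}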
Vector
8,067,338
99
How can I move some elements from the first vector to the second, so that the elements are removed from the first? If I use std::move, the elements are not removed from the first vector. This is the code I wrote: move(xSpaces1.begin() + 7, xSpaces1.end(), back_inserter(xSpaces2));
Resurrecting an old thread, but I am surprised that nobody mentioned std::make_move_iterator combined with insert. It has the important performance benefit of preallocating space in the target vector:
v2.insert(v2.end(), std::make_move_iterator(v1.begin() + 7), std::make_move_iterator(v1.end()));
As others have pointed out, the moved-from elements of v1 are now in a valid but unspecified state, so use erase to clear the mess:
v1.erase(v1.begin() + 7, v1.end());
Vector
15,004,517
97
I am trying to do a foreach on a vector of attacks; each attack has a unique ID, say 1-3. The class method takes the keyboard input of 1-3. I am trying to use a foreach to run through my elements in m_attack to see if the number matches; if it does... do something. The problem I'm seeing is this: a 'for each' statement cannot operate on an expression of type "std::vector<Attack
Am I going about this totally wrong? I have C# experience and that is kind of what I'm basing this on; any help would be appreciated. My code is as follows:
In header
vector<Attack> m_attack;
In class
int Player::useAttack (int input)
{
    for each (Attack* attack in m_attack) // Problem part
    {
        //Psuedo for following action
        if (attack->m_num == input)
        {
            //For the found attack, do it's damage
            attack->makeDamage();
        }
    }
}
The following examples assume that you use C++11. Example with range-based for loops:
for (auto &attack : m_attack) // access by reference to avoid copying
{
    if (attack.m_num == input)
    {
        attack.makeDamage();
    }
}
You should use const auto &attack depending on the behavior of makeDamage(). You can use std::for_each from the standard library + lambdas (note that input must be captured, and the elements are Attack objects, not pointers):
std::for_each(m_attack.begin(), m_attack.end(),
    [input](Attack &attack)
    {
        if (attack.m_num == input)
        {
            attack.makeDamage();
        }
    }
);
If you are uncomfortable using std::for_each, you can loop over m_attack using iterators:
for (auto attack = m_attack.begin(); attack != m_attack.end(); ++attack)
{
    if (attack->m_num == input)
    {
        attack->makeDamage();
    }
}
Use m_attack.cbegin() and m_attack.cend() to get const iterators.
Vector
15,027,282
97
Is there a way to extend a vector by making it repeat itself? >v = [1 2]; >v10 = v x 5; %x represents some function. Something like "1 2" x 5 in perl Then v10 would be: >v10 1 2 1 2 1 2 1 2 1 2 This should work for the general case, not just for [1 2]
The function you're looking for is repmat(). v10 = repmat(v, 1, 5)
Vector
2,459,851
96
Since they are both contiguous memory containers; feature wise, deque has almost everything vector has but more, since it is more efficient to insert in the front. Why would anyone prefer std::vector to std::deque?
Elements in a deque are not contiguous in memory; vector elements are guaranteed to be. So if you need to interact with a plain C library that needs contiguous arrays, or if you care (a lot) about spatial locality, then you might prefer vector. In addition, since there is some extra bookkeeping, other ops are probably (slightly) more expensive than their equivalent vector operations. On the other hand, using many/large instances of vector may lead to unnecessary heap fragmentation (slowing down calls to new). Also, as pointed out elsewhere on StackOverflow, there is more good discussion here: http://www.gotw.ca/gotw/054.htm .
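To make the contiguity point concrete, here is a small runnable sketch passing a vector's buffer to a C-style function (c_sum is a hypothetical stand-in for any API taking a pointer and a length):
#include <cstddef>
#include <vector>

// Stand-in for a C API that expects a contiguous buffer.
int c_sum(const int* data, std::size_t n)
{
    int s = 0;
    for (std::size_t i = 0; i < n; ++i) s += data[i];
    return s;
}

int main()
{
    std::vector<int> v{1, 2, 3};
    int total = c_sum(v.data(), v.size());  // fine: vector is contiguous
    // A std::deque offers no such guarantee, so its elements would
    // first have to be copied into a vector or array.
    return total == 6 ? 0 : 1;
}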
Vector
5,345,152
96
The member begin has two overloadings one of them is const_iterator begin() const;. There's also the cbegin const_iterator cbegin() const noexcept;. Both of them returns const_iterator to the begin of a list. What's the difference?
begin will return an iterator or a const_iterator depending on the const-qualification of the object it is called on. cbegin will return a const_iterator unconditionally. std::vector<int> vec; const std::vector<int> const_vec; vec.begin(); //iterator vec.cbegin(); //const_iterator const_vec.begin(); //const_iterator const_vec.cbegin(); //const_iterator
Vector
31,208,640
96
I'm using vector drawables in android prior to Lollipop and these are some of my library and tool versions: Android Studio : 2.0 Android Gradle Plugin : 2.0.0 Build Tools : 23.0.2 Android Support Library : 23.3.0 I added this property in my app level Build.Gradle android { defaultConfig { vectorDrawables.useSupportLibrary = true } } It is also worth mentioning that I use an extra drawable such as LayerDrawable(layer_list) as stated in the Android official Blog (link here) for setting drawables for vector drawables outside of app:srcCompat <level-list xmlns:android="http://schemas.android.com/apk/res/android"> <item android:drawable="@drawable/search"/> </level-list> You’ll find directly referencing vector drawables outside of app:srcCompat will fail prior to Lollipop. However, AppCompat does support loading vector drawables when they are referenced in another drawable container such as a StateListDrawable, InsetDrawable, LayerDrawable, LevelListDrawable, and RotateDrawable. By using this indirection, you can use vector drawables in cases such as TextView’s android:drawableLeft attribute, which wouldn’t normally be able to support vector drawables. When I'm using app:srcCompat everything works fine, but when I use: android:background android:drawableLeft android:drawableRight android:drawableTop android:drawableBottom on ImageView, ImageButton, TextView or EditText prior to Lollipop, it throws an exception: Caused by: android.content.res.Resources$NotFoundException: File res/drawable/search_toggle.xml from drawable resource ID #0x7f0200a9
LATEST UPDATE - Jun/2019 Support Library has changed a bit since the original answer. Now, even the Android plugin for Gradle is able to automatically generate the PNGs at build time. So, below are two new approaches that should work these days. You can find more info here: PNG Generation Gradle can automatically create PNG images from your assets at build time. However, in this approach, not all xml elements are supported. This solution is convenient because you don't need to change anything in your code or in your build.gradle. Just make sure you are using Android Plugin 1.5.0 or higher and Android Studio 2.2 or higher. I'm using this solution in my app and it works fine. No additional build.gradle flag is necessary. No hacks are necessary. If you go to /build/generated/res/pngs/... you can see all generated PNGs. So, if you have some simple icon (since not all xml elements are supported), this solution may work for you. Just update your Android Studio and your Android plugin for Gradle. Support Library Probably, this is the solution that will work for you. If you came here, it means your Android Studio is not generating the PNGs automatically. So, your app is crashing. Or maybe, you don't want Android Studio to generate any PNG at all. Unlike the "Auto-PNG generation" approach, which supports only a subset of XML elements, this solution supports all XML tags. So, you have full support for your vector drawables. You must first update your build.gradle to support it: android { defaultConfig { // This flag also prevents Android Studio from generating PNGs automatically vectorDrawables.useSupportLibrary = true } } dependencies { // Use this for Support Library implementation 'com.android.support:appcompat-v7:23.2.0' // OR HIGHER // Use this for AndroidX implementation 'androidx.appcompat:appcompat:1.1.0' // OR HIGHER } And then, use app:srcCompat instead of android:src while loading VectorDrawables. Don't forget this. For TextView, if you are using the androidx version of the Support Library, you can use app:drawableLeftCompat (or right, top, bottom) instead of app:drawableLeft In case of CheckBox/RadioButton, use app:buttonCompat instead of android:button. If you are not using the androidx version of the Support Library and your minSdkVersion is 17 or higher or using a button, you may try to set it programmatically via Drawable icon = AppCompatResources.getDrawable(context, <drawable_id>); textView.setCompoundDrawablesWithIntrinsicBounds(<leftIcon>,<topIcon>,<rightIcon>,<bottomIcon>); UPDATE - Jul/2016 They re-enabled VectorDrawable support in Android Support Library 23.4.0 For AppCompat users, we’ve added an opt-in API to re-enable support Vector Drawables from resources (the behavior found in 23.2) via AppCompatDelegate.setCompatVectorFromResourcesEnabled(true) - keep in mind that this still can cause issues with memory usage and problems updating Configuration instances, hence why it is disabled by default. Maybe the build.gradle setting is now obsolete and you just need to enable it in proper activities (however, need to test). Now, to enable it, you must do: public class MainActivity extends AppCompatActivity { static { AppCompatDelegate.setCompatVectorFromResourcesEnabled(true); } ... 
} Original Answer - Apr/2016 I think this is happening because Support Vector was disabled in the latest library version: 23.3.0 According to this POST: For AppCompat users, we’ve decided to remove the functionality which let you use vector drawables from resources on pre-Lollipop devices due to issues found in the implementation in version 23.2.0/23.2.1 (ISSUE 205236). Using app:srcCompat and setImageResource() continues to work. If you visit issue ISSUE 205236, it seems that they will enable in the future but the memory issue will not be fixed soon: In the next release I've added an opt-in API where you can re-enable the VectorDrawable support which was removed. It comes with the same caveats as before though (memory usage and problems with Configuration updating). I had a similar issue. So, in my case, I reverted all icons which use vector drawable from resource to PNG images again (since the memory issue will keep happening even after they provide an option to enable it again). I'm not sure if this is the best option, but it fixes all the crashes in my opinion.
Vector
36,867,298
96
I'm trying to send a vector as an argument to a function and I can't figure out how to make it work. I've tried a bunch of different ways, but they all give different error messages. I only include part of the code, since it's only this part that doesn't work. (The vector "random" is filled with random, but sorted, values between 0 and 200.) Updated the code:
#include <iostream>
#include <ctime>
#include <algorithm>
#include <vector>

using namespace std;

int binarySearch(int first, int last, int search4, vector<int>& random);

int main()
{
    vector<int> random(100);

    int search4, found;
    int first = 0;
    int last = 99;

    found = binarySearch(first, last, search4, random);

    system("pause");
    return(0);
}

int binarySearch(int first, int last, int search4, vector<int>& random)
{
    do
    {
        int mid = (first + last) / 2;
        if (search4 > random[mid])
            first = mid + 1;
        else if (search4 < random[mid])
            last = mid - 1;
        else
            return mid;
    } while (first <= last);
    return -(first + 1);
}
It depends on if you want to pass the vector as a reference or as a pointer (I am disregarding the option of passing it by value as clearly undesirable).
As a reference:
int binarySearch(int first, int last, int search4, vector<int>& random);

vector<int> random(100);
// ...
found = binarySearch(first, last, search4, random);
As a pointer:
int binarySearch(int first, int last, int search4, vector<int>* random);

vector<int> random(100);
// ...
found = binarySearch(first, last, search4, &random);
Inside binarySearch, you will need to use . or -> to access the members of random correspondingly.
Issues with your current code
binarySearch expects a vector<int>*, but you pass in a vector<int> (missing a & before random)
You do not dereference the pointer inside binarySearch before using it (for example, random[mid] should be (*random)[mid])
You are missing using namespace std; after the #includes
The values you assign to first and last are wrong (should be 0 and 99 instead of random[0] and random[99])
Vector
5,333,113
95
I have a vector, in which I save objects. I need to convert it to set. I have been reading about sets, but I still have a couple of questions: How to correctly initialize it? Honestly, some tutorials say it is fine to initialize it like set<ObjectName> something. Others say that you need an iterator there too, like set<Iterator, ObjectName> something. How to insert them correctly. Again, is it enough to just write something.insert(object) and that's all? How to get a specific object (for example, an object which has a named variable in it, which is equal to "ben") from the set? I have to convert the vector itself to be a set (a.k.a. I have to use a set rather than a vector).
Suppose you have a vector of strings, to convert it to a set you can: std::vector<std::string> v; std::set<std::string> s(v.begin(), v.end()); For other types, you must have operator< defined.
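If you don't need ordering, the same range construction works for std::unordered_set; it requires a hash function (already provided for std::string) instead of operator<:
#include <string>
#include <unordered_set>
#include <vector>

int main()
{
    std::vector<std::string> v{"a", "b", "a"};
    std::unordered_set<std::string> s(v.begin(), v.end());  // holds "a" and "b"
}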
Vector
20,052,674
95
How do I extract a column from a data.table as a vector by its position? Below are some code snippets I have tried: DT<-data.table(x=c(1,2),y=c(3,4),z=c(5,6)) DT # x y z #1: 1 3 5 #2: 2 4 6 I want to get this output using column position DT$y #[1] 3 4 is.vector(DT$y) #[1] TRUE Other way to get this output using column position DT[,y] #[1] 3 4 is.vector(DT[,y]) #[1] TRUE This doesn't give a vector DT[,2,with=FALSE] # y #1: 3 #2: 4 is.vector(DT[,2,with=FALSE]) #[1] FALSE Those two doesn't work: DT$noquote(names(DT)[2]) # Doesn't work #Error: attempt to apply non-function DT[,noquote(names(DT)[2])] # Doesn't work #[1] y And this doesn't give a vector: DT[,noquote(names(DT)[2]),with=FALSE] # Not a vector # y #1: 3 #2: 4 is.vector(DT[,noquote(names(DT)[2]),with=FALSE]) #[1] FALSE
A data.table inherits from class data.frame. Therefore it is a list (of column vectors) internally and can be treated as such. is.list(DT) #[1] TRUE Fortunately, extracting a vector from a list, i.e. [[, is very fast and, in contrast to [, package data.table doesn't define a method for it. Thus, you can simply use [[ to extract by an index: DT[[2]] #[1] 3 4
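The same [[ extraction works by column name as well, which is more robust than a position when columns are reordered:
DT[["y"]]
#[1] 3 4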
Vector
20,043,313
93
I want to build a HashSet<u8> from a Vec<u8>. I'd like to do this in one line of code, copying the data only once, using only 2n memory, but the only thing I can get to compile is this piece of .. junk, which I think copies the data twice and uses 3n memory. fn vec_to_set(vec: Vec<u8>) -> HashSet<u8> { let mut victim = vec.clone(); let x: HashSet<u8> = victim.drain(..).collect(); return x; } I was hoping to write something simple, like this: fn vec_to_set(vec: Vec<u8>) -> HashSet<u8> { return HashSet::from_iter(vec.iter()); } but that won't compile: error[E0308]: mismatched types --> <anon>:5:12 | 5 | return HashSet::from_iter(vec.iter()); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected u8, found &u8 | = note: expected type `std::collections::HashSet<u8>` = note: found type `std::collections::HashSet<&u8, _>` .. and I don't really understand the error message, probably because I need to RTFM.
Because the operation does not need to consume the vector¹, I think it should not consume it. That only leads to extra copying somewhere else in the program: use std::collections::HashSet; use std::iter::FromIterator; fn hashset(data: &[u8]) -> HashSet<u8> { HashSet::from_iter(data.iter().cloned()) } Call it like hashset(&v) where v is a Vec<u8> or other thing that coerces to a slice. There are of course more ways to write this, to be generic and all that, but this answer sticks to just introducing the thing I wanted to focus on. ¹This is based on that the element type u8 is Copy, i.e. it does not have ownership semantics.
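If you do want to consume the vector (because it is no longer needed afterwards), a by-value version avoids any cloning:
use std::collections::HashSet;

fn vec_to_set(vec: Vec<u8>) -> HashSet<u8> {
    // into_iter() takes ownership, so the elements are moved, not cloned.
    vec.into_iter().collect()
}

fn main() {
    let v = vec![1u8, 2, 2, 3];
    let s = vec_to_set(v);
    assert_eq!(s.len(), 3);
}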
Vector
39,803,237
93
Section 23.3.7 Class vector<bool> [vector.bool], paragraph 1 states: template <class Allocator> class vector<bool, Allocator> { public: // types: typedef bool const_reference; ... However this program fails to compile when using libc++: #include <vector> #include <type_traits> int main() { static_assert(std::is_same<std::vector<bool>::const_reference, bool>{}, "?"); } Furthermore I note that the C++ standard has been consistent in this specification all the way back to C++98. And I further note that libc++ has consistently not followed this specification since the first introduction of libc++. What is the motivation for this non-conformance?
The motivation for this extension, which is detectable by a conforming program, and thus non-conforming, is to make vector<bool> behave more like vector<char> with respect to references (const and otherwise). Introduction Since 1998, vector<bool> has been derided as "not quite a container." LWG 96, one of the very first LWG issues, launched the debate. Today, 17 years later, vector<bool> remains largely unchanged. This paper goes into some specific examples on how the behavior of vector<bool> differs from every other instantiation of vector, thus hurting generic code. However the same paper discusses at length the very nice performance properties vector<bool> can have if properly implemented. Summary: vector<bool> isn't a bad container. It is actually quite useful. It just has a bad name. Back to const_reference As introduced above, and detailed here, what is bad about vector<bool> is that it behaves differently in generic code than other vector instantiations. Here is a concrete example: #include <cassert> #include <vector> template <class T> void test(std::vector<T>& v) { using const_ref = typename std::vector<T>::const_reference; const std::vector<T>& cv = v; const_ref cr = cv[0]; assert(cr == cv[0]); v[0] = 1; assert(true == cv[0]); assert(cr == cv[0]); // Fires! } int main() { std::vector<char> vc(1); test(vc); std::vector<bool> vb(1); test(vb); } The standard specification says that the assert marked // Fires! will trigger, but only when test is run with a vector<bool>. When run with a vector<char> (or any vector besides bool when an appropriate non-default T is assigned), the test passes. The libc++ implementation sought to minimize the negative effects of having vector<bool> behave differently in generic code. One thing it did to achieve this is to make vector<T>::const_reference a proxy-reference, just like the specified vector<T>::reference, except that you can't assign through it. That is, on libc++, vector<T>::const_reference is essentially a pointer to the bit inside the vector, instead of a copy of that bit. On libc++ the above test passes for both vector<char> and vector<bool>. At what cost? The downside is that this extension is detectible, as shown in the question. However, very few programs actually care about the exact type of this alias, and more programs care about the behavior. What is the motivation for this non-conformance? To give the libc++ client better behavior in generic code, and perhaps after sufficient field testing, propose this extension to a future C++ standard for the betterment of the entire C++ industry. Such a proposal might come in the form of a new container (e.g. bit_vector) that has much the same API as today's vector<bool>, but with a few upgrades such as the const_reference discussed here. Followed by deprecation (and eventual removal) of the vector<bool> specialization. bitset could also use a little upgrading in this department, e.g. add const_reference, and a set of iterators. That is, in hindsight bitset is to vector<bool> (which should be renamed to bit_vector -- or whatever), as array is to vector. And the analogy ought to hold true whether or not we are talking about bool as the value_type of vector and array. There are multiple examples of C++11 and C++14 features that started out as extensions in libc++. This is how standards evolve. Actual demonstrated positive field experience carries strong influence. The standards folk are a conservative bunch when it comes to changing existing specifications (as they should be). 
Guessing, even when you are sure you are guessing correctly, is a risky strategy for evolving an internationally recognized standard.
Vector
31,974,237
92
What is the most efficient way of obtaining lists (as a vector) of the keys and values from an unordered_map? For concreteness, suppose the map in question is a unordered_map<string, double>. I'd then like to obtain the keys as a vector<string>, and the values as a vector<double>. unordered_map<string, double> um; vector<string> vs = um.enum_keys(); vector<double> vd = um.enum_values(); I can just iterate across the map and collect the result, but is there a more efficient method? It would be nice to have a method that also works for regular map, since I might switch to that.
Okay, here you go:
std::vector<Key> keys;
keys.reserve(map.size());
std::vector<Val> vals;
vals.reserve(map.size());

for (const auto& kv : map) {
    keys.push_back(kv.first);
    vals.push_back(kv.second);
}
Efficiency can probably be improved, but there it is. You're operating on two containers though, so there's not really any STL magic that can hide that fact. As Louis said, this will work for any of the STL map or set containers.
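With C++17 you can write the same loop using structured bindings, which reads a bit more clearly:
#include <string>
#include <unordered_map>
#include <vector>

int main()
{
    std::unordered_map<std::string, double> um{{"a", 1.0}, {"b", 2.0}};

    std::vector<std::string> keys;
    std::vector<double> vals;
    keys.reserve(um.size());
    vals.reserve(um.size());

    for (const auto& [key, val] : um) {  // C++17 structured bindings
        keys.push_back(key);
        vals.push_back(val);
    }
}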
Vector
8,483,985
90
I know the size of a vector. What is the best way to initialize it? Option 1: vector<int> vec(3); //in .h vec.at(0)=var1; //in .cpp vec.at(1)=var2; //in .cpp vec.at(2)=var3; //in .cpp Option 2: vector<int> vec; //in .h vec.reserve(3); //in .cpp vec.push_back(var1); //in .cpp vec.push_back(var2); //in .cpp vec.push_back(var3); //in .cpp I guess Option 2 is better than Option 1. Is it? Any other options?
Somehow, a non-answer answer that is completely wrong has remained accepted and most upvoted for ~7 years. This is not an apples and oranges question. This is not a question to be answered with vague cliches. For a simple rule to follow: Option #1 is faster... ...but this probably shouldn't be your biggest concern. Firstly, the difference is pretty minor. Secondly, as we crank up the compiler optimization, the difference becomes even smaller. For example, on my gcc-5.4.0, the difference is arguably trivial when running level 3 compiler optimization (-O3): So in general, I would recommend using method #1 whenever you encounter this situation. However, if you can't remember which one is optimal, it's probably not worth the effort to find out. Just pick either one and move on, because this is unlikely to ever cause a noticeable slowdown in your program as a whole. These tests were run by sampling random vector sizes from a normal distribution, and then timing the initialization of vectors of these sizes using the two methods. We keep a dummy sum variable to ensure the vector initialization is not optimized out, and we randomize vector sizes and values to make an effort to avoid any errors due to branch prediction, caching, and other such tricks. main.cpp: /* * Test constructing and filling a vector in two ways: construction with size * then assignment versus construction of empty vector followed by push_back * We collect dummy sums to prevent the compiler from optimizing out computation */ #include <iostream> #include <vector> #include "rng.hpp" #include "timer.hpp" const size_t kMinSize = 1000; const size_t kMaxSize = 100000; const double kSizeIncrementFactor = 1.2; const int kNumVecs = 10000; int main() { for (size_t mean_size = kMinSize; mean_size <= kMaxSize; mean_size = static_cast<size_t>(mean_size * kSizeIncrementFactor)) { // Generate sizes from normal distribution std::vector<size_t> sizes_vec; NormalIntRng<size_t> sizes_rng(mean_size, mean_size / 10.0); for (int i = 0; i < kNumVecs; ++i) { sizes_vec.push_back(sizes_rng.GenerateValue()); } Timer timer; UniformIntRng<int> values_rng(0, 5); // Method 1: construct with size, then assign timer.Reset(); int method_1_sum = 0; for (size_t num_els : sizes_vec) { std::vector<int> vec(num_els); for (size_t i = 0; i < num_els; ++i) { vec[i] = values_rng.GenerateValue(); } // Compute sum - this part identical for two methods for (size_t i = 0; i < num_els; ++i) { method_1_sum += vec[i]; } } double method_1_seconds = timer.GetSeconds(); // Method 2: reserve then push_back timer.Reset(); int method_2_sum = 0; for (size_t num_els : sizes_vec) { std::vector<int> vec; vec.reserve(num_els); for (size_t i = 0; i < num_els; ++i) { vec.push_back(values_rng.GenerateValue()); } // Compute sum - this part identical for two methods for (size_t i = 0; i < num_els; ++i) { method_2_sum += vec[i]; } } double method_2_seconds = timer.GetSeconds(); // Report results as mean_size, method_1_seconds, method_2_seconds std::cout << mean_size << ", " << method_1_seconds << ", " << method_2_seconds; // Do something with the dummy sums that cannot be optimized out std::cout << ((method_1_sum > method_2_sum) ? "" : " ") << std::endl; } return 0; } The header files I used are located here: rng.hpp timer.hpp
Vector
8,928,547
90
Is there an equivalent of list slicing [1:] from Python in C++ with vectors? I simply want to get all but the first element from a vector. Python's list slicing operator: list1 = [1, 2, 3] list2 = list1[1:] print(list2) # [2, 3] C++ Desired result: std::vector<int> v1 = {1, 2, 3}; std::vector<int> v2; v2 = v1[1:]; std::cout << v2 << std::endl; //{2, 3}
This can easily be done using std::vector's copy constructor: v2 = std::vector<int>(v1.begin() + 1, v1.end());
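If v2 already exists, assign() does the same thing without constructing a temporary vector:
#include <vector>

int main()
{
    std::vector<int> v1 = {1, 2, 3};
    std::vector<int> v2;
    v2.assign(v1.begin() + 1, v1.end());  // v2 == {2, 3}
}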
Vector
50,549,611
90
I have a vector vec of structures. Such a structure has elements int a, int b, int c. I would like to assign to some int var the element c from the last structure in the vector. Can you please provide me with a simple solution? I'm going in circles with a line like this: var = vec.end().c;
The immediate answer to your question as to fetching access to the last element in a vector can be accomplished using the back() member. Such as: int var = vec.back().c; Note: If there is a possibility your vector is empty, such a call to back() causes undefined behavior. In such cases you can check your vector's empty-state prior to using back() by using the empty() member: if (!vec.empty()) var = vec.back().c; Likely one of these two methods will be applicable for your needs.
Vector
14,275,291
89
I have two vectors as Python lists and an angle. E.g.: v = [3, 5, 0] axis = [4, 4, 1] theta = 1.2 # In radians. What is the best/easiest way to get the resulting vector when rotating the v vector around the axis? The rotation should appear to be counter clockwise for an observer to whom the axis vector is pointing. This is called the right hand rule
Using the Euler-Rodrigues formula: import numpy as np import math def rotation_matrix(axis, theta): """ Return the rotation matrix associated with counterclockwise rotation about the given axis by theta radians. """ axis = np.asarray(axis) axis = axis / math.sqrt(np.dot(axis, axis)) a = math.cos(theta / 2.0) b, c, d = -axis * math.sin(theta / 2.0) aa, bb, cc, dd = a * a, b * b, c * c, d * d bc, ad, ac, ab, bd, cd = b * c, a * d, a * c, a * b, b * d, c * d return np.array([[aa + bb - cc - dd, 2 * (bc + ad), 2 * (bd - ac)], [2 * (bc - ad), aa + cc - bb - dd, 2 * (cd + ab)], [2 * (bd + ac), 2 * (cd - ab), aa + dd - bb - cc]]) v = [3, 5, 0] axis = [4, 4, 1] theta = 1.2 print(np.dot(rotation_matrix(axis, theta), v)) # [ 2.74911638 4.77180932 1.91629719]
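If you already depend on SciPy, its Rotation class implements the same axis-angle rotation via the rotation-vector representation (the axis scaled to length theta):
import numpy as np
from scipy.spatial.transform import Rotation

v = [3, 5, 0]
axis = np.asarray([4, 4, 1], dtype=float)
theta = 1.2

# A rotation vector points along the axis and has magnitude theta (radians).
rotvec = theta * axis / np.linalg.norm(axis)
rot = Rotation.from_rotvec(rotvec)
print(rot.apply(v))  # ~[2.74911638 4.77180932 1.91629719]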
Vector
6,802,577
88
Can/should prometheus be used as a log aggregator? We are deploying apps into a kubernetes cluster. All containers already log to stdout/err and we want all devs to instrument their code with logs to stdout/err. Fluentd will then collate all logs across the whole cluster and send to an aggregator. We have thought about using Elasticsearch/kibana however we will already have Prometheus for node metric gathering so if we can have fluentd send all logs to Prometheus it keeps everything in one place. So, can/should Prometheus be used as a logging aggregator? Would it still have to poll the fluentd server? Really, it would be great to be able to use the alerting features of Prometheus so that if a certain log is made it (for instance) dumps the log message into a slack channel etc. Appreciate some pointers on this one, thanks.
Prometheus is a metrics system rather than a logs system. There's the mtail and grok exporters to process logs, but really that's only for cases where instrumenting your code with metrics is not possible. For logs something like Elasticsearch is far more appropriate.
Prometheus
41,596,104
12
docker-compose.yml: This is the docker-compose to run the prometheus, node-exporter and alert-manager services. All the services are running great. Even the health status in the target menu of prometheus shows ok.
version: '2'
services:
  prometheus:
    image: prom/prometheus
    privileged: true
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - ./alertmanger/alert.rules:/alert.rules
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    ports:
      - '9090:9090'

  node-exporter:
    image: prom/node-exporter
    ports:
      - '9100:9100'

  alertmanager:
    image: prom/alertmanager
    privileged: true
    volumes:
      - ./alertmanager/alertmanager.yml:/alertmanager.yml
    command:
      - '--config.file=/alertmanager.yml'
    ports:
      - '9093:9093'
prometheus.yml This is the prometheus config file with the targets and alerts target sets. The alertmanager target url is working fine.
global:
  scrape_interval: 5s
  external_labels:
    monitor: 'my-monitor'

# this is where I have simple alert rules
rule_files:
  - ./alertmanager/alert.rules

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node-exporter'
    static_configs:
      - targets: ['node-exporter:9100']

alerting:
  alertmanagers:
    - static_configs:
        - targets: ['some-ip:9093']
alert.rules: Just a simple alert rule to show an alert when a service is down
ALERT service_down
  IF up == 0
alertmanager.yml This is to send the message on slack when alerting occurs.
global:
  slack_api_url: 'https://api.slack.com/apps/A90S3Q753'

route:
  receiver: 'slack'

receivers:
  - name: 'slack'
    slack_configs:
      - send_resolved: true
        username: 'tara gurung'
        channel: '#general'
        api_url: 'https://hooks.slack.com/services/T52GRFN3F/B90NMV1U2/QKj1pZu3ZVY0QONyI5sfsdf'
Problems: All the containers are working fine, but I am not able to figure out the exact problem. What am I really missing? Checking the alerts in prometheus shows:
Alerts
No alerting rules defined
Your ./alertmanager/alert.rules file is not included in your docker config, so it is not available in the container. You need to add it to the prometheus service:
  prometheus:
    image: prom/prometheus
    privileged: true
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - ./alertmanager/alert.rules:/alertmanager/alert.rules
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    ports:
      - '9090:9090'
And probably give an absolute path inside prometheus.yml:
rule_files:
  - "/alertmanager/alert.rules"
You also need to make sure your alerting rules are valid. Please see the prometheus docs for details and examples. Your alert.rules file should look something like this:
groups:
- name: example
  rules:

  # Alert for any instance that is unreachable for >5 minutes.
  - alert: InstanceDown
    expr: up == 0
    for: 5m
Once you have multiple files, it may be better to add the entire directory as a volume rather than individual files.
Moreover, you have not specified the correct hostname to the alertmanager in your prometheus.yml file. If you are using the shown docker compose configuration it should be:
alerting:
  alertmanagers:
    - static_configs:
        - targets:
            - 'alertmanager:9093'
Based on:
  alertmanager:
    image: prom/alertmanager
    privileged: true
    volumes:
      - ./alertmanager/alertmanager.yml:/alertmanager.yml
    command:
      - '--config.file=/alertmanager.yml'
    ports:
      - '9093:9093'
Prometheus
48,556,768
12
I recently upgraded a spring boot application from 1.5 to 2.0.1. I also migrated the prometheus integration to the new actuator approach using micrometer. Most things work now - including some custom counters and gauges. I noted the new prometheus endpoint /actuator/prometheus no longer publishes the spring cache metrics (size and hit ratio). The only thing I could find was this issue and its related commit. Still I can't get cache metrics on the prometheus export. I tried setting some properties:
management.metrics.cache.instrument-cache=true
spring.cache.cache-names=cache1Name,cache2Name...
But nothing really works. I can see the Hazelcast cache manager starting up, registering the cache manager bean and so on - but neither /metrics nor /prometheus show any statistics. The caches are populated using the @Cacheable annotation. This worked with Spring Boot 1.5 - I think via Hazelcast exposing its metrics via JMX and the prometheus exporter picking it up from there? Not sure now how to wire this together. Any hints are welcome!
As you've answered my question, I can provide an answer for this.
my caches get created through scheduled tasks later on
Then this section of the doc applies to you:
Only caches that are available on startup are bound to the registry. For caches created on-the-fly or programmatically after the startup phase, an explicit registration is required. A CacheMetricsRegistrar bean is made available to make that process easier.
So you have to register such caches yourself; hopefully it is pretty easy, something like:
public class MyComponent {

    private final CacheMetricsRegistrar cacheMetricsRegistrar;
    private final CacheManager cacheManager;

    public MyComponent(CacheMetricsRegistrar cacheMetricsRegistrar, CacheManager cacheManager) { ... }

    public void register() {
        // you have just registered cache "xyz"
        Cache xyz = this.cacheManager.getCache("xyz");
        this.cacheMetricsRegistrar.bindCacheToRegistry(xyz);
    }
}
You can include this code in your existing code. If you don't want to do that, then you need something else that runs after your existing code to register those caches to the registry.
Prometheus
49,697,063
12
There's an article, "Tracking Every Release", which describes displaying a vertical line on graphs for every code deployment. They are using Graphite. I would like to do something similar with Prometheus 2.2 and Grafana 5.1. More specifically, I want to get an "application start" event displayed on a graph. Grafana annotations seem to be the appropriate mechanism for this, but I can't figure out what type of prometheus metric to use and how to query it.
The simplest way to do this is via the same basic approach as in the article, by having your deployment tool tell Grafana when it performs a deployment. Grafana has a built-in system for storing annotations, which are displayed on graphs as vertical lines and can have text associated with them. It would be as simple as creating an API key in your Grafana instance and adding a curl call to your deploy script:
curl -H "Authorization: Bearer <apikey>" http://grafana:3000/api/annotations -H "Content-Type: application/json" -d '{"text":"version 1.2.3 deployed","tags":["deploy","production"]}'
For more info on the available options check the documentation: http://docs.grafana.org/http_api/annotations/
Once you have your deployments being added as annotations, you can display those on your dashboard by going to the annotations tab in the dashboard settings and adding a new annotation source:
Then the annotations will be shown on the panels in your dashboard:
Prometheus
50,415,659
12
I am trying to load prometheus with docker using the following custom conf file: danilo@machine:/prometheus-data/prometheus.yml: global: scrape_interval: 15s # By default, scrape targets every 15 seconds. # Attach these labels to any time series or alerts when communicating with # external systems (federation, remote storage, Alertmanager). external_labels: monitor: 'codelab-monitor' # A scrape configuration containing exactly one endpoint to scrape: # Here it's Prometheus itself. scrape_configs: # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config. - job_name: 'prometheus' # Override the global default and scrape targets from this job every 5 seconds. scrape_interval: 5s static_configs: - targets: ['localhost:9090'] - targets: ['localhost:8083', 'localhost:8080'] labels: my_app group: 'my_app_group' With the following command: $ sudo docker run -p 9090:9090 prom/prometheus --config.file=/prometheus- data/prometheus.yml The file already exists. However, I am getting the following message: level=error ts=2018-09-26T17:45:00.586704798Z caller=main.go:617 err="error loading config from "/prometheus-data/prometheus.yml": couldn't load configuration (--config.file="/prometheus-data/prometheus.yml"): open /prometheus-data/prometheus.yml: no such file or directory" I'm following this guide: https://prometheus.io/docs/prometheus/latest/installation/ What can I do to load this file correctly?
By “the file already exists”, do you mean that the file is on your host at /prometheus-data/prometheus.yml? If so, then you need to bind mount it into your container for it to be accessible to Prometheus. sudo docker run -p 9090:9090 -v /prometheus-data/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus It's covered under Volumes & bind-mount in the documentation.
Prometheus
52,523,610
12
I have one query where I am trying to join two metrics on a label. K_Status_Value == 5 and ON(macAddr) state_details{live="True"} The label macAddr is present in both the metrics. The value of the label appears in 'K_Status_Value' sometimes in upper case (78:32:5A:29:2F:0D) and sometimes in lower case (78:72:5d:39:2f:0a) but always appears in upper case for 'state_details'. Is there any way I can make the label macAddr value case-insensitive in the query so that I don't miss out on the occurrences where the cases don't match?
I can think of two options:
Using the regex "i" match modifier: To quote Ben Kochie on the Prometheus user mailing list:
The regexp matching in Prometheus is based on RE2 I think you can set flags within a match by using (?i(matchstring))
It works indeed: this metric
up{instance="localhost:9090",job="prometheus"}
is matched by this expression:
up{job=~"(?i:(ProMeTHeUs))"}
This hint won't help in the case described above. It won't help either to join on (xx) or group_left.
Using a recording rule: I was initially hoping to use a recording rule to lower case at ingestion time (in prometheus.yml). However this feature is not implemented at this time (issue 1548)
Prometheus
53,312,007
12
I want to deploy Prometheus from the official helm chart on the stable repo. Also, I want to add my own scrape config. I can successfully add extra configs directly from the values.yml file of the chart, after downloading and altering it, but when I try to pass it as argument with --set nothing happens. This works [in values.yml]: # adds additional scrape configs to prometheus.yml # must be a string so you have to add a | after extraScrapeConfigs: # example adds prometheus-blackbox-exporter scrape config extraScrapeConfigs: | - job_name: 'sample-job' scrape_interval: 1s metrics_path: / kubernetes_sd_configs: - role: endpoints this does not: sudo helm upgrade --install prometheus \ --set rbac.create=true \ --set server.persistentVolume.enabled=false \ --set alertmanager.persistentVolume.enabled=false \ --set alertmanager.enabled=false \ --set kubeStateMetrics.enabled=false \ --set nodeExporter.enabled=false \ --set pushgateway.enabled=false \ --set extraScrapeConfigs="| - job_name: 'sample-pods' scrape_interval: 1s metrics_path: / kubernetes_sd_configs: - role: endpoints " \ stable/prometheus Is it possible someway? I found this SO question How to use --set to set values with Prometheus chart? , but I cannot find a way to apply it to my case.
When we inject multi-line text into values, we need to deal with YAML indentation. For your particular case it is:
sudo helm upgrade --install prometheus \
   --set rbac.create=true \
   --set server.persistentVolume.enabled=false \
   --set alertmanager.persistentVolume.enabled=false \
   --set alertmanager.enabled=false \
   --set kubeStateMetrics.enabled=false \
   --set nodeExporter.enabled=false \
   --set pushgateway.enabled=false \
   --set extraScrapeConfigs="- job_name: 'sample-pods'
  scrape_interval: 1s
  metrics_path: /
  kubernetes_sd_configs:
    - role: endpoints" \
   stable/prometheus
The more elegant way is to use the --set-file key=filepath construction instead of --set:
1. We create extraScrapeConfigs.yaml and put our value there as-is:
- job_name: 'sample-pods'
  scrape_interval: 1s
  metrics_path: /
  kubernetes_sd_configs:
    - role: endpoints
2. We deploy the stable/prometheus helm chart:
sudo helm upgrade --install prometheus \
    --set rbac.create=true \
    --set server.persistentVolume.enabled=false \
    --set alertmanager.persistentVolume.enabled=false \
    --set alertmanager.enabled=false \
    --set kubeStateMetrics.enabled=false \
    --set nodeExporter.enabled=false \
    --set pushgateway.enabled=false \
    --set-file extraScrapeConfigs=extraScrapeConfigs.yaml \
    stable/prometheus
Prometheus
55,360,726
12
I've upgraded my Spring Boot application to the latest 2.2.2 Boot version. Since then, I only have a metrics endpoint but no Prometheus. My build.gradle.kts file has org.springframework.boot:spring-boot-starter-actuator as dependency, I also added io.micrometer:micrometer-registry-prometheus as the reference suggests (Prometheus endpoint). My application.yml looks like the following: management: server: port: 9000 endpoints: web: exposure: include: health, shutdown, prometheus endpoint: shutdown: enabled: true Can someone guide me to the right direction? Edit: It was working in Spring Boot 2.2.0. This is the link to download an identical project: link Edit 2: I can verify that it works with 2.2.1 as well.
I followed your setup: I created a project from this project, loading Spring Boot 2.2.2.RELEASE, and I added the following dependency for Prometheus:
implementation("io.micrometer:micrometer-registry-prometheus")
Also I added the following configuration in application.yml:
management:
  server:
    port: 9000
  endpoints:
    web:
      exposure:
        include: health, shutdown, prometheus
  endpoint:
    shutdown:
      enabled: true
When the application starts you will see the following info, which shows that 3 endpoints are exposed (health, shutdown and prometheus):
2020-01-05 23:48:19.489  INFO 7700 --- [           main] o.s.b.a.e.web.EndpointLinksResolver      : Exposing 3 endpoint(s) beneath base path '/actuator'
I then used Postman to send a GET request to this endpoint, http://localhost:9000/actuator/prometheus, and it works well.
I created a repository by following these steps here.
So please let me know what error is displayed, or what happens when you don't get the expected result, so that I can help and edit this answer.
Prometheus
59,392,548
12
I am using this chart: https://github.com/helm/charts/tree/master/stable/prometheus-mongodb-exporter
This chart requires a MONGODB_URI environment variable or mongodb.uri populated in the values.yaml file. Since this is a connection string, I don't want to check it into git. I was thinking of Kubernetes secrets and providing the connection string from a Kubernetes secret. I have not been able to successfully find a solution for this one. I also tried creating another helm chart and using this one as a dependency for that chart, providing the value for MONGODB_URI from secrets.yaml, but that also didn't work, because in the prometheus-mongodb-exporter chart MONGODB_URI is defined as a required value which is then passed into the secrets.yaml file within that chart, so the dependency chart never gets installed because of that. What would be the best way of achieving this?
Solution 1: Create custom chart
Delete the secret.yaml from the chart's templates directory.
Create the k8s secret on your own, maybe named custom-secret.
Edit the deployment.yaml here:
        - name: MONGODB_URI
          valueFrom:
            secretKeyRef:
              name: custom-secret ## {{ include "prometheus-mongodb-exporter.fullname" . }}##
              key: mongodb-uri
Solution 2: Use original chart
Set a dummy value for mongodb.uri in values.yaml.
Use the --set flag to overwrite the dummy value with the original while installing the chart, so the real value never enters your git history.
$ helm install prometheus-mongodb-exporter stable/prometheus-mongodb-exporter --set mongodb.uri=******
Prometheus
60,051,056
12
I'm trying to show system uptime in DD-HH-MM-SS format. Doing it in ordinary code wouldn't be an issue, but I'm doing it using Prometheus (PromQL) and Grafana only. Here's the PromQL query:
time()-process_start_time_seconds{instance="INSTANCE",job="JOB"}
I achieved the basic output I wanted; it shows me the process lifetime. The output for the query above gives me the time in seconds (for instance 68003) and converts it to bigger time units (minutes, hours etc.), but in decimal form:
The 89 after the decimal point refers to 89% of an hour, about 53 minutes. That's not a really "intuitive" way to show time. I would have liked it to display a normal DD:HH:MM:SS presentation of that time, like the following screenshot from a simple online tool that converts seconds to time:
Is there a way to achieve it using only PromQL and Grafana configuration?
You can achieve this using the "Unit" drop-down in the visualization section and selecting your unit as duration with the hh:mm:ss format, as you can see in the screenshot.
Prometheus
60,757,793
12
Here's my query, which is supposed to show the delta for each counter of the form taskcnt.*: delta(label_replace({__name__=~"taskcnt.*"}, "old_name", "$1", "__name__", "(.+)")[1w]) I'm getting: Error executing query: 1:83: parse error: ranges only allowed for vector selectors Basically, without the label_replace I'll get: vector cannot contain metrics with the same labelset How can I make this query work?
The Subquery is what you need indeed (credit to the commenter above, M. Doubez). This should work for you - it calculates the weekly delta by evaluating the subquery at one-day resolution (see [1w:1d]):
delta(label_replace({__name__=~"desired_metric_prefix_.+_suffix"}, "metric_name", "$1", "__name__", "desired_metric_prefix_(.+)_suffix")[1w:1d])
Be sure that all of the metrics that match your regex are compatible with the label_replace and delta functions. If you're displaying this in Grafana, use the legend expression {{ metric_name }} to display the extracted metric_name for each series.
Prometheus
61,169,517
12
I went through the PromQL docs and found rate a little bit confusing. Then I tried one query from the Prometheus query dashboard and got the results below:
Time  Count  increase  rate(count[1m])
15s   4381   0    0
30s   4381   0    0
45s   4381   0    0
1m    4381   0    0
15s   4381   0    0
30s   4402   21   0.700023
45s   4402   0    0.700023
2m    4423   21   0.7
15s   4423   0    0.7
30s   4440   17   0.56666666
45s   4440   0    0.56666666
3m    4456   16   0.53333333
I get the last column's values from the dashboard, but I am not able to understand how they are calculated.
Resolution - 15s
scrape_interval: 30s
The "increase" function calculates how much some counter has grown and the "rate" function calculates the amount per second the measure grows. Analyzing your data I think you used [30s] for the "increase" and [1m] for the "rate" (the correct used values are important to the result). Basically, for example, in time 2m we have: increase[30s] = count at 2m - count at 1.5m = 4423 - 4402 = 21 rate[1m] = (count at 2m - count at 1m) / 60 = (4423 - 4381) / 60 = 0.7 Prometheus documentation: increase and rate.
Prometheus
66,674,880
12
I want to exclude multiple app groups from my query... Not sure how to go about it. My thoughts are like this:
count(master_build_state{app_group~! "oss-data-repair", "pts-plan-tech-solution", kubernets_namespace = "etc"} ==0)
I do not want to include those two app_groups, but am not sure how to implement this in PromQL. You would think to add () or [], but it throws errors. Let me know if anyone can help! Thanks
count(master_build_state{app_group !~ "(oss-data-repair|pts-plan-tech-solution)", kubernets_namespace="etc"} ==0)
Prometheus
68,681,720
12
I tried to obtain these measurements from Prometheus:
increase(http_server_requests_seconds_count{uri="myURI"}[10s])
increase(http_server_requests_seconds_count{uri="myURI"}[30s])
rate(http_server_requests_seconds_count{uri="myURI"}[10s])
rate(http_server_requests_seconds_count{uri="myURI"}[30s])
Then I ran a Python script where 5 threads are created, each of them hitting this myURI endpoint:
What I see on Grafana is:
I received these values:
0
6
0
0.2
I expected to receive these (but didn't):
5 (as in the last 10 seconds this endpoint received 5 calls)
5 (as in the last 30 seconds this endpoint received 5 calls)
0.5 (the endpoint received 5 calls in 10 seconds 5/10)
0.167 (the endpoint received 5 calls in 30 seconds 5/30)
Can someone explain with my example the formula behind this function and a way to achieve the metrics/value I expect?
Prometheus calculates increase(m[d]) at timestamp t in the following way: It fetches raw samples stored in the database for time series matching m on a time range (t-d .. t]. Note that samples at timestamp t-d aren't included in the time range, while samples at t are included. It is expected that every selected time series is a counter, since increase() works only with counters. It calculates the difference between the last and the first raw sample value on the selected time range individually per each time series matching m. Note that Prometheus doesn't take into account the difference between the last raw sample just before the (t-d ... t] time range and the first raw samples at this time range. This may lead to lower than expected results in some cases. It extrapolates results obtained at step 2 if the first and/or the last raw samples are located too far from time range boundaries (t-d .. t]. This may lead to unexpected results. For example, fractional results for integer counters. See this issue for details. Prometheus calculates rate(m[d]) as increase(m[d]) / d, so rate() results may be also unexpected sometimes. Prometheus developers are aware of these issues and are going to fix them eventually - see these design docs. In the meantime you can use VictoriaMetrics - this is Prometheus-like monitoring solution I work on. It provides increase() and rate() functions, which are free from issues mentioned above.
Prometheus
70,835,778
12
I've got an alert configured like this: ALERT InstanceDown IF up == 0 FOR 30s ANNOTATIONS { summary = "Server {{ $labels.Server }} is down.", description = "{{ $labels.Server }} ({{ $labels.job }}) is down for more than 30 seconds." } The slack receiver looks like this: receivers: - name: general_receiver slack_configs: - api_url: https://hooks.slack.com/services/token channel: "#telemetry" title: "My summary" text: "My description" Is it possible to use the annotations in my receiver? This github comment indicates it is but I haven't been able to get anything to work from it.
You need to define your own template (I just had to walk that path). For a brief example see: https://prometheus.io/blog/2016/03/03/custom-alertmanager-templates/ All alerts here have at least a summary and a runbook annotation. runbook contains a wiki URL. I've defined the following template (as described in the blog article above) to include them in the slack message body: {{ define "__slack_text" }} {{ range .Alerts }}{{ .Annotations.summary }} {{ if gt (len .Labels) (len .GroupLabels) }}({{ with .Labels.Remove .GroupLabels.Names }}{{ .Values | join " " }}{{ end }}){{ end }} {{ end }}<{{ (index .Alerts 0).GeneratorURL }}|Source> | {{ if .CommonAnnotations.runbook }}<{{ .CommonAnnotations.runbook }}|:notebook_with_decorative_cover: Runbook>{{ else }}<https://wiki.some.where/Runbooks|:exclamation:*NO RUNBOOK*:exclamation:>{{ end }} {{ end }} {{ define "slack.default.text" }}{{ template "__slack_text" . }}{{ end }} The template overwrites the slack.default.text so there is no need to reference it in the receiver configuration. Defaults can be found in source as well as the "documentation": https://github.com/prometheus/alertmanager/blob/v0.4.2/template/default.tmpl https://github.com/prometheus/alertmanager/blob/v0.4.2/template/template.go For basics and details about the golang templating language: https://gohugo.io/templates/go-templates/ https://golang.org/pkg/text/template/ For completeness' sake, the eventual template: {{ define "__slack_text" }} {{ range .Alerts }}{{ .Annotations.text }}{{ end }} {{ end }} {{ define "__slack_title" }} {{ range .Alerts }}{{ .Annotations.title }}{{ end }} {{ end }} {{ define "slack.default.text" }}{{ template "__slack_text" . }}{{ end }} {{ define "slack.default.title" }}{{ template "__slack_title" . }}{{ end }}
Prometheus
39,389,463
11
I use the Prometheus Java Client to export session information of my application. We want to show how long sessions have been idle. The problem is that we have a maximum of 1000 sessions and sessions are removed after a certain period. Unfortunately they do not disappear from Prometheus: My code looks like this: static final Gauge sessionInactivity = Gauge.build() .name("sessions_inactivity_duration") .labelNames("internal_key", "external_key", "browser") .help("Number of milliseconds a certain session has been inactive") .register(); sessionInactivity.labels(internalKey, externalKey, browser).set(inactivityTime); I tried to do sessionInactivity.clear() during scrapes but obviously this does not empty the content of the Gauge.
The Gauge class has a remove method which has the same signature as the labels method. For your specific example, removing the metrics associated with that gauge would look like this sessionInactivity.remove(internalKey, externalKey, browser); The client library documentation states: Metrics with labels SHOULD support a remove() method with the same signature as labels() that will remove a Child from the metric no longer exporting it, and a clear() method that removes all Children from the metric. These invalidate caching of Children.
Prometheus
45,172,765
11
Calculating the maximum quantile over all dataseries is a problem for me: query http_response_time{job=~"^(x|y)$", quantile="0.95",...} result http_response_time{job="x",...} 0.26 http_response_time{job="y",...} NaN This is how I would try to calculate the maximum: avg(http_response_time{job=~"^(x|y)$",...}) Now the result will be "NaN". How can I ignore the "NaN" result (from the result section)? UPDATE 0 The metric is a self made summary-metric. UPDATE 1 Using prometheus version 1.8.
I didn't try this one with NaN, but you can simply filter by value with binary operators. Since NaN mathematically doesn't equal NaN, you could try this trick (a response time should always be positive):
avg(http_response_time{job=~"^(x|y)$",...} >= 0)
Prometheus
47,887,142
11
I am trying to solve a problem of making a sum and group by query in Prometheus on a metric where the labels assigned to the metric values are too unique for my sum and group by requirements. I have a metric sampling sizes of ElasticSearch indices, where the index names are labelled on the metric. The indices are named like this and are placed in the label "index":
project.<projectname>.<uniqueid>.<date>
with concrete values that would look like this:
project.sample-x.ad19f880-2f16-11e7-8a64-jkzdfaskdfjk.2018.03.12
project.sample-y.jkcjdjdk-1234-11e7-kdjd-005056bf2fbf.2018.03.12
project.sample-x.ueruwuhd-dsfg-11e7-8a64-kdfjkjdjdjkk.2018.03.11
project.sample-y.jksdjkfs-2f16-11e7-3454-005056bf2fbf.2018.03.11
so if I had the short version of values in the "index" label I would just do:
sum(metric) by (index)
but what I am trying to do is something like this:
sum(metric) by ("project.<projectname>")
where I can group by a substring of the "index" label. How can this be done with a Prometheus query? I assume this could maybe be solved using a label_replace as part of the group, but I can't just see how to "truncate" the label value to achieve this.
Best regards Lars Milland
It is possible to use label_replace() function in order to extract the needed parts of the label into a separate label and then group by this label when summing the results. For example, the following query extracts the project.sample-y from project.sample-y.jksdjkfs-2f16-11e7-3454-005056bf2fbf.2018.03.11 value stored in the index label and puts the extracted value into project_name label. Then the sum() is grouped by project_name label values: sum( label_replace(metric, "project_name", "$1", "index", "(project[.][^.]+).+") ) by (project_name)
Prometheus
49,334,224
11
When I make a table panel and go to the "Options" tab, the columns parameter is set to Auto: Columns and their order are determined by the data query. Is there a doc on how to write prometheus queries for grafana tables? My prometheus data is a metric with 2 labels, my_col and my_row:
my_metric{instance="lh",job="job",my_col="1",my_row="A"} 6
my_metric{instance="lh",job="job",my_col="2",my_row="A"} 8
my_metric{instance="lh",job="job",my_col="1",my_row="B"} 10
my_metric{instance="lh",job="job",my_col="2",my_row="B"} 17
I want to make a table that looks like:
|   | 1 | 2 |
| A | 6 | 8 |
| B | 10| 17|
After some experimentation in Grafana 9.1.1, I have found a way to construct a table like you have described from a Prometheus metric like that. Here are the Grafana transform functions you will need:

Labels to fields — separates the labels in the metric into columns.
- Set Mode to Columns
- Set Labels to only the columns that are relevant, in this case my_col and my_row
- Set Value field name to my_col

Reduce — reduces all values at separate times into one row.
- Set Mode to Reduce fields
- Set Calculations to Last*. You may change this part according to your needs.
- Set Include time to false

Merge — merges all datasets into one with corresponding columns.

Organize fields — finally, this function helps you reorganize the table into something more proper.

For presenting the data in a bar chart, ensure that the my_row column is the leftmost one.
Prometheus
52,163,689
11
I have used a variable in Grafana which looks like this:

label_values(some_metric, service)

If the metric is not emitted by the data source at the current time, the variable values are not available for the charts. The variable in my case is the release name, and all the charts of Grafana are dependent on this variable. After the server I was monitoring crashed, this metric is not emitted. Even setting a time range to match the time when the metric was emitted has no impact, as the query for the variable does not take the time range into account. In Prometheus I can see the values for the metric using the query:

some_metric[24h]

In Grafana this is invalid:

label_values(some_metric[24h], service)

Also, as per the documentation, it's invalid to provide $__range etc. for label_values. If I have to use query_result instead, how do I write the above invalid Grafana query correctly so that I get the same result as label_values? Is there any other way to do this? The data source is Prometheus.
I'd suggest

query_result(count by (somelabel)(count_over_time(some_metric[$__range])))

and then use regular expressions to extract the label value you want. That I'm using count here isn't too important; it's more that I'm using an over_time function and then aggregating.
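As a sketch of the regex step (with somelabel standing in for your actual label name): the query result entries look like {somelabel="foo"} 1 1514764800000, so a variable Regex such as

/somelabel="([^"]+)"/

extracts just the label values.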
Prometheus
52,778,031
11
Using https://github.com/prometheus/pushgateway we are trying to push one metric to Prometheus. It seems to require the data in a very specific format. It works fine when doing their example curl of

echo "some_metric 3.14" | curl --data-binary @- http://pushgateway.example.org:9091/metrics/job/some_job

Yet doing a curl with the -d option fails with a missing end of line/file:

curl -d 'some_metric 3.15\n' http://pushgateway.example.org:9091/metrics/job/some_job

I'm trying to understand the difference in behaviour, since I believe both are doing POST commands, and I need to replicate this --data-binary option in node.js via the "request.post" method, but I seem to only be able to replicate the curl -d option, which doesn't work. Any suggestions or hints on what the difference is between -d and --data-binary, and how to do the equivalent of --data-binary from within node.js?
From the curl man page:

--data-ascii
(HTTP) This is just an alias for -d, --data.

--data-binary
(HTTP) This posts data exactly as specified with no extra processing whatsoever. If you start the data with the letter @, the rest should be a filename. Data is posted in a similar manner as -d, --data does, except that newlines and carriage returns are preserved and conversions are never done. Like -d, --data the default content-type sent to the server is application/x-www-form-urlencoded. If you want the data to be treated as arbitrary binary data by the server then set the content-type to octet-stream: -H "Content-Type: application/octet-stream". If this option is used several times, the ones following the first will append data as described in -d, --data.

Using @- will make curl read the data from stdin. So, basically, in your first variant you send the stdin content "some_metric 3.14" exactly as echo produced it, including the trailing newline. In the second one, you're sending the ASCII string "some_metric 3.15\n" — note that the shell does not expand \n inside single quotes, so this is a literal backslash and n, and -d strips any real newlines anyway. The pushgateway requires the trailing newline, which is why the -d variant fails with "missing end of line". If you want curl to strip newlines before sending, use the --data-ascii or -d option:

echo "some_metric 3.14" | curl -d @- http://pushgateway.example.org:9091/metrics/job/some_job
Prometheus
53,318,936
11
I have a Kubernetes Cluster and want to know how much disk space my containers use. I am not talking about mounted Volumes. I can get this information by using docker commands like docker system df -v or docker ps -s, but I don't want to connect to every single worker node. Is there a way to get a container's disk usage via kubectl or are there kubelet metrics where I can get this information from?
Yes, but currently not with kubectl. You can get metrics from the kubelet, either through the kube-apiserver (proxied) or by directly calling the kubelet HTTP(S) server endpoint (default port 10250). Disk metrics are generally available on the /stats/summary endpoint and you can also find some cAdvisor metrics on the /metrics/cadvisor endpoint. For example, to get the usedBytes for the first container in the first pod returned through the kube-apiserver:

$ curl -k -s -H 'Authorization: Bearer <REDACTED>' \
 https://kube-apiserver:6443/api/v1/nodes/<node-name>/proxy/stats/summary \
 | jq '.pods[0].containers[0].rootfs.usedBytes'

The Bearer token can be a service account token tied to a ClusterRole like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
  name: myrole
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - nodes/proxy
  verbs:
  - get
  - list
  - watch
- nonResourceURLs:
  - /metrics
  - /api/*
  verbs:
  - get
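To complete the RBAC picture, here is a sketch of binding that ClusterRole to a service account whose token you then use as the Bearer token (the service account name and namespace are hypothetical — adjust to your setup):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: myrole-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: myrole
subjects:
- kind: ServiceAccount
  name: metrics-reader
  namespace: default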
Prometheus
53,328,104
11
I am using Prometheus to query metrics from Apache Flink. I want to measure the number of records in and out per second of a Map function. When I query two different metrics in Prometheus, the chart only shows one of them:

flink_taskmanager_job_task_operator_numRecordsInPerSecond{operator_name="Map"}
or
flink_taskmanager_job_task_operator_numRecordsOutPerSecond{operator_name="Map"}

It does not matter if I change the operator or to and; the chart shows only the first (flink_taskmanager_job_task_operator_numRecordsInPerSecond). I have also tried to edit the Prometheus config file /etc/prometheus/prometheus.yml, but I don't have much experience with Prometheus and there is something wrong in my configuration. I was basing my solution on this post.

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'node_exporter'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9100']
  - job_name: 'flink'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9250', 'localhost:9251', '192.168.56.20:9250']
    metrics_path: /
    # HOW TO ADD THE OPERATOR NAME ON THE METRIC NAME?
    metric_relabel_configs:
      - source_labels: [__name__]
        regex: '(flink_taskmanager_job_task_operator)_(\w+)'
        replacement: '${2}'
        target_label: pool
      - source_labels: [__name__]
        regex: '(flink_taskmanager_job_task_operator)_(\w+)'
        replacement: '${1}_bytes'
        target_label: __name__
First of all, for more complex graphing you should definitely investigate Grafana. The built-in Prometheus graphs are useful e.g. for debugging, but definitely more limited. In particular, one graph will only display the results of one query. Now for a hack that I definitely do not recommend:

flink_taskmanager_job_task_operator_numRecordsInPerSecond{operator_name="Map"}
or
label_replace(flink_taskmanager_job_task_operator_numRecordsOutPerSecond{operator_name="Map"}, "distinct", "foo", "job", ".*")

Since, as documented,

vector1 or vector2 results in a vector that contains all original elements (label sets + values) of vector1 and additionally all elements of vector2 which do not have matching label sets in vector1.

you can add a new label that is not present in the labels on the first vector to the second vector and thus keep all elements from both.
Prometheus
55,490,701
11
I have several metrics with the label "service". I want to get a list of all the "service" label values that begin with "abc" and end with "xyz". These will be the values of a Grafana template variable. This is what I have tried:

label_values(service) =~ "abc.*xyz"

However this produces the error:

Template variables could not be initialized: parse error at char 13: could not parse remaining input "(service_name) "...

Any ideas on how to filter the label values?
This should work (replacing up with the metric you mention): label_values(up{service=~"abc.*xyz"}, service) Or, in case you actually need to look across multiple metrics (assuming that for some reason some metrics have some service label values and other metrics have other values): label_values({__name__=~"metric1|metric2|metric3", service=~"abc.*xyz"}, service)
Prometheus
55,958,636
11
I'm looking for a query to get the average uptime of the server on which Prometheus runs over the last week. It should be about 15h/week, so about 8-10%. I'm using Prometheus 2.5.0 with node_exporter on CentOS 7.6.1810. My most promising experiments would be:

1 - avg_over_time(up{job="prometheus"}[7d])

This is what I've found when looking for ways to get average uptimes, but it gives me exactly 1. (My guess is it ignores the times in which no scrapes happened?)

2 - sum_over_time(up{job="prometheus"}[7d]) * 15 / 604800

This technically works, but is dependent on the scrape interval, which is 15s in my case. I can't seem to find a way to get said interval from Prometheus' config, so I have to hardcode it into the query. I've also tried to find ways to get all start and end times of a job, but to no avail thus far.
Here you go. Don't ask. (o:

avg_over_time(
  (
    sum without() (up{job="prometheus"})
    or
    (0 * sum_over_time(up{job="prometheus"}[7d]))
  )[7d:5m]
)

To explain that bit by bit:

sum without() (up{job="prometheus"}): take the up metric (the sum without() part is there to get rid of the metric name while keeping all other labels);

0 * sum_over_time(up{job="prometheus"}[7d]): produces a zero-valued vector for each of the up{job="prometheus"} label combinations seen over the past week (e.g. in case you have multiple Prometheus instances);

or: the two together, so you get the actual value where available, zero where missing;

[7d:5m]: PromQL subquery, produces a range vector spanning 7 days, with 5 minute resolution, based on the expression preceding it;

avg_over_time: takes an average over time of the up metric with zeroes filled in as defaults, where missing.

You may also want to tack on an and sum_over_time(up{job="prometheus"}[7d]) to the end of that expression, to only get a result for label combinations that existed at some point over the previous 7 days. Otherwise, because of the combination of 7 days range and 7 days subquery, you'll get results for all combinations over the previous 14 days.

It is not an efficient query by any stretch of the imagination, but it does not require you to hardcode your scrape interval into the query. As requested. (o:
Prometheus
58,080,200
11
In Prometheus I've got 14 seconds for http_server_requests_seconds_max. http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/v1/**",} 14.3 Does this mean the total time for the request from the server to the client or does it only measure the time in the Spring container? I'm also measuring the time inside spring to process the data and it only takes 2.5 seconds. What I want to know is if it is a problem in Spring or if it is because of a slow network. Any Ideas?
From the copy of the Spring documentation at Archive.org (or the current Micrometer.io page): when the @Timed attribute is used on a function or a Controller, it produces the metric http_server_requests, which by default contains dimensions for the HTTP status of the response, HTTP method, exception type if the request fails, and the pre-variable-substitution parameterized endpoint URI.

http_server_requests_seconds_max is then computed as explained in this discussion:

public static final Statistic MAX
The maximum amount recorded. When this represents a time, it is reported in the monitoring system's base unit of time.

In your case, it means that one of your endpoints in the range /v1/** (i.e. any of them) called a @Timed function that took 14 seconds to execute. For more information you would need percentile or histogram metrics. It may happen only once, usually at the first request, when caches need to be built or services need to warm up.
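If the max alone is too coarse, a sketch of enabling percentile histograms, assuming Spring Boot 2 with the Micrometer Prometheus registry:

management.metrics.distribution.percentiles-histogram.http.server.requests=true

This exposes http_server_requests_seconds_bucket, which you can query with something like:

histogram_quantile(0.95, sum(rate(http_server_requests_seconds_bucket[5m])) by (le, uri))

giving you a 95th-percentile latency per URI rather than a single max.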
Prometheus
60,206,507
11
In the prometheus configuration I have a job with these specs: - job_name: name_of_my_job scrape_interval: 5m scrape_timeout: 30s metrics_path: /metrics scheme: http The script that creates the metrics takes 3 minutes to finish, but from prometheus I don't see the metrics. What is the operation of the scrape_timeout variable?
Every 5 minutes (scrape_interval) Prometheus will fetch the metrics from the given URL. It will wait up to 30 seconds (scrape_timeout) for the scrape to complete; if the endpoint has not responded within that time, the scrape fails with a timeout. Since your script takes about 3 minutes to produce the metrics, every scrape times out long before it finishes, which is why you don't see the metrics.
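A sketch of a fix, assuming the endpoint genuinely needs about 3 minutes to respond (note that scrape_timeout must not be larger than scrape_interval):

- job_name: name_of_my_job
  scrape_interval: 5m
  scrape_timeout: 4m
  metrics_path: /metrics
  scheme: http

An alternative design is to compute the metrics in the background and have the endpoint serve the cached result instantly, keeping scrapes fast.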
Prometheus
60,989,807
11
We have been struggling to create good memory monitoring for our nodes running Docker components. We use Prometheus in combination with cadvisor and node_exporter. What is the best way to determine the used memory per node?

Method 1 gives in our example around 42%:

(1 - (node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes)) * 100

Method 2 gives around 80%:

(1 - ((node_memory_MemFree_bytes + node_memory_Buffers_bytes + node_memory_Cached_bytes) / node_memory_MemTotal_bytes)) * 100

Q2: Why this difference? What can I learn from this?

So I dug a bit deeper and determined the individual metrics:

Free memory: in our experiment about 5%
(node_memory_MemFree_bytes / node_memory_MemTotal_bytes) * 100

Buffered memory: around 0.002%
(node_memory_Buffers_bytes / node_memory_MemTotal_bytes) * 100

Cached memory: around 15%
(node_memory_Cached_bytes / node_memory_MemTotal_bytes) * 100

Available memory: 58%
(node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) * 100

I would expect that FreeMem + BufferedMem + CachedMem would be around the AvailableMem, but that is not the outcome of this simple experiment.

Q3: Why is this not true? It is said that the free memory on Linux consists of free mem + buffered mem + cached mem. When there is a shortage of memory, the cached memory could be freed, etc.
This documentation explains in detail what those numbers mean: https://github.com/torvalds/linux/blob/master/Documentation/filesystems/proc.rst#meminfo

MemAvailable: An estimate of how much memory is available for starting new applications, without swapping. Calculated from MemFree, SReclaimable, the size of the file LRU lists, and the low watermarks in each zone. The estimate takes into account that the system needs some page cache to function well, and that not all reclaimable slab will be reclaimable, due to items being in use. The impact of those factors will vary from system to system.

So MemAvailable is an estimate of how much memory can be used for new processes without swapping. MemFree is only one part that goes into the MemAvailable estimate. Buffers and Cached may be taken into the estimation, but these are just smaller sections of the memory that might be reclaimed:

Buffers: Relatively temporary storage for raw disk blocks shouldn't get tremendously large (20MB or so)

Cached: in-memory cache for files read from the disk (the pagecache). Doesn't include SwapCached
Prometheus
61,751,232
11
I am considering using Prometheus as a time-series database to store data for long periods of time (months or maybe even over a year). However, I have read in a few places that Prometheus is not suitable for long-term storage and that another TSDB would be a better solution in that case. But why exactly is it not suitable, and what are the cons of using it as long-term storage? The official docs mention:

Prometheus's local storage is not intended to be durable long-term storage; external solutions offer extended retention and data durability.

But what does "extended retention and data durability" mean exactly, and why is it not achievable with Prometheus?
It is a design decision and it has mainly to do with the scope of the project/tool. The original authors, in the context of their use case at SoundCloud, decided not to build a distributed data storage layer but to keep things simple. In other words: Prometheus will fill up a disk but doesn't shard or replicate the data for you. Now, if you have many different environments you want to monitor, creating hundreds of thousands of time series and a gazillion metrics, that won't scale (local disks are too small, and an NFS-based solution might not be what you want either). So, there are different solutions out there, allowing you to federate and/or deduplicate metrics from different environments. The important thing to remember here is that it is not a shortcoming of Prometheus but a conscious decision to focus on one thing and do it really well, while over time developing APIs (remote_write and remote_read) that enable others to build systems that address the distributed/at-scale use case.
Prometheus
68,891,824
11
I'm new to monitoring a k8s cluster with Prometheus, node exporter and so on. I want to know what the metrics exactly mean, even though the names of the metrics are somewhat self-descriptive. I already checked the node exporter GitHub page, but found no useful information there. Where can I get descriptions of the node exporter metrics? Thanks
There is a short description along with each of the metrics. You can see them if you open the node exporter in a browser or just curl http://my-node-exporter:9100/metrics. You will see all the exported metrics, and the lines with # HELP are the description ones:

# HELP node_cpu_seconds_total Seconds the cpus spent in each mode.
# TYPE node_cpu_seconds_total counter
node_cpu_seconds_total{cpu="0",mode="idle"} 2.59840376e+07

Grafana can show this help message in the editor, and Prometheus (with the recent experimental editor) can show it too. This works for all metrics, not just node exporter's. If you need more technical details about those values, I recommend searching for the information in Google and man pages (if you're on Linux). Node exporter takes most of the metrics from /proc almost as-is and it is not difficult to find the details. Take for example node_memory_KReclaimable_bytes. The 'bytes' suffix is obviously the unit, node_memory is just a namespace prefix, and KReclaimable is the actual metric name. Running man -K KReclaimable will bring you to the proc(5) man page, where you can find that:

KReclaimable %lu (since Linux 4.20)
Kernel allocations that the kernel will attempt to reclaim under memory pressure. Includes SReclaimable (below), and other direct allocations with a shrinker.

Finally, if this intention to learn more about the metrics is inspired by the desire to configure alerts for your hardware, you can skip to the last part and grab some alerts shared by the community from here: https://awesome-prometheus-alerts.grep.to/rules#host-and-hardware
Prometheus
70,300,286
11
I am using a NewGaugeVec to report my metrics:

elapsed := prometheus.NewGaugeVec(prometheus.GaugeOpts{
    Name: "gogrinder_elapsed_ms",
    Help: "Current time elapsed of gogrinder teststep",
}, []string{"teststep", "user", "iteration", "timestamp"})
prometheus.MustRegister(elapsed)

All works fine, but I noticed that my custom exporter contains all the metrics from prometheus/go_collector.go:

# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0.00041795300000000004
go_gc_duration_seconds{quantile="0.25"} 0.00041795300000000004
go_gc_duration_seconds{quantile="0.5"} 0.00041795300000000004
...

I suspect that this is kind of a default behavior, but I did not find anything in the documentation on how to disable it. Any ideas on how to configure my custom exporter so that these default metrics disappear?
Well, the topic is rather old, but in case others have to deal with it: the following code works fine with the current codebase v0.9.0-pre1.

// [...] imports, metric initialization ...

func main() {
	// To get rid of any additional metrics we have to expose
	// our metrics with a custom (non-default) registry.
	r := prometheus.NewRegistry()
	r.MustRegister(myMetrics)
	handler := promhttp.HandlerFor(r, promhttp.HandlerOpts{})

	// [...] update metrics within a goroutine

	http.Handle("/metrics", handler)
	log.Fatal(http.ListenAndServe(":12345", nil))
}
Prometheus
35,117,993
10
I want to know why prometheus is not suitable for billing system. the Prometheus overview page says If you need 100% accuracy, such as for per-request billing, Prometheus is not a good choice as the collected data will likely not be detailed and complete enough. I don't really understand 100% accuracy. Does it mean "the prometheus's monitoring data is not accurate"?
Prometheus prefers reliability over 100% accuracy, so there are tradeoffs where a tiny amount of data may be lost rather than taking out the whole system. This is fine for monitoring, but rarely okay when money is involved. See also https://www.robustperception.io/monitoring-without-consensus/
Prometheus
44,518,575
10
I have different metrics in Prometheus, counter_metrics a and counter_metrics b, and I want a single stat for the count of all the different request metrics. How am I able to fetch this?

(sum(counter_metrics{instance="a", job="b"})) +
For the singlestat panel, you can just sum each metric and add the results together. Here is an example with two different metrics:

sum(prometheus_local_storage_memory_series) + sum(counters_logins)

Recommended reading, just in case you are doing anything with rates as well: https://www.robustperception.io/rate-then-sum-never-sum-then-rate/
Prometheus
45,343,371
10
Counters and Gauges allow for labels to be added to them. When I try to add labels to a Summary, I get an "incorrect number of labels" error. This is what I'm trying: private static final Summary latencySummary = Summary.build() .name("all_latencies") .help("all latencies.") .register(); latencySummary.labels("xyz_api_latency").observe(timer.elapsedSeconds()); I've looked at the Summary github source code, but can't find the answer. How are labels added to a Summary?
You need to provide the label name in the metric:

private static final Summary latencySummary = Summary.build()
    .name("latency_seconds")
    .help("All latencies.")
    .labelNames("api")
    .register();
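With the label name declared, the original call from the question then works when given one value per declared name, e.g.:

latencySummary.labels("xyz_api_latency").observe(timer.elapsedSeconds());

The "incorrect number of labels" error appears whenever the number of values passed to labels() does not match the number of names given to labelNames() — zero, in the original snippet.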
Prometheus
48,287,188
10
I used Helm to install Prometheus and Grafana in a Kubernetes cluster:

helm install stable/prometheus
helm install stable/grafana

It has an alertmanager service. But I saw a blog that introduced how to set up the alertmanager config with YAML files: http://blog.wercker.com/how-to-setup-alerts-on-prometheus

Is it possible to use the current way (installed by Helm) to set some alert rules and config for CPU and memory and send email, without creating other YAML files? I saw an introduction for the k8s ConfigMap for alertmanager: https://github.com/kubernetes/charts/tree/master/stable/prometheus#configmap-files but it's not clear how to use it.

Edit

I downloaded the source code of stable/prometheus to see what it does. In the values.yaml file I found:

serverFiles:
  alerts: ""
  rules: ""

  prometheus.yml: |-
    rule_files:
      - /etc/config/rules
      - /etc/config/alerts

    scrape_configs:
      - job_name: prometheus
        static_configs:
          - targets:
            - localhost:9090

https://github.com/kubernetes/charts/blob/master/stable/prometheus/values.yaml#L600

So I think I should write to this config file myself to define the alert rules and alertmanager here. But I'm not clear about this block:

rule_files:
  - /etc/config/rules
  - /etc/config/alerts

Maybe it means the path in the container. But there isn't any file there now. Should I add them here:

serverFiles:
  alert: ""
  rules: ""

Edit 2

After setting alert rules and the alertmanager configuration in values.yaml:

## Prometheus server ConfigMap entries
##
serverFiles:
  alerts: ""
  rules: |-
    #
    # CPU Alerts
    #
    ALERT HighCPU
      IF ((sum(node_cpu{mode=~"user|nice|system|irq|softirq|steal|idle|iowait"}) by (instance, job)) - ( sum(node_cpu{mode=~"idle|iowait"}) by (instance,job) ) ) / (sum(node_cpu{mode=~"user|nice|system|irq|softirq|steal|idle|iowait"}) by (instance, job)) * 100 > 95
      FOR 10m
      LABELS { service = "backend" }
      ANNOTATIONS {
        summary = "High CPU Usage",
        description = "This machine has really high CPU usage for over 10m",
      }

    # TEST
    ALERT APIHighRequestLatency
      IF api_http_request_latencies_second{quantile="0.5"} > 1
      FOR 1m
      ANNOTATIONS {
        summary = "High request latency on {{ $labels.instance }}",
        description = "{{ $labels.instance }} has a median request latency above 1s (current value: {{ $value }}s)",
      }

I ran helm install prometheus/ to install it.
I start port-forwarding for the alertmanager component:

export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 9093

Then, accessing http://127.0.0.1:9093 in the browser, I get these messages:

Forwarding from 127.0.0.1:9093 -> 9093
Handling connection for 9093
Handling connection for 9093
E0122 17:41:53.229084 7159 portforward.go:331] an error occurred forwarding 9093 -> 9093: error forwarding port 9093 to pod 6614ee96df545c266e5fff18023f8f7c87981f3340ee8913acf3d8da0e39e906, uid : exit status 1: 2018/01/22 08:37:54 socat[31237.140275133073152] E connect(5, AF=2 127.0.0.1:9093, 16): Connection refused
Handling connection for 9093
E0122 17:41:53.243511 7159 portforward.go:331] an error occurred forwarding 9093 -> 9093: error forwarding port 9093 to pod 6614ee96df545c266e5fff18023f8f7c87981f3340ee8913acf3d8da0e39e906, uid : exit status 1: 2018/01/22 08:37:54 socat[31238.140565602109184] E connect(5, AF=2 127.0.0.1:9093, 16): Connection refused
E0122 17:41:53.246011 7159 portforward.go:331] an error occurred forwarding 9093 -> 9093: error forwarding port 9093 to pod 6614ee96df545c266e5fff18023f8f7c87981f3340ee8913acf3d8da0e39e906, uid : exit status 1: 2018/01/22 08:37:54 socat[31239.140184300869376] E connect(5, AF=2 127.0.0.1:9093, 16): Connection refused
Handling connection for 9093
Handling connection for 9093
E0122 17:41:53.846399 7159 portforward.go:331] an error occurred forwarding 9093 -> 9093: error forwarding port 9093 to pod 6614ee96df545c266e5fff18023f8f7c87981f3340ee8913acf3d8da0e39e906, uid : exit status 1: 2018/01/22 08:37:55 socat[31250.140004515874560] E connect(5, AF=2 127.0.0.1:9093, 16): Connection refused
E0122 17:41:53.847821 7159 portforward.go:331] an error occurred forwarding 9093 -> 9093: error forwarding port 9093 to pod 6614ee96df545c266e5fff18023f8f7c87981f3340ee8913acf3d8da0e39e906, uid : exit status 1: 2018/01/22 08:37:55 socat[31251.140355466835712] E connect(5, AF=2 127.0.0.1:9093, 16): Connection refused
Handling connection for 9093
E0122 17:41:53.858521 7159 portforward.go:331] an error occurred forwarding 9093 -> 9093: error forwarding port 9093 to pod 6614ee96df545c266e5fff18023f8f7c87981f3340ee8913acf3d8da0e39e906, uid : exit status 1: 2018/01/22 08:37:55 socat[31252.140268300003072] E connect(5, AF=2 127.0.0.1:9093, 16): Connection refused

Why? When I check kubectl describe po illocutionary-heron-prometheus-alertmanager-587d747b9c-qwmm6, I get:
Status: Running IP: 172.17.0.10 Created By: ReplicaSet/illocutionary-heron-prometheus-alertmanager-587d747b9c Controlled By: ReplicaSet/illocutionary-heron-prometheus-alertmanager-587d747b9c Containers: prometheus-alertmanager: Container ID: docker://0808a3ecdf1fa94b36a1bf4b8f0d9d2933bc38afa8b25e09d0d86f036ac3165b Image: prom/alertmanager:v0.9.1 Image ID: docker-pullable://prom/alertmanager@sha256:ed926b227327eecfa61a9703702c9b16fc7fe95b69e22baa656d93cfbe098320 Port: 9093/TCP Args: --config.file=/etc/config/alertmanager.yml --storage.path=/data State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Error Exit Code: 1 Started: Mon, 22 Jan 2018 17:55:24 +0900 Finished: Mon, 22 Jan 2018 17:55:24 +0900 Ready: False Restart Count: 9 Readiness: http-get http://:9093/%23/status delay=30s timeout=30s period=10s #success=1 #failure=3 Environment: <none> Mounts: /data from storage-volume (rw) /etc/config from config-volume (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-h5b8l (ro) prometheus-alertmanager-configmap-reload: Container ID: docker://b4a349bf7be4ea78abe6899ad0173147f0d3f6ff1005bc513b2c0ac726385f0b Image: jimmidyson/configmap-reload:v0.1 Image ID: docker-pullable://jimmidyson/configmap-reload@sha256:2d40c2eaa6f435b2511d0cfc5f6c0a681eeb2eaa455a5d5ac25f88ce5139986e Port: <none> Args: --volume-dir=/etc/config --webhook-url=http://localhost:9093/-/reload State: Running Started: Mon, 22 Jan 2018 17:33:56 +0900 Ready: True Restart Count: 0 Environment: <none> Mounts: /etc/config from config-volume (ro) /var/run/secrets/kubernetes.io/serviceaccount from default-token-h5b8l (ro) Conditions: Type Status Initialized True Ready False PodScheduled True Volumes: config-volume: Type: ConfigMap (a volume populated by a ConfigMap) Name: illocutionary-heron-prometheus-alertmanager Optional: false storage-volume: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: illocutionary-heron-prometheus-alertmanager ReadOnly: false default-token-h5b8l: Type: Secret (a volume populated by a Secret) SecretName: default-token-h5b8l Optional: false QoS Class: BestEffort Node-Selectors: <none> Tolerations: <none> Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 29m (x2 over 29m) default-scheduler PersistentVolumeClaim is not bound: "illocutionary-heron-prometheus-alertmanager" Normal Scheduled 29m default-scheduler Successfully assigned illocutionary-heron-prometheus-alertmanager-587d747b9c-qwmm6 to minikube Normal SuccessfulMountVolume 29m kubelet, minikube MountVolume.SetUp succeeded for volume "config-volume" Normal SuccessfulMountVolume 29m kubelet, minikube MountVolume.SetUp succeeded for volume "pvc-fa84b197-ff4e-11e7-a584-0800270fb7fc" Normal SuccessfulMountVolume 29m kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-h5b8l" Normal Started 29m kubelet, minikube Started container Normal Created 29m kubelet, minikube Created container Normal Pulled 29m kubelet, minikube Container image "jimmidyson/configmap-reload:v0.1" already present on machine Normal Started 29m (x3 over 29m) kubelet, minikube Started container Normal Created 29m (x4 over 29m) kubelet, minikube Created container Normal Pulled 29m (x4 over 29m) kubelet, minikube Container image "prom/alertmanager:v0.9.1" already present on machine Warning BackOff 9m (x91 over 29m) kubelet, minikube Back-off restarting failed container Warning FailedSync 4m (x113 over 29m) kubelet, minikube Error syncing pod Edit 3 
alertmanager config in the values.yaml file:

## alertmanager ConfigMap entries
##
alertmanagerFiles:
  alertmanager.yml: |-
    global:
      resolve_timeout: 5m
      smtp_smarthost: smtp.gmail.com:587
      smtp_from: sender@gmail.com
      smtp_auth_username: sender@gmail.com
      smtp_auth_password: sender_password

    receivers:
      - name: default-receiver
        email_configs:
        - to: target_email@gmail.com

    route:
      group_wait: 10s
      group_interval: 5m
      receiver: default-receiver
      repeat_interval: 3h

This does not work; I get the errors above.

alertmanagerFiles:
  alertmanager.yml: |-
    global:
      # slack_api_url: ''

    receivers:
      - name: default-receiver
        # slack_configs:
        #  - channel: '@you'
        #    send_resolved: true

    route:
      group_wait: 10s
      group_interval: 5m
      receiver: default-receiver
      repeat_interval

This works without any error. So the problem was the email_configs config method.
The alerts and rules keys in the serverFiles group of the values.yaml file are mounted in the Prometheus container in the /etc/config folder. You can put the configuration you want in there (for example, take inspiration from the blog post you linked) and it will be used by Prometheus to handle the alerts. For example, a simple rule could be set like this:

serverFiles:
  alerts: |
    ALERT cpu_threshold_exceeded
      IF (100 * (1 - avg by(job)(irate(node_cpu{mode='idle'}[5m])))) > 80
      FOR 300s
      LABELS {
        severity = "warning",
      }
      ANNOTATIONS {
        summary = "CPU usage > 80% for {{ $labels.job }}",
        description = "CPU usage avg for last 5m: {{ $value }}",
      }
Prometheus
48,374,858
10
I run a small Flask application with gunicorn and multiple worker processes on Kubernetes. I would like to collect metrics from this application with Prometheus, but the metrics should only be accessible cluster-internally on a separate port (as this is required in our current setting).

For one gunicorn worker process I could use the start_http_server function from the Python client library to expose metrics on a different port than the Flask app. A minimal example may look like this:

from flask import Flask
from prometheus_client import start_http_server, Counter

NUM_REQUESTS = Counter("num_requests", "Example counter")

app = Flask(__name__)

@app.route('/')
def hello_world():
    NUM_REQUESTS.inc()
    return 'Hello, World!'

start_http_server(9001)

To start the app do the following:

gunicorn --bind 127.0.0.1:8082 -w 1 app:app

However this only works for one worker process. The documentation of the client library also has a section on how to use Prometheus and gunicorn with multiple worker processes, by specifying a shared directory for the worker processes as an environment variable where the metrics are written to (prometheus_multiproc_dir). So, following the documentation, the above example for multiple workers would be:

A gunicorn config file:

from prometheus_client import multiprocess

def worker_exit(server, worker):
    multiprocess.mark_process_dead(worker.pid)

The application file:

import os
from flask import Flask
from prometheus_client import Counter

NUM_REQUESTS = Counter("num_requests", "Example counter")

app = Flask(__name__)

@app.route('/')
def hello_world():
    NUM_REQUESTS.inc()
    return "[PID {}]: Hello World".format(os.getpid())

To start the app do:

rm -rf flask-metrics/
mkdir flask-metrics
export prometheus_multiproc_dir=flask-metrics
gunicorn --bind 127.0.0.1:8082 -c gunicorn_conf.py -w 3 app:app

However, in this setting I don't really know how to access the metrics stored in flask-metrics on a separate port. Is there a way to get this done? I am a bit new to these things, so if I am approaching the problem in the wrong way I am also happy for advice on the best way to address my case.
What you would want to do here is start up a separate process just to serve the metrics. Put the app function in https://github.com/prometheus/client_python#multiprocess-mode-gunicorn in an app of its own, and make sure that prometheus_multiproc_dir is the same for both it and the main application.
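A minimal sketch of such a standalone metrics server, assuming prometheus_multiproc_dir points at the same directory the gunicorn workers write to (the port 9001 is arbitrary):

import time

from prometheus_client import CollectorRegistry, start_http_server
from prometheus_client import multiprocess

if __name__ == "__main__":
    # Aggregate metrics from the shared multiprocess directory instead
    # of the default process-local registry.
    registry = CollectorRegistry()
    multiprocess.MultiProcessCollector(registry)

    # Serve the aggregated metrics on a separate, cluster-internal port.
    start_http_server(9001, registry=registry)
    while True:
        time.sleep(60)

Run it with the same prometheus_multiproc_dir environment variable as the gunicorn processes, and point Prometheus at port 9001.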
Prometheus
49,737,354
10
I am trying to drop all but a few whitelisted metrics in prometheus. I can persist them selectively with something like this: metric_relabel_configs: - source_labels: [__name__] regex: (?i)(metric1|metric2|metric3) action: keep However I want to drop all of the other, non-matching metrics. Is there any straightforward way to do this?
The keep action drops everything that doesn't match, so that single action is enough to do what you want.
Prometheus
50,108,835
10
I am using the following query to calculate the cost of nodes in our GKE cluster (new lines added for readability):

sum(
  kube_node_status_capacity_cpu_cores
  * on(node) group_left(label_cloud_google_com_gke_nodepool)
  kube_node_labels{
    label_cloud_google_com_gke_preemptible = "true"
  }
) * 5.10
+
sum(
  kube_node_status_capacity_cpu_cores
  * on(node) group_left(label_cloud_google_com_gke_nodepool)
  kube_node_labels{
    label_cloud_google_com_gke_preemptible = ""
  }
) * 16.95

It WORKS if the cluster has preemptible nodes, because there is at least one node with label_cloud_google_com_gke_preemptible = "true" and hence the first sum operator returns a value. It FAILS when the cluster has NO preemptible nodes, because there is no node with label_cloud_google_com_gke_preemptible = "true" and hence the first sum returns no value. Is it possible to modify the query so that the first sum returns a 0 value instead?
You can use or to insert a value if one is not present: ( sum( kube_node_status_capacity_cpu_cores * on(node) group_left(label_cloud_google_com_gke_nodepool) kube_node_labels{label_cloud_google_com_gke_preemptible = "true"} ) * 5.10 or vector(0) ) + sum( kube_node_status_capacity_cpu_cores * on(node) group_left(label_cloud_google_com_gke_nodepool) kube_node_labels{label_cloud_google_com_gke_preemptible = ""} ) * 16.95
Prometheus
50,420,467
10
Is there a way to round a decimal value in Grafana? The round() and ceil() functions take an instant vector, not a numeric value; for example, adding a query like ceil(1/15) will return 0.
It depends what you're using to display the data. For example, for a single stat or gauge you'll find the 'Decimals' option in Grafana, while for graphs it's in the 'Axes' options. You don't need to do this in the query for the metric.
Prometheus
50,634,445
10
I have an application that I have to monitor every 5 minutes. However, that application does not have a /metrics endpoint for Prometheus to scrape from directly, and I don't have any control over that application. As a workaround, I wrote a Python program to manually scrape the data and transform it into my own metrics, such as gauges and counters. Then I pushed those metrics to the pushgateway for Prometheus to scrape from. Everything worked fine locally. Now, I want to take a step further by using an AWS Lambda function to pull data and generate metrics for me every 5 minutes (so I don't have to keep the Python program running on my laptop). My question is: instead of using

push_to_gateway(gateway='localhost:9091', job="Monitor", registry=registry)

to push my metrics to the pushgateway, what would this be in the Lambda function? Also, I believe the pushgateway should be hosted somewhere for AWS to access. How do we achieve that?
You can create the Lambda and run it every 5 minutes with a CloudWatch rule. Inside the Lambda, instead of calling push_to_gateway, you can just curl the pushgateway (see an example here). Make sure that the gateway is accessible from the Lambda: either put it behind a public ELB or have them both in the same VPC.
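A minimal sketch of what the push can look like inside the Lambda handler, using only the standard library (the URL and metric name are hypothetical placeholders):

import urllib.request

PUSHGATEWAY_URL = "http://pushgateway.internal:9091/metrics/job/monitor"  # hypothetical address

def handler(event, context):
    # Prometheus text exposition format; the trailing newline is required.
    body = b"my_metric 3.14\n"
    req = urllib.request.Request(PUSHGATEWAY_URL, data=body, method="POST")
    urllib.request.urlopen(req, timeout=5)

Alternatively, push_to_gateway from prometheus_client works unchanged inside Lambda, as long as gateway= points at a reachable host instead of localhost.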
Prometheus
52,744,475
10
I have implemented a Java web service using Dropwizard. Now I want it to also expose Prometheus metrics. I have followed this pretty straight-forward example. However, the endpoint at http://localhost:9090/metrics is still not exposed. Here's the relevant code: Dependencies in the pom.xml: <dependency> <groupId>io.prometheus</groupId> <artifactId>simpleclient_dropwizard</artifactId> <version>0.5.0</version> </dependency> <!-- https://mvnrepository.com/artifact/io.prometheus/simpleclient_servlet --> <dependency> <groupId>io.prometheus</groupId> <artifactId>simpleclient_servlet</artifactId> <version>0.5.0</version> </dependency> The Java code: import io.dropwizard.Application; import io.dropwizard.setup.Bootstrap; import io.dropwizard.setup.Environment; import io.prometheus.client.CollectorRegistry; import io.prometheus.client.dropwizard.DropwizardExports; import io.prometheus.client.exporter.MetricsServlet; [...] public class MyApplication extends Application<MyServiceConfiguration> { @Override public void run(final MyServiceConfiguration configuration, final Environment environment) { final MyServiceResource resource = createResource(configuration); environment.jersey().register(resource); registerHealthChecks(environment, resource); registerMetrics(environment); } private void registerMetrics(Environment environment) { CollectorRegistry collectorRegistry = new CollectorRegistry(); collectorRegistry.register(new DropwizardExports(environment.metrics())); environment.admin().addServlet("metrics", new MetricsServlet(collectorRegistry)) .addMapping("/metrics"); } Any pointers to what I'm doing wrong?
Remember that the default Dropwizard configuration has the admin app on a different port (8081 by default). That's where you'd find the metrics servlet — with the code above, at http://localhost:8081/metrics.
Prometheus
52,931,289
10