qid (int64, 469–74.7M) | question (string, 36–37.8k chars) | date (string, 10 chars) | metadata (sequence) | response_j (string, 5–31.5k chars) | response_k (string, 10–31.6k chars)
---|---|---|---|---|---
56,066,816 | I have several data frames (with the same number of columns but different column names). I'm trying to create one data frame with the rows stacked below each other. I don't care about the column names for now (I can always rename them later). I saw various SO links, but they don't address this problem completely.
Note I have 21 data frames, so scalability is important. I was looking at
[this](https://stackoverflow.com/questions/45590866/python-pandas-concat-dataframes-with-different-columns-ignoring-column-names)
[screenshot](https://i.stack.imgur.com/U5W0x.jpg)
How I get df:
```
df = []
for f in files:
    data = pd.read_csv(f, usecols=[0, 1, 2, 3, 4])
    df.append(data)
``` | 2019/05/09 | [
"https://Stackoverflow.com/questions/56066816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9473446/"
] | I would do it at read time, adding `skiprows=1` and passing explicit `names`:
```
names = [0, 1, 2, 3, 4]  # whatever you want to call the columns
pd.concat([pd.read_csv(f, usecols=[0, 1, 2, 3, 4], skiprows=1, names=names)
           for f in files])
``` | Once you put all the data frames into a list, try this code.
```
import pandas as pd

frames = [df1, df2, df3]
result = pd.DataFrame(columns=df1.columns)
for frame in frames:
    # give each frame df1's column labels so the rows line up
    frame = frame.set_axis(df1.columns, axis=1)
    result = pd.concat([result, frame], ignore_index=True)
``` |
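Another common trick for frames that are already in memory is to sidestep the column names entirely by concatenating the underlying values (a sketch; `frames` stands for the 21 data frames from the question):
```
import pandas as pd

# dropping to .values discards the column labels, so the frames
# stack purely by position
stacked = pd.concat([pd.DataFrame(d.values) for d in frames],
                    ignore_index=True)
```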
6,774,561 | I have Python code that processes a lot of Apache logs (decompressing, parsing, number crunching, regexping, etc.). One parent process takes a list of files (up to a few million) and sends lists of files to parse to workers, using a multiprocessing pool.
I wonder if there are any guidelines / benchmarks / advice that can help me **estimate the ideal number of child processes**. I.e., is having one process per core better than launching a few hundred of them?
Currently, 3/4 of the script's execution time is spent reading and decompressing files; in terms of resources, the CPU is 100% loaded, with memory and I/O being OK. So I assume a lot can be gained with the proper multiprocessing settings. The script will be running on different machines / OSes, so OS-specific hints are welcome, too.
Also, is there any benefit in using threads rather than multiple processes? | 2011/07/21 | [
"https://Stackoverflow.com/questions/6774561",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/129289/"
] | > I wonder if there are any guidelines / benchmarks / advice that can help me estimate the ideal number of child processes?

No.

> Is having one process per core better than launching a few hundred of them?

You can never know *in advance*.
There are too many degrees of freedom.
You can only discover it empirically, by running experiments until you reach the level of performance you desire.

> Also, is there any benefit in using threads rather than multiple processes?

Rarely.
Threads don't help much. Multiple threads doing I/O will be locked up waiting while the process (as a whole) waits for the OS to finish the I/O request.
Your operating system does a very, very good job of scheduling processes. When you have I/O-intensive operations, you really want multiple processes. | I'll address the last question first. In CPython, it is next to impossible to make sizeable performance gains by distributing CPU-bound load across threads. This is due to the [Global Interpreter Lock](http://en.wikipedia.org/wiki/Global_Interpreter_Lock). In that respect, [`multiprocessing`](http://docs.python.org/library/multiprocessing.html) is a better bet.
As to estimating the ideal number of workers, here is my advice: run some experiments with your code, your data, your hardware and a varying number of workers, and see what you can glean from that in terms of speedups, bottlenecks etc. |
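A minimal sketch of the experiment both answers recommend (the file names and the `parse` stand-in are hypothetical; substitute your real worker function and file list):
```
import multiprocessing as mp
import time

def parse(path):
    # stand-in for the real decompress/parse/number-crunch work
    return sum(ord(c) for c in path)

if __name__ == '__main__':
    files = ['access.log.%d.gz' % i for i in range(1000)]
    for workers in (1, 2, 4, 8, 16):
        start = time.time()
        with mp.Pool(workers) as pool:
            pool.map(parse, files)
        print('%2d workers: %.3fs' % (workers, time.time() - start))
```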
6,774,561 | I have Python code that processes a lot of Apache logs (decompressing, parsing, number crunching, regexping, etc.). One parent process takes a list of files (up to a few million) and sends lists of files to parse to workers, using a multiprocessing pool.
I wonder if there are any guidelines / benchmarks / advice that can help me **estimate the ideal number of child processes**. I.e., is having one process per core better than launching a few hundred of them?
Currently, 3/4 of the script's execution time is spent reading and decompressing files; in terms of resources, the CPU is 100% loaded, with memory and I/O being OK. So I assume a lot can be gained with the proper multiprocessing settings. The script will be running on different machines / OSes, so OS-specific hints are welcome, too.
Also, is there any benefit in using threads rather than multiple processes? | 2011/07/21 | [
"https://Stackoverflow.com/questions/6774561",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/129289/"
] | Multiple cores do not provide better performance if the program is I/O bound. The performance might even become worse if the disk is serving two or more masters. | I'll address the last question first. In CPython, it is next to impossible to make sizeable performance gains by distributing CPU-bound load across threads. This is due to the [Global Interpreter Lock](http://en.wikipedia.org/wiki/Global_Interpreter_Lock). In that respect [`multiprocessing`](http://docs.python.org/library/multiprocessing.html) is a better bet.
As to estimating the ideal number of workers, here is my advice: run some experiments with your code, your data, your hardware and a varying number of workers, and see what you can glean from that in terms of speedups, bottlenecks etc. |
6,774,561 | I have Python code that processes a lot of Apache logs (decompressing, parsing, number crunching, regexping, etc.). One parent process takes a list of files (up to a few million) and sends lists of files to parse to workers, using a multiprocessing pool.
I wonder if there are any guidelines / benchmarks / advice that can help me **estimate the ideal number of child processes**. I.e., is having one process per core better than launching a few hundred of them?
Currently, 3/4 of the script's execution time is spent reading and decompressing files; in terms of resources, the CPU is 100% loaded, with memory and I/O being OK. So I assume a lot can be gained with the proper multiprocessing settings. The script will be running on different machines / OSes, so OS-specific hints are welcome, too.
Also, is there any benefit in using threads rather than multiple processes? | 2011/07/21 | [
"https://Stackoverflow.com/questions/6774561",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/129289/"
] | > I wonder if there are any guidelines / benchmarks / advice that can help me estimate the ideal number of child processes?

No.

> Is having one process per core better than launching a few hundred of them?

You can never know *in advance*.
There are too many degrees of freedom.
You can only discover it empirically, by running experiments until you reach the level of performance you desire.

> Also, is there any benefit in using threads rather than multiple processes?

Rarely.
Threads don't help much. Multiple threads doing I/O will be locked up waiting while the process (as a whole) waits for the OS to finish the I/O request.
Your operating system does a very, very good job of scheduling processes. When you have I/O-intensive operations, you really want multiple processes. | I'm not sure if current OSes do this, but it used to be that I/O buffers were allocated per-process, so dividing one process's buffer among multiple threads would lead to buffer thrashing. You're far better off using multiple processes for I/O-heavy tasks. |
6,774,561 | I have Python code that processes a lot of Apache logs (decompressing, parsing, number crunching, regexping, etc.). One parent process takes a list of files (up to a few million) and sends lists of files to parse to workers, using a multiprocessing pool.
I wonder if there are any guidelines / benchmarks / advice that can help me **estimate the ideal number of child processes**. I.e., is having one process per core better than launching a few hundred of them?
Currently, 3/4 of the script's execution time is spent reading and decompressing files; in terms of resources, the CPU is 100% loaded, with memory and I/O being OK. So I assume a lot can be gained with the proper multiprocessing settings. The script will be running on different machines / OSes, so OS-specific hints are welcome, too.
Also, is there any benefit in using threads rather than multiple processes? | 2011/07/21 | [
"https://Stackoverflow.com/questions/6774561",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/129289/"
] | Multiple cores do not provide better performance if the program is I/O bound. The performance might even become worse if the disk is serving two or more masters. | I'm not sure if current OSes do this, but it used to be that I/O buffers were allocated per-process, so dividing one process' buffer among multiple threads would lead to buffer thrashing. You're far better off using multiple processes for I/O-heavy tasks. |
28,191,221 | I used SQL to convert a social security number to an MD5 hash. I am wondering if there is a module or function in Python/pandas that can do the same thing.
My SQL script is:
```
CREATE OR REPLACE FUNCTION MD5HASH(STR IN VARCHAR2) RETURN VARCHAR2 IS
  V_CHECKSUM VARCHAR2(32);
BEGIN
  V_CHECKSUM := LOWER(RAWTOHEX(UTL_RAW.CAST_TO_RAW(SYS.DBMS_OBFUSCATION_TOOLKIT.MD5(INPUT_STRING => STR))));
  RETURN V_CHECKSUM;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    NULL;
  WHEN OTHERS THEN
    RAISE;
END MD5HASH;

SELECT HRPRO.MD5HASH('555555555') FROM DUAL
```
Thanks.
I apologize; now that I read back over my initial question, it is quite confusing.
I have a data frame that contains the following headings:
```
df[['ssno','regions','occ_ser','ethnicity','veteran','age','age_category']][:10]
```
where `ssno` is personal information that I would like to convert to an MD5 hash, and then create a new column in the dataframe.
Thanks, and sorry for the confusion.
Right now I have to send my file to Oracle, convert the SSN to a hash there, and then export it back out so that I can continue working with it in pandas. I want to eliminate this step. | 2015/01/28 | [
"https://Stackoverflow.com/questions/28191221",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2201603/"
] | Using the standard hashlib module:
```
import hashlib

hash = hashlib.md5()
hash.update(b'555555555')  # md5 operates on bytes
print(hash.hexdigest())
```
**output**
```
3665a76e271ada5a75368b99f774e404
```
As mentioned in timkofu's comment, you can also do this more simply, using
```
print(hashlib.md5(b'555555555').hexdigest())
```
The `.update()` method is useful when you want to generate a checksum in stages. Please see the [hashlib documentation](https://docs.python.org/2/library/hashlib.html) (or the [Python 3 version](https://docs.python.org/3/library/hashlib.html)) for further details. | hashlib with `md5` might be of interest:
```
import hashlib

hashlib.md5(b"Nobody inspects the spammish repetition").hexdigest()
```
output:
```
bb649c83dd1ea5c9d9dec9a18df0ffe9
```
Constructors for hash algorithms that are always present in this module are `md5(), sha1(), sha224(), sha256(), sha384(), and sha512()`.
If you want a stronger digest, you may try the `sha` series.
output for `sha224`:
```
'a4337bc45a8fc544c03f52dc550cd6e1e87021bc896588bd79e901e2'
```
For more details: [hashlib](https://docs.python.org/2/library/hashlib.html) |
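To tie the answers back to the pandas use case in the question, a minimal sketch (the `ssno` column name is taken from the question; the sample values are made up, and MD5 needs bytes, hence the `encode`):
```
import hashlib
import pandas as pd

df = pd.DataFrame({'ssno': ['555555555', '123456789']})
# hash each SSN and store the hex digest in a new column
df['ssno_md5'] = df['ssno'].apply(
    lambda s: hashlib.md5(s.encode('utf-8')).hexdigest())
print(df)
```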
39,361,496 | I am a Python coder but recently started a foray into Java. I am trying to understand a specific piece of code but am running into difficulties that I believe stem from not knowing Java very well yet.
Something that stood out to me is that sometimes, inside class definitions, methods are called twice. I am wondering why that is. For example:
The following code is taken from a file called ApplicationCreator.java. I noticed that the public class ApplicationCreator essentially instantiates itself twice, or am I missing something here?
```
public class ApplicationCreator<MR> implements
        IResourceObjectCreator<BinaryRuleSet<MR>> {

    private String type;

    public ApplicationCreator() {
        this("rule.application");
    }

    public ApplicationCreator(String type) {
        this.type = type;
    }
```
So here are my questions:
1) Why would the class instantiate itself inside the class?
2) Why would it do so twice? Or is this a way to set certain parameters of the ApplicationCreator class to new values?
Any advice would be highly appreciated. | 2016/09/07 | [
"https://Stackoverflow.com/questions/39361496",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2439540/"
] | The class is really not instantiating itself twice. Rather, the default constructor `ApplicationCreator()` (i.e. the one which takes no parameters) is simply calling the constructor which accepts an input string.
This ensures that an `ApplicationCreator` object will always have a type. When a type is not specified, the default value `rule.application` will be used.
This is an example of overloaded constructors. | This class has two constructors.
When a "method" has the same name as the class, it is a constructor.
Here the constructor is overloaded: the class is instantiated based on the parameters passed, so the user has a choice depending on their needs. |
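Since the asker is coming from Python, a rough Python analogue of this pattern may help (my addition, not part of the answers above): Python has no constructor overloading, so a default argument plays the role of the no-argument constructor delegating via `this("rule.application")`.
```
class ApplicationCreator:
    # the default argument stands in for the delegating no-arg constructor
    # (note: `type` shadows a built-in here, mirroring the Java field name)
    def __init__(self, type="rule.application"):
        self.type = type

ApplicationCreator()          # type == "rule.application"
ApplicationCreator("custom")  # type == "custom"
```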
39,361,496 | I am a Python coder but recently started a foray into Java. I am trying to understand a specific piece of code but am running into difficulties that I believe stem from not knowing Java very well yet.
Something that stood out to me is that sometimes, inside class definitions, methods are called twice. I am wondering why that is. For example:
The following code is taken from a file called ApplicationCreator.java. I noticed that the public class ApplicationCreator essentially instantiates itself twice, or am I missing something here?
```
public class ApplicationCreator<MR> implements
        IResourceObjectCreator<BinaryRuleSet<MR>> {

    private String type;

    public ApplicationCreator() {
        this("rule.application");
    }

    public ApplicationCreator(String type) {
        this.type = type;
    }
```
So here are my questions:
1) Why would the class instantiate itself inside the class?
2) Why would it do so twice? Or is this a way to set certain parameters of the ApplicationCreator class to new values?
Any advice would be highly appreciated. | 2016/09/07 | [
"https://Stackoverflow.com/questions/39361496",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2439540/"
] | 1) Why would the class instantiate itself inside the class?

> The class is not calling itself; it is providing a way for others to instantiate its object. Read about [constructors](https://docs.oracle.com/javase/tutorial/java/javaOO/constructors.html).

2) Why would it do so twice? Or is this a way to set certain parameters of the ApplicationCreator class to new values?

> As I said, it is a way to create the object. The first one will assign a default value to `type`, and the second will give others an option to assign a value. Read about [constructor overloading](http://beginnersbook.com/2013/05/constructor-overloading/).

`this` in the constructor will call another constructor of the same class, depending upon the argument types passed to [this](https://www.google.co.in/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=0ahUKEwiy59_vvfzOAhVBqY8KHe8kAd0QFggeMAE&url=http%3A%2F%2Fjavabeginnerstutorial.com%2Fcore-java-tutorial%2Fthis-keyword-in-java%2F&usg=AFQjCNEreh7rKcCt7xagztPRByWuDwyubw&bvm=bv.131783435,d.c2I). | It's not instantiating itself in the class, it's calling a different constructor in the class.
These are overloaded constructors. Constructors are somewhat method-like, but they are called on object creation. Consider this:
```
public class Example {

    private int instanceVariable;

    public Example() { //a constructor of Example
        instanceVariable = 3;
        System.out.println("New Example object was created!");
    }

    public static void main(String[] args) {
        Example ex = new Example();
    }
}
```
Here, we have an `Example` class which has a constructor. If you look in the `main` method, we create a new instance of `Example`. The program will output `New Example object was created!` and set the instance's `instanceVariable` to 3, because the constructor is called immediately *as it constructs the object* (hence the name).
Now if you take a look at your situation, the constructors have different arguments (and thus signatures), so the object can be constructed by giving no arguments or by supplying a String. Let me illustrate what this does:
```
public ApplicationCreator() {
    this("rule.application");
}
```
`this` refers to the class in this case, and invoking `this(args)` calls a constructor of the class. Since we have overloaded constructors, Java will call the constructor that most closely matches the passed arguments. Since, in this case, a String is passed, Java will see that `public ApplicationCreator(String)` is the constructor that matches and will invoke it.
Inside the no-argument constructor, it calls the other constructor with the String `rule.application`, so you can think of the no-argument constructor as passing a default value to the constructor taking in a String. |
39,361,496 | I am a Python coder but recently started a foray into Java. I am trying to understand a specific piece of code but am running into difficulties that I believe stem from not knowing Java very well yet.
Something that stood out to me is that sometimes, inside class definitions, methods are called twice. I am wondering why that is. For example:
The following code is taken from a file called ApplicationCreator.java. I noticed that the public class ApplicationCreator essentially instantiates itself twice, or am I missing something here?
```
public class ApplicationCreator<MR> implements
        IResourceObjectCreator<BinaryRuleSet<MR>> {

    private String type;

    public ApplicationCreator() {
        this("rule.application");
    }

    public ApplicationCreator(String type) {
        this.type = type;
    }
```
So here are my questions:
1) Why would the class instantiate itself inside the class?
2) Why would it do so twice? Or is this a way to set certain parameters of the ApplicationCreator class to new values?
Any advice would be highly appreciated. | 2016/09/07 | [
"https://Stackoverflow.com/questions/39361496",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2439540/"
] | These are two different constructors.
They have what is referred to as "different signatures".
Using them, you can construct an `ApplicationCreator` object in two different ways:
```
ApplicationCreator ac = new ApplicationCreator();
```
Or
```
ApplicationCreator ac = new ApplicationCreator("A String");
```
For further reading see: [The Java Class Constructor](http://www.homeandlearn.co.uk/java/class_constructor.html) | The class is really not instantiating itself twice. Rather, the default constructor `ApplicationCreator()` (i.e. the one which takes no parameters) is simply calling the constructor which accepts an input string.
This ensures that an `ApplicationCreator` object will always have a type. When a type is not specified, the default value `rule.application` will be used.
This is an example of overloaded constructors. |
39,361,496 | I am a Python coder but recently started a foray into Java. I am trying to understand a specific piece of code but am running into difficulties that I believe stem from not knowing Java very well yet.
Something that stood out to me is that sometimes, inside class definitions, methods are called twice. I am wondering why that is. For example:
The following code is taken from a file called ApplicationCreator.java. I noticed that the public class ApplicationCreator essentially instantiates itself twice, or am I missing something here?
```
public class ApplicationCreator<MR> implements
        IResourceObjectCreator<BinaryRuleSet<MR>> {

    private String type;

    public ApplicationCreator() {
        this("rule.application");
    }

    public ApplicationCreator(String type) {
        this.type = type;
    }
```
So here are my questions:
1) Why would the class instantiate itself inside the class?
2) Why would it do so twice? Or is this a way to set certain parameters of the ApplicationCreator class to new values?
Any advice would be highly appreciated. | 2016/09/07 | [
"https://Stackoverflow.com/questions/39361496",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2439540/"
] | It's not instantiating itself in the class, it's calling a different constructor in the class.
These are overloaded constructors. Constructors are somewhat method-like, but they are called on object creation. Consider this:
```
public class Example {

    private int instanceVariable;

    public Example() { //a constructor of Example
        instanceVariable = 3;
        System.out.println("New Example object was created!");
    }

    public static void main(String[] args) {
        Example ex = new Example();
    }
}
```
Here, we have an `Example` class which has a constructor. If you look in the `main` method, we create a new instance of `Example`. The program will output `New Example object was created!` and set the instance's `instanceVariable` to 3, because the constructor is called immediately *as it constructs the object* (hence the name).
Now if you take a look at your situation, the constructors have different arguments (and thus signatures), so the object can be constructed by giving no arguments or by supplying a String. Let me illustrate what this does:
```
public ApplicationCreator() {
    this("rule.application");
}
```
`this` refers to the class in this case, and invoking `this(args)` calls a constructor of the class. Since we have overloaded constructors, Java will call the constructor that most closely matches the passed arguments. Since, in this case, a String is passed, Java will see that `public ApplicationCreator(String)` is the constructor that matches and will invoke it.
Inside the no-argument constructor, it calls the other constructor with the String `rule.application`, so you can think of the no-argument constructor as passing a default value to the constructor taking in a String. | The class is really not instantiating itself twice. Rather, the default constructor `ApplicationCreator()` (i.e. the one which takes no parameters) is simply calling the constructor which accepts an input string.
This ensures that an `ApplicationCreator` object will always have a type. When a type is not specified, the default value `rule.application` will be used.
This is an example of overloaded constructors. |
39,361,496 | I am a Python coder but recently started a foray into Java. I am trying to understand a specific piece of code but am running into difficulties that I believe stem from not knowing Java very well yet.
Something that stood out to me is that sometimes, inside class definitions, methods are called twice. I am wondering why that is. For example:
The following code is taken from a file called ApplicationCreator.java. I noticed that the public class ApplicationCreator essentially instantiates itself twice, or am I missing something here?
```
public class ApplicationCreator<MR> implements
        IResourceObjectCreator<BinaryRuleSet<MR>> {

    private String type;

    public ApplicationCreator() {
        this("rule.application");
    }

    public ApplicationCreator(String type) {
        this.type = type;
    }
```
So here are my questions:
1) Why would the class instantiate itself inside the class?
2) Why would it do so twice? Or is this a way to set certain parameters of the ApplicationCreator class to new values?
Any advice would be highly appreciated. | 2016/09/07 | [
"https://Stackoverflow.com/questions/39361496",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2439540/"
] | 1) Why would the class instantiate itself inside the class?

> The class is not calling itself; it is providing a way for others to instantiate its object. Read about [constructors](https://docs.oracle.com/javase/tutorial/java/javaOO/constructors.html).

2) Why would it do so twice? Or is this a way to set certain parameters of the ApplicationCreator class to new values?

> As I said, it is a way to create the object. The first one will assign a default value to `type`, and the second will give others an option to assign a value. Read about [constructor overloading](http://beginnersbook.com/2013/05/constructor-overloading/).

`this` in the constructor will call another constructor of the same class, depending upon the argument types passed to [this](https://www.google.co.in/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=0ahUKEwiy59_vvfzOAhVBqY8KHe8kAd0QFggeMAE&url=http%3A%2F%2Fjavabeginnerstutorial.com%2Fcore-java-tutorial%2Fthis-keyword-in-java%2F&usg=AFQjCNEreh7rKcCt7xagztPRByWuDwyubw&bvm=bv.131783435,d.c2I). | This class has two constructors.
When a "method" has the same name as the class, it is a constructor.
Here the constructor is overloaded: the class is instantiated based on the parameters passed, so the user has a choice depending on their needs. |
39,361,496 | I am a Python coder but recently started a foray into Java. I am trying to understand a specific piece of code but am running into difficulties that I believe stem from not knowing Java very well yet.
Something that stood out to me is that sometimes, inside class definitions, methods are called twice. I am wondering why that is. For example:
The following code is taken from a file called ApplicationCreator.java. I noticed that the public class ApplicationCreator essentially instantiates itself twice, or am I missing something here?
```
public class ApplicationCreator<MR> implements
        IResourceObjectCreator<BinaryRuleSet<MR>> {

    private String type;

    public ApplicationCreator() {
        this("rule.application");
    }

    public ApplicationCreator(String type) {
        this.type = type;
    }
```
So here are my questions:
1) Why would the class instantiate itself inside the class?
2) Why would it do so twice? Or is this a way to set certain parameters of the ApplicationCreator class to new values?
Any advice would be highly appreciated. | 2016/09/07 | [
"https://Stackoverflow.com/questions/39361496",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2439540/"
] | These are two different constructors.
They have what is referred to as "different signatures".
Using them, you can construct an `ApplicationCreator` object in two different ways:
```
ApplicationCreator ac = new ApplicationCreator();
```
Or
```
ApplicationCreator ac = new ApplicationCreator("A String");
```
For further reading see: [The Java Class Constructor](http://www.homeandlearn.co.uk/java/class_constructor.html) | It's called a constructor. And it's not "called twice"; one simply redirects to the other via `this()` with the given parameters.
Essentially, the first form, without parameters, simply supplies a default value. Otherwise, you construct an instance with the given `String type`. |
39,361,496 | I am a Python coder but recently started a foray into Java. I am trying to understand a specific piece of code but am running into difficulties that I believe stem from not knowing Java very well yet.
Something that stood out to me is that sometimes, inside class definitions, methods are called twice. I am wondering why that is. For example:
The following code is taken from a file called ApplicationCreator.java. I noticed that the public class ApplicationCreator essentially instantiates itself twice, or am I missing something here?
```
public class ApplicationCreator<MR> implements
        IResourceObjectCreator<BinaryRuleSet<MR>> {

    private String type;

    public ApplicationCreator() {
        this("rule.application");
    }

    public ApplicationCreator(String type) {
        this.type = type;
    }
```
So here are my questions:
1) Why would the class instantiate itself inside the class?
2) Why would it do so twice? Or is this a way to set certain parameters of the ApplicationCreator class to new values?
Any advice would be highly appreciated. | 2016/09/07 | [
"https://Stackoverflow.com/questions/39361496",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2439540/"
] | 1) Why would the class instantiate itself inside the class?

> The class is not calling itself; it is providing a way for others to instantiate its object. Read about [constructors](https://docs.oracle.com/javase/tutorial/java/javaOO/constructors.html).

2) Why would it do so twice? Or is this a way to set certain parameters of the ApplicationCreator class to new values?

> As I said, it is a way to create the object. The first one will assign a default value to `type`, and the second will give others an option to assign a value. Read about [constructor overloading](http://beginnersbook.com/2013/05/constructor-overloading/).

`this` in the constructor will call another constructor of the same class, depending upon the argument types passed to [this](https://www.google.co.in/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=0ahUKEwiy59_vvfzOAhVBqY8KHe8kAd0QFggeMAE&url=http%3A%2F%2Fjavabeginnerstutorial.com%2Fcore-java-tutorial%2Fthis-keyword-in-java%2F&usg=AFQjCNEreh7rKcCt7xagztPRByWuDwyubw&bvm=bv.131783435,d.c2I). | The class is really not instantiating itself twice. Rather, the default constructor `ApplicationCreator()` (i.e. the one which takes no parameters) is simply calling the constructor which accepts an input string.
This ensures that an `ApplicationCreator` object will always have a type. When a type is not specified, the default value `rule.application` will be used.
This is an example of overloaded constructors. |
39,361,496 | I am a Python coder but recently started a foray into Java. I am trying to understand a specific piece of code but am running into difficulties that I believe stem from not knowing Java very well yet.
Something that stood out to me is that sometimes, inside class definitions, methods are called twice. I am wondering why that is. For example:
The following code is taken from a file called ApplicationCreator.java. I noticed that the public class ApplicationCreator essentially instantiates itself twice, or am I missing something here?
```
public class ApplicationCreator<MR> implements
        IResourceObjectCreator<BinaryRuleSet<MR>> {

    private String type;

    public ApplicationCreator() {
        this("rule.application");
    }

    public ApplicationCreator(String type) {
        this.type = type;
    }
```
So here are my questions:
1) Why would the class instantiate itself inside the class?
2) Why would it do so twice? Or is this a way to set certain parameters of the ApplicationCreator class to new values?
Any advice would be highly appreciated. | 2016/09/07 | [
"https://Stackoverflow.com/questions/39361496",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2439540/"
] | 1) Why would the class instantiate itself inside the class?

> The class is not calling itself; it is providing a way for others to instantiate its object. Read about [constructors](https://docs.oracle.com/javase/tutorial/java/javaOO/constructors.html).

2) Why would it do so twice? Or is this a way to set certain parameters of the ApplicationCreator class to new values?

> As I said, it is a way to create the object. The first one will assign a default value to `type`, and the second will give others an option to assign a value. Read about [constructor overloading](http://beginnersbook.com/2013/05/constructor-overloading/).

`this` in the constructor will call another constructor of the same class, depending upon the argument types passed to [this](https://www.google.co.in/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=0ahUKEwiy59_vvfzOAhVBqY8KHe8kAd0QFggeMAE&url=http%3A%2F%2Fjavabeginnerstutorial.com%2Fcore-java-tutorial%2Fthis-keyword-in-java%2F&usg=AFQjCNEreh7rKcCt7xagztPRByWuDwyubw&bvm=bv.131783435,d.c2I). | These are two different constructors.
They have what is referred to as "different signatures".
Using them, you can construct an `ApplicationCreator` object in two different ways:
```
ApplicationCreator ac = new ApplicationCreator();
```
Or
```
ApplicationCreator ac = new ApplicationCreator("A String");
```
For further reading see: [The Java Class Constructor](http://www.homeandlearn.co.uk/java/class_constructor.html) |
39,361,496 | I am a Python coder but recently started a foray into Java. I am trying to understand a specific piece of code but am running into difficulties that I believe stem from not knowing Java very well yet.
Something that stood out to me is that sometimes, inside class definitions, methods are called twice. I am wondering why that is. For example:
The following code is taken from a file called ApplicationCreator.java. I noticed that the public class ApplicationCreator essentially instantiates itself twice, or am I missing something here?
```
public class ApplicationCreator<MR> implements
        IResourceObjectCreator<BinaryRuleSet<MR>> {

    private String type;

    public ApplicationCreator() {
        this("rule.application");
    }

    public ApplicationCreator(String type) {
        this.type = type;
    }
```
So here are my questions:
1) Why would the class instantiate itself inside the class?
2) Why would it do so twice? Or is this a way to set certain parameters of the ApplicationCreator class to new values?
Any advice would be highly appreciated. | 2016/09/07 | [
"https://Stackoverflow.com/questions/39361496",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2439540/"
] | 1) Why would the class instantiate itself inside the class?

> The class is not calling itself; it is providing a way for others to instantiate its object. Read about [constructors](https://docs.oracle.com/javase/tutorial/java/javaOO/constructors.html).

2) Why would it do so twice? Or is this a way to set certain parameters of the ApplicationCreator class to new values?

> As I said, it is a way to create the object. The first one will assign a default value to `type`, and the second will give others an option to assign a value. Read about [constructor overloading](http://beginnersbook.com/2013/05/constructor-overloading/).

`this` in the constructor will call another constructor of the same class, depending upon the argument types passed to [this](https://www.google.co.in/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=0ahUKEwiy59_vvfzOAhVBqY8KHe8kAd0QFggeMAE&url=http%3A%2F%2Fjavabeginnerstutorial.com%2Fcore-java-tutorial%2Fthis-keyword-in-java%2F&usg=AFQjCNEreh7rKcCt7xagztPRByWuDwyubw&bvm=bv.131783435,d.c2I). | It's called a constructor. And it's not "called twice"; one simply redirects to the other via `this()` with the given parameters.
Essentially, the first form, without parameters, simply supplies a default value. Otherwise, you construct an instance with the given `String type`. |
39,361,496 | I am a Python coder but recently started a foray into Java. I am trying to understand a specific piece of code but am running into difficulties that I believe stem from not knowing Java very well yet.
Something that stood out to me is that sometimes, inside class definitions, methods are called twice. I am wondering why that is. For example:
The following code is taken from a file called ApplicationCreator.java. I noticed that the public class ApplicationCreator essentially instantiates itself twice, or am I missing something here?
```
public class ApplicationCreator<MR> implements
        IResourceObjectCreator<BinaryRuleSet<MR>> {

    private String type;

    public ApplicationCreator() {
        this("rule.application");
    }

    public ApplicationCreator(String type) {
        this.type = type;
    }
```
So here are my questions:
1) Why would the class instantiate itself inside the class?
2) Why would it do so twice? Or is this a way to set certain parameters of the ApplicationCreator class to new values?
Any advice would be highly appreciated. | 2016/09/07 | [
"https://Stackoverflow.com/questions/39361496",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2439540/"
] | The class is really not instantiating itself twice. Rather, the default constructor `ApplicationCreator()` (i.e. the one which takes no parameters) is simply calling the constructor which accepts an input string.
This ensures that an `ApplicationCreator` object will always have a type. When a type is not specified, the default value `rule.application` will be used.
This is an example of overloaded constructors. | It's called a constructor. And it's not "called twice"; one simply redirects to the other via `this()` with the given parameters.
Essentially, the first form, without parameters, simply supplies a default value. Otherwise, you construct an instance with the given `String type`. |
4,088,471 | I have a dictionary in the view layer that I am passing to my templates. The dictionary values are (mostly) lists, although a few scalars also reside in the dictionary. The lists, if present, are initialized to None.
The None values are being printed as 'None' in the template, so I wrote this little function to clean out the Nones before passing the dictionary of lists to the template. Since I am new to Python, I am wondering if there could be a more Pythonic way of doing this?
```
# Clean the table up and turn Nones into ''
for k, v in table.items():
    #debug_str = 'key: %s, value: %s' % (k,v)
    #logging.debug(debug_str)
    try:
        for i, val in enumerate(v):
            if val == None: v[i] = ''
    except TypeError:
        continue;
``` | 2010/11/03 | [
"https://Stackoverflow.com/questions/4088471",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/461722/"
] | Have you looked at `defaultdict` within collections? You'd have a dictionary formed via
```
defaultdict(list)
```
which initializes an empty list when a key is queried and that key does not exist. | ```
filtered_dict = dict((k, v) for k, v in table.items() if v is not None)
```
or in Python 2.7+, use the dictionary comprehension syntax:
```
filtered_dict = {k: v for k, v in table.items() if v is not None}
``` |
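Tying the answers back to the original task (replace `None` inside the list values and leave scalars alone), a comprehension-based sketch:
```
cleaned = {k: ['' if item is None else item for item in v]
              if isinstance(v, list) else v
           for k, v in table.items()}
```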
45,125,441 | I have a dataframe that has a column of boroughs visited (among many other columns):
```
Index User Boroughs_visited
0 Eminem Manhattan, Bronx
1 BrSpears NaN
2 Elvis Brooklyn
3 Adele Queens, Brooklyn
```
**I want to create a third column that shows which User visited Brooklyn**, so I wrote the slowest code possible in Python:
```
df['Brooklyn'] = 0

def borough():
    for index, x in enumerate(df['Boroughs_visited']):
        if pd.isnull(x):
            continue
        elif re.search(r'\bBrooklyn\b', x):
            df['Brooklyn'][index] = 1

borough()
```
Resulting in:
```
Index User Boroughs_visited Brooklyn
0 Eminem Manhattan, Bronx 0
1 BrSpears NaN 0
2 Elvis Brooklyn 1
3 Adele Queens, Brooklyn 1
```
**It took my computer 15 seconds to run this for 2000 rows. Is there a faster way of doing this?** | 2017/07/16 | [
"https://Stackoverflow.com/questions/45125441",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8005777/"
] | Let's use the `.str` accessor with `contains` and `fillna`:
```
df['Brooklyn'] = (df.Boroughs_visited.str.contains('Brooklyn') * 1).fillna(0)
```
Or another format of the same statement:
```
df['Brooklyn'] = df.Boroughs_visited.str.contains('Brooklyn').mul(1, fill_value=0)
```
Output:
```
Index User Boroughs_visited Brooklyn
0 0 Eminem Manhattan, Bronx 0
1 1 BrSpears NaN None 0
2 2 Elvis Brooklyn 1
3 3 Adele Queens, Brooklyn 1
``` | You can get all Boroughs for the price of one
```
df.join(df.Boroughs_visited.str.get_dummies(sep=', '))
Index User Boroughs_visited Bronx Brooklyn Manhattan Queens
0 0 Eminem Manhattan, Bronx 1 0 1 0
1 1 BrSpears NaN 0 0 0 0
2 2 Elvis Brooklyn 0 1 0 0
3 3 Adele Queens, Brooklyn 0 1 0 1
```
But if you really, really just wanted Brooklyn
```
df.join(df.Boroughs_visited.str.get_dummies(sep=', ').Brooklyn)
Index User Boroughs_visited Brooklyn
0 0 Eminem Manhattan, Bronx 0
1 1 BrSpears NaN 0
2 2 Elvis Brooklyn 1
3 3 Adele Queens, Brooklyn 1
``` |
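On a reasonably recent pandas, the same idea can be written even more directly; `na=False` treats missing `Boroughs_visited` values as "did not visit" before the cast (a sketch with sample data mirroring the question):
```
import pandas as pd

df = pd.DataFrame({
    'User': ['Eminem', 'BrSpears', 'Elvis', 'Adele'],
    'Boroughs_visited': ['Manhattan, Bronx', None, 'Brooklyn', 'Queens, Brooklyn'],
})
# boolean match -> 0/1 column, with NaN rows counted as 0
df['Brooklyn'] = df['Boroughs_visited'].str.contains('Brooklyn', na=False).astype(int)
```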
13,409,559 | I'm trying to replace all single quotes with double quotes, but leave behind all escaped single quotes. Does anyone know a simple way to do this with Python regexes?
```
Input:
"{ 'name': 'Skrillex', 'Genre':'Dubstep', 'Bass': 'Heavy', 'thoughts': 'this\'s ahmazing'}"
output:
"{ "name": "Skrillex", "Genre": "Dubstep", "Bass": "Heavy", "thoughts": "this\'s ahmazing"}"
``` | 2012/11/16 | [
"https://Stackoverflow.com/questions/13409559",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1432960/"
] | This is kind of...odd, but it may work. Remember to preface your string with `r` to denote a raw string, so that the backslash before the quote stays in the string:
```
In [19]: s = r"{ 'name': 'Skrillex', 'Genre':'Dubstep', 'Bass': 'Heavy', 'thoughts': 'this\'s ahmazing'}"
In [20]: s.replace("\\'", 'REPLACEMEOHYEAH').replace("'", '"').replace('REPLACEMEOHYEAH', "\\'")
Out[20]: '{ "name": "Skrillex", "Genre":"Dubstep", "Bass": "Heavy", "thoughts": "this\'s ahmazing"}'
```
`REPLACEMEOHYEAH` is the token to replace, so it would need to be something that is not going to appear in your actual string. The response format looks like something that could be parsed in a more natural way, but if that isn't an option, this should work. | 1. Replace every `\'` with a magic word
2. Replace every `'` with `"`
3. Replace the magic words back with `\'` |
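A minimal sketch of that three-step approach, using a control character as the "magic word" on the assumption that it cannot occur in the input:
```
s = r"{ 'name': 'Skrillex', 'thoughts': 'this\'s ahmazing'}"

SENTINEL = '\x00'                     # assumed absent from the input
out = (s.replace("\\'", SENTINEL)     # 1. hide escaped single quotes
        .replace("'", '"')            # 2. convert the remaining quotes
        .replace(SENTINEL, "\\'"))    # 3. restore the escaped quotes
print(out)
```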
68,570,102 | Basically, I'm trying to write a program to get the largest number from the user's inputs. This is my first time using a for loop and I'm pretty new to Python. This is my code:
```
session_live = True
numbers = []
a = 0

def largest_num(arr, n):
    #Create a variable to hold the max number
    max = arr[0]
    #Using for loop for 1st time to check for largest number
    for i in range(1, n):
        if arr[i] > max:
            max = arr[i]
        #Returning max's value using return
        return max

while session_live:
    print("Tell us a number")
    num = int(input())
    numbers.insert(a, num)
    a += 1
    print("Continue? (Y/N)")
    confirm = input()
    if confirm == "Y":
        pass
    elif confirm == "N":
        session_live = False
        #Now I'm running the function
        arr = numbers
        n = len(arr)
        ans = largest_num(arr, n)
        print("Largest number is", ans)
    else:
        print(":/")
        session_live = False
```
When I try running my code this is what happens:
```
Tell us a number
9
Continue? (Y/N)
Y
Tell us a number
8
Continue? (Y/N)
Y
Tell us a number
10
Continue? (Y/N)
N
Largest number is 9
```
Any fixes? | 2021/07/29 | [
"https://Stackoverflow.com/questions/68570102",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16420917/"
] | The error in your `largest_num` function is that it returns in the first iteration -- hence it will only return the larger of the first two numbers.
Using the builtin `max()` function makes life quite a bit easier; any time you reimplement a function that already exists, you're creating work for yourself and (as you've just discovered) it's another place for bugs to creep into your program.
Here's the same program using `max()` instead of `largest_num()`, and removing a few unnecessary variables:
```
numbers = []

while True:
    print("Tell us a number")
    numbers.append(int(input()))
    print("Continue? (Y/N)")
    confirm = input()
    if confirm == "Y":
        continue
    if confirm == "N":
        print(f"Largest number is {max(numbers)}")
    else:
        print(":/")
    break
``` | I made it without using the built-in function 'max'.
It updates the 'maxNum' variable with the largest number by comparing each element in the for loop.
```py
numbers = []

while True:
    print("Tell us a number")
    numbers.append(int(input()))
    print("Continue? (Y/N)")
    confirm = input()
    if confirm == "Y":
        continue
    if confirm == "N":
        maxNum = numbers[0]
        for i in numbers:
            if i > maxNum:
                maxNum = i
        print("Largest number is", maxNum)
    else:
        print(":/")
    break
``` |
68,570,102 | Basically, I'm trying to write a program to get the largest number from the user's inputs. This is my first time using a for loop and I'm pretty new to Python. This is my code:
```
session_live = True
numbers = []
a = 0

def largest_num(arr, n):
    #Create a variable to hold the max number
    max = arr[0]
    #Using for loop for 1st time to check for largest number
    for i in range(1, n):
        if arr[i] > max:
            max = arr[i]
        #Returning max's value using return
        return max

while session_live:
    print("Tell us a number")
    num = int(input())
    numbers.insert(a, num)
    a += 1
    print("Continue? (Y/N)")
    confirm = input()
    if confirm == "Y":
        pass
    elif confirm == "N":
        session_live = False
        #Now I'm running the function
        arr = numbers
        n = len(arr)
        ans = largest_num(arr, n)
        print("Largest number is", ans)
    else:
        print(":/")
        session_live = False
```
When I try running my code this is what happens:
```
Tell us a number
9
Continue? (Y/N)
Y
Tell us a number
8
Continue? (Y/N)
Y
Tell us a number
10
Continue? (Y/N)
N
Largest number is 9
```
Any fixes? | 2021/07/29 | [
"https://Stackoverflow.com/questions/68570102",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16420917/"
] | So, first things first:
* the use of `max` as a variable name should be avoided, as it shadows Python's built-in `max` function
And coming to your fix: you compare and return inside the loop, so only the first comparison ever happens; the indentation is the key here. You have to wait for the loop to complete its job, then return the value.
* There are many built-in ways to do the job, but here is your implementation (a bit modified):
```py
session_live = True
numbers = []
a = 0

def largest_num(arr, n):
    #Create a variable to hold the max number
    max_number = arr[0]
    #Using for loop for 1st time to check for largest number
    for i in range(1, n):
        if arr[i] > max_number:
            max_number = arr[i]
    # --- The indentation matters
    #Returning max's value using return
    return max_number

while session_live:
    print("Tell us a number")
    num = int(input())
    numbers.insert(a, num)
    a += 1
    print("Continue? (Y/N)")
    confirm = input()
    if confirm == "Y":
        pass
    elif confirm == "N":
        session_live = False
        #Now I'm running the function
        arr = numbers
        n = len(arr)
        ans = largest_num(arr, n)
        print("Largest number is", ans)
    else:
        print(":/")
        session_live = False
``` | I made it without using the built-in function 'max'.
It updates the 'maxNum' variable with the largest number by comparing each element in the for loop.
```py
numbers = []

while True:
    print("Tell us a number")
    numbers.append(int(input()))
    print("Continue? (Y/N)")
    confirm = input()
    if confirm == "Y":
        continue
    if confirm == "N":
        maxNum = numbers[0]
        for i in numbers:
            if i > maxNum:
                maxNum = i
        print("Largest number is", maxNum)
    else:
        print(":/")
    break
``` |
5,633,067 | I have a Pylons project where I need to update some in-memory structures periodically. This should be done on demand. I decided to write a signal handler for this: the user sends `SIGUSR1` to the main Pylons thread and the project handles it.
This works, except that after handling the signal the server crashes with the following exception:
```
File "/usr/lib/python2.6/SocketServer.py", line 264, in handle_request
fd_sets = select.select([self], [], [], timeout)
select.error: (4, 'Interrupted system call')
```
Is it possible to fix this?
TIA. | 2011/04/12 | [
"https://Stackoverflow.com/questions/5633067",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/408426/"
] | Yes, it is possible, but not easy using the stock Python libraries. This is due to Python translating all OS errors to exceptions. However, EINTR should really cause a retry of the system call used. Whenever you start using signals in Python you will see this error sporadically.
I have [code that fixes this](http://code.google.com/p/pycopia/source/browse/trunk/aid/pycopia/socket.py) (SafeSocket) that works by forking Python modules and adding that functionality. But it needs to be added everywhere system calls are used. So it's possible, but not easy. You can use my open-source code, though; it may save you years of work. ;-)
The basic pattern is this (implemented as a system call decorator):
```
# decorator to make system call methods safe from EINTR
def systemcall(meth):
    # have to import this way to avoid a circular import
    from _socket import error as SocketError
    def systemcallmeth(*args, **kwargs):
        while 1:
            try:
                rv = meth(*args, **kwargs)
            except EnvironmentError as why:
                if why.args and why.args[0] == EINTR:
                    continue
                else:
                    raise
            except SocketError as why:
                if why.args and why.args[0] == EINTR:
                    continue
                else:
                    raise
            else:
                break
        return rv
    return systemcallmeth
```
You could also just use that around your select call. | A fix that at least works for me, from a [12-year-old python-dev list post](http://mail.python.org/pipermail/python-dev/2000-October/009671.html):
```
while True:
    try:
        readable, writable, exceptional = select.select(inputs, outputs, inputs, timeout)
    except select.error, v:
        if v[0] != errno.EINTR: raise
    else:
        break
```
The details of the actual select line aren't important... your `fd_sets = select.select([self], [], [], timeout)` line should work exactly the same.
The important bit is to check for EINTR and retry/loop if that is caught.
Oh, and don't forget to import errno. |
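For the decorator approach in the first answer, a hypothetical usage sketch (it assumes the `systemcall` decorator above is defined in the same module and that `EINTR` has been imported from `errno` there):
```
import select
from errno import EINTR

@systemcall
def safe_select(rlist, wlist, xlist, timeout):
    # retried automatically if interrupted by a signal
    return select.select(rlist, wlist, xlist, timeout)
```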
57,507,832 | I'm facing an issue with allocating huge arrays in numpy on Ubuntu 18 while not facing the same issue on MacOS.
I am trying to allocate memory for a numpy array with shape `(156816, 36, 53806)`
with
```
np.zeros((156816, 36, 53806), dtype='uint8')
```
and while I'm getting an error on Ubuntu OS
```
>>> import numpy as np
>>> np.zeros((156816, 36, 53806), dtype='uint8')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
numpy.core._exceptions.MemoryError: Unable to allocate array with shape (156816, 36, 53806) and data type uint8
```
I'm not getting it on MacOS:
```
>>> import numpy as np
>>> np.zeros((156816, 36, 53806), dtype='uint8')
array([[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
...,
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]]], dtype=uint8)
```
I've read somewhere that `np.zeros` shouldn't really allocate all the memory needed for the array, but only for the non-zero elements. And the Ubuntu machine has 64 GB of memory, while my MacBook Pro has only 16 GB.
versions:
```
Ubuntu
os -> ubuntu mate 18
python -> 3.6.8
numpy -> 1.17.0
mac
os -> 10.14.6
python -> 3.6.4
numpy -> 1.17.0
```
PS: also failed on Google Colab | 2019/08/15 | [
"https://Stackoverflow.com/questions/57507832",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5123537/"
] | I had this same problem on Windows and came across this solution. So if someone comes across this problem on Windows, the solution for me was to increase the [pagefile](https://whatis.techtarget.com/definition/pagefile) size, as it was a memory overcommitment problem for me too.
Windows 8
1. On the Keyboard Press the WindowsKey + X then click System in the popup menu
2. Tap or click Advanced system settings. You might be asked for an admin password or to confirm your choice
3. On the Advanced tab, under Performance, tap or click Settings.
4. Tap or click the Advanced tab, and then, under Virtual memory, tap or click Change
5. Clear the Automatically manage paging file size for all drives check box.
6. Under Drive [Volume Label], tap or click the drive that contains the paging file you want to change
7. Tap or click Custom size, enter a new size in megabytes in the initial size (MB) or Maximum size (MB) box, tap or click Set, and then tap or click OK
8. Reboot your system
Windows 10
1. Press the Windows key
2. Type SystemPropertiesAdvanced
3. Click Run as administrator
4. Under Performance, click Settings
5. Select the Advanced tab
6. Select Change...
7. Uncheck Automatically managing paging file size for all drives
8. Then select Custom size and fill in the appropriate size
9. Press Set then press OK then exit from the Virtual Memory, Performance Options, and System Properties Dialog
10. Reboot your system
Note: I did not have enough memory on my system for the ~282 GB in this example, but for my particular case this worked.
**EDIT**
From [here](https://www.geeksinphoenix.com/blog/post/2016/05/10/how-to-manage-windows-10-virtual-memory.aspx) the suggested recommendations for page file size:
>
> There is a formula for calculating the correct pagefile size. Initial size is one and a half (1.5) x the amount of total system memory. Maximum size is three (3) x the initial size. So let's say you have 4 GB (1 GB = 1,024 MB x 4 = 4,096 MB) of memory. The initial size would be 1.5 x 4,096 = 6,144 MB and the maximum size would be 3 x 6,144 = 18,432 MB.
>
>
>
Some things to keep in mind from [here](https://www.computerhope.com/issues/ch001293.htm):
>
> However, this does not take into consideration other important factors and system settings that may be unique to your computer. Again, let Windows choose what to use instead of relying on some arbitrary formula that worked on a different computer.
>
>
>
Also:
>
> Increasing page file size may help prevent instabilities and crashing in Windows. However, a hard drive read/write times are much slower than what they would be if the data were in your computer memory. Having a larger page file is going to add extra work for your hard drive, causing everything else to run slower. Page file size should only be increased when encountering out-of-memory errors, and only as a temporary fix. A better solution is to adding more memory to the computer.
>
>
> | Changing the data type to one that uses less memory works. For me, I changed the data type to numpy.uint8:
```
data['label'] = data['label'].astype(np.uint8)
``` |
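To make the size difference concrete (a small illustration, not part of the original answer): float64 uses 8 bytes per element versus 1 byte for uint8, so the cast shrinks the array eightfold.
```
import numpy as np

a = np.zeros((1000, 1000))   # float64 by default
b = a.astype(np.uint8)
print(a.nbytes, b.nbytes)    # 8000000 vs 1000000
```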
57,507,832 | I'm facing an issue with allocating huge arrays in numpy on Ubuntu 18 while not facing the same issue on MacOS.
I am trying to allocate memory for a numpy array with shape `(156816, 36, 53806)`
with
```
np.zeros((156816, 36, 53806), dtype='uint8')
```
and while I'm getting an error on Ubuntu OS
```
>>> import numpy as np
>>> np.zeros((156816, 36, 53806), dtype='uint8')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
numpy.core._exceptions.MemoryError: Unable to allocate array with shape (156816, 36, 53806) and data type uint8
```
I'm not getting it on MacOS:
```
>>> import numpy as np
>>> np.zeros((156816, 36, 53806), dtype='uint8')
array([[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
...,
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]]], dtype=uint8)
```
I've read somewhere that `np.zeros` shouldn't really allocate all the memory needed for the array, but only for the non-zero elements. And the Ubuntu machine has 64 GB of memory, while my MacBook Pro has only 16 GB.
versions:
```
Ubuntu
os -> ubuntu mate 18
python -> 3.6.8
numpy -> 1.17.0
mac
os -> 10.14.6
python -> 3.6.4
numpy -> 1.17.0
```
PS: also failed on Google Colab | 2019/08/15 | [
"https://Stackoverflow.com/questions/57507832",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5123537/"
] | In my case, adding a `dtype` attribute changed the dtype of the array to a smaller type (from float64 to uint8), decreasing the array size enough not to throw a MemoryError on Windows (64-bit).
from
```
mask = np.zeros(edges.shape)
```
to
```
mask = np.zeros(edges.shape,dtype='uint8')
``` | I faced the same issue running pandas in a Docker container on EC2. I tried the above solution of allowing overcommitted memory allocation via `sysctl -w vm.overcommit_memory=1` (more info on this [here](https://www.kernel.org/doc/Documentation/vm/overcommit-accounting)); however, this still didn't solve the issue.
Rather than digging deeper into the memory allocation internals of Ubuntu/EC2, I started looking at options to parallelise the DataFrame, and discovered that using [dask](https://docs.dask.org/en/stable/) worked in my case:
```
import dask.dataframe as dd
df = dd.read_csv('path_to_large_file.csv')
...
```
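One general dask note (standard dask behavior, not something the original code above shows): operations on a dask DataFrame are lazy and only execute when you materialize a result:
```
row_count = len(df)                # len() triggers the computation
summary = df.describe().compute()  # .compute() returns a pandas object
```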
Your mileage may vary, and note that the dask API is very similar but not a complete like-for-like replacement for pandas/numpy (e.g. you may need to make some code changes in places, depending on what you're doing with the data). |
57,507,832 | I'm facing an issue with allocating huge arrays in numpy on Ubuntu 18 while not facing the same issue on MacOS.
I am trying to allocate memory for a numpy array with shape `(156816, 36, 53806)`
with
```
np.zeros((156816, 36, 53806), dtype='uint8')
```
and while I'm getting an error on Ubuntu OS
```
>>> import numpy as np
>>> np.zeros((156816, 36, 53806), dtype='uint8')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
numpy.core._exceptions.MemoryError: Unable to allocate array with shape (156816, 36, 53806) and data type uint8
```
I'm not getting it on MacOS:
```
>>> import numpy as np
>>> np.zeros((156816, 36, 53806), dtype='uint8')
array([[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
...,
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]]], dtype=uint8)
```
I've read somewhere that `np.zeros` shouldn't really be allocating the whole memory needed for the array, but only for the non-zero elements. And this is even though the Ubuntu machine has 64 GB of memory, while my MacBook Pro has only 16 GB.
versions:
```
Ubuntu
os -> ubuntu mate 18
python -> 3.6.8
numpy -> 1.17.0
mac
os -> 10.14.6
python -> 3.6.4
numpy -> 1.17.0
```
PS: also failed on Google Colab | 2019/08/15 | [
"https://Stackoverflow.com/questions/57507832",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5123537/"
] | This is likely due to your system's [overcommit handling](https://www.kernel.org/doc/Documentation/vm/overcommit-accounting) mode.
In the default mode, `0`,
>
> Heuristic overcommit handling. Obvious overcommits of address space are refused. Used for a typical system. It ensures a seriously wild allocation fails while allowing overcommit to reduce swap usage. The root is allowed to allocate slightly more memory in this mode. This is the default.
>
>
>
The exact heuristic used is not well explained here, but this is discussed more on [Linux over commit heuristic](https://stackoverflow.com/questions/38688824/linux-over-commit-heuristic) and [on this page](http://engineering.pivotal.io/post/virtual_memory_settings_in_linux_-_the_problem_with_overcommit/).
You can check your current overcommit mode by running
```
$ cat /proc/sys/vm/overcommit_memory
0
```
In this case, you're allocating
```
>>> 156816 * 36 * 53806 / 1024.0**3
282.8939827680588
```
~282 GB, and the kernel is saying, "Well, obviously there's no way I'm going to be able to commit that many physical pages to this," and it refuses the allocation.
If (as root) you run:
```
$ echo 1 > /proc/sys/vm/overcommit_memory
```
This will enable the "always overcommit" mode, and you'll find that indeed the system will allow you to make the allocation no matter how large it is (within 64-bit memory addressing at least).
I tested this myself on a machine with 32 GB of RAM. With overcommit mode `0` I also got a `MemoryError`, but after changing it to `1` it works:
```
>>> import numpy as np
>>> a = np.zeros((156816, 36, 53806), dtype='uint8')
>>> a.nbytes
303755101056
```
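For example (a sketch assuming Linux with `vm.overcommit_memory=1`; the `resource` module is in the standard library):
```
import numpy as np
import resource

a = np.zeros((156816, 36, 53806), dtype='uint8')  # ~283 GB of virtual memory
a[0, 0, 0] = 1                # touching an element faults in a single page
a[100000, 20, 50000] = 1

# ru_maxrss is reported in kilobytes on Linux and stays tiny here
print(resource.getrusage(resource.RUSAGE_SELF).ru_maxrss, 'KB resident')
```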
You can then go ahead and write to any location within the array, and the system will only allocate physical pages when you explicitly write to that page. So you can use this, with care, for sparse arrays. | I faced the same issue running pandas in a Docker container on EC2. I tried the above solution of allowing overcommit memory allocation via `sysctl -w vm.overcommit_memory=1` (more info on this [here](https://www.kernel.org/doc/Documentation/vm/overcommit-accounting)), but this still didn't solve the issue.
Rather than digging deeper into the memory allocation internals of Ubuntu/EC2, I started looking at options to parallelise the DataFrame, and discovered that using [dask](https://docs.dask.org/en/stable/) worked in my case:
```
import dask.dataframe as dd
df = dd.read_csv('path_to_large_file.csv')
...
```
Your mileage may vary, and note that the dask API is very similar but not a complete like-for-like replacement for pandas/numpy (e.g. you may need to make some code changes in places, depending on what you're doing with the data). |
57,507,832 | I'm facing an issue with allocating huge arrays in numpy on Ubuntu 18 while not facing the same issue on MacOS.
I am trying to allocate memory for a numpy array with shape `(156816, 36, 53806)`
with
```
np.zeros((156816, 36, 53806), dtype='uint8')
```
and while I'm getting an error on Ubuntu OS
```
>>> import numpy as np
>>> np.zeros((156816, 36, 53806), dtype='uint8')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
numpy.core._exceptions.MemoryError: Unable to allocate array with shape (156816, 36, 53806) and data type uint8
```
I'm not getting it on MacOS:
```
>>> import numpy as np
>>> np.zeros((156816, 36, 53806), dtype='uint8')
array([[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
...,
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]]], dtype=uint8)
```
I've read somewhere that `np.zeros` shouldn't really be allocating the whole memory needed for the array, but only for the non-zero elements. And this is even though the Ubuntu machine has 64 GB of memory, while my MacBook Pro has only 16 GB.
versions:
```
Ubuntu
os -> ubuntu mate 18
python -> 3.6.8
numpy -> 1.17.0
mac
os -> 10.14.6
python -> 3.6.4
numpy -> 1.17.0
```
PS: also failed on Google Colab | 2019/08/15 | [
"https://Stackoverflow.com/questions/57507832",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5123537/"
] | In my case, adding a dtype attribute changed the dtype of the array to a smaller type (from float64 to uint8), decreasing the array size enough to not throw a MemoryError in Windows (64-bit).
from
```
mask = np.zeros(edges.shape)
```
to
```
mask = np.zeros(edges.shape,dtype='uint8')
``` | I was having this issue with numpy while trying to use **image sizes of 600x600 (360K)**; I decided to **reduce them to 224x224 (~50K)**, a reduction in memory usage by a factor of about 7.
`X_set = np.array(X_set).reshape(-1 , 600 * 600 * 3)`
is now
`X_set = np.array(X_set).reshape(-1 , 224 * 224 * 3)`
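The arithmetic behind that factor, for anyone who wants to check it:
```
>>> (600 * 600 * 3) / (224 * 224 * 3)
7.174744897959184
```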
hope this helps |
57,507,832 | I'm facing an issue with allocating huge arrays in numpy on Ubuntu 18 while not facing the same issue on MacOS.
I am trying to allocate memory for a numpy array with shape `(156816, 36, 53806)`
with
```
np.zeros((156816, 36, 53806), dtype='uint8')
```
and while I'm getting an error on Ubuntu OS
```
>>> import numpy as np
>>> np.zeros((156816, 36, 53806), dtype='uint8')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
numpy.core._exceptions.MemoryError: Unable to allocate array with shape (156816, 36, 53806) and data type uint8
```
I'm not getting it on MacOS:
```
>>> import numpy as np
>>> np.zeros((156816, 36, 53806), dtype='uint8')
array([[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
...,
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]]], dtype=uint8)
```
I've read somewhere that `np.zeros` shouldn't really be allocating the whole memory needed for the array, but only for the non-zero elements. And this is even though the Ubuntu machine has 64 GB of memory, while my MacBook Pro has only 16 GB.
versions:
```
Ubuntu
os -> ubuntu mate 18
python -> 3.6.8
numpy -> 1.17.0
mac
os -> 10.14.6
python -> 3.6.4
numpy -> 1.17.0
```
PS: also failed on Google Colab | 2019/08/15 | [
"https://Stackoverflow.com/questions/57507832",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5123537/"
] | I had this same problem on Windows and came across this solution. So if someone comes across this problem on Windows, the solution for me was to increase the [pagefile](https://whatis.techtarget.com/definition/pagefile) size, as it was a memory overcommitment problem for me too.
Windows 8
1. On the keyboard, press the Windows key + X, then click System in the popup menu
2. Tap or click Advanced system settings. You might be asked for an admin password or to confirm your choice
3. On the Advanced tab, under Performance, tap or click Settings.
4. Tap or click the Advanced tab, and then, under Virtual memory, tap or click Change
5. Clear the Automatically manage paging file size for all drives check box.
6. Under Drive [Volume Label], tap or click the drive that contains the paging file you want to change
7. Tap or click Custom size, enter a new size in megabytes in the initial size (MB) or Maximum size (MB) box, tap or click Set, and then tap or click OK
8. Reboot your system
Windows 10
1. Press the Windows key
2. Type SystemPropertiesAdvanced
3. Click Run as administrator
4. Under Performance, click Settings
5. Select the Advanced tab
6. Select Change...
7. Uncheck Automatically manage paging file size for all drives
8. Then select Custom size and fill in the appropriate size
9. Press Set then press OK then exit from the Virtual Memory, Performance Options, and System Properties Dialog
10. Reboot your system
Note: I did not have enough memory on my system for the ~282 GB in this example, but for my particular case this worked.
**EDIT**
From [here](https://www.geeksinphoenix.com/blog/post/2016/05/10/how-to-manage-windows-10-virtual-memory.aspx) the suggested recommendations for page file size:
>
> There is a formula for calculating the correct pagefile size. Initial size is one and a half (1.5) x the amount of total system memory. Maximum size is three (3) x the initial size. So let's say you have 4 GB (1 GB = 1,024 MB x 4 = 4,096 MB) of memory. The initial size would be 1.5 x 4,096 = 6,144 MB and the maximum size would be 3 x 6,144 = 18,432 MB.
>
>
>
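As a quick sanity check of that formula (plain arithmetic, using the 4 GB example from the quote):
```
ram_mb = 4 * 1024            # 4,096 MB of system memory
initial_mb = 1.5 * ram_mb    # 6,144 MB initial size
maximum_mb = 3 * initial_mb  # 18,432 MB maximum size
print(initial_mb, maximum_mb)
```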
Some things to keep in mind from [here](https://www.computerhope.com/issues/ch001293.htm):
>
> However, this does not take into consideration other important factors and system settings that may be unique to your computer. Again, let Windows choose what to use instead of relying on some arbitrary formula that worked on a different computer.
>
>
>
Also:
>
> Increasing page file size may help prevent instabilities and crashing in Windows. However, a hard drive's read/write times are much slower than they would be if the data were in your computer's memory. Having a larger page file is going to add extra work for your hard drive, causing everything else to run slower. Page file size should only be increased when encountering out-of-memory errors, and only as a temporary fix. A better solution is to add more memory to the computer.
>
>
> | Sometimes this error pops up because the kernel has reached its limit. Try restarting the kernel and redoing the necessary steps. |
57,507,832 | I'm facing an issue with allocating huge arrays in numpy on Ubuntu 18 while not facing the same issue on MacOS.
I am trying to allocate memory for a numpy array with shape `(156816, 36, 53806)`
with
```
np.zeros((156816, 36, 53806), dtype='uint8')
```
and while I'm getting an error on Ubuntu OS
```
>>> import numpy as np
>>> np.zeros((156816, 36, 53806), dtype='uint8')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
numpy.core._exceptions.MemoryError: Unable to allocate array with shape (156816, 36, 53806) and data type uint8
```
I'm not getting it on MacOS:
```
>>> import numpy as np
>>> np.zeros((156816, 36, 53806), dtype='uint8')
array([[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
...,
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]]], dtype=uint8)
```
I've read somewhere that `np.zeros` shouldn't really be allocating the whole memory needed for the array, but only for the non-zero elements. And this is even though the Ubuntu machine has 64 GB of memory, while my MacBook Pro has only 16 GB.
versions:
```
Ubuntu
os -> ubuntu mate 18
python -> 3.6.8
numpy -> 1.17.0
mac
os -> 10.14.6
python -> 3.6.4
numpy -> 1.17.0
```
PS: also failed on Google Colab | 2019/08/15 | [
"https://Stackoverflow.com/questions/57507832",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5123537/"
] | I had this same problem on Windows and came across this solution. So if someone comes across this problem on Windows, the solution for me was to increase the [pagefile](https://whatis.techtarget.com/definition/pagefile) size, as it was a memory overcommitment problem for me too.
Windows 8
1. On the keyboard, press the Windows key + X, then click System in the popup menu
2. Tap or click Advanced system settings. You might be asked for an admin password or to confirm your choice
3. On the Advanced tab, under Performance, tap or click Settings.
4. Tap or click the Advanced tab, and then, under Virtual memory, tap or click Change
5. Clear the Automatically manage paging file size for all drives check box.
6. Under Drive [Volume Label], tap or click the drive that contains the paging file you want to change
7. Tap or click Custom size, enter a new size in megabytes in the initial size (MB) or Maximum size (MB) box, tap or click Set, and then tap or click OK
8. Reboot your system
Windows 10
1. Press the Windows key
2. Type SystemPropertiesAdvanced
3. Click Run as administrator
4. Under Performance, click Settings
5. Select the Advanced tab
6. Select Change...
7. Uncheck Automatically manage paging file size for all drives
8. Then select Custom size and fill in the appropriate size
9. Press Set then press OK then exit from the Virtual Memory, Performance Options, and System Properties Dialog
10. Reboot your system
Note: I did not have enough memory on my system for the ~282 GB in this example, but for my particular case this worked.
**EDIT**
From [here](https://www.geeksinphoenix.com/blog/post/2016/05/10/how-to-manage-windows-10-virtual-memory.aspx) the suggested recommendations for page file size:
>
> There is a formula for calculating the correct pagefile size. Initial size is one and a half (1.5) x the amount of total system memory. Maximum size is three (3) x the initial size. So let's say you have 4 GB (1 GB = 1,024 MB x 4 = 4,096 MB) of memory. The initial size would be 1.5 x 4,096 = 6,144 MB and the maximum size would be 3 x 6,144 = 18,432 MB.
>
>
>
Some things to keep in mind from [here](https://www.computerhope.com/issues/ch001293.htm):
>
> However, this does not take into consideration other important factors and system settings that may be unique to your computer. Again, let Windows choose what to use instead of relying on some arbitrary formula that worked on a different computer.
>
>
>
Also:
>
> Increasing page file size may help prevent instabilities and crashing in Windows. However, a hard drive's read/write times are much slower than they would be if the data were in your computer's memory. Having a larger page file is going to add extra work for your hard drive, causing everything else to run slower. Page file size should only be increased when encountering out-of-memory errors, and only as a temporary fix. A better solution is to add more memory to the computer.
>
>
> | I came across this problem on Windows too. The solution for me was to **switch from a 32-bit to a 64-bit version of Python**. Indeed, 32-bit software, like a 32-bit CPU, can address a [maximum of 4 GB](https://techterms.com/help/difference_between_32-bit_and_64-bit_systems) of RAM (2^32). So if you have more than 4 GB of RAM, a 32-bit version cannot take advantage of it.
With a 64-bit version of Python (the one labeled **x86-64** in the download page), the issue disappears.
You can check which version you have by entering the interpreter. I, with a 64-bit version, now have:
`Python 3.7.5rc1 (tags/v3.7.5rc1:4082f600a5, Oct 1 2019, 20:28:14) [MSC v.1916 64 bit (AMD64)]`, where [MSC v.1916 64 bit (AMD64)] means "64-bit Python".
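You can also check the bitness programmatically (standard-library checks, not specific to any Python version):
```
import struct, sys
print(struct.calcsize('P') * 8)  # 64 on a 64-bit Python, 32 on a 32-bit one
print(sys.maxsize > 2**32)       # True only on 64-bit builds
```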
Sources :
* [Quora - memory error generated by large numpy array](https://www.quora.com/How-can-I-deal-with-the-memory-error-generated-by-large-Numpy-Python-arrays)
* [Stackoverflow : 32 or 64-bit version of Python](https://stackoverflow.com/questions/1405913/how-do-i-determine-if-my-python-shell-is-executing-in-32bit-or-64bit-mode-on-os) |
57,507,832 | I'm facing an issue with allocating huge arrays in numpy on Ubuntu 18 while not facing the same issue on MacOS.
I am trying to allocate memory for a numpy array with shape `(156816, 36, 53806)`
with
```
np.zeros((156816, 36, 53806), dtype='uint8')
```
and while I'm getting an error on Ubuntu OS
```
>>> import numpy as np
>>> np.zeros((156816, 36, 53806), dtype='uint8')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
numpy.core._exceptions.MemoryError: Unable to allocate array with shape (156816, 36, 53806) and data type uint8
```
I'm not getting it on MacOS:
```
>>> import numpy as np
>>> np.zeros((156816, 36, 53806), dtype='uint8')
array([[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
...,
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]]], dtype=uint8)
```
I've read somewhere that `np.zeros` shouldn't really be allocating the whole memory needed for the array, but only for the non-zero elements. And this is even though the Ubuntu machine has 64 GB of memory, while my MacBook Pro has only 16 GB.
versions:
```
Ubuntu
os -> ubuntu mate 18
python -> 3.6.8
numpy -> 1.17.0
mac
os -> 10.14.6
python -> 3.6.4
numpy -> 1.17.0
```
PS: also failed on Google Colab | 2019/08/15 | [
"https://Stackoverflow.com/questions/57507832",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5123537/"
] | Sometimes this error pops up because the kernel has reached its limit. Try restarting the kernel and redoing the necessary steps. | I faced the same issue running pandas in a Docker container on EC2. I tried the above solution of allowing overcommit memory allocation via `sysctl -w vm.overcommit_memory=1` (more info on this [here](https://www.kernel.org/doc/Documentation/vm/overcommit-accounting)), but this still didn't solve the issue.
Rather than digging deeper into the memory allocation internals of Ubuntu/EC2, I started looking at options to parallelise the DataFrame, and discovered that using [dask](https://docs.dask.org/en/stable/) worked in my case:
```
import dask.dataframe as dd
df = dd.read_csv('path_to_large_file.csv')
...
```
Your mileage may vary, and note that the dask API is very similar but not a complete like-for-like replacement for pandas/numpy (e.g. you may need to make some code changes in places, depending on what you're doing with the data). |
57,507,832 | I'm facing an issue with allocating huge arrays in numpy on Ubuntu 18 while not facing the same issue on MacOS.
I am trying to allocate memory for a numpy array with shape `(156816, 36, 53806)`
with
```
np.zeros((156816, 36, 53806), dtype='uint8')
```
and while I'm getting an error on Ubuntu OS
```
>>> import numpy as np
>>> np.zeros((156816, 36, 53806), dtype='uint8')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
numpy.core._exceptions.MemoryError: Unable to allocate array with shape (156816, 36, 53806) and data type uint8
```
I'm not getting it on MacOS:
```
>>> import numpy as np
>>> np.zeros((156816, 36, 53806), dtype='uint8')
array([[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
...,
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]]], dtype=uint8)
```
I've read somewhere that `np.zeros` shouldn't really be allocating the whole memory needed for the array, but only for the non-zero elements. And this is even though the Ubuntu machine has 64 GB of memory, while my MacBook Pro has only 16 GB.
versions:
```
Ubuntu
os -> ubuntu mate 18
python -> 3.6.8
numpy -> 1.17.0
mac
os -> 10.14.6
python -> 3.6.4
numpy -> 1.17.0
```
PS: also failed on Google Colab | 2019/08/15 | [
"https://Stackoverflow.com/questions/57507832",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5123537/"
] | I had this same problem on Windows and came across this solution. So if someone comes across this problem on Windows, the solution for me was to increase the [pagefile](https://whatis.techtarget.com/definition/pagefile) size, as it was a memory overcommitment problem for me too.
Windows 8
1. On the keyboard, press the Windows key + X, then click System in the popup menu
2. Tap or click Advanced system settings. You might be asked for an admin password or to confirm your choice
3. On the Advanced tab, under Performance, tap or click Settings.
4. Tap or click the Advanced tab, and then, under Virtual memory, tap or click Change
5. Clear the Automatically manage paging file size for all drives check box.
6. Under Drive [Volume Label], tap or click the drive that contains the paging file you want to change
7. Tap or click Custom size, enter a new size in megabytes in the initial size (MB) or Maximum size (MB) box, tap or click Set, and then tap or click OK
8. Reboot your system
Windows 10
1. Press the Windows key
2. Type SystemPropertiesAdvanced
3. Click Run as administrator
4. Under Performance, click Settings
5. Select the Advanced tab
6. Select Change...
7. Uncheck Automatically manage paging file size for all drives
8. Then select Custom size and fill in the appropriate size
9. Press Set then press OK then exit from the Virtual Memory, Performance Options, and System Properties Dialog
10. Reboot your system
Note: I did not have enough memory on my system for the ~282 GB in this example, but for my particular case this worked.
**EDIT**
From [here](https://www.geeksinphoenix.com/blog/post/2016/05/10/how-to-manage-windows-10-virtual-memory.aspx) the suggested recommendations for page file size:
>
> There is a formula for calculating the correct pagefile size. Initial size is one and a half (1.5) x the amount of total system memory. Maximum size is three (3) x the initial size. So let's say you have 4 GB (1 GB = 1,024 MB x 4 = 4,096 MB) of memory. The initial size would be 1.5 x 4,096 = 6,144 MB and the maximum size would be 3 x 6,144 = 18,432 MB.
>
>
>
Some things to keep in mind from [here](https://www.computerhope.com/issues/ch001293.htm):
>
> However, this does not take into consideration other important factors and system settings that may be unique to your computer. Again, let Windows choose what to use instead of relying on some arbitrary formula that worked on a different computer.
>
>
>
Also:
>
> Increasing page file size may help prevent instabilities and crashing in Windows. However, a hard drive's read/write times are much slower than they would be if the data were in your computer's memory. Having a larger page file is going to add extra work for your hard drive, causing everything else to run slower. Page file size should only be increased when encountering out-of-memory errors, and only as a temporary fix. A better solution is to add more memory to the computer.
>
>
> | In my case, adding a dtype attribute changed the dtype of the array to a smaller type (from float64 to uint8), decreasing the array size enough to not throw a MemoryError in Windows (64-bit).
from
```
mask = np.zeros(edges.shape)
```
to
```
mask = np.zeros(edges.shape,dtype='uint8')
``` |
57,507,832 | I'm facing an issue with allocating huge arrays in numpy on Ubuntu 18 while not facing the same issue on MacOS.
I am trying to allocate memory for a numpy array with shape `(156816, 36, 53806)`
with
```
np.zeros((156816, 36, 53806), dtype='uint8')
```
and while I'm getting an error on Ubuntu OS
```
>>> import numpy as np
>>> np.zeros((156816, 36, 53806), dtype='uint8')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
numpy.core._exceptions.MemoryError: Unable to allocate array with shape (156816, 36, 53806) and data type uint8
```
I'm not getting it on MacOS:
```
>>> import numpy as np
>>> np.zeros((156816, 36, 53806), dtype='uint8')
array([[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
...,
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]]], dtype=uint8)
```
I've read somewhere that `np.zeros` shouldn't really be allocating the whole memory needed for the array, but only for the non-zero elements. And this is even though the Ubuntu machine has 64 GB of memory, while my MacBook Pro has only 16 GB.
versions:
```
Ubuntu
os -> ubuntu mate 18
python -> 3.6.8
numpy -> 1.17.0
mac
os -> 10.14.6
python -> 3.6.4
numpy -> 1.17.0
```
PS: also failed on Google Colab | 2019/08/15 | [
"https://Stackoverflow.com/questions/57507832",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5123537/"
] | I came across this problem on Windows too. The solution for me was to **switch from a 32-bit to a 64-bit version of Python**. Indeed, 32-bit software, like a 32-bit CPU, can address a [maximum of 4 GB](https://techterms.com/help/difference_between_32-bit_and_64-bit_systems) of RAM (2^32). So if you have more than 4 GB of RAM, a 32-bit version cannot take advantage of it.
With a 64-bit version of Python (the one labeled **x86-64** in the download page), the issue disappears.
You can check which version you have by entering the interpreter. I, with a 64-bit version, now have:
`Python 3.7.5rc1 (tags/v3.7.5rc1:4082f600a5, Oct 1 2019, 20:28:14) [MSC v.1916 64 bit (AMD64)]`, where [MSC v.1916 64 bit (AMD64)] means "64-bit Python".
Sources :
* [Quora - memory error generated by large numpy array](https://www.quora.com/How-can-I-deal-with-the-memory-error-generated-by-large-Numpy-Python-arrays)
* [Stackoverflow : 32 or 64-bit version of Python](https://stackoverflow.com/questions/1405913/how-do-i-determine-if-my-python-shell-is-executing-in-32bit-or-64bit-mode-on-os) | Changing the data type to another one which uses less memory works. For me, I changed the data type to numpy.uint8:
```
data['label'] = data['label'].astype(np.uint8)
``` |
57,507,832 | I'm facing an issue with allocating huge arrays in numpy on Ubuntu 18 while not facing the same issue on MacOS.
I am trying to allocate memory for a numpy array with shape `(156816, 36, 53806)`
with
```
np.zeros((156816, 36, 53806), dtype='uint8')
```
and while I'm getting an error on Ubuntu OS
```
>>> import numpy as np
>>> np.zeros((156816, 36, 53806), dtype='uint8')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
numpy.core._exceptions.MemoryError: Unable to allocate array with shape (156816, 36, 53806) and data type uint8
```
I'm not getting it on MacOS:
```
>>> import numpy as np
>>> np.zeros((156816, 36, 53806), dtype='uint8')
array([[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
...,
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]]], dtype=uint8)
```
I've read somewhere that `np.zeros` shouldn't really be allocating the whole memory needed for the array, but only for the non-zero elements. And this is even though the Ubuntu machine has 64 GB of memory, while my MacBook Pro has only 16 GB.
versions:
```
Ubuntu
os -> ubuntu mate 18
python -> 3.6.8
numpy -> 1.17.0
mac
os -> 10.14.6
python -> 3.6.4
numpy -> 1.17.0
```
PS: also failed on Google Colab | 2019/08/15 | [
"https://Stackoverflow.com/questions/57507832",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5123537/"
] | I had this same problem on Windows and came across this solution. So if someone comes across this problem on Windows, the solution for me was to increase the [pagefile](https://whatis.techtarget.com/definition/pagefile) size, as it was a memory overcommitment problem for me too.
Windows 8
1. On the keyboard, press the Windows key + X, then click System in the popup menu
2. Tap or click Advanced system settings. You might be asked for an admin password or to confirm your choice
3. On the Advanced tab, under Performance, tap or click Settings.
4. Tap or click the Advanced tab, and then, under Virtual memory, tap or click Change
5. Clear the Automatically manage paging file size for all drives check box.
6. Under Drive [Volume Label], tap or click the drive that contains the paging file you want to change
7. Tap or click Custom size, enter a new size in megabytes in the initial size (MB) or Maximum size (MB) box, tap or click Set, and then tap or click OK
8. Reboot your system
Windows 10
1. Press the Windows key
2. Type SystemPropertiesAdvanced
3. Click Run as administrator
4. Under Performance, click Settings
5. Select the Advanced tab
6. Select Change...
7. Uncheck Automatically manage paging file size for all drives
8. Then select Custom size and fill in the appropriate size
9. Press Set then press OK then exit from the Virtual Memory, Performance Options, and System Properties Dialog
10. Reboot your system
Note: I did not have enough memory on my system for the ~282 GB in this example, but for my particular case this worked.
**EDIT**
From [here](https://www.geeksinphoenix.com/blog/post/2016/05/10/how-to-manage-windows-10-virtual-memory.aspx) the suggested recommendations for page file size:
>
> There is a formula for calculating the correct pagefile size. Initial size is one and a half (1.5) x the amount of total system memory. Maximum size is three (3) x the initial size. So let's say you have 4 GB (1 GB = 1,024 MB x 4 = 4,096 MB) of memory. The initial size would be 1.5 x 4,096 = 6,144 MB and the maximum size would be 3 x 6,144 = 18,432 MB.
>
>
>
Some things to keep in mind from [here](https://www.computerhope.com/issues/ch001293.htm):
>
> However, this does not take into consideration other important factors and system settings that may be unique to your computer. Again, let Windows choose what to use instead of relying on some arbitrary formula that worked on a different computer.
>
>
>
Also:
>
> Increasing page file size may help prevent instabilities and crashing in Windows. However, a hard drive's read/write times are much slower than they would be if the data were in your computer's memory. Having a larger page file is going to add extra work for your hard drive, causing everything else to run slower. Page file size should only be increased when encountering out-of-memory errors, and only as a temporary fix. A better solution is to add more memory to the computer.
>
>
> | I faced the same issue running pandas in a Docker container on EC2. I tried the above solution of allowing overcommit memory allocation via `sysctl -w vm.overcommit_memory=1` (more info on this [here](https://www.kernel.org/doc/Documentation/vm/overcommit-accounting)), but this still didn't solve the issue.
Rather than digging deeper into the memory allocation internals of Ubuntu/EC2, I started looking at options to parallelise the DataFrame, and discovered that using [dask](https://docs.dask.org/en/stable/) worked in my case:
```
import dask.dataframe as dd
df = dd.read_csv('path_to_large_file.csv')
...
```
Your mileage may vary, and note that the dask API is very similar but not a complete like-for-like replacement for pandas/numpy (e.g. you may need to make some code changes in places, depending on what you're doing with the data). |
10,643,982 | Is there a way in python to truncate the decimal part at 5 or 7 digits?
If not, how can I avoid a float like e\*\*(-x) getting too big in size?
Thanks | 2012/05/17 | [
"https://Stackoverflow.com/questions/10643982",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1308318/"
] | Either catch the `OverflowError` or use the `decimal` module. Python is not going to assume you were okay with the overflow.
```
>>> 0.0000000000000000000000000000000000000000000000000000000000000001**-30
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OverflowError: (34, 'Result too large')
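>>> # a sketch of the first option: catch the OverflowError and
>>> # substitute whatever fallback value suits your calculation
>>> try:
...     0.0000000000000000000000000000000000000000000000000000000000000001**-30
... except OverflowError:
...     print('too large for a float')
...
too large for a float
>>> import decimal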
>>> d = decimal.Decimal(0.0000000000000000000000000000000000000000000000000000000000000001)
>>> d**-30
Decimal('1.000000000000001040827834994E+1920')
``` | The "Result too large" doesn't refer to the number of characters in the decimal representation of the number; it means that the number that resulted from your exponential function is large enough to overflow whatever type Python uses internally to store floating-point values.
You need to either use a different type to handle your floating-point calculations, or rework your code so that e\*\*(-x) doesn't overflow or underflow. |
10,643,982 | Is there a way in python to truncate the decimal part at 5 or 7 digits?
If not, how can I avoid a float like e\*\*(-x) getting too big in size?
Thanks | 2012/05/17 | [
"https://Stackoverflow.com/questions/10643982",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1308318/"
] | Either catch the `OverflowError` or use the `decimal` module. Python is not going to assume you were okay with the overflow.
```
>>> 0.0000000000000000000000000000000000000000000000000000000000000001**-30
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OverflowError: (34, 'Result too large')
>>> import decimal
>>> d = decimal.Decimal(0.0000000000000000000000000000000000000000000000000000000000000001)
>>> d**-30
Decimal('1.000000000000001040827834994E+1920')
``` | This seems to work:
```
from decimal import Decimal, getcontext
getcontext().prec = 7   # keep 7 significant digits
(-Decimal(x)).exp()     # Decimal's own exp() honors the context precision; math.exp() would return a plain float
``` |
10,643,982 | Is there a way in python to truncate the decimal part at 5 or 7 digits?
If not, how can I avoid a float like e\*\*(-x) getting too big in size?
Thanks | 2012/05/17 | [
"https://Stackoverflow.com/questions/10643982",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1308318/"
] | The "Result too large" doesn't refer to the number of characters in the decimal representation of the number, it means that the number that resulted from your exponential function is large enough to overflow whatever type python uses internally to store floating point values.
You need to either use a different type to handle your floating-point calculations, or rework your code so that e\*\*(-x) doesn't overflow or underflow. | This seems to work:
```
from decimal import Decimal, getcontext
getcontext().prec = 7   # keep 7 significant digits
(-Decimal(x)).exp()     # Decimal's own exp() honors the context precision; math.exp() would return a plain float
``` |
56,814,981 | The following code gives me the Python error 'failed to parse' for addon.xml:
(I've used an online checker and it says "error on line 33 at column 15: Opening and ending tag mismatch: description line 0 and extension" - which is the very end of the /extension end tag at the end of the document).
Any advice would be appreciated. This worked yesterday and I have no idea why it's not working at all.
```
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<addon id="plugin.audio.criminalpodcast" name="Criminal Podcast" version="1.1.0" provider-name="leopheard">
<requires>
<import addon="xbmc.python" version="2.1.0"/>
<import addon="script.module.xbmcswift2" version="2.4.0"/>
<import addon="script.module.beautifulsoup4" version="4.3.1"/>
<import addon="script.module.requests" version="1.1.0"/>
<import addon="script.module.routing" version="0.2.0"/> </requires>
```
```
<provides>audio</provides> </extension>
<extension point="xbmc.addon.metadata">
<platform>all</platform>
<language></language>
<summary lang="en"></summary>
<description lang="en">description </description>
<license>The MIT License (MIT)</license>
<forum>https://forum.kodi.tv/showthread.php?tid=344790</forum>
<email>leopheard@gmail.com</email>
<source>https://github.com/leopheard/criminalpodcast</source>
<website>http://www.thisiscriminal.com</website>
<audio_guide></audio_guide>
<assets>
<icon>icon.png</icon>
<fanart>fanart.jpg</fanart>
<screenshot>resources/media/Criminal_SocialShare_2.png</screenshot>
<screenshot>resources/media/Criminal_SocialShare_3.png</screenshot>
<screenshot>resources/media/Radiotopia-logo.png</screenshot>
</assets>
``` | 2019/06/29 | [
"https://Stackoverflow.com/questions/56814981",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11611598/"
] | Your "XML" file is not well-formed, so it cannot be parsed. Find out how it was created, correct the process so the problem does not occur again, and then regenerate the file.
Files that are vaguely XML-like but not well-formed are pretty well useless. Repair is sometimes possible if the errors are very systematic, but that doesn't appear to be the case here. | Most of the time a "failed to parse" error message is due to the XML file itself.
Check your XML file for correct formatting.
I once forgot the root tag and had the same error message. |
55,197,425 | Ok so here is what I am trying to achieve:
1. Call a URL with a list of dynamically filtered search results
2. Click on the first search result (5/page)
3. Scrape the headlines, paragraphs and images and store them as a JSON object in a separate file, e.g.
{
"Title": "Headline element of the individual entry",
"Content" : "Paragraphs and images in DOM order of the individual entry"
}
4. Navigate back to the search results overview page and repeat steps 2 - 3
5. After 5/5 results have been scraped, go to the next page (click pagination link)
6. Repeat steps 2 - 5 until no entry is left
To visualize once more what is intended:
[](https://i.stack.imgur.com/QJPSA.png)
What I have so far is:
```
#import libraries
from selenium import webdriver
from bs4 import BeautifulSoup
#URL
url = "https://URL.com"
#Create a browser session
driver = webdriver.Chrome("PATH TO chromedriver.exe")
driver.implicitly_wait(30)
driver.get(url)
#click consent btn on destination URL ( overlays rest of the content )
python_consentButton = driver.find_element_by_id('acceptAllCookies')
python_consentButton.click() #click cookie consent btn
#Selenium hands the page source to Beautiful Soup
soup_results_overview = BeautifulSoup(driver.page_source, 'lxml')
for link in soup_results_overview.findAll("a", class_="searchResults__detail"):
#Selenium visits each Search Result Page
searchResult = driver.find_element_by_class_name('searchResults__detail')
searchResult.click() #click Search Result
#Ask Selenium to go back to the search results overview page
driver.back()
#Tell Selenium to click paginate "next" link
#probably needs to be in a surrounding for loop?
paginate = driver.find_element_by_class_name('pagination-link-next')
paginate.click() #click paginate next
driver.quit()
```
**Problem**
The list count resets every time Selenium navigates back to the search results overview page,
so it clicks the first entry 5 times, navigates to the next 5 items, and stops.
This is probably a predestined case for a recursive approach, not sure.
Any advice on how to tackle this issue is appreciated. | 2019/03/16 | [
"https://Stackoverflow.com/questions/55197425",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4536968/"
] | You can use only `requests` and `BeautifulSoup` to scrape, without Selenium. It will be much faster and will consume far fewer resources:
```
import json
import requests
from bs4 import BeautifulSoup
# Get 1000 results
params = {"$filter": "TemplateName eq 'Application Article'", "$orderby": "ArticleDate desc", "$top": "1000",
"$inlinecount": "allpages", }
response = requests.get("https://www.cst.com/odata/Articles", params=params).json()
# iterate 1000 results
articles = response["value"]
for article in articles:
article_json = {}
article_content = []
# title of article
article_title = article["Title"]
# article url
article_url = str(article["Url"]).split("|")[1]
print(article_title)
# request article page and parse it
article_page = requests.get(article_url).text
page = BeautifulSoup(article_page, "html.parser")
# get header
header = page.select_one("h1.head--bordered").text
article_json["Title"] = str(header).strip()
# get body content with images links and descriptions
content = page.select("section.content p, section.content img, section.content span.imageDescription, "
"section.content em")
# collect content to json format
for x in content:
if x.name == "img":
article_content.append("https://cst.com/solutions/article/" + x.attrs["src"])
else:
article_content.append(x.text)
article_json["Content"] = article_content
# write to json file
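# (added note: titles can contain characters such as '/' that are invalid
# in file names; sanitizing article_json['Title'] first would be safer)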
with open(f"{article_json['Title']}.json", 'w') as to_json_file:
to_json_file.write(json.dumps(article_json))
print("the end")
``` | You aren’t using your link variable anywhere in your loop, just telling the driver to find the top link and click it. (When you use the singular find\_element selector and there are multiple results selenium just grabs the first one). I think all you need to do is replace these lines
```
searchResult = driver.find_element_by_class_name('searchResults__detail')
searchResult.click()
```
With
```
link.click()
```
Does that help?
OK... with regard to the pagination, you could use the following strategy, since the 'Next' button disappears:
```
paginate = driver.find_element_by_class_name('pagination-link-next')
while paginate.is_displayed():  # Python's boolean is True, so a plain truthiness check is used here
    for link in soup_results_overview.findAll("a", class_="searchResults__detail"):
        #Selenium visits each Search Result Page
        link.click() #click Search Result (per the replacement suggested above)
        #Scrape the page with a function defined elsewhere
        scrape()
        #Ask Selenium to go back to the search results overview page
        driver.back()
    #Click pagination button after the for loop finishes on each page
    paginate.click()
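    # note: 'paginate' may go stale after page navigation; re-locating it
    # at the end of each pass would make this more robust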
``` |
55,197,425 | Ok so here is what I am trying to achieve:
1. Call a URL with a list of dynamically filtered search results
2. Click on the first search result (5/page)
3. Scrape the headlines, paragraphs and images and store them as a JSON object in a separate file, e.g.
{
"Title": "Headline element of the individual entry",
"Content" : "Paragraphs and images in DOM order of the individual entry"
}
4. Navigate back to the search results overview page and repeat steps 2 - 3
5. After 5/5 results have been scraped, go to the next page (click pagination link)
6. Repeat steps 2 - 5 until no entry is left
To visualize once more what is intended:
[](https://i.stack.imgur.com/QJPSA.png)
What I have so far is:
```
#import libraries
from selenium import webdriver
from bs4 import BeautifulSoup
#URL
url = "https://URL.com"
#Create a browser session
driver = webdriver.Chrome("PATH TO chromedriver.exe")
driver.implicitly_wait(30)
driver.get(url)
#click consent btn on destination URL ( overlays rest of the content )
python_consentButton = driver.find_element_by_id('acceptAllCookies')
python_consentButton.click() #click cookie consent btn
#Selenium hands the page source to Beautiful Soup
soup_results_overview = BeautifulSoup(driver.page_source, 'lxml')
for link in soup_results_overview.findAll("a", class_="searchResults__detail"):
#Selenium visits each Search Result Page
searchResult = driver.find_element_by_class_name('searchResults__detail')
searchResult.click() #click Search Result
#Ask Selenium to go back to the search results overview page
driver.back()
#Tell Selenium to click paginate "next" link
#probably needs to be in a surrounding for loop?
paginate = driver.find_element_by_class_name('pagination-link-next')
paginate.click() #click paginate next
driver.quit()
```
**Problem**
The list count resets every time Selenium navigates back to the search results overview page,
so it clicks the first entry 5 times, navigates to the next 5 items, and stops.
This is probably a predestined case for a recursive approach, not sure.
Any advice on how to tackle this issue is appreciated. | 2019/03/16 | [
"https://Stackoverflow.com/questions/55197425",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4536968/"
] | You can use only `requests` and `BeautifulSoup` to scrape, without Selenium. It will be much faster and will consume far fewer resources:
```
import json
import requests
from bs4 import BeautifulSoup
# Get 1000 results
params = {"$filter": "TemplateName eq 'Application Article'", "$orderby": "ArticleDate desc", "$top": "1000",
"$inlinecount": "allpages", }
response = requests.get("https://www.cst.com/odata/Articles", params=params).json()
# iterate 1000 results
articles = response["value"]
for article in articles:
article_json = {}
article_content = []
# title of article
article_title = article["Title"]
# article url
article_url = str(article["Url"]).split("|")[1]
print(article_title)
# request article page and parse it
article_page = requests.get(article_url).text
page = BeautifulSoup(article_page, "html.parser")
# get header
header = page.select_one("h1.head--bordered").text
article_json["Title"] = str(header).strip()
# get body content with images links and descriptions
content = page.select("section.content p, section.content img, section.content span.imageDescription, "
"section.content em")
# collect content to json format
for x in content:
if x.name == "img":
article_content.append("https://cst.com/solutions/article/" + x.attrs["src"])
else:
article_content.append(x.text)
article_json["Content"] = article_content
# write to json file
with open(f"{article_json['Title']}.json", 'w') as to_json_file:
to_json_file.write(json.dumps(article_json))
print("the end")
``` | I have one solution for you: fetch the `href` value of the link and then do `driver.get(url)`
Instead of this.
```
for link in soup_results_overview.findAll("a", class_="searchResults__detail"):
#Selenium visits each Search Result Page
searchResult = driver.find_element_by_class_name('searchResults__detail')
searchResult.click() #click Search Result
#Ask Selenium to go back to the search results overview page
driver.back()
```
Try this.
```
for link in soup_results_overview.findAll("a", class_="searchResults__detail"):
print(link['href'])
driver.get(link['href'])
driver.back()
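    # driver.back() is not strictly needed here, since each iteration
    # navigates directly with driver.get() from the pre-collected hrefs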
```
Here I have printed the URL before navigating:
```
https://cst.com/solutions/article/sar-spherical-phantom-model
https://cst.com/solutions/article/pin-fed-four-edges-gap-coupled-microstrip-antenna-magus
https://cst.com/solutions/article/printed-self-matched-normal-mode-helix-antenna-antenna-magus
https://cst.com/solutions/article/broadband-characterization-of-launchers
https://cst.com/solutions/article/modal-analysis-of-a-dielectric-2-port-filter
``` |
55,197,425 | Ok so here is what I am trying to achieve:
1. Call a URL with a list of dynamically filtered search results
2. Click on the first search result (5/page)
3. Scrape the headlines, paragraphs and images and store them as a JSON object in a separate file, e.g.
{
"Title": "Headline element of the individual entry",
"Content" : "Paragraphs and images in DOM order of the individual entry"
}
4. Navigate back to the search results overview page and repeat steps 2 - 3
5. After 5/5 results have been scraped, go to the next page (click pagination link)
6. Repeat steps 2 - 5 until no entry is left
To visualize once more what is intended:
[](https://i.stack.imgur.com/QJPSA.png)
What I have so far is:
```
#import libraries
from selenium import webdriver
from bs4 import BeautifulSoup
#URL
url = "https://URL.com"
#Create a browser session
driver = webdriver.Chrome("PATH TO chromedriver.exe")
driver.implicitly_wait(30)
driver.get(url)
#click consent btn on destination URL ( overlays rest of the content )
python_consentButton = driver.find_element_by_id('acceptAllCookies')
python_consentButton.click() #click cookie consent btn
#Selenium hands the page source to Beautiful Soup
soup_results_overview = BeautifulSoup(driver.page_source, 'lxml')
for link in soup_results_overview.findAll("a", class_="searchResults__detail"):
#Selenium visits each Search Result Page
searchResult = driver.find_element_by_class_name('searchResults__detail')
searchResult.click() #click Search Result
#Ask Selenium to go back to the search results overview page
driver.back()
#Tell Selenium to click paginate "next" link
#probably needs to be in a surrounding for loop?
paginate = driver.find_element_by_class_name('pagination-link-next')
paginate.click() #click paginate next
driver.quit()
```
**Problem**
The list count resets every time Selenium navigates back to the search results overview page,
so it clicks the first entry 5 times, navigates to the next 5 items, and stops.
This is probably a predestined case for a recursive approach, not sure.
Any advice on how to tackle this issue is appreciated. | 2019/03/16 | [
"https://Stackoverflow.com/questions/55197425",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4536968/"
] | You can use only `requests` and `BeautifulSoup` to scrape, without Selenium. It will be much faster and will consume far fewer resources:
```
import json
import requests
from bs4 import BeautifulSoup
# Get 1000 results
params = {"$filter": "TemplateName eq 'Application Article'", "$orderby": "ArticleDate desc", "$top": "1000",
"$inlinecount": "allpages", }
response = requests.get("https://www.cst.com/odata/Articles", params=params).json()
# iterate 1000 results
articles = response["value"]
for article in articles:
article_json = {}
article_content = []
# title of article
article_title = article["Title"]
# article url
article_url = str(article["Url"]).split("|")[1]
print(article_title)
# request article page and parse it
article_page = requests.get(article_url).text
page = BeautifulSoup(article_page, "html.parser")
# get header
header = page.select_one("h1.head--bordered").text
article_json["Title"] = str(header).strip()
# get body content with images links and descriptions
content = page.select("section.content p, section.content img, section.content span.imageDescription, "
"section.content em")
# collect content to json format
for x in content:
if x.name == "img":
article_content.append("https://cst.com/solutions/article/" + x.attrs["src"])
else:
article_content.append(x.text)
article_json["Content"] = article_content
# write to json file
with open(f"{article_json['Title']}.json", 'w') as to_json_file:
to_json_file.write(json.dumps(article_json))
print("the end")
``` | This solution navigates to each link, scrapes the title and paragraphs, stores the image URLs, and downloads all the images to the machine as `.png`s:
```
from bs4 import BeautifulSoup as soup
import requests, re
from selenium import webdriver
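# overall flow (added note): collect article hrefs from each results page,
# visit and scrape each one, return, then click "next" until it disappears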
def scrape_page(_d, _link):
_head, _paras = _d.find('h1', {'class':'head--bordered'}).text, [i.text for i in _d.find_all('p')]
images = [i.img['src'] for i in _d.find_all('a', {'class':'fancybox'})]
for img in images:
_result, _url = requests.get(f'{_link}{img}').content, re.findall("\w+\.ashx$", img)
if _url:
with open('electroresults/{}.png'.format(_url[0][:-5]), 'wb') as f:
f.write(_result)
return _head, _paras, images
d = webdriver.Chrome('/path/to/chromedriver')
d.get('https://www.cst.com/solutions#size=5&TemplateName=Application+Article')
results, page, _previous = [], 1, None
while True:
_articles = [i.get_attribute('href') for i in d.find_elements_by_class_name('searchResults__detail')]
page_results = []
previous = d.current_url
for article in _articles:
d.get(article)
try:
d.find_elements_by_class_name('interaction')[0].click()
except:
pass
page_results.append(dict(zip(['title', 'paragraphs', 'imgs'], scrape_page(soup(d.page_source, 'html.parser'), d.current_url))))
results.append(page_results)
d.get(previous)
_next = d.find_elements_by_class_name('pagination-link-next')
if _next:
_next[0].click()
else:
break
```
Output (first several articles on the first page only, due to SO's character limit):
```
[{'title': '\n Predicting SAR Behavior using Spherical Phantom Models\n ', 'paragraphs': ['', '\nAntenna Magus is a software tool to help accelerate the antenna design and modelling process. It increases efficiency by helping the engineer to make a more informed choice of antenna element, providing a good starting design.\n', '', '', '\n IdEM is a user friendly tool for the generation of macromodels of linear lumped multi-port structures (e.g., via fields, connectors, packages, discontinuities, etc.), known from their input-output port responses. The raw characterization of the structure can come from measurement or simulation, either in frequency domain or in time domain.\n ', '', '', '\n FEST3D is a software tool capable of analysing complex passive microwave components based on waveguide technology (including multiplexers, couplers and filters) in very short computational times with high accuracy. This suite offers all the capabilities needed for the design of passive components such as optimization and tolerance analysis. Moreover, FEST3D advanced synthesis tools allow designing bandpass, dual-mode and lowpass filters from user specifications.\n ', '', '', '\n SPARK3D is a unique simulation tool for determining the RF breakdown power level in a wide variety of passive devices, including those based on cavities, waveguides, microstrip and antennas. Field results from CST STUDIO SUITE® simulations can be imported directly into SPARK3D to analyse vacuum breakdown (multipactor) and gas discharge. From this, SPARK3D calculates the maximum power that the device can handle without causing discharge effects.\n ', '', '', '\nEasy-to-use matching circuit optimization and antenna analysis software\n Optenni Lab is a professional software tool with innovative analysis features to increase the productivity of engineers requiring matching circuits. It can, e.g., speed up the antenna design process and provide antennas with optimal total performance. Optenni Lab offers fast fully-automatic matching circuit optimization tools, including automatic generation of multiple optimal topologies, estimation of the obtainable bandwidth of antennas and calculation of the worst-case isolation in multi-antenna systems.\n ', '', '', '\n The ability to visualize electromagnetic fields intuitively in 3D and also the possibility to demonstrate in a straightforward way the effect of parameter changes are obvious benefits in teaching. To support learning, teaching and research at academic institutions, CST offers four types of licenses, namely the free CST STUDIO SUITE®Student Edition, a Classroom license, an Educational license and an Extended license. \n ', '', '', '\n The CST STUDIO SUITE® Student Edition has been developed with the aim of introducing you to the world of electromagnetic simulation, making Maxwell’s equations easier to understand than ever.\n ', '', '', '\n Below you will find several examples which were selected from some commonly used textbooks. 
Each example contains a short description of the theory, detailed information on how to construct the model, a video showing how to construct the model, and the fully constructed model ready for you to download.\n ', '', '', '\n In acknowledgement of the importance of university research and the impact of groundbreaking publications on the reputation of both author and tool used for the research, CST announces the endowment of a University Publication Award.\n ', '', '', "\n Regular training courses are held in CST's offices in Asia, Europe, and North America. Please check the local websites for detail of trainings in China, Korea and Japan. Advance registration is normally required.\n ", '', '', '\nCST exhibits at events around the globe. See a list of exhibitions CST is attending where you can speak to our sales and support staff and learn more about our products and their applications.\n', '', '', '\nThroughout the year, CST simulation experts present eSeminars on the applications, features and usage of our software. You can also view past eSeminars by searching our archive and filtering for the markets or industries that interest you most.\n\n', '', '', '\n CST hosts workshops in multiple languages and in countries around the world. Workshops provide an opportunity to learn about specific applications and refresh your skills with experienced CST support staff.\n ', '', '', '\n The CST user conference offers an informal and enlightening environment where developers and researchers using CST STUDIO SUITE® tools can exchange ideas and talk with CST staff about future developments.\n ', '', 'facebooklinkedinswymtwitteryoutuberss', 'Events', 'Due to the fact that measurements in true biological heads typically cannot be carried out, SAR norms for mobile phones or EMI problems are commonly defined in terms of standardized phantom models. In the easiest case, only spherical structures are considered. To predict the SAR behavior of a new product already during the design stage, it is desirable to include the phantom head in the EM simulations. ', 'The following examples\xa0investigate two spherical phantom models, a basic one that only contains of tissue material inside a glass sphere and a more complex one that has two\xa0additional layers of bone and tissue.\xa0\xa0A dipole antenna is used for the excitation and\xa0is displayed as a yellow line in the following picture.', 'The SAR distribution is simulated at 835 MHz and visualized in the figure below. A comparison of the SAR values over a radial line shows good agreement with the measurement of the same structure.', 'For the following simulation a more complex model including a simplified skull is used.', 'A comparison of the SAR values at 1.95 GHz on an off-axis path shows\xa0a significant difference between the basic homogeneous model and the more complex one. Since the values are higher, the simplified model may not be sufficient in all cases.', ' Go to Article', ' Go to Article', ' Go to Article', ' Go to Article', ' Go to Article', '\n Please read our\n Privacy Statement\xa0|\xa0\n Impressum \xa0|\xa0\n Sitemap \xa0|\xa0\n © 2019 Dassault Systemes Deutschland GmbH. All rights reserved.\n ', 'Your session has expired. Redirecting you to the login page...', '\n We use cookie to operate this website, improve its usability, personalize your experience, and track visits. By continuing to use this site, you are consenting to use of cookies. 
You have the possibility to manage the parameters and choose whether to accept certain cookies while on the site. For more information, please read our updated privacy policy\n', 'When you browse our website, cookies are enabled by default and data may be read or stored locally on your device. You can set your preferences below:', 'These cookies enable additional functionality like saving preferences, allowing social interactions and analyzing usage for site optimization.', 'These cookies enable us and third parties to serve ads that are relevant to your interests.'], 'imgs': ['~/media/B692C95635564BBDA18AFE7C35D3CC7E.ashx', '~/media/DC7423B9D92542CF8254365D9C83C9E7.ashx', '~/media/54E5C0BE872B411EBDC1698E19894670.ashx', '~/media/114789FC714042A89019C5E41E64ADEE.ashx', '~/media/B9AF3151613C44D2BFE1B5B9B6504885.ashx']}, {'title': '\n Pin-fed Four Edges Gap Coupled Microstrip Antenna | Antenna Magus\n ', 'paragraphs': ['', '\nAntenna Magus is a software tool to help accelerate the antenna design and modelling process. It increases efficiency by helping the engineer to make a more informed choice of antenna element, providing a good starting design.\n', '', '', '\n IdEM is a user friendly tool for the generation of macromodels of linear lumped multi-port structures (e.g., via fields, connectors, packages, discontinuities, etc.), known from their input-output port responses. The raw characterization of the structure can come from measurement or simulation, either in frequency domain or in time domain.\n ', '', '', '\n FEST3D is a software tool capable of analysing complex passive microwave components based on waveguide technology (including multiplexers, couplers and filters) in very short computational times with high accuracy. This suite offers all the capabilities needed for the design of passive components such as optimization and tolerance analysis. Moreover, FEST3D advanced synthesis tools allow designing bandpass, dual-mode and lowpass filters from user specifications.\n ', '', '', '\n SPARK3D is a unique simulation tool for determining the RF breakdown power level in a wide variety of passive devices, including those based on cavities, waveguides, microstrip and antennas. Field results from CST STUDIO SUITE® simulations can be imported directly into SPARK3D to analyse vacuum breakdown (multipactor) and gas discharge. From this, SPARK3D calculates the maximum power that the device can handle without causing discharge effects.\n ', '', '', '\nEasy-to-use matching circuit optimization and antenna analysis software\n Optenni Lab is a professional software tool with innovative analysis features to increase the productivity of engineers requiring matching circuits. It can, e.g., speed up the antenna design process and provide antennas with optimal total performance. Optenni Lab offers fast fully-automatic matching circuit optimization tools, including automatic generation of multiple optimal topologies, estimation of the obtainable bandwidth of antennas and calculation of the worst-case isolation in multi-antenna systems.\n ', '', '', '\n The ability to visualize electromagnetic fields intuitively in 3D and also the possibility to demonstrate in a straightforward way the effect of parameter changes are obvious benefits in teaching. To support learning, teaching and research at academic institutions, CST offers four types of licenses, namely the free CST STUDIO SUITE®Student Edition, a Classroom license, an Educational license and an Extended license. 
\n ', '', '', '\n The CST STUDIO SUITE® Student Edition has been developed with the aim of introducing you to the world of electromagnetic simulation, making Maxwell’s equations easier to understand than ever.\n ', '', '', '\n Below you will find several examples which were selected from some commonly used textbooks. Each example contains a short description of the theory, detailed information on how to construct the model, a video showing how to construct the model, and the fully constructed model ready for you to download.\n ', '', '', '\n In acknowledgement of the importance of university research and the impact of groundbreaking publications on the reputation of both author and tool used for the research, CST announces the endowment of a University Publication Award.\n ', '', '', "\n Regular training courses are held in CST's offices in Asia, Europe, and North America. Please check the local websites for detail of trainings in China, Korea and Japan. Advance registration is normally required.\n ", '', '', '\nCST exhibits at events around the globe. See a list of exhibitions CST is attending where you can speak to our sales and support staff and learn more about our products and their applications.\n', '', '', '\nThroughout the year, CST simulation experts present eSeminars on the applications, features and usage of our software. You can also view past eSeminars by searching our archive and filtering for the markets or industries that interest you most.\n\n', '', '', '\n CST hosts workshops in multiple languages and in countries around the world. Workshops provide an opportunity to learn about specific applications and refresh your skills with experienced CST support staff.\n ', '', '', '\n The CST user conference offers an informal and enlightening environment where developers and researchers using CST STUDIO SUITE® tools can exchange ideas and talk with CST staff about future developments.\n ', '', 'facebooklinkedinswymtwitteryoutuberss', 'Events', 'Although microstrip antennas are very popular in the microwave frequency range because of their simplicity and compatibility with circuit board technology, their limited bandwidth often restricts their usefulness.', 'Various methods have been suggested to overcome this limitation – including the use of gap- or direct-coupled parasitic patches. In the FEGCOMA, these parasitic patches are placed alongside all four edges of the driven patch element. The introduction of parasitic patches of slightly different resonant lengths yields further resonances improving the bandwidth and gain of the standard patch. In this case, the structure is optimized to obtain a well-defined, designable bandwidth with near-optimally spaced zeros. Typical gain values of 10 dBi may be expected, with a designable fractional impedance bandwidth between 12 % and 30 %....', '', ' Go to Article', ' Go to Article', ' Go to Article', ' Go to Article', ' Go to Article', '\n Please read our\n Privacy Statement\xa0|\xa0\n Impressum \xa0|\xa0\n Sitemap \xa0|\xa0\n © 2019 Dassault Systemes Deutschland GmbH. All rights reserved.\n ', 'Your session has expired. Redirecting you to the login page...', '\n We use cookie to operate this website, improve its usability, personalize your experience, and track visits. By continuing to use this site, you are consenting to use of cookies. You have the possibility to manage the parameters and choose whether to accept certain cookies while on the site. 
For more information, please read our updated privacy policy\n', 'When you browse our website, cookies are enabled by default and data may be read or stored locally on your device. You can set your preferences below:', 'These cookies enable additional functionality like saving preferences, allowing social interactions and analyzing usage for site optimization.', 'These cookies enable us and third parties to serve ads that are relevant to your interests.'], 'imgs': ['http://www.antennamagus.com/database/antennas/341/Patch_FEGCOMA_Pin_small.png', 'http://www.antennamagus.com/images/Newsletter2019-0/FEGCOMA_3D_with_plus.png', 'http://www.antennamagus.com/images/Newsletter2019-0/FEGCOMA_s11_with_plus.png']}, {'title': '\n Printed Self-Matched Normal Mode Helix Antenna | Antenna Magus\n ', 'paragraphs': ['', '\nAntenna Magus is a software tool to help accelerate the antenna design and modelling process. It increases efficiency by helping the engineer to make a more informed choice of antenna element, providing a good starting design.\n', '', '', '\n IdEM is a user friendly tool for the generation of macromodels of linear lumped multi-port structures (e.g., via fields, connectors, packages, discontinuities, etc.), known from their input-output port responses. The raw characterization of the structure can come from measurement or simulation, either in frequency domain or in time domain.\n ', '', '', '\n FEST3D is a software tool capable of analysing complex passive microwave components based on waveguide technology (including multiplexers, couplers and filters) in very short computational times with high accuracy. This suite offers all the capabilities needed for the design of passive components such as optimization and tolerance analysis. Moreover, FEST3D advanced synthesis tools allow designing bandpass, dual-mode and lowpass filters from user specifications.\n ', '', '', '\n SPARK3D is a unique simulation tool for determining the RF breakdown power level in a wide variety of passive devices, including those based on cavities, waveguides, microstrip and antennas. Field results from CST STUDIO SUITE® simulations can be imported directly into SPARK3D to analyse vacuum breakdown (multipactor) and gas discharge. From this, SPARK3D calculates the maximum power that the device can handle without causing discharge effects.\n ', '', '', '\nEasy-to-use matching circuit optimization and antenna analysis software\n Optenni Lab is a professional software tool with innovative analysis features to increase the productivity of engineers requiring matching circuits. It can, e.g., speed up the antenna design process and provide antennas with optimal total performance. Optenni Lab offers fast fully-automatic matching circuit optimization tools, including automatic generation of multiple optimal topologies, estimation of the obtainable bandwidth of antennas and calculation of the worst-case isolation in multi-antenna systems.\n ', '', '', '\n The ability to visualize electromagnetic fields intuitively in 3D and also the possibility to demonstrate in a straightforward way the effect of parameter changes are obvious benefits in teaching. To support learning, teaching and research at academic institutions, CST offers four types of licenses, namely the free CST STUDIO SUITE®Student Edition, a Classroom license, an Educational license and an Extended license. 
\n ', '', '', '\n The CST STUDIO SUITE® Student Edition has been developed with the aim of introducing you to the world of electromagnetic simulation, making Maxwell’s equations easier to understand than ever.\n ', '', '', '\n Below you will find several examples which were selected from some commonly used textbooks. Each example contains a short description of the theory, detailed information on how to construct the model, a video showing how to construct the model, and the fully constructed model ready for you to download.\n ', '', '', '\n In acknowledgement of the importance of university research and the impact of groundbreaking publications on the reputation of both author and tool used for the research, CST announces the endowment of a University Publication Award.\n ', '', '', "\n Regular training courses are held in CST's offices in Asia, Europe, and North America. Please check the local websites for detail of trainings in China, Korea and Japan. Advance registration is normally required.\n ", '', '', '\nCST exhibits at events around the globe. See a list of exhibitions CST is attending where you can speak to our sales and support staff and learn more about our products and their applications.\n', '', '', '\nThroughout the year, CST simulation experts present eSeminars on the applications, features and usage of our software. You can also view past eSeminars by searching our archive and filtering for the markets or industries that interest you most.\n\n', '', '', '\n CST hosts workshops in multiple languages and in countries around the world. Workshops provide an opportunity to learn about specific applications and refresh your skills with experienced CST support staff.\n ', '', '', '\n The CST user conference offers an informal and enlightening environment where developers and researchers using CST STUDIO SUITE® tools can exchange ideas and talk with CST staff about future developments.\n ', '', 'facebooklinkedinswymtwitteryoutuberss', 'Events', 'Normal mode helix antennas (NMHA) are often used for handheld radio transceivers and mobile communications applications. The printed self-matched NMHA is naturally matched to 50 Ω, thus avoiding the typical design challenge of matching similar structures at resonance.', 'It exhibits properties similar to other NMHAs, namely: It is compact (with the total height being typically 0.14 λ), it is vertically polarized and omni-directional and has a bandwidth of approximately 3%.', 'The helical structure consists of two (inner and outer) metallic helical strips of equal width, with a central dielectric section between them.', ' Go to Article', ' Go to Article', ' Go to Article', ' Go to Article', ' Go to Article', '\n Please read our\n Privacy Statement\xa0|\xa0\n Impressum \xa0|\xa0\n Sitemap \xa0|\xa0\n © 2019 Dassault Systemes Deutschland GmbH. All rights reserved.\n ', 'Your session has expired. Redirecting you to the login page...', '\n We use cookie to operate this website, improve its usability, personalize your experience, and track visits. By continuing to use this site, you are consenting to use of cookies. You have the possibility to manage the parameters and choose whether to accept certain cookies while on the site. For more information, please read our updated privacy policy\n', 'When you browse our website, cookies are enabled by default and data may be read or stored locally on your device. 
You can set your preferences below:', 'These cookies enable additional functionality like saving preferences, allowing social interactions and analyzing usage for site optimization.', 'These cookies enable us and third parties to serve ads that are relevant to your interests.'], 'imgs': ['http://www.antennamagus.com/database/antennas/342/Printed_Matched_NMHA_small.png', 'http://www.antennamagus.com/images/Newsletter2019-0/NMHA_3D_Farfield_with_plus.png', 'http://www.antennamagus.com/images/Newsletter2019-0/NMHA_2D_sketch_with_plus.png', 'http://www.antennamagus.com/images/Newsletter2019-0/NMHA_S11vsFrequency_with_plus.png']}]
``` |
55,197,425 | Ok so here is what I am trying to achieve:
1. Call a URL with a list of dynamically filtered search results
2. Click on the first search result (5/page)
3. Scrape the headlines, paragraphs and images and store them as a JSON object in a separate file, e.g.
{
"Title": "Headline element of the individual entry",
"Content": "Paragraphs and images in DOM order of the individual entry"
}
4. Navigate back to the search results overview page and repeat steps 2 - 3
5. After 5/5 results have been scraped, go to the next page (click the pagination link)
6. Repeat steps 2 - 5 until no entry is left
To visualize once more what is intended:
[](https://i.stack.imgur.com/QJPSA.png)
What I have so far is:
```
#import libraries
from selenium import webdriver
from bs4 import BeautifulSoup
#URL
url = "https://URL.com"
#Create a browser session
driver = webdriver.Chrome("PATH TO chromedriver.exe")
driver.implicitly_wait(30)
driver.get(url)
#click consent btn on destination URL ( overlays rest of the content )
python_consentButton = driver.find_element_by_id('acceptAllCookies')
python_consentButton.click() #click cookie consent btn
#Selenium hands the page source to Beautiful Soup
soup_results_overview = BeautifulSoup(driver.page_source, 'lxml')
for link in soup_results_overview.findAll("a", class_="searchResults__detail"):
#Selenium visits each Search Result Page
searchResult = driver.find_element_by_class_name('searchResults__detail')
searchResult.click() #click Search Result
#Ask Selenium to go back to the search results overview page
driver.back()
#Tell Selenium to click paginate "next" link
  #probably needs to be in a surrounding for loop?
paginate = driver.find_element_by_class_name('pagination-link-next')
paginate.click() #click paginate next
driver.quit()
```
**Problem**
The list count resets every time Selenium navigates back to the search results overview page,
so it clicks the first entry 5 times, navigates to the next 5 items and stops.
This is probably a predestined case for a recursive approach, not sure.
Any advice on how to tackle this issue is appreciated. | 2019/03/16 | [
"https://Stackoverflow.com/questions/55197425",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4536968/"
] | You can use only `requests` and `BeautifulSoup` to scrape, without Selenium. It will be much faster and will consume far fewer resources:
```
import json
import requests
from bs4 import BeautifulSoup
# Get 1000 results
params = {"$filter": "TemplateName eq 'Application Article'", "$orderby": "ArticleDate desc", "$top": "1000",
"$inlinecount": "allpages", }
response = requests.get("https://www.cst.com/odata/Articles", params=params).json()
# iterate 1000 results
articles = response["value"]
for article in articles:
article_json = {}
article_content = []
# title of article
article_title = article["Title"]
# article url
article_url = str(article["Url"]).split("|")[1]
print(article_title)
# request article page and parse it
article_page = requests.get(article_url).text
page = BeautifulSoup(article_page, "html.parser")
# get header
header = page.select_one("h1.head--bordered").text
article_json["Title"] = str(header).strip()
# get body content with images links and descriptions
content = page.select("section.content p, section.content img, section.content span.imageDescription, "
"section.content em")
# collect content to json format
for x in content:
if x.name == "img":
article_content.append("https://cst.com/solutions/article/" + x.attrs["src"])
else:
article_content.append(x.text)
article_json["Content"] = article_content
# write to json file
with open(f"{article_json['Title']}.json", 'w') as to_json_file:
to_json_file.write(json.dumps(article_json))
print("the end")
``` | The following sets the results count to 20 and calculates the number of result pages. It clicks next until all pages have been visited. A condition is added to ensure the page has loaded. I print the articles just to show you the different pages. You can use this structure to create your desired output.
```
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import math
startUrl = 'https://www.cst.com/solutions#size=20&TemplateName=Application+Article'
url = 'https://www.cst.com/solutions#size=20&TemplateName=Application+Article&page={}'
driver = webdriver.Chrome()
driver.get(startUrl)
driver.find_element_by_id('acceptAllCookies').click()
items = WebDriverWait(driver,10).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, ".searchResults__detail")))
resultCount = int(driver.find_element_by_css_selector('[data-bind="text: resultsCount()"]').text.replace('items were found','').strip())
resultsPerPage = 20
numPages = math.ceil(resultCount/resultsPerPage)
currentCount = resultsPerPage
header = driver.find_element_by_css_selector('.searchResults__detail h3').text
test = header
for page in range(1, numPages + 1):
if page == 1:
print([item.text for item in items])
#do something with first page
else:
driver.find_element_by_css_selector('.pagination-link-next').click()
while header == test:
try:
header = driver.find_element_by_css_selector('.searchResults__detail h3').text
except:
continue
items = WebDriverWait(driver,10).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, ".searchResults__detail")))
test = header
#do something with next page
print([item.text for item in items])
if page == 4: #delete later
break #delete later
``` |
34,771,191 | I just upgraded to the latest stable release of `matplotlib` (1.5.1) and every time I import matplotlib I get this message:
```
/usr/local/lib/python2.7/dist-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
```
... which always stalls for a few seconds.
Is this the expected behaviour? Was it the same before as well, just without the printed message? | 2016/01/13 | [
"https://Stackoverflow.com/questions/34771191",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/497180/"
] | This worked for me on Ubuntu **16.04 LTS** with **Python 3.5.2 | Anaconda 4.2.0 (64-bit)**. I deleted all of the files in `~/.cache/matplotlib/`.
```
sudo rm -r fontList.py3k.cache tex.cache
```
At first I thought it hadn't worked, because I got the warning again afterward. But after the cache files were rebuilt, the warning went away. So close your Python session and reopen it; the warning is gone. | This worked for me:
```
sudo apt-get install libfreetype6-dev libxft-dev
``` |
34,771,191 | I just upgraded to the latest stable release of `matplotlib` (1.5.1) and every time I import matplotlib I get this message:
```
/usr/local/lib/python2.7/dist-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
```
... which always stalls for a few seconds.
Is this the expected behaviour? Was it the same before as well, just without the printed message? | 2016/01/13 | [
"https://Stackoverflow.com/questions/34771191",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/497180/"
] | I ran the python code with sudo and it cured it... my guess was that there wasn't permission to write that table... good luck! | This worked for me on Ubuntu **16.04 LTS** with **Python 3.5.2 | Anaconda 4.2.0 (64-bit)**. I deleted all of the files in `~/.cache/matplotlib/`.
```
sudo rm -r fontList.py3k.cache tex.cache
```
At first I thought it hadn't worked, because I got the warning again afterward. But after the cache files were rebuilt, the warning went away. So close your Python session and reopen it; the warning is gone. |
34,771,191 | I just upgraded to the latest stable release of `matplotlib` (1.5.1) and every time I import matplotlib I get this message:
```
/usr/local/lib/python2.7/dist-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
```
... which always stalls for a few seconds.
Is this the expected behaviour? Was it the same before as well, just without the printed message? | 2016/01/13 | [
"https://Stackoverflow.com/questions/34771191",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/497180/"
] | As tom suggested in the comment above, deleting the files:
```
fontList.cache
fontList.py3k.cache
tex.cache
```
solves the problem.
In my case the files were under:
```
~/.matplotlib
```
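If you are unsure where the cache lives on your machine, matplotlib can tell you; `matplotlib.get_cachedir()` is part of its public API (this tip is an addition, not from the original answer):
```
import matplotlib

# prints the directory whose files need deleting,
# e.g. ~/.matplotlib or ~/.cache/matplotlib depending on the platform
print(matplotlib.get_cachedir())
```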
EDITED
A couple of days ago the message appeared again; I deleted the files in the locations mentioned above without any success. I found, as suggested [here](https://stackoverflow.com/questions/35734074/problems-with-matplotlib-is-building-the-font-cache-using-fc-list-this-may-tak) by [T Mudau](https://stackoverflow.com/users/5695374/tshilidzi-mudau), that there's an extra location with font cache files: `~/.cache/fontconfig` | I ran the python code with sudo and it cured it... my guess was that there wasn't permission to write that table... good luck! |
34,771,191 | I just upgraded to the latest stable release of `matplotlib` (1.5.1) and every time I import matplotlib I get this message:
```
/usr/local/lib/python2.7/dist-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
```
... which always stalls for a few seconds.
Is this the expected behaviour? Was it the same before as well, just without the printed message? | 2016/01/13 | [
"https://Stackoverflow.com/questions/34771191",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/497180/"
] | Hi, you must find the file `font_manager.py`; in my case it is at `C:\Users\gustavo\Anaconda3\Lib\site-packages\matplotlib\font_manager.py`.
Find `def win32InstalledFonts(directory=None, fontext='ttf')` and replace it with:
```
def win32InstalledFonts(directory=None, fontext='ttf'):
    """
    Search for fonts in the specified font directory, or use the
    system directories if none given. A list of TrueType font
    filenames are returned by default, or AFM fonts if *fontext* ==
    'afm'.
    """
    from six.moves import winreg
    if directory is None:
        directory = win32FontDirectory()
    fontext = get_fontext_synonyms(fontext)
    key, items = None, {}
    for fontdir in MSFontDirectories:
        try:
            local = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, fontdir)
        except OSError:
            continue
        if not local:
            return list_fonts(directory, fontext)
        try:
            for j in range(winreg.QueryInfoKey(local)[1]):
                try:
                    key, direc, any = winreg.EnumValue(local, j)
                    if not is_string_like(direc):
                        continue
                    if not os.path.dirname(direc):
                        direc = os.path.join(directory, direc)
                    direc = direc.split('\0', 1)[0]
                    if os.path.splitext(direc)[1][1:] in fontext:
                        items[direc] = 1
                except EnvironmentError:
                    continue
                except WindowsError:
                    continue
                except MemoryError:
                    continue
            return list(six.iterkeys(items))
        finally:
            winreg.CloseKey(local)
    return None
``` | This worked for me:
```
sudo apt-get install libfreetype6-dev libxft-dev
``` |
34,771,191 | I just upgraded to the latest stable release of `matplotlib` (1.5.1) and every time I import matplotlib I get this message:
```
/usr/local/lib/python2.7/dist-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
```
... which always stalls for a few seconds.
Is this the expected behaviour? Was it the same before as well, just without the printed message? | 2016/01/13 | [
"https://Stackoverflow.com/questions/34771191",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/497180/"
] | I ran the python code using sudo just once, and it resolved the warning for me.
Now it runs faster. Running without sudo gives no warning at all.
Cheers | Hi, you must find the file `font_manager.py`; in my case it is at `C:\Users\gustavo\Anaconda3\Lib\site-packages\matplotlib\font_manager.py`.
Find `def win32InstalledFonts(directory=None, fontext='ttf')` and replace it with:
```
def win32InstalledFonts(directory=None, fontext='ttf'):
    """
    Search for fonts in the specified font directory, or use the
    system directories if none given. A list of TrueType font
    filenames are returned by default, or AFM fonts if *fontext* ==
    'afm'.
    """
    from six.moves import winreg
    if directory is None:
        directory = win32FontDirectory()
    fontext = get_fontext_synonyms(fontext)
    key, items = None, {}
    for fontdir in MSFontDirectories:
        try:
            local = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, fontdir)
        except OSError:
            continue
        if not local:
            return list_fonts(directory, fontext)
        try:
            for j in range(winreg.QueryInfoKey(local)[1]):
                try:
                    key, direc, any = winreg.EnumValue(local, j)
                    if not is_string_like(direc):
                        continue
                    if not os.path.dirname(direc):
                        direc = os.path.join(directory, direc)
                    direc = direc.split('\0', 1)[0]
                    if os.path.splitext(direc)[1][1:] in fontext:
                        items[direc] = 1
                except EnvironmentError:
                    continue
                except WindowsError:
                    continue
                except MemoryError:
                    continue
            return list(six.iterkeys(items))
        finally:
            winreg.CloseKey(local)
    return None
``` |
34,771,191 | I just upgraded to the latest stable release of `matplotlib` (1.5.1) and every time I import matplotlib I get this message:
```
/usr/local/lib/python2.7/dist-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
```
... which always stalls for a few seconds.
Is this the expected behaviour? Was it the same before as well, just without the printed message? | 2016/01/13 | [
"https://Stackoverflow.com/questions/34771191",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/497180/"
] | As tom suggested in the comment above, deleting the files:
```
fontList.cache
fontList.py3k.cache
tex.cache
```
solves the problem.
In my case the files were under:
```
~/.matplotlib
```
EDITED
A couple of days ago the message appeared again; I deleted the files in the locations mentioned above without any success. I found, as suggested [here](https://stackoverflow.com/questions/35734074/problems-with-matplotlib-is-building-the-font-cache-using-fc-list-this-may-tak) by [T Mudau](https://stackoverflow.com/users/5695374/tshilidzi-mudau), that there's an extra location with font cache files: `~/.cache/fontconfig` | This worked for me on Ubuntu **16.04 LTS** with **Python 3.5.2 | Anaconda 4.2.0 (64-bit)**. I deleted all of the files in `~/.cache/matplotlib/`.
```
sudo rm -r fontList.py3k.cache tex.cache
```
At first I thought it hadn't worked, because I got the warning again afterward. But after the cache files were rebuilt, the warning went away. So close your Python session and reopen it; the warning is gone. |
34,771,191 | I just upgraded to the latest stable release of `matplotlib` (1.5.1) and every time I import matplotlib I get this message:
```
/usr/local/lib/python2.7/dist-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
```
... which always stalls for a few seconds.
Is this the expected behaviour? Was it the same before as well, just without the printed message? | 2016/01/13 | [
"https://Stackoverflow.com/questions/34771191",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/497180/"
] | Confirmed Hugo's approach works for Ubuntu 14.04 LTS/matplotlib 1.5.1:
* deleted ~/.cache/matplotlib/fontList.cache
* ran code, again the warning was issued (assumption: it is rebuilding the cache correctly)
* ran code again, no more warning (finally) | Hi, you must find the file `font_manager.py`; in my case it is at `C:\Users\gustavo\Anaconda3\Lib\site-packages\matplotlib\font_manager.py`.
Find `def win32InstalledFonts(directory=None, fontext='ttf')` and replace it with:
```
def win32InstalledFonts(directory=None, fontext='ttf'):
    """
    Search for fonts in the specified font directory, or use the
    system directories if none given. A list of TrueType font
    filenames are returned by default, or AFM fonts if *fontext* ==
    'afm'.
    """
    from six.moves import winreg
    if directory is None:
        directory = win32FontDirectory()
    fontext = get_fontext_synonyms(fontext)
    key, items = None, {}
    for fontdir in MSFontDirectories:
        try:
            local = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, fontdir)
        except OSError:
            continue
        if not local:
            return list_fonts(directory, fontext)
        try:
            for j in range(winreg.QueryInfoKey(local)[1]):
                try:
                    key, direc, any = winreg.EnumValue(local, j)
                    if not is_string_like(direc):
                        continue
                    if not os.path.dirname(direc):
                        direc = os.path.join(directory, direc)
                    direc = direc.split('\0', 1)[0]
                    if os.path.splitext(direc)[1][1:] in fontext:
                        items[direc] = 1
                except EnvironmentError:
                    continue
                except WindowsError:
                    continue
                except MemoryError:
                    continue
            return list(six.iterkeys(items))
        finally:
            winreg.CloseKey(local)
    return None
``` |
34,771,191 | I just upgraded to the latest stable release of `matplotlib` (1.5.1) and every time I import matplotlib I get this message:
```
/usr/local/lib/python2.7/dist-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
```
... which always stalls for a few seconds.
Is this the expected behaviour? Was it the same before as well, just without the printed message? | 2016/01/13 | [
"https://Stackoverflow.com/questions/34771191",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/497180/"
] | Confirmed Hugo's approach works for Ubuntu 14.04 LTS/matplotlib 1.5.1:
* deleted ~/.cache/matplotlib/fontList.cache
* ran code, again the warning was issued (assumption: it is rebuilding the cache correctly)
* ran code again, no more warning (finally) | This worked for me:
```
sudo apt-get install libfreetype6-dev libxft-dev
``` |
34,771,191 | I just upgraded to the latest stable release of `matplotlib` (1.5.1) and every time I import matplotlib I get this message:
```
/usr/local/lib/python2.7/dist-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
```
... which always stalls for a few seconds.
Is this the expected behaviour? Was it the same before as well, just without the printed message? | 2016/01/13 | [
"https://Stackoverflow.com/questions/34771191",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/497180/"
] | As tom suggested in the comment above, deleting the files:
```
fontList.cache
fontList.py3k.cache
tex.cache
```
solves the problem.
In my case the files were under:
```
~/.matplotlib
```
EDITED
A couple of days ago the message appeared again; I deleted the files in the locations mentioned above without any success. I found, as suggested [here](https://stackoverflow.com/questions/35734074/problems-with-matplotlib-is-building-the-font-cache-using-fc-list-this-may-tak) by [T Mudau](https://stackoverflow.com/users/5695374/tshilidzi-mudau), that there's an extra location with font cache files: `~/.cache/fontconfig` | Confirmed Hugo's approach works for Ubuntu 14.04 LTS/matplotlib 1.5.1:
* deleted ~/.cache/matplotlib/fontList.cache
* ran code, again the warning was issued (assumption: it is rebuilding the cache correctly)
* ran code again, no more warning (finally) |
34,771,191 | I just upgraded to the latest stable release of `matplotlib` (1.5.1) and every time I import matplotlib I get this message:
```
/usr/local/lib/python2.7/dist-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
```
... which always stalls for a few seconds.
Is this the expected behaviour? Was it the same before as well, just without the printed message? | 2016/01/13 | [
"https://Stackoverflow.com/questions/34771191",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/497180/"
] | Confirmed Hugo's approach works for Ubuntu 14.04 LTS/matplotlib 1.5.1:
* deleted ~/.cache/matplotlib/fontList.cache
* ran code, again the warning was issued (assumption: it is rebuilding the cache correctly)
* ran code again, no more warning (finally) | I ran the python code with sudo and it cured it... my guess was that there wasn't permission to write that table... good luck! |
68,616,659 | I am trying to find all instances of a number within an equation, and for that I wrote this Python script:
```
re.findall(fr"([\-\+\*\/\(]|^)({val})([\-\+\*\/\)]|$)", equation)
```
Now, when I give it this: `20+5-20`, and search for `20`, the output is as expected: `[('', '20', '+'), ('-', '20', '')]`
But, when I simply do `20+20-5`, it doesn't work anymore and I only get the first instance: `[('', '20', '+')]`
I don't understand why; it's not even a problem of `20` being at the start and end. For example, `5-20*4-20/3` will still match `20` just fine. It just doesn't work when the value is repeated consecutively.
How do I fix this?
Thank you | 2021/08/02 | [
"https://Stackoverflow.com/questions/68616659",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8754028/"
] | The reason your pattern initially does not work for `20+20-5` is that the character class after matching the first occurrence of 20 actually consumes the `+`.
After consuming it, for the second occurrence of 20 right after it, the `([\-\+\*\/\(]|^)` part of the pattern cannot match: there is no character left for the character class to consume, and the position is not the start of the string.
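You can see the consumption happen by running the original pattern (a quick check added here, assuming `val = 20`):
```
import re

val = 20
original = fr"([\-\+\*\/\(]|^)({val})([\-\+\*\/\)]|$)"
print(re.findall(original, "20+20-5"))
# [('', '20', '+')] -- the '+' is consumed by the first match,
# so nothing is left in front of the second 20
```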
Using 20, for example, in place of `{val}`, you can use lookarounds, which do not consume the value but only assert that it is present.
Note that you don't have to escape those characters inside a character class, and for the last assertion you don't have to add another non-capture group.
```
(?:(?<=[-+*/(])|^)20(?=[-+*/)]|$)
```
[Regex demo](https://regex101.com/r/dFXhl0/1)
```
import re
strings = [
"20+5-20",
"20+20-5"
]
val = 20
pattern = fr"(?:(?<=[-+*/(])|^){val}(?=[-+*/)]|$)"
for equation in strings:
print(re.findall(pattern, equation))
```
Output
```
['20', '20']
['20', '20']
``` | I suggest just searching for all numbers (integer + decimal) in your expression, and then filtering for certain values:
```py
inp = "20+5-20*3.20"
matches = re.findall(r'\d+(?:\.\d+)?', inp)
matches = [x for x in matches if x == '20']
print(matches) # ['20', '20']
```
Every number in your formula should *only* be surrounded by either arithmetic symbols, parentheses, or whitespace, all of which are non word characters. |
68,616,659 | I am trying to find all instances of a number within an equation, and for that I wrote this Python script:
```
re.findall(fr"([\-\+\*\/\(]|^)({val})([\-\+\*\/\)]|$)", equation)
```
Now, when I give it this: `20+5-20`, and search for `20`, the output is as expected: `[('', '20', '+'), ('-', '20', '')]`
But, when I simply do `20+20-5`, it doesn't work anymore and I only get the first instance: `[('', '20', '+')]`
I don't understand why; it's not even a problem of `20` being at the start and end. For example, `5-20*4-20/3` will still match `20` just fine. It just doesn't work when the value is repeated consecutively.
How do I fix this?
Thank you | 2021/08/02 | [
"https://Stackoverflow.com/questions/68616659",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8754028/"
] | The reason your pattern initially does not work for `20+20-5` is that the character class after matching the first occurrence of 20 actually consumes the `+`.
After consuming it, for the second occurrence of 20 right after it, the `([\-\+\*\/\(]|^)` part of the pattern cannot match: there is no character left for the character class to consume, and the position is not the start of the string.
Using 20, for example, in place of `{val}`, you can use lookarounds, which do not consume the value but only assert that it is present.
Note that you don't have to escape those characters inside a character class, and for the last assertion you don't have to add another non-capture group.
```
(?:(?<=[-+*/(])|^)20(?=[-+*/)]|$)
```
[Regex demo](https://regex101.com/r/dFXhl0/1)
```
import re
strings = [
"20+5-20",
"20+20-5"
]
val = 20
pattern = fr"(?:(?<=[-+*/(])|^){val}(?=[-+*/)]|$)"
for equation in strings:
print(re.findall(pattern, equation))
```
Output
```
['20', '20']
['20', '20']
``` | I think I found an answer; still not sure how correct it is, or why it works when mine doesn't :/
```
re.findall(fr"(?:(?<=[\=\-\+\*\/\(])|^)({val})(?:(?=[\=\-\+\*\/\)])|$)", equation
```
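As a quick sanity check of that pattern (an added snippet, assuming `val = 20`):
```
import re

val = 20
pattern = fr"(?:(?<=[\=\-\+\*\/\(])|^)({val})(?:(?=[\=\-\+\*\/\)])|$)"
for equation in ("20+5-20", "20+20-5", "5-20*4-20/3"):
    print(equation, re.findall(pattern, equation))
# every line prints ['20', '20']
```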
Basically, it performs a lookbehind and a lookahead to check that the value sits between operators without consuming them. |
51,132,025 | I want to create a folder one hour after the current time in Python. I know how to get the current time and date and how to create a folder, but how do I create a folder at a time specified by me? Any help would be appreciated.
```
from datetime import datetime
from datetime import timedelta
import os
while True:
now = datetime.now ()
#print(now.strftime("%H:%M:%S"))
y = datetime.now () + timedelta (hours = 1)
#print(y.strftime("%H:%M:%S"))
if now== y:
os.makedirs (y.strftime ("%H/%M/%S"))
```
Will this work?
EDIT: I have to run the code continuously, i.e. creating folders at every instant of time | 2018/07/02 | [
"https://Stackoverflow.com/questions/51132025",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10020438/"
] | Try this simple code
```
import os
import time
while True:
    time.sleep(3600) # wait for 1 hour (3600 seconds)
os.makedirs(your directory) # create the directory
```
EDIT (using parallel programming)
```
import os
import time
from datetime import datetime
from multiprocessing import Pool
def create_folder(now):
# you can manipulate variable "now" as you wish
    time.sleep(3600) # wait for 1 hour (3600 seconds)
os.makedirs(your directory) # create the directory
return
pool = Pool()  # create the pool once, outside the loop
while True:
    now = datetime.now()
    result = pool.apply_async(create_folder, [now]) # asynchronously evaluate 'create_folder(now)'
```
This may consume a lot of your computer's resources. | Check the post linked below for a better explanation; you can create a function that runs at a given time and have it create the folder with a simple one-line call (a sketch of the idea follows):
`os.makedirs("path\directory name")`
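A minimal sketch of that idea using the standard library's `threading.Timer` (the one-hour delay and the folder-naming scheme are assumptions, not from the linked post):
```
import os
import threading
from datetime import datetime

def create_folder():
    # assumed naming scheme: name the folder after its creation time
    path = datetime.now().strftime("%H-%M-%S")
    os.makedirs(path, exist_ok=True)

# schedule create_folder to run once, 3600 seconds (1 hour) from now
threading.Timer(3600, create_folder).start()
```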
[Python - Start a Function at Given Time](https://stackoverflow.com/questions/11523918/python-start-a-function-at-given-time?noredirect=1&lq=1) |
51,132,025 | I want to create a folder one hour after the current time in Python. I know how to get the current time and date and how to create a folder, but how do I create a folder at a time specified by me? Any help would be appreciated.
```
from datetime import datetime
from datetime import timedelta
import os
while True:
now = datetime.now ()
#print(now.strftime("%H:%M:%S"))
y = datetime.now () + timedelta (hours = 1)
#print(y.strftime("%H:%M:%S"))
if now== y:
os.makedirs (y.strftime ("%H/%M/%S"))
```
Will this work?
EDIT: I have to run the code continuously, i.e. creating folders at every instant of time | 2018/07/02 | [
"https://Stackoverflow.com/questions/51132025",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10020438/"
] | Try this simple code
```
import os
import time
while True:
    time.sleep(3600) # wait for 1 hour (3600 seconds)
os.makedirs(your directory) # create the directory
```
EDIT (using parallel programming)
```
import os
import time
from datetime import datetime
from multiprocessing import Pool
def create_folder(now):
# you can manipulate variable "now" as you wish
    time.sleep(3600) # wait for 1 hour (3600 seconds)
os.makedirs(your directory) # create the directory
return
pool = Pool()  # create the pool once, outside the loop
while True:
    now = datetime.now()
    result = pool.apply_async(create_folder, [now]) # asynchronously evaluate 'create_folder(now)'
```
This may consume a lot of your computer's resources. | To create a new folder every 60 seconds (folders like New1, New2, ...):
```
import time
while True:
time_Begin = time.time()
print("Creating Folder....")
# CODE FOR CREATING FOLDER AND CONDITION
time_End = time.time()
time_Elapsed = time_End - time_Begin
    time.sleep(max(0, 60 - time_Elapsed))  # avoid passing a negative value to sleep
```
Until an external process is done:
```
import time
import random
def creatingFolder():
while externalProcess() != 30:
timeBegin = time.time()
print("Creating Folder....", timeBegin)
timeEnd = time.time()
timeElapsed = timeEnd - timeBegin
        time.sleep(max(0, 5 - timeElapsed))  # avoid passing a negative value to sleep
def externalProcess():
return random.randint(1, 30)
creatingFolder()
``` |
42,696,635 | I am trying to use the owlready library in Python. I downloaded the file from this link (<https://pypi.python.org/pypi/Owlready>), but when I import owlready I get the following error:
```
>>> from owlready import *
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named 'owlready'
```
I tried running:
```
pip install owlready
```
I get the error:
```
error: could not create '/usr/local/lib/python3.4/dist-packages/owlready': Permission denied
``` | 2017/03/09 | [
"https://Stackoverflow.com/questions/42696635",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5879314/"
] | Try installing it using `pip` instead.
Run the command `pip install <module name here>` to do so. If you are using python3, run `pip3 install <module name here>`.
If neither of these work you may also try:
`python -m pip install <module name here>`
or
`python3 -m pip install <module name here>`
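Since your original error was `Permission denied` under `/usr/local/lib/...`, installing into the user site-packages avoids writing to system directories (a standard pip option, added here as a note): `pip3 install --user owlready`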
If you don't yet have `pip`, you should probably get it; it is the most commonly used Python package manager. [Here](https://stackoverflow.com/questions/4750806/how-do-i-install-pip-on-windows) are some details on how to set the tool up. | You need to install the library:
```
C:\PythonX.X\Scripts
pip install owlready
Successfully installed Owlready-0.3
``` |
69,969,792 | So, I have to write code in Python that will draw four squares under a function called `draw_square` that will take four arguments: the canvas on which the square will be drawn, the color of the square, the side length of the square, and the position of the center of the square. This function should draw the square and return the handle of the square. The `create_rectangle` method should only be used inside the `draw_square` function. This is what I have so far:
```
from tkinter import*
root = Tk()
my_canvas = Canvas(root, width=900, height=900, bg="white")
my_canvas.pack(pady=30)
def draw_square():
draw_square.create_rectangle(0, 0, 150, 150, fill = "orange",
outline = "orange")
draw_square.create_rectangle(750, 0, 900, 150, fill = "green",
outline = "green")
draw_square.create_rectangle(0, 750, 150, 900, fill = "blue",
outline = "blue")
draw_square.create_rectangle(750, 750, 900, 900, fill = "black",
outline = "black")
draw_square()
```
Please let me know what to do to make my code work. | 2021/11/15 | [
"https://Stackoverflow.com/questions/69969792",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17414982/"
] | Use `my_canvas.create_rectangle(...)`.
You were calling `create_rectangle` on your function object rather than on the canvas itself.
Extra info: [Tkinter Canvas creating rectangle](https://stackoverflow.com/questions/42039564/tkinter-canvas-creating-rectangle) | You need to do the following: call `my_canvas.create_rectangle(...)` on the canvas for each square and pack the canvas with `my_canvas.pack()`. A minimal sketch of such a `draw_square` function, taking the canvas, color, side length and center position and returning the rectangle handle, is below (the corner coordinates come from the question's code; the function body is an illustration):
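```
def draw_square(canvas, color, side, center):
    cx, cy = center
    half = side / 2
    # create_rectangle returns an integer handle identifying the item
    return canvas.create_rectangle(cx - half, cy - half,
                                   cx + half, cy + half,
                                   fill=color, outline=color)

# the four corner squares from the question
draw_square(my_canvas, "orange", 150, (75, 75))
draw_square(my_canvas, "green", 150, (825, 75))
draw_square(my_canvas, "blue", 150, (75, 825))
draw_square(my_canvas, "black", 150, (825, 825))
```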
After drawing and packing all four squares, finish the script with `root.mainloop()` so the window stays open. |
50,505,067 | I have a simple DAG
```
from airflow import DAG
from airflow.contrib.operators.bigquery_operator import BigQueryOperator
with DAG(dag_id='my_dags.my_dag') as dag:
start = DummyOperator(task_id='start')
end = DummyOperator(task_id='end')
sql = """
SELECT *
FROM 'another_dataset.another_table'
"""
bq_query = BigQueryOperator(bql=sql,
                            destination_dataset_table='my_dataset.my_table20180524',
task_id='bq_query',
bigquery_conn_id='my_bq_connection',
use_legacy_sql=False,
write_disposition='WRITE_TRUNCATE',
create_disposition='CREATE_IF_NEEDED',
query_params={})
start >> bq_query >> end
```
When executing the `bq_query` task the SQL query gets saved in a sharded table. I want it to get saved in a daily partitioned table. In order to do so, I only changed `destination_dataset_table` to `my_dataset.my_table$20180524`. I got the error below when executing the `bq_task`:
```
Partitioning specification must be provided in order to create partitioned table
```
How can I tell BigQuery to save the query result to a daily partitioned table? My first guess was to use `query_params` in `BigQueryOperator`,
but I didn't find any example of how to use that parameter.
**EDIT:**
I'm using `google-cloud==0.27.0` python client ... and it's the one used in Prod :( | 2018/05/24 | [
"https://Stackoverflow.com/questions/50505067",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5715610/"
] | You first need to create an empty partitioned destination table. Follow the instructions here: [link](https://cloud.google.com/bigquery/docs/creating-column-partitions#creating_an_empty_partitioned_table_with_a_schema_definition) to create an empty partitioned table,
and then run the Airflow pipeline below again.
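For reference, such an empty day-partitioned table can also be created from the command line with the bq CLI, e.g. `bq mk --time_partitioning_type=DAY my_dataset.my_table` (names taken from the question).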
You can try this code:
```py
import datetime
from airflow import DAG
from airflow.contrib.operators.bigquery_operator import BigQueryOperator
from airflow.operators.dummy_operator import DummyOperator  # needed for the DummyOperator tasks below
today_date = datetime.datetime.now().strftime("%Y%m%d")
table_name = 'my_dataset.my_table' + '$' + today_date
with DAG(dag_id='my_dags.my_dag') as dag:
start = DummyOperator(task_id='start')
end = DummyOperator(task_id='end')
sql = """
SELECT *
FROM 'another_dataset.another_table'
"""
bq_query = BigQueryOperator(bql=sql,
                            destination_dataset_table='{{ params.t_name }}',
task_id='bq_query',
bigquery_conn_id='my_bq_connection',
use_legacy_sql=False,
write_disposition='WRITE_TRUNCATE',
create_disposition='CREATE_IF_NEEDED',
                            params={'t_name': table_name},  # feeds the {{ params.t_name }} template above
dag=dag
)
start >> bq_query >> end
```
So what I did is create a dynamic table name variable and pass it to the BQ operator. | The main issue here is that I don't have access to the new version of the Google Cloud Python API; prod is using version [0.27.0](https://gcloud-python.readthedocs.io/en/stable/bigquery/usage.html).
So, to get the job done, I did something quick and dirty:
* saved the query result in a sharded table, let it be `table_sharded`
* got `table_sharded`'s schema, let it be `table_schema`
* saved `" SELECT * FROM dataset.table_sharded"` query to a partitioned table providing `table_schema`
All this is abstracted in one single operator that uses a hook. The hook is responsible for creating/deleting tables/partitions, getting the table schema and running queries on BigQuery.
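For illustration, creating the day-partitioned target table with the 0.27 client looked roughly like this (a sketch from memory of the pre-0.28 API, not taken from the linked gist; double-check it against the 0.27 docs):
```
from google.cloud import bigquery

client = bigquery.Client()
dataset = client.dataset('my_dataset')  # dataset name assumed from the question
table = dataset.table('my_table')
table.schema = table_schema             # the schema read from table_sharded
table.partitioning_type = 'DAY'         # ingestion-time daily partitioning
table.create()
```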
Have a look at the [code](https://gist.github.com/MassyB/be4555a5fc8e6c433766d71e9d760f91). If there is any other solution, please let me know. |
50,505,067 | I have a simple DAG
```
from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator
from airflow.contrib.operators.bigquery_operator import BigQueryOperator

with DAG(dag_id='my_dags.my_dag') as dag:
    start = DummyOperator(task_id='start')
    end = DummyOperator(task_id='end')

    sql = """
    SELECT *
    FROM 'another_dataset.another_table'
    """

    bq_query = BigQueryOperator(bql=sql,
                                destination_dataset_table='my_dataset.my_table20180524',
                                task_id='bq_query',
                                bigquery_conn_id='my_bq_connection',
                                use_legacy_sql=False,
                                write_disposition='WRITE_TRUNCATE',
                                create_disposition='CREATE_IF_NEEDED',
                                query_params={})

    start >> bq_query >> end
```
When executing the `bq_query` task the SQL query gets saved in a sharded table. I want it to get saved in a daily partitioned table. In order to do so, I only changed `destination_dataset_table` to `my_dataset.my_table$20180524`. I got the error below when executing the `bq_task`:
```
Partitioning specification must be provided in order to create partitioned table
```
How can I specify to BigQuery to save the query result to a daily partitioned table? My first guess was to use `query_params` in `BigQueryOperator`
but I didn't find any example on how to use that parameter.
**EDIT:**
I'm using `google-cloud==0.27.0` python client ... and it's the one used in Prod :( | 2018/05/24 | [
"https://Stackoverflow.com/questions/50505067",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5715610/"
] | You first need to create an empty partitioned destination table. Follow the instructions here: [link](https://cloud.google.com/bigquery/docs/creating-column-partitions#creating_an_empty_partitioned_table_with_a_schema_definition) to create an empty partitioned table,
and then run the airflow pipeline below again.
You can try code:
```py
import datetime
from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator
from airflow.contrib.operators.bigquery_operator import BigQueryOperator

today_date = datetime.datetime.now().strftime("%Y%m%d")
table_name = 'my_dataset.my_table' + '$' + today_date

with DAG(dag_id='my_dags.my_dag') as dag:
    start = DummyOperator(task_id='start')
    end = DummyOperator(task_id='end')

    sql = """
    SELECT *
    FROM 'another_dataset.another_table'
    """

    bq_query = BigQueryOperator(bql=sql,
                                # destination_dataset_table is a templated field,
                                # so the Jinja expression must be a string
                                destination_dataset_table='{{ params.t_name }}',
                                task_id='bq_query',
                                bigquery_conn_id='my_bq_connection',
                                use_legacy_sql=False,
                                write_disposition='WRITE_TRUNCATE',
                                create_disposition='CREATE_IF_NEEDED',
                                # params (not query_params) feeds {{ params.* }}
                                params={'t_name': table_name})

    start >> bq_query >> end
```
So what I did is that I created a dynamic table name variable and passed it to the BQ operator. | Using BigQueryOperator you can pass the time_partitioning parameter, which will create ingestion-time partitioned tables
```
bq_cmd = BigQueryOperator(
    task_id="task_id",
    sql=[query],
    destination_dataset_table=destination_tbl,
    use_legacy_sql=False,
    write_disposition='WRITE_TRUNCATE',
    time_partitioning={'time_partitioning_type': 'DAY'},
    allow_large_results=True,
    trigger_rule='all_success',
    query_params=query_params,
    dag=dag
)
``` |
50,505,067 | I have a simple DAG
```
from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator
from airflow.contrib.operators.bigquery_operator import BigQueryOperator

with DAG(dag_id='my_dags.my_dag') as dag:
    start = DummyOperator(task_id='start')
    end = DummyOperator(task_id='end')

    sql = """
    SELECT *
    FROM 'another_dataset.another_table'
    """

    bq_query = BigQueryOperator(bql=sql,
                                destination_dataset_table='my_dataset.my_table20180524',
                                task_id='bq_query',
                                bigquery_conn_id='my_bq_connection',
                                use_legacy_sql=False,
                                write_disposition='WRITE_TRUNCATE',
                                create_disposition='CREATE_IF_NEEDED',
                                query_params={})

    start >> bq_query >> end
```
When executing the `bq_query` task the SQL query gets saved in a sharded table. I want it to get saved in a daily partitioned table. In order to do so, I only changed `destination_dataset_table` to `my_dataset.my_table$20180524`. I got the error below when executing the `bq_task`:
```
Partitioning specification must be provided in order to create partitioned table
```
How can I specify to BigQuery to save the query result to a daily partitioned table? My first guess was to use `query_params` in `BigQueryOperator`
but I didn't find any example on how to use that parameter.
**EDIT:**
I'm using `google-cloud==0.27.0` python client ... and it's the one used in Prod :( | 2018/05/24 | [
"https://Stackoverflow.com/questions/50505067",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5715610/"
] | You first need to create an empty partitioned destination table. Follow the instructions here: [link](https://cloud.google.com/bigquery/docs/creating-column-partitions#creating_an_empty_partitioned_table_with_a_schema_definition) to create an empty partitioned table,
and then run the airflow pipeline below again.
You can try code:
```py
import datetime
from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator
from airflow.contrib.operators.bigquery_operator import BigQueryOperator

today_date = datetime.datetime.now().strftime("%Y%m%d")
table_name = 'my_dataset.my_table' + '$' + today_date

with DAG(dag_id='my_dags.my_dag') as dag:
    start = DummyOperator(task_id='start')
    end = DummyOperator(task_id='end')

    sql = """
    SELECT *
    FROM 'another_dataset.another_table'
    """

    bq_query = BigQueryOperator(bql=sql,
                                # destination_dataset_table is a templated field,
                                # so the Jinja expression must be a string
                                destination_dataset_table='{{ params.t_name }}',
                                task_id='bq_query',
                                bigquery_conn_id='my_bq_connection',
                                use_legacy_sql=False,
                                write_disposition='WRITE_TRUNCATE',
                                create_disposition='CREATE_IF_NEEDED',
                                # params (not query_params) feeds {{ params.* }}
                                params={'t_name': table_name})

    start >> bq_query >> end
```
So what I did is that I created a dynamic table name variable and passed it to the BQ operator. | ```
from datetime import datetime, timedelta
from airflow import DAG
from airflow.models import Variable
from airflow.contrib.operators.bigquery_operator import BigQueryOperator
from airflow.operators.dummy_operator import DummyOperator

DEFAULT_DAG_ARGS = {
    'owner': 'airflow',
    'depends_on_past': False,
    'retries': 2,
    'retry_delay': timedelta(minutes=10),
    'project_id': Variable.get('gcp_project'),
    'zone': Variable.get('gce_zone'),
    'region': Variable.get('gce_region'),
    'location': Variable.get('gce_zone'),
}

with DAG(
        'test',
        start_date=datetime(2019, 1, 1),
        schedule_interval=None,
        catchup=False,
        default_args=DEFAULT_DAG_ARGS) as dag:

    bq_query = BigQueryOperator(
        task_id='create-partition',
        bql="""SELECT
                  *
               FROM
                  `dataset.table_name`""",  # table from which you want to pull data
        destination_dataset_table='project.dataset.table_name' + '$' + datetime.now().strftime('%Y%m%d'),  # auto partitioned table in BQ
        write_disposition='WRITE_TRUNCATE',
        create_disposition='CREATE_IF_NEEDED',
        use_legacy_sql=False,
    )
```
I recommend using Airflow Variables for these fields and referencing them in the DAG.
With the code above, a partition for today's date will be added to the BigQuery table. |
69,795,302 | I am a beginner in python so please be gentle and if you do have an answer please provide details.
I just installed the most recent python version 3.10 after making sure to delete all previous installations (including anaconda). I am positive my system is clear of any prior installation.
After installing Python 3.10, I open my terminal and run the following:
```
pip list
```
which outputs:
```
pip list
Package Version
---------- -------
pip 21.2.3
setuptools 57.4.0
```
Then I install pipenv
```
pip install pipenv
```
which outputs
```
WARNING: The script virtualenv-clone.exe is installed in 'C:\Users\Giulio\AppData\Roaming\Python\Python310\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The script virtualenv.exe is installed in 'C:\Users\Giulio\AppData\Roaming\Python\Python310\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The scripts pipenv-resolver.exe and pipenv.exe are installed in 'C:\Users\Giulio\AppData\Roaming\Python\Python310\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed backports.entry-points-selectable-1.1.0 certifi-2021.10.8 distlib-0.3.3 filelock-3.3.2 pipenv-2021.5.29 platformdirs-2.4.0 six-1.16.0 virtualenv-20.10.0 virtualenv-clone-0.5.7
```
Finally:
```
pipenv
'pipenv' is not recognized as an internal or external command,
operable program or batch file.
```
Now I can see that the terminal spits out 3 warnings concerning paths not included in Environment Variables.
I don't understand why pipenv gets installed in user folders.
Indeed my python installation is in C:\Program Files (as I made sure to set up during installation):
```
where python
C:\Program Files\Python310\python.exe
```
If I run:
```
python -m pipenv
```
pipenv does its thing.
So OK, I resolve to use it like this (even though all the tutorials make it look easy).
I proceed to create a virtual environment in a given folder
```
python -m pipenv shell
```
Everything works and I see the output:
```
Successfully created virtual environment!
Virtualenv location: C:\Users\Giulio\.virtualenvs\project-dhMbrBv2
```
Finally, I inspect the .virtualenvs related folder:
```
01/11/2021 10:58 <DIR> .
01/11/2021 10:58 <DIR> ..
01/11/2021 10:54 42 .gitignore
01/11/2021 10:54 38 .project
01/11/2021 10:58 0 contents.txt
01/11/2021 10:54 <DIR> Lib
01/11/2021 10:54 319 pyvenv.cfg
01/11/2021 10:54 <DIR> Scripts
4 File(s) 399 bytes
4 Dir(s) 660,409,012,224 bytes free
```
Now... shouldn't there be a BIN folder as well?
For instance I would like to set the interpreter in VSCode.
I cannot understand why I am getting all of these small inconsistencies.
Gladly appreciate any help!
EDIT (1):
So apparently there is no `\bin` folder because I am using windows:
In windows the `\Scripts` folder is created instead.
But the problem of pipenv not running without the preemptive call to python persists. | 2021/11/01 | [
"https://Stackoverflow.com/questions/69795302",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5159404/"
] | You can refer to this answer solution with the highest upvotes - [Windows reports error when trying to install package using pipenv](https://stackoverflow.com/questions/46041719/windows-reports-error-when-trying-to-install-package-using-pipenv/46041892#46041892)
Or refer to this GitHub issue on pipenv - <https://github.com/pypa/pipenv/issues/3101>
1. First, remove your current version of virtualenv: `pip uninstall virtualenv`
2. Then, remove your current version of pipenv: `pip uninstall pipenv`
3. When you are asked Proceed (y/n)? just enter y. This will give you a clean slate.
4. Finally, you can once again install pipenv and its dependencies: pip install pipenv
5. Check installation with `pipenv --version` | I followed the suggested steps, but they did not work.
Later, I set `C:\Users\xxxxxxx\AppData\Roaming\Python\Python310\Scripts` in the "PATH" environment variable and relaunched the cmd.
It worked like a charm...
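If you are unsure what that per-user Scripts directory is on your machine, a quick standard-library sketch prints it (the scheme name assumes Windows):
```
import sysconfig

# Prints the per-user scripts directory that pip's warning refers to.
print(sysconfig.get_path("scripts", scheme="nt_user"))
```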
Note: During the installation itself, it warns you to add `C:\Users\xxxxxxx\AppData\Roaming\Python\Python310\Scripts` to the "PATH" env variable |
69,795,302 | I am a beginner in python so please be gentle and if you do have an answer please provide details.
I just installed the most recent python version 3.10 after making sure to delete all previous installations (including anaconda). I am positive my system is clear of any prior installation.
After installing Python 3.10, I open my terminal and run the following:
```
pip list
```
which outputs:
```
pip list
Package Version
---------- -------
pip 21.2.3
setuptools 57.4.0
```
Then I install pipenv
```
pip install pipenv
```
which outputs
```
WARNING: The script virtualenv-clone.exe is installed in 'C:\Users\Giulio\AppData\Roaming\Python\Python310\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The script virtualenv.exe is installed in 'C:\Users\Giulio\AppData\Roaming\Python\Python310\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The scripts pipenv-resolver.exe and pipenv.exe are installed in 'C:\Users\Giulio\AppData\Roaming\Python\Python310\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed backports.entry-points-selectable-1.1.0 certifi-2021.10.8 distlib-0.3.3 filelock-3.3.2 pipenv-2021.5.29 platformdirs-2.4.0 six-1.16.0 virtualenv-20.10.0 virtualenv-clone-0.5.7
```
Finally:
```
pipenv
'pipenv' is not recognized as an internal or external command,
operable program or batch file.
```
Now I can see that the terminal spits out 3 warnings concerning paths not included in Environment Variables.
I don't understand why pipenv gets installed in user folders.
Indeed my python installation is in C:\Program Files (as I made sure to set up during installation):
```
where python
C:\Program Files\Python310\python.exe
```
If I run:
```
python -m pipenv
```
pipenv does its thing.
So OK, I resolve to use it like this (even though all the tutorials make it look easy).
I proceed to create a virtual environment in a given folder
```
python -m pipenv shell
```
Everything works and I see the output:
```
Successfully created virtual environment!
Virtualenv location: C:\Users\Giulio\.virtualenvs\project-dhMbrBv2
```
Finally, I inspect the .virtualenvs related folder:
```
01/11/2021 10:58 <DIR> .
01/11/2021 10:58 <DIR> ..
01/11/2021 10:54 42 .gitignore
01/11/2021 10:54 38 .project
01/11/2021 10:58 0 contents.txt
01/11/2021 10:54 <DIR> Lib
01/11/2021 10:54 319 pyvenv.cfg
01/11/2021 10:54 <DIR> Scripts
4 File(s) 399 bytes
4 Dir(s) 660,409,012,224 bytes free
```
Now... shouldn't there be a BIN folder as well?
For instance I would like to set the interpreter in VSCode.
I cannot understand why I am getting all of these small inconsistencies.
Gladly appreciate any help!
EDIT (1):
So apparently there is no `\bin` folder because I am using windows:
In windows the `\Scripts` folder is created instead.
But the problem of pipenv not running without the preemptive call to python persists. | 2021/11/01 | [
"https://Stackoverflow.com/questions/69795302",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5159404/"
] | You can refer to this answer solution with the highest upvotes - [Windows reports error when trying to install package using pipenv](https://stackoverflow.com/questions/46041719/windows-reports-error-when-trying-to-install-package-using-pipenv/46041892#46041892)
Or refer to this GitHub issue on pipenv - <https://github.com/pypa/pipenv/issues/3101>
1. First, remove your current version of virtualenv: `pip uninstall virtualenv`
2. Then, remove your current version of pipenv: `pip uninstall pipenv`
3. When you are asked Proceed (y/n)? just enter y. This will give you a clean slate.
4. Finally, you can once again install pipenv and its dependencies: pip install pipenv
5. Check installation with `pipenv --version` | 1. Go to Advanced System Settings in Control Panel
2. Click on Environment Variables
3. Under System Variables, look for PATH (if you don't see it, you can click on New and create one).
4. Click on Edit and in Variable Value paste a path which looks like this: C:\Users\xxxxxxx\AppData\Roaming\Python\Python310\Scripts
5. Click OK |
69,795,302 | I am a beginner in python so please be gentle and if you do have an answer please provide details.
I just installed the most recent python version 3.10 after making sure to delete all previous installations (including anaconda). I am positive my system is clear of any prior installation.
After installing Python 3.10, I open my terminal and run the following:
```
pip list
```
which outputs:
```
pip list
Package Version
---------- -------
pip 21.2.3
setuptools 57.4.0
```
Then I install pipenv
```
pip install pipenv
```
which outputs
```
WARNING: The script virtualenv-clone.exe is installed in 'C:\Users\Giulio\AppData\Roaming\Python\Python310\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The script virtualenv.exe is installed in 'C:\Users\Giulio\AppData\Roaming\Python\Python310\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The scripts pipenv-resolver.exe and pipenv.exe are installed in 'C:\Users\Giulio\AppData\Roaming\Python\Python310\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed backports.entry-points-selectable-1.1.0 certifi-2021.10.8 distlib-0.3.3 filelock-3.3.2 pipenv-2021.5.29 platformdirs-2.4.0 six-1.16.0 virtualenv-20.10.0 virtualenv-clone-0.5.7
```
Finally:
```
pipenv
'pipenv' is not recognized as an internal or external command,
operable program or batch file.
```
Now I can see that the terminal spits out 3 warnings concerning paths not included in Environment Variables.
I don't understand why pipenv gets installed in user folders.
Indeed my python installation is in C:\Program Files (as I made sure to set up during installation):
```
where python
C:\Program Files\Python310\python.exe
```
If I run:
```
python -m pipenv
```
pipenv does its thing.
So OK, I resolve to use it like this (even though all the tutorials make it look easy).
I proceed to create a virtual environment in a given folder
```
python -m pipenv shell
```
Everything works and I see the output:
```
Successfully created virtual environment!
Virtualenv location: C:\Users\Giulio\.virtualenvs\project-dhMbrBv2
```
Finally, I inspect the .virtualenvs related folder:
```
01/11/2021 10:58 <DIR> .
01/11/2021 10:58 <DIR> ..
01/11/2021 10:54 42 .gitignore
01/11/2021 10:54 38 .project
01/11/2021 10:58 0 contents.txt
01/11/2021 10:54 <DIR> Lib
01/11/2021 10:54 319 pyvenv.cfg
01/11/2021 10:54 <DIR> Scripts
4 File(s) 399 bytes
4 Dir(s) 660,409,012,224 bytes free
```
Now... shouldn't there be a BIN folder as well?
For instance I would like to set the interpreter in VSCode.
I cannot understand why I am getting all of these small inconsistencies.
Gladly appreciate any help!
EDIT (1):
So apparently there is no `\bin` folder because I am using windows:
In windows the `\Scripts` folder is created instead.
But the problem of pipenv not running without the preemptive call to python persists. | 2021/11/01 | [
"https://Stackoverflow.com/questions/69795302",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5159404/"
] | You can refer to this answer solution with the highest upvotes - [Windows reports error when trying to install package using pipenv](https://stackoverflow.com/questions/46041719/windows-reports-error-when-trying-to-install-package-using-pipenv/46041892#46041892)
Or refer to this GitHub issue on pipenv - <https://github.com/pypa/pipenv/issues/3101>
1. First, remove your current version of virtualenv: `pip uninstall virtualenv`
2. Then, remove your current version of pipenv: `pip uninstall pipenv`
3. When you are asked Proceed (y/n)? just enter y. This will give you a clean slate.
4. Finally, you can once again install pipenv and its dependencies: pip install pipenv
5. Check installation with `pipenv --version` | Search for Environment Variables in the Windows search and open it
Click on the "Environment Variables" button
Under System Variables, look for PATH (if you don't see it, you can click on New and create one):
Click on Edit and in Variable Value paste a path which looks like this: C:\Users\xxxxxxx\AppData\Roaming\Python\Python310\Scripts
Click OK
Create this path too:
C:\Users\xxxxxxx\AppData\Roaming\Python\Python310\site-packages |
69,795,302 | I am a beginner in python so please be gentle and if you do have an answer please provide details.
I just installed the most recent python version 3.10 after making sure to delete all previous installations (including anaconda). I am positive my system is clear of any prior installation.
After installing Python 3.10, I open my terminal and run the following:
```
pip list
```
which outputs:
```
pip list
Package Version
---------- -------
pip 21.2.3
setuptools 57.4.0
```
Then I install pipenv
```
pip install pipenv
```
which outputs
```
WARNING: The script virtualenv-clone.exe is installed in 'C:\Users\Giulio\AppData\Roaming\Python\Python310\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The script virtualenv.exe is installed in 'C:\Users\Giulio\AppData\Roaming\Python\Python310\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The scripts pipenv-resolver.exe and pipenv.exe are installed in 'C:\Users\Giulio\AppData\Roaming\Python\Python310\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed backports.entry-points-selectable-1.1.0 certifi-2021.10.8 distlib-0.3.3 filelock-3.3.2 pipenv-2021.5.29 platformdirs-2.4.0 six-1.16.0 virtualenv-20.10.0 virtualenv-clone-0.5.7
```
Finally:
```
pipenv
'pipenv' is not recognized as an internal or external command,
operable program or batch file.
```
Now I can see that the terminal spits out 3 warnings concerning paths not included in Environment Variables.
I don't understand why pipenv gets installed in user folders.
Indeed my python installation is in C:\Program Files (as I made sure to set up during installation):
```
where python
C:\Program Files\Python310\python.exe
```
If I run:
```
python -m pipenv
```
pipenv does its thing.
So OK, I resolve to use it like this (even though all the tutorials make it look easy).
I proceed to create a virtual environment in a given folder
```
python -m pipenv shell
```
Everything works and I see the output:
```
Successfully created virtual environment!
Virtualenv location: C:\Users\Giulio\.virtualenvs\project-dhMbrBv2
```
Finally, I inspect the .virtualenvs related folder:
```
01/11/2021 10:58 <DIR> .
01/11/2021 10:58 <DIR> ..
01/11/2021 10:54 42 .gitignore
01/11/2021 10:54 38 .project
01/11/2021 10:58 0 contents.txt
01/11/2021 10:54 <DIR> Lib
01/11/2021 10:54 319 pyvenv.cfg
01/11/2021 10:54 <DIR> Scripts
4 File(s) 399 bytes
4 Dir(s) 660,409,012,224 bytes free
```
Now... shouldn't there be a BIN folder as well?
For instance I would like to set the interpreter in VSCode.
I cannot understand why I am getting all of these small inconsistencies.
Gladly appreciate any help!
EDIT (1):
So apparently there is no `\bin` folder because I am using windows:
In windows the `\Scripts` folder is created instead.
But the problem of pipenv not running without the preemptive call to python persists. | 2021/11/01 | [
"https://Stackoverflow.com/questions/69795302",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5159404/"
] | 1. Go to Advanced System Settings in Control Panel
2. Click on Environment Variables
3. Under System Variables, look for PATH (if you don't see it, you can click on New and create one).
4. Click on Edit and in Variable Value paste a path which looks like this: C:\Users\xxxxxxx\AppData\Roaming\Python\Python310\Scripts
5. Click OK | I followed the suggested steps, but they did not work.
Later, I set `C:\Users\xxxxxxx\AppData\Roaming\Python\Python310\Scripts` in the "PATH" environment variable and relaunched the cmd.
It worked like a charm...
Note: During the installation itself, it warns you to add `C:\Users\xxxxxxx\AppData\Roaming\Python\Python310\Scripts` to the "PATH" env variable |
69,795,302 | I am a beginner in python so please be gentle and if you do have an answer please provide details.
I just installed the most recent python version 3.10 after making sure to delete all previous installations (including anaconda). I am positive my system is clear of any prior installation.
After installing Python 3.10, I open my terminal and run the following:
```
pip list
```
which outputs:
```
pip list
Package Version
---------- -------
pip 21.2.3
setuptools 57.4.0
```
Then I install pipenv
```
pip install pipenv
```
which outputs
```
WARNING: The script virtualenv-clone.exe is installed in 'C:\Users\Giulio\AppData\Roaming\Python\Python310\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The script virtualenv.exe is installed in 'C:\Users\Giulio\AppData\Roaming\Python\Python310\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The scripts pipenv-resolver.exe and pipenv.exe are installed in 'C:\Users\Giulio\AppData\Roaming\Python\Python310\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed backports.entry-points-selectable-1.1.0 certifi-2021.10.8 distlib-0.3.3 filelock-3.3.2 pipenv-2021.5.29 platformdirs-2.4.0 six-1.16.0 virtualenv-20.10.0 virtualenv-clone-0.5.7
```
Finally:
```
pipenv
'pipenv' is not recognized as an internal or external command,
operable program or batch file.
```
Now I can see that the terminal spits out 3 warnings concerning paths not included in Environment Variables.
I don't understand why pipenv gets installed in user folders.
Indeed my python installation is in C:\Program Files (as I made sure to set up during installation):
```
where python
C:\Program Files\Python310\python.exe
```
If I run:
```
python -m pipenv
```
pipenv does its thing.
So OK, I resolve to use it like this (even though all the tutorials make it look easy).
I proceed to create a virtual environment in a given folder
```
python -m pipenv shell
```
Everything works and I see the output:
```
Successfully created virtual environment!
Virtualenv location: C:\Users\Giulio\.virtualenvs\project-dhMbrBv2
```
Finally, I inspect the .virtualenvs related folder:
```
01/11/2021 10:58 <DIR> .
01/11/2021 10:58 <DIR> ..
01/11/2021 10:54 42 .gitignore
01/11/2021 10:54 38 .project
01/11/2021 10:58 0 contents.txt
01/11/2021 10:54 <DIR> Lib
01/11/2021 10:54 319 pyvenv.cfg
01/11/2021 10:54 <DIR> Scripts
4 File(s) 399 bytes
4 Dir(s) 660,409,012,224 bytes free
```
Now... shouldn't there be a BIN folder as well?
For instance I would like to set the interpreter in VSCode.
I cannot understand why I am getting all of these small inconsistencies.
Gladly appreciate any help!
EDIT (1):
So apparently there is no `\bin` folder because I am using windows:
In windows the `\Scripts` folder is created instead.
But the problem of pipenv not running without the preemptive call to python persists. | 2021/11/01 | [
"https://Stackoverflow.com/questions/69795302",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5159404/"
] | 1. Go to Advanced System Settings in Control Panel
2. Click on Environment Variables
3. Under System Variables, look for PATH (if you don't see it, you can click on New and create one).
4. Click on Edit and in Variable Value paste a path which looks like this: C:\Users\xxxxxxx\AppData\Roaming\Python\Python310\Scripts
5. Click OK | Search for Environment Variables in the Windows search and open it
Click on the "Environment Variables" button
Under System Variables, look for PATH (if you don't see it, you can click on New and create one):
Click on Edit and in Variable Value paste a path which looks like this: C:\Users\xxxxxxx\AppData\Roaming\Python\Python310\Scripts
Click OK
Create this path too:
C:\Users\xxxxxxx\AppData\Roaming\Python\Python310\site-packages |
20,590,331 | On my local PC I can do "python manage.py runserver" and the site runs perfectly, CSS and all. I just deployed the site to a public server and while most things work, CSS (and the images) are not loading into the templates.
I found some other questions with a similar issue, but my code did not appear to suffer from any of the same problems.
Within the Django project settings the same python function is being used to allow the app to see the templates and the static CSS / image files. The templates are being found by the views and are loading without issue.
Both from settings.py:
```
STATICFILES_DIRS = (
os.path.join(os.path.dirname(__file__), 'templates/css').replace('\\','/'),
os.path.join(os.path.dirname(__file__), 'content').replace('\\','/'),
)
TEMPLATE_DIRS = (
os.path.join(os.path.dirname(__file__), 'templates').replace('\\','/'),
)
```
In the base.html file which the rest of the templates all extend:
```
<head>
{% load staticfiles %}
<link rel="stylesheet" type="text/css" href="{% static "style.css" %}" media="screen">
</head>
```
Directory structure:
```
|project_root/
|--manage.py
|--project/
| |--settings.py
| |--__init__.py
| |--content/
| | |--header.jpg
| |--templates/
| | |--base.html
| | |--css/
| | | |--style.css
```
My first thought when the CSS didn't load was that Django couldn't find the style.css file, but since I am using the same "os.path.dirname(**file**)" technique as with the templates, I am not sure this is the case.
What do I have wrong here?
Edit:
I neglected to mention that both the PC and server are running Python 2.7.5 and Django 1.5.5. | 2013/12/15 | [
"https://Stackoverflow.com/questions/20590331",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1803100/"
] | In WinForms (or even in WPF), only the thread that created a component can update it, so you should make your code thread-safe.
For this reason the debugger raises an InvalidOperationException with the message "Control *control name* accessed from a thread other than the thread it was created on.", which surfaces as an AggregateException because tasks wrap all exceptions in an aggregate exception.
You can use this code to iterate through all the exceptions in the AggregateException raised by the task:
```
try
{
    t.Wait();
}
catch (AggregateException ae)
{
    // Assume we know what's going on with this particular exception.
    // Rethrow anything else. AggregateException.Handle provides
    // another way to express this. See later example.
    foreach (var e in ae.InnerExceptions)
    {
        if (e is MyCustomException)
        {
            Console.WriteLine(e.Message);
        }
        else
        {
            throw;
        }
    }
}
```
To make your code thread-safe, just do something like this:
```
// This delegate enables asynchronous calls for setting
// the Image property on a PictureBox control.
delegate void SetPictureBoxCallback(Image image);

// If the calling thread is different from the thread that
// created the PictureBox control, this method creates a
// SetPictureBoxCallback and calls itself asynchronously using the
// Invoke method. If the calling thread is the same as the thread
// that created the PictureBox control, the Image property is set directly.
private void SetPictureBox(Image image)
{
    // InvokeRequired compares the thread ID of the
    // calling thread to the thread ID of the creating thread.
    // If these threads are different, it returns true.
    if (this.picturebox1.InvokeRequired)
    {
        SetPictureBoxCallback d = new SetPictureBoxCallback(SetPictureBox);
        this.Invoke(d, new object[] { image });
    }
    else
    {
        picturebox1.Image = image;
    }
}
``` | Another option for using a Task result on the calling thread is the `async/await` keyword. This way the compiler does the work of capturing the right `TaskScheduler` for you. Look at the code below. You need to add `try/catch` statements for exception handling.
This way the code is still asynchronous but looks like synchronous code; remember that code should be readable.
```
var _image = await Task<Image>.Factory.StartNew(InvertImage, TaskCreationOptions.LongRunning);
pictureBox1.Image = _image;
``` |
20,590,331 | On my local PC I can do "python manage.py runserver" and the site runs perfectly, CSS and all. I just deployed the site to a public server and while most things work, CSS (and the images) are not loading into the templates.
I found some other questions with a similar issue, but my code did not appear to suffer from any of the same problems.
Within the Django project settings the same python function is being used to allow the app to see the templates and the static CSS / image files. The templates are being found by the views and are loading without issue.
Both from settings.py:
```
STATICFILES_DIRS = (
os.path.join(os.path.dirname(__file__), 'templates/css').replace('\\','/'),
os.path.join(os.path.dirname(__file__), 'content').replace('\\','/'),
)
TEMPLATE_DIRS = (
os.path.join(os.path.dirname(__file__), 'templates').replace('\\','/'),
)
```
In the base.html file which the rest of the templates all extend:
```
<head>
{% load staticfiles %}
<link rel="stylesheet" type="text/css" href="{% static "style.css" %}" media="screen">
</head>
```
Directory structure:
```
|project_root/
|--manage.py
|--project/
| |--settings.py
| |--__init__.py
| |--content/
| | |--header.jpg
| |--templates/
| | |--base.html
| | |--css/
| | | |--style.css
```
My first thought when the CSS didn't load was that Django couldn't find the style.css file, but since I am using the same "os.path.dirname(**file**)" technique as with the templates, I am not sure this is the case.
What do I have wrong here?
Edit:
I neglected to mention that both the PC and server are running Python 2.7.5 and Django 1.5.5. | 2013/12/15 | [
"https://Stackoverflow.com/questions/20590331",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1803100/"
] | By default the continuation runs on the default scheduler, which is the thread-pool scheduler. Thread-pool threads are not the thread that created the UI components, so they can't update those components (a WinForms control may only be touched from the thread that created it). So your code won't work.
**Fix: Get the scheduler from the UI thread. This will ensure that the continuation runs on the same thread which created the UI component**
```
var scheduler = TaskScheduler.FromCurrentSynchronizationContext();
```
and then pass it to the ContinueWith function.
```
t.ContinueWith(task => {
    // some code here
    pictureBox1.Image = task.Result;
},
TaskContinuationOptions.OnlyOnRanToCompletion, scheduler);
``` | Another option for using a Task result on the calling thread is the `async/await` keyword. This way the compiler does the work of capturing the right `TaskScheduler` for you. Look at the code below. You need to add `try/catch` statements for exception handling.
This way the code is still asynchronous but looks like synchronous code; remember that code should be readable.
```
var _image = await Task<Image>.Factory.StartNew(InvertImage, TaskCreationOptions.LongRunning);
pictureBox1.Image = _image;
``` |
69,628,226 | I have made a browser with Python. I converted it into an exe file with pyinstaller. But its size is 109,426kb!!! I need to upload it to some places and it is showing "Please try to upload files under 25mb". What will I do? How do I get this big exe file down under 25mb? | 2021/10/19 | [
"https://Stackoverflow.com/questions/69628226",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15622728/"
] | If you have a task that is re-run with the same "Execution Date", using Airflow Variables is your best choice. XCom is deleted by definition when you re-run the same task with the same execution date, so it won't carry state across re-runs.
Basically what you want to do is to store the "state" of task execution, and it's kinda "against" Airflow's principle of idempotent tasks (where re-running the task should produce the "final" results every time you run it). You, on the other hand, want to store the state of the task between re-runs and have it behave differently on subsequent re-runs - based on the stored state.
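A rough sketch of that Variable-based state keeping (the key and values here are made up, not from the original answer):
```
from airflow.models import Variable

def my_task(**context):
    key = "my_dag__my_task__state"
    state = Variable.get(key, default_var=None)  # None on the very first run
    if state == "done":
        return  # behave differently on a re-run of the same execution date
    # ... do the actual work ...
    Variable.set(key, "done")
```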
Another option that you could use is to store the state in external storage (for example, an object in S3). This might be better performance-wise if you do not want to load your DB too much. You could come up with a naming "convention" for such a state object, pull it at the start and push it when you finish the task. | You could use XComs with `include_prior_dates` parameter. [Docs](https://airflow.apache.org/docs/apache-airflow/stable/_api/airflow/models/taskinstance/index.html#airflow.models.taskinstance.TaskInstance.xcom_pull) state the following:
>
> **include\_prior\_dates** (bool) -- If False, only XComs from the current execution\_date are returned. If True, XComs from previous dates are returned as well.
>
>
>
(Default value is `False`)
Then you would do: `xcom_pull(task_ids='previous_task', include_prior_dates=True)`
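For illustration, a sketch of that pull inside a PythonOperator callable (the task id is an assumption):
```
def downstream_callable(**context):
    ti = context["ti"]
    # include_prior_dates=True also looks at XComs pushed on earlier
    # execution dates, not just the current one.
    previous_value = ti.xcom_pull(task_ids="previous_task", include_prior_dates=True)
    print(previous_value)
```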
I haven't tried out personally but looks like this may be a good solution to your case. |
68,653,388 | I want to replace the values in manifest.json. My manifest.json file looks like
```
{
"uat1": {
"database": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"services1": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"p_database": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"p_services": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"Build_d": {
"artifact_version": "0.0.1",
"date": "sysdate"
}
},
"uat2": {
"database": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"services1": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"p_database": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"p_services": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"Build_d": {
"artifact_version": "0.0.1",
"date": "sysdate"
}
}
```
Whenever there is any update on the uat1 database (or any other component), it will update the manifest file with the version and sysdate. My output manifest.json will look like
```
{
"uat1": {
"database": {
"artifact_version": "12.0.3",
"date": "04/08/2021 19:50:14"
},
"services1": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"p_database": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"p_services": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"Build_d": {
"artifact_version": "0.0.1",
"date": "sysdate"
}
},
"uat2": {
"database": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"services1": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"p_database": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"p_services": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"Build_d": {
"artifact_version": "0.0.1",
"date": "sysdate"
}
}
```
I am writing Python code for this, but the values are not getting written properly:
I am running python like **test.py 12.0.3 uat1 database**
My code looks like:
```
import sys
import json
from datetime import datetime

version = str(sys.argv[1])
env = str(sys.argv[2])
script = str(sys.argv[3])
now = datetime.now()
sdate = now.strftime("%d/%m/%Y %H:%M:%S")
print(sdate)
print("%s %s %s" % (version, env, script))

with open("C:/Users/lohapri/PycharmProjects/RFOS/manifest.json", "r") as f1:
    data = json.load(f1)
f1.close()
#print(data)
for k1, v1 in data.items():
    if k1 == env:
        for k2, v2 in v1.items():
            if k2 == script:
                v2['artifact_version'] = version
                v2['date'] = sdate
print(v2)
with open("C:/Users/lohapri/PycharmProjects/RFOS/manifest.json", "w") as f2:
    for i in k1:
        json.dump(v2, f2, indent=4)
```
The output in manifest.json I am getting is:
```
{
"artifact_version": "0.0.1",
"date": "sysdate"
}{
"artifact_version": "0.0.1",
"date": "sysdate"
}{
"artifact_version": "0.0.1",
"date": "sysdate"
}{
"artifact_version": "0.0.1",
"date": "sysdate"
}
```
Please tell me how should I proceed. | 2021/08/04 | [
"https://Stackoverflow.com/questions/68653388",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4632240/"
] | Just parse it, update the necessary value, and write it back to the file.
```
with open("manifest.json") as f:
d = json.load(f)
d[env][script] = {"artifact_version": ..., "date": ...}
with tempfile.NamedTemporaryFile(delete=False) as f:
try:
json.dump(d, f)
except Exception:
raise
else:
os.rename(f.name, "manifest.json")
```
If you aren't concerned about `manifest.json` being truncated before successfully writing the new data, you can reduce the third step to
```
with open("manifest.json", "w") as f:
json.dump(d, f)
``` | No, to 'edit' a `json` file, you have to load the whole file in with `data = json.load(f1)`, then perform the transform, then write the whole lot out again:
```py
with open("C:/Users/lohapri/PycharmProjects/RFOS/manifest.json", "r") as f1:
data = json.load(f1)
#no close needed
#print(data)
for k1, v1 in data.items():
if k1 == env:
for k2, v2 in v1.items():
if k2 == script:
v2['artifact_version'] = version
v2['date'] = sdate
print(v2)
with open("C:/Users/lohapri/PycharmProjects/RFOS/manifest.json", "w") as f2:
json.dump(data, f2, indent=4)
``` |
68,653,388 | I want to replace the values in manifest.json. My manifest.json file looks like
```
{
"uat1": {
"database": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"services1": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"p_database": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"p_services": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"Build_d": {
"artifact_version": "0.0.1",
"date": "sysdate"
}
},
"uat2": {
"database": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"services1": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"p_database": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"p_services": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"Build_d": {
"artifact_version": "0.0.1",
"date": "sysdate"
}
}
```
Whenever there is any update on the uat1 database (or any other component), it will update the manifest file with the version and sysdate. My output manifest.json will look like
```
{
"uat1": {
"database": {
"artifact_version": "12.0.3",
"date": "04/08/2021 19:50:14"
},
"services1": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"p_database": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"p_services": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"Build_d": {
"artifact_version": "0.0.1",
"date": "sysdate"
}
},
"uat2": {
"database": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"services1": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"p_database": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"p_services": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"Build_d": {
"artifact_version": "0.0.1",
"date": "sysdate"
}
}
```
I am writing Python code for this, but the values are not getting written properly:
I am running python like **test.py 12.0.3 uat1 database**
My code looks like:
```
import sys
import json
from datetime import datetime

version = str(sys.argv[1])
env = str(sys.argv[2])
script = str(sys.argv[3])
now = datetime.now()
sdate = now.strftime("%d/%m/%Y %H:%M:%S")
print(sdate)
print("%s %s %s" % (version, env, script))

with open("C:/Users/lohapri/PycharmProjects/RFOS/manifest.json", "r") as f1:
    data = json.load(f1)
f1.close()
#print(data)
for k1, v1 in data.items():
    if k1 == env:
        for k2, v2 in v1.items():
            if k2 == script:
                v2['artifact_version'] = version
                v2['date'] = sdate
print(v2)
with open("C:/Users/lohapri/PycharmProjects/RFOS/manifest.json", "w") as f2:
    for i in k1:
        json.dump(v2, f2, indent=4)
```
The output in manifest.json I am getting is:
```
{
"artifact_version": "0.0.1",
"date": "sysdate"
}{
"artifact_version": "0.0.1",
"date": "sysdate"
}{
"artifact_version": "0.0.1",
"date": "sysdate"
}{
"artifact_version": "0.0.1",
"date": "sysdate"
}
```
Please tell me how should I proceed. | 2021/08/04 | [
"https://Stackoverflow.com/questions/68653388",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4632240/"
] | You have a dict, so there is no need to iterate through it.
And you need to dump the json just once:
```py
data[env][script].update(
    artifact_version=version,
    date=sdate
)

with open("C:/Users/lohapri/PycharmProjects/RFOS/manifest.json", "w") as f2:
    json.dump(data, f2, indent=4)
``` | No, to 'edit' a `json` file, you have to load the whole file in with `data = json.load(f1)`, then perform the transform, then write the whole lot out again:
```py
with open("C:/Users/lohapri/PycharmProjects/RFOS/manifest.json", "r") as f1:
data = json.load(f1)
#no close needed
#print(data)
for k1, v1 in data.items():
if k1 == env:
for k2, v2 in v1.items():
if k2 == script:
v2['artifact_version'] = version
v2['date'] = sdate
print(v2)
with open("C:/Users/lohapri/PycharmProjects/RFOS/manifest.json", "w") as f2:
json.dump(data, f2, indent=4)
``` |
68,653,388 | I want to replace the values in manifest.json. My manifest.json file looks like
```
{
"uat1": {
"database": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"services1": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"p_database": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"p_services": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"Build_d": {
"artifact_version": "0.0.1",
"date": "sysdate"
}
},
"uat2": {
"database": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"services1": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"p_database": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"p_services": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"Build_d": {
"artifact_version": "0.0.1",
"date": "sysdate"
}
}
```
Whenever there is any update on the uat1 database (or any other component), it will update the manifest file with the version and sysdate. My output manifest.json will look like
```
{
"uat1": {
"database": {
"artifact_version": "12.0.3",
"date": "04/08/2021 19:50:14"
},
"services1": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"p_database": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"p_services": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"Build_d": {
"artifact_version": "0.0.1",
"date": "sysdate"
}
},
"uat2": {
"database": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"services1": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"p_database": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"p_services": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"Build_d": {
"artifact_version": "0.0.1",
"date": "sysdate"
}
}
```
I am writing Python code for this, but the values are not getting written properly:
I am running python like **test.py 12.0.3 uat1 database**
My code looks like:
```
import sys
import json
from datetime import datetime

version = str(sys.argv[1])
env = str(sys.argv[2])
script = str(sys.argv[3])
now = datetime.now()
sdate = now.strftime("%d/%m/%Y %H:%M:%S")
print(sdate)
print("%s %s %s" % (version, env, script))

with open("C:/Users/lohapri/PycharmProjects/RFOS/manifest.json", "r") as f1:
    data = json.load(f1)
f1.close()
#print(data)
for k1, v1 in data.items():
    if k1 == env:
        for k2, v2 in v1.items():
            if k2 == script:
                v2['artifact_version'] = version
                v2['date'] = sdate
print(v2)
with open("C:/Users/lohapri/PycharmProjects/RFOS/manifest.json", "w") as f2:
    for i in k1:
        json.dump(v2, f2, indent=4)
```
The output in manifest.json I am getting is:
```
{
"artifact_version": "0.0.1",
"date": "sysdate"
}{
"artifact_version": "0.0.1",
"date": "sysdate"
}{
"artifact_version": "0.0.1",
"date": "sysdate"
}{
"artifact_version": "0.0.1",
"date": "sysdate"
}
```
Please tell me how should I proceed. | 2021/08/04 | [
"https://Stackoverflow.com/questions/68653388",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4632240/"
] | You have a dict, so there is no need to iterate through it.
And you need to dump the json just once:
```py
data[env][script].update(
    artifact_version=version,
    date=sdate
)

with open("C:/Users/lohapri/PycharmProjects/RFOS/manifest.json", "w") as f2:
    json.dump(data, f2, indent=4)
``` | Just parse it, update the necessary value, and write it back to the file.
```
with open("manifest.json") as f:
d = json.load(f)
d[env][script] = {"artifact_version": ..., "date": ...}
with tempfile.NamedTemporaryFile(delete=False) as f:
try:
json.dump(d, f)
except Exception:
raise
else:
os.rename(f.name, "manifest.json")
```
If you aren't concerned about `manifest.json` being truncated before successfully writing the new data, you can reduce the third step to
```
with open("manifest.json", "w") as f:
json.dump(d, f)
``` |
59,939,819 | I am trying to run Django unit tests in the VSCode Test Explorer, also, I want the CodeLens 'Run Tests' button to appear above each test.
[enter image description here](https://i.stack.imgur.com/kTTjN.png)
However, in the Test Explorer, When I press the Play button, an error displays:
"No Tests were Ran" [No Tests were Ran](https://i.stack.imgur.com/mMlI0.png)
My directory structure is:
* Workspace\_Folder
+ settings.json
+ repo
- python\_module\_1
* sub\_module
+ tests
- test\_a.py
I am using the unittest framework.
My Settings.json looks like this:
```
{
"python.pythonPath": "/Users/nbonilla/.local/share/virtualenvs/koku-iTLe243o/bin/python",
"python.testing.unittestArgs": [
"-v",
"-s",
"${workspaceFolder}/python_module_1/sub_module/"
],
"python.testing.pytestEnabled": false,
"python.testing.nosetestsEnabled": false,
"python.testing.unittestEnabled": true,
}
```
When I press the green "Play" button [Test Explorer Play Button](https://i.stack.imgur.com/oeJ8U.png)
The Python Test Log Output shows the message "Unhandled exception in thread started by"
[Unhandled Exception in thread started by](https://i.stack.imgur.com/04HUt.png)
I am using a pipenv virtual environment.
How do I run these Django Tests in the VSCode Test Explorer?
I saw that using pyTest is an alternative to unittest, how can this be set up easily as a replacement? | 2020/01/27 | [
"https://Stackoverflow.com/questions/59939819",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12064691/"
] | Please consider the following checks:
1. you should have `__init__.py` in your test directory
2. in vscode on test configuration use pytest framework
3. use: `pip install pytest-django`
4. copy `pytest.ini` in the root with this content:
```
# -- FILE: pytest.ini (or tox.ini)
[pytest]
DJANGO_SETTINGS_MODULE = <your-web-project-name>.settings (like mysite.settings)
# -- recommended but optional:
python_files = tests.py test_*.py *_tests.py
```
Now it should work as you wish.
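For example, a minimal test file for this setup could look like the sketch below (it relies on pytest-django's `django_user_model` fixture; the names are made up):
```
# tests/test_users.py
import pytest

@pytest.mark.django_db  # pytest-django marker that enables database access
def test_create_user(django_user_model):
    user = django_user_model.objects.create(username="someone")
    assert user.username == "someone"
```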
You can see [this stackoverflow link](https://stackoverflow.com/questions/55837922/vscode-pytest-test-discovery-fails) | I've been looking into this as well. The thing is that plain unittest, pytest and nose are not drop-in alternatives to Django tests, because they would not be able to load everything Django tests do.
Django Test Runner might work for you:
<https://marketplace.visualstudio.com/items?itemName=Pachwenko.django-test-runner>
-- I was having trouble with this still since my project root does not directly contain my app(s), but judging by your project structure it may work for you. |
59,939,819 | I am trying to run Django unit tests in the VSCode Test Explorer, also, I want the CodeLens 'Run Tests' button to appear above each test.
[enter image description here](https://i.stack.imgur.com/kTTjN.png)
However, in the Test Explorer, When I press the Play button, an error displays:
"No Tests were Ran" [No Tests were Ran](https://i.stack.imgur.com/mMlI0.png)
My directory structure is:
* Workspace\_Folder
+ settings.json
+ repo
- python\_module\_1
* sub\_module
+ tests
- test\_a.py
I am using the unittest framework.
My Settings.json looks like this:
```
{
"python.pythonPath": "/Users/nbonilla/.local/share/virtualenvs/koku-iTLe243o/bin/python",
"python.testing.unittestArgs": [
"-v",
"-s",
"${workspaceFolder}/python_module_1/sub_module/"
],
"python.testing.pytestEnabled": false,
"python.testing.nosetestsEnabled": false,
"python.testing.unittestEnabled": true,
}
```
When I press the green "Play" button [Test Explorer Play Button](https://i.stack.imgur.com/oeJ8U.png)
The Python Test Log Output shows the message "Unhandled exception in thread started by"
[Unhandled Exception in thread started by](https://i.stack.imgur.com/04HUt.png)
I am using a pipenv virtual environment.
How do I run these Django Tests in the VSCode Test Explorer?
I saw that using pyTest is an alternative to unittest, how can this be set up easily as a replacement? | 2020/01/27 | [
"https://Stackoverflow.com/questions/59939819",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12064691/"
] | I've been looking into this as well. The thing is that plain unittest, pytest and nose are not drop-in alternatives to Django tests, because they would not be able to load everything Django tests do.
Django Test Runner might work for you:
<https://marketplace.visualstudio.com/items?itemName=Pachwenko.django-test-runner>
-- I was having trouble with this still since my project root does not directly contain my app(s), but judging by your project structure it may work for you. | Here is a generic way to get Django tests to run with **full** vscode support
1. Configure python tests
   1. Choose unittest
   2. Root Directory
   3. `test*.py`
2. Then each test case will need to look like the following:
```
from django.test import TestCase

class views(TestCase):
    @classmethod
    def setUpClass(cls):
        import django
        django.setup()
        super().setUpClass()  # let TestCase run its own class-level setup too

    def test_something(self):
        from user.model import something
        ...
```
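Note that `django.setup()` needs to know which settings module to use; if it isn't already set in the environment, a common pattern (the project name here is an assumption) is:
```
import os

# Point Django at your settings module before calling django.setup().
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")
```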
Any functions you want to import **have** to be imported inside the test case (like shown). The setUpClass runs before the test class is set up and will set up your django project. Once it's set up, you can import functions inside the test methods. If you try to import models/views at the top of your script, it will raise an exception since django isn't set up. If you have any other preinitialization that needs to run for your django project to work, run it inside `setUpClass` |
59,939,819 | I am trying to run Django unit tests in the VSCode Test Explorer, also, I want the CodeLens 'Run Tests' button to appear above each test.
[enter image description here](https://i.stack.imgur.com/kTTjN.png)
However, in the Test Explorer, When I press the Play button, an error displays:
"No Tests were Ran" [No Tests were Ran](https://i.stack.imgur.com/mMlI0.png)
My directory structure is:
* Workspace\_Folder
+ settings.json
+ repo
- python\_module\_1
* sub\_module
+ tests
- test\_a.py
I am using the unittest framework.
My Settings.json looks like this:
```
{
"python.pythonPath": "/Users/nbonilla/.local/share/virtualenvs/koku-iTLe243o/bin/python",
"python.testing.unittestArgs": [
"-v",
"-s",
"${workspaceFolder}/python_module_1/sub_module/"
],
"python.testing.pytestEnabled": false,
"python.testing.nosetestsEnabled": false,
"python.testing.unittestEnabled": true,
}
```
When I press the green "Play" button [Test Explorer Play Button](https://i.stack.imgur.com/oeJ8U.png)
The Python Test Log Output shows the message "Unhandled exception in thread started by"
[Unhandled Exception in thread started by](https://i.stack.imgur.com/04HUt.png)
I am using a pipenv virtual environment.
How do I run these Django Tests in the VSCode Test Explorer?
I saw that using pyTest is an alternative to unittest, how can this be set up easily as a replacement? | 2020/01/27 | [
"https://Stackoverflow.com/questions/59939819",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12064691/"
] | Please consider the following checks:
1. you should have `__init__.py` in your test directory
2. in vscode on test configuration use pytest framework
3. use: `pip install pytest-django`
4. copy `pytest.ini` in the root with this content:
```
# -- FILE: pytest.ini (or tox.ini)
[pytest]
DJANGO_SETTINGS_MODULE = <your-web-project-name>.settings (like mysite.settings)
# -- recommended but optional:
python_files = tests.py test_*.py *_tests.py
```
Now it should work as you wish.
You can see [this stackoverflow link](https://stackoverflow.com/questions/55837922/vscode-pytest-test-discovery-fails) | Here is a generic way to get Django tests to run with **full** vscode support
1. Configure python tests
   1. Choose unittest
   2. Root Directory
   3. `test*.py`
2. Then each test case will need to look like the following:
```
from django.test import TestCase

class views(TestCase):
    @classmethod
    def setUpClass(cls):
        import django
        django.setup()
        super().setUpClass()  # let TestCase run its own class-level setup too

    def test_something(self):
        from user.model import something
        ...
```
Any functions you want to import **have** to be imported inside the test case (like shown). The setUpClass runs before the test class is set up and will set up your django project. Once it's set up, you can import functions inside the test methods. If you try to import models/views at the top of your script, it will raise an exception since django isn't set up. If you have any other preinitialization that needs to run for your django project to work, run it inside `setUpClass` |
33,551,878 | I'm having a problem reading partitioned parquet files generated by Spark in Hive. I'm able to create the external table in hive but when I try to select a few lines, hive returns only an "OK" message with no rows.
I'm able to read the partitioned parquet files correctly in Spark, so I'm assuming that they were generated correctly.
I'm also able to read these files when I create an external table in hive without partitioning.
Does anyone have a suggestion?
**My Environment is:**
* Cluster EMR 4.1.0
* Hive 1.0.0
* Spark 1.5.0
* Hue 3.7.1
* Parquet files are stored in an S3 bucket (s3://staging-dev/test/ttfourfieldspart2/year=2013/month=11)
**My Spark config file has the following parameters(/etc/spark/conf.dist/spark-defaults.conf):**
```
spark.master yarn
spark.driver.extraClassPath /etc/hadoop/conf:/etc/hive/conf:/usr/lib/hadoop/*:/usr/lib/hadoop-hdfs/*:/usr/lib/hadoop-mapreduce/*:/usr/lib/hadoop-yarn/*:/usr/lib/hadoop-lzo/lib/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*
spark.driver.extraLibraryPath /usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native
spark.executor.extraClassPath /etc/hadoop/conf:/etc/hive/conf:/usr/lib/hadoop/*:/usr/lib/hadoop-hdfs/*:/usr/lib/hadoop-mapreduce/*:/usr/lib/hadoop-yarn/*:/usr/lib/hadoop-lzo/lib/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*
spark.executor.extraLibraryPath /usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native
spark.eventLog.enabled true
spark.eventLog.dir hdfs:///var/log/spark/apps
spark.history.fs.logDirectory hdfs:///var/log/spark/apps
spark.yarn.historyServer.address ip-10-37-161-246.ec2.internal:18080
spark.history.ui.port 18080
spark.shuffle.service.enabled true
spark.driver.extraJavaOptions -Dlog4j.configuration=file:///etc/spark/conf/log4j.properties -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=512M -XX:OnOutOfMemoryError='kill -9 %p'
spark.executor.extraJavaOptions -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:OnOutOfMemoryError='kill -9 %p'
spark.executor.memory 4G
spark.driver.memory 4G
spark.dynamicAllocation.enabled true
spark.dynamicAllocation.maxExecutors 100
spark.dynamicAllocation.minExecutors 1
```
**Hive config file has the following parameters(/etc/hive/conf/hive-site.xml):**
```
<configuration>
<!-- Hive Configuration can either be stored in this file or in the hadoop configuration files -->
<!-- that are implied by Hadoop setup variables. -->
<!-- Aside from Hadoop setup variables - this file is provided as a convenience so that Hive -->
<!-- users do not have to edit hadoop configuration files (that may be managed as a centralized -->
<!-- resource). -->
<!-- Hive Execution Parameters -->
<property>
<name>hbase.zookeeper.quorum</name>
<value>ip-10-xx-xxx-xxx.ec2.internal</value>
<description>http://wiki.apache.org/hadoop/Hive/HBaseIntegration</description>
</property>
<property>
<name>hive.execution.engine</name>
<value>mr</value>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://ip-10-xx-xxx-xxx.ec2.internal:8020</value>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://ip-10-xx-xxx-xxx.ec2.internal:9083</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://ip-10-xx-xxx-xxx.ec2.internal:3306/hive?createDatabaseIfNotExist=true</value>
<description>username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>org.mariadb.jdbc.Driver</value>
<description>username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
<description>username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>1R72JFCDG5XaaDTB</value>
<description>password to use against metastore database</description>
</property>
<property>
<name>datanucleus.fixedDatastore</name>
<value>true</value>
</property>
<property>
<name>mapred.reduce.tasks</name>
<value>-1</value>
</property>
<property>
<name>mapred.max.split.size</name>
<value>256000000</value>
</property>
<property>
<name>hive.metastore.connect.retries</name>
<value>5</value>
</property>
<property>
<name>hive.optimize.sort.dynamic.partition</name>
<value>true</value>
</property>
<property><name>hive.exec.dynamic.partition</name><value>true</value></property>
<property><name>hive.exec.dynamic.partition.mode</name><value>nonstrict</value></property>
<property><name>hive.exec.max.dynamic.partitions</name><value>10000</value></property>
<property><name>hive.exec.max.dynamic.partitions.pernode</name><value>500</value></property>
</configuration>
```
**My python code that reads the partitioned parquet file:**
```
from pyspark import *
from pyspark.sql import *
from pyspark.sql.types import *
from pyspark.sql.functions import *
df7 = sqlContext.read.parquet('s3://staging-dev/test/ttfourfieldspart2/')
```
**The parquet file schema printed by Spark:**
```
>>> df7.schema
StructType(List(StructField(transactionid,StringType,true),StructField(eventts,TimestampType,true),StructField(year,IntegerType,true),StructField(month,IntegerType,true)))
>>> df7.printSchema()
root
|-- transactionid: string (nullable = true)
|-- eventts: timestamp (nullable = true)
|-- year: integer (nullable = true)
|-- month: integer (nullable = true)
>>> df7.show(10)
+--------------------+--------------------+----+-----+
| transactionid| eventts|year|month|
+--------------------+--------------------+----+-----+
|f7018907-ed3d-49b...|2013-11-21 18:41:...|2013| 11|
|f6d95a5f-d4ba-489...|2013-11-21 18:41:...|2013| 11|
|02b2a715-6e15-4bb...|2013-11-21 18:41:...|2013| 11|
|0e908c0f-7d63-48c...|2013-11-21 18:41:...|2013| 11|
|f83e30f9-950a-4b9...|2013-11-21 18:41:...|2013| 11|
|3425e4ea-b715-476...|2013-11-21 18:41:...|2013| 11|
|a20a6aeb-da4f-4fd...|2013-11-21 18:41:...|2013| 11|
|d2f57e6f-889b-49b...|2013-11-21 18:41:...|2013| 11|
|46f2eda5-408e-44e...|2013-11-21 18:41:...|2013| 11|
|36fb8b79-b2b5-493...|2013-11-21 18:41:...|2013| 11|
+--------------------+--------------------+----+-----+
only showing top 10 rows
```
**The create table in Hive:**
```
create external table if not exists t3(
transactionid string,
eventts timestamp)
partitioned by (year int, month int)
stored as parquet
location 's3://staging-dev/test/ttfourfieldspart2/';
```
**When I try to select some rows in Hive, it doesn't return any rows:**
```
hive> select * from t3 limit 10;
OK
Time taken: 0.027 seconds
hive>
``` | 2015/11/05 | [
"https://Stackoverflow.com/questions/33551878",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5529573/"
] | I finally found the problem. When you create tables in Hive where partitioned data already exists in S3 or HDFS, you need to run a command to update the Hive metastore with the table's partition structure. Take a look here:
<https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-RecoverPartitions(MSCKREPAIRTABLE)>
```
-- Update the metastore with the partitions that already exist on storage:
MSCK REPAIR TABLE table_name;
-- On Hive running in Amazon EMR you can use:
ALTER TABLE table_name RECOVER PARTITIONS;
``` | Even though this question was answered already, the following points may also help users who are still unable to solve the issue with `MSCK REPAIR TABLE table_name;` alone.
I have an HDFS file system which is partitioned as below:
`<parquet_file>/<partition1>/<partition2>`
eg: `my_file.pq/column_5=test/column_6=5`
I created a hive table with partitions
eg:
```sql
CREATE EXTERNAL TABLE myschema.my_table(
`column_1` int,
`column_2` string,
`column_3` string,
`column_4` string
)
PARTITIONED BY (`column_5` string, `column_6` int) STORED AS PARQUET
LOCATION
'hdfs://u/users/iamr/my_file.pq'
```
After this, I repaired the schema partitions using the following command
`MSCK REPAIR TABLE myschema.my_table;`
After this it started working for me.
Another thing I noticed was that **while writing Parquet files from Spark, you should name the columns in lower case**; otherwise Hive may not be able to map them. For me, after renaming the columns in the Parquet file, it started working.
For example, `my_file.pq/COLUMN_5=test/COLUMN_6=5` didn't work for me,
but `my_file.pq/column_5=test/column_6=5` worked. A minimal sketch of the rename step is shown below.
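As a hedged sketch (the DataFrame variable and path are illustrations taken from this answer, not from the original post), the lower-casing can be done in PySpark before writing:
```
# Lower-case every column name so Hive can map the partition columns.
df = df.toDF(*[c.lower() for c in df.columns])

# Write partitioned Parquet; partition columns become directory names.
df.write.partitionBy("column_5", "column_6").parquet("hdfs://u/users/iamr/my_file.pq")
``` |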
12,177,405 | Dear python 3 experts,
With Python 2, one could do the following (I know this is a bit hairy, but that's not the point here :p):
```
class A(object):
def method(self, other):
print self, other
class B(object): pass
B.method = types.MethodType(A().method, None, B)
B.method() # print both A and B instances
```
With Python 3, there are no more unbound methods, only functions. If I want the same behaviour, it sounds like I have to introduce a custom descriptor such as:
```
class UnboundMethod:
"""unbound method wrapper necessary for python3 where we can't turn
arbitrary object into a method (no more unbound method and only function
are turned automatically to method when accessed through an instance)
"""
def __init__(self, callable):
self.callable = callable
def __get__(self, instance, objtype):
if instance is None:
return self.callable
return types.MethodType(self.callable, instance)
```
so I can do :
```
B.method = UnboundMethod(A().method)
B.method() # print both A and B instances
```
Is there any other way to do that without writing such a descriptor?
TIA | 2012/08/29 | [
"https://Stackoverflow.com/questions/12177405",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/853679/"
] | ```
B.method = lambda o: A.method(o,A())
b = B()
b.method()
```
the line `b.method()` then calls `A.method(b,A())`. This means an A is initialized each time. To avoid this:
```
a = A()
B.method = lambda o: A.method(o,a)
```
now every time you call b.method() on any instance of B the same instance of A is passed as the second argument. | Well, your code doesn't work in Python 2 either, but I get what you are trying to do. And you can use lambda, as in Sheena's answer, or functools.partial.
```
>>> import types
>>> from functools import partial
>>> class A(object):
... def method(self, other):
... print self, other
...
>>> class B(object): pass
...
>>> B.method = partial(A().method, A())
>>> B().method()
<__main__.A object at 0x112f590> <__main__.A object at 0x1132190>
```
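Note that the session above is Python 2 (`print self, other`); as a hedged sketch, here is the same idea in Python 3 (class names taken from the question):
```
from functools import partial

class A:
    def method(self, other):
        print(self, other)

class B:
    pass

# partial objects are not descriptors, so B().method() returns the partial
# itself, and calling it invokes the bound A method with the fixed argument,
# printing two A instances just like the transcript above.
B.method = partial(A().method, A())
B().method()
``` |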
46,395,273 | First post here at Stack Overflow. Please forgive my posting errors.
I have spent a lot of time on this. I started with the 500 server error.
This log states that Python was not found. My app is JS, CSS, and HTML only (at this point). I have included the yaml because, despite my research, I can't rule out errors there myself.
Pointers are greatly appreciated.
Thanks.
My `app.yaml`:
```
application: application
version: secureable
runtime: python27
api_version: 1
threadsafe: false
handlers:
- url: /(.*\.(gif|png|jpg|ico|js|css))
static_files: \1
upload: (.*\.(gif|png|jpg|ico|js|css))
- url: /robots.txt
static_files: robots.txt
upload: robots.txt
- url: .*
script: main.py
inbound_services:
- mail
```
The error:
```
httpRequest: {
status: 500
0: {
logMessage: "File referenced by handler not found: main.py"
severity: "WARNING"
time: "2017-09-24T21:12:30.191830Z"
}
]
megaCycles: "2"
method: "GET"
requestId: resource: "/index.html"
startTime: "2017-09-24T21:12:30.138333Z"
status: 500
traceId: "618d060203d57aea2bfddc905e350698"
urlMapEntry: "main.py"
userAgent: "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:55.0) Gecko/20100101 Firefox/55.0"
versionId: "secureable"
}
receiveTimestamp: "2017-09-24T21:12:30.926277443Z"
resource: {
labels: {
module_id: "default"
project_id: "Application"
version_id: "secureable"
zone: "us9"
}
type: "gae_app"
}
severity: "WARNING"
timestamp: "2017-09-24T21:12:30.138333Z"
}
``` | 2017/09/24 | [
"https://Stackoverflow.com/questions/46395273",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4907940/"
] | If your app is only HTML, CSS, and JS, you can remove the catch-all pointer to the Python script altogether and instead use an `app.yaml` format like the one shown in the [Hosting a Static Website on App Engine tutorial](https://cloud.google.com/appengine/docs/standard/python/getting-started/hosting-a-static-website#creating_the_appyaml_file):
```
runtime: python27
api_version: 1
threadsafe: true
handlers:
- url: /
static_files: www/index.html
upload: www/index.html
- url: /(.*)
static_files: www/\1
upload: www/(.*)
```
Later if you want to add server-side logic with a Python module, you can add in a handler with a `script` associated with it. When you take that step, you use an import style pointer in the form of `[script_name].[var_pointing_to_wsgi_application_in_script]`. So if you have `main.py` and within that a variable called `application` that is set to your WSGI application, then you would use `script: main.application`.
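For instance, a minimal hedged sketch of such a `main.py` (the handler and route are hypothetical; webapp2 ships with the Python 2.7 runtime):
```
import webapp2

class MainPage(webapp2.RequestHandler):
    def get(self):
        self.response.write('Hello from the server side!')

# `script: main.application` in app.yaml would point at this variable.
application = webapp2.WSGIApplication([('/', MainPage)], debug=True)
```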
Commonly a WSGI application is either webapp2 ([example](https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/appengine/standard/hello_world/main.py#L24)) or Flask ([example](https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/appengine/standard/flask/hello_world/main.py#L21)). | Your `script: main.py` statement in the `handlers` section of the `app.yaml` file is wrong; it should be `script: main.app`.
From the `script` row in the [Handlers element](https://cloud.google.com/appengine/docs/standard/python/config/appref#handlers_element) table (sadly not properly formatted, including the quote from the page source to make it readable):
>
> **script**
>
>
> A `script:` directive must be a python import path, for example,
> `package.module.app` that points to a WSGI application. The last
> component of a `script:` directive using a **Python module** path is
> the name of a global variable in the module: that variable must be a
> WSGI app, and is usually called `app` by convention.
>
>
> |
61,206,895 | The Python script executes fine when run manually in the terminal:
```
sudo python3 /home/pi/Documents/AlarmClock/alarm.py
```
but it does not run automatically via crontab. Here is the cron job (`crontab -e`) in the /tmp/crontab.iGf7md/crontab file:
```
32 13 2 * * sudo python3 /home/pi/Documents/AlarmClock/alarm.py
```
There is no print command in the alarm.py script. The script only lights up an LED strip connected to the GPIO pin, which works fine.
Does anyone know my mistake? | 2020/04/14 | [
"https://Stackoverflow.com/questions/61206895",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | You can use `array_keys` with search value [PHP Doc](https://www.php.net/manual/en/function.array-keys.php)
[Demo](https://3v4l.org/kfTZH)
```
array_keys($arr,3)
```
---
>
> `array_keys()` returns the keys, numeric and string, from the array.
>
>
> If a search\_value is specified, then only the keys for that value are
> returned. Otherwise, all the keys from the array are returned.
>
>
> | With this solution you can create complex filters. In this case we keep every element equal to the number three (`===` operator); `array_filter` preserves the matching elements' keys, and `array_keys` then extracts them.
```
$a = [1, 2, 3, 4, 3, 3, 5, 6];
// Keep the elements equal to 3 (array_filter preserves their keys),
// then take those keys.
$threes = array_keys(array_filter($a, function ($v) {
    return $v === 3;
}));
```
`$threes` is an array containing all keys whose value is 3.
>
> array(3) { 2, 4, 5 }
>
>
> |
61,206,895 | The Python script executes fine when run manually in the terminal:
```
sudo python3 /home/pi/Documents/AlarmClock/alarm.py
```
but it does not run automatically via crontab. Here is the cron job (`crontab -e`) in the /tmp/crontab.iGf7md/crontab file:
```
32 13 2 * * sudo python3 /home/pi/Documents/AlarmClock/alarm.py
```
There is no print command in the alarm.py script. The script only lights up an LED strip connected to the GPIO pin, which works fine.
Does anyone know my mistake? | 2020/04/14 | [
"https://Stackoverflow.com/questions/61206895",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | You can use `array_keys` with a search value [PHP Doc](https://www.php.net/manual/en/function.array-keys.php)
[Demo](https://3v4l.org/kfTZH)
```
array_keys($arr,3)
```
---
>
> `array_keys()` returns the keys, numeric and string, from the array.
>
>
> If a search\_value is specified, then only the keys for that value are
> returned. Otherwise, all the keys from the array are returned.
>
>
> | You can use array\_keys:
```
$result = [];
foreach (array_keys($arr) as $key) {
    if ($arr[$key] == 3) {
        $result[] = $key;
    }
}
``` |
43,967,051 | What is an alternative to Firebase for user management/auth for Python apps? I know I can use Node.js with Firebase, but I would rather authenticate users through a managed third-party API in Python using HTTPS requests, if possible. Appery.io has this feature, but I do not need everything that comes with Appery.io. | 2017/05/14 | [
"https://Stackoverflow.com/questions/43967051",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7317396/"
] | Check out [Amazon Cognito](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0ahUKEwjphrjN7-PXAhUEhuAKHSABA14QFggnMAA&url=https%3A%2F%2Faws.amazon.com%2Fcognito%2F&usg=AOvVaw0IxXy-fQjM_msyj67tH2wG). They offer quite a nice package for small projects. [Backendless](http://backendless.com) is also a fantastic service, providing authentication and a database with very helpful documentation, plus SDKs for different platforms including iOS, Android, JavaScript, REST API, Angular, React and React Native. I have been using Backendless for a couple of months and I highly recommend you use it, too. | You could try using [Auth0](https://auth0.com/) for pure authentication management. The Auth0 Python package can be found [here](https://github.com/auth0/auth0-python).
43,967,051 | What is an alternative to Firebase for user management/auth for Python apps? I know I can use Node.js with Firebase, but I would rather authenticate users through a managed third-party API in Python using HTTPS requests, if possible. Appery.io has this feature, but I do not need everything that comes with Appery.io. | 2017/05/14 | [
"https://Stackoverflow.com/questions/43967051",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7317396/"
] | Check out [Amazon Cognito](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0ahUKEwjphrjN7-PXAhUEhuAKHSABA14QFggnMAA&url=https%3A%2F%2Faws.amazon.com%2Fcognito%2F&usg=AOvVaw0IxXy-fQjM_msyj67tH2wG). They offer quite a nice package for small projects. [Backendless](http://backendless.com) is also a fantastic service, providing authentication and a database with very helpful documentation, plus SDKs for different platforms including iOS, Android, JavaScript, REST API, Angular, React and React Native. I have been using Backendless for a couple of months and I highly recommend you use it, too. | If you're looking for a self-hosted solution, [Keycloak](https://www.keycloak.org/) is a pretty robust option. If you want a service, [Auth0](https://auth0.com/) and [Okta](https://okta.com/) have quite a lot of features. They also offer a free tier with reasonable limits.
43,967,051 | What is an alternative to Firebase for user management/auth for Python apps? I know I can use Node.js with Firebase, but I would rather authenticate users through a managed third-party API in Python using HTTPS requests, if possible. Appery.io has this feature, but I do not need everything that comes with Appery.io. | 2017/05/14 | [
"https://Stackoverflow.com/questions/43967051",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7317396/"
] | If you're looking for a self-hosted solution, [Keycloak](https://www.keycloak.org/) is a pretty robust option. If you want a service, [Auth0](https://auth0.com/) and [Okta](https://okta.com/) have quite a lot of features. They also offer a free tier with reasonable limits. | You could try using [Auth0](https://auth0.com/) for pure authentication management. The Auth0 python package can be found [here](https://github.com/auth0/auth0-python). |
16,973,236 | I recently installed Emacs 24.3 and am trying to use it for coding in Python (v3.3.2 x86-64 MSI installer). (I'm new to Emacs.) I then tried to install emacs-for-python by unpacking the zip to
```
"C:\Users\mmsc\AppData\Roaming\.emacs.d\emacs-for-python"
```
folder and add
```
: (load-file "~/.emacs.d/emacs-for-python/epy-init.el")
```
into `C:\Users\mmsc\AppData\Roaming\.emacs`.
After I launch Emacs, I see the error:
>
> Warning (initialization): An error occurred while loading
> `c:/Users/Klein/AppData/Roaming/.emacs':
>
>
> error: Pymacs helper did not start within 30 seconds
>
>
> To ensure normal operation, you should investigate and remove the
> cause of the error in your initialization file. Start Emacs with the
> `--debug-init' option to view a complete error backtrace.
>
>
>
With "--debug-init", I saw the information below, but I have little knowledge of Emacs/Lisp, so I can't locate the problem easily.
```
Debugger entered--Lisp error: (error "Pymacs helper did not start within 30 seconds")
signal(error ("Pymacs helper did not start within 30 seconds"))
pymacs-report-error("Pymacs helper did not start within %d seconds" 30)
(if (accept-process-output process pymacs-timeout-at-start) nil (pymacs-report-error "Pymacs helper did not start within %d seconds" pymacs-timeout-at-start))
(while (progn (goto-char (point-min)) (not (re-search-forward "<\\([0-9]+\\) " nil t))) (if (accept-process-output process pymacs-timeout-at-start) nil (pymacs-report-error "Pymacs helper did not start within %d seconds" pymacs-timeout-at-start)))
(let ((process (apply (quote start-process) "pymacs" buffer (let ((python (getenv "PYMACS_PYTHON"))) (if (or (null python) (equal python "")) pymacs-python-command python)) "-c" (concat "import sys;" " from Pymacs import main;" " main(*sys.argv[1:])") (append (and (>= emacs-major-version 24) (quote ("-f"))) (mapcar (quote expand-file-name) pymacs-load-path))))) (pymacs-kill-without-query process) (while (progn (goto-char (point-min)) (not (re-search-forward "<\\([0-9]+\\) " nil t))) (if (accept-process-output process pymacs-timeout-at-start) nil (pymacs-report-error "Pymacs helper did not start within %d seconds" pymacs-timeout-at-start))) (let ((marker (process-mark process)) (limit-position (+ (match-end 0) (string-to-number (match-string 1))))) (while (< (marker-position marker) limit-position) (if (accept-process-output process pymacs-timeout-at-start) nil (pymacs-report-error "Pymacs helper probably was interrupted at start")))))
(progn (let ((process (apply (quote start-process) "pymacs" buffer (let ((python ...)) (if (or ... ...) pymacs-python-command python)) "-c" (concat "import sys;" " from Pymacs import main;" " main(*sys.argv[1:])") (append (and (>= emacs-major-version 24) (quote ...)) (mapcar (quote expand-file-name) pymacs-load-path))))) (pymacs-kill-without-query process) (while (progn (goto-char (point-min)) (not (re-search-forward "<\\([0-9]+\\) " nil t))) (if (accept-process-output process pymacs-timeout-at-start) nil (pymacs-report-error "Pymacs helper did not start within %d seconds" pymacs-timeout-at-start))) (let ((marker (process-mark process)) (limit-position (+ (match-end 0) (string-to-number (match-string 1))))) (while (< (marker-position marker) limit-position) (if (accept-process-output process pymacs-timeout-at-start) nil (pymacs-report-error "Pymacs helper probably was interrupted at start"))))) (goto-char (match-end 0)) (let ((reply (read (current-buffer)))) (if (and (pymacs-proper-list-p reply) (= (length reply) 2) (eq (car reply) (quote version))) (if (string-equal (cadr reply) "0.25") nil (pymacs-report-error "Pymacs Lisp version is 0.25, Python is %s" (cadr reply))) (pymacs-report-error "Pymacs got an invalid initial reply"))))
(unwind-protect (progn (let ((process (apply (quote start-process) "pymacs" buffer (let (...) (if ... pymacs-python-command python)) "-c" (concat "import sys;" " from Pymacs import main;" " main(*sys.argv[1:])") (append (and ... ...) (mapcar ... pymacs-load-path))))) (pymacs-kill-without-query process) (while (progn (goto-char (point-min)) (not (re-search-forward "<\\([0-9]+\\) " nil t))) (if (accept-process-output process pymacs-timeout-at-start) nil (pymacs-report-error "Pymacs helper did not start within %d seconds" pymacs-timeout-at-start))) (let ((marker (process-mark process)) (limit-position (+ (match-end 0) (string-to-number ...)))) (while (< (marker-position marker) limit-position) (if (accept-process-output process pymacs-timeout-at-start) nil (pymacs-report-error "Pymacs helper probably was interrupted at start"))))) (goto-char (match-end 0)) (let ((reply (read (current-buffer)))) (if (and (pymacs-proper-list-p reply) (= (length reply) 2) (eq (car reply) (quote version))) (if (string-equal (cadr reply) "0.25") nil (pymacs-report-error "Pymacs Lisp version is 0.25, Python is %s" (cadr reply))) (pymacs-report-error "Pymacs got an invalid initial reply")))) (set-match-data save-match-data-internal (quote evaporate)))
(let ((save-match-data-internal (match-data))) (unwind-protect (progn (let ((process (apply (quote start-process) "pymacs" buffer (let ... ...) "-c" (concat "import sys;" " from Pymacs import main;" " main(*sys.argv[1:])") (append ... ...)))) (pymacs-kill-without-query process) (while (progn (goto-char (point-min)) (not (re-search-forward "<\\([0-9]+\\) " nil t))) (if (accept-process-output process pymacs-timeout-at-start) nil (pymacs-report-error "Pymacs helper did not start within %d seconds" pymacs-timeout-at-start))) (let ((marker (process-mark process)) (limit-position (+ ... ...))) (while (< (marker-position marker) limit-position) (if (accept-process-output process pymacs-timeout-at-start) nil (pymacs-report-error "Pymacs helper probably was interrupted at start"))))) (goto-char (match-end 0)) (let ((reply (read (current-buffer)))) (if (and (pymacs-proper-list-p reply) (= (length reply) 2) (eq (car reply) (quote version))) (if (string-equal (cadr reply) "0.25") nil (pymacs-report-error "Pymacs Lisp version is 0.25, Python is %s" (cadr reply))) (pymacs-report-error "Pymacs got an invalid initial reply")))) (set-match-data save-match-data-internal (quote evaporate))))
(save-current-buffer (set-buffer buffer) (erase-buffer) (buffer-disable-undo) (pymacs-set-buffer-multibyte nil) (set-buffer-file-coding-system (quote raw-text)) (let ((save-match-data-internal (match-data))) (unwind-protect (progn (let ((process (apply ... "pymacs" buffer ... "-c" ... ...))) (pymacs-kill-without-query process) (while (progn (goto-char ...) (not ...)) (if (accept-process-output process pymacs-timeout-at-start) nil (pymacs-report-error "Pymacs helper did not start within %d seconds" pymacs-timeout-at-start))) (let ((marker ...) (limit-position ...)) (while (< ... limit-position) (if ... nil ...)))) (goto-char (match-end 0)) (let ((reply (read ...))) (if (and (pymacs-proper-list-p reply) (= ... 2) (eq ... ...)) (if (string-equal ... "0.25") nil (pymacs-report-error "Pymacs Lisp version is 0.25, Python is %s" ...)) (pymacs-report-error "Pymacs got an invalid initial reply")))) (set-match-data save-match-data-internal (quote evaporate)))))
(let ((buffer (get-buffer-create "*Pymacs*"))) (save-current-buffer (set-buffer buffer) (erase-buffer) (buffer-disable-undo) (pymacs-set-buffer-multibyte nil) (set-buffer-file-coding-system (quote raw-text)) (let ((save-match-data-internal (match-data))) (unwind-protect (progn (let ((process ...)) (pymacs-kill-without-query process) (while (progn ... ...) (if ... nil ...)) (let (... ...) (while ... ...))) (goto-char (match-end 0)) (let ((reply ...)) (if (and ... ... ...) (if ... nil ...) (pymacs-report-error "Pymacs got an invalid initial reply")))) (set-match-data save-match-data-internal (quote evaporate))))) (if (not pymacs-use-hash-tables) (setq pymacs-weak-hash t) (if pymacs-used-ids (progn (let ((pymacs-transit-buffer buffer) (pymacs-forget-mutability t) (pymacs-gc-inhibit t)) (pymacs-call "zombie_python" pymacs-used-ids)) (setq pymacs-used-ids nil))) (setq pymacs-weak-hash (make-hash-table :weakness (quote value))) (if (boundp (quote post-gc-hook)) (add-hook (quote post-gc-hook) (quote pymacs-schedule-gc)) (setq pymacs-gc-timer (run-at-time 20 20 (quote pymacs-schedule-gc))))) (setq pymacs-transit-buffer buffer) (let ((modules pymacs-load-history)) (setq pymacs-load-history nil) (if (and modules (yes-or-no-p "Reload modules in previous session? ")) (progn (mapc (function (lambda (args) (condition-case err ... ...))) modules)))))
pymacs-start-services()
(if (and pymacs-transit-buffer (buffer-name pymacs-transit-buffer) (get-buffer-process pymacs-transit-buffer)) nil (if pymacs-weak-hash (progn (if (or (eq pymacs-auto-restart t) (and (eq pymacs-auto-restart (quote ask)) (yes-or-no-p "The Pymacs helper died. Restart it? "))) nil (pymacs-report-error "There is no Pymacs helper!")))) (pymacs-start-services))
pymacs-serve-until-reply("eval" (pymacs-print-for-apply (quote "pymacs_load_helper") (quote ("ropemacs" "rope-" nil))))
pymacs-call("pymacs_load_helper" "ropemacs" "rope-" nil)
(let ((lisp-code (pymacs-call "pymacs_load_helper" module prefix noerror))) (cond (lisp-code (let ((result (eval lisp-code))) (add-to-list (quote pymacs-load-history) (list module prefix noerror) (quote append)) (message "Pymacs loading %s...done" module) (run-hook-with-args (quote pymacs-after-load-functions) module) result)) (noerror (message "Pymacs loading %s...failed" module) nil)))
pymacs-load("ropemacs" "rope-")
setup-ropemacs()
(progn (setup-ropemacs) (autoload (quote virtualenv-activate) "virtualenv" "Activate a Virtual Environment specified by PATH" t) (autoload (quote virtualenv-workon) "virtualenv" "Activate a Virtual Environment present using virtualenvwrapper" t) (add-hook (quote python-mode-hook) (lambda nil (if (buffer-file-name) (flymake-mode)))) (defun workon-postactivate (virtualenv) (require (quote virtualenv)) (virtualenv-activate virtualenv) (desktop-change-dir virtualenv)))
(lambda nil (progn (setup-ropemacs) (autoload (quote virtualenv-activate) "virtualenv" "Activate a Virtual Environment specified by PATH" t) (autoload (quote virtualenv-workon) "virtualenv" "Activate a Virtual Environment present using virtualenvwrapper" t) (add-hook (quote python-mode-hook) (lambda nil (if (buffer-file-name) (flymake-mode)))) (defun workon-postactivate (virtualenv) (require (quote virtualenv)) (virtualenv-activate virtualenv) (desktop-change-dir virtualenv))))()
funcall((lambda nil (progn (setup-ropemacs) (autoload (quote virtualenv-activate) "virtualenv" "Activate a Virtual Environment specified by PATH" t) (autoload (quote virtualenv-workon) "virtualenv" "Activate a Virtual Environment present using virtualenvwrapper" t) (add-hook (quote python-mode-hook) (lambda nil (if (buffer-file-name) (flymake-mode)))) (defun workon-postactivate (virtualenv) (require (quote virtualenv)) (virtualenv-activate virtualenv) (desktop-change-dir virtualenv)))))
eval((funcall (quote (lambda nil (progn (setup-ropemacs) (autoload (quote virtualenv-activate) "virtualenv" "Activate a Virtual Environment specified by PATH" t) (autoload (quote virtualenv-workon) "virtualenv" "Activate a Virtual Environment present using virtualenvwrapper" t) (add-hook (quote python-mode-hook) (lambda nil (if (buffer-file-name) (flymake-mode)))) (defun workon-postactivate (virtualenv) (require (quote virtualenv)) (virtualenv-activate virtualenv) (desktop-change-dir virtualenv)))))))
eval-after-load(python (progn (setup-ropemacs) (autoload (quote virtualenv-activate) "virtualenv" "Activate a Virtual Environment specified by PATH" t) (autoload (quote virtualenv-workon) "virtualenv" "Activate a Virtual Environment present using virtualenvwrapper" t) (add-hook (quote python-mode-hook) (lambda nil (if (buffer-file-name) (flymake-mode)))) (defun workon-postactivate (virtualenv) (require (quote virtualenv)) (virtualenv-activate virtualenv) (desktop-change-dir virtualenv))))
eval-buffer(#<buffer *load*-819053> nil "c:/Users/mmsc/AppData/Roaming/.emacs.d/emacs-for-python/epy-python.el" nil t) ; Reading at buffer position 4662
load-with-code-conversion("c:/Users/mmsc/AppData/Roaming/.emacs.d/emacs-for-python/epy-python.el" "c:/Users/mmsc/AppData/Roaming/.emacs.d/emacs-for-python/epy-python.el" nil t)
require(epy-python)
eval-buffer(#<buffer *load*-283406> nil "c:/Users/mmsc/AppData/Roaming/.emacs.d/emacs-for-python/epy-init.el" nil t) ; Reading at buffer position 476
load-with-code-conversion("c:/Users/mmsc/AppData/Roaming/.emacs.d/emacs-for-python/epy-init.el" "c:/Users/mmsc/AppData/Roaming/.emacs.d/emacs-for-python/epy-init.el" nil nil)
load("c:/Users/mmsc/AppData/Roaming/.emacs.d/emacs-for-python/epy-init.el" nil nil t)
load-file("C:\\Users\\mmsc\\AppData\\Roaming\\.emacs.d\\emacs-for-python\\epy-init.el")
eval-buffer(#<buffer *load*> nil "c:/Users/mmsc/AppData/Roaming/.emacs" nil t) ; Reading at buffer position 656
load-with-code-conversion("c:/Users/mmsc/AppData/Roaming/.emacs" "c:/Users/mmsc/AppData/Roaming/.emacs" t t)
load("~/.emacs" t t)
```
I have tried searching for help on the Internet, but most results are for Linux/Unix environments. Is there anyone using Emacs with Python under Windows who knows what this means and how I can fix it?
Thanks! | 2013/06/06 | [
"https://Stackoverflow.com/questions/16973236",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2118555/"
] | This was a little too much for a comment:
```
(let ((process
(apply 'start-process "pymacs" buffer
(let ((python (getenv "PYMACS_PYTHON")))
(if (or (null python) (equal python ""))
pymacs-python-command
python))
"-c" (concat "import sys;"
" from Pymacs import main;"
" main(*sys.argv[1:])")
(append
(and (>= emacs-major-version 24) '("-f"))
(mapcar 'expand-file-name pymacs-load-path)))))
```
This is the bit of Pymacs code which starts the process in the `*Pymacs*` buffer. You could infer from this that Pymacs will first look for the environment variable `$PYMACS_PYTHON`, and if that doesn't exist or its value is an empty string, it will try `pymacs-python-command`, which by default is `"python"`. So it will make this call:
```
$ python -c 'import sys; from Pymacs import main; main(*sys.argv[1:])'
```
There's a problem with `-f`: I don't know which version of Python accepts this argument, but the one that I have doesn't. The intention of this code is quite clear: it probably has to load the files on `pymacs-load-path`, but for me the value of this variable is `nil`, so I don't think this code ever runs. Anyway, this argument doesn't seem to do any harm; for me it launches just the same with or without it.
So, if you try running the above command in a console and get something like:
```
(version "0.25")
```
then this code works fine; otherwise you'd get some error that would help you identify the problem. Remember that it may not be just `python`: it is either `$PYMACS_PYTHON` or `pymacs-python-command`. | I had the same symptoms, but my problem turned out to be an old pymacs.el and a new Pymacs. Evidently Pymacs changed the module interface and I had to hunt down the stray pymacs.el, which had been installed by apt-get in an odd location. You have to make sure the byte-code file is gone too.
55,784,213 | Noob here, trying to create a simple form and validate its inputs. However, I don't know how to properly select each input in JS, so nothing is happening. I am just learning HTML, Bootstrap and JavaScript, so simpler (pythonic) answers are preferred to more complex ones.
I've read the documentation, and a number of other stackoverflow posts on this exact topic, which would have likely answered my question, were I not a Noob.
```
<div class="form-group">
<label for="first_name">First Name</label>
<input autocomplete="off" autofocus="" class="form-control" name="first_name" placeholder="First Name" type="text">
<small id="first_name_Help" class="form-text text-muted">* First Name is Mandatory.</small>
</div>
<div class="form-group">
<label for="last_name">Last Name</label>
<input autocomplete="off" autofocus="" class="form-control" name="last_name" placeholder="Last Name" type="text">
<small id="last_name_Help" class="form-text text-muted">* Last Name is Mandatory.</small>
</div>
<p>Select Your Country of Residence Below</p>
<div class="form-group">
<select name="country">
<option disabled selected value="">Country</option>
<option value="Canada">Canada</option>
<option value="USA">USA</option>
<option value="Mexico">Mexico</option>
<option value="None of the Above">None of the Above</option>
</select>
</div>
<script>
document.querySelector('form').onsubmit = function() {
if (!document.querySelector('input.first_name').value) {
alert('You must provide your name!');
return false;
  }
};
``` | 2019/04/21 | [
"https://Stackoverflow.com/questions/55784213",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8519006/"
] | The reason for the partial match is that the engine doesn't know exactly where it should start from regarding your requirements. You tell the engine by including `\d` in the character class:
```
(?<![[:space:][:punct:]\d])\d+
^^
``` | [This RegEx](https://regex101.com/r/ruSstp/1/) might help you to divide your string input into two groups, where the second group (`$2`) is the target number and group one (`$1`) is the non-digit behind it:
```
([A-Za-z_+-]+)([0-9]+)
```
[](https://i.stack.imgur.com/ubaKl.png)
It might be safe to do so, if you might want to use it for text-processing. |
58,211,638 | I want to connect to the Twitch server, but Godot adds binary characters in front of my data, as you can see in the pictures. This happens every time, no matter the data type. Why is this happening and how can I prevent it?
[](https://i.stack.imgur.com/14N2l.png)
[code](https://paste.ubuntu.com/p/5h6h5vXfPx/) | 2019/10/03 | [
"https://Stackoverflow.com/questions/58211638",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10558295/"
] | You can use shapes as well with your background modifier instead of using a Color.
Change
```
}.overlay(
RoundedRectangle(cornerRadius: 40)
.stroke(Color.green, lineWidth: 1)
).background(Color.gray)
```
to
```
}.overlay(
RoundedRectangle(cornerRadius: 40)
.stroke(Color.green, lineWidth: 1)
).background(RoundedRectangle(cornerRadius: 40).fill(Color.pink))
```
and it will work.
Of course, the pink color is only there to make the area more visible. | What you need is one more modifier to cut off anything outside the thin green outline; add this after `.background`:
```
.clipShape(RoundedRectangle(cornerRadius: 40))
```
**EDIT**
Capsule is a better shape to use in place of RoundedRectangle to achieve matching curves:
```
var body: some View {
HStack {
Text("Login")
.font(.headline)
.foregroundColor(showLogin ? Color.white : .black)
.padding()
.frame(minWidth: 0, maxWidth: .infinity)
.background(Capsule().fill(showLogin ? Color.green : .gray))
.onTapGesture { self.showLogin = true }
Text("Join")
.font(.headline)
.foregroundColor(!showLogin ? Color.white : .black)
.padding()
.frame(minWidth: 0, maxWidth: .infinity)
.background(Capsule().fill(!showLogin ? Color.green : .gray))
.onTapGesture { self.showLogin = false }
} .background(Capsule().fill(Color.gray))
.overlay(Capsule().stroke(Color.green, lineWidth: 1))
}
``` |
31,154,087 | I am developing a Flask app. I made one table which will be populated with JSON data. For the front end I am using AngularJS and for the back end I am using Flask. But I am not able to populate the table, and I get an error like "**UndefinedError: 'task' is undefined.**"
**Directory of flask project**
```
flask_project/
    rest-server.py
    templates/index.html
```
**rest-server.py**
```
#!flask/bin/python
import six
from flask import Flask, jsonify, abort, request, make_response, url_for, render_template
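from flask_httpauth import HTTPBasicAuth  # assumed import (omitted in the post); needed for HTTPBasicAuth below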
app = Flask(__name__, static_url_path="")
auth = HTTPBasicAuth()
tasks = [
{
'id': 1,
'title': u'Buy groceries',
'description': u'Milk, Cheese, Pizza, Fruit, Tylenol',
'done': False
},
{
'id': 2,
'title': u'Learn Python',
'description': u'Need to find a good Python tutorial on the web',
'done': False
}
]
@app.route('/')
def index():
return render_template('index.html')
@app.route('/todo/api/v1.0/tasks', methods=['GET'])
def get_tasks():
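    # make_public_task is defined elsewhere in the project (omitted from the post);
    # judging by the JSON output below, it replaces each task's id with a uri field.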
return jsonify({'tasks': [make_public_task(task) for task in tasks]})
```
I am successfully able to get the JSON data using
<http://127.0.0.1:5000/todo/api/v1.0/tasks>
**Json array is**
```
{
"tasks":
[
{
"description": "Milk, Cheese, Pizza, Fruit, Tylenol",
"done": false,
"title": "Buy groceries",
"uri": "http://127.0.0.1:5000/todo/api/v1.0/tasks/1"
},
{
"description": "Need to find a good Python tutorial on the web",
"done": false,
"title": "Learn Python",
"uri": "http://127.0.0.1:5000/todo/api/v1.0/tasks/2"
}
]
}
```
**Index.html**
```
<!DOCTYPE html>
<html ng-app="app">
<head>
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.19/angular.min.js"></script>
<link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.2.0/css/bootstrap.min.css" rel="stylesheet">
</head>
<body data-ng-app="app">
<!--our controller-->
<div ng-controller="ItemController">
<button id="get-items-button" ng-click="getItems()">Get Items</button>
<p>Look at the list of tasks!</p>
<!--this table shows the items we get from our service-->
<table cellpadding="0" cellspacing="0">
<thead>
<tr>
<th>Description</th>
<th>Done</th>
<th>Title</th>
<th>URI</th>
</tr>
</thead>
<tbody>
<!--repeat this table row for each item in items-->
<tr ng-repeat="task in tasks">
<td>{{task.description}}</td>
<td>{{task.done}}</td>
<td>{{task.title}}</td>
<td>{{task.uri}}</td>
</tr>
</tbody>
</table>
</div>
<script>
(function () {
//create our module
angular.module('app', [])
//add controller
.controller('ItemController', function ($scope, $http) {
//declare an array of items. this will get populated with our ajax call
$scope.tasks = [];
//declare an action for our button
$scope.getItems = function () {
//perform ajax call.
$http({
url: "/todo/api/v1.0/tasks",
method: "GET"
}).success(function (data, status, headers, config) {
//copy the data we get to our items array. we need to use angular.copy so that
//angular can track the object and bind it automatically.
angular.copy(data.tasks, $scope.tasks);
}).error(function (data, status, headers, config) {
//something went wrong
alert('Error getting data');
});
}
});
//console.log($scope.tasks);
})();
</script>
</body>
</html>
``` | 2015/07/01 | [
"https://Stackoverflow.com/questions/31154087",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4884941/"
] | I think it's because you have two `ng-app` definitions in your index.html.
Remove the definition in your html tag and try again. Change
```
<html ng-app="tableJson">
```
into
```
<html>
``` | Try this
```
$scope.tasks = data;
```
It works for me. |
31,154,087 | I am developing a Flask app. I made one table which will be populated with JSON data. For the front end I am using AngularJS and for the back end I am using Flask. But I am not able to populate the table, and I get an error like "**UndefinedError: 'task' is undefined.**"
**Directory of flask project**
```
flask_project/
    rest-server.py
    templates/index.html
```
**rest-server.py**
```
#!flask/bin/python
import six
from flask import Flask, jsonify, abort, request, make_response, url_for, render_template
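from flask_httpauth import HTTPBasicAuth  # assumed import (omitted in the post); needed for HTTPBasicAuth below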
app = Flask(__name__, static_url_path="")
auth = HTTPBasicAuth()
tasks = [
{
'id': 1,
'title': u'Buy groceries',
'description': u'Milk, Cheese, Pizza, Fruit, Tylenol',
'done': False
},
{
'id': 2,
'title': u'Learn Python',
'description': u'Need to find a good Python tutorial on the web',
'done': False
}
]
@app.route('/')
def index():
return render_template('index.html')
@app.route('/todo/api/v1.0/tasks', methods=['GET'])
def get_tasks():
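    # make_public_task is defined elsewhere in the project (omitted from the post);
    # judging by the JSON output below, it replaces each task's id with a uri field.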
return jsonify({'tasks': [make_public_task(task) for task in tasks]})
```
I am successfully able to get the JSON data using
<http://127.0.0.1:5000/todo/api/v1.0/tasks>
**Json array is**
```
{
"tasks":
[
{
"description": "Milk, Cheese, Pizza, Fruit, Tylenol",
"done": false,
"title": "Buy groceries",
"uri": "http://127.0.0.1:5000/todo/api/v1.0/tasks/1"
},
{
"description": "Need to find a good Python tutorial on the web",
"done": false,
"title": "Learn Python",
"uri": "http://127.0.0.1:5000/todo/api/v1.0/tasks/2"
}
]
}
```
**Index.html**
```
<!DOCTYPE html>
<html ng-app="app">
<head>
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.19/angular.min.js"></script>
<link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.2.0/css/bootstrap.min.css" rel="stylesheet">
</head>
<body data-ng-app="app">
<!--our controller-->
<div ng-controller="ItemController">
<button id="get-items-button" ng-click="getItems()">Get Items</button>
<p>Look at the list of tasks!</p>
<!--this table shows the items we get from our service-->
<table cellpadding="0" cellspacing="0">
<thead>
<tr>
<th>Description</th>
<th>Done</th>
<th>Title</th>
<th>URI</th>
</tr>
</thead>
<tbody>
<!--repeat this table row for each item in items-->
<tr ng-repeat="task in tasks">
<td>{{task.description}}</td>
<td>{{task.done}}</td>
<td>{{task.title}}</td>
<td>{{task.uri}}</td>
</tr>
</tbody>
</table>
</div>
<script>
(function () {
//create our module
angular.module('app', [])
//add controller
.controller('ItemController', function ($scope, $http) {
//declare an array of items. this will get populated with our ajax call
$scope.tasks = [];
//declare an action for our button
$scope.getItems = function () {
//perform ajax call.
$http({
url: "/todo/api/v1.0/tasks",
method: "GET"
}).success(function (data, status, headers, config) {
//copy the data we get to our items array. we need to use angular.copy so that
//angular can track the object and bind it automatically.
angular.copy(data.tasks, $scope.tasks);
}).error(function (data, status, headers, config) {
//something went wrong
alert('Error getting data');
});
}
});
//console.log($scope.tasks);
})();
</script>
</body>
</html>
``` | 2015/07/01 | [
"https://Stackoverflow.com/questions/31154087",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4884941/"
] | I think it's because you have two `ng-app` definitions in your index.html.
Remove the definition in your html tag and try again. Change
```
<html ng-app="tableJson">
```
into
```
<html>
``` | You should use an Angular service to get the data from the server. |
14,129,983 | I need a script that updates my copy of a repository. When I type "svn up" I am usually forced to enter a password; how do I automate the password entry?
What I've tried:
```
import pexpect, sys, re
pexpect.run("svn cleanup")
child = pexpect.spawn('svn up')
child.logfile = sys.stdout
child.expect("Enter passphrase for key \'/home/rcompton/.ssh/id_rsa\':")
child.sendline("majorSecurityBreach")
matchanything = re.compile('.*', re.DOTALL)
child.expect(matchanything)
child.close()
```
But it does not seem to be updating.
**edit:** If it matters, I can get my repository to update with child.interact()
```
import pexpect, sys, re
pexpect.run("svn cleanup")
child = pexpect.spawn('svn up')
child.logfile = sys.stdout
i = child.expect("Enter passphrase for key \'/home/rcompton/.ssh/id_rsa\':")
child.interact()
```
allows me to enter my password and starts updating. However, I end up with an error anyway.
```
-bash-3.2$ python2.7 exRepUpdate.py
Enter passphrase for key '/home/rcompton/.ssh/id_rsa':
At revision 4386.
At revision 4386.
Traceback (most recent call last):
File "exRepUpdate.py", line 13, in <module>
child.interact()
File "build/bdist.linux-x86_64/egg/pexpect.py", line 1497, in interact
File "build/bdist.linux-x86_64/egg/pexpect.py", line 1525, in __interact_copy
File "build/bdist.linux-x86_64/egg/pexpect.py", line 1515, in __interact_read
OSError: [Errno 5] Input/output error
```
**edit:** Alright, I found a way around plaintext password entry. An important detail I left out (which, honestly, I didn't think I'd need since this seemed like it would be an easy problem) is that I had to send a public key to our IT dept. when I first got access to the repo. Avoiding the password entry in the ssh+svn setup that I'm dealing with can be done with ssh-agent. This link: <http://mah.everybody.org/docs/ssh> gives an easy overview. The solution by Joseph M. Reagle, by way of Daniel Starin, only requires that I enter my passphrase once, at login, allowing me to execute my script each night despite the password entry. | 2013/01/02 | [
"https://Stackoverflow.com/questions/14129983",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/424631/"
] | If you don't want to type your password many times but still want a secure solution, you can use **ssh-agent** to keep your key passphrases for a while. If you use your default private key, simply type `ssh-add` and give your passphrase when asked.
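As a minimal sketch building on the pexpect approach from the question (the key path and passphrase are reused from there; the prompt text is an assumption), the key can be loaded once per session:
```
import pexpect

# Assumes ssh-agent is already running for this login session.
child = pexpect.spawn('ssh-add /home/rcompton/.ssh/id_rsa')
child.expect('Enter passphrase')
child.sendline('majorSecurityBreach')
child.expect(pexpect.EOF)
```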
More details on `ssh-add` command usage are here: [linux.die.net/man/1/ssh-add](http://linux.die.net/man/1/ssh-add) | You should really just use ssh with public keys.
In the absence of that, you can simply create a new file in `~/.subversion/auth/svn.simple/` with the contents:
```
K 8
passtype
V 6
simple
K 999
password
V 7
password_goes_here
K 15
svn:realmstring
V 999
<url> real_identifier
K 8
username
V 999
username_goes_here
END
```
The 999 numbers are the length of the next line (minus `\n`). The filename should be the MD5 sum of the realm string. |
23,390,397 | So I've been at this one for a little while and can't seem to get it. I'm trying to execute a Python script via the terminal and want to pass a string value with it. That way, when the script starts, it can check that value and act accordingly. Like this:
```
sudo python myscript.py mystring
```
How can I go about doing this? I know there's a way to start and stop a script using bash, but that's not really what I'm looking for. Any and all help accepted! | 2014/04/30 | [
"https://Stackoverflow.com/questions/23390397",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1661607/"
] | Try the following inside your script:
```
import sys
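# sys.argv[0] is the script name; sys.argv[1] is the first argument after it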
arg1 = str(sys.argv[1])
print(arg1)
``` | Since you are passing a string, you need to pass it in quotes:
```
sudo python myscript.py 'mystring'
```
Also, you shouldn't have to run it with sudo. |
57,809,780 | I'm trying to convert a .tif image in Python using the skimage module.
It's not working properly.
```
from skimage import io
img = io.imread('/content/IMG_0007_4.tif')
io.imsave('/content/img.jpg', img)
```
Here is the error:
```
/usr/local/lib/python3.6/dist-packages/imageio/core/functions.py in get_writer(uri, format, mode, **kwargs)
if format is None:
raise ValueError(
"Could not find a format to write the specified file " "in mode %r" % mode)
ValueError: Could not find a format to write the specified file in mode 'i'
```
EDIT 1:
A method I found was to open the image using skimage, convert it to 8 bits, and then save it as PNG.
However, I still can't save it as .jpg:
```
img = io.imread('/content/IMG_0007_4.tif',as_gray=True)
img8 = (img/256).astype('uint8')
matplotlib.image.imsave('/content/name.png', img8)
``` | 2019/09/05 | [
"https://Stackoverflow.com/questions/57809780",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8229169/"
] | 1. I don't think HAVING will work without GROUP.
2. I would move the having clause outside the include section and use the AS aliases.
So, roughly:
`group: ['id'], // and whatever else you need
having : { 'documents.total_balance_due' : {$eq : 0 }}`
(Making some guesses vis-à-vis the aliases.) | >
> To filter the data from a joined table which uses GROUP BY as well, you can make use of the HAVING property, which is accepted by Sequelize.
>
>
>
So with respect to your question, I am providing the answer.
You can make use of this code:
```
const Sequelize = require('sequelize');
let searchQuery = {
attributes: {
// include everything from business table and total_due_balance as well
include: [[Sequelize.fn('SUM', Sequelize.col('documents.due_balance')), 'total_due_balance']]
},
include: [
{
model: Documents, // table, which you require from your defined model
as: 'documents', // alias through which it is defined to join in hasMany or belongsTo Associations
required: true, // make inner join
attributes: [] // select nothing from Documents table, if you want to select you can pass column name as a string to array
}
],
group: ['business.id'], // Business is a table
having: ''
};
if (params.contactability === 'with_balance') {
searchQuery.having = Sequelize.literal(`total_due_balance > 0`);
} else if (params.contactability === 'without_balance') {
searchQuery.having = Sequelize.literal(`total_due_balance = 0`);
}
Business // table, which you require from your defined model
.findAll(searchQuery)
.then(result => {
console.log(result);
})
.catch(err => {
console.log(err);
});
```
Note : Change model name or attributes according to your requirement.
Hope this will help you or somebody else! |
26,290,871 | How can I build a python distribution RPM that is only dependent on an *earlier* version of python?
**Why?** I'm trying to build distribution RPMs for RHEL6/CentOS 6, which only includes Python 2.6, but I usually build on machines with Python 2.7.
This is an open source project, and I have already ensured that it shouldn't be including any libraries/APIs that are not in 2.6.
I am building the RPMs with:
```
python setup.py bdist_rpm
```
---
**setup.py file:**
```
from distutils.core import setup
setup(name='pyresttest',
version='0.1',
description='Text',
maintainer='Not listing here',
maintainer_email='no,just no',
url='project url here',
keywords='rest web http testing',
packages=['pyresttest'],
license='Apache License, Version 2.0',
requires=['yaml','pycurl']
)
```
(Specifics removed for the url, maintainer, email and description).
The RPM appears to be valid, but when I try to install on RHEL6, I get this error:
`python(abi) = 2.7 is needed by pyresttest-0.1-1.noarch`
There should be some way to get it to override the default python version to require, or supply a custom SPEC file, but after several hours of fiddling with it, I'm stuck. Ideas?
---
EDIT: I suppose I should clarify why I'm doing a RPM for python code, instead of just using setuptools or pip: this will hopefully go to production at work, where all deployments are RPM-based and most VMs are still RHEL6. Asking them to adopt another packaging tool is likely to be a non-starter, since our company is closely tied to the RPM format. | 2014/10/10 | [
"https://Stackoverflow.com/questions/26290871",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/95122/"
] | Re-organized the answer.
Actually, there's no "rpm-package". There're rpm-packages for RHEL6, rpm-packages for FedoraNN, rpm-packagse for OpenSUSE-X.Y and so on. And besides there're Debian, Ubuntu, Arch and Gentoo :)
You have the following possibilities with your Python package:
1. You may completely avoid rpm-, deb- and other "native linux packaging systems", and may opt to use a "python-native" packaging system like [PIP](https://pip.pypa.io/en/1.5.X/index.html). Thus you completely avoid the complexity and lack of compatibility between packaging systems in various versions and various flavours of Linux. And for a package which doesn't "infiltrate" deeply into "core system", this could be the best solution.
2. You may continue to use RPM as an archive format for your package but completely turn off automatic dependency calculation. This can be done with the `AutoReqProv: no` directive in the spec. To work with a customized spec one may use the `--spec-only` and `--spec-file` [distutils options](https://docs.python.org/2.0/dist/creating-rpms.html). But remember that a package built this way is even worse than a zip from p.1: without proper dependencies it contains less of the necessary metainformation and thus "defames" the whole idea behind Linux packaging systems, which were invented to build consistent systems, to avoid problems like "DLL hell", and to be suitable for automatic maintenance and updates. Actually you may add dependency information manually via the `Requires: <something>` tag, but this may become even harder and more boring if you target several Linux platforms at once (a hedged `setup.cfg` sketch is shown a few lines below).
3. In order to take into account all those complex and boring details and nuances of a particular package system you may create "build sandboxes" with appropriate versions of necessary Linux flavours. My preferred way to create such sandboxes is to use pre-created ["OpenVZ templates"](http://wiki.openvz.org/Download/template/precreated), but without OpenVZ per se: simply unpack a given archive into a subdirectory (being `root` to preserve permissions), then `chroot` into the subdirectory, and voila! you've got Debian, RHEL etc... Fedora people have created [Mock](http://fedoraproject.org/wiki/Projects/Mock) for the same purposes and likely `Mock` would be a more elaborated solution. As @BobMcGee suggests in the comment one also may consider [Jenkins Docker plugin](https://wiki.jenkins-ci.org/display/JENKINS/Docker+Plugin)
Once you have a build sandbox with the Python distribution specific to that system, distutils, etc., you may automate the build process using simple scripting, bash or Python.
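Regarding point 2, a hedged sketch of a `setup.cfg` placed next to setup.py (the option names are bdist_rpm's own; the dependency spelling is an assumption):
```
[bdist_rpm]
# Turn off automatic dependency calculation...
no_autoreq = 1
# ...and declare the dependencies by hand (package names are guesses):
requires = python >= 2.6
```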
That's it. | I do not do very much Python work but have done some RPM packaging. You probably need to do what one would normally do in the RPM's spec file and require a particular release of Python, like so ...
```
# this would be in your spec file
requires: python <= 2.6
```
Take a look here for more info:
<http://ftp.rpm.org/max-rpm/s1-rpm-depend-manual-dependencies.html> |
31,910,680 | I installed the networking module **Scapy**.
When I import scapy (`import scapy`) everything works fine. When I import all from scapy (`from scapy.all import *`), it brings up this error:
```
Traceback (most recent call last):
File "/Users/***/Downloads/test.py", line 5, in <module>
from scapy.all import *
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scapy/all.py", line 16, in <module>
from .arch import *
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scapy/arch/__init__.py", line 75, in <module>
from .bsd import *
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scapy/arch/bsd.py", line 12, in <module>
from .unix import *
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scapy/arch/unix.py", line 22, in <module>
from .pcapdnet import *
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scapy/arch/pcapdnet.py", line 22, in <module>
from .cdnet import *
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scapy/arch/cdnet.py", line 17, in <module>
raise OSError("Cannot find libdnet.so")
OSError: Cannot find libdnet.so
```
I found out on another post that we might have to download additional modules in order to make scapy fully work. What should be done exactly?
I tried using (port \*\* install), which didn't work because port is not supported anymore. If you have any idea how to make it work in Python 3, I'll be around. Here is some additional information:
```
python 3.4.3
mac os 10.10.4
scapy-python3==0.14
```
EDIT: Another interesting thing is:
On all OS except Linux libpcap should be installed for sending and receiving packets (not python modules - just C libraries). libdnet is recommended for sending packets, without libdnet packets will be sent by libpcap, which is limited. Also, netifaces module can be used for alternative and possibly cleaner way to determine local addresses.
Source: <https://pypi.python.org/pypi/scapy-python3/0.11>
Dnet seems to only work with version 2.7 : <https://pypi.python.org/pypi/dnet/1.12> | 2015/08/10 | [
"https://Stackoverflow.com/questions/31910680",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4844191/"
] | **You can now install this easily** with [Homebrew](http://brew.sh) by using the command:
```
brew install libdnet
```
after you've installed Homebrew. | **Up-to-date edit: this issue has been fixed on recent versions of scapy, simply update your scapy version using `pip install scapy>=2.4.0`**
You have to install libdnet. Not the python library (which does not work on python3, as you mentioned), but the library itself. There has to be a library file libdnet.so somewhere on your system where python searches for libraries. Downloading the libdnet source, then compiling and installing it, should make it work:
```
wget http://libdnet.googlecode.com/files/libdnet-1.12.tgz
tar xfz libdnet-1.12.tgz
cd libdnet-1.12
./configure
make
sudo make install  # install libdnet.so where the dynamic loader can find it
```
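To check whether the library is now discoverable from Python (a quick verification of my own, not part of the original answer), the standard library can ask the loader directly:
```
import ctypes.util

# Prints a name like "libdnet.so" if the dynamic loader can find the library,
# or None if it cannot -- in which case the scapy import will most likely
# keep failing with the same OSError.
print(ctypes.util.find_library("dnet"))
```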
Also, there is a possibility to use libpcap for sending packets and not to use libdnet, but I recommend trying to make libdnet work first. |
31,910,680 | I installed the networking module **Scapy**.
When I import scapy (`import scapy`) everything works fine. When I import all from scapy (`from scapy.all import *`), it brings up this error:
```
Traceback (most recent call last):
File "/Users/***/Downloads/test.py", line 5, in <module>
from scapy.all import *
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scapy/all.py", line 16, in <module>
from .arch import *
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scapy/arch/__init__.py", line 75, in <module>
from .bsd import *
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scapy/arch/bsd.py", line 12, in <module>
from .unix import *
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scapy/arch/unix.py", line 22, in <module>
from .pcapdnet import *
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scapy/arch/pcapdnet.py", line 22, in <module>
from .cdnet import *
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scapy/arch/cdnet.py", line 17, in <module>
raise OSError("Cannot find libdnet.so")
OSError: Cannot find libdnet.so
```
I found out on another post that we might have to download additional modules in order to make scapy fully work. What should be done exactly?
I tried using (port \*\* install), which didn't work because port is not supported anymore. If you have any idea how to make it work in Python 3, I'll be around. Here is some additional information:
```
python 3.4.3
mac os 10.10.4
scapy-python3==0.14
```
EDIT: Another interesting thing is:
On all OS except Linux libpcap should be installed for sending and receiving packets (not python modules - just C libraries). libdnet is recommended for sending packets, without libdnet packets will be sent by libpcap, which is limited. Also, netifaces module can be used for alternative and possibly cleaner way to determine local addresses.
Source: <https://pypi.python.org/pypi/scapy-python3/0.11>
Dnet seems to only work with version 2.7 : <https://pypi.python.org/pypi/dnet/1.12> | 2015/08/10 | [
"https://Stackoverflow.com/questions/31910680",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4844191/"
] | **Up-to-date edit: this issue has been fixed on recent versions of scapy, simply update your scapy version using `pip install scapy>=2.4.0`**
You have to install libdnet. Not the python library (which does not work on python3, as you mentioned), but the library itself. There has to be a library file libdnet.so somewhere on your system where python searches for libraries. Downloading the libdnet source, then compiling and installing it, should make it work:
```
wget http://libdnet.googlecode.com/files/libdnet-1.12.tgz
tar xfz libdnet-1.12.tgz
cd libdnet-1.12
./configure
make
sudo make install  # install libdnet.so where the dynamic loader can find it
```
Also, there is a possibility to use libpcap for sending packets and not to use libdnet, but I recommend trying to make libdnet work first. | You can try the following:
```
git clone https://github.com/secdev/scapy
cd scapy
./run_scapy
``` |
31,910,680 | I installed the networking module **Scapy**.
When I import scapy (`import scapy`) everything works fine. When I import all from scapy (`from scapy.all import *`), it brings up this error:
```
Traceback (most recent call last):
File "/Users/***/Downloads/test.py", line 5, in <module>
from scapy.all import *
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scapy/all.py", line 16, in <module>
from .arch import *
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scapy/arch/__init__.py", line 75, in <module>
from .bsd import *
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scapy/arch/bsd.py", line 12, in <module>
from .unix import *
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scapy/arch/unix.py", line 22, in <module>
from .pcapdnet import *
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scapy/arch/pcapdnet.py", line 22, in <module>
from .cdnet import *
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scapy/arch/cdnet.py", line 17, in <module>
raise OSError("Cannot find libdnet.so")
OSError: Cannot find libdnet.so
```
I found out on another post that we might have to download additional modules in order to make scapy fully work. What should be done exactly?
I tried using (port \*\* install), which didn't work because port is not supported anymore. If you have any idea how to make it work in Python 3, I'll be around. Here is some additional information:
```
python 3.4.3
mac os 10.10.4
scapy-python3==0.14
```
EDIT: Another interesting thing is:
On all OS except Linux libpcap should be installed for sending and receiving packets (not python modules - just C libraries). libdnet is recommended for sending packets, without libdnet packets will be sent by libpcap, which is limited. Also, netifaces module can be used for alternative and possibly cleaner way to determine local addresses.
Source: <https://pypi.python.org/pypi/scapy-python3/0.11>
Dnet seems to only work with version 2.7 : <https://pypi.python.org/pypi/dnet/1.12> | 2015/08/10 | [
"https://Stackoverflow.com/questions/31910680",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4844191/"
] | **You can now install this easily** with [Homebrew](http://brew.sh) by using the command:
```
brew install libdnet
```
after you've installed Homebrew. | You can try the following:
```
git clone https://github.com/secdev/scapy
cd scapy
./run_scapy
``` |
73,920,457 | How do I get the "rest of the list" after the current element for an iterator in a loop?
I have a list:
`[ "a", "b", "c", "d" ]`
They are not actually letters, they are words, but the letters are there for illustration, and there is no reason to expect the list to be small.
For each member of the list, I need to:
```
def f(depth, list):
for i in list:
print(f"{depth} {i}")
f(depth+1, rest_of_the_list_after_i)
f(0,[ "a", "b", "c", "d" ])
```
The desired output (with spaces for clarity) would be:
```
0 a
1 b
2 c
3 d
2 d
1 c
2 d
1 d
0 b
1 c
2 d
1 d
0 c
1 d
0 d
```
I explored `enumerate` with little luck.
The reality of the situation is that there is a `yield` terminating condition. But that's another matter.
I am using (and learning with) python 3.10
This is not homework. I'm 48 :)
You could also look at it like:
```
0 a 1 b 2 c 3 d
2 d
1 c 2 d
1 d
0 b 1 c 2 d
1 d
0 c 1 d
0 d
```
That illustrates the stream nature of the thing. | 2022/10/01 | [
"https://Stackoverflow.com/questions/73920457",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1783593/"
] | Seems like there are plenty of answers here, but another way to solve your given problem:
```py
def f(depth, l):
for idx, item in enumerate(l):
        step = f"{depth * ' '} {depth} {item}"  # the item itself, not item[0] (only its first character)
print(step)
f(depth + 1, l[idx + 1:])
f(0,[ "a", "b", "c", "d" ])
``` | ```
def f(depth, alist):
    # stop once the list is exhausted; otherwise alist[0] raises IndexError
    if not alist:
        return
    # you don't need a loop if you only care about the first element
    print(f"{depth} {alist[0]}")
    next_depth = depth + 1
    rest_list = alist[1:]
    f(next_depth, rest_list)
```
this doesn't seem like a very useful method, though
```
def f(depth, alist):
    # if you actually want to iterate it
    for i, item in enumerate(alist):
        print(f"{depth} {item}")       # print the current item, not alist[0]
        next_depth = depth + 1
        rest_list = alist[i + 1:]      # slice past the current item to avoid infinite recursion
        f(next_depth, rest_list)
``` |
73,920,457 | How do I get the "rest of the list" after the current element for an iterator in a loop?
I have a list:
`[ "a", "b", "c", "d" ]`
They are not actually letters, they are words, but the letters are there for illustration, and there is no reason to expect the list to be small.
For each member of the list, I need to:
```
def f(depth, list):
for i in list:
print(f"{depth} {i}")
f(depth+1, rest_of_the_list_after_i)
f(0,[ "a", "b", "c", "d" ])
```
The desired output (with spaces for clarity) would be:
```
0 a
1 b
2 c
3 d
2 d
1 c
2 d
1 d
0 b
1 c
2 d
1 d
0 c
1 d
0 d
```
I explored `enumerate` with little luck.
The reality of the situation is that there is a `yield` terminating condition. But that's another matter.
I am using (and learning with) python 3.10
This is not homework. I'm 48 :)
You could also look at it like:
```
0 a 1 b 2 c 3 d
2 d
1 c 2 d
1 d
0 b 1 c 2 d
1 d
0 c 1 d
0 d
```
That illustrates the stream nature of the thing. | 2022/10/01 | [
"https://Stackoverflow.com/questions/73920457",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1783593/"
] | Seems like there are plenty of answers here, but another way to solve your given problem:
```py
def f(depth, l):
for idx, item in enumerate(l):
        step = f"{depth * ' '} {depth} {item}"  # the item itself, not item[0] (only its first character)
print(step)
f(depth + 1, l[idx + 1:])
f(0,[ "a", "b", "c", "d" ])
``` | I guess this code is what you're looking for
```
def f(depth, lst):
for e,i in enumerate(lst):
print(f"{depth} {i}")
f(depth+1, lst[e+1:])
f(0,[ "a", "b", "c", "d" ])
``` |
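Since the question mentions a `yield` terminating condition, the same traversal also falls out naturally as a generator. This sketch is my own addition, not taken from the answers above:
```
def walk(depth, lst):
    # yield (depth, item) pairs in the same order the recursive prints appear
    for i, item in enumerate(lst):
        yield depth, item
        yield from walk(depth + 1, lst[i + 1:])

for depth, item in walk(0, ["a", "b", "c", "d"]):
    print(f"{depth} {item}")
```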
48,535,962 | My data has a feature called level, and the data may have levels(-1,0,1,2,3) but my data now has only 2 levels 0 and -1. I'm using python for binary classification. How to do one-hot-encoding with all levels? What is the right approach to deal with this problem? Can I include all levels as I may expect them in test data? Or should I use only 2 levels ? | 2018/01/31 | [
"https://Stackoverflow.com/questions/48535962",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9186358/"
] | Currently it is assigning only the last value, as all parameters have the same name.
You can use `[]` after the variable name; it will create a newcoach array with all the values in it.
```
$test = "newcoach[]=6&newcoach[]=11&newcoach[]=12&newcoach[]=13&newcoach[]=14";
echo '<pre>';
parse_str($test,$result);
print_r($result);
```
O/p:
```
Array
(
[newcoach] => Array
(
[0] => 6
[1] => 11
[2] => 12
[3] => 13
[4] => 14
)
)
``` | Use this function
```
function proper_parse_str($str) {
# result array
$arr = array();
# split on outer delimiter
$pairs = explode('&', $str);
# loop through each pair
foreach ($pairs as $i) {
# split into name and value
list($name,$value) = explode('=', $i, 2);
# if name already exists
if( isset($arr[$name]) ) {
# stick multiple values into an array
if( is_array($arr[$name]) ) {
$arr[$name][] = $value;
}
else {
$arr[$name] = array($arr[$name], $value);
}
}
# otherwise, simply stick it in a scalar
else {
$arr[$name] = $value;
}
}
# return result array
return $arr;
}
$parsed_array = proper_parse_str($newcoach);
``` |
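Coming back to the pandas question at the top of this entry: the usual approach is to declare every expected level as a category up front, so the encoding emits columns even for levels absent from the training data. A minimal sketch (the column name and toy data are my own assumptions):
```
import pandas as pd

levels = [-1, 0, 1, 2, 3]                      # every level you may see at test time
df = pd.DataFrame({"level": [0, -1, 0, -1]})   # toy training data with only two levels
df["level"] = pd.Categorical(df["level"], categories=levels)
dummies = pd.get_dummies(df["level"], prefix="level")
print(list(dummies.columns))
# ['level_-1', 'level_0', 'level_1', 'level_2', 'level_3']
```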
48,535,962 | My data has a feature called level, and the data may have levels(-1,0,1,2,3) but my data now has only 2 levels 0 and -1. I'm using python for binary classification. How to do one-hot-encoding with all levels? What is the right approach to deal with this problem? Can I include all levels as I may expect them in test data? Or should I use only 2 levels ? | 2018/01/31 | [
"https://Stackoverflow.com/questions/48535962",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9186358/"
] | Since you set your argument **newcoach** multiple times, parse\_str will only return the last one. If you want parse\_str to parse your variable as an array you need to supply it in this format with a '**[ ]**' suffix:
```
$newcoach = "newcoach[]=6&newcoach[]=11&newcoach[]=12&newcoach[]=13&newcoach[]=14";
```
**Example:**
```
<?php
$newcoach = "newcoach[]=6&newcoach[]=11&newcoach[]h=12&newcoach[]=13&newcoach[]=14";
$searcharray = array();
parse_str($newcoach, $searcharray);
print_r($searcharray);
?>
```
**Outputs:**
```
Array ( [newcoach] => Array ( [0] => 6 [1] => 11 [2] => 12 [3] => 13 [4] => 14 ) )
``` | Use this function
```
function proper_parse_str($str) {
# result array
$arr = array();
# split on outer delimiter
$pairs = explode('&', $str);
# loop through each pair
foreach ($pairs as $i) {
# split into name and value
list($name,$value) = explode('=', $i, 2);
# if name already exists
if( isset($arr[$name]) ) {
# stick multiple values into an array
if( is_array($arr[$name]) ) {
$arr[$name][] = $value;
}
else {
$arr[$name] = array($arr[$name], $value);
}
}
# otherwise, simply stick it in a scalar
else {
$arr[$name] = $value;
}
}
# return result array
return $arr;
}
$parsed_array = proper_parse_str($newcoach);
``` |
48,535,962 | My data has a feature called level, and the data may have levels(-1,0,1,2,3) but my data now has only 2 levels 0 and -1. I'm using python for binary classification. How to do one-hot-encoding with all levels? What is the right approach to deal with this problem? Can I include all levels as I may expect them in test data? Or should I use only 2 levels ? | 2018/01/31 | [
"https://Stackoverflow.com/questions/48535962",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9186358/"
] | Since you set your argument **newcoach** multiple times, parse\_str will only return the last one. If you want parse\_str to parse your variable as an array you need to supply it in this format with a '**[ ]**' suffix:
```
$newcoach = "newcoach[]=6&newcoach[]=11&newcoach[]=12&newcoach[]=13&newcoach[]=14";
```
**Example:**
```
<?php
$newcoach = "newcoach[]=6&newcoach[]=11&newcoach[]=12&newcoach[]=13&newcoach[]=14";
$searcharray = array();
parse_str($newcoach, $searcharray);
print_r($searcharray);
?>
```
**Outputs:**
```
Array ( [newcoach] => Array ( [0] => 6 [1] => 11 [2] => 12 [3] => 13 [4] => 14 ) )
``` | Currently it is assigning only the last value, as all parameters have the same name.
You can use `[]` after the variable name; it will create a newcoach array with all the values in it.
```
$test = "newcoach[]=6&newcoach[]=11&newcoach[]=12&newcoach[]=13&newcoach[]=14";
echo '<pre>';
parse_str($test,$result);
print_r($result);
```
O/p:
```
Array
(
[newcoach] => Array
(
[0] => 6
[1] => 11
[2] => 12
[3] => 13
[4] => 14
)
)
``` |
39,303,710 | I am new to Python and machine learning and I am trying to work out how to fix this issue with datetimes. next\_unix is 13148730, because that is how many seconds are in five months, which is the time between my dates. I have searched and I can't seem to find anything that works.
```
last_date = df.iloc[1,0]
last_unix = pd.to_datetime('2015-01-31 00:00:00') +pd.Timedelta(13148730)
five_months = 13148730
next_unix = last_unix + five_months
for i in forecast_set:
next_date = Timestamp('2015-06-30 00:00:00')
next_unix += 13148730
df.loc[next_date] = [np.nan for _ in range(len(df.columns)-1)]+[i]
```
Error:
```
Traceback (most recent call last):
File "<ipython-input-23-18adaa6b781f>", line 1, in <module>
runfile('C:/Users/HP/Documents/machine learning.py', wdir='C:/Users/HP/Documents')
File "C:\Users\HP\Anaconda3\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 714, in runfile
execfile(filename, namespace)
File "C:\Users\HP\Anaconda3\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 89, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/HP/Documents/machine learning.py", line 74, in <module>
next_unix = last_unix + five_months
File "pandas\tslib.pyx", line 1025, in pandas.tslib._Timestamp.__add__ (pandas\tslib.c:20118)
ValueError: Cannot add integral value to Timestamp without offset.
``` | 2016/09/03 | [
"https://Stackoverflow.com/questions/39303710",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2770803/"
] | If my understanding is correct, then you can get the desired result with the following:
```
SELECT i.*,
CASE WHEN prop1.PROPERTY_ID = 1 THEN prop1.VALUE ELSE '' END AS PROPERTY_ONE,
CASE WHEN prop1.PROPERTY_ID = 2 THEN prop1.VALUE ELSE '' END AS PROPERTY_TWO
FROM ITEM i
LEFT JOIN ITEM_PROPERTY prop1 on i.ITEM_ID = prop1.ITEM_D
AND prop1.PROPERTY_ID IN (1, 2)
``` | ```
Select i.*, GROUP_CONCAT(prop.VALUE) as PROPERTY_VALUE
From ITEM i
Left Join ITEM_PROPERTY prop on i.ITEM_ID = prop.ITEM_D
``` |
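As for the pandas error quoted in the question: the usual fix is to add a `Timedelta` instead of a bare integer. A short sketch using the question's own numbers (the printed result is approximate):
```
import pandas as pd

five_months = pd.Timedelta(seconds=13148730)   # label the unit; a bare int is rejected
last_unix = pd.to_datetime("2015-01-31 00:00:00")
next_unix = last_unix + five_months
print(next_unix)   # 2015-07-02 04:25:30
```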
39,303,710 | I am new to Python and machine learning and I am trying to work out how to fix this issue with datetimes. next\_unix is 13148730, because that is how many seconds are in five months, which is the time between my dates. I have searched and I can't seem to find anything that works.
```
last_date = df.iloc[1,0]
last_unix = pd.to_datetime('2015-01-31 00:00:00') +pd.Timedelta(13148730)
five_months = 13148730
next_unix = last_unix + five_months
for i in forecast_set:
next_date = Timestamp('2015-06-30 00:00:00')
next_unix += 13148730
df.loc[next_date] = [np.nan for _ in range(len(df.columns)-1)]+[i]
```
Error:
```
Traceback (most recent call last):
File "<ipython-input-23-18adaa6b781f>", line 1, in <module>
runfile('C:/Users/HP/Documents/machine learning.py', wdir='C:/Users/HP/Documents')
File "C:\Users\HP\Anaconda3\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 714, in runfile
execfile(filename, namespace)
File "C:\Users\HP\Anaconda3\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 89, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/HP/Documents/machine learning.py", line 74, in <module>
next_unix = last_unix + five_months
File "pandas\tslib.pyx", line 1025, in pandas.tslib._Timestamp.__add__ (pandas\tslib.c:20118)
ValueError: Cannot add integral value to Timestamp without offset.
``` | 2016/09/03 | [
"https://Stackoverflow.com/questions/39303710",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2770803/"
] | Old style:
```
Select i.*,
max(decode(prop.PROPERTY_ID,1,prop.VALUE,NULL)) as PROPERTY_ONE,
max(decode(prop.PROPERTY_ID,2,prop.VALUE,NULL)) as PROPERTY_TWO
From ITEM i
Left Join ITEM_PROPERTY prop on i.ITEM_ID = prop.ITEM_D and prop.PROPERTY_ID in(1,2)
group by there_will_have_to_list_all_the_fields_from_ITEM
```
Or ("light" version, less list in gorup by. But there may be a problem with optimization):
```
Select i.*,prop.PROPERTY_ONE,prop.PROPERTY_TWO
From ITEM i
Left Join (
select ITEM_ID,
max(decode(PROPERTY_ID,1,VALUE,NULL)) as PROPERTY_ONE,
max(decode(PROPERTY_ID,2,VALUE,NULL)) as PROPERTY_TWO
from ITEM_PROPERTY
where PROPERTY_ID in(1,2)
group by ITEM_ID
 ) prop on i.ITEM_ID = prop.ITEM_ID  -- the subquery exposes ITEM_ID, not ITEM_D
```
New style (Oracle 11g+):
```
select * from (
Select i.*, prop.PROPERTY_ID, prop.VALUE
From ITEM i
Left Join ITEM_PROPERTY prop on i.ITEM_ID = prop.ITEM_D and prop.PROPERTY_ID in(1,2)
)
pivot(
max(VALUE) for PROPERTY_ID in(1 as "PROPERTY_ONE",2 as "PROPERTY_TWO")
)
``` | ```
Select i.*, GROUP_CONCAT(prop.VALUE) as PROPERTY_VALUE
From ITEM i
Left Join ITEM_PROPERTY prop on i.ITEM_ID = prop.ITEM_D
``` |