Dataset schema:
qid: int64 (469 to 74.7M)
question: string (36 to 37.8k chars)
date: string (10 chars)
metadata: sequence
response_j: string (5 to 31.5k chars)
response_k: string (10 to 31.6k chars)
64,234,972
I'm working in python using numpy (could be a pandas series too) and am trying to make the following calculation: Lets say I have an array corresponding to points on the x axis: ``` 2, 9, 5, 6, 55, 8 ``` For each element in this array I would like to get the distance to the closest element so the output would look like the following: ``` 3, 1, 1, 1, 46, 1 ``` I am trying to find a solution that can scale to 2D (distance to nearest XY point) and ideally would avoid a for loop. Is that possible?
2020/10/06
[ "https://Stackoverflow.com/questions/64234972", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7754184/" ]
There are many ways of achieving it. Some readable and generalizable ways are: **Approach 1**: ``` dist = np.abs(a[:,None]-a) np.min(dist, where=~np.eye(len(a),dtype=bool), initial=dist.max(), axis=1) #[ 3 1 1 1 46 1] ``` **Approach 2**: ``` dist = np.abs(np.subtract.outer(a,a)) np.min(dist, where=~np.eye(len(a),dtype=bool), initial=dist.max(), axis=1) ``` **For a 2-D case approach 1** (assumes Euclidean distance. Any other is also possible): ``` from scipy.spatial.distance import cdist dist = cdist(a,a) np.min(dist, where=~np.eye(len(a),dtype=bool), initial=dist.max(), axis=1) ``` **For a 2-D case approach 2** with numpy only: ``` dist=np.sqrt(((a[:,None]-a)**2).sum(-1)) np.min(dist, where=~np.eye(len(a),dtype=bool), initial=dist.max(), axis=1) ``` You can achieve a [faster distance calculation by using `np.dot`](https://stackoverflow.com/a/63594244/4975981).
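A quick end-to-end check of Approach 1 on the question's sample array (this assumes NumPy >= 1.17, where `np.min` gained the `where` and `initial` arguments):

```python
import numpy as np

a = np.array([2, 9, 5, 6, 55, 8])

# Pairwise absolute differences via broadcasting: dist[i, j] = |a[i] - a[j]|
dist = np.abs(a[:, None] - a)

# Mask the diagonal so a point is never its own nearest neighbour
mask = ~np.eye(len(a), dtype=bool)
nearest = np.min(dist, where=mask, initial=dist.max(), axis=1)

print(nearest)  # [ 3  1  1  1 46  1]
```

Note the `initial=dist.max()` argument: `np.min` needs a starting value when some entries are masked out by `where`.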
You can do some list comprehension on a pandas series: ``` s = pd.Series([2,9,5,6,55,8]) s.apply(lambda x: min([abs(x - s[y]) for y in s.index if s[y] != x])) Out[1]: 0 3 1 1 2 1 3 1 4 46 5 1 ``` Then you can just add `.to_list()` or `.to_numpy()` to the end to get rid of the series index: ``` s.apply(lambda x: min([abs(x - s[y]) for y in s.index if s[y] != x])).to_numpy() array([ 3, 1, 1, 1, 46, 1], dtype=int64) ```
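For reference, the `apply`/`lambda` above boils down to the following plain-Python loop. One caveat worth knowing: the `s[y] != x` guard compares *values*, not positions, so if the series contains duplicated values they are skipped rather than producing a distance of 0:

```python
s = [2, 9, 5, 6, 55, 8]

# Same logic as the pandas apply: min distance to any *different value*
result = [min(abs(x - y) for y in s if y != x) for x in s]
print(result)  # [3, 1, 1, 1, 46, 1]

# With duplicates, the value-based guard skips the twin at distance 0:
dup = [2, 2, 9]
print([min(abs(x - y) for y in dup if y != x) for x in dup])  # [7, 7, 7]
```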
64,234,972
I'm working in python using numpy (could be a pandas series too) and am trying to make the following calculation: Lets say I have an array corresponding to points on the x axis: ``` 2, 9, 5, 6, 55, 8 ``` For each element in this array I would like to get the distance to the closest element so the output would look like the following: ``` 3, 1, 1, 1, 46, 1 ``` I am trying to find a solution that can scale to 2D (distance to nearest XY point) and ideally would avoid a for loop. Is that possible?
2020/10/06
[ "https://Stackoverflow.com/questions/64234972", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7754184/" ]
There seems to be a theme with O(N^2) solutions here. For 1D, it's quite simple to get O(N log N): ``` x = np.array([2, 9, 5, 6, 55, 8]) i = np.argsort(x) dist = np.diff(x[i]) min_dist = np.r_[dist[0], np.minimum(dist[1:], dist[:-1]), dist[-1]] min_dist = min_dist[np.argsort(i)] ``` This clearly won't scale well to multiple dimensions, so use [`scipy.spatial.KDTree`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.KDTree.html) instead. Assuming your data is N-dimensional and has shape `(M, N)`, you can do ``` k = KDTree(data) dist = k.query(data, k=2)[0][:, -1] ``` Scipy has a Cython implementation of `KDTree`, [`cKDTree`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.cKDTree.html). Sklearn has a [`sklearn.neighbors.KDTree`](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KDTree.html) with a similar interface as well.
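The sort-then-diff idea can be spelled out in plain Python, which makes the argsort and the scatter-back step explicit (a sketch assuming at least two points):

```python
def nearest_neighbor_dists(x):
    """1-D nearest-neighbour distances in O(n log n) via sorting.

    Assumes len(x) >= 2.
    """
    order = sorted(range(len(x)), key=x.__getitem__)   # argsort
    xs = [x[i] for i in order]                         # values in sorted order
    diffs = [b - a for a, b in zip(xs, xs[1:])]        # consecutive gaps
    # Each interior point's nearest neighbour is across one of its two gaps;
    # the two endpoints only have one gap each.
    sorted_dists = ([diffs[0]]
                    + [min(a, b) for a, b in zip(diffs, diffs[1:])]
                    + [diffs[-1]])
    out = [0] * len(x)
    for pos, idx in enumerate(order):                  # undo the argsort
        out[idx] = sorted_dists[pos]
    return out

print(nearest_neighbor_dists([2, 9, 5, 6, 55, 8]))  # [3, 1, 1, 1, 46, 1]
```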
There are many ways of achieving it. Some readable and generalizable ways are: **Approach 1**: ``` dist = np.abs(a[:,None]-a) np.min(dist, where=~np.eye(len(a),dtype=bool), initial=dist.max(), axis=1) #[ 3 1 1 1 46 1] ``` **Approach 2**: ``` dist = np.abs(np.subtract.outer(a,a)) np.min(dist, where=~np.eye(len(a),dtype=bool), initial=dist.max(), axis=1) ``` **For a 2-D case approach 1** (assumes Euclidean distance. Any other is also possible): ``` from scipy.spatial.distance import cdist dist = cdist(a,a) np.min(dist, where=~np.eye(len(a),dtype=bool), initial=dist.max(), axis=1) ``` **For a 2-D case approach 2** with numpy only: ``` dist=np.sqrt(((a[:,None]-a)**2).sum(-1)) np.min(dist, where=~np.eye(len(a),dtype=bool), initial=dist.max(), axis=1) ``` You can achieve a [faster distance calculation by using `np.dot`](https://stackoverflow.com/a/63594244/4975981).
64,234,972
I'm working in python using numpy (could be a pandas series too) and am trying to make the following calculation: Lets say I have an array corresponding to points on the x axis: ``` 2, 9, 5, 6, 55, 8 ``` For each element in this array I would like to get the distance to the closest element so the output would look like the following: ``` 3, 1, 1, 1, 46, 1 ``` I am trying to find a solution that can scale to 2D (distance to nearest XY point) and ideally would avoid a for loop. Is that possible?
2020/10/06
[ "https://Stackoverflow.com/questions/64234972", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7754184/" ]
There seems to be a theme with O(N^2) solutions here. For 1D, it's quite simple to get O(N log N): ``` x = np.array([2, 9, 5, 6, 55, 8]) i = np.argsort(x) dist = np.diff(x[i]) min_dist = np.r_[dist[0], np.minimum(dist[1:], dist[:-1]), dist[-1]] min_dist = min_dist[np.argsort(i)] ``` This clearly won't scale well to multiple dimensions, so use [`scipy.spatial.KDTree`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.KDTree.html) instead. Assuming your data is N-dimensional and has shape `(M, N)`, you can do ``` k = KDTree(data) dist = k.query(data, k=2)[0][:, -1] ``` Scipy has a Cython implementation of `KDTree`, [`cKDTree`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.cKDTree.html). Sklearn has a [`sklearn.neighbors.KDTree`](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KDTree.html) with a similar interface as well.
You can do some list comprehension on a pandas series: ``` s = pd.Series([2,9,5,6,55,8]) s.apply(lambda x: min([abs(x - s[y]) for y in s.index if s[y] != x])) Out[1]: 0 3 1 1 2 1 3 1 4 46 5 1 ``` Then you can just add `.to_list()` or `.to_numpy()` to the end to get rid of the series index: ``` s.apply(lambda x: min([abs(x - s[y]) for y in s.index if s[y] != x])).to_numpy() array([ 3, 1, 1, 1, 46, 1], dtype=int64) ```
53,061,144
Just as the title said, is there an easy way to upgrade python version from 2.7 to 3.6 of superset and keep all old data and information (Dashboard,Charts,Tables)? I use the old version of superset is 0.25.6 and python is 2.7 for now. And I want to upgrade to 0.28 for superset, but the version 0.28 is not support python2.7. I can not just use command to upgrade: ``` pip install superset -- upgrade superset db upgrade ``` I found that if use command `pip install superset` would install at path `/usr/local/lib/python2.7/dist-packages` and use command `pip3 install superset` would install on path `/usr/local/lib/python3.6/dist-packages`. The old version of superset and data is at path python2.7, but the new one will build at path python3.6. How can I move the old version of superset and data to new version?
2018/10/30
[ "https://Stackoverflow.com/questions/53061144", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5771675/" ]
Superset stores all the data for dashboards, charts, tables and datasources in its own metadata database. Just set up a clean copy of Superset that uses Python 3.6 by default and replace its working database with a copy of your old database.
This worked on Ubuntu 16.04: ``` pip install --upgrade setuptools pip sudo add-apt-repository ppa:jonathonf/python-3.6 sudo apt update sudo apt install python3.6 python3.6-dev wget https://bootstrap.pypa.io/get-pip.py sudo python3.6 get-pip.py pip3 install superset ```
44,763,056
In python there's a nice function (str `.format`) that can really easily replace variables (encoded like `{variable}`) in a string with values stored in a dict (with values named by variable name). Like so: ``` vars=dict(animal="shark", verb="ate", noun="fish") string="Sammy the {animal} {verb} a {noun}." print(string.format(**vars)) ``` > > Sammy the shark ate a fish. > > > What is the simplest solution in `R`? Is there a built-in equivalent 2-argument function that takes a string with variables encoded **in the same way** and replaces them with named values from a named `list`? If there is no built-in function in R, is there one in a published package? If there is none in a published package, what would you use to write one? The rules: the string is given to you with variables encoded like "{variable}". The variables must be encoded as a `list`. I will answer with my custom-made version, but will accept an answer that does it better than I did.
2017/06/26
[ "https://Stackoverflow.com/questions/44763056", "https://Stackoverflow.com", "https://Stackoverflow.com/users/946721/" ]
I found another solution: the glue package from the tidyverse: <https://github.com/tidyverse/glue> An example: ``` library(glue) animal <- "shark" verb <- "ate" noun <- "fish" string <- "Sammy the {animal} {verb} a {noun}." glue(string) Sammy the shark ate a fish. ``` If you insist on having a list of variables, you can do: ``` l <- list(animal = "shark", verb = "ate", noun = "fish") do.call(glue, c(string, l)) Sammy the shark ate a fish. ```
Since it appears I cannot find a built-in or even a package with such a function, I tried to roll my own. My function relies on the `stringi` package. Here is what I have come up with: ``` strformat = function(str, vals) { vars = stringi::stri_match_all(str, regex = "\\{.*?\\}", vectorize_all = FALSE)[[1]][,1] x = str for (i in seq_along(names(vals))) { varName = names(vals)[i] varCode = paste0("{", varName, "}") x = stringi::stri_replace_all_fixed(x, varCode, vals[[varName]], vectorize_all = TRUE) } return(x) } ``` Example: ``` > str = "Sammy the {animal} {verb} a {noun}." > vals = list(animal="shark", verb="ate", noun="fish") > strformat(str, vals) [1] "Sammy the shark ate a fish." ```
44,763,056
In python there's a nice function (str `.format`) that can really easily replace variables (encoded like `{variable}`) in a string with values stored in a dict (with values named by variable name). Like so: ``` vars=dict(animal="shark", verb="ate", noun="fish") string="Sammy the {animal} {verb} a {noun}." print(string.format(**vars)) ``` > > Sammy the shark ate a fish. > > > What is the simplest solution in `R`? Is there a built-in equivalent 2-argument function that takes a string with variables encoded **in the same way** and replaces them with named values from a named `list`? If there is no built-in function in R, is there one in a published package? If there is none in a published package, what would you use to write one? The rules: the string is given to you with variables encoded like "{variable}". The variables must be encoded as a `list`. I will answer with my custom-made version, but will accept an answer that does it better than I did.
2017/06/26
[ "https://Stackoverflow.com/questions/44763056", "https://Stackoverflow.com", "https://Stackoverflow.com/users/946721/" ]
Since it appears I cannot find a built-in or even a package with such a function, I tried to roll my own. My function relies on the `stringi` package. Here is what I have come up with: ``` strformat = function(str, vals) { vars = stringi::stri_match_all(str, regex = "\\{.*?\\}", vectorize_all = FALSE)[[1]][,1] x = str for (i in seq_along(names(vals))) { varName = names(vals)[i] varCode = paste0("{", varName, "}") x = stringi::stri_replace_all_fixed(x, varCode, vals[[varName]], vectorize_all = TRUE) } return(x) } ``` Example: ``` > str = "Sammy the {animal} {verb} a {noun}." > vals = list(animal="shark", verb="ate", noun="fish") > strformat(str, vals) [1] "Sammy the shark ate a fish." ```
``` library(glue) list2env(list(animal="shark", verb="ate", noun="fish"),.GlobalEnv) string="Sammy the {animal} {verb} a {noun}." glue(string) Sammy the shark ate a fish. ```
44,763,056
In python there's a nice function (str `.format`) that can really easily replace variables (encoded like `{variable}`) in a string with values stored in a dict (with values named by variable name). Like so: ``` vars=dict(animal="shark", verb="ate", noun="fish") string="Sammy the {animal} {verb} a {noun}." print(string.format(**vars)) ``` > > Sammy the shark ate a fish. > > > What is the simplest solution in `R`? Is there a built-in equivalent 2-argument function that takes a string with variables encoded **in the same way** and replaces them with named values from a named `list`? If there is no built-in function in R, is there one in a published package? If there is none in a published package, what would you use to write one? The rules: the string is given to you with variables encoded like "{variable}". The variables must be encoded as a `list`. I will answer with my custom-made version, but will accept an answer that does it better than I did.
2017/06/26
[ "https://Stackoverflow.com/questions/44763056", "https://Stackoverflow.com", "https://Stackoverflow.com/users/946721/" ]
I found another solution: the glue package from the tidyverse: <https://github.com/tidyverse/glue> An example: ``` library(glue) animal <- "shark" verb <- "ate" noun <- "fish" string <- "Sammy the {animal} {verb} a {noun}." glue(string) Sammy the shark ate a fish. ``` If you insist on having a list of variables, you can do: ``` l <- list(animal = "shark", verb = "ate", noun = "fish") do.call(glue, c(string, l)) Sammy the shark ate a fish. ```
Here's a function that converts the `{` and `}` to `<%=` and `%>` and then uses `brew` from the `brew` package (which you need to install): ``` form = function(s,...){ s = gsub("\\}", "%>", gsub("\\{","<%=",s)) e = as.environment(list(...)) parent.env(e)=.GlobalEnv brew(text=s, envir=e) } ``` Tests: ``` > form("Sammy the {animal} {verb} a {noun}.", animal = "shark", verb="made", noun="car") Sammy the shark made a car. > form("Sammy the {animal} {verb} a {noun}.", animal = "shark", verb="made", noun="truck") Sammy the shark made a truck. ``` It will fail if there's any `{` in the format string that don't mark variable substitutions, or if it has `<%=` or any of the other `brew` syntax markers in it.
44,763,056
In python there's a nice function (str `.format`) that can really easily replace variables (encoded like `{variable}`) in a string with values stored in a dict (with values named by variable name). Like so: ``` vars=dict(animal="shark", verb="ate", noun="fish") string="Sammy the {animal} {verb} a {noun}." print(string.format(**vars)) ``` > > Sammy the shark ate a fish. > > > What is the simplest solution in `R`? Is there a built-in equivalent 2-argument function that takes a string with variables encoded **in the same way** and replaces them with named values from a named `list`? If there is no built-in function in R, is there one in a published package? If there is none in a published package, what would you use to write one? The rules: the string is given to you with variables encoded like "{variable}". The variables must be encoded as a `list`. I will answer with my custom-made version, but will accept an answer that does it better than I did.
2017/06/26
[ "https://Stackoverflow.com/questions/44763056", "https://Stackoverflow.com", "https://Stackoverflow.com/users/946721/" ]
Here's a function that converts the `{` and `}` to `<%=` and `%>` and then uses `brew` from the `brew` package (which you need to install): ``` form = function(s,...){ s = gsub("\\}", "%>", gsub("\\{","<%=",s)) e = as.environment(list(...)) parent.env(e)=.GlobalEnv brew(text=s, envir=e) } ``` Tests: ``` > form("Sammy the {animal} {verb} a {noun}.", animal = "shark", verb="made", noun="car") Sammy the shark made a car. > form("Sammy the {animal} {verb} a {noun}.", animal = "shark", verb="made", noun="truck") Sammy the shark made a truck. ``` It will fail if there's any `{` in the format string that don't mark variable substitutions, or if it has `<%=` or any of the other `brew` syntax markers in it.
``` library(glue) list2env(list(animal="shark", verb="ate", noun="fish"),.GlobalEnv) string="Sammy the {animal} {verb} a {noun}." glue(string) Sammy the shark ate a fish. ```
44,763,056
In python there's a nice function (str `.format`) that can really easily replace variables (encoded like `{variable}`) in a string with values stored in a dict (with values named by variable name). Like so: ``` vars=dict(animal="shark", verb="ate", noun="fish") string="Sammy the {animal} {verb} a {noun}." print(string.format(**vars)) ``` > > Sammy the shark ate a fish. > > > What is the simplest solution in `R`? Is there a built-in equivalent 2-argument function that takes a string with variables encoded **in the same way** and replaces them with named values from a named `list`? If there is no built-in function in R, is there one in a published package? If there is none in a published package, what would you use to write one? The rules: the string is given to you with variables encoded like "{variable}". The variables must be encoded as a `list`. I will answer with my custom-made version, but will accept an answer that does it better than I did.
2017/06/26
[ "https://Stackoverflow.com/questions/44763056", "https://Stackoverflow.com", "https://Stackoverflow.com/users/946721/" ]
I found another solution: the glue package from the tidyverse: <https://github.com/tidyverse/glue> An example: ``` library(glue) animal <- "shark" verb <- "ate" noun <- "fish" string <- "Sammy the {animal} {verb} a {noun}." glue(string) Sammy the shark ate a fish. ``` If you insist on having a list of variables, you can do: ``` l <- list(animal = "shark", verb = "ate", noun = "fish") do.call(glue, c(string, l)) Sammy the shark ate a fish. ```
The [`stringr`](https://cran.r-project.org/web/packages/stringr/vignettes/stringr.html) package *almost* has an exact replacement in the function `str_interp`. It requires just a bit of adjustment: ``` fmt = function(str, vals) { # str_interp requires variables encoded like ${var}, so we substitute # the {var} syntax here. str = stringr::str_replace_all(str, "\\{", "${") stringr::str_interp(str, vals) } ```
44,763,056
In python there's a nice function (str `.format`) that can really easily replace variables (encoded like `{variable}`) in a string with values stored in a dict (with values named by variable name). Like so: ``` vars=dict(animal="shark", verb="ate", noun="fish") string="Sammy the {animal} {verb} a {noun}." print(string.format(**vars)) ``` > > Sammy the shark ate a fish. > > > What is the simplest solution in `R`? Is there a built-in equivalent 2-argument function that takes a string with variables encoded **in the same way** and replaces them with named values from a named `list`? If there is no built-in function in R, is there one in a published package? If there is none in a published package, what would you use to write one? The rules: the string is given to you with variables encoded like "{variable}". The variables must be encoded as a `list`. I will answer with my custom-made version, but will accept an answer that does it better than I did.
2017/06/26
[ "https://Stackoverflow.com/questions/44763056", "https://Stackoverflow.com", "https://Stackoverflow.com/users/946721/" ]
I found another solution: the glue package from the tidyverse: <https://github.com/tidyverse/glue> An example: ``` library(glue) animal <- "shark" verb <- "ate" noun <- "fish" string <- "Sammy the {animal} {verb} a {noun}." glue(string) Sammy the shark ate a fish. ``` If you insist on having a list of variables, you can do: ``` l <- list(animal = "shark", verb = "ate", noun = "fish") do.call(glue, c(string, l)) Sammy the shark ate a fish. ```
``` library(glue) list2env(list(animal="shark", verb="ate", noun="fish"),.GlobalEnv) string="Sammy the {animal} {verb} a {noun}." glue(string) Sammy the shark ate a fish. ```
44,763,056
In python there's a nice function (str `.format`) that can really easily replace variables (encoded like `{variable}`) in a string with values stored in a dict (with values named by variable name). Like so: ``` vars=dict(animal="shark", verb="ate", noun="fish") string="Sammy the {animal} {verb} a {noun}." print(string.format(**vars)) ``` > > Sammy the shark ate a fish. > > > What is the simplest solution in `R`? Is there a built-in equivalent 2-argument function that takes a string with variables encoded **in the same way** and replaces them with named values from a named `list`? If there is no built-in function in R, is there one in a published package? If there is none in a published package, what would you use to write one? The rules: the string is given to you with variables encoded like "{variable}". The variables must be encoded as a `list`. I will answer with my custom-made version, but will accept an answer that does it better than I did.
2017/06/26
[ "https://Stackoverflow.com/questions/44763056", "https://Stackoverflow.com", "https://Stackoverflow.com/users/946721/" ]
The [`stringr`](https://cran.r-project.org/web/packages/stringr/vignettes/stringr.html) package *almost* has an exact replacement in the function `str_interp`. It requires just a bit of adjustment: ``` fmt = function(str, vals) { # str_interp requires variables encoded like ${var}, so we substitute # the {var} syntax here. str = stringr::str_replace_all(str, "\\{", "${") stringr::str_interp(str, vals) } ```
``` library(glue) list2env(list(animal="shark", verb="ate", noun="fish"),.GlobalEnv) string="Sammy the {animal} {verb} a {noun}." glue(string) Sammy the shark ate a fish. ```
73,595,947
When I execute this code it works fine to log in, but when I log out, come back to the login window, and then close the login window, it closes but shows this exception in the terminal of Visual Studio Code. ```none Exception in Tkinter callback Traceback (most recent call last): File "c:\Users\IMRAN\Desktop\pyapps\blood_donors_admin_dashboard\login.py", line 88, in login messagebox.showerror("Error", "Invalid Email/Password!", parent=self.root) File "C:\Users\IMRAN\AppData\Local\Programs\Python\Python310\lib\tkinter\messagebox.py", line 98, in showerror return _show(title, message, ERROR, OK, **options) File "C:\Users\IMRAN\AppData\Local\Programs\Python\Python310\lib\tkinter\messagebox.py", line 76, in _show res = Message(**options).show() File "C:\Users\IMRAN\AppData\Local\Programs\Python\Python310\lib\tkinter\commondialog.py", line 45, in show s = master.tk.call(self.command, *master._options(self.options)) _tkinter.TclError: can't invoke "tk_messageBox" command: application has been destroyed During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\IMRAN\AppData\Local\Programs\Python\Python310\lib\tkinter\__init__.py", line 1921, in __call__ return self.func(*args) File "c:\Users\IMRAN\Desktop\pyapps\blood_donors_admin_dashboard\login.py", line 91, in login messagebox.showerror("Error", f"Error due to : {str(ex)}", parent=self.root) File "C:\Users\IMRAN\AppData\Local\Programs\Python\Python310\lib\tkinter\messagebox.py", line 98, in showerror return _show(title, message, ERROR, OK, **options) File "C:\Users\IMRAN\AppData\Local\Programs\Python\Python310\lib\tkinter\messagebox.py", line 76, in _show res = Message(**options).show() File "C:\Users\IMRAN\AppData\Local\Programs\Python\Python310\lib\tkinter\commondialog.py", line 45, in show s = master.tk.call(self.command, *master._options(self.options)) _tkinter.TclError: can't invoke "tk_messageBox" command: application has been destroyed ``` **NOTE: Line 88 is the last else part before the except code block.** ``` def login(self): email = self.user.get() password = self.passwd.get() if email == "" and password == "": messagebox.showerror("Error", "Please Enter All The Fields!", parent=self.root) elif email == "": messagebox.showerror("Error", "Please Enter Email Address!", parent=self.root) elif password == "": messagebox.showerror("Error", "Please Enter Password!", parent=self.root) else: ref = db.reference('users') data = ref.get() for key, val in data.items(): self.loguser.update({key:val}) try: for user in self.loguser.values(): if user['email'] == email and user['password'] == password and user['admin'] == True: self.root.destroy() os.system('python main.py') else: messagebox.showerror("Error", "Invalid Email/Password!", parent=self.root) except Exception as ex: messagebox.showerror("Error", f"Error due to : {str(ex)}", parent=self.root) ```
2022/09/03
[ "https://Stackoverflow.com/questions/73595947", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14163584/" ]
This error happens because you used the `destroy()` function. Just don't destroy the window. If you want it to be hidden, use `withdraw()`, which hides your root window, and then `deiconify()` to show it again when you want. This way the window still exists, but the user cannot see it. A simple modification of your code (with a `break` added so the loop stops once a login succeeds, instead of showing the error box for every remaining user): ``` def login(self): email = self.user.get() password = self.passwd.get() if email == "" and password == "": messagebox.showerror("Error", "Please Enter All The Fields!", parent=self.root) elif email == "": messagebox.showerror("Error", "Please Enter Email Address!", parent=self.root) elif password == "": messagebox.showerror("Error", "Please Enter Password!", parent=self.root) else: ref = db.reference('users') data = ref.get() for key, val in data.items(): self.loguser.update({key:val}) try: for user in self.loguser.values(): if user['email'] == email and user['password'] == password and user['admin'] == True: self.root.withdraw() os.system('python main.py') break else: messagebox.showerror("Error", "Invalid Email/Password!", parent=self.root) except Exception as ex: messagebox.showerror("Error", f"Error due to : {str(ex)}", parent=self.root) ``` Hope this helps
Finally, after trying again and again, I have solved the problem. @Thingamabobs pointed out the for-loop problem, so I controlled the loop after destroying the root window by adding a break after login. Here are the code changes: ``` for user in self.loguser.values(): if user['email'] == email and user['password'] == password and user['admin'] == True: self.root.destroy() os.system('python main.py') break ```
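Stripped of Tkinter, the underlying pattern here is "search first, act once": return as soon as a match is found, and show an error only after the whole loop has failed. A framework-free sketch (the function and dict names are illustrative, not from the original code):

```python
def find_admin(users, email, password):
    """Return the first matching admin user, or None if no user matches."""
    for user in users.values():
        if user['email'] == email and user['password'] == password and user.get('admin'):
            return user
    return None  # the caller shows exactly one error message in this case

users = {
    'k1': {'email': 'a@b.c', 'password': 'pw', 'admin': True},
    'k2': {'email': 'x@y.z', 'password': 'pw', 'admin': False},
}
print(find_admin(users, 'a@b.c', 'pw') is not None)  # True
print(find_admin(users, 'x@y.z', 'pw'))              # None (not an admin)
```

With this shape there is no way to show the "Invalid Email/Password!" box once per non-matching user, which is what the original for/else arrangement did.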
38,778,954
I am trying to do a conditional update to a nested value. basically one variable in a nested array of 2 variables per array item has a boolean component I want to update based on the string value of the other variable. I also want to do all of that based on a targeted find query. I came up with this below, but it doesn't work. ``` #!/usr/bin/env python import ssl from pymongo import MongoClient client = MongoClient("somehost", ssl=True, ssl_cert_reqs=ssl.CERT_NONE, replicaSet='rs0') db = client.maestro mycollection = db.users print 'connected, now performing update' mycollection.find_and_modify(query={'emailAddress':'somedude@someplace.wat'}, update={ "nested.name" : "issomename" }, { "$set": {'nested.$*.value': True}}, upsert=True, full_response=True) ``` This code results in: ``` SyntaxError: non-keyword arg after keyword arg ``` This makes me think that the find\_and\_modify() method can't handle the conditional update bit. Is there some way to achieve this, or have I gone down a wrong path? What would you all suggest as a better approach?
2016/08/04
[ "https://Stackoverflow.com/questions/38778954", "https://Stackoverflow.com", "https://Stackoverflow.com/users/798549/" ]
``` #!/usr/bin/env python import ssl from pymongo import MongoClient client = MongoClient("somehost.wat", ssl=True, ssl_cert_reqs=ssl.CERT_NONE, replicaSet='rs0') db = client.dbname mycollection = db.docs user_email = 'user@somehost.wat' mycollection.update({ "emailAddress": user_email,"nestedvalue": { "$elemMatch" : {"name": "somename"} } }, { "$set": {"nestedvalue.$.value": True}}) ``` This did the trick.
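For intuition, the `$elemMatch` filter combined with the positional `$` operator updates only the *first* array element that matches; in plain Python the effect is roughly the following (a simplified stand-in, not how the server actually executes it):

```python
def set_first_match(doc, array_field, match, set_field, value):
    """Approximate {'$set': {array_field + '.$.' + set_field: value}} guarded
    by an $elemMatch filter: mutate only the FIRST matching array element."""
    for item in doc[array_field]:
        if all(item.get(k) == v for k, v in match.items()):
            item[set_field] = value
            return True   # MongoDB's positional $ also stops at the first match
    return False

doc = {
    "emailAddress": "user@somehost.wat",
    "nestedvalue": [{"name": "other", "value": False},
                    {"name": "somename", "value": False}],
}
set_first_match(doc, "nestedvalue", {"name": "somename"}, "value", True)
print(doc["nestedvalue"])
```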
Instead of `find_and_modify`, use `update_one` if you want to update just one record, or `update_many` in the case of many. Note that the filter must match the array element so the positional `$` operator knows which element to update: ``` mycollection.update_one({'emailAddress': 'somedude@someplace.wat', 'nested.name': 'issomename'}, {"$set": {'nested.$.value': True}}) ``` For further detail, please go through this link: <https://docs.mongodb.com/getting-started/python/update/>
63,044,893
I am trying to use factory boy and faker to generate some fake data for a website I am building. Here is my models.py: ``` # External Imports from django.db import models import uuid # Internal Imports from applications.models.application import Application from users.models.user import User from .session import Session # Fake data import factory import factory.django import factory.fuzzy from datetime import datetime from faker import Faker from faker.providers import BaseProvider import random class ButtonClick(models.Model): """**Database model that tracks and saves button clicks for an application** """ # identifier id = models.UUIDField(default=uuid.uuid4, primary_key=True, editable=False) # info button_name = models.CharField(max_length=128, null=True, blank=True) application = models.ForeignKey( Application, related_name='button_clicks', null=True, blank=True, on_delete=models.CASCADE) user = models.ForeignKey( User, related_name='button_clicks', null=True, blank=True, on_delete=models.CASCADE) session = models.ForeignKey( Session, related_name='button_clicks', null=True, blank=True, on_delete=models.CASCADE) timestamp = models.DateTimeField(auto_now=True) class Meta: db_table = 'button_clicks' ordering = ('-timestamp', ) def __str__(self): return f'{self.application} - {self.button_name}' fake = Faker() faker = Factory.create() class ApplicationFactory(factory.DjangoModelFactory): class Meta: model = Application application = factory.LazyAttribute(lambda _: faker.word()) class FakeButtonClick(factory.django.DjangoModelFactory): class Meta: model = ButtonClick button_name = factory.Faker('first_name') application = factory.SubFactory(ApplicationFactory) user = factory.Faker('name') session = factory.Faker('random_int') timestamp = factory.Faker('date') ``` When I try to run the following code in the terminal, I get an error: ``` >>> from analytics.models.button_click import FakeButtonClick >>> for _ in range(200): FakeButtonClick.create() ... Traceback (most recent call last): File "<console>", line 1, in <module> File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/factory/base.py", line 564, in create return cls._generate(enums.CREATE_STRATEGY, kwargs) File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/factory/django.py", line 141, in _generate return super(DjangoModelFactory, cls)._generate(strategy, params) File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/factory/base.py", line 501, in _generate return step.build() File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/factory/builder.py", line 279, in build kwargs=kwargs, File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/factory/base.py", line 315, in instantiate return self.factory._create(model, *args, **kwargs) File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/factory/django.py", line 185, in _create return manager.create(*args, **kwargs) File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/django/db/models/manager.py", line 82, in manager_method return getattr(self.get_queryset(), name)(*args, **kwargs) File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/django/db/models/query.py", line 431, in create obj = self.model(**kwargs) File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/django/db/models/base.py", line 482, in __init__ _setattr(self, field.name, rel_obj) File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/django/db/models/fields/related_descriptors.py", line 219, in __set__ self.field.remote_field.model._meta.object_name, ValueError: Cannot assign "9714": "ButtonClick.application" must be a "Application" instance. ``` I have created some very simple data using factory boy and faker in the past, but the traceback seems to be implying that I need to create an application instance within my FakeButtonClick class? I checked the documentation and application doesn't appear to be an available instance for factory boy/faker. Do I need to create the instance myself? Maybe a subfactory?
2020/07/23
[ "https://Stackoverflow.com/questions/63044893", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12778676/" ]
Your `ButtonClick` model has 3 fields defined as a `ForeignKey`: `application`, `user` and `session`. When you want to create a `ButtonClick` instance, Django requires that you provide a valid value to each field defined as a ForeignKey — here, this means providing either model instances or `None` (since those ForeignKeys are nullable). With FactoryBoy, this means that you'll have to: 1. Define a `Factory` class for each of these models. 2. Use a `factory.SubFactory` pointing to those factories for each of the fields. An example would be: ```py class UserFactory(factory.django.DjangoModelFactory): class Meta: model = User username = factory.Faker('user_name') class SessionFactory(factory.django.DjangoModelFactory): class Meta: model = Session uuid = factory.Faker('uuid4') user = factory.SubFactory(UserFactory) class ApplicationFactory(factory.django.DjangoModelFactory): class Meta: model = Application name = factory.Faker('name') class ButtonClickFactory(factory.django.DjangoModelFactory): class Meta: model = ButtonClick user = factory.SubFactory(UserFactory) # Ensure that click.user == click.session.user session = factory.SubFactory(SessionFactory, user=factory.SelfAttribute('..user')) application = factory.SubFactory(ApplicationFactory) ``` You can take a look [at the docs](https://factoryboy.readthedocs.io/en/latest/recipes.html#dependent-objects-foreignkey). By the way, with FactoryBoy's [faker integration](https://factoryboy.readthedocs.io/en/latest/reference.html#faker), you don't need to import it directly: `factory.Faker('uuid4')` is equivalent to `faker.Faker().uuid4()`.
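The core requirement here — a `ForeignKey` field needs a model instance, not a scalar — can be illustrated without Django or factory_boy at all. This is a stdlib-only sketch of what `SubFactory` does conceptually; the class and helper names are made up for illustration:

```python
import dataclasses
import itertools
import random

@dataclasses.dataclass
class Application:
    name: str

@dataclasses.dataclass
class ButtonClick:
    button_name: str
    application: Application  # needs an Application instance, not an int

_seq = itertools.count()

def application_factory() -> Application:
    # Equivalent of ApplicationFactory: builds the related object
    return Application(name=f"app-{next(_seq)}")

def button_click_factory() -> ButtonClick:
    # Equivalent of SubFactory: the related instance is created on demand,
    # so the "foreign key" field always receives a proper instance
    return ButtonClick(
        button_name=random.choice(["save", "cancel", "submit"]),
        application=application_factory(),
    )

click = button_click_factory()
assert isinstance(click.application, Application)  # not a bare integer
```

This is why the original `session = factory.Faker('random_int')` failed: the generated integer reached the `ForeignKey` field, where Django expects an instance.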
Here's my database-populate file; it may help you (it's a lot simpler than your file, I believe): ```py # Don't change the format. Order matters! import os os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings') import django django.setup() import random from faker import Faker from todo.models import Todo fakegen = Faker() def populate(N = 10): for entry in range(N): fake_tmp = fakegen.catch_phrase() levels = ['important', 'normal', 'unimportant'] fake_title = fake_tmp if len(fake_tmp) <= 40 else (fake_tmp[:37] + '...') fake_desc = fakegen.sentence(nb_words=70) fake_level = levels[random.randint(0, 2)] todo_item = Todo.objects.get_or_create(title=fake_title, desc=fake_desc, level=fake_level) if __name__ == '__main__': print('Populating data...') populate(20) print('Populating complete') ```
63,044,893
I am trying to use factory boy and faker to generate some fake data for a website I am building. Here is my models.py: ``` # External Imports from django.db import models import uuid # Internal Imports from applications.models.application import Application from users.models.user import User from .session import Session # Fake data import factory import factory.django import factory.fuzzy from datetime import datetime from faker import Faker from faker.providers import BaseProvider import random class ButtonClick(models.Model): """**Database model that tracks and saves button clicks for an application** """ # identifier id = models.UUIDField(default=uuid.uuid4, primary_key=True, editable=False) # info button_name = models.CharField(max_length=128, null=True, blank=True) application = models.ForeignKey( Application, related_name='button_clicks', null=True, blank=True, on_delete=models.CASCADE) user = models.ForeignKey( User, related_name='button_clicks', null=True, blank=True, on_delete=models.CASCADE) session = models.ForeignKey( Session, related_name='button_clicks', null=True, blank=True, on_delete=models.CASCADE) timestamp = models.DateTimeField(auto_now=True) class Meta: db_table = 'button_clicks' ordering = ('-timestamp', ) def __str__(self): return f'{self.application} - {self.button_name}' fake = Faker() faker = Factory.create() class ApplicationFactory(factory.DjangoModelFactory): class Meta: model = Application application = factory.LazyAttribute(lambda _: faker.word()) class FakeButtonClick(factory.django.DjangoModelFactory): class Meta: model = ButtonClick button_name = factory.Faker('first_name') application = factory.SubFactory(ApplicationFactory) user = factory.Faker('name') session = factory.Faker('random_int') timestamp = factory.Faker('date') ``` When I try to run the following code in the terminal, I get an error: ``` >>> from analytics.models.button_click import FakeButtonClick >>> for _ in range(200): FakeButtonClick.create() ... 
Traceback (most recent call last): File "<console>", line 1, in <module> File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/factory/base.py", line 564, in create return cls._generate(enums.CREATE_STRATEGY, kwargs) File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/factory/django.py", line 141, in _generate return super(DjangoModelFactory, cls)._generate(strategy, params) File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/factory/base.py", line 501, in _generate return step.build() File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/factory/builder.py", line 279, in build kwargs=kwargs, File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/factory/base.py", line 315, in instantiate return self.factory._create(model, *args, **kwargs) File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/factory/django.py", line 185, in _create return manager.create(*args, **kwargs) File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/django/db/models/manager.py", line 82, in manager_method return getattr(self.get_queryset(), name)(*args, **kwargs) File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/django/db/models/query.py", line 431, in create obj = self.model(**kwargs) File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/django/db/models/base.py", line 482, in __init__ _setattr(self, field.name, rel_obj) File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/django/db/models/fields/related_descriptors.py", line 219, in __set__ self.field.remote_field.model._meta.object_name, ValueError: Cannot assign "9714": "ButtonClick.application" must be a "Application" instance. ``` I have created some very simple data using factory boy and faker in the past but the traceback seems to be implying that I need to create an application instance within my FakeButtonClick class? 
I checked the documentation and application doesn't appear to be an available instance for factory boy/faker. Do I need to create the instance myself? Maybe a subfactory?
2020/07/23
[ "https://Stackoverflow.com/questions/63044893", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12778676/" ]
Here's my database-populate file; it may help you (it's a lot simpler than your file, I believe): ```py # Don't change the format. Order matters! import os os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings') import django django.setup() import random from faker import Faker from todo.models import Todo fakegen = Faker() def populate(N = 10): for entry in range(N): fake_tmp = fakegen.catch_phrase() levels = ['important', 'normal', 'unimportant'] fake_title = fake_tmp if len(fake_tmp) <= 40 else (fake_tmp[:37] + '...') fake_desc = fakegen.sentence(nb_words=70) fake_level = levels[random.randint(0, 2)] todo_item = Todo.objects.get_or_create(title=fake_title, desc=fake_desc, level=fake_level) if __name__ == '__main__': print('Populating data...') populate(20) print('Populating complete') ```
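The title-truncation trick in the script (`fake_tmp[:37] + '...'` when the phrase exceeds 40 characters) can be factored into a small helper. A stdlib sketch; the helper name is my own:

```python
def truncate(text: str, limit: int = 40) -> str:
    """Return text unchanged if it fits, else cut it and append an ellipsis.

    Reserves 3 characters for the '...' so the result never exceeds `limit`.
    """
    return text if len(text) <= limit else text[:limit - 3] + "..."

assert truncate("short title") == "short title"
assert truncate("x" * 100) == "x" * 37 + "..."
assert len(truncate("x" * 100)) == 40  # never longer than the field allows
```

Keeping the limit in one place makes it easy to match it to the model's `max_length`.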
**for future readers!** To generate fake data for `django` you can use [`django-seed`](https://github.com/Brobin/django-seed). It's an easy process: * `pip install django-seed` (install **django-seed**) * add `django_seed` to your apps in the `settings.py` file. ``` INSTALLED_APPS = ( ... 'django_seed', ) ``` * `python manage.py seed <app-name>`; for example, to seed the api app of django: `python manage.py seed api --number=15` If you need to, you can also specify what value a particular field should have. For example, if you want to seed 15 of MyModel, but you need my\_field to be the same on all of them, you can do it like this: ``` python manage.py seed api --number=15 --seeder "MyModel.my_field" "1.1.1.1" ```
63,044,893
I am trying to use factory boy and faker to generate some fake data for a website I am building. Here is my models.py: ``` # External Imports from django.db import models import uuid # Internal Imports from applications.models.application import Application from users.models.user import User from .session import Session # Fake data import factory import factory.django import factory.fuzzy from datetime import datetime from faker import Faker from faker.providers import BaseProvider import random class ButtonClick(models.Model): """**Database model that tracks and saves button clicks for an application** """ # identifier id = models.UUIDField(default=uuid.uuid4, primary_key=True, editable=False) # info button_name = models.CharField(max_length=128, null=True, blank=True) application = models.ForeignKey( Application, related_name='button_clicks', null=True, blank=True, on_delete=models.CASCADE) user = models.ForeignKey( User, related_name='button_clicks', null=True, blank=True, on_delete=models.CASCADE) session = models.ForeignKey( Session, related_name='button_clicks', null=True, blank=True, on_delete=models.CASCADE) timestamp = models.DateTimeField(auto_now=True) class Meta: db_table = 'button_clicks' ordering = ('-timestamp', ) def __str__(self): return f'{self.application} - {self.button_name}' fake = Faker() faker = Factory.create() class ApplicationFactory(factory.DjangoModelFactory): class Meta: model = Application application = factory.LazyAttribute(lambda _: faker.word()) class FakeButtonClick(factory.django.DjangoModelFactory): class Meta: model = ButtonClick button_name = factory.Faker('first_name') application = factory.SubFactory(ApplicationFactory) user = factory.Faker('name') session = factory.Faker('random_int') timestamp = factory.Faker('date') ``` When I try to run the following code in the terminal, I get an error: ``` >>> from analytics.models.button_click import FakeButtonClick >>> for _ in range(200): FakeButtonClick.create() ... 
Traceback (most recent call last): File "<console>", line 1, in <module> File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/factory/base.py", line 564, in create return cls._generate(enums.CREATE_STRATEGY, kwargs) File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/factory/django.py", line 141, in _generate return super(DjangoModelFactory, cls)._generate(strategy, params) File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/factory/base.py", line 501, in _generate return step.build() File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/factory/builder.py", line 279, in build kwargs=kwargs, File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/factory/base.py", line 315, in instantiate return self.factory._create(model, *args, **kwargs) File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/factory/django.py", line 185, in _create return manager.create(*args, **kwargs) File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/django/db/models/manager.py", line 82, in manager_method return getattr(self.get_queryset(), name)(*args, **kwargs) File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/django/db/models/query.py", line 431, in create obj = self.model(**kwargs) File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/django/db/models/base.py", line 482, in __init__ _setattr(self, field.name, rel_obj) File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/django/db/models/fields/related_descriptors.py", line 219, in __set__ self.field.remote_field.model._meta.object_name, ValueError: Cannot assign "9714": "ButtonClick.application" must be a "Application" instance. ``` I have created some very simple data using factory boy and faker in the past but the traceback seems to be implying that I need to create an application instance within my FakeButtonClick class? 
I checked the documentation and application doesn't appear to be an available instance for factory boy/faker. Do I need to create the instance myself? Maybe a subfactory?
2020/07/23
[ "https://Stackoverflow.com/questions/63044893", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12778676/" ]
Your `ButtonClick` model has 3 fields defined as a `ForeignKey`: `application`, `user` and `session`. When you want to create a `ButtonClick` instance, Django requires that you provide a valid value to each field defined as a ForeignKey — here, this means providing either model instances or `None` (since those ForeignKeys are nullable). With FactoryBoy, this means that you'll have to: 1. Define a `Factory` class for each of these models. 2. Use a `factory.SubFactory` pointing to those factories for each of the fields. An example would be: ```py class UserFactory(factory.django.DjangoModelFactory): class Meta: model = User username = factory.Faker('user_name') class SessionFactory(factory.django.DjangoModelFactory): class Meta: model = Session uuid = factory.Faker('uuid4') user = factory.SubFactory(UserFactory) class ApplicationFactory(factory.django.DjangoModelFactory): class Meta: model = Application name = factory.Faker('name') class ButtonClickFactory(factory.django.DjangoModelFactory): class Meta: model = ButtonClick user = factory.SubFactory(UserFactory) # Ensure that click.user == click.session.user session = factory.SubFactory(SessionFactory, user=factory.SelfAttribute('..user')) application = factory.SubFactory(ApplicationFactory) ``` You can take a look [at the docs](https://factoryboy.readthedocs.io/en/latest/recipes.html#dependent-objects-foreignkey). By the way, with FactoryBoy's [faker integration](https://factoryboy.readthedocs.io/en/latest/reference.html#faker), you don't need to import it directly: `factory.Faker('uuid4')` is equivalent to `faker.Faker().uuid4()`.
**for future readers!** To generate fake data for `django` you can use [`django-seed`](https://github.com/Brobin/django-seed). It's an easy process: * `pip install django-seed` (install **django-seed**) * add `django_seed` to your apps in the `settings.py` file. ``` INSTALLED_APPS = ( ... 'django_seed', ) ``` * `python manage.py seed <app-name>`; for example, to seed the api app of django: `python manage.py seed api --number=15` If you need to, you can also specify what value a particular field should have. For example, if you want to seed 15 of MyModel, but you need my\_field to be the same on all of them, you can do it like this: ``` python manage.py seed api --number=15 --seeder "MyModel.my_field" "1.1.1.1" ```
63,044,893
I am trying to use factory boy and faker to generate some fake data for a website I am building. Here is my models.py: ``` # External Imports from django.db import models import uuid # Internal Imports from applications.models.application import Application from users.models.user import User from .session import Session # Fake data import factory import factory.django import factory.fuzzy from datetime import datetime from faker import Faker from faker.providers import BaseProvider import random class ButtonClick(models.Model): """**Database model that tracks and saves button clicks for an application** """ # identifier id = models.UUIDField(default=uuid.uuid4, primary_key=True, editable=False) # info button_name = models.CharField(max_length=128, null=True, blank=True) application = models.ForeignKey( Application, related_name='button_clicks', null=True, blank=True, on_delete=models.CASCADE) user = models.ForeignKey( User, related_name='button_clicks', null=True, blank=True, on_delete=models.CASCADE) session = models.ForeignKey( Session, related_name='button_clicks', null=True, blank=True, on_delete=models.CASCADE) timestamp = models.DateTimeField(auto_now=True) class Meta: db_table = 'button_clicks' ordering = ('-timestamp', ) def __str__(self): return f'{self.application} - {self.button_name}' fake = Faker() faker = Factory.create() class ApplicationFactory(factory.DjangoModelFactory): class Meta: model = Application application = factory.LazyAttribute(lambda _: faker.word()) class FakeButtonClick(factory.django.DjangoModelFactory): class Meta: model = ButtonClick button_name = factory.Faker('first_name') application = factory.SubFactory(ApplicationFactory) user = factory.Faker('name') session = factory.Faker('random_int') timestamp = factory.Faker('date') ``` When I try to run the following code in the terminal, I get an error: ``` >>> from analytics.models.button_click import FakeButtonClick >>> for _ in range(200): FakeButtonClick.create() ... 
Traceback (most recent call last): File "<console>", line 1, in <module> File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/factory/base.py", line 564, in create return cls._generate(enums.CREATE_STRATEGY, kwargs) File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/factory/django.py", line 141, in _generate return super(DjangoModelFactory, cls)._generate(strategy, params) File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/factory/base.py", line 501, in _generate return step.build() File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/factory/builder.py", line 279, in build kwargs=kwargs, File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/factory/base.py", line 315, in instantiate return self.factory._create(model, *args, **kwargs) File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/factory/django.py", line 185, in _create return manager.create(*args, **kwargs) File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/django/db/models/manager.py", line 82, in manager_method return getattr(self.get_queryset(), name)(*args, **kwargs) File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/django/db/models/query.py", line 431, in create obj = self.model(**kwargs) File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/django/db/models/base.py", line 482, in __init__ _setattr(self, field.name, rel_obj) File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/django/db/models/fields/related_descriptors.py", line 219, in __set__ self.field.remote_field.model._meta.object_name, ValueError: Cannot assign "9714": "ButtonClick.application" must be a "Application" instance. ``` I have created some very simple data using factory boy and faker in the past but the traceback seems to be implying that I need to create an application instance within my FakeButtonClick class? 
I checked the documentation and application doesn't appear to be an available instance for factory boy/faker. Do I need to create the instance myself? Maybe a subfactory?
2020/07/23
[ "https://Stackoverflow.com/questions/63044893", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12778676/" ]
Your `ButtonClick` model has 3 fields defined as a `ForeignKey`: `application`, `user` and `session`. When you want to create a `ButtonClick` instance, Django requires that you provide a valid value to each field defined as a ForeignKey — here, this means providing either model instances or `None` (since those ForeignKeys are nullable). With FactoryBoy, this means that you'll have to: 1. Define a `Factory` class for each of these models. 2. Use a `factory.SubFactory` pointing to those factories for each of the fields. An example would be: ```py class UserFactory(factory.django.DjangoModelFactory): class Meta: model = User username = factory.Faker('user_name') class SessionFactory(factory.django.DjangoModelFactory): class Meta: model = Session uuid = factory.Faker('uuid4') user = factory.SubFactory(UserFactory) class ApplicationFactory(factory.django.DjangoModelFactory): class Meta: model = Application name = factory.Faker('name') class ButtonClickFactory(factory.django.DjangoModelFactory): class Meta: model = ButtonClick user = factory.SubFactory(UserFactory) # Ensure that click.user == click.session.user session = factory.SubFactory(SessionFactory, user=factory.SelfAttribute('..user')) application = factory.SubFactory(ApplicationFactory) ``` You can take a look [at the docs](https://factoryboy.readthedocs.io/en/latest/recipes.html#dependent-objects-foreignkey). By the way, with FactoryBoy's [faker integration](https://factoryboy.readthedocs.io/en/latest/reference.html#faker), you don't need to import it directly: `factory.Faker('uuid4')` is equivalent to `faker.Faker().uuid4()`.
This is how I generated fake data for my `Django` `sqlite` database: go to the [Mockaroo website](https://www.mockaroo.com/), fill out the details, and download the file in any format (`sql`, `json` or `csv`). The **good thing** about this website is that you can apply a `regular expression` to your columns, control **null values**, and pick any **format** for numbers, dates, etc., then either *download* the file or dump it into your `database`.
63,044,893
I am trying to use factory boy and faker to generate some fake data for a website I am building. Here is my models.py: ``` # External Imports from django.db import models import uuid # Internal Imports from applications.models.application import Application from users.models.user import User from .session import Session # Fake data import factory import factory.django import factory.fuzzy from datetime import datetime from faker import Faker from faker.providers import BaseProvider import random class ButtonClick(models.Model): """**Database model that tracks and saves button clicks for an application** """ # identifier id = models.UUIDField(default=uuid.uuid4, primary_key=True, editable=False) # info button_name = models.CharField(max_length=128, null=True, blank=True) application = models.ForeignKey( Application, related_name='button_clicks', null=True, blank=True, on_delete=models.CASCADE) user = models.ForeignKey( User, related_name='button_clicks', null=True, blank=True, on_delete=models.CASCADE) session = models.ForeignKey( Session, related_name='button_clicks', null=True, blank=True, on_delete=models.CASCADE) timestamp = models.DateTimeField(auto_now=True) class Meta: db_table = 'button_clicks' ordering = ('-timestamp', ) def __str__(self): return f'{self.application} - {self.button_name}' fake = Faker() faker = Factory.create() class ApplicationFactory(factory.DjangoModelFactory): class Meta: model = Application application = factory.LazyAttribute(lambda _: faker.word()) class FakeButtonClick(factory.django.DjangoModelFactory): class Meta: model = ButtonClick button_name = factory.Faker('first_name') application = factory.SubFactory(ApplicationFactory) user = factory.Faker('name') session = factory.Faker('random_int') timestamp = factory.Faker('date') ``` When I try to run the following code in the terminal, I get an error: ``` >>> from analytics.models.button_click import FakeButtonClick >>> for _ in range(200): FakeButtonClick.create() ... 
Traceback (most recent call last): File "<console>", line 1, in <module> File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/factory/base.py", line 564, in create return cls._generate(enums.CREATE_STRATEGY, kwargs) File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/factory/django.py", line 141, in _generate return super(DjangoModelFactory, cls)._generate(strategy, params) File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/factory/base.py", line 501, in _generate return step.build() File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/factory/builder.py", line 279, in build kwargs=kwargs, File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/factory/base.py", line 315, in instantiate return self.factory._create(model, *args, **kwargs) File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/factory/django.py", line 185, in _create return manager.create(*args, **kwargs) File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/django/db/models/manager.py", line 82, in manager_method return getattr(self.get_queryset(), name)(*args, **kwargs) File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/django/db/models/query.py", line 431, in create obj = self.model(**kwargs) File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/django/db/models/base.py", line 482, in __init__ _setattr(self, field.name, rel_obj) File "/Users/ryan/bloks/bloks-backend/venv/lib/python3.7/site-packages/django/db/models/fields/related_descriptors.py", line 219, in __set__ self.field.remote_field.model._meta.object_name, ValueError: Cannot assign "9714": "ButtonClick.application" must be a "Application" instance. ``` I have created some very simple data using factory boy and faker in the past but the traceback seems to be implying that I need to create an application instance within my FakeButtonClick class? 
I checked the documentation and application doesn't appear to be an available instance for factory boy/faker. Do I need to create the instance myself? Maybe a subfactory?
2020/07/23
[ "https://Stackoverflow.com/questions/63044893", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12778676/" ]
This is how I generated fake data for my `Django` `sqlite` database: go to the [Mockaroo website](https://www.mockaroo.com/), fill out the details, and download the file in any format (`sql`, `json` or `csv`). The **good thing** about this website is that you can apply a `regular expression` to your columns, control **null values**, and pick any **format** for numbers, dates, etc., then either *download* the file or dump it into your `database`.
**for future readers!** To generate fake data for `django` you can use [`django-seed`](https://github.com/Brobin/django-seed). It's an easy process: * `pip install django-seed` (install **django-seed**) * add `django_seed` to your apps in the `settings.py` file. ``` INSTALLED_APPS = ( ... 'django_seed', ) ``` * `python manage.py seed <app-name>`; for example, to seed the api app of django: `python manage.py seed api --number=15` If you need to, you can also specify what value a particular field should have. For example, if you want to seed 15 of MyModel, but you need my\_field to be the same on all of them, you can do it like this: ``` python manage.py seed api --number=15 --seeder "MyModel.my_field" "1.1.1.1" ```
54,267,286
I'm trying to make a microservice with python, I'm following [this tutorial](https://medium.com/@ssola/building-microservices-with-python-part-i-5240a8dcc2fb) But I'm getting this error: ``` "flask_app.py", line 115, in run raise Exception('Server {} not recognized'.format(self.server)) Exception: Server 9090 not recognized ``` Project structure: ![project structure image](https://i.stack.imgur.com/I3s1n.png) **App.py** file code ``` from connexion.resolver import RestyResolver import connexion if __name__ == '__main__': app = connexion.App(__name__, 9090, specification_dir='swagger/') app.add_api('my_super_app.yaml', resolver=RestyResolver('api')) app.run() ``` **my\_super\_app.yaml** file code ``` swagger: "2.0" info: title: "My first API" version: "1.0" basePath: /v1.0 paths: /items/: get: responses: '200': description: 'Fetch a list of items' schema: type: array items: $ref: '#/definitions/Item' definitions: Item: type: object properties: id: type: integer format: int64 name: { type: string } ``` **items.py** file code ``` items = { 0: {"name": "First item"} } def search() -> list: return items ```
2019/01/19
[ "https://Stackoverflow.com/questions/54267286", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7536804/" ]
OK, I was able to solve this problem. The problem is in `app.py`: you must pass the port as a keyword argument. INCORRECT ``` app = connexion.App(__name__, 9090, specification_dir='swagger/') ``` CORRECT ``` app = connexion.App(__name__, port=9090, specification_dir='swagger/') ```
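The root cause is a classic positional-argument pitfall: the second positional slot of the constructor is not the port, so `9090` lands in the wrong parameter (hence the "Server 9090 not recognized" error). A miniature illustration with a made-up signature, not connexion's actual one:

```python
def make_app(import_name, server="flask", port=None):
    # Hypothetical signature sketch: `server` comes before `port`, as a
    # stand-in for whatever parameter actually absorbed 9090 in connexion.App.
    return {"import_name": import_name, "server": server, "port": port}

buggy = make_app("app", 9090)        # 9090 is swallowed by `server`
fixed = make_app("app", port=9090)   # keyword argument hits the right slot

assert buggy["server"] == 9090 and buggy["port"] is None
assert fixed["server"] == "flask" and fixed["port"] == 9090
```

Passing configuration values by keyword avoids this entire class of bug.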
There are plenty of microservice frameworks in Python that would greatly simplify the code you have to write. Try for example pymacaron (<http://pymacaron.com/>). Here is an example of a helloworld api implemented with pymacaron: <https://github.com/pymacaron/pymacaron-helloworld> A pymacaron service only requires you to: (1) write a swagger specification for your api (which is always a good starting point, whatever language you are using). Your swagger file describes the get/post/etc calls of your api and which objects (json dicts) they get and return, and also which python method in your code implements each endpoint. (2) implement your endpoints' methods. Once you have done that, you get loads of things for free: you can package your code as a docker container, deploy it to amazon beanstalk, start asynchronous tasks from within your api calls, or get the api documentation with no extra work.
26,657,605
I'm seeing a buggy behaviour in taskqueue API. When a task fails, appengine always runs it once again, even if I tell it not to. This is the relevant code: ``` NO_RETRY = TaskRetryOptions(task_retry_limit=0) class EnqueueTaskDapau(webapp2.RequestHandler): def get(self): taskqueue.add( url='/task_dapau', queue_name='DEFAULT', retry_options=NO_RETRY ) class TaskDapau(webapp2.RequestHandler): def get(self): logging.warning('Vai dar pau') raise BaseException('Deu pau :-)') def post(self): return self.get() application = webapp2.WSGIApplication([ ('/', MainPage), ('/enqueue_dapau', EnqueueTaskDapau), ('/task_dapau', TaskDapau), ], debug=True) ``` The whole app is [available on Github](https://github.com/qmagico/gaetests) so it should be easy to reproduce. When I point my browser to /enqueue\_dapau, this is what I see in the logs (on the web console): ``` 2014-10-30 08:31:01.054 /task_dapau 500 4ms 0kb AppEngine-Google; (+http://code.google.com/appengine) module=default version=1 W 2014-10-30 08:31:01.052 Vai dar pau E 2014-10-30 08:31:01.053 Traceback (most recent call last): File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 267, in 2014-10-30 08:31:00.933 /task_dapau 500 3ms 0kb AppEngine-Google; (+http://code.google.com/appengine) module=default version=1 W 2014-10-30 08:31:00.931 Vai dar pau E 2014-10-30 08:31:00.932 Traceback (most recent call last): File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 267, in 2014-10-30 08:31:00.897 /enqueue_dapau 200 91ms 0kb Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.104 Safari/537.36 module=default version=1 ``` If I look at Task Queues on the web console, I see "Run in Last Minute == 2" This behaviour is different from what I get locally with the SDK: ``` INFO 2014-10-30 15:49:05,711 module.py:666] default: "GET /enqueue_dapau HTTP/1.1" 200 - WARNING 2014-10-30 15:49:05,729 
views.py:33] Vai dar pau ERROR 2014-10-30 15:49:05,729 wsgi.py:279] Traceback (most recent call last): File "/home/tony/google_appengine/google/appengine/runtime/wsgi.py", line 267, in Handle result = handler(dict(self._environ), self._StartResponse) File "/home/tony/google_appengine/lib/webapp2-2.3/webapp2.py", line 1505, in __call__ rv = self.router.dispatch(request, response) File "/home/tony/google_appengine/lib/webapp2-2.3/webapp2.py", line 1253, in default_dispatcher return route.handler_adapter(request, response) File "/home/tony/google_appengine/lib/webapp2-2.3/webapp2.py", line 1077, in __call__ return handler.dispatch() File "/home/tony/google_appengine/lib/webapp2-2.3/webapp2.py", line 545, in dispatch return method(*args, **kwargs) File "/home/tony/work/qmag/gaetests/src/views.py", line 37, in post return self.get() File "/home/tony/work/qmag/gaetests/src/views.py", line 34, in get raise BaseException('Deu pau :-)') BaseException: Deu pau :-) INFO 2014-10-30 15:49:05,735 module.py:666] default: "POST /task_dapau HTTP/1.1" 500 - WARNING 2014-10-30 15:49:05,735 taskqueue_stub.py:1986] Task task4 failed to execute. The task has no remaining retries. Failing permanently after 0 retries and 0 seconds ``` Is this a bug? (It really looks like so) Is there an easy workaround for it?
2014/10/30
[ "https://Stackoverflow.com/questions/26657605", "https://Stackoverflow.com", "https://Stackoverflow.com/users/627684/" ]
As [mentioned in the documentation](https://cloud.google.com/appengine/docs/python/taskqueue/overview-push#task_retries), App Engine will sometimes run a task twice. You should write your tasks to ensure that this will not be harmful.
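App Engine push queues deliver at least once, so the standard defence is to make the handler idempotent. A framework-free sketch of the idea follows; the in-memory set below is only an illustration of the "already processed" marker (in a real app it would live in the datastore or memcache, and it is not App Engine API):

```python
processed = set()   # stands in for a datastore/memcache "already done" marker

def run_task_once(task_id, work):
    """Execute work() only the first time a given task id is seen."""
    if task_id in processed:
        return "skipped duplicate"
    processed.add(task_id)
    return work()

first = run_task_once("task-1", lambda: "done")
second = run_task_once("task-1", lambda: "done")   # redelivery is a no-op
```

With real tasks the marker write and the work itself would need to happen atomically (e.g. inside a transaction), otherwise a crash between the two reintroduces the duplicate.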
Check your queue.yaml file and make sure it is correctly configured. ``` queue: - name: default retry_parameters: task_retry_limit: 0 ```
26,657,605
I'm seeing a buggy behaviour in taskqueue API. When a task fails, appengine always runs it once again, even if I tell it not to. This is the relevant code: ``` NO_RETRY = TaskRetryOptions(task_retry_limit=0) class EnqueueTaskDapau(webapp2.RequestHandler): def get(self): taskqueue.add( url='/task_dapau', queue_name='DEFAULT', retry_options=NO_RETRY ) class TaskDapau(webapp2.RequestHandler): def get(self): logging.warning('Vai dar pau') raise BaseException('Deu pau :-)') def post(self): return self.get() application = webapp2.WSGIApplication([ ('/', MainPage), ('/enqueue_dapau', EnqueueTaskDapau), ('/task_dapau', TaskDapau), ], debug=True) ``` The whole app is [available on Github](https://github.com/qmagico/gaetests) so it should be easy to reproduce. When I point my browser to /enqueue\_dapau, this is what I see in the logs (on the web console): ``` 2014-10-30 08:31:01.054 /task_dapau 500 4ms 0kb AppEngine-Google; (+http://code.google.com/appengine) module=default version=1 W 2014-10-30 08:31:01.052 Vai dar pau E 2014-10-30 08:31:01.053 Traceback (most recent call last): File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 267, in 2014-10-30 08:31:00.933 /task_dapau 500 3ms 0kb AppEngine-Google; (+http://code.google.com/appengine) module=default version=1 W 2014-10-30 08:31:00.931 Vai dar pau E 2014-10-30 08:31:00.932 Traceback (most recent call last): File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 267, in 2014-10-30 08:31:00.897 /enqueue_dapau 200 91ms 0kb Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.104 Safari/537.36 module=default version=1 ``` If I look at Task Queues on the web console, I see "Run in Last Minute == 2" This behaviour is different from what I get locally with the SDK: ``` INFO 2014-10-30 15:49:05,711 module.py:666] default: "GET /enqueue_dapau HTTP/1.1" 200 - WARNING 2014-10-30 15:49:05,729 
views.py:33] Vai dar pau ERROR 2014-10-30 15:49:05,729 wsgi.py:279] Traceback (most recent call last): File "/home/tony/google_appengine/google/appengine/runtime/wsgi.py", line 267, in Handle result = handler(dict(self._environ), self._StartResponse) File "/home/tony/google_appengine/lib/webapp2-2.3/webapp2.py", line 1505, in __call__ rv = self.router.dispatch(request, response) File "/home/tony/google_appengine/lib/webapp2-2.3/webapp2.py", line 1253, in default_dispatcher return route.handler_adapter(request, response) File "/home/tony/google_appengine/lib/webapp2-2.3/webapp2.py", line 1077, in __call__ return handler.dispatch() File "/home/tony/google_appengine/lib/webapp2-2.3/webapp2.py", line 545, in dispatch return method(*args, **kwargs) File "/home/tony/work/qmag/gaetests/src/views.py", line 37, in post return self.get() File "/home/tony/work/qmag/gaetests/src/views.py", line 34, in get raise BaseException('Deu pau :-)') BaseException: Deu pau :-) INFO 2014-10-30 15:49:05,735 module.py:666] default: "POST /task_dapau HTTP/1.1" 500 - WARNING 2014-10-30 15:49:05,735 taskqueue_stub.py:1986] Task task4 failed to execute. The task has no remaining retries. Failing permanently after 0 retries and 0 seconds ``` Is this a bug? (It really looks like so) Is there an easy workaround for it?
2014/10/30
[ "https://Stackoverflow.com/questions/26657605", "https://Stackoverflow.com", "https://Stackoverflow.com/users/627684/" ]
I just found a way to avoid the undesired retry: ``` taskqueue.add( url='/blah', queue_name='myq', retry_options=TaskRetryOptions(task_retry_limit=0, task_age_limit=1), countdown=1, ) ``` This combination of retry\_limit, age\_limit and countdown is the magical incantation that does the trick. It's still suboptimal though, so I'll leave this without a green answer until Google fixes this bug.
Check your queue.yaml file and make sure it is correctly configured. ``` queue: - name: default retry_parameters: task_retry_limit: 0 ```
64,614,736
Below is the code I have written: ``` X2=df['title'] y2=df['news_type'] X2_train, X2_test, y2_train, y2_test = train_test_split(X2, y2, test_size=0.3, random_state=42) pp=Pipeline([ ('bow',CountVectorizer(analyzer=final)), ('tfidf',TfidfTransformer()), ('classifier',RandomForestClassifier()) ]) pp.fit(X2_train.astype("U"),y2_train.astype("U")) predictions7=pp.predict(X2_test) ``` Error: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-300-2bed28a1314e> in <module> ----> 1 predictions7=pp.predict(X2_test) /home/monika/snap/jupyter/common/lib/python3.7/site-packages/sklearn/utils/metaestimators.py in <lambda>(*args, **kwargs) 117 118 # lambda, but not partial, allows help() to work with update_wrapper --> 119 out = lambda *args, **kwargs: self.fn(obj, *args, **kwargs) 120 # update the docstring of the returned function 121 update_wrapper(out, self.fn) /home/monika/snap/jupyter/common/lib/python3.7/site-packages/sklearn/pipeline.py in predict(self, X, **predict_params) 405 Xt = X 406 for _, name, transform in self._iter(with_final=False): --> 407 Xt = transform.transform(Xt) 408 return self.steps[-1][-1].predict(Xt, **predict_params) 409 /home/monika/snap/jupyter/common/lib/python3.7/site-packages/sklearn/feature_extraction/text.py in transform(self, raw_documents) 1248 1249 # use the same matrix-building strategy as fit_transform -> 1250 _, X = self._count_vocab(raw_documents, fixed_vocab=True) 1251 if self.binary: 1252 X.data.fill(1) /home/monika/snap/jupyter/common/lib/python3.7/site-packages/sklearn/feature_extraction/text.py in _count_vocab(self, raw_documents, fixed_vocab) 1108 for doc in raw_documents: 1109 feature_counter = {} -> 1110 for feature in analyze(doc): 1111 try: 1112 feature_idx = vocabulary[feature] /home/monika/snap/jupyter/common/lib/python3.7/site-packages/sklearn/feature_extraction/text.py in _analyze(doc, analyzer, tokenizer, ngrams, preprocessor, 
decoder, stop_words) 97 98 if decoder is not None: ---> 99 doc = decoder(doc) 100 if analyzer is not None: 101 doc = analyzer(doc) /home/monika/snap/jupyter/common/lib/python3.7/site-packages/sklearn/feature_extraction/text.py in decode(self, doc) 217 218 if doc is np.nan: --> 219 raise ValueError("np.nan is an invalid document, expected byte or " 220 "unicode string.") 221 ValueError: np.nan is an invalid document, expected byte or unicode string. ``` I have tried everything to resolve this error but could not solve it. Please tell me what I have done wrong here. It throws the error only after this line: predictions7=pp.predict(X2_test). I have pasted the error above. Solution: ``` Replace "pp.fit(X2_train.astype("U"),y2_train.astype("U"))" with "pp.fit((X2_train.astype("U").str.lower()),(y2_train.astype("U").str.lower()))" Replace "predictions7=pp.predict(X2_test)" with "predictions7=pp.predict(X2_test.astype("U"))" ```
2020/10/30
[ "https://Stackoverflow.com/questions/64614736", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14507671/" ]
Try using the built-in `Parameters` ```js type success = (data: any, value: string, settings: any) => void const fn = (...args: Parameters<success>) => {} ```
if the order of the object arguments are always the same you might be able todo something with `Object.values` and the spread operator `Object.values`: <https://developer.mozilla.org/de/docs/Web/JavaScript/Reference/Global_Objects/Object/values> ``` > const fn = (a, b, c) => ({a,b,c}) undefined > fn({a: 5, b: 3, c:2}) { a: { a: 5, b: 3, c: 2 }, b: undefined, c: undefined } > fn(...Object.values({a: 5, b: 3, c: 2})) { a: 5, b: 3, c: 2 } ``` where `Object.values({a: 5, b: 3, c: 2})` is equal to `[5, 3, 2]` Although I would not recommend this and would just add some code to map them directly to prevent bugs later on.
17,912,615
I am brand new to Python, coming from PHP. How do I dump all the contents of a variable into a file, similar to var\_dump? After searching around, I've come up with this: ``` from inspect import getmembers from pprint import pprint pprint(getmembers(_variable_)) ``` However, it shows up in the command window, not in a friendly readable file. I do know how to write to a file, and I've tried this: ``` f.write(pprint(getmembers(_variable_))) ``` But it gives me a type error. Would appreciate help, thanks.
2013/07/28
[ "https://Stackoverflow.com/questions/17912615", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1973098/" ]
Have you tried using [pickle](http://docs.python.org/2/library/pickle.html)? It's in the standard library, and it should pretty much work like this : ``` import pickle # Write to file pickle.dump(obj, open("file.dat", "wb")) # Read that file obj = pickle.load(open("file.dat", "rb")) ```
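One refinement to the snippet above: opening the files in a `with` block guarantees the handles are closed even if `dump`/`load` raises. A self-contained round trip, where the dict is just a stand-in for any picklable object:

```python
import os
import pickle
import tempfile

obj = {"title": "demo", "manager": "alice", "date": None}

path = os.path.join(tempfile.mkdtemp(), "file.dat")
with open(path, "wb") as f:      # file is closed even if dump() raises
    pickle.dump(obj, f)
with open(path, "rb") as f:
    restored = pickle.load(f)
```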
Assuming you're happy with how the output of `pprint` looks and you're not looking for object serialization, [`pformat`](http://docs.python.org/2/library/pprint.html#pprint.pformat) does what you you're trying to do. ``` from pprint import pformat f.write(pformat(getmembers(_variable_))) ```
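Putting the pieces together: `pformat` returns exactly the string that `pprint` would print, so it can be handed straight to `write()`. `Example` below is a hypothetical stand-in for the variable being dumped, and the `StringIO` buffer stands in for the open file `f`:

```python
import io
from inspect import getmembers
from pprint import pformat

class Example:            # stand-in for whatever _variable_ holds
    answer = 42

dump = pformat(getmembers(Example))

buf = io.StringIO()       # stands in for the open file handle `f`
buf.write(dump)
text = buf.getvalue()
```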
39,651,101
How do I write a function "noVowel" in Python that determines whether a word has no vowels? In my case, "y" is not a vowel. For example, I want the function to return True if the word is something like "My" and False if the word is something like "banana".
2016/09/23
[ "https://Stackoverflow.com/questions/39651101", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6867609/" ]
``` any(vowel in word for vowel in 'aeiou') ``` Where `word` is the word you're searching. Broken down: `any` returns `True` if any of the values it checks are `True`, and `False` otherwise. `for vowel in 'aeiou'` sets the value of `vowel` to a, then e, then i, etc. `vowel in word` checks whether the string `word` contains that vowel. If you don't understand why this works, I suggest you look up generator expressions; they are a very valuable tool. EDIT: Oops, this returns `True` if there is a vowel and `False` otherwise. To do it the other way, you could use ``` all(vowel not in word for vowel in 'aeiou') ``` or ``` not any(vowel in word for vowel in 'aeiou') ```
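Folding that last expression into the function the question asks for (lower-casing first so capitalised words like "My" are handled):

```python
def no_vowel(word):
    """True when the word contains none of a, e, i, o, u (y is a consonant)."""
    return all(vowel not in word.lower() for vowel in "aeiou")
```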
Try this: ``` def noVowel(word): vowels = 'aeiou' ## defining the vowels in the English alphabet hasVowel= False ## Boolean variable that tells us if there is any vowel for i in range(0,len(word)): ## Iterate through the word if word[i] in vowels: ## If the char at the current index is a vowel, break out of the loop hasVowel = True break else: ## if not, keep the boolean false hasVowel=False ## check the boolean, and return accordingly if hasVowel: return False else: return True ``` Hope it helps!
5,114,981
I am having a problem that may be quite a basic thing, but as a Python learner I've been struggling with it for hours. The documentation has not provided me with an answer so far. The problem is that an import statement included in a module does not seem to be executed when I import this module from a Python script. What I have is as follows: I have a file project.py (i.e. a Python library) that looks like this: ``` import datetime class Project: """ This class is a container for project data """ title = "" manager = "" date = datetime.datetime.min def __init__( self, title="", manager="", date=datetime.datetime.min ): """ Init function with some defaults """ self.title = title self.manager = manager self.date = date ``` This library is later used in a script (`file.py`) that imports project; it starts like this: ``` import project print datetime.datetime.min ``` The problem then arises when I try to execute this script with `python file.py`. Python then complains with the following NameError: ``` Traceback (most recent call last): File "file.py", line 3, in <module> print datetime.datetime.min NameError: name 'datetime' is not defined ``` This also happens if I try to make the same statements (`import` and `print`) directly from the Python shell. Shouldn't the `datetime` module be automatically imported at the precise moment that I call `import project`? Thanks a lot in advance.
2011/02/25
[ "https://Stackoverflow.com/questions/5114981", "https://Stackoverflow.com", "https://Stackoverflow.com/users/236650/" ]
The `datetime` module is only imported into the `project` namespace. So you *could* access it as `project.datetime.datetime.min`, but really you should import it into your script directly. Every symbol (name) that you create in your `project.py` file (like your `Project` class) ends up in the `project` namespace, which includes things you import from other modules. This isn't as inefficient as it might seem however - the actual `datetime` module is still only imported once, no matter how many times you do it. Every time you import it subsequent to the first one it's just importing the *names* into the current namespace, but not actually doing all the heavy lifting of reading and importing the module.
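That "only imported once" caching is observable through `sys.modules`, the table every `import` statement consults before loading anything:

```python
import sys
import datetime as first
import datetime as second   # not re-executed: resolved from the sys.modules cache

# both names are bound to the very same module object
same_object = first is second and sys.modules["datetime"] is first
```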
Try thinking of the `import` statement as roughly equivalent to: ``` project = __import__('project') ``` Effectively an `import` statement is simply an assignment to a variable. There may be some side effects as the module is loaded, but from inside your script all you see is a simple assignment to a name. You can pull in all the names from a module using `from project import *`, but **don't do that** because it makes your code much more brittle and harder to maintain. Instead either just import the module or exactly the names you want. So for your code something like: ``` import datetime from project import Project ``` is the sort of thing you should be doing.
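The assignment analogy is easy to check with a standard-library module:

```python
math_via_call = __import__("math")   # roughly what `import math` expands to
import math

identical = math_via_call is math    # both names point at the one cached module
```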
37,512,290
I am trying to read the CIFAR-10 dataset, given in batches, from <https://www.cs.toronto.edu/~kriz/cifar.html>. I am trying to put it in a data frame using pickle and read the 'data' part of it. But I am getting this error: ``` KeyError Traceback (most recent call last) <ipython-input-24-8758b7a31925> in <module>() ----> 1 unpickle('datasets/cifar-10-batches-py/test_batch') <ipython-input-23-04002b89d842> in unpickle(file) 3 fo = open(file, 'rb') 4 dict = pickle.load(fo, encoding ='bytes') ----> 5 X = dict['data'] 6 fo.close() 7 return dict ``` KeyError: 'data'. I am using IPython and here is my code: ``` def unpickle(file): fo = open(file, 'rb') dict = pickle.load(fo, encoding ='bytes') X = dict['data'] fo.close() return dict unpickle('datasets/cifar-10-batches-py/test_batch') ```
2016/05/29
[ "https://Stackoverflow.com/questions/37512290", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4651358/" ]
I know the reason! I had the same problem and I solved it ! The key problem is about the encoding method, change the code from ``` dict = pickle.load(fo, encoding ='bytes') ``` to ``` dict = pickle.load(fo, encoding ='latin1') ```
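The same fix folded into the question's helper, plus a `with` block so the file handle is always closed. This is a sketch: the file written below is only a stand-in, since the real batches come from the CIFAR-10 tarball (and `latin1` matters because those batches were pickled by Python 2):

```python
import os
import pickle
import tempfile

def unpickle(path, encoding="latin1"):
    """Load a pickle; 'latin1' maps every Python-2 str byte 1:1 onto str."""
    with open(path, "rb") as fo:
        return pickle.load(fo, encoding=encoding)

# round-trip a stand-in batch dict to show the helper working
tmp = os.path.join(tempfile.mkdtemp(), "test_batch")
with open(tmp, "wb") as fo:
    pickle.dump({"data": [1, 2, 3], "labels": [0, 1, 1]}, fo)
batch = unpickle(tmp)
```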
Try this:

```
def unpickle(file):
    import cPickle
    with open(file, 'rb') as fo:
        data = cPickle.load(fo)
    return data
```
37,512,290
I am trying to read the CIFAR-10 dataset, given in batches, from <https://www.cs.toronto.edu/~kriz/cifar.html>. I am trying to put it in a data frame using pickle and read the 'data' part of it. But I am getting this error: ``` KeyError Traceback (most recent call last) <ipython-input-24-8758b7a31925> in <module>() ----> 1 unpickle('datasets/cifar-10-batches-py/test_batch') <ipython-input-23-04002b89d842> in unpickle(file) 3 fo = open(file, 'rb') 4 dict = pickle.load(fo, encoding ='bytes') ----> 5 X = dict['data'] 6 fo.close() 7 return dict ``` KeyError: 'data'. I am using IPython and here is my code: ``` def unpickle(file): fo = open(file, 'rb') dict = pickle.load(fo, encoding ='bytes') X = dict['data'] fo.close() return dict unpickle('datasets/cifar-10-batches-py/test_batch') ```
2016/05/29
[ "https://Stackoverflow.com/questions/37512290", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4651358/" ]
you can read cifar 10 datasets by the code given below only make sure that you are giving write directory where the batches are placed ``` import tensorflow as tf import pandas as pd import numpy as np import math import timeit import matplotlib.pyplot as plt from six.moves import cPickle as pickle import os import platform from subprocess import check_output classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck') %matplotlib inline img_rows, img_cols = 32, 32 input_shape = (img_rows, img_cols, 3) def load_pickle(f): version = platform.python_version_tuple() if version[0] == '2': return pickle.load(f) elif version[0] == '3': return pickle.load(f, encoding='latin1') raise ValueError("invalid python version: {}".format(version)) def load_CIFAR_batch(filename): """ load single batch of cifar """ with open(filename, 'rb') as f: datadict = load_pickle(f) X = datadict['data'] Y = datadict['labels'] X = X.reshape(10000,3072) Y = np.array(Y) return X, Y def load_CIFAR10(ROOT): """ load all of cifar """ xs = [] ys = [] for b in range(1,6): f = os.path.join(ROOT, 'data_batch_%d' % (b, )) X, Y = load_CIFAR_batch(f) xs.append(X) ys.append(Y) Xtr = np.concatenate(xs) Ytr = np.concatenate(ys) del X, Y Xte, Yte = load_CIFAR_batch(os.path.join(ROOT, 'test_batch')) return Xtr, Ytr, Xte, Yte def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=10000): # Load the raw CIFAR-10 data cifar10_dir = '../input/cifar-10-batches-py/' X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir) # Subsample the data mask = range(num_training, num_training + num_validation) X_val = X_train[mask] y_val = y_train[mask] mask = range(num_training) X_train = X_train[mask] y_train = y_train[mask] mask = range(num_test) X_test = X_test[mask] y_test = y_test[mask] x_train = X_train.astype('float32') x_test = X_test.astype('float32') x_train /= 255 x_test /= 255 return x_train, y_train, X_val, y_val, x_test, y_test # Invoke the above function to get 
our data. x_train, y_train, x_val, y_val, x_test, y_test = get_CIFAR10_data() print('Train data shape: ', x_train.shape) print('Train labels shape: ', y_train.shape) print('Validation data shape: ', x_val.shape) print('Validation labels shape: ', y_val.shape) print('Test data shape: ', x_test.shape) print('Test labels shape: ', y_test.shape) ```
Try this:

```
def unpickle(file):
    import cPickle
    with open(file, 'rb') as fo:
        data = cPickle.load(fo)
    return data
```
37,512,290
I am trying to read the CIFAR-10 dataset, given in batches, from <https://www.cs.toronto.edu/~kriz/cifar.html>. I am trying to put it in a data frame using pickle and read the 'data' part of it. But I am getting this error: ``` KeyError Traceback (most recent call last) <ipython-input-24-8758b7a31925> in <module>() ----> 1 unpickle('datasets/cifar-10-batches-py/test_batch') <ipython-input-23-04002b89d842> in unpickle(file) 3 fo = open(file, 'rb') 4 dict = pickle.load(fo, encoding ='bytes') ----> 5 X = dict['data'] 6 fo.close() 7 return dict ``` KeyError: 'data'. I am using IPython and here is my code: ``` def unpickle(file): fo = open(file, 'rb') dict = pickle.load(fo, encoding ='bytes') X = dict['data'] fo.close() return dict unpickle('datasets/cifar-10-batches-py/test_batch') ```
2016/05/29
[ "https://Stackoverflow.com/questions/37512290", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4651358/" ]
I went through similar issues in the past. I'd like to mention for future readers that you can find [here](https://github.com/ndrplz/CIFAR-10) a python wrapper for automatically downloading, extracting and parsing the cifar10 dataset.
Try this:

```
def unpickle(file):
    import cPickle
    with open(file, 'rb') as fo:
        data = cPickle.load(fo)
    return data
```
37,512,290
I am trying to read the CIFAR-10 dataset, given in batches, from <https://www.cs.toronto.edu/~kriz/cifar.html>. I am trying to put it in a data frame using pickle and read the 'data' part of it. But I am getting this error: ``` KeyError Traceback (most recent call last) <ipython-input-24-8758b7a31925> in <module>() ----> 1 unpickle('datasets/cifar-10-batches-py/test_batch') <ipython-input-23-04002b89d842> in unpickle(file) 3 fo = open(file, 'rb') 4 dict = pickle.load(fo, encoding ='bytes') ----> 5 X = dict['data'] 6 fo.close() 7 return dict ``` KeyError: 'data'. I am using IPython and here is my code: ``` def unpickle(file): fo = open(file, 'rb') dict = pickle.load(fo, encoding ='bytes') X = dict['data'] fo.close() return dict unpickle('datasets/cifar-10-batches-py/test_batch') ```
2016/05/29
[ "https://Stackoverflow.com/questions/37512290", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4651358/" ]
you can read cifar 10 datasets by the code given below only make sure that you are giving write directory where the batches are placed ``` import tensorflow as tf import pandas as pd import numpy as np import math import timeit import matplotlib.pyplot as plt from six.moves import cPickle as pickle import os import platform from subprocess import check_output classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck') %matplotlib inline img_rows, img_cols = 32, 32 input_shape = (img_rows, img_cols, 3) def load_pickle(f): version = platform.python_version_tuple() if version[0] == '2': return pickle.load(f) elif version[0] == '3': return pickle.load(f, encoding='latin1') raise ValueError("invalid python version: {}".format(version)) def load_CIFAR_batch(filename): """ load single batch of cifar """ with open(filename, 'rb') as f: datadict = load_pickle(f) X = datadict['data'] Y = datadict['labels'] X = X.reshape(10000,3072) Y = np.array(Y) return X, Y def load_CIFAR10(ROOT): """ load all of cifar """ xs = [] ys = [] for b in range(1,6): f = os.path.join(ROOT, 'data_batch_%d' % (b, )) X, Y = load_CIFAR_batch(f) xs.append(X) ys.append(Y) Xtr = np.concatenate(xs) Ytr = np.concatenate(ys) del X, Y Xte, Yte = load_CIFAR_batch(os.path.join(ROOT, 'test_batch')) return Xtr, Ytr, Xte, Yte def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=10000): # Load the raw CIFAR-10 data cifar10_dir = '../input/cifar-10-batches-py/' X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir) # Subsample the data mask = range(num_training, num_training + num_validation) X_val = X_train[mask] y_val = y_train[mask] mask = range(num_training) X_train = X_train[mask] y_train = y_train[mask] mask = range(num_test) X_test = X_test[mask] y_test = y_test[mask] x_train = X_train.astype('float32') x_test = X_test.astype('float32') x_train /= 255 x_test /= 255 return x_train, y_train, X_val, y_val, x_test, y_test # Invoke the above function to get 
our data. x_train, y_train, x_val, y_val, x_test, y_test = get_CIFAR10_data() print('Train data shape: ', x_train.shape) print('Train labels shape: ', y_train.shape) print('Validation data shape: ', x_val.shape) print('Validation labels shape: ', y_val.shape) print('Test data shape: ', x_test.shape) print('Test labels shape: ', y_test.shape) ```
I know the reason! I had the same problem and I solved it ! The key problem is about the encoding method, change the code from ``` dict = pickle.load(fo, encoding ='bytes') ``` to ``` dict = pickle.load(fo, encoding ='latin1') ```
37,512,290
I am trying to read the CIFAR-10 dataset, given in batches, from <https://www.cs.toronto.edu/~kriz/cifar.html>. I am trying to put it in a data frame using pickle and read the 'data' part of it. But I am getting this error: ``` KeyError Traceback (most recent call last) <ipython-input-24-8758b7a31925> in <module>() ----> 1 unpickle('datasets/cifar-10-batches-py/test_batch') <ipython-input-23-04002b89d842> in unpickle(file) 3 fo = open(file, 'rb') 4 dict = pickle.load(fo, encoding ='bytes') ----> 5 X = dict['data'] 6 fo.close() 7 return dict ``` KeyError: 'data'. I am using IPython and here is my code: ``` def unpickle(file): fo = open(file, 'rb') dict = pickle.load(fo, encoding ='bytes') X = dict['data'] fo.close() return dict unpickle('datasets/cifar-10-batches-py/test_batch') ```
2016/05/29
[ "https://Stackoverflow.com/questions/37512290", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4651358/" ]
I know the reason! I had the same problem and I solved it ! The key problem is about the encoding method, change the code from ``` dict = pickle.load(fo, encoding ='bytes') ``` to ``` dict = pickle.load(fo, encoding ='latin1') ```
I went through similar issues in the past. I'd like to mention for future readers that you can find [here](https://github.com/ndrplz/CIFAR-10) a python wrapper for automatically downloading, extracting and parsing the cifar10 dataset.
37,512,290
I am trying to read the CIFAR-10 dataset, given in batches, from <https://www.cs.toronto.edu/~kriz/cifar.html>. I am trying to put it in a data frame using pickle and read the 'data' part of it. But I am getting this error: ``` KeyError Traceback (most recent call last) <ipython-input-24-8758b7a31925> in <module>() ----> 1 unpickle('datasets/cifar-10-batches-py/test_batch') <ipython-input-23-04002b89d842> in unpickle(file) 3 fo = open(file, 'rb') 4 dict = pickle.load(fo, encoding ='bytes') ----> 5 X = dict['data'] 6 fo.close() 7 return dict ``` KeyError: 'data'. I am using IPython and here is my code: ``` def unpickle(file): fo = open(file, 'rb') dict = pickle.load(fo, encoding ='bytes') X = dict['data'] fo.close() return dict unpickle('datasets/cifar-10-batches-py/test_batch') ```
2016/05/29
[ "https://Stackoverflow.com/questions/37512290", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4651358/" ]
you can read cifar 10 datasets by the code given below only make sure that you are giving write directory where the batches are placed ``` import tensorflow as tf import pandas as pd import numpy as np import math import timeit import matplotlib.pyplot as plt from six.moves import cPickle as pickle import os import platform from subprocess import check_output classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck') %matplotlib inline img_rows, img_cols = 32, 32 input_shape = (img_rows, img_cols, 3) def load_pickle(f): version = platform.python_version_tuple() if version[0] == '2': return pickle.load(f) elif version[0] == '3': return pickle.load(f, encoding='latin1') raise ValueError("invalid python version: {}".format(version)) def load_CIFAR_batch(filename): """ load single batch of cifar """ with open(filename, 'rb') as f: datadict = load_pickle(f) X = datadict['data'] Y = datadict['labels'] X = X.reshape(10000,3072) Y = np.array(Y) return X, Y def load_CIFAR10(ROOT): """ load all of cifar """ xs = [] ys = [] for b in range(1,6): f = os.path.join(ROOT, 'data_batch_%d' % (b, )) X, Y = load_CIFAR_batch(f) xs.append(X) ys.append(Y) Xtr = np.concatenate(xs) Ytr = np.concatenate(ys) del X, Y Xte, Yte = load_CIFAR_batch(os.path.join(ROOT, 'test_batch')) return Xtr, Ytr, Xte, Yte def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=10000): # Load the raw CIFAR-10 data cifar10_dir = '../input/cifar-10-batches-py/' X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir) # Subsample the data mask = range(num_training, num_training + num_validation) X_val = X_train[mask] y_val = y_train[mask] mask = range(num_training) X_train = X_train[mask] y_train = y_train[mask] mask = range(num_test) X_test = X_test[mask] y_test = y_test[mask] x_train = X_train.astype('float32') x_test = X_test.astype('float32') x_train /= 255 x_test /= 255 return x_train, y_train, X_val, y_val, x_test, y_test # Invoke the above function to get 
our data. x_train, y_train, x_val, y_val, x_test, y_test = get_CIFAR10_data() print('Train data shape: ', x_train.shape) print('Train labels shape: ', y_train.shape) print('Validation data shape: ', x_val.shape) print('Validation labels shape: ', y_val.shape) print('Test data shape: ', x_test.shape) print('Test labels shape: ', y_test.shape) ```
I went through similar issues in the past. I'd like to mention for future readers that you can find [here](https://github.com/ndrplz/CIFAR-10) a python wrapper for automatically downloading, extracting and parsing the cifar10 dataset.
57,477,679
Some questions on vispy mention using canvas.native when adding a widget. How can a widget made as a placeholder in Qt Designer be used for vispy? The idea is going from this ``` canvas = vispy.app.Canvas() w = QMainWindow() widget = QWidget() w.setCentralWidget(widget) widget.setLayout(QVBoxLayout()) widget.layout().addWidget(canvas.native) widget.layout().addWidget(QPushButton()) w.show() vispy.app.run() ``` to a version where there is a .ui or generated Python file that has a frame with the name "frameFor3d". ``` class myWindow(QtWidgets.QMainWindow): def __init__(self): super(myWindow, self).__init__() self.ui = Ui_MainWindow() self.ui.setupUi(self) #declare this here? canvas = vispy.app.Canvas() self.ui.frameFor3d.layout().addWidget(canvas.native) if __name__ == '__main__': app = QtWidgets.QApplication([]) application = myWindow() vispy.app.run() #does it go here? application.show() sys.exit(app.exec()) ``` This errors because layout() is None. The uic sample: ``` # -*- coding: utf-8 -*- # Form implementation generated from reading ui file 'myWindow.ui' # # Created by: PyQt5 UI code generator 5.9.2 # # WARNING! All changes made in this file will be lost! 
from PyQt5 import QtCore, QtGui, QtWidgets class Ui_MainWindow(object): def setupUi(self, MainWindow): MainWindow.setObjectName("MainWindow") MainWindow.resize(440, 299) self.centralwidget = QtWidgets.QWidget(MainWindow) self.centralwidget.setObjectName("centralwidget") self.gridLayout = QtWidgets.QGridLayout(self.centralwidget) self.gridLayout.setObjectName("gridLayout") self.frameFor3d = QtWidgets.QFrame(self.centralwidget) self.frameFor3d.setFrameShape(QtWidgets.QFrame.StyledPanel) self.frameFor3d.setFrameShadow(QtWidgets.QFrame.Raised) self.frameFor3d.setObjectName("frameFor3d") self.gridLayout.addWidget(self.frameFor3d, 0, 0, 1, 1) self.horizontalSlider = QtWidgets.QSlider(self.centralwidget) self.horizontalSlider.setOrientation(QtCore.Qt.Horizontal) self.horizontalSlider.setObjectName("horizontalSlider") self.gridLayout.addWidget(self.horizontalSlider, 1, 0, 1, 1) MainWindow.setCentralWidget(self.centralwidget) self.menubar = QtWidgets.QMenuBar(MainWindow) self.menubar.setGeometry(QtCore.QRect(0, 0, 440, 18)) self.menubar.setObjectName("menubar") MainWindow.setMenuBar(self.menubar) self.statusbar = QtWidgets.QStatusBar(MainWindow) self.statusbar.setObjectName("statusbar") MainWindow.setStatusBar(self.statusbar) self.retranslateUi(MainWindow) QtCore.QMetaObject.connectSlotsByName(MainWindow) def retranslateUi(self, MainWindow): _translate = QtCore.QCoreApplication.translate MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow")) ```
2019/08/13
[ "https://Stackoverflow.com/questions/57477679", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1938107/" ]
QFrame does not have a layout, so you must set a layout, and the run() method is a blocker that internally calls app.exec(), so it is not necessary to call it again. ``` class myWindow(QtWidgets.QMainWindow): def __init__(self): super(myWindow, self).__init__() self.ui = Ui_MainWindow() self.ui.setupUi(self) canvas = vispy.app.Canvas() lay = QtWidgets.QVBoxLayout(self.ui.frameFor3d) # create layout lay.addWidget(canvas.native) if __name__ == '__main__': app = QtWidgets.QApplication([]) application = myWindow() application.show() vispy.app.run() # sys.exit(app.exec()) ```
FYI, I found two more general and detailed examples of a PyQt interface in the vispy GitHub repository:

<https://github.com/vispy/vispy/blob/main/examples/basics/scene/isocurve_for_trisurface_qt.py>

<https://github.com/vispy/vispy/blob/main/examples/demo/gloo/primitive_mesh_viewer_qt.py>
58,329,262
I'm trying to solve a mining problem in python. Given a string `s` and an integer `z`, I have to find the least `n` such that `sha256(sha256(x))` ends with `z` zeros, where `x` is the string given by appending `n` to `s`. I wrote the following code:

```py
from hashlib import sha256
from multiprocessing import Pool

def solve(string, zeros, cores):
    with Pool(cores) as p:
        for i in range(cores):
            result = p.apply_async(sub_solve, args=(string, zeros, i, cores), callback = p.terminate)
    return result

def sub_solve(s, z, n0, cores):
    n = n0 - 1
    d = ""
    while d[:-z] != "0"*z:
        n += cores
        s1 = (s + str(n)).encode()
        h1 = sha256(s1)
        h2 = sha256(h1.digest())
        d = h2.hexdigest()
        if n % 100000 == 0:
            print("%d: %s" %(n,d))
    return n
```

Calling `solve` with `string = s`, `zeros = z` and `cores = number of cores to use`, it should execute parallel `sub_solve` calls on different cores, where each one solves the problem for a different `n`. When one of the worker processes solves the problem, the whole pool should stop working. When I run `solve` I get this output:

```
>>> pow.solve("asd",2,4)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\user\Desktop\pow.py", line 7, in solve
    result = p.apply_async(sub_solve, args=(string, zeros, i, cores), callback = p.terminate)
  File "C:\Users\user\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\pool.py", line 355, in apply_async
    raise ValueError("Pool not running")
ValueError: Pool not running
```

How can I solve the problem?
2019/10/10
[ "https://Stackoverflow.com/questions/58329262", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3897624/" ]
After the first iteration, the pool gets terminated due to the callback. So, in the next iteration there is no running Pool. To solve this, you will have to run the loop first and then use the `with` statement, i.e. swap the `with` statement with the `for` loop like below:

```
from hashlib import sha256
from multiprocessing import Pool

def solve(string, zeros, cores):
    for i in range(cores):
        with Pool(cores) as p:
            result = p.apply_async(sub_solve, args=(string, zeros, i, cores), callback = p.terminate)
    return result

def sub_solve(s, z, n0, cores):
    n = n0 - 1
    d = ""
    while d[:-z] != "0"*z:
        n += cores
        s1 = (s + str(n)).encode()
        h1 = sha256(s1)
        h2 = sha256(h1.digest())
        d = h2.hexdigest()
        if n % 100000 == 0:
            print("%d: %s" %(n,d))
    return n
```
```
from hashlib import sha256
import multiprocessing

def solve(string, zeros, cores):
    with multiprocessing.Pool(cores) as p:
        for i in range(cores):
            '''
            must call multiprocessing because the child process does
            Not know its p
            '''
            result = p.apply_async(sub_solve, args=(string, zeros, i, cores),
                                   callback = multiprocessing.Pool().close())
    return result

def sub_solve(s, z, n0, cores):
    n = n0 - 1
    d = ""
    while d[:-z] != "0"*z:
        n += cores
        s1 = (s + str(n)).encode()
        h1 = sha256(s1)
        h2 = sha256(h1.digest())
        d = h2.hexdigest()
        if n % 100000 == 0:
            print("%d: %s" %(n,d))
    return n

if __name__ == "__main__":
    print("sollving problem 1")
    solve("asdfk",2,8)
    print("solving probem 2")
    solve("abcdefhijk",4,8)
```

OutPut:

```
sollving problem 1
100000: b982a515ed1f9da8d11be880cd621e13aec777abf3e08c78dc0849d7d3e591c9
200000: bb495b542d5e89f82a464fee84e2ad33f18de9ad2817b233d292b77ae42ff584
300000: cb6e9e02de1f2b76250c47b5d7e1121bfb90d32451a7d75bfb538df76b427ab1
400000: da6cec93d44719ec44925365090b2b49e79ada037b1a64608fd855e83e3a08af
500000: 15b43eaa0a500a337d04f37a01b616e5068effacb807e27e760b97aa58b68147
600000: a97fc82597b7b80b1b2c29bbbeddea3eb93a8690728e596b29eef8aba02a2ec8
700000: 8647001bb6d7ba352e2cc24ed31a1bc858812ed864a208256c4f20078509a52d
800000: 08d55a3b590cba473f7391915824a38ac4c1012aebf29d0aad3d0ea5c0654c2d
900000: 8ca55f8e9585a7212ca494370f30738c2ba8ef2bc7fde6d8d182dbb079ecca0f
1000000: 7362508f0d1e3b0da1e6250dba8fee831d94dd9bfe2837935750f0deb10a1a08
solving probem 2
100000: 99dfca2809f20a173657d7f767573641b263a2a233d062001e4e979d944919d8
200000: 1d2a7ab78930756300a0061aa01489045ee2a51c4987c7364f6410811e102db1
300000: b326e26dcdd28c212880fe0dd83dcc9d41d9053bcad7c92263177c370d1131b7
400000: adbb9c6d8acaf680739f8b5e8c86efd68a9a2c7e62d54531123298e4329c2764
500000: f77358148f8dea09533111044b75032e45b77579f4fd567a23ad06b8c6f8d29a
600000: 1324e1d8e2883fe5b91c91e1a65d26218fb7c08b37c2804d2c904082d516a5e7
700000: 50cc0e97b1b91bad942d36c9f3c549978db4ecb666ea08ab9b9a20012ff2c14a
800000: b98e254395f26fbe4e60857f3cffe12a3751c991f705fb8f6b3853ac9aa20b13
900000: 1ea083e7c135040d60eb4b681b1aeb73425384b532e3ab2110f608b9db6b6c38
1000000: 89e8b62ce9ddfc12fb40ee1bfe82028a08c5aabb2184434128ff03b820e0c104
```
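For reference, a single-process sketch of the search the question describes, assuming the intended suffix test is `d[-z:]` (the `d[:-z]` in the question's code strips the last `z` characters rather than selecting them); `solve_serial` is a made-up name, not from the thread:

```python
from hashlib import sha256

def solve_serial(s, z):
    """Smallest n such that sha256(sha256(s + str(n))) ends with z hex zeros."""
    n = 0
    while True:
        d = sha256(sha256((s + str(n)).encode()).digest()).hexdigest()
        if d.endswith("0" * z):  # suffix check: d[-z:], not d[:-z]
            return n
        n += 1

n = solve_serial("asd", 1)
```

Each extra required zero multiplies the expected search length by 16, which is why the question parallelizes the loop across cores with stride `cores`.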
22,853,831
How would I use a regular expression to find only links that end with numbers? I've tried:

```
links = "'http://www.badlink.com' , 'http://good.link.com/W0QQAdIdZ567296978'"
re.findall(r'http://[\w\.\w\.\w\.-]+.*',links)
```

I don't know how to make python stop searching after it finds integers in the link. Best case scenario, I would like the match to only occur if the link ends with (5) or more numbers.
2014/04/04
[ "https://Stackoverflow.com/questions/22853831", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3496483/" ]
In your controller:

```
if (BadCredentials(checkLogin) == null)
{
    ViewBag.SpanText = "This is span text";
}
```

Your View:

```
<span class="help-block">@ViewBag.SpanText</span>
```
You can set the value in [ViewBag](http://msdn.microsoft.com/en-us/library/system.web.mvc.controllerbase.viewbag%28v=vs.118%29.aspx) and use it in `View`

*In controller*

```
if (BadCredentials(checkLogin) == null)
{
    ViewBag.YourValue = "some text";
}
```

*In View*

```
<span class="help-block">@ViewBag.YourValue</span>
```
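Returning to the regex question above (match only links ending in five or more digits), a sketch of one possible approach in Python; the extraction pattern and variable names are illustrative, not from the thread:

```python
import re

# Sample string from the question: two quoted URLs, only the second ends in digits
links = "'http://www.badlink.com' , 'http://good.link.com/W0QQAdIdZ567296978'"

# First pull out the individual URLs, then keep only those whose last
# characters are five or more digits (\d{5,} anchored at end of string)
urls = re.findall(r"https?://[^\s']+", links)
ending_in_digits = [u for u in urls if re.search(r"\d{5,}$", u)]
```

Splitting extraction from the "ends in digits" test avoids the greedy-backtracking surprises that come from trying to express both conditions in a single `findall` pattern.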
22,853,831
How would I use a regular expression to find only links that end with numbers? I've tried:

```
links = "'http://www.badlink.com' , 'http://good.link.com/W0QQAdIdZ567296978'"
re.findall(r'http://[\w\.\w\.\w\.-]+.*',links)
```

I don't know how to make python stop searching after it finds integers in the link. Best case scenario, I would like the match to only occur if the link ends with (5) or more numbers.
2014/04/04
[ "https://Stackoverflow.com/questions/22853831", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3496483/" ]
You can set the value in [ViewBag](http://msdn.microsoft.com/en-us/library/system.web.mvc.controllerbase.viewbag%28v=vs.118%29.aspx) and use it in `View`

*In controller*

```
if (BadCredentials(checkLogin) == null)
{
    ViewBag.YourValue = "some text";
}
```

*In View*

```
<span class="help-block">@ViewBag.YourValue</span>
```
On your Controller

```
@{
    var ImageFile = System.Web.HttpContext.Current.Server.MapPath("~/Document/Picture/" + @Request.Cookies["sTimeStamp"].Value + ".jpg");
    var PanFile = System.Web.HttpContext.Current.Server.MapPath("~/Document/PAN/" + @Request.Cookies["sTimeStamp"].Value + ".jpg");
    var AdhaarFile = System.Web.HttpContext.Current.Server.MapPath("~/Document/Adhaar/" + @Request.Cookies["sTimeStamp"].Value + ".jpg");
}

@if (System.IO.File.Exists(ImageFile) == false || System.IO.File.Exists(PanFile) == false || System.IO.File.Exists(AdhaarFile) == false)
{
    ViewBag.KycMessage = "Your KYC has not completed yet. KYC is mandatory to get payment you earned. Following is list of document which is not uploaded.";
}
else
{
    ViewBag.KycMessage = "<span style=\"color:green\">KYC Comlete</span>";
}

<span>@ViewBag.KycMessage</span>
```
22,853,831
How would I use a regular expression to find only links that end with numbers? I've tried:

```
links = "'http://www.badlink.com' , 'http://good.link.com/W0QQAdIdZ567296978'"
re.findall(r'http://[\w\.\w\.\w\.-]+.*',links)
```

I don't know how to make python stop searching after it finds integers in the link. Best case scenario, I would like the match to only occur if the link ends with (5) or more numbers.
2014/04/04
[ "https://Stackoverflow.com/questions/22853831", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3496483/" ]
In your controller:

```
if (BadCredentials(checkLogin) == null)
{
    ViewBag.SpanText = "This is span text";
}
```

Your View:

```
<span class="help-block">@ViewBag.SpanText</span>
```
You can use the Viewbag property to display data from controller to view in .cshtml.

in your controller :
--------------------

ViewBag.Name = "your text";

in your .cshtml :
-----------------

@ViewBag.Name
22,853,831
How would I use a regular expression to find only links that end with numbers? I've tried:

```
links = "'http://www.badlink.com' , 'http://good.link.com/W0QQAdIdZ567296978'"
re.findall(r'http://[\w\.\w\.\w\.-]+.*',links)
```

I don't know how to make python stop searching after it finds integers in the link. Best case scenario, I would like the match to only occur if the link ends with (5) or more numbers.
2014/04/04
[ "https://Stackoverflow.com/questions/22853831", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3496483/" ]
In your controller:

```
if (BadCredentials(checkLogin) == null)
{
    ViewBag.SpanText = "This is span text";
}
```

Your View:

```
<span class="help-block">@ViewBag.SpanText</span>
```
On your Controller

```
@{
    var ImageFile = System.Web.HttpContext.Current.Server.MapPath("~/Document/Picture/" + @Request.Cookies["sTimeStamp"].Value + ".jpg");
    var PanFile = System.Web.HttpContext.Current.Server.MapPath("~/Document/PAN/" + @Request.Cookies["sTimeStamp"].Value + ".jpg");
    var AdhaarFile = System.Web.HttpContext.Current.Server.MapPath("~/Document/Adhaar/" + @Request.Cookies["sTimeStamp"].Value + ".jpg");
}

@if (System.IO.File.Exists(ImageFile) == false || System.IO.File.Exists(PanFile) == false || System.IO.File.Exists(AdhaarFile) == false)
{
    ViewBag.KycMessage = "Your KYC has not completed yet. KYC is mandatory to get payment you earned. Following is list of document which is not uploaded.";
}
else
{
    ViewBag.KycMessage = "<span style=\"color:green\">KYC Comlete</span>";
}

<span>@ViewBag.KycMessage</span>
```
22,853,831
How would I use a regular expression to find only links that end with numbers? I've tried:

```
links = "'http://www.badlink.com' , 'http://good.link.com/W0QQAdIdZ567296978'"
re.findall(r'http://[\w\.\w\.\w\.-]+.*',links)
```

I don't know how to make python stop searching after it finds integers in the link. Best case scenario, I would like the match to only occur if the link ends with (5) or more numbers.
2014/04/04
[ "https://Stackoverflow.com/questions/22853831", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3496483/" ]
You can use the Viewbag property to display data from controller to view in .cshtml.

in your controller :
--------------------

ViewBag.Name = "your text";

in your .cshtml :
-----------------

@ViewBag.Name
On your Controller

```
@{
    var ImageFile = System.Web.HttpContext.Current.Server.MapPath("~/Document/Picture/" + @Request.Cookies["sTimeStamp"].Value + ".jpg");
    var PanFile = System.Web.HttpContext.Current.Server.MapPath("~/Document/PAN/" + @Request.Cookies["sTimeStamp"].Value + ".jpg");
    var AdhaarFile = System.Web.HttpContext.Current.Server.MapPath("~/Document/Adhaar/" + @Request.Cookies["sTimeStamp"].Value + ".jpg");
}

@if (System.IO.File.Exists(ImageFile) == false || System.IO.File.Exists(PanFile) == false || System.IO.File.Exists(AdhaarFile) == false)
{
    ViewBag.KycMessage = "Your KYC has not completed yet. KYC is mandatory to get payment you earned. Following is list of document which is not uploaded.";
}
else
{
    ViewBag.KycMessage = "<span style=\"color:green\">KYC Comlete</span>";
}

<span>@ViewBag.KycMessage</span>
```
56,850,735
Cannot plot a histogram in Matplotlib with non-numerical data.

A = na, R, O, na, na, O, R ...

A is a dataframe that takes 3 different values: na, R, O

I try:

```
plt.hist(A, bins=3, color='#37777D')
```

Would expect something like this [Result](https://i.stack.imgur.com/wq4rr.jpg)

It works with numerical data, but with non-numerical data I get this error:

```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-44-60369a6f9af4> in <module>
      1 A = dataset2.iloc[:, 2 - 1].head(30)
----> 2 plt.hist(A, bins=3, histtype='bar', color='#37777D')

C:\Anaconda\lib\site-packages\matplotlib\pyplot.py in hist(x, bins, range, density, weights, cumulative, bottom, histtype, align, orientation, rwidth, log, color, label, stacked, normed, data, **kwargs)
   2657         align=align, orientation=orientation, rwidth=rwidth, log=log,
   2658         color=color, label=label, stacked=stacked, normed=normed,
-> 2659         **({"data": data} if data is not None else {}), **kwargs)
   2660
   2661

C:\Anaconda\lib\site-packages\matplotlib\__init__.py in inner(ax, data, *args, **kwargs)
   1808                 "the Matplotlib list!)" % (label_namer, func.__name__),
   1809                 RuntimeWarning, stacklevel=2)
-> 1810             return func(ax, *args, **kwargs)
   1811
   1812     inner.__doc__ = _add_data_doc(inner.__doc__,

C:\Anaconda\lib\site-packages\matplotlib\axes\_axes.py in hist(self, x, bins, range, density, weights, cumulative, bottom, histtype, align, orientation, rwidth, log, color, label, stacked, normed, **kwargs)
   6563                 "color kwarg must have one color per data set. %d data "
   6564                 "sets and %d colors were provided" % (nx, len(color)))
-> 6565             raise ValueError(error_message)
   6566
   6567         # If bins are not specified either explicitly or via range,

ValueError: color kwarg must have one color per data set. 30 data sets and 1 colors were provided
```
2019/07/02
[ "https://Stackoverflow.com/questions/56850735", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11052717/" ]
I think you need a bar chart instead of a histogram. Moreover, it is unclear what your values are. Considering they are strings (based on the plot), you need to first count their frequencies using, for example, the `Counter` module. Then you can plot the frequencies and assign the names of the keys as the tick labels.

```
from collections import Counter
from matplotlib import pyplot as plt

A = ['na', 'R', 'O', 'na', 'na', 'R']
freqs = Counter(A)

xvals = range(len(freqs.values()))
plt.bar(xvals, freqs.values(), color='#37777D')
plt.xticks(xvals, freqs.keys())
plt.show()
```

[![enter image description here](https://i.stack.imgur.com/TA2v8.png)](https://i.stack.imgur.com/TA2v8.png)
This is not reproducible. But if we create a dataframe and run the following code

```
import numpy as np; np.random.seed(42)
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame(np.random.choice(["na", "O", "A"], size=10))

plt.hist(df.values, histtype='bar', bins=3)
plt.show()
```

[![enter image description here](https://i.stack.imgur.com/DKGw7.png)](https://i.stack.imgur.com/DKGw7.png)

Now this may not be the best choice anyways, because histograms are continuous by definition. So one may create a bar plot of the counts instead.

```
import numpy as np; np.random.seed(42)
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame(np.random.choice(["na", "O", "A"], size=10))

counts = df[0].value_counts()
plt.bar(counts.index, counts.values)
plt.show()
```

[![enter image description here](https://i.stack.imgur.com/5YdGG.png)](https://i.stack.imgur.com/5YdGG.png)
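Both answers above first reduce the problem to counting category frequencies; a plotting-free sketch of just that counting step (category labels follow the question's na/R/O values; variable names are illustrative):

```python
from collections import Counter

# Categorical data as in the question
A = ['na', 'R', 'O', 'na', 'na', 'O', 'R']

# Count occurrences of each category, most frequent first
counts = Counter(A)

# Split into parallel sequences: bar labels and bar heights,
# ready to hand to any bar-plotting call
labels, heights = zip(*counts.most_common())
```

Once the frequencies exist as two parallel sequences, any plotting backend (matplotlib `bar`, pandas `plot.bar`, etc.) can consume them directly.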
51,697,982
I am learning kivy now... I am developing my 1st app for a friend, a very simple one. But I am facing this error: whenever I click "create account", the screen named 'Login(Screen)' loads blank. None of the widgets that I have created in my kivy file show.

Here are the codes:

==========================================================================

python file:

```
from kivy.app import App
from kivy.uix.screenmanager import Screen, ScreenManager

class Gerenciador(ScreenManager):
    pass

class BoasVindas(Screen):
    pass

class Login(Screen):
    def logar(self, usuario, senha):
        print("usuario={0}, senha={1}".format(usuario, senha))

class Resultado(Screen):
    pass

class LoginApp(App):
    def build(self):
        return Gerenciador()

LoginApp().run()
```

========================================================================

kivy file:

```
<Gerenciador>:
    BoasVindas:
        name: 'boasvindas'
        BoxLayout:
            orientation:'vertical'
            Label:
                text:'Faça o seu Login ou crie uma nova conta'
            Button:
                text:'Login'
            Button:
                text:'Criar nova conta'
                on_release:root.current='login'
    Login:
        name: 'login'
        BoxLayout:
            TextInput:
                id:usuario
                hint_text:'Usuário'
                multiline: False
            TextInput:
                id:senha
                hint_text:'Senha'
                multiline: False
                password: True
            Button:
                id:'btn'
                text:'Ok'
                on_press: self.parent.parent.logar(usuario.text, senha.text)
                on_release:root.current='boasvindas'
```

=========================================================================

Any ideas on what I am missing? The first screen loads perfectly. If I swap the order, the Login screen loads well, but the second screen is blank, no matter what content. As long as it is the second screen to load, it returns blank.

Thank you!
2018/08/05
[ "https://Stackoverflow.com/questions/51697982", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10183845/" ]
You can use [elasticsearch](https://github.com/elastic/elasticsearch-rails). You can work with only one search box for all of it. It's a little complicated, but it supports relations too.
That's exactly what you're looking for: <https://github.com/activerecord-hackery/ransack>

In your Gemfile, for the last officially released gem:

```
gem 'ransack'
```

If you would like to use the latest updates (recommended), use the master branch:

```
gem 'ransack', github: 'activerecord-hackery/ransack'
```

And here's an article about how to use it basically: <https://medium.com/@jaspercurry/searching-and-sorting-on-rails-with-ransack-560e862e650a>
51,697,982
I am learning kivy now... I am developing my 1st app for a friend, a very simple one. But I am facing this error: whenever I click "create account", the screen named 'Login(Screen)' loads blank. None of the widgets that I have created in my kivy file show.

Here are the codes:

==========================================================================

python file:

```
from kivy.app import App
from kivy.uix.screenmanager import Screen, ScreenManager

class Gerenciador(ScreenManager):
    pass

class BoasVindas(Screen):
    pass

class Login(Screen):
    def logar(self, usuario, senha):
        print("usuario={0}, senha={1}".format(usuario, senha))

class Resultado(Screen):
    pass

class LoginApp(App):
    def build(self):
        return Gerenciador()

LoginApp().run()
```

========================================================================

kivy file:

```
<Gerenciador>:
    BoasVindas:
        name: 'boasvindas'
        BoxLayout:
            orientation:'vertical'
            Label:
                text:'Faça o seu Login ou crie uma nova conta'
            Button:
                text:'Login'
            Button:
                text:'Criar nova conta'
                on_release:root.current='login'
    Login:
        name: 'login'
        BoxLayout:
            TextInput:
                id:usuario
                hint_text:'Usuário'
                multiline: False
            TextInput:
                id:senha
                hint_text:'Senha'
                multiline: False
                password: True
            Button:
                id:'btn'
                text:'Ok'
                on_press: self.parent.parent.logar(usuario.text, senha.text)
                on_release:root.current='boasvindas'
```

=========================================================================

Any ideas on what I am missing? The first screen loads perfectly. If I swap the order, the Login screen loads well, but the second screen is blank, no matter what content. As long as it is the second screen to load, it returns blank.

Thank you!
2018/08/05
[ "https://Stackoverflow.com/questions/51697982", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10183845/" ]
You can use multi-search in Searchkick for this purpose. <https://github.com/ankane/searchkick#multi-search>
That's exactly what you're looking for: <https://github.com/activerecord-hackery/ransack>

In your Gemfile, for the last officially released gem:

```
gem 'ransack'
```

If you would like to use the latest updates (recommended), use the master branch:

```
gem 'ransack', github: 'activerecord-hackery/ransack'
```

And here's an article about how to use it basically: <https://medium.com/@jaspercurry/searching-and-sorting-on-rails-with-ransack-560e862e650a>
69,710,383
Suppose you have a function Car.find\_owner\_from\_plate\_number(plate\_number) that will raise an Exception if the plate is unknown and return an Owner object if the plate number exists. Now, you do not need the Owner information in your script, just to know whether the plate number exists (i.e. no exception raised).

```
owner = Car.find_owner_from_plate_number('ABC123')
_ = Car.find_owner_from_plate_number('ABC123')
Car.find_owner_from_plate_number('ABC123')
```

With the first, the IDE will complain that owner is not used afterwards.

The second is ok since \_ is a global variable, but it will assign memory in line with Owner's size.

The third should also do the job, cherry on the cake without consuming memory, if I'm correct.

What's the best / more pythonic way between the 2nd and 3rd? I ask because I often see the 2nd way, but I would be tempted to say the 3rd is best.
2021/10/25
[ "https://Stackoverflow.com/questions/69710383", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3122657/" ]
You can use [guards](https://www.python.org/dev/peps/pep-0622/#id8):

```
match a:
    case _ if a < 42:
        print('Less')
    case _ if a == 42:
        print('The answer')
    case _ if a > 42:
        print('Greater')
```

Another option, without guards, using pure pattern matching:

```
match [a < 42, a == 42]:
    case [True, False]:
        print('Less')
    case [_, True]:
        print('The answer')
    case [False, False]:
        print('Greater')
```
A match-case statement inherently is designed for matching equalities (hence the word "match"). In your prototype example you could achieve this by matching with an if statement (as proposed by other answers); however, now you are in essence simply matching True and False, which seems redundant. One way other languages solve this is via comparisons using Enums:

```
from enum import Enum

class Ordering(Enum):
    LESS = 1
    EQUAL = 2
    GREATER = 3

def compare(a, b):
    if a < b:
        return Ordering.LESS
    elif a == b:
        return Ordering.EQUAL
    elif a > b:
        return Ordering.GREATER

match compare(a, 42):
    case Ordering.LESS:
        print("Less")
    case Ordering.EQUAL:
        print("The answer")
    case Ordering.GREATER:
        print("Greater")
```
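A further alternative, offered as a sketch rather than from the thread: the classic cmp-style expression `(a > b) - (a < b)` yields the three-way result directly, so the dispatch can be a plain dict and no `match` statement (or Python 3.10) is needed; the function and dict names are made up for illustration:

```python
def cmp(a, b):
    """Return -1, 0 or 1 depending on how a compares to b."""
    # (a > b) and (a < b) are bools, i.e. 0 or 1, so their
    # difference is exactly -1, 0 or 1
    return (a > b) - (a < b)

# One message per three-way outcome, mirroring the case arms above
messages = {-1: 'Less', 0: 'The answer', 1: 'Greater'}

def describe(a, target=42):
    return messages[cmp(a, target)]
```

This trades the readability of named `Ordering` members for brevity; for anything beyond a toy three-way branch, the Enum version is easier to extend.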
11,797,035
I am having a very bad week having chosen elasticsearch with graylog2. I am trying to run queries against the data in ES using Python. I have tried the following clients.

1. ESClient - Very weird results; I think it's not maintained. query\_body has no effect, it returns all the results.
2. Pyes - Unreadable, undocumented. I have browsed the sources and can't figure out how to run a simple query, maybe I am just not that smart. I can even run base queries in json format and then simply use the Python object/iterators to do my analysis on the results. But Pyes does not make it easy.
3. Elasticutils - Another one that is documented, but without a complete sample. I get the following error with the code attached. I don't even know how it uses this S() to connect to the right host?

es = get\_es(hosts=HOST, default\_indexes=[INDEX])

basic\_s = S().indexes(INDEX).doctypes(DOCTYPE).values\_dict()

results:

```
print basic_s.query(message__text="login/delete")

  File "/usr/lib/python2.7/site-packages/elasticutils/__init__.py", line 223, in __repr__
    data = list(self)[:REPR_OUTPUT_SIZE + 1]
  File "/usr/lib/python2.7/site-packages/elasticutils/__init__.py", line 623, in __iter__
    return iter(self._do_search())
  File "/usr/lib/python2.7/site-packages/elasticutils/__init__.py", line 573, in _do_search
    hits = self.raw()
  File "/usr/lib/python2.7/site-packages/elasticutils/__init__.py", line 615, in raw
    hits = es.search(qs, self.get_indexes(), self.get_doctypes())
  File "/usr/lib/python2.7/site-packages/pyes/es.py", line 841, in search
    return self._query_call("_search", body, indexes, doc_types, **query_params)
  File "/usr/lib/python2.7/site-packages/pyes/es.py", line 251, in _query_call
    response = self._send_request('GET', path, body, querystring_args)
  File "/usr/lib/python2.7/site-packages/pyes/es.py", line 208, in _send_request
    response = self.connection.execute(request)
  File "/usr/lib/python2.7/site-packages/pyes/connection_http.py", line 167, in _client_call
    return getattr(conn.client, attr)(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/pyes/connection_http.py", line 59, in execute
    response = self.client.urlopen(Method._VALUES_TO_NAMES[request.method], uri, body=request.body, headers=request.headers)
  File "/usr/lib/python2.7/site-packages/pyes/urllib3/connectionpool.py", line 294, in urlopen
    return self.urlopen(method, url, body, headers, retries-1, redirect)  # Try again
  File "/usr/lib/python2.7/site-packages/pyes/urllib3/connectionpool.py", line 294, in urlopen
    return self.urlopen(method, url, body, headers, retries-1, redirect)  # Try again
  File "/usr/lib/python2.7/site-packages/pyes/urllib3/connectionpool.py", line 294, in urlopen
    return self.urlopen(method, url, body, headers, retries-1, redirect)  # Try again
  File "/usr/lib/python2.7/site-packages/pyes/urllib3/connectionpool.py", line 294, in urlopen
    return self.urlopen(method, url, body, headers, retries-1, redirect)  # Try again
  File "/usr/lib/python2.7/site-packages/pyes/urllib3/connectionpool.py", line 255, in urlopen
    raise MaxRetryError("Max retries exceeded for url: %s" % url)
pyes.urllib3.connectionpool.MaxRetryError: Max retries exceeded for url: /graylog2/message/_search
```

I wish the devs of these good projects would provide some complete examples. Even looking at the sources, I am at a complete loss. Is there any solution or help out there for me with elasticsearch and python, or should I just drop all of this, pay for a nice splunk account, and end this misery?

I am proceeding with using curl, downloading the entire json result and json-loading it. Hope that works, though curl downloading 1 million messages from elasticsearch may not just happen.
2012/08/03
[ "https://Stackoverflow.com/questions/11797035", "https://Stackoverflow.com", "https://Stackoverflow.com/users/614355/" ]
Honestly, I've had the most luck with just CURLing everything. ES has so many different methods, filters, and queries that various "wrappers" have a hard time recreating all the functionality. In my view, it is similar to using an ORM for databases...what you gain in ease of use you lose in flexibility/raw power. Except most of the wrappers for ES aren't really that easy to use. I'd give CURL a try for a while and see how that treats you. You can use external JSON formatters to check your JSON, the mailing list to look for examples and the docs are ok if you use JSON.
Explicitly setting the host resolved that error for me: `basic_s = S()`**`.es(hosts=HOST, default_indexes=[INDEX])`**
11,797,035
I am having a very bad week having chosen elasticsearch with graylog2. I am trying to run queries against the data in ES using Python. I have tried the following clients.

1. ESClient - Very weird results; I think it's not maintained. query\_body has no effect, it returns all the results.
2. Pyes - Unreadable, undocumented. I have browsed the sources and can't figure out how to run a simple query, maybe I am just not that smart. I can even run base queries in json format and then simply use the Python object/iterators to do my analysis on the results. But Pyes does not make it easy.
3. Elasticutils - Another one that is documented, but without a complete sample. I get the following error with the code attached. I don't even know how it uses this S() to connect to the right host?

es = get\_es(hosts=HOST, default\_indexes=[INDEX])

basic\_s = S().indexes(INDEX).doctypes(DOCTYPE).values\_dict()

results:

```
print basic_s.query(message__text="login/delete")

  File "/usr/lib/python2.7/site-packages/elasticutils/__init__.py", line 223, in __repr__
    data = list(self)[:REPR_OUTPUT_SIZE + 1]
  File "/usr/lib/python2.7/site-packages/elasticutils/__init__.py", line 623, in __iter__
    return iter(self._do_search())
  File "/usr/lib/python2.7/site-packages/elasticutils/__init__.py", line 573, in _do_search
    hits = self.raw()
  File "/usr/lib/python2.7/site-packages/elasticutils/__init__.py", line 615, in raw
    hits = es.search(qs, self.get_indexes(), self.get_doctypes())
  File "/usr/lib/python2.7/site-packages/pyes/es.py", line 841, in search
    return self._query_call("_search", body, indexes, doc_types, **query_params)
  File "/usr/lib/python2.7/site-packages/pyes/es.py", line 251, in _query_call
    response = self._send_request('GET', path, body, querystring_args)
  File "/usr/lib/python2.7/site-packages/pyes/es.py", line 208, in _send_request
    response = self.connection.execute(request)
  File "/usr/lib/python2.7/site-packages/pyes/connection_http.py", line 167, in _client_call
    return getattr(conn.client, attr)(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/pyes/connection_http.py", line 59, in execute
    response = self.client.urlopen(Method._VALUES_TO_NAMES[request.method], uri, body=request.body, headers=request.headers)
  File "/usr/lib/python2.7/site-packages/pyes/urllib3/connectionpool.py", line 294, in urlopen
    return self.urlopen(method, url, body, headers, retries-1, redirect)  # Try again
  File "/usr/lib/python2.7/site-packages/pyes/urllib3/connectionpool.py", line 294, in urlopen
    return self.urlopen(method, url, body, headers, retries-1, redirect)  # Try again
  File "/usr/lib/python2.7/site-packages/pyes/urllib3/connectionpool.py", line 294, in urlopen
    return self.urlopen(method, url, body, headers, retries-1, redirect)  # Try again
  File "/usr/lib/python2.7/site-packages/pyes/urllib3/connectionpool.py", line 294, in urlopen
    return self.urlopen(method, url, body, headers, retries-1, redirect)  # Try again
  File "/usr/lib/python2.7/site-packages/pyes/urllib3/connectionpool.py", line 255, in urlopen
    raise MaxRetryError("Max retries exceeded for url: %s" % url)
pyes.urllib3.connectionpool.MaxRetryError: Max retries exceeded for url: /graylog2/message/_search
```

I wish the devs of these good projects would provide some complete examples. Even looking at the sources, I am at a complete loss. Is there any solution or help out there for me with elasticsearch and python, or should I just drop all of this, pay for a nice splunk account, and end this misery?

I am proceeding with using curl, downloading the entire json result and json-loading it. Hope that works, though curl downloading 1 million messages from elasticsearch may not just happen.
2012/08/03
[ "https://Stackoverflow.com/questions/11797035", "https://Stackoverflow.com", "https://Stackoverflow.com/users/614355/" ]
Explicitly setting the host resolved that error for me: `basic_s = S()`**`.es(hosts=HOST, default_indexes=[INDEX])`**
FWIW, PYES docs are here: <http://packages.python.org/pyes/index.html> Usage: <http://packages.python.org/pyes/manual/usage.html>
11,797,035
I am having a very bad week having chosen elasticsearch with graylog2. I am trying to run queries against the data in ES using Python. I have tried the following clients.

1. ESClient - Very weird results; I think it's not maintained. query\_body has no effect, it returns all the results.
2. Pyes - Unreadable, undocumented. I have browsed the sources and can't figure out how to run a simple query, maybe I am just not that smart. I can even run base queries in json format and then simply use the Python object/iterators to do my analysis on the results. But Pyes does not make it easy.
3. Elasticutils - Another one that is documented, but without a complete sample. I get the following error with the code attached. I don't even know how it uses this S() to connect to the right host?

es = get\_es(hosts=HOST, default\_indexes=[INDEX])

basic\_s = S().indexes(INDEX).doctypes(DOCTYPE).values\_dict()

results:

```
print basic_s.query(message__text="login/delete")

  File "/usr/lib/python2.7/site-packages/elasticutils/__init__.py", line 223, in __repr__
    data = list(self)[:REPR_OUTPUT_SIZE + 1]
  File "/usr/lib/python2.7/site-packages/elasticutils/__init__.py", line 623, in __iter__
    return iter(self._do_search())
  File "/usr/lib/python2.7/site-packages/elasticutils/__init__.py", line 573, in _do_search
    hits = self.raw()
  File "/usr/lib/python2.7/site-packages/elasticutils/__init__.py", line 615, in raw
    hits = es.search(qs, self.get_indexes(), self.get_doctypes())
  File "/usr/lib/python2.7/site-packages/pyes/es.py", line 841, in search
    return self._query_call("_search", body, indexes, doc_types, **query_params)
  File "/usr/lib/python2.7/site-packages/pyes/es.py", line 251, in _query_call
    response = self._send_request('GET', path, body, querystring_args)
  File "/usr/lib/python2.7/site-packages/pyes/es.py", line 208, in _send_request
    response = self.connection.execute(request)
  File "/usr/lib/python2.7/site-packages/pyes/connection_http.py", line 167, in _client_call
    return getattr(conn.client, attr)(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/pyes/connection_http.py", line 59, in execute
    response = self.client.urlopen(Method._VALUES_TO_NAMES[request.method], uri, body=request.body, headers=request.headers)
  File "/usr/lib/python2.7/site-packages/pyes/urllib3/connectionpool.py", line 294, in urlopen
    return self.urlopen(method, url, body, headers, retries-1, redirect)  # Try again
  File "/usr/lib/python2.7/site-packages/pyes/urllib3/connectionpool.py", line 294, in urlopen
    return self.urlopen(method, url, body, headers, retries-1, redirect)  # Try again
  File "/usr/lib/python2.7/site-packages/pyes/urllib3/connectionpool.py", line 294, in urlopen
    return self.urlopen(method, url, body, headers, retries-1, redirect)  # Try again
  File "/usr/lib/python2.7/site-packages/pyes/urllib3/connectionpool.py", line 294, in urlopen
    return self.urlopen(method, url, body, headers, retries-1, redirect)  # Try again
  File "/usr/lib/python2.7/site-packages/pyes/urllib3/connectionpool.py", line 255, in urlopen
    raise MaxRetryError("Max retries exceeded for url: %s" % url)
pyes.urllib3.connectionpool.MaxRetryError: Max retries exceeded for url: /graylog2/message/_search
```

I wish the devs of these good projects would provide some complete examples. Even looking at the sources, I am at a complete loss. Is there any solution or help out there for me with elasticsearch and python, or should I just drop all of this, pay for a nice splunk account, and end this misery?

I am proceeding with using curl, downloading the entire json result and json-loading it. Hope that works, though curl downloading 1 million messages from elasticsearch may not just happen.
2012/08/03
[ "https://Stackoverflow.com/questions/11797035", "https://Stackoverflow.com", "https://Stackoverflow.com/users/614355/" ]
Explicitly setting the host resolved that error for me: `basic_s = S()`**`.es(hosts=HOST, default_indexes=[INDEX])`**
ElasticSearch [recently](http://www.elasticsearch.org/blog/unleash-the-clients-ruby-python-php-perl/) (Sept 2013) released an official Python client [elasticsearch-py](http://www.elasticsearch.org/guide/en/elasticsearch/client/python-api/current/index.html) (elasticsearch on PyPI, also on [github](https://github.com/elasticsearch/elasticsearch-py)), which is supposed to be a fairly direct mapping to the official ElasticSearch API. I haven't used it yet, but it looks promising, and at least it will match the official docs! Edit: We started using it, and I'm very happy with it. ElasticSearch's API is pretty clean, and elasticsearch-py maintains that. Easier to work with and debug in general, plus decent logging.
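To make this concrete, here is a minimal sketch of what a query could look like with elasticsearch-py, using the index and search text from the question. The host, index name, and a running cluster are assumptions, so the network call is kept inside a function and only the query body is exercised:

```python
import json

# elasticsearch-py takes the query body as a plain dict and sends it
# through as JSON, so it maps one-to-one onto the REST API docs.
query = {
    "query": {"match": {"message": "login/delete"}},
    "size": 10,
}

def run_search(host="localhost:9200", index="graylog2"):
    # Hypothetical usage: needs the `elasticsearch` package and a
    # reachable cluster, so it is not executed in this sketch.
    from elasticsearch import Elasticsearch
    es = Elasticsearch([host])
    return es.search(index=index, body=query)

# The body itself is ordinary JSON-serializable data:
print(json.dumps(query, sort_keys=True))
```

Because the body is just a dict, any query from the ES guide can be pasted straight in, which sidesteps the wrapper-lag problem the other answers complain about.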
Honestly, I've had the most luck with just CURLing everything. ES has so many different methods, filters, and queries that various "wrappers" have a hard time recreating all the functionality. In my view, it is similar to using an ORM for databases...what you gain in ease of use you lose in flexibility/raw power. Except most of the wrappers for ES aren't really that easy to use. I'd give CURL a try for a while and see how that treats you. You can use external JSON formatters to check your JSON, the mailing list to look for examples and the docs are ok if you use JSON.
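The question already mentions falling back to curl plus `json.load`; the same raw-HTTP approach can be sketched with nothing but the Python standard library. The host and index below are assumptions, and the request is only built, not sent:

```python
import json
import urllib.request  # urllib2 on the Python 2.7 from the traceback

def raw_search(body, host="http://localhost:9200", index="graylog2"):
    """POST a hand-written JSON query, exactly as `curl -d` would."""
    req = urllib.request.Request(
        "%s/%s/_search" % (host, index),
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # needs a live cluster
        return json.load(resp)

# The payload is the same JSON you would hand to curl:
body = {"query": {"match": {"message": "login/delete"}}, "size": 50}
print(json.dumps(body))
```

For the million-message download mentioned in the question, paging (or the scroll API, if the ES version supports it) is usually a better route than one giant `_search`.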
FWIW, PYES docs are here: <http://packages.python.org/pyes/index.html> Usage: <http://packages.python.org/pyes/manual/usage.html>
ElasticUtils has sample code: <http://elasticutils.readthedocs.org/en/latest/sampleprogram1.html> If there are other things you need in the docs, just ask.
I have found rawes to be quite usable: <https://github.com/humangeo/rawes> It's a rather low-level interface but I have found it to be much less awkward to work with than the high-level ones. It also supports the Thrift RPC if you're into that.
ElasticUtils has sample code: <http://elasticutils.readthedocs.org/en/latest/sampleprogram1.html> If there are other things you need in the docs, just ask.
11,797,035
I am having a very bad week having chosen elasticsearch with graylog2. I am trying to run queries against the data in ES using Python. I have tried following clients. 1. ESClient - Very weird results, I think its not maintained, query\_body has no effect it returns all the results. 2. Pyes - Unreadable, undocumented. I have browsed sources and cant figure out how to run a simple query, maybe i am just not that smart. I can even run base queries in json format and then simply use the Python object/iterators to do my analysis on the results. But Pyes does not make it easy. 3. Elasticutils - Another documented, but without a complete sample. I get the following error with code attached. I don't even know how it uses this S() to connect to the right host? es = get\_es(hosts=HOST, default\_indexes=[INDEX]) basic\_s = S().indexes(INDEX).doctypes(DOCTYPE).values\_dict() results: ``` print basic_s.query(message__text="login/delete") File "/usr/lib/python2.7/site-packages/elasticutils/__init__.py", line 223, in __repr__ data = list(self)[:REPR_OUTPUT_SIZE + 1] File "/usr/lib/python2.7/site-packages/elasticutils/__init__.py", line 623, in __iter__ return iter(self._do_search()) File "/usr/lib/python2.7/site-packages/elasticutils/__init__.py", line 573, in _do_search hits = self.raw() File "/usr/lib/python2.7/site-packages/elasticutils/__init__.py", line 615, in raw hits = es.search(qs, self.get_indexes(), self.get_doctypes()) File "/usr/lib/python2.7/site-packages/pyes/es.py", line 841, in search return self._query_call("_search", body, indexes, doc_types, **query_params) File "/usr/lib/python2.7/site-packages/pyes/es.py", line 251, in _query_call response = self._send_request('GET', path, body, querystring_args) File "/usr/lib/python2.7/site-packages/pyes/es.py", line 208, in _send_request response = self.connection.execute(request) File "/usr/lib/python2.7/site-packages/pyes/connection_http.py", line 167, in _client_call return getattr(conn.client, attr)(*args, **kwargs) 
File "/usr/lib/python2.7/site-packages/pyes/connection_http.py", line 59, in execute response = self.client.urlopen(Method._VALUES_TO_NAMES[request.method], uri, body=request.body, headers=request.headers) File "/usr/lib/python2.7/site-packages/pyes/urllib3/connectionpool.py", line 294, in urlopen return self.urlopen(method, url, body, headers, retries-1, redirect) # Try again File "/usr/lib/python2.7/site-packages/pyes/urllib3/connectionpool.py", line 294, in urlopen return self.urlopen(method, url, body, headers, retries-1, redirect) # Try again File "/usr/lib/python2.7/site-packages/pyes/urllib3/connectionpool.py", line 294, in urlopen return self.urlopen(method, url, body, headers, retries-1, redirect) # Try again File "/usr/lib/python2.7/site-packages/pyes/urllib3/connectionpool.py", line 294, in urlopen return self.urlopen(method, url, body, headers, retries-1, redirect) # Try again File "/usr/lib/python2.7/site-packages/pyes/urllib3/connectionpool.py", line 255, in urlopen raise MaxRetryError("Max retries exceeded for url: %s" % url) pyes.urllib3.connectionpool.MaxRetryError: Max retries exceeded for url: /graylog2/message/_search ``` I wish the devs of this good projects would provide some complete examples. Even looking at sources I am t a complete loss. Is there any solution, help out there for me with elasticsearch and python or should I just drop all of this and pay for a nice splunk account and end this misery. I am proceeding with using curl, download the entire json result and json load it. Hope that works, though curl downloading 1 million messages from elasticsearch may not just happen.
2012/08/03
[ "https://Stackoverflow.com/questions/11797035", "https://Stackoverflow.com", "https://Stackoverflow.com/users/614355/" ]
I have found rawes to be quite usable: <https://github.com/humangeo/rawes> It's a rather low-level interface but I have found it to be much less awkward to work with than the high-level ones. It also supports the Thrift RPC if you're into that.
ElasticSearch [recently](http://www.elasticsearch.org/blog/unleash-the-clients-ruby-python-php-perl/) (Sept 2013) released an official Python client [elasticsearch-py](http://www.elasticsearch.org/guide/en/elasticsearch/client/python-api/current/index.html) (elasticsearch on PyPI, also on [github](https://github.com/elasticsearch/elasticsearch-py)), which is supposed to be a fairly direct mapping to the official ElasticSearch API. I haven't used it yet, but it looks promising, and at least it will match the official docs! Edit: We started using it, and I'm very happy with it. ElasticSearch's API is pretty clean, and elasticsearch-py maintains that. Easier to work with and debug in general, plus decent logging.
11,797,035
I am having a very bad week having chosen elasticsearch with graylog2. I am trying to run queries against the data in ES using Python. I have tried following clients. 1. ESClient - Very weird results, I think its not maintained, query\_body has no effect it returns all the results. 2. Pyes - Unreadable, undocumented. I have browsed sources and cant figure out how to run a simple query, maybe i am just not that smart. I can even run base queries in json format and then simply use the Python object/iterators to do my analysis on the results. But Pyes does not make it easy. 3. Elasticutils - Another documented, but without a complete sample. I get the following error with code attached. I don't even know how it uses this S() to connect to the right host? es = get\_es(hosts=HOST, default\_indexes=[INDEX]) basic\_s = S().indexes(INDEX).doctypes(DOCTYPE).values\_dict() results: ``` print basic_s.query(message__text="login/delete") File "/usr/lib/python2.7/site-packages/elasticutils/__init__.py", line 223, in __repr__ data = list(self)[:REPR_OUTPUT_SIZE + 1] File "/usr/lib/python2.7/site-packages/elasticutils/__init__.py", line 623, in __iter__ return iter(self._do_search()) File "/usr/lib/python2.7/site-packages/elasticutils/__init__.py", line 573, in _do_search hits = self.raw() File "/usr/lib/python2.7/site-packages/elasticutils/__init__.py", line 615, in raw hits = es.search(qs, self.get_indexes(), self.get_doctypes()) File "/usr/lib/python2.7/site-packages/pyes/es.py", line 841, in search return self._query_call("_search", body, indexes, doc_types, **query_params) File "/usr/lib/python2.7/site-packages/pyes/es.py", line 251, in _query_call response = self._send_request('GET', path, body, querystring_args) File "/usr/lib/python2.7/site-packages/pyes/es.py", line 208, in _send_request response = self.connection.execute(request) File "/usr/lib/python2.7/site-packages/pyes/connection_http.py", line 167, in _client_call return getattr(conn.client, attr)(*args, **kwargs) 
File "/usr/lib/python2.7/site-packages/pyes/connection_http.py", line 59, in execute response = self.client.urlopen(Method._VALUES_TO_NAMES[request.method], uri, body=request.body, headers=request.headers) File "/usr/lib/python2.7/site-packages/pyes/urllib3/connectionpool.py", line 294, in urlopen return self.urlopen(method, url, body, headers, retries-1, redirect) # Try again File "/usr/lib/python2.7/site-packages/pyes/urllib3/connectionpool.py", line 294, in urlopen return self.urlopen(method, url, body, headers, retries-1, redirect) # Try again File "/usr/lib/python2.7/site-packages/pyes/urllib3/connectionpool.py", line 294, in urlopen return self.urlopen(method, url, body, headers, retries-1, redirect) # Try again File "/usr/lib/python2.7/site-packages/pyes/urllib3/connectionpool.py", line 294, in urlopen return self.urlopen(method, url, body, headers, retries-1, redirect) # Try again File "/usr/lib/python2.7/site-packages/pyes/urllib3/connectionpool.py", line 255, in urlopen raise MaxRetryError("Max retries exceeded for url: %s" % url) pyes.urllib3.connectionpool.MaxRetryError: Max retries exceeded for url: /graylog2/message/_search ``` I wish the devs of this good projects would provide some complete examples. Even looking at sources I am t a complete loss. Is there any solution, help out there for me with elasticsearch and python or should I just drop all of this and pay for a nice splunk account and end this misery. I am proceeding with using curl, download the entire json result and json load it. Hope that works, though curl downloading 1 million messages from elasticsearch may not just happen.
2012/08/03
[ "https://Stackoverflow.com/questions/11797035", "https://Stackoverflow.com", "https://Stackoverflow.com/users/614355/" ]
Honestly, I've had the most luck with just CURLing everything. ES has so many different methods, filters, and queries that various "wrappers" have a hard time recreating all the functionality. In my view, it is similar to using an ORM for databases...what you gain in ease of use you lose in flexibility/raw power. Except most of the wrappers for ES aren't really that easy to use. I'd give CURL a try for a while and see how that treats you. You can use external JSON formatters to check your JSON, the mailing list to look for examples and the docs are ok if you use JSON.
ElasticUtils has sample code: <http://elasticutils.readthedocs.org/en/latest/sampleprogram1.html> If there are other things you need in the docs, just ask.
11,797,035
I am having a very bad week having chosen elasticsearch with graylog2. I am trying to run queries against the data in ES using Python. I have tried following clients. 1. ESClient - Very weird results, I think its not maintained, query\_body has no effect it returns all the results. 2. Pyes - Unreadable, undocumented. I have browsed sources and cant figure out how to run a simple query, maybe i am just not that smart. I can even run base queries in json format and then simply use the Python object/iterators to do my analysis on the results. But Pyes does not make it easy. 3. Elasticutils - Another documented, but without a complete sample. I get the following error with code attached. I don't even know how it uses this S() to connect to the right host? es = get\_es(hosts=HOST, default\_indexes=[INDEX]) basic\_s = S().indexes(INDEX).doctypes(DOCTYPE).values\_dict() results: ``` print basic_s.query(message__text="login/delete") File "/usr/lib/python2.7/site-packages/elasticutils/__init__.py", line 223, in __repr__ data = list(self)[:REPR_OUTPUT_SIZE + 1] File "/usr/lib/python2.7/site-packages/elasticutils/__init__.py", line 623, in __iter__ return iter(self._do_search()) File "/usr/lib/python2.7/site-packages/elasticutils/__init__.py", line 573, in _do_search hits = self.raw() File "/usr/lib/python2.7/site-packages/elasticutils/__init__.py", line 615, in raw hits = es.search(qs, self.get_indexes(), self.get_doctypes()) File "/usr/lib/python2.7/site-packages/pyes/es.py", line 841, in search return self._query_call("_search", body, indexes, doc_types, **query_params) File "/usr/lib/python2.7/site-packages/pyes/es.py", line 251, in _query_call response = self._send_request('GET', path, body, querystring_args) File "/usr/lib/python2.7/site-packages/pyes/es.py", line 208, in _send_request response = self.connection.execute(request) File "/usr/lib/python2.7/site-packages/pyes/connection_http.py", line 167, in _client_call return getattr(conn.client, attr)(*args, **kwargs) 
File "/usr/lib/python2.7/site-packages/pyes/connection_http.py", line 59, in execute response = self.client.urlopen(Method._VALUES_TO_NAMES[request.method], uri, body=request.body, headers=request.headers) File "/usr/lib/python2.7/site-packages/pyes/urllib3/connectionpool.py", line 294, in urlopen return self.urlopen(method, url, body, headers, retries-1, redirect) # Try again File "/usr/lib/python2.7/site-packages/pyes/urllib3/connectionpool.py", line 294, in urlopen return self.urlopen(method, url, body, headers, retries-1, redirect) # Try again File "/usr/lib/python2.7/site-packages/pyes/urllib3/connectionpool.py", line 294, in urlopen return self.urlopen(method, url, body, headers, retries-1, redirect) # Try again File "/usr/lib/python2.7/site-packages/pyes/urllib3/connectionpool.py", line 294, in urlopen return self.urlopen(method, url, body, headers, retries-1, redirect) # Try again File "/usr/lib/python2.7/site-packages/pyes/urllib3/connectionpool.py", line 255, in urlopen raise MaxRetryError("Max retries exceeded for url: %s" % url) pyes.urllib3.connectionpool.MaxRetryError: Max retries exceeded for url: /graylog2/message/_search ``` I wish the devs of this good projects would provide some complete examples. Even looking at sources I am t a complete loss. Is there any solution, help out there for me with elasticsearch and python or should I just drop all of this and pay for a nice splunk account and end this misery. I am proceeding with using curl, download the entire json result and json load it. Hope that works, though curl downloading 1 million messages from elasticsearch may not just happen.
2012/08/03
[ "https://Stackoverflow.com/questions/11797035", "https://Stackoverflow.com", "https://Stackoverflow.com/users/614355/" ]
I have found rawes to be quite usable: <https://github.com/humangeo/rawes> It's a rather low-level interface but I have found it to be much less awkward to work with than the high-level ones. It also supports the Thrift RPC if you're into that.
FWIW, PYES docs are here: <http://packages.python.org/pyes/index.html> Usage: <http://packages.python.org/pyes/manual/usage.html>
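For anyone hand-rolling the raw-JSON approach the asker falls back to: an Elasticsearch query body is just a dict, so it can be built and serialized with the standard library alone. The field/value names below are made up for illustration, and this only builds the body — POSTing it to `/<index>/_search` with curl or urllib is left to the caller:

```python
import json

def build_term_query(field, value, size=100):
    """Build an Elasticsearch-style query body as a plain dict.

    The serialized JSON can be POSTed to /<index>/_search with curl
    or urllib; no client library is required.
    """
    return {
        "query": {"term": {field: value}},
        "size": size,
    }

body = build_term_query("message", "login/delete")
print(json.dumps(body, sort_keys=True))
```

Pulling a million hits would still need paging (`from`/`size`) or a scrolling mechanism rather than one giant request.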
61,817,908
Learning from the book "Python Crash Course, Second Edition". I'm getting SyntaxErrors for the code that is being taught inside the book and don't understand why. ```
bicycles = ['trek', 'cannondale', 'redline', 'specialized']
message = f"My first bicycle was a {bicycles[0].title()}."
print(bicycles[0].title())
print(message)
``` Any reasons why? Is the book incorrect? I'm using Sublime Text on a MacBook Pro. Thanks!
2020/05/15
[ "https://Stackoverflow.com/questions/61817908", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11287873/" ]
You might be using Python version 3.5 or below. The line `message = f"My first bicycle was a {bicycles[0].title()}."` uses an f-string, and f-strings were introduced in Python 3.6. So check your current Python version; if your version is below 3.6, then surely that's the error's root cause. To learn more about Python f-strings, visit <https://www.python.org/dev/peps/pep-0498/>
The code runs OK for me. The syntax with `f` (`f"My first bicycle was a {bicycles[0].title()}."`) is new from Python 3.6. Check that your Python version is recent enough. It's also useful to post the exact error you get.
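To see the two spellings side by side — the f-string needs Python 3.6+, while `str.format` works on older interpreters as well and produces the same string:

```python
bicycles = ['trek', 'cannondale', 'redline', 'specialized']

# Python >= 3.6 only: f-string
message_f = f"My first bicycle was a {bicycles[0].title()}."

# Equivalent on older versions: str.format
message_fmt = "My first bicycle was a {}.".format(bicycles[0].title())

print(message_f)  # My first bicycle was a Trek.
```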
50,561,222
When I am trying to take user input in Python, it takes the input on the next line, but I want it to take the input on the same line. How do I achieve that? I am taking input like this ``` print("Enter your name:",end=" ") ``` It is showing on the console as ``` Enter your name: Ankit ``` but I want it as ``` Enter your name:Ankit ```
2018/05/28
[ "https://Stackoverflow.com/questions/50561222", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7900604/" ]
You need to use the `input` method: ``` response = input("Enter your name:") ``` (or `raw_input` for python 2)
By using the `input()` method. Just type: ``` userinput = input("Enter your name: ") ```
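A self-contained way to see that `input(prompt)` keeps the cursor on the prompt line: the snippet below fakes the user typing "Ankit" and captures stdout, so the `io` plumbing is test scaffolding rather than part of the answer itself.

```python
import io
import sys

# input() writes its prompt argument with no trailing newline,
# so the typed text appears on the same line as the prompt.
sys.stdin = io.StringIO("Ankit\n")   # simulate the user typing "Ankit"
captured = io.StringIO()
sys.stdout = captured                # capture what input() prints
name = input("Enter your name:")
sys.stdout = sys.__stdout__          # restore normal printing

print(repr(captured.getvalue()))  # 'Enter your name:' -- no newline appended
print(name)                       # Ankit
```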
50,561,222
When I am trying to take user input in Python, it takes the input on the next line, but I want it to take the input on the same line. How do I achieve that? I am taking input like this ``` print("Enter your name:",end=" ") ``` It is showing on the console as ``` Enter your name: Ankit ``` but I want it as ``` Enter your name:Ankit ```
2018/05/28
[ "https://Stackoverflow.com/questions/50561222", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7900604/" ]
You need to use the `input` method: ``` response = input("Enter your name:") ``` (or `raw_input` for python 2)
If you are using `Python 2.x`: ``` response = raw_input("Enter your name:") ``` If you are using `Python 3.x`: ``` response = input("Enter your name:") ``` **Alternate solution:** For Python 2.x: ```
print("Enter your name:"),
response = raw_input()
``` For Python 3.x: ```
print("Enter your name:", end="")
response = input()
```
4,422,491
I want a bash way to read lines from standard input (so I can pipe input to it), and remove just the leading and trailing space characters. Piping to echo does not work. For example, if the input is: ``` 12 s3c sd wqr ``` the output should be: ``` 12 s3c sd wqr ``` I want to avoid writing a python script or similar for something as trivial as this. Any help is appreciated!
2010/12/12
[ "https://Stackoverflow.com/questions/4422491", "https://Stackoverflow.com", "https://Stackoverflow.com/users/145537/" ]
```
$ trim () { read -r line; echo "$line"; }
$ echo " aa bb cc " | trim
aa bb cc
$ a=$(echo " aa bb cc " | trim)
$ echo "..$a.."
..aa bb cc..
```

To make it work for multi-line input, just add a `while` loop:

```
trim () { while read -r line; do echo "$line"; done; }
```

Using `sed` with only *one* substitution:

```
sed 's/^\s*\(.*[^ \t]\)\(\s\+\)*$/\1/'
```
Add this: `| sed -r 's/\s*(.*?)\s*$/\1/'`
4,422,491
I want a bash way to read lines from standard input (so I can pipe input to it), and remove just the leading and trailing space characters. Piping to echo does not work. For example, if the input is: ``` 12 s3c sd wqr ``` the output should be: ``` 12 s3c sd wqr ``` I want to avoid writing a python script or similar for something as trivial as this. Any help is appreciated!
2010/12/12
[ "https://Stackoverflow.com/questions/4422491", "https://Stackoverflow.com", "https://Stackoverflow.com/users/145537/" ]
Add this: `| sed -r 's/\s*(.*?)\s*$/\1/'`
```
grep -o -E '\S.*\S|\S'
```

Explanation:

* `-o` - print only matches
* `-E` - use extended regular expression syntax
* `'\S.*\S'`:
  + match the first non-space symbol, then **greedy** match any number of any symbols, then match a non-space symbol
  + or, if the first part is not matched (i.e. there are *no two* non-space symbols), match a single non-space symbol
4,422,491
I want a bash way to read lines from standard input (so I can pipe input to it), and remove just the leading and trailing space characters. Piping to echo does not work. For example, if the input is: ``` 12 s3c sd wqr ``` the output should be: ``` 12 s3c sd wqr ``` I want to avoid writing a python script or similar for something as trivial as this. Any help is appreciated!
2010/12/12
[ "https://Stackoverflow.com/questions/4422491", "https://Stackoverflow.com", "https://Stackoverflow.com/users/145537/" ]
```
$ trim () { read -r line; echo "$line"; }
$ echo " aa bb cc " | trim
aa bb cc
$ a=$(echo " aa bb cc " | trim)
$ echo "..$a.."
..aa bb cc..
```

To make it work for multi-line input, just add a `while` loop:

```
trim () { while read -r line; do echo "$line"; done; }
```

Using `sed` with only *one* substitution:

```
sed 's/^\s*\(.*[^ \t]\)\(\s\+\)*$/\1/'
```
```
grep -o -E '\S.*\S|\S'
```

Explanation:

* `-o` - print only matches
* `-E` - use extended regular expression syntax
* `'\S.*\S'`:
  + match the first non-space symbol, then **greedy** match any number of any symbols, then match a non-space symbol
  + or, if the first part is not matched (i.e. there are *no two* non-space symbols), match a single non-space symbol
4,422,491
I want a bash way to read lines from standard input (so I can pipe input to it), and remove just the leading and trailing space characters. Piping to echo does not work. For example, if the input is: ``` 12 s3c sd wqr ``` the output should be: ``` 12 s3c sd wqr ``` I want to avoid writing a python script or similar for something as trivial as this. Any help is appreciated!
2010/12/12
[ "https://Stackoverflow.com/questions/4422491", "https://Stackoverflow.com", "https://Stackoverflow.com/users/145537/" ]
You can use sed to trim it. ``` sed 's/^ *//;s/ *$//' ``` You can test it really easily on a command line by doing: ``` echo -n " 12 s3c " | sed 's/^ *//;s/ *$//' && echo c ```
```
your_command | xargs -L1 echo
```

This works because `echo` converts all tabs to spaces and then all multiple spaces to a single space, not only leading and trailing, see example:

```
$ printf " 1\t\t\t2 3"
 1			2 3
$ echo `printf " 1\t\t\t2 3"`
1 2 3
```

The drawback is that it will also remove some useful characters like `\` `'` `"`.
4,422,491
I want a bash way to read lines from standard input (so I can pipe input to it), and remove just the leading and trailing space characters. Piping to echo does not work. For example, if the input is: ``` 12 s3c sd wqr ``` the output should be: ``` 12 s3c sd wqr ``` I want to avoid writing a python script or similar for something as trivial as this. Any help is appreciated!
2010/12/12
[ "https://Stackoverflow.com/questions/4422491", "https://Stackoverflow.com", "https://Stackoverflow.com/users/145537/" ]
You can use sed to trim it. ``` sed 's/^ *//;s/ *$//' ``` You can test it really easily on a command line by doing: ``` echo -n " 12 s3c " | sed 's/^ *//;s/ *$//' && echo c ```
Add this: `| sed -r 's/\s*(.*?)\s*$/\1/'`
4,422,491
I want a bash way to read lines from standard input (so I can pipe input to it), and remove just the leading and trailing space characters. Piping to echo does not work. For example, if the input is: ``` 12 s3c sd wqr ``` the output should be: ``` 12 s3c sd wqr ``` I want to avoid writing a python script or similar for something as trivial as this. Any help is appreciated!
2010/12/12
[ "https://Stackoverflow.com/questions/4422491", "https://Stackoverflow.com", "https://Stackoverflow.com/users/145537/" ]
```
$ trim () { read -r line; echo "$line"; }
$ echo " aa bb cc " | trim
aa bb cc
$ a=$(echo " aa bb cc " | trim)
$ echo "..$a.."
..aa bb cc..
```

To make it work for multi-line input, just add a `while` loop:

```
trim () { while read -r line; do echo "$line"; done; }
```

Using `sed` with only *one* substitution:

```
sed 's/^\s*\(.*[^ \t]\)\(\s\+\)*$/\1/'
```
```
your_command | xargs -L1 echo
```

This works because `echo` converts all tabs to spaces and then all multiple spaces to a single space, not only leading and trailing, see example:

```
$ printf " 1\t\t\t2 3"
 1			2 3
$ echo `printf " 1\t\t\t2 3"`
1 2 3
```

The drawback is that it will also remove some useful characters like `\` `'` `"`.
4,422,491
I want a bash way to read lines from standard input (so I can pipe input to it), and remove just the leading and trailing space characters. Piping to echo does not work. For example, if the input is: ``` 12 s3c sd wqr ``` the output should be: ``` 12 s3c sd wqr ``` I want to avoid writing a python script or similar for something as trivial as this. Any help is appreciated!
2010/12/12
[ "https://Stackoverflow.com/questions/4422491", "https://Stackoverflow.com", "https://Stackoverflow.com/users/145537/" ]
You can use sed to trim it. ``` sed 's/^ *//;s/ *$//' ``` You can test it really easily on a command line by doing: ``` echo -n " 12 s3c " | sed 's/^ *//;s/ *$//' && echo c ```
I know this is old, but there is another simple and dirty way:

```
line=$(echo $line)
```

See this example:

```
user@host:~$ x=" abc "
user@host:~$ echo "+$x+"
+ abc +
user@host:~$ y=$(echo $x)
user@host:~$ echo "+$y+"
+abc+
```
4,422,491
I want a bash way to read lines from standard input (so I can pipe input to it), and remove just the leading and trailing space characters. Piping to echo does not work. For example, if the input is: ``` 12 s3c sd wqr ``` the output should be: ``` 12 s3c sd wqr ``` I want to avoid writing a python script or similar for something as trivial as this. Any help is appreciated!
2010/12/12
[ "https://Stackoverflow.com/questions/4422491", "https://Stackoverflow.com", "https://Stackoverflow.com/users/145537/" ]
You can use sed to trim it. ``` sed 's/^ *//;s/ *$//' ``` You can test it really easily on a command line by doing: ``` echo -n " 12 s3c " | sed 's/^ *//;s/ *$//' && echo c ```
```
grep -o -E '\S.*\S|\S'
```

Explanation:

* `-o` - print only matches
* `-E` - use extended regular expression syntax
* `'\S.*\S'`:
  + match the first non-space symbol, then **greedy** match any number of any symbols, then match a non-space symbol
  + or, if the first part is not matched (i.e. there are *no two* non-space symbols), match a single non-space symbol
4,422,491
I want a bash way to read lines from standard input (so I can pipe input to it), and remove just the leading and trailing space characters. Piping to echo does not work. For example, if the input is: ``` 12 s3c sd wqr ``` the output should be: ``` 12 s3c sd wqr ``` I want to avoid writing a python script or similar for something as trivial as this. Any help is appreciated!
2010/12/12
[ "https://Stackoverflow.com/questions/4422491", "https://Stackoverflow.com", "https://Stackoverflow.com/users/145537/" ]
I know this is old, but there is another simple and dirty way:

```
line=$(echo $line)
```

See this example:

```
user@host:~$ x=" abc "
user@host:~$ echo "+$x+"
+ abc +
user@host:~$ y=$(echo $x)
user@host:~$ echo "+$y+"
+abc+
```
```
grep -o -E '\S.*\S|\S'
```

Explanation:

* `-o` - print only matches
* `-E` - use extended regular expression syntax
* `'\S.*\S'`:
  + match the first non-space symbol, then **greedy** match any number of any symbols, then match a non-space symbol
  + or, if the first part is not matched (i.e. there are *no two* non-space symbols), match a single non-space symbol
4,422,491
I want a bash way to read lines from standard input (so I can pipe input to it), and remove just the leading and trailing space characters. Piping to echo does not work. For example, if the input is: ``` 12 s3c sd wqr ``` the output should be: ``` 12 s3c sd wqr ``` I want to avoid writing a python script or similar for something as trivial as this. Any help is appreciated!
2010/12/12
[ "https://Stackoverflow.com/questions/4422491", "https://Stackoverflow.com", "https://Stackoverflow.com/users/145537/" ]
```
your_command | xargs -L1 echo
```

This works because `echo` converts all tabs to spaces and then all multiple spaces to a single space, not only leading and trailing, see example:

```
$ printf " 1\t\t\t2 3"
 1			2 3
$ echo `printf " 1\t\t\t2 3"`
1 2 3
```

The drawback is that it will also remove some useful characters like `\` `'` `"`.
```
grep -o -E '\S.*\S|\S'
```

Explanation:

* `-o` - print only matches
* `-E` - use extended regular expression syntax
* `'\S.*\S'`:
  + match the first non-space symbol, then **greedy** match any number of any symbols, then match a non-space symbol
  + or, if the first part is not matched (i.e. there are *no two* non-space symbols), match a single non-space symbol
18,429,992
I've come across this post: [How to generate all permutations of a list in Python](https://stackoverflow.com/questions/104420/how-to-generate-all-permutations-of-a-list-in-python) But I require something more, namely all of the permutations of a string as well as all the permutations of all the substrings. I know it's a big number, but is it possible?
2013/08/25
[ "https://Stackoverflow.com/questions/18429992", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2672265/" ]
```
import itertools

def all_permutations_substrings(a_str):
    return (
        ''.join(item)
        for length in xrange(1, len(a_str)+1)
        for item in itertools.permutations(a_str, length))
```

Note, however, that this is true permutations - as in, `hello` will have any substring permutation that has two `l`s in it twice, since the `l`'s will be considered "unique". If you wanted to get rid of that, you could pass it through a `set()`:

```
all_permutations_no_dupes = set(all_permutations_substrings(a_str))
```
As the question you linked states, [itertools.permutations](http://docs.python.org/2/library/itertools.html#itertools.permutations) is the solution for generating permutations of lists. In python, strings can be treated as lists, so `itertools.permutations("text")` will work just fine. For substrings, you can pass a length into itertools.permutations as an optional second argument.

```
def permutate_all_substrings(text):
    permutations = []
    # All possible substring lengths
    for length in range(1, len(text)+1):
        # All permutations of a given length
        for permutation in itertools.permutations(text, length):
            # itertools.permutations returns a tuple, so join it back into a string
            permutations.append("".join(permutation))
    return permutations
```

Or if you prefer one-line list comprehensions

```
list(itertools.chain.from_iterable([["".join(p) for p in itertools.permutations(text, l)] for l in range(1, len(text)+1)]))
```
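As a quick sanity check of the generator approach on a tiny input (rewritten for Python 3, where `xrange` has become `range`):

```python
import itertools

def all_permutations_substrings(a_str):
    # every length from 1 to len(a_str), every ordered arrangement of that length
    return (
        ''.join(item)
        for length in range(1, len(a_str) + 1)
        for item in itertools.permutations(a_str, length))

result = set(all_permutations_substrings('ab'))
print(sorted(result))  # ['a', 'ab', 'b', 'ba']
```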
15,832,421
I have the following code ``` A = [(X(x), Y(y), Z(z)) for x in range(N) for y in range(N) for z in range(N)] ``` It does what I want - produce a list of tuples representing cartesian coordinates according to my functions X, Y and Z - but it is not very pretty. I tried ``` A = [(X(x), Y(y), Z(z)) for x, y, z in range(N)] ``` but that didn't work. Is there a more elegant and pythonic way to do this?
2013/04/05
[ "https://Stackoverflow.com/questions/15832421", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2248727/" ]
```
from itertools import product

A = [(X(x), Y(y), Z(z)) for x, y, z in product(range(N), repeat=3)]
```
As x, y and z would all take the same value, you could do this:

```
A = [(X(x), Y(x), Z(x)) for x in range(N)]
```

You can also use a map function:

```
f = lambda x: (X(x), Y(x), Z(x))
map(f, range(N))
```

Good luck
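To see what `itertools.product` produces, here is a runnable sketch with identity functions standing in for the question's X, Y, Z (those stand-ins are my addition, not from the question):

```python
from itertools import product

# Stand-ins for the question's X, Y, Z coordinate functions.
X = Y = Z = lambda v: v

N = 2
A = [(X(x), Y(y), Z(z)) for x, y, z in product(range(N), repeat=3)]
print(len(A))         # N**3 tuples, same order as three nested loops
print(A[0], A[-1])
```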
15,832,421
I have the following code ``` A = [(X(x), Y(y), Z(z)) for x in range(N) for y in range(N) for z in range(N)] ``` It does what I want - produce a list of tuples representing cartesian coordinates according to my functions X, Y and Z - but it is not very pretty. I tried ``` A = [(X(x), Y(y), Z(z)) for x, y, z in range(N)] ``` but that didn't work. Is there a more elegant and pythonic way to do this?
2013/04/05
[ "https://Stackoverflow.com/questions/15832421", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2248727/" ]
```
from itertools import product

A = [(X(x), Y(y), Z(z)) for x, y, z in product(range(N), repeat=3)]
```
You can do this:

```
import itertools

res = [(X(each[0]), Y(each[1]), Z(each[2])) for each in itertools.combinations(range(N), 3)]
```

This will give you all the unique combinations. You can find more about it [here](http://docs.python.org/2/library/itertools.html#itertools.combinations). Keep coding :)
4,802,513
I want to overlay geospatial data (mostly heatmaps) on top of high resolution satellite images using Python. (I am a newbie, so be gentle on me ;-) ) Here is my wish list:

* detailed enough to show streets and buildings
* must be fairly recent (captured within the last several years)
* coordinates and projection of images/maps must be known so that the heatmaps I create can be overlaid
* easy retrieval (hopefully, several lines of Python code will take care of getting the right images)
* free

I think Google Maps/Earth, Yahoo Maps, Bing, etc... could be potential candidates, but I am not sure how to access them easily. Code examples would be very helpful. Any suggestions?
2011/01/26
[ "https://Stackoverflow.com/questions/4802513", "https://Stackoverflow.com", "https://Stackoverflow.com/users/186477/" ]
[Open Street Map](http://www.openstreetmap.org/) is a good equivalent to Google maps (which I do not know very well). Their database increases with time. It is an open source map acquisition attempt. They are sometimes a little bit more accurate than Google maps, see the [Berlin zoo example](http://arstechnica.com/open-source/news/2010/06/crowd-sourced-world-map.ars). It has several APIs, which are read-only access: <http://wiki.openstreetmap.org/wiki/XAPI>. It appears to use the REST protocol. For the use of REST and Python, I would suggest this [SO link](https://stackoverflow.com/questions/713847/recommendations-of-python-rest-web-services-framework).
One possible source is the [images from NASA World Wind](http://worldwindcentral.com/wiki/World_Wind_Data_Sources). You can [look at their source](http://worldwindcentral.com/wiki/Source_code) to find out how they access their data sources, and do the same in your application.
4,802,513
I want to overlay geospatial data (mostly heatmaps) on top of high resolution satellite images using Python. (I am a newbie, so be gentle on me ;-) ) Here is my wish list:

* detailed enough to show streets and buildings
* must be fairly recent (captured within the last several years)
* coordinates and projection of images/maps must be known so that the heatmaps I create can be overlaid
* easy retrieval (hopefully, several lines of Python code will take care of getting the right images)
* free

I think Google Maps/Earth, Yahoo Maps, Bing, etc... could be potential candidates, but I am not sure how to access them easily. Code examples would be very helpful. Any suggestions?
2011/01/26
[ "https://Stackoverflow.com/questions/4802513", "https://Stackoverflow.com", "https://Stackoverflow.com/users/186477/" ]
[Open Street Map](http://www.openstreetmap.org/) is a good equivalent to Google maps (which I do not know very well). Their database increases with time. It is an open source map acquisition attempt. They are sometimes a little bit more accurate than Google maps, see the [Berlin zoo example](http://arstechnica.com/open-source/news/2010/06/crowd-sourced-world-map.ars). It has several APIs, which are read-only access: <http://wiki.openstreetmap.org/wiki/XAPI>. It appears to use the REST protocol. For the use of REST and Python, I would suggest this [SO link](https://stackoverflow.com/questions/713847/recommendations-of-python-rest-web-services-framework).
So you want to do something almost exactly like this: <http://www.jjguy.com/heatmap/> which I found by googling for "python heatmap". Now you are a bit unclear about what you want to do with these images, so remember that Google Earth imagery is copyrighted and there's a set of restrictions on what you can do with them.
4,802,513
I want to overlay geospatial data (mostly heatmaps) on top of high resolution satellite images using Python. (I am a newbie, so be gentle on me ;-) ) Here is my wish list:

* detailed enough to show streets and buildings
* must be fairly recent (captured within the last several years)
* coordinates and projection of images/maps must be known so that the heatmaps I create can be overlaid
* easy retrieval (hopefully, several lines of Python code will take care of getting the right images)
* free

I think Google Maps/Earth, Yahoo Maps, Bing, etc... could be potential candidates, but I am not sure how to access them easily. Code examples would be very helpful. Any suggestions?
2011/01/26
[ "https://Stackoverflow.com/questions/4802513", "https://Stackoverflow.com", "https://Stackoverflow.com/users/186477/" ]
[Open Street Map](http://www.openstreetmap.org/) is a good equivalent to Google maps (which I do not know very well). Their database increases with time. It is an open source map acquisition attempt. They are sometimes a little bit more accurate than Google maps, see the [Berlin zoo example](http://arstechnica.com/open-source/news/2010/06/crowd-sourced-world-map.ars). It has several APIs, which are read-only access: <http://wiki.openstreetmap.org/wiki/XAPI>. It appears to use the REST protocol. For the use of REST and Python, I would suggest this [SO link](https://stackoverflow.com/questions/713847/recommendations-of-python-rest-web-services-framework).
Google Maps explicitly forbids using map tiles offline or caching them, but I think Microsoft Bing Maps doesn't say anything explicitly against it, and I guess you are not planning to use your program commercially (?) Then, you could use this. It creates a cache, first loading a tile from memory, else from disk, else from the internet, always caching everything to disk for reuse. Of course you'll have to figure out how to tweak it, specifically how to get the tile coordinates and zoom level you need, and for this I suggest strongly [this site](http://www.maptiler.org/google-maps-coordinates-tile-bounds-projection/). Good study!

```
#!/usr/bin/env python
# coding: utf-8
import os
import Image
import random
import urllib
import cStringIO
import cairo
#from geofunctions import *

class TileServer(object):
    def __init__(self):
        self.imdict = {}
        self.surfdict = {}
        self.layers = 'ROADMAP'
        self.path = './'
        self.urltemplate = 'http://ecn.t{4}.tiles.virtualearth.net/tiles/{3}{5}?g=0'
        self.layerdict = {'SATELLITE': 'a', 'HYBRID': 'h', 'ROADMAP': 'r'}

    def tiletoquadkey(self, xi, yi, z):
        quadKey = ''
        for i in range(z, 0, -1):
            digit = 0
            mask = 1 << (i - 1)
            if (xi & mask) != 0:
                digit += 1
            if (yi & mask) != 0:
                digit += 2
            quadKey += str(digit)
        return quadKey

    def loadimage(self, fullname, tilekey):
        im = Image.open(fullname)
        self.imdict[tilekey] = im
        return self.imdict[tilekey]

    def tile_as_image(self, xi, yi, zoom):
        tilekey = (xi, yi, zoom)
        result = None
        try:
            result = self.imdict[tilekey]
        except:
            filename = '{}_{}_{}_{}.jpg'.format(zoom, xi, yi, self.layerdict[self.layers])
            fullname = self.path + filename
            try:
                result = self.loadimage(fullname, tilekey)
            except:
                server = random.choice(range(1, 4))
                quadkey = self.tiletoquadkey(*tilekey)
                print quadkey
                url = self.urltemplate.format(xi, yi, zoom, self.layerdict[self.layers], server, quadkey)
                print "Downloading tile %s to local cache." % filename
                urllib.urlretrieve(url, fullname)
                result = self.loadimage(fullname, tilekey)
        return result

if __name__ == "__main__":
    ts = TileServer()
    im = ts.tile_as_image(5, 9, 4)
    im.show()
```
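The `tiletoquadkey` method in the answer above implements Bing's tile-to-quadkey bit interleaving; pulled out as a standalone Python 3 function, it can be checked without touching the network:

```python
def tile_to_quadkey(xi, yi, z):
    """Interleave the bits of tile x/y coordinates into a Bing quadkey string."""
    quadkey = ''
    for i in range(z, 0, -1):
        digit = 0
        mask = 1 << (i - 1)
        if xi & mask:   # x bit sets the low bit of the digit
            digit += 1
        if yi & mask:   # y bit sets the high bit of the digit
            digit += 2
        quadkey += str(digit)
    return quadkey

# The tile requested in the answer's __main__ block:
print(tile_to_quadkey(5, 9, 4))  # 2103
```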
4,802,513
I want to overlay geospatial data (mostly heatmaps) on top of high resolution satellite images using Python. (I am a newbie, so be gentle on me ;-) ) Here is my wish list:

* detailed enough to show streets and buildings
* must be fairly recent (captured within the last several years)
* coordinates and projection of images/maps must be known so that the heatmaps I create can be overlaid
* easy retrieval (hopefully, several lines of Python code will take care of getting the right images)
* free

I think Google Maps/Earth, Yahoo Maps, Bing, etc... could be potential candidates, but I am not sure how to access them easily. Code examples would be very helpful. Any suggestions?
2011/01/26
[ "https://Stackoverflow.com/questions/4802513", "https://Stackoverflow.com", "https://Stackoverflow.com/users/186477/" ]
[Open Street Map](http://www.openstreetmap.org/) is a good equivalent to Google maps (which I do not know very well). Their database increases with time. It is an open source map acquisition attempt. They are sometimes a little bit more accurate than Google maps, see the [Berlin zoo example](http://arstechnica.com/open-source/news/2010/06/crowd-sourced-world-map.ars). It has several APIs, which are read-only access: <http://wiki.openstreetmap.org/wiki/XAPI>. It appears to use the REST protocol. For the use of REST and Python, I would suggest this [SO link](https://stackoverflow.com/questions/713847/recommendations-of-python-rest-web-services-framework).
I have made use of Bing Maps API coupled with the knowledge of [Map Tiling](https://www.maptiler.com/google-maps-coordinates-tile-bounds-projection/). Please find the code for the same in my [Github Repository](https://github.com/tanishqvyas/GreeneryEstimator). You may find [this](https://learn.microsoft.com/en-us/bingmaps/rest-services/imagery/get-a-static-map) helpful too.
4,802,513
I want to overlay geospatial data (mostly heatmaps) on top of high resolution satellite images using Python. (I am a newbie, so be gentle on me ;-) ) Here is my wish list:

* detailed enough to show streets and buildings
* must be fairly recent (captured within the last several years)
* coordinates and projection of images/maps must be known so that the heatmaps I create can be overlaid
* easy retrieval (hopefully, several lines of Python code will take care of getting the right images)
* free

I think Google Maps/Earth, Yahoo Maps, Bing, etc... could be potential candidates, but I am not sure how to access them easily. Code examples would be very helpful. Any suggestions?
2011/01/26
[ "https://Stackoverflow.com/questions/4802513", "https://Stackoverflow.com", "https://Stackoverflow.com/users/186477/" ]
So you want to do something almost exactly like this: <http://www.jjguy.com/heatmap/> which I found by googling for "python heatmap". Now you are a bit unclear about what you want to do with these images, so remember that Google Earth imagery is copyrighted and there's a set of restrictions on what you can do with them.
One possible source is the [images from NASA World Wind](http://worldwindcentral.com/wiki/World_Wind_Data_Sources). You can [look at their source](http://worldwindcentral.com/wiki/Source_code) to find out how they access their data sources, and do the same in your application.
4,802,513
I want to overlay geospatial data (mostly heatmaps) on top of high resolution satellite images using Python. (I am a newbie, so be gentle on me ;-) ) Here is my wish list:

* detailed enough to show streets and buildings
* must be fairly recent (captured within the last several years)
* coordinates and projection of images/maps must be known so that the heatmaps I create can be overlaid
* easy retrieval (hopefully, several lines of Python code will take care of getting the right images)
* free

I think Google Maps/Earth, Yahoo Maps, Bing, etc... could be potential candidates, but I am not sure how to access them easily. Code examples would be very helpful. Any suggestions?
2011/01/26
[ "https://Stackoverflow.com/questions/4802513", "https://Stackoverflow.com", "https://Stackoverflow.com/users/186477/" ]
Google Maps explicitly forbids using map tiles offline or caching them, but I think Microsoft Bing Maps doesn't say anything explicitly against it, and I guess you are not planning to use your program commercially (?) Then, you could use this. It creates a cache, first loading a tile from memory, else from disk, else from the internet, always caching everything to disk for reuse. Of course you'll have to figure out how to tweak it, specifically how to get the tile coordinates and zoom level you need, and for this I suggest strongly [this site](http://www.maptiler.org/google-maps-coordinates-tile-bounds-projection/). Good study!

```
#!/usr/bin/env python
# coding: utf-8
import os
import Image
import random
import urllib
import cStringIO
import cairo
#from geofunctions import *

class TileServer(object):
    def __init__(self):
        self.imdict = {}
        self.surfdict = {}
        self.layers = 'ROADMAP'
        self.path = './'
        self.urltemplate = 'http://ecn.t{4}.tiles.virtualearth.net/tiles/{3}{5}?g=0'
        self.layerdict = {'SATELLITE': 'a', 'HYBRID': 'h', 'ROADMAP': 'r'}

    def tiletoquadkey(self, xi, yi, z):
        quadKey = ''
        for i in range(z, 0, -1):
            digit = 0
            mask = 1 << (i - 1)
            if (xi & mask) != 0:
                digit += 1
            if (yi & mask) != 0:
                digit += 2
            quadKey += str(digit)
        return quadKey

    def loadimage(self, fullname, tilekey):
        im = Image.open(fullname)
        self.imdict[tilekey] = im
        return self.imdict[tilekey]

    def tile_as_image(self, xi, yi, zoom):
        tilekey = (xi, yi, zoom)
        result = None
        try:
            result = self.imdict[tilekey]
        except:
            filename = '{}_{}_{}_{}.jpg'.format(zoom, xi, yi, self.layerdict[self.layers])
            fullname = self.path + filename
            try:
                result = self.loadimage(fullname, tilekey)
            except:
                server = random.choice(range(1, 4))
                quadkey = self.tiletoquadkey(*tilekey)
                print quadkey
                url = self.urltemplate.format(xi, yi, zoom, self.layerdict[self.layers], server, quadkey)
                print "Downloading tile %s to local cache." % filename
                urllib.urlretrieve(url, fullname)
                result = self.loadimage(fullname, tilekey)
        return result

if __name__ == "__main__":
    ts = TileServer()
    im = ts.tile_as_image(5, 9, 4)
    im.show()
```
One possible source is the [images from NASA World Wind](http://worldwindcentral.com/wiki/World_Wind_Data_Sources). You can [look at their source](http://worldwindcentral.com/wiki/Source_code) to find out how they access their data sources, and do the same in your application.
4,802,513
I want to overlay geospatial data (mostly heatmaps) on top of high resolution satellite images using Python. (I am a newbie, so be gentle on me ;-) ) Here is my wish list:

* detailed enough to show streets and buildings
* must be fairly recent (captured within the last several years)
* coordinates and projection of images/maps must be known so that the heatmaps I create can be overlaid
* easy retrieval (hopefully, several lines of Python code will take care of getting the right images)
* free

I think Google Maps/Earth, Yahoo Maps, Bing, etc... could be potential candidates, but I am not sure how to access them easily. Code examples would be very helpful. Any suggestions?
2011/01/26
[ "https://Stackoverflow.com/questions/4802513", "https://Stackoverflow.com", "https://Stackoverflow.com/users/186477/" ]
One possible source is the [images from NASA World Wind](http://worldwindcentral.com/wiki/World_Wind_Data_Sources). You can [look at their source](http://worldwindcentral.com/wiki/Source_code) to find out how they access their data sources, and do the same in your application.
I have made use of Bing Maps API coupled with the knowledge of [Map Tiling](https://www.maptiler.com/google-maps-coordinates-tile-bounds-projection/). Please find the code for the same in my [Github Repository](https://github.com/tanishqvyas/GreeneryEstimator). You may find [this](https://learn.microsoft.com/en-us/bingmaps/rest-services/imagery/get-a-static-map) helpful too.
4,802,513
I want to overlay geospatial data (mostly heatmaps) on top of high resolution satellite images using python. (i am newbie, so be gentle on me ;-) ) Here is my wish list * detailed enough to show streets and buildings * must be fairly recent (captured within last several years) * coordinates and projection of images/maps must be known that heatmaps i created can be overlayed * easy retrieval (hopefully, several lines of python codes will take care of getting right images) * free I think google map/earth, yahoo map, bing, etc... could be potential candidates, but I am not sure how to access them easily. Code examples would be very helpful. Any suggestions?
2011/01/26
[ "https://Stackoverflow.com/questions/4802513", "https://Stackoverflow.com", "https://Stackoverflow.com/users/186477/" ]
So you want to do something almost exactly like this: <http://www.jjguy.com/heatmap/> which I found by googling for "python heatmap". Now you are a bit unclear about what you want to do with these images, so remember that Google Earth imagery is copyrighted and there's a set of restrictions on what you can do with them.
I have made use of Bing Maps API coupled with the knowledge of [Map Tiling](https://www.maptiler.com/google-maps-coordinates-tile-bounds-projection/). Please find the code for the same in my [Github Repository](https://github.com/tanishqvyas/GreeneryEstimator). You may find [this](https://learn.microsoft.com/en-us/bingmaps/rest-services/imagery/get-a-static-map) helpful too.
4,802,513
I want to overlay geospatial data (mostly heatmaps) on top of high resolution satellite images using python. (i am newbie, so be gentle on me ;-) ) Here is my wish list * detailed enough to show streets and buildings * must be fairly recent (captured within last several years) * coordinates and projection of images/maps must be known that heatmaps i created can be overlayed * easy retrieval (hopefully, several lines of python codes will take care of getting right images) * free I think google map/earth, yahoo map, bing, etc... could be potential candidates, but I am not sure how to access them easily. Code examples would be very helpful. Any suggestions?
2011/01/26
[ "https://Stackoverflow.com/questions/4802513", "https://Stackoverflow.com", "https://Stackoverflow.com/users/186477/" ]
Google Maps explicitly forbid using map tiles offline or caching them, but I think Microsoft Bing Maps don't say anything explicitly against it, and I guess you are not planning to use your program commercially (?) Then, you could use this. It creates a cache, first loading a tile from memory, else from disk, else from the internet, always caching everything to disk for reuse. Of course you'll have to figure out how to tweak it, specifically how to get the tile coordinates and zoom level you need, and for this I strongly suggest [this site](http://www.maptiler.org/google-maps-coordinates-tile-bounds-projection/). Good study! ``` #!/usr/bin/env python # coding: utf-8 import os import Image import random import urllib import cStringIO import cairo #from geofunctions import * class TileServer(object): def __init__(self): self.imdict = {} self.surfdict = {} self.layers = 'ROADMAP' self.path = './' self.urltemplate = 'http://ecn.t{4}.tiles.virtualearth.net/tiles/{3}{5}?g=0' self.layerdict = {'SATELLITE': 'a', 'HYBRID': 'h', 'ROADMAP': 'r'} def tiletoquadkey(self, xi, yi, z): quadKey = '' for i in range(z, 0, -1): digit = 0 mask = 1 << (i - 1) if(xi & mask) != 0: digit += 1 if(yi & mask) != 0: digit += 2 quadKey += str(digit) return quadKey def loadimage(self, fullname, tilekey): im = Image.open(fullname) self.imdict[tilekey] = im return self.imdict[tilekey] def tile_as_image(self, xi, yi, zoom): tilekey = (xi, yi, zoom) result = None try: result = self.imdict[tilekey] except: filename = '{}_{}_{}_{}.jpg'.format(zoom, xi, yi, self.layerdict[self.layers]) fullname = self.path + filename try: result = self.loadimage(fullname, tilekey) except: server = random.choice(range(1,4)) quadkey = self.tiletoquadkey(*tilekey) print quadkey url = self.urltemplate.format(xi, yi, zoom, self.layerdict[self.layers], server, quadkey) print "Downloading tile %s to local cache." % filename urllib.urlretrieve(url, fullname) result = self.loadimage(fullname, tilekey) return result if __name__ == "__main__": ts = TileServer() im = ts.tile_as_image(5, 9, 4) im.show() ```
I have made use of Bing Maps API coupled with the knowledge of [Map Tiling](https://www.maptiler.com/google-maps-coordinates-tile-bounds-projection/). Please find the code for the same in my [Github Repository](https://github.com/tanishqvyas/GreeneryEstimator). You may find [this](https://learn.microsoft.com/en-us/bingmaps/rest-services/imagery/get-a-static-map) helpful too.
29,240,526
I've installed Syntastic from GitHub and I'm trying to use Syntastic for checking perl syntax errors (and planning to use for Python in a short while). When I use ':quit' or ':q', only original file window closes. The error window does not close. Below is snip from my .vimrc file : ``` execute pathogen#infect() set statusline+=%#warningmsg# set statusline+=%{SyntasticStatuslineFlag()} set statusline+=%* let g:syntastic_perl_checkers = ['perl'] let g:syntastic_python_checkers = ['pylint'] let g:syntastic_enable_perl_checker = 1 let g:syntastic_always_populate_loc_list = 1 let g:syntastic_auto_loc_list = 1 let g:syntastic_check_on_open = 1 ``` Since I'm very new to vim scripting, I would like to know how to close both windows, error window and original file window, when I use ':quit' or ':q' while original file window is active.
2015/03/24
[ "https://Stackoverflow.com/questions/29240526", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4708834/" ]
That's the normal Vim behavior; it has nothing to do with Syntastic. The *quickfix* or *location list* windows may contain references to other files, so it is not certain that you want to completely leave Vim when quitting from the originating window. The simplest solution is using `:qa` (quit all) instead of `:q`. As the error window doesn't contain unpersisted changes, this is safe and doesn't require a confirmation. If you are annoyed by having to think about this, you can use Vim's scripting capabilities to change its behavior: ``` :autocmd WinEnter * if &buftype ==# 'quickfix' && winnr('$') == 1 | quit | endif ``` This checks on each change of window whether there's only one window left, and if that one is a quickfix / location list, it quits Vim.
Try the below command: `:lclose`
29,240,526
I've installed Syntastic from GitHub and I'm trying to use Syntastic for checking perl syntax errors (and planning to use for Python in a short while). When I use ':quit' or ':q', only original file window closes. The error window does not close. Below is snip from my .vimrc file : ``` execute pathogen#infect() set statusline+=%#warningmsg# set statusline+=%{SyntasticStatuslineFlag()} set statusline+=%* let g:syntastic_perl_checkers = ['perl'] let g:syntastic_python_checkers = ['pylint'] let g:syntastic_enable_perl_checker = 1 let g:syntastic_always_populate_loc_list = 1 let g:syntastic_auto_loc_list = 1 let g:syntastic_check_on_open = 1 ``` Since I'm very new to vim scripting, I would like to know how to close both windows, error window and original file window, when I use ':quit' or ':q' while original file window is active.
2015/03/24
[ "https://Stackoverflow.com/questions/29240526", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4708834/" ]
That's the normal Vim behavior; it has nothing to do with Syntastic. The *quickfix* or *location list* windows may contain references to other files, so it is not certain that you want to completely leave Vim when quitting from the originating window. The simplest solution is using `:qa` (quit all) instead of `:q`. As the error window doesn't contain unpersisted changes, this is safe and doesn't require a confirmation. If you are annoyed by having to think about this, you can use Vim's scripting capabilities to change its behavior: ``` :autocmd WinEnter * if &buftype ==# 'quickfix' && winnr('$') == 1 | quit | endif ``` This checks on each change of window whether there's only one window left, and if that one is a quickfix / location list, it quits Vim.
According to [Syntastic help](https://github.com/scrooloose/syntastic/blob/master/doc/syntastic.txt), the command to close Syntastic error window is: ``` :SyntasticReset ```
56,358,685
I want to delete a Django UserModel table and then recreate it. Or delete user field and recreate it with a new user by `python manage.py createsuperuser` --- [**NOTE**]: My DB is PostgreSQL on a docker container.
2019/05/29
[ "https://Stackoverflow.com/questions/56358685", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3702377/" ]
First, check whether you have a typo. After verifying that you have entered the class name properly, you can try: ``` .your-class-name { color: #ffffff !important; } ``` **!important** overrides the properties set by previously applied CSS rules. There are guidelines that define the precedence of different CSS stylings; see <https://developer.mozilla.org/en-US/docs/Web/CSS/Specificity>. Ask in the comments if you need a more specific answer.
Check the order in which the CSS files are loaded. If you have declared multiple rules for the same element, the one declared last will be applied. Also check the specificity of the CSS selectors: the selector with the higher specificity will determine the style.
36,547,848
I would like to check each JSON content type with my expectation type. I receive JSON in my python code like this: ``` a = request.json['a'] b = request.json['b'] ``` when I checked a and b type, it is always return Unicode. I checked it like this: ``` type(a) # or type(b) # (always return: type 'unicode') ``` How do I check if `request.json['a']` is `str`, if `request.json['a']` is always `unicode`?
2016/04/11
[ "https://Stackoverflow.com/questions/36547848", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2185032/" ]
I suspect you are on Python 2.x and not Python 3 (because in Python 3 both `type('a')` and `type(u'a')` are `str`, not `unicode`) So in Python 2, what you should know is `str` and `unicode` both are subclasses of `basestring` so instead of testing with ``` if isinstance(x, (str, unicode)): # equiv. to type(x) is str or type(x) is unicode # something ``` you can do (Python 2.x) ``` if isinstance(x, basestring): # do something ``` In Python 3 you don't have to distinguish between `str` and `unicode`, just use ``` if isinstance(x, str): # do something ```
There are a number of built-in sequence types in python, `str` is one of them, and `Unicode str` is another. So technically no, it's not a `str`, it's a `Unicode str`, but you may as well just treat it like a `str`. Documentation [here](https://docs.python.org/2/library/stdtypes.html#sequence-types-str-unicode-list-tuple-bytearray-buffer-xrange).
36,547,848
I would like to check each JSON content type with my expectation type. I receive JSON in my python code like this: ``` a = request.json['a'] b = request.json['b'] ``` when I checked a and b type, it is always return Unicode. I checked it like this: ``` type(a) # or type(b) # (always return: type 'unicode') ``` How do I check if `request.json['a']` is `str`, if `request.json['a']` is always `unicode`?
2016/04/11
[ "https://Stackoverflow.com/questions/36547848", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2185032/" ]
I suspect you are on Python 2.x and not Python 3 (because in Python 3 both `type('a')` and `type(u'a')` are `str`, not `unicode`) So in Python 2, what you should know is `str` and `unicode` both are subclasses of `basestring` so instead of testing with ``` if isinstance(x, (str, unicode)): # equiv. to type(x) is str or type(x) is unicode # something ``` you can do (Python 2.x) ``` if isinstance(x, basestring): # do something ``` In Python 3 you don't have to distinguish between `str` and `unicode`, just use ``` if isinstance(x, str): # do something ```
If you are using `Python 2.x`,: ``` isinstance(a, basestring) ``` Or if you are using `Python 3.X`: ``` isinstance(a, str) ```
7,313,893
I have a number of functions that parse data from files, usually returning a list of results. If I encounter a dodgy line in the file, I want to soldier on and process the valid lines, and return them. But I also want to report the error to the calling function. The reason I want to report it is so that the calling function can notify the user that the file needs looking at. I don't want to start doing GUI things in the parse function, as that seems to be a big violation of separation of concerns. The parse function does not have access to the console I'm writing error messages to anyway. This leaves me wanting to return the successful data, but also raise an exception because of the error, which clearly I can't do. Consider this code: ``` try: parseResult = parse(myFile) except MyErrorClass, e: HandleErrorsSomehow(str(e)) def parse(file): #file is a list of lines from an actual file err = False result = [] for lines in file: processedLine = Process(line) if not processedLine: err = True else result.append(processedLine) return result if err: raise MyErrorClass("Something went wrong") ``` Obviously the last three lines make no sense, but I can't figure out a nice way to do this. I guess I could do `return (err, result)`, and call it like ``` parseErr, parseResult = parse(file) if parseErr: HandleErrorsSomehow() ``` But returning error codes seems un-pythonic enough, let alone returning tuples of error codes and actual result values. The fact that I feel like I want to do something so strange in an application that shouldn't really be terribly complicated, is making me think I'm probably doing something wrong. Is there a better solution to this problem? Or is there some way that I can use `finally` to return a value and raise an exception at the same time?
2011/09/06
[ "https://Stackoverflow.com/questions/7313893", "https://Stackoverflow.com", "https://Stackoverflow.com/users/665488/" ]
Nobody says the *only* valid way to treat an "error" is to throw an exception. In your design the caller wants two pieces of information: (1) the valid data, (2) whether an error occurred (and probably something about what went wrong where, so it can format a useful error message). That is a completely valid and above-ground case for returning a pair of values. An alternative design would be to pass a mutable collection *down* to the function as a parameter and let it fill any error messages it wants to emit into that. That will often simplify the plumbing in the caller, especially if there are several layers of calls between the parser and the code that knows how to do something with the error messages afterwards.
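A minimal sketch of the pair-returning design described above (the `process` helper here is a hypothetical stand-in for the real per-line parser):

```python
def process(line):
    # hypothetical per-line parser: accept only lines that look like integers
    try:
        return int(line)
    except ValueError:
        return None

def parse(lines):
    """Return (results, errors): the valid data plus one message per bad line."""
    results, errors = [], []
    for lineno, line in enumerate(lines, 1):
        processed = process(line)
        if processed is None:
            errors.append("line %d could not be parsed: %r" % (lineno, line))
        else:
            results.append(processed)
    return results, errors

results, errors = parse(["1", "2", "oops", "4"])
print(results)  # [1, 2, 4]
print(errors)   # ["line 3 could not be parsed: 'oops'"]
```

The caller then checks `errors` and decides how (and whether) to notify the user, keeping all GUI concerns out of the parser.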
Depending on caller design, using callbacks might be reasonable: ``` def got_line(line): print 'Got valid line', line def got_error(error): print 'got error', error def parse(file, on_line, on_error): for line in file: processedLine = Process(line) if not processedLine: on_error(MyErrorClass("Something went wrong")) else: on_line(processedLine) parse(some_file, got_line, got_error) ```
7,313,893
I have a number of functions that parse data from files, usually returning a list of results. If I encounter a dodgy line in the file, I want to soldier on and process the valid lines, and return them. But I also want to report the error to the calling function. The reason I want to report it is so that the calling function can notify the user that the file needs looking at. I don't want to start doing GUI things in the parse function, as that seems to be a big violation of separation of concerns. The parse function does not have access to the console I'm writing error messages to anyway. This leaves me wanting to return the successful data, but also raise an exception because of the error, which clearly I can't do. Consider this code: ``` try: parseResult = parse(myFile) except MyErrorClass, e: HandleErrorsSomehow(str(e)) def parse(file): #file is a list of lines from an actual file err = False result = [] for lines in file: processedLine = Process(line) if not processedLine: err = True else result.append(processedLine) return result if err: raise MyErrorClass("Something went wrong") ``` Obviously the last three lines make no sense, but I can't figure out a nice way to do this. I guess I could do `return (err, result)`, and call it like ``` parseErr, parseResult = parse(file) if parseErr: HandleErrorsSomehow() ``` But returning error codes seems un-pythonic enough, let alone returning tuples of error codes and actual result values. The fact that I feel like I want to do something so strange in an application that shouldn't really be terribly complicated, is making me think I'm probably doing something wrong. Is there a better solution to this problem? Or is there some way that I can use `finally` to return a value and raise an exception at the same time?
2011/09/06
[ "https://Stackoverflow.com/questions/7313893", "https://Stackoverflow.com", "https://Stackoverflow.com/users/665488/" ]
Nobody says the *only* valid way to treat an "error" is to throw an exception. In your design the caller wants two pieces of information: (1) the valid data, (2) whether an error occurred (and probably something about what went wrong where, so it can format a useful error message). That is a completely valid and above-ground case for returning a pair of values. An alternative design would be to pass a mutable collection *down* to the function as a parameter and let it fill any error messages it wants to emit into that. That will often simplify the plumbing in the caller, especially if there are several layers of calls between the parser and the code that knows how to do something with the error messages afterwards.
[Emit a warning](http://docs.python.org/library/warnings.html) instead, and let the code decide how to handle it.
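A minimal sketch of the warnings approach, assuming a custom warning class and using `int()` as a stand-in for the real per-line parser:

```python
import warnings

class ParseWarning(UserWarning):
    """Signals (as a warning, not an exception) a line that failed to parse."""

def parse(lines):
    results = []
    for lineno, line in enumerate(lines, 1):
        try:
            results.append(int(line))  # stand-in for the real per-line parser
        except ValueError:
            warnings.warn("skipping unparseable line %d: %r" % (lineno, line),
                          ParseWarning)
    return results

# The caller decides what to do with the warnings, e.g. collect them:
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    data = parse(["1", "x", "3"])
print(data)  # [1, 3]
print([str(w.message) for w in caught])
```

The parser keeps returning its valid data, while the calling code can ignore, log, or escalate the warnings as it sees fit.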
7,313,893
I have a number of functions that parse data from files, usually returning a list of results. If I encounter a dodgy line in the file, I want to soldier on and process the valid lines, and return them. But I also want to report the error to the calling function. The reason I want to report it is so that the calling function can notify the user that the file needs looking at. I don't want to start doing GUI things in the parse function, as that seems to be a big violation of separation of concerns. The parse function does not have access to the console I'm writing error messages to anyway. This leaves me wanting to return the successful data, but also raise an exception because of the error, which clearly I can't do. Consider this code: ``` try: parseResult = parse(myFile) except MyErrorClass, e: HandleErrorsSomehow(str(e)) def parse(file): #file is a list of lines from an actual file err = False result = [] for lines in file: processedLine = Process(line) if not processedLine: err = True else result.append(processedLine) return result if err: raise MyErrorClass("Something went wrong") ``` Obviously the last three lines make no sense, but I can't figure out a nice way to do this. I guess I could do `return (err, result)`, and call it like ``` parseErr, parseResult = parse(file) if parseErr: HandleErrorsSomehow() ``` But returning error codes seems un-pythonic enough, let alone returning tuples of error codes and actual result values. The fact that I feel like I want to do something so strange in an application that shouldn't really be terribly complicated, is making me think I'm probably doing something wrong. Is there a better solution to this problem? Or is there some way that I can use `finally` to return a value and raise an exception at the same time?
2011/09/06
[ "https://Stackoverflow.com/questions/7313893", "https://Stackoverflow.com", "https://Stackoverflow.com/users/665488/" ]
[Emit a warning](http://docs.python.org/library/warnings.html) instead, and let the code decide how to handle it.
I come from the .Net world, so not sure how this translates into Python... In cases like yours (where you want to process numerous items in a single call) I'd return a `MyProcessingResults` object that held two collections, for example: * `MyProcessingResults.ProcessedLines` - holds all the valid data you parsed. * `MyProcessingResults.Errors` - holds all the errors (on the assumption that you have more than one and you want to explicitly know about all of them).
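In Python, that results object might look something like this (a sketch; the `int()` call is a hypothetical stand-in for the real per-line parser):

```python
class ProcessingResults(object):
    """Holds both the successfully parsed lines and any errors encountered."""

    def __init__(self):
        self.processed_lines = []  # all the valid data that was parsed
        self.errors = []           # one message per line that failed

def parse(lines):
    results = ProcessingResults()
    for lineno, line in enumerate(lines, 1):
        try:
            results.processed_lines.append(int(line))
        except ValueError:
            results.errors.append("line %d: %r" % (lineno, line))
    return results

r = parse(["1", "bad", "3"])
print(r.processed_lines)  # [1, 3]
print(r.errors)           # ["line 2: 'bad'"]
```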
7,313,893
I have a number of functions that parse data from files, usually returning a list of results. If I encounter a dodgy line in the file, I want to soldier on and process the valid lines, and return them. But I also want to report the error to the calling function. The reason I want to report it is so that the calling function can notify the user that the file needs looking at. I don't want to start doing GUI things in the parse function, as that seems to be a big violation of separation of concerns. The parse function does not have access to the console I'm writing error messages to anyway. This leaves me wanting to return the successful data, but also raise an exception because of the error, which clearly I can't do. Consider this code: ``` try: parseResult = parse(myFile) except MyErrorClass, e: HandleErrorsSomehow(str(e)) def parse(file): #file is a list of lines from an actual file err = False result = [] for lines in file: processedLine = Process(line) if not processedLine: err = True else result.append(processedLine) return result if err: raise MyErrorClass("Something went wrong") ``` Obviously the last three lines make no sense, but I can't figure out a nice way to do this. I guess I could do `return (err, result)`, and call it like ``` parseErr, parseResult = parse(file) if parseErr: HandleErrorsSomehow() ``` But returning error codes seems un-pythonic enough, let alone returning tuples of error codes and actual result values. The fact that I feel like I want to do something so strange in an application that shouldn't really be terribly complicated, is making me think I'm probably doing something wrong. Is there a better solution to this problem? Or is there some way that I can use `finally` to return a value and raise an exception at the same time?
2011/09/06
[ "https://Stackoverflow.com/questions/7313893", "https://Stackoverflow.com", "https://Stackoverflow.com/users/665488/" ]
[Emit a warning](http://docs.python.org/library/warnings.html) instead, and let the code decide how to handle it.
Depending on caller design, using callbacks might be reasonable: ``` def got_line(line): print 'Got valid line', line def got_error(error): print 'got error', error def parse(file, on_line, on_error): for line in file: processedLine = Process(line) if not processedLine: on_error(MyErrorClass("Something went wrong")) else: on_line(processedLine) parse(some_file, got_line, got_error) ```
7,313,893
I have a number of functions that parse data from files, usually returning a list of results. If I encounter a dodgy line in the file, I want to soldier on and process the valid lines, and return them. But I also want to report the error to the calling function. The reason I want to report it is so that the calling function can notify the user that the file needs looking at. I don't want to start doing GUI things in the parse function, as that seems to be a big violation of separation of concerns. The parse function does not have access to the console I'm writing error messages to anyway. This leaves me wanting to return the successful data, but also raise an exception because of the error, which clearly I can't do. Consider this code: ``` try: parseResult = parse(myFile) except MyErrorClass, e: HandleErrorsSomehow(str(e)) def parse(file): #file is a list of lines from an actual file err = False result = [] for lines in file: processedLine = Process(line) if not processedLine: err = True else result.append(processedLine) return result if err: raise MyErrorClass("Something went wrong") ``` Obviously the last three lines make no sense, but I can't figure out a nice way to do this. I guess I could do `return (err, result)`, and call it like ``` parseErr, parseResult = parse(file) if parseErr: HandleErrorsSomehow() ``` But returning error codes seems un-pythonic enough, let alone returning tuples of error codes and actual result values. The fact that I feel like I want to do something so strange in an application that shouldn't really be terribly complicated, is making me think I'm probably doing something wrong. Is there a better solution to this problem? Or is there some way that I can use `finally` to return a value and raise an exception at the same time?
2011/09/06
[ "https://Stackoverflow.com/questions/7313893", "https://Stackoverflow.com", "https://Stackoverflow.com/users/665488/" ]
[Emit a warning](http://docs.python.org/library/warnings.html) instead, and let the code decide how to handle it.
Another possible design is to invert control, by passing the error handler in as a parameter. (Also, don't feel like you have to tell Python how to accumulate data in a list. It knows already. It's not hard to make a list comprehension work here.) ``` def sample_handler(): print "OMG, I wasn't expecting that; oh well." parseResult = parse(myFile, sample_handler) def parse(file, handler): #file is a list of lines from an actual file result = [Process(line) for line in file] if not all(result): handler() # i.e. if there are any false-ish values result = filter(None, result) # remove false-ish values if any return result ```
7,313,893
I have a number of functions that parse data from files, usually returning a list of results. If I encounter a dodgy line in the file, I want to soldier on and process the valid lines, and return them. But I also want to report the error to the calling function. The reason I want to report it is so that the calling function can notify the user that the file needs looking at. I don't want to start doing GUI things in the parse function, as that seems to be a big violation of separation of concerns. The parse function does not have access to the console I'm writing error messages to anyway. This leaves me wanting to return the successful data, but also raise an exception because of the error, which clearly I can't do. Consider this code: ``` try: parseResult = parse(myFile) except MyErrorClass, e: HandleErrorsSomehow(str(e)) def parse(file): #file is a list of lines from an actual file err = False result = [] for lines in file: processedLine = Process(line) if not processedLine: err = True else result.append(processedLine) return result if err: raise MyErrorClass("Something went wrong") ``` Obviously the last three lines make no sense, but I can't figure out a nice way to do this. I guess I could do `return (err, result)`, and call it like ``` parseErr, parseResult = parse(file) if parseErr: HandleErrorsSomehow() ``` But returning error codes seems un-pythonic enough, let alone returning tuples of error codes and actual result values. The fact that I feel like I want to do something so strange in an application that shouldn't really be terribly complicated, is making me think I'm probably doing something wrong. Is there a better solution to this problem? Or is there some way that I can use `finally` to return a value and raise an exception at the same time?
2011/09/06
[ "https://Stackoverflow.com/questions/7313893", "https://Stackoverflow.com", "https://Stackoverflow.com/users/665488/" ]
Nobody says the *only* valid way to treat an "error" is to throw an exception. In your design the caller wants two pieces of information: (1) the valid data, (2) whether an error occurred (and probably something about what went wrong where, so it can format a useful error message). That is a completely valid and above-ground case for returning a pair of values. An alternative design would be to pass a mutable collection *down* to the function as a parameter and let it fill any error messages it wants to emit into that. That will often simplify the plumbing in the caller, especially if there are several layers of calls between the parser and the code that knows how to do something with the error messages afterwards.
I'd like to propose an alternative solution: using a class. ``` class MyParser(object): def __init__(self): self.warnings = [] def parse(self, file): ... ``` Now the parse function can append warnings to the `warnings` list, and the user can check this list if they so desire. As soon as my functions start becoming more advanced than just "process this and return my value" I like to consider using a class instead. It makes for great clustering of related code into one object and it often makes for cleaner code and simpler usage than functions returning tuples of information.
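A sketch of how that class might be used end to end (the `int()` call here is a hypothetical stand-in for the real `Process(line)` step):

```python
class MyParser(object):
    def __init__(self):
        self.warnings = []

    def parse(self, lines):
        results = []
        for lineno, line in enumerate(lines, 1):
            try:
                results.append(int(line))  # stand-in for Process(line)
            except ValueError:
                self.warnings.append("line %d: %r" % (lineno, line))
        return results

parser = MyParser()
data = parser.parse(["1", "nope", "3"])
print(data)             # [1, 3]
print(parser.warnings)  # ["line 2: 'nope'"]
```

The caller can then inspect `parser.warnings` after the fact and decide how to notify the user, without the parser ever touching the GUI.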
7,313,893
I have a number of functions that parse data from files, usually returning a list of results. If I encounter a dodgy line in the file, I want to soldier on and process the valid lines, and return them. But I also want to report the error to the calling function. The reason I want to report it is so that the calling function can notify the user that the file needs looking at. I don't want to start doing GUI things in the parse function, as that seems to be a big violation of separation of concerns. The parse function does not have access to the console I'm writing error messages to anyway. This leaves me wanting to return the successful data, but also raise an exception because of the error, which clearly I can't do. Consider this code: ``` try: parseResult = parse(myFile) except MyErrorClass, e: HandleErrorsSomehow(str(e)) def parse(file): #file is a list of lines from an actual file err = False result = [] for lines in file: processedLine = Process(line) if not processedLine: err = True else result.append(processedLine) return result if err: raise MyErrorClass("Something went wrong") ``` Obviously the last three lines make no sense, but I can't figure out a nice way to do this. I guess I could do `return (err, result)`, and call it like ``` parseErr, parseResult = parse(file) if parseErr: HandleErrorsSomehow() ``` But returning error codes seems un-pythonic enough, let alone returning tuples of error codes and actual result values. The fact that I feel like I want to do something so strange in an application that shouldn't really be terribly complicated, is making me think I'm probably doing something wrong. Is there a better solution to this problem? Or is there some way that I can use `finally` to return a value and raise an exception at the same time?
2011/09/06
[ "https://Stackoverflow.com/questions/7313893", "https://Stackoverflow.com", "https://Stackoverflow.com/users/665488/" ]
Nobody says the *only* valid way to treat an "error" is to throw an exception. In your design the caller wants two pieces of information: (1) the valid data, (2) whether an error occurred (and probably something about what went wrong where, so it can format a useful error message). That is a completely valid and above-ground case for returning a pair of values. An alternative design would be to pass a mutable collection *down* to the function as a parameter and let it fill any error messages it wants to emit into that. That will often simplify the plumbing in the caller, especially if there are several layers of calls between the parser and the code that knows how to do something with the error messages afterwards.
I come from the .Net world, so not sure how this translates into Python... In cases like yours (where you want to process numerous items in a single call) I'd return a `MyProcessingResults` object that held two collections, for example: * `MyProcessingResults.ProcessedLines` - holds all the valid data you parsed. * `MyProcessingResults.Errors` - holds all the errors (on the assumption that you have more than one and you want to explicitly know about all of them).
7,313,893
I have a number of functions that parse data from files, usually returning a list of results. If I encounter a dodgy line in the file, I want to soldier on and process the valid lines, and return them. But I also want to report the error to the calling function. The reason I want to report it is so that the calling function can notify the user that the file needs looking at. I don't want to start doing GUI things in the parse function, as that seems to be a big violation of separation of concerns. The parse function does not have access to the console I'm writing error messages to anyway. This leaves me wanting to return the successful data, but also raise an exception because of the error, which clearly I can't do. Consider this code: ``` try: parseResult = parse(myFile) except MyErrorClass, e: HandleErrorsSomehow(str(e)) def parse(file): #file is a list of lines from an actual file err = False result = [] for lines in file: processedLine = Process(line) if not processedLine: err = True else result.append(processedLine) return result if err: raise MyErrorClass("Something went wrong") ``` Obviously the last three lines make no sense, but I can't figure out a nice way to do this. I guess I could do `return (err, result)`, and call it like ``` parseErr, parseResult = parse(file) if parseErr: HandleErrorsSomehow() ``` But returning error codes seems un-pythonic enough, let alone returning tuples of error codes and actual result values. The fact that I feel like I want to do something so strange in an application that shouldn't really be terribly complicated, is making me think I'm probably doing something wrong. Is there a better solution to this problem? Or is there some way that I can use `finally` to return a value and raise an exception at the same time?
2011/09/06
[ "https://Stackoverflow.com/questions/7313893", "https://Stackoverflow.com", "https://Stackoverflow.com/users/665488/" ]
Another possible design is to invert control, by passing the error handler in as a parameter. (Also, don't feel like you have to tell Python how to accumulate data in a list. It knows already. It's not hard to make a list comprehension work here.) ``` def sample_handler(): print "OMG, I wasn't expecting that; oh well." parseResult = parse(myFile, sample_handler) def parse(file, handler): #file is a list of lines from an actual file result = [Process(line) for line in file] if not all(result): handler() # i.e. if there are any false-ish values result = filter(None, result) # remove false-ish values if any return result ```
I come from the .Net world, so not sure how this translates into Python... In cases like yours (where you want to process numerous items in a single call) I'd return a `MyProcessingResults` object that held two collections, for example: * `MyProcessingResults.ProcessedLines` - holds all the valid data you parsed. * `MyProcessingResults.Errors` - holds all the errors (on the assumption that you have more than one and you want to explicitly know about all of them).
7,313,893
I have a number of functions that parse data from files, usually returning a list of results. If I encounter a dodgy line in the file, I want to soldier on and process the valid lines, and return them. But I also want to report the error to the calling function. The reason I want to report it is so that the calling function can notify the user that the file needs looking at. I don't want to start doing GUI things in the parse function, as that seems to be a big violation of separation of concerns. The parse function does not have access to the console I'm writing error messages to anyway. This leaves me wanting to return the successful data, but also raise an exception because of the error, which clearly I can't do. Consider this code: ``` try: parseResult = parse(myFile) except MyErrorClass, e: HandleErrorsSomehow(str(e)) def parse(file): #file is a list of lines from an actual file err = False result = [] for lines in file: processedLine = Process(line) if not processedLine: err = True else result.append(processedLine) return result if err: raise MyErrorClass("Something went wrong") ``` Obviously the last three lines make no sense, but I can't figure out a nice way to do this. I guess I could do `return (err, result)`, and call it like ``` parseErr, parseResult = parse(file) if parseErr: HandleErrorsSomehow() ``` But returning error codes seems un-pythonic enough, let alone returning tuples of error codes and actual result values. The fact that I feel like I want to do something so strange in an application that shouldn't really be terribly complicated, is making me think I'm probably doing something wrong. Is there a better solution to this problem? Or is there some way that I can use `finally` to return a value and raise an exception at the same time?
2011/09/06
[ "https://Stackoverflow.com/questions/7313893", "https://Stackoverflow.com", "https://Stackoverflow.com/users/665488/" ]
Nobody says the *only* valid way to treat an "error" is to throw an exception. In your design the caller wants two pieces of information: (1) the valid data, (2) whether an error occurred (and probably something about what went wrong where, so it can format a useful error message). That is a completely valid and above-ground case for returning a pair of values. An alternative design would be to pass a mutable collection *down* to the function as a parameter and let it fill any error messages it wants to emit into that. That will often simplify the plumbing in the caller, especially if there are several layers of calls between the parser and the code that knows how to do something with the error messages afterwards.
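The alternative design — passing a mutable error collection down as a parameter — could be sketched like this (again, `process_line` is a hypothetical stand-in for the real per-line work):

```python
def process_line(line):
    """Hypothetical per-line parser: returns a float, or None on failure."""
    try:
        return float(line)
    except ValueError:
        return None

def parse(lines, errors):
    """Append any error messages into the caller-supplied `errors` list."""
    result = []
    for line in lines:
        value = process_line(line)
        if value is None:
            errors.append("could not parse %r" % line)
        else:
            result.append(value)
    return result

errors = []
values = parse(["1.5", "bad", "2.5"], errors)
# values == [1.5, 2.5]; the caller inspects `errors` afterwards
```

Several layers of calls can share the same `errors` list, which keeps the plumbing in the intermediate callers trivial.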
Another possible design is to invert control, by passing the error handler in as a parameter. (Also, don't feel like you have to tell Python how to accumulate data in a list. It knows already. It's not hard to make a list comprehension work here.) ``` def sample_handler(): print "OMG, I wasn't expecting that; oh well." parseResult = parse(myFile, sample_handler) def parse(file, handler): #file is a list of lines from an actual file result = [Process(line) for line in file] if not all(result): handler() # i.e. if there are any false-ish values result = filter(None, result) # remove false-ish values if any return result ```
7,313,893
I have a number of functions that parse data from files, usually returning a list of results. If I encounter a dodgy line in the file, I want to soldier on and process the valid lines, and return them. But I also want to report the error to the calling function. The reason I want to report it is so that the calling function can notify the user that the file needs looking at. I don't want to start doing GUI things in the parse function, as that seems to be a big violation of separation of concerns. The parse function does not have access to the console I'm writing error messages to anyway. This leaves me wanting to return the successful data, but also raise an exception because of the error, which clearly I can't do. Consider this code: ``` try: parseResult = parse(myFile) except MyErrorClass, e: HandleErrorsSomehow(str(e)) def parse(file): #file is a list of lines from an actual file err = False result = [] for lines in file: processedLine = Process(line) if not processedLine: err = True else result.append(processedLine) return result if err: raise MyErrorClass("Something went wrong") ``` Obviously the last three lines make no sense, but I can't figure out a nice way to do this. I guess I could do `return (err, result)`, and call it like ``` parseErr, parseResult = parse(file) if parseErr: HandleErrorsSomehow() ``` But returning error codes seems un-pythonic enough, let alone returning tuples of error codes and actual result values. The fact that I feel like I want to do something so strange in an application that shouldn't really be terribly complicated, is making me think I'm probably doing something wrong. Is there a better solution to this problem? Or is there some way that I can use `finally` to return a value and raise an exception at the same time?
2011/09/06
[ "https://Stackoverflow.com/questions/7313893", "https://Stackoverflow.com", "https://Stackoverflow.com/users/665488/" ]
Depending on the caller's design, using callbacks might be reasonable: ``` def got_line(line): print 'Got valid line', line def got_error(error): print 'got error', error def parse(file, on_line, on_error): for line in file: processedLine = Process(line) if not processedLine: on_error(MyErrorClass("Something went wrong")) else: on_line(processedLine) parse(some_file, got_line, got_error) ```
I come from the .Net world, so not sure how this translates into Python... In cases like yours (where you want to process numerous items in a single call) I'd return a `MyProcessingResults` object that held two collections, for example: * `MyProcessingResults.ProcessedLines` - holds all the valid data you parsed. * `MyProcessingResults.Errors` - holds all the errors (on the assumption that you have more than one and you want to explicitly know about all of them).
47,404,738
I have this list of countries: ``` country = ['Togo', 'Nauru', 'Palestine, State of', 'Malawi'] ``` I'm trying to write this into a csv: ``` with open('temp.csv', 'wt') as output_write: csvout = csv.writer(output_write) csvout.writerow(country) output_write.close() ``` However the output puts the values into a row rather than a column in csv. Can someone please let me know how to change it? Thanks in advance! --- I followed some of the suggestions below and the output has empty lines between rows: [![enter image description here](https://i.stack.imgur.com/pR55J.jpg)](https://i.stack.imgur.com/pR55J.jpg) The code I used: ``` import csv country = ['Togo', 'Nauru', 'Palestine, State of', 'Malawi'] with open('temp.csv', 'wt') as output_write: csvout = csv.writer(output_write) for item in country: csvout.writerow((item, )) ``` --- Update: I figured the reason that I'm getting an empty line because each row is because windows interpret a new line differently. The code that finally work for me is: ``` import csv country = ['Togo', 'Nauru', 'Palestine, State of', 'Malawi'] with open('temp.csv', 'w', newline = '') as output_write: csvout = csv.writer(output_write) for item in country: csvout.writerow((item, )) ``` Found a related post regarding the empty row: [python empty row](https://stackoverflow.com/questions/3348460/csv-file-written-with-python-has-blank-lines-between-each-row)
2017/11/21
[ "https://Stackoverflow.com/questions/47404738", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6510076/" ]
You have to iterate through the list if you want to write each item to a separate line: ``` import csv country = ['Togo', 'Nauru', 'Palestine, State of', 'Malawi'] with open('temp.csv', 'wt') as output_write: csvout = csv.writer(output_write, delimiter=',') for c in country: csvout.writerow([c]) ```
Try the following: ``` country = ['Togo', 'Nauru', 'Palestine, State of', 'Malawi'] with open('temp.csv', 'wt') as output_write: csvout = csv.writer(output_write, lineterminator='\n') for item in country: csvout.writerow((item, )) ```
47,404,738
I have this list of countries: ``` country = ['Togo', 'Nauru', 'Palestine, State of', 'Malawi'] ``` I'm trying to write this into a csv: ``` with open('temp.csv', 'wt') as output_write: csvout = csv.writer(output_write) csvout.writerow(country) output_write.close() ``` However the output puts the values into a row rather than a column in csv. Can someone please let me know how to change it? Thanks in advance! --- I followed some of the suggestions below and the output has empty lines between rows: [![enter image description here](https://i.stack.imgur.com/pR55J.jpg)](https://i.stack.imgur.com/pR55J.jpg) The code I used: ``` import csv country = ['Togo', 'Nauru', 'Palestine, State of', 'Malawi'] with open('temp.csv', 'wt') as output_write: csvout = csv.writer(output_write) for item in country: csvout.writerow((item, )) ``` --- Update: I figured the reason that I'm getting an empty line because each row is because windows interpret a new line differently. The code that finally work for me is: ``` import csv country = ['Togo', 'Nauru', 'Palestine, State of', 'Malawi'] with open('temp.csv', 'w', newline = '') as output_write: csvout = csv.writer(output_write) for item in country: csvout.writerow((item, )) ``` Found a related post regarding the empty row: [python empty row](https://stackoverflow.com/questions/3348460/csv-file-written-with-python-has-blank-lines-between-each-row)
2017/11/21
[ "https://Stackoverflow.com/questions/47404738", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6510076/" ]
You have to iterate through the list if you want to write each item to a separate line: ``` import csv country = ['Togo', 'Nauru', 'Palestine, State of', 'Malawi'] with open('temp.csv', 'wt') as output_write: csvout = csv.writer(output_write, delimiter=',') for c in country: csvout.writerow([c]) ```
Try this: ``` import csv countries = ['Togo', 'Nauru', 'Palestine, State of', 'Malawi'] with open('temp.csv', 'w') as output_write: csvout = csv.writer(output_write, lineterminator='\n') for country in countries: csvout.writerow([country]) ``` [![enter image description here](https://i.stack.imgur.com/uusgN.png)](https://i.stack.imgur.com/uusgN.png)
11,994,515
I'm using python to set up a computationally intense simulation, then running it in a custom built C-extension and finally processing the results in python. During the simulation, I want to store a fixed-length number of floats (C doubles converted to PyFloatObjects) representing my variables at every time step, but I don't know how many time steps there will be in advance. Once the simulation is done, I need to pass back the results to python in a form where the data logged for each individual variable is available as a list-like object (for example a (wrapper around a) continuous array, piece-wise continuous array or column in a matrix with a fixed stride). At the moment I'm creating a dictionary mapping the name of each variable to a list containing PyFloatObject objects. This format is perfect for working with in the post-processing stage but I have a feeling the creation stage could be a lot faster. Time is quite crucial since the simulation is a computationally heavy task already. I expect that a combination of A. buying lots of memory and B. setting up your experiment wisely will allow the entire log to fit in the RAM. However, with my current dict-of-lists solution keeping every variable's log in a continuous section of memory would require a lot of copying and overhead. My question is: *What is a clever, low-level way of quickly logging gigabytes of doubles in memory with minimal space/time overhead, that still translates to a neat python data structure?* --- **Clarification:** when I say "logging", I mean storing until after the simulation. Once that's done a post-processing phase begins and in most cases I'll only store the resulting graphs. So I don't actually need to store the numbers on disk. --- **Update:** In the end, I changed my approach a little and added the log (as a dict mapping variable names to sequence types) to the function parameters. This allows you to pass in objects such as lists or array.arrays or anything that has an append method. 
This adds a little time overhead because I'm using the PyObject\_CallMethodObjArgs function to call the Append method instead of PyList\_Append or similar. Using arrays allows you to reduce the memory load, which appears to be the best I can do short of writing my own expanding storage type. Thanks everyone!
2012/08/16
[ "https://Stackoverflow.com/questions/11994515", "https://Stackoverflow.com", "https://Stackoverflow.com/users/423420/" ]
You might want to consider doing this in Cython, instead of as a C extension module. Cython is smart, and lets you do things in a pretty Pythonic way, while at the same time letting you use both C datatypes and Python datatypes. Have you checked out the array module? It allows you to store lots of scalar, homogeneous values in a single collection. If you're truly "logging" these, and not just returning them to CPython, you might try opening a file and fprintf'ing them. BTW, realloc might be your friend here, whether you go with a C extension module or Cython.
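For example, the standard `array` module stores plain C doubles in one contiguous, auto-growing buffer — roughly 8 bytes per value rather than a full PyFloatObject each (the loop below is just a stand-in for the simulation):

```python
from array import array

log = array('d')             # contiguous buffer of C doubles
for step in range(1000):     # stand-in for the real simulation loop
    log.append(step * 0.5)   # appends are amortized O(1)

# behaves like a normal Python sequence when post-processing
print(len(log), log.itemsize, log[999])   # 1000 8 499.5
```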
This is going to be more a huge dump of ideas rather than a consistent answer, because it sounds like that's what you're looking for. If not, I apologize. The main thing you're trying to avoid here is storing billions of PyFloatObjects in memory. There are a few ways around that, but they all revolve around storing billions of plain C doubles instead, and finding some way to expose them to Python as if they were sequences of PyFloatObjects. To make Python (or someone else's module) do the work, you can use a numpy array, a standard library array, a simple hand-made wrapper on top of the struct module, or ctypes. (It's a bit odd to use ctypes to deal with an extension module, but there's nothing stopping you from doing it.) If you're using struct or ctypes, you can even go beyond the limits of your memory by creating a huge file and mmapping in windows into it as needed. To make your C module do the work, instead of actually returning a list, return a custom object that meets the sequence protocol, so when someone calls, say, `foo.__getitem__(i)` you convert `_array[i]` to a PyFloatObject on the fly. Another advantage of mmap is that, if you're creating the arrays iteratively, you can create them by just streaming to a file, and then use them by mmapping the resulting file back as a block of memory. Otherwise, you need to handle the allocations. If you're using the standard array, it takes care of auto-expanding as needed, but otherwise, you're doing it yourself. The code to do a realloc and copy if necessary isn't that difficult, and there's lots of sample code online, but you do have to write it. Or you may want to consider building a strided container that you can expose to Python as if it were contiguous even though it isn't. (You can do this directly via the complex buffer protocol, but personally I've always found that harder than writing my own sequence implementation.) 
If you can use C++, vector is an auto-expanding array, and deque is a strided container (and if you've got the SGI STL rope, it may be an even better strided container for the kind of thing you're doing). As the other answer pointed out, Cython can help for some of this. Not so much for the "exposing lots of floats to Python" part; you can just move pieces of the Python part into Cython, where they'll get compiled into C. If you're lucky, all of the code that needs to deal with the lots of floats will work within the subset of Python that Cython implements, and the only things you'll need to expose to actual interpreted code are higher-level drivers (if even that).
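As a small illustration of the "convert on access" idea from the pure-Python side: a `memoryview` cast over a raw byte buffer of doubles exposes them as a sequence without materializing a float object per element up front (the `bytes` object here stands in for a buffer the C extension might hand back):

```python
from array import array

# Stand-in for a raw buffer of C doubles produced by the extension
raw = array('d', [0.5, 1.5, 2.5]).tobytes()

view = memoryview(raw).cast('d')   # zero-copy reinterpretation as doubles
print(len(view), view[2])          # a Python float is built only on access
```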
66,258,454
The way as python2 and python3 handtle the strings and the bytes are different, thus printing a hex string which contains non-ASCII characters in Python3 is different to Python2 does. Why does it happens and how could I print something in Python3 like Python2 does? (With ASCII characters or UTF-8 it works well if you decode the bytes string) Python3: ``` $ python3 -c 'print("\x41\xb3\xde\x41\x42\x43\xad\xde")' |xxd -p 41c2b3c39e414243c2adc39e0a ``` Python2: ``` $ python2 -c 'print "\x41\xb3\xde\x41\x42\x43\xad\xde"' |xxd -p 41b3de414243adde0a ``` \x0a is *newline* because print adds it. How could I print "\xb3" in python3? It adds "\xc2\xb3" instead just "\xb3". ``` $ python3 -c 'print("\xb3")' |xxd 00000000: c2b3 0a ... $ python2 -c 'print "\xb3"' |xxd 00000000: b30a .. ```
2021/02/18
[ "https://Stackoverflow.com/questions/66258454", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2993875/" ]
Assuming `list` is a list of `double`s and you want to find the one nearest to `target`: ``` var result = list .Select((d,i) => (d,i)) .OrderBy(x => Math.Abs(x.d-target)) .First(); ``` * take the list of doubles * convert it to tuples: the double value and the index * sort by absolute difference between the double from the list and the target * get the first, which has the lowest difference = closest value See it in action: <https://dotnetfiddle.net/qybj9X> But this ignores the helpful fact that the original list is already sorted.
Quick solution: use `Array.IndexOf(array, valueToSearch)`. It returns the index of the first appearance of the specified value; if there is no match, it returns -1. ``` double i = 0.1; double[] arrayDoubles = { 0.1, 0.2, 0.3, 0.4 }; Console.WriteLine(Array.IndexOf(arrayDoubles, i)); //it will return 0 ```
71,873,791
if we have a nested list `ListA` and another nested list `ListB` of same length how can we add these nested lists replacing original values of `ListA` in Python? I browsed on hours on end, and found no reliable solution. Would it be possible to do inside a for loop too? Optimally without NumPy, pure python. Here's a pseudo code: ``` ListA = [[1, 2], [3, 4]] ListB = [[5, 6], [7, 8]] ``` Expected output: `ListA = [[6, 8], [10, 12]]` because... 1 + 5, 2 + 8 etc...
2022/04/14
[ "https://Stackoverflow.com/questions/71873791", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18804560/" ]
What are you trying to print out? Each tuple or each IP? Just an FYI, it's not an array in Python, it is a list. I have just done this. ```py data = [('192.168.0.59', 2881, '192.168.0.199', 0, 6), ('192.168.0.199', 0, '192.168.0.59', 0, 1), ('192.168.0.59', 2882, '192.168.0.199', 0, 6)] for item in data: print(item) ``` And got the following: ```sh ('192.168.0.59', 2881, '192.168.0.199', 0, 6) ('192.168.0.199', 0, '192.168.0.59', 0, 1) ('192.168.0.59', 2882, '192.168.0.199', 0, 6) ``` But I have done the same as you and got the same: ```py with open("data.txt", "r") as f: data = f.read() for item in data: print(item) ``` But if you were to do something like `print(type(data))` it would tell you it's a string. That's why you're getting what you're getting: you're iterating over each character in that string. ```py with open("data.txt", "r") as f: data = f.read() new_list = data.strip("][").split(", ") print(type(data)) # str print(type(new_list)) # list ``` Therefore you could `split()` the string, which would get you back to your list, like the above... Having said that, I have tested the split option and I don't think it would give you the desired result. It works better when using `ast`, like so: ```py import ast with open("data.txt", "r") as f: data = f.read() new_list = ast.literal_eval(data) for item in new_list: print(item) ``` This prints out something like: ``` ('192.168.0.59', 6069, '192.168.0.199', 0, 6) ('192.168.0.59', 6070, '192.168.0.199', 0, 6) ('192.168.0.59', 6071, '192.168.0.199', 0, 6) ``` #### Update Getting the first IP: ```py import ast with open("data.txt", "r") as f: data = f.read() new_list = ast.literal_eval(data) for item in new_list: print(item[0]) ```
The [print](https://www.w3schools.com/python/ref_func_print.asp) function in Python has an `end` parameter that defaults to a newline, `\n`. Since each character is read in individually, `i` is a single character followed by `\n`. Try `print(i, end='')`
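A quick way to see the effect (capturing stdout just to make the result checkable — the two-line string stands in for the file contents):

```python
import io
from contextlib import redirect_stdout

data = "ab\ncd\n"              # pretend this came from f.read()
buf = io.StringIO()
with redirect_stdout(buf):
    for ch in data:
        print(ch, end='')      # default end='\n' would double-space everything

print(repr(buf.getvalue()))
```

The captured output is exactly the original string, with no extra blank lines inserted.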
71,873,791
if we have a nested list `ListA` and another nested list `ListB` of same length how can we add these nested lists replacing original values of `ListA` in Python? I browsed on hours on end, and found no reliable solution. Would it be possible to do inside a for loop too? Optimally without NumPy, pure python. Here's a pseudo code: ``` ListA = [[1, 2], [3, 4]] ListB = [[5, 6], [7, 8]] ``` Expected output: `ListA = [[6, 8], [10, 12]]` because... 1 + 5, 2 + 8 etc...
2022/04/14
[ "https://Stackoverflow.com/questions/71873791", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18804560/" ]
What are you trying to print out? Each tuple or each IP? Just an FYI, it's not an array in Python, it is a list. I have just done this. ```py data = [('192.168.0.59', 2881, '192.168.0.199', 0, 6), ('192.168.0.199', 0, '192.168.0.59', 0, 1), ('192.168.0.59', 2882, '192.168.0.199', 0, 6)] for item in data: print(item) ``` And got the following: ```sh ('192.168.0.59', 2881, '192.168.0.199', 0, 6) ('192.168.0.199', 0, '192.168.0.59', 0, 1) ('192.168.0.59', 2882, '192.168.0.199', 0, 6) ``` But I have done the same as you and got the same: ```py with open("data.txt", "r") as f: data = f.read() for item in data: print(item) ``` But if you were to do something like `print(type(data))` it would tell you it's a string. That's why you're getting what you're getting: you're iterating over each character in that string. ```py with open("data.txt", "r") as f: data = f.read() new_list = data.strip("][").split(", ") print(type(data)) # str print(type(new_list)) # list ``` Therefore you could `split()` the string, which would get you back to your list, like the above... Having said that, I have tested the split option and I don't think it would give you the desired result. It works better when using `ast`, like so: ```py import ast with open("data.txt", "r") as f: data = f.read() new_list = ast.literal_eval(data) for item in new_list: print(item) ``` This prints out something like: ``` ('192.168.0.59', 6069, '192.168.0.199', 0, 6) ('192.168.0.59', 6070, '192.168.0.199', 0, 6) ('192.168.0.59', 6071, '192.168.0.199', 0, 6) ``` #### Update Getting the first IP: ```py import ast with open("data.txt", "r") as f: data = f.read() new_list = ast.literal_eval(data) for item in new_list: print(item[0]) ```
f.read() returns the content of your text file as a single string; you cannot directly convert that to a list of tuples. Well, you could, but it takes a lot of split() calls (I recommend splitting on ")," to separate each tuple, then on "," to get each element of the tuple). Something like: ```py with open("example.txt", "r") as f: your_list: list[tuple] = [] # read the file as a string f_in_str: str = f.read() # removing useless characters f_in_str = f_in_str.replace("[", "") f_in_str = f_in_str.replace("]", "") f_in_str = f_in_str.replace("(", "") f_in_str = f_in_str.replace(" ", "") f_in_str = f_in_str.replace("'", "") # split it into a list of per-tuple strings tuples_in_str: list[str] = f_in_str.split("),") # convert each str to a tuple for tuple_str in tuples_in_str: # drop any trailing ")"; split() returns a list that we convert into a tuple a_tuple: tuple = tuple(tuple_str.rstrip(")").split(",")) your_list.append(a_tuple) ``` Note that I have not tested this code. I strongly advise you, if you can, to change the format of your source to something like a csv. It will make things a lot easier in the future.
57,009,662
I am wondering if it is possible, without any problems, to have Python installed on a network drive for use by multiple Windows users who have only read and execute rights. As far as I know, it is possible to add the python binaries to the PATH variable and run python on another drive without any problem, but I was wondering some things : * I know you can install Python on another drive than your C: drive, but not sure if the same is possible with a network drive. * Can this support concurrent users? Like two people running python scripts at the same time. * Would users with no write privileges still be able to install python modules? I want only users with write access to the drive to be able to do this. * Would this pose any problems with some modules? Thanks.
2019/07/12
[ "https://Stackoverflow.com/questions/57009662", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7623655/" ]
I'm a Linux user so can't test, but this seems like a common question and [google](https://www.google.com/search?q=python+on+a+network+drive) finds [lots](https://www.reddit.com/r/Python/comments/1ngql8/python_on_network_drive/) of [similar](https://groups.google.com/forum/#!topic/comp.lang.python/KyWvdfaOxlU) questions, so I'll answer your questions to the best of my ability: 1. it should certainly support "concurrent users" — the file server wouldn't even know the program is running; it'll just see devices on the network opening/reading the files 2. users obviously wouldn't be able to install things on network drives if they don't have write permission, but they should still be able to install modules locally (i.e. on their own machine) with something like [`pip install --user requests`](https://pip.pypa.io/en/stable/user_guide/#user-installs) 3. I'm sure this would cause problems with **some** modules, but I wouldn't expect it to be many. Note that lots of modules have issues running under OSs that they weren't developed/tested on, and a lot of Python work happens on Unix/Linux/OSX machines
If you want everyone to have the same development environment: try creating a Python virtualenv in the remote folder, so they can activate it when it's required. <https://virtualenv.pypa.io/en/stable/> But if what you want to do is distribute an app and have everyone run the same Python, maybe take a look at PyInstaller, 'an application to convert Python scripts into stand-alone binaries'.
63,097,165
I tried doing this: ``` with open("contactbook.txt","a") as f: f.write("hello\n") ``` But it gets printed as: ``` 'hello\n' ``` My code to read the file: ``` with open("contactbook.txt","r") as f: lines = sorted(f.readlines()) print(lines) ``` EDIT: I tried to check the text file and there \n really was interpreted as a linebreak! But in python's shell it's still showing as \n What am I doing wrong? Is it because of how I print it?
2020/07/26
[ "https://Stackoverflow.com/questions/63097165", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13764026/" ]
It's being written out fine. The problem is that when you read the file back in, you print out the whole `lines` list in a single `print(lines)` statement; this will give you a form suitable for debugging, not really for display to the end-user, including writing out control characters as `\n` and so on. If you check the file in any other way, you'll be able to confirm that it has the content that you want.
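The difference is easy to see without a file at all: printing the list shows the debugging form (the `repr` of every element, escapes included), while printing each element renders the newlines:

```python
lines = ["hello\n", "world\n"]

print(lines)              # ['hello\n', 'world\n'] -- reprs, with visible \n
for line in lines:
    print(line, end='')   # hello and world, each on its own line
```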
Just in case there are other people like me who get the same problem... The problem with my code before was the way I printed it out, so by changing how I read/print it to: ``` with open('contactbook.txt', 'r') as f: for lines in sorted(f): print(lines, end='') ``` It now works! sabik's answer helped me realize this! :)