Dataset schema (column, dtype, min, max):

| column | dtype | min | max |
| --- | --- | --- | --- |
| qid | int64 | 46k | 74.7M |
| question | string (length) | 54 | 37.8k |
| date | string (length) | 10 | 10 |
| metadata | sequence (length) | 3 | 3 |
| response_j | string (length) | 17 | 26k |
| response_k | string (length) | 26 | 26k |
1,104,762
How can I break a long one-liner string in my code and keep the string indented with the rest of the code? [PEP 8](http://www.python.org/dev/peps/pep-0008/ "PEP-8") doesn't have any example for this case. Correct output but strangely indented: ``` if True: print "long test long test long test long test long \ test long test long test long test long test long test" >>> long test long test long test long test long test long test long test long test long test long test ``` Bad output, but looks better in code: ``` if True: print "long test long test long test long test long \ test long test long test long test long test long test" >>> long test long test long test long test long test long test long test long test long test long test ``` --- Wow, lots of fast answers. Thanks!
2009/07/09
[ "https://Stackoverflow.com/questions/1104762", "https://Stackoverflow.com", "https://Stackoverflow.com/users/23002/" ]
You can use a trailing backslash together with implicit concatenation of adjacent string literals, like this: ``` if True: print "long test long test long test long test long " \ "test long test long test long test long test long test" ```
Why isn't anyone recommending triple quotes? ``` print """ blah blah blah ..............""" ```
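As a side note on the indentation concern above, a small sketch (Python 3 `print`; the filler text is a placeholder) of how a triple-quoted string can stay indented with the code while the output loses the indentation, assuming the standard-library `textwrap` approach:

```python
import textwrap

def demo():
    # The triple-quoted literal is indented with the code; the
    # backslash after """ suppresses the leading blank line.
    msg = """\
        long test long test long test
        long test long test long test"""
    # dedent() strips the common leading whitespace before printing.
    print(textwrap.dedent(msg))

demo()
```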
1,104,762
How can I break a long one-liner string in my code and keep the string indented with the rest of the code? [PEP 8](http://www.python.org/dev/peps/pep-0008/ "PEP-8") doesn't have any example for this case. Correct output but strangely indented: ``` if True: print "long test long test long test long test long \ test long test long test long test long test long test" >>> long test long test long test long test long test long test long test long test long test long test ``` Bad output, but looks better in code: ``` if True: print "long test long test long test long test long \ test long test long test long test long test long test" >>> long test long test long test long test long test long test long test long test long test long test ``` --- Wow, lots of fast answers. Thanks!
2009/07/09
[ "https://Stackoverflow.com/questions/1104762", "https://Stackoverflow.com", "https://Stackoverflow.com/users/23002/" ]
``` if True: print "long test long test long test long test long"\ "test long test long test long test long test long test" ```
``` if True: print "long test long test long test "+ "long test long test long test "+ "long test long test long test " ``` And so on.
1,104,762
How can I break a long one-liner string in my code and keep the string indented with the rest of the code? [PEP 8](http://www.python.org/dev/peps/pep-0008/ "PEP-8") doesn't have any example for this case. Correct output but strangely indented: ``` if True: print "long test long test long test long test long \ test long test long test long test long test long test" >>> long test long test long test long test long test long test long test long test long test long test ``` Bad output, but looks better in code: ``` if True: print "long test long test long test long test long \ test long test long test long test long test long test" >>> long test long test long test long test long test long test long test long test long test long test ``` --- Wow, lots of fast answers. Thanks!
2009/07/09
[ "https://Stackoverflow.com/questions/1104762", "https://Stackoverflow.com", "https://Stackoverflow.com/users/23002/" ]
``` if True: print "long test long test long test long test long"\ "test long test long test long test long test long test" ```
Why isn't anyone recommending triple quotes? ``` print """ blah blah blah ..............""" ```
1,104,762
How can I break a long one-liner string in my code and keep the string indented with the rest of the code? [PEP 8](http://www.python.org/dev/peps/pep-0008/ "PEP-8") doesn't have any example for this case. Correct output but strangely indented: ``` if True: print "long test long test long test long test long \ test long test long test long test long test long test" >>> long test long test long test long test long test long test long test long test long test long test ``` Bad output, but looks better in code: ``` if True: print "long test long test long test long test long \ test long test long test long test long test long test" >>> long test long test long test long test long test long test long test long test long test long test ``` --- Wow, lots of fast answers. Thanks!
2009/07/09
[ "https://Stackoverflow.com/questions/1104762", "https://Stackoverflow.com", "https://Stackoverflow.com/users/23002/" ]
Why isn't anyone recommending triple quotes? ``` print """ blah blah blah ..............""" ```
``` if True: print "long test long test long test "+ "long test long test long test "+ "long test long test long test " ``` And so on.
56,680,581
If there's a function `f(x)` and x's type may be Int or String: if it's Int, then f will return `x+1`; if it's String, then f will reverse x and return it. This is easy in dynamically typed languages like Python and JavaScript, which can just use `isinstance(x, int)`. We can learn the type and branch with if-else, but in statically typed languages like Kotlin I don't know how to do that, because I don't know how to give x a type that may be Int or String. def f(x): ``` if isinstance(x, int): return x+1 if isinstance(x, str): return x[::-1] ``` In Haskell we have pattern matching to do that: f :: Either Int String -> Either Int String f (Left x) = Left (x+1) f (Right x) = Right (reverse x) And in Kotlin?
2019/06/20
[ "https://Stackoverflow.com/questions/56680581", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11674346/" ]
In Kotlin you have [Arrow](https://arrow-kt.io/), which provides a lot of functional capabilities for the language. Among them you have [`EitherT`](https://arrow-kt.io/docs/arrow/data/eithert/). That lets you define: ``` fun f(x: Either<Int, String>): Either<Int, String> = x.bimap({ it+1 }, { it.reversed() }) ```
You could do something like: ``` fun getValue(id: Int): Any { ... } fun process(value: Int) { ... } fun process(value: String) { ... } val value = getValue(valueId) when (value) { is Int -> process(value) is String -> process(value) else -> ... } ``` This way, you can use method overloading to do the job for you based on params.
56,680,581
If there's a function `f(x)` and x's type may be Int or String: if it's Int, then f will return `x+1`; if it's String, then f will reverse x and return it. This is easy in dynamically typed languages like Python and JavaScript, which can just use `isinstance(x, int)`. We can learn the type and branch with if-else, but in statically typed languages like Kotlin I don't know how to do that, because I don't know how to give x a type that may be Int or String. def f(x): ``` if isinstance(x, int): return x+1 if isinstance(x, str): return x[::-1] ``` In Haskell we have pattern matching to do that: f :: Either Int String -> Either Int String f (Left x) = Left (x+1) f (Right x) = Right (reverse x) And in Kotlin?
2019/06/20
[ "https://Stackoverflow.com/questions/56680581", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11674346/" ]
A sealed class plus `when` gives you the pattern matching: ``` sealed class E data class L(val v: String) : E() data class R(val v: Int) : E() fun poly(expr: E): E = when (expr) { is L -> L(expr.v) is R -> R(expr.v + 1) } println(poly(poly(R(3)))) println(poly(L("aha"))) fun poly2(expr: E): Any? = when (expr) { is L -> expr.v is R -> expr.v + 1 } ```
You could do something like: ``` fun getValue(id: Int): Any { ... } fun process(value: Int) { ... } fun process(value: String) { ... } val value = getValue(valueId) when (value) { is Int -> process(value) is String -> process(value) else -> ... } ``` This way, you can use method overloading to do the job for you based on params.
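For reference, the question's own Python version can also be made statically checkable, which is a rough analogue of the Kotlin sealed-class approach above; a sketch using type hints (a checker such as mypy is assumed, and the function name is the question's own):

```python
from typing import Union

def f(x: Union[int, str]) -> Union[int, str]:
    # A static checker narrows the union in each branch,
    # much like Kotlin's `when (x) { is Int -> ... }`.
    if isinstance(x, int):
        return x + 1
    return x[::-1]

print(f(3))      # 4
print(f("abc"))  # cba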
34,894,096
What is the best way to read in a line of numbers from a file when they are presented in a format like this: ``` [1, 2, 3 , -4, 5] [10, 11, -12, 13, 14 ] ``` Annoyingly, as I depicted, sometimes there are extra spaces between the numbers, sometimes not. I've attempted to use `CSV` to work around the commas, but the brackets and the random spaces are proving difficult to remove as well. Ideally I would append each number between the brackets as an `int` to a `list`, but of course the brackets are causing `int()` to fail. I've already looked into similar solutions suggested with [Removing unwanted characters from a string in Python](https://stackoverflow.com/questions/2780904/removing-unwanted-characters-from-a-string-in-python/ "this") and [Python Read File, Look up a String and Remove Characters](https://stackoverflow.com/questions/19201575/python-read-file-look-up-a-string-and-remove-characters), but unfortunately I keep falling short when I try to combine everything.
2016/01/20
[ "https://Stackoverflow.com/questions/34894096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5814412/" ]
Use a regular expression to strip the unwanted characters from the string: ``` import re # replace everything except digits and minus signs with spaces text_ = re.sub(r"[^\d-]+", " ", text) ``` Second method: ``` >>> s = "h3110 23 cat 444.4 rabbit 11 2 dog" >>> [int(tok) for tok in s.split() if tok.isdigit()] [23, 11, 2] ```
Use the [`json`](https://docs.python.org/3/library/json.html#json.loads) module to parse each line as a [JSON](http://json.org/) array. ``` import json list_of_ints = [] for line in open("/tmp/so.txt").readlines(): a = json.loads(line) list_of_ints.extend(a) print(list_of_ints) ``` This collects all integers from all lines into `list_of_ints`. Output: ``` [1, 2, 3, -4, 5, 10, 11, -12, 13, 14] ```
34,894,096
What is the best way to read in a line of numbers from a file when they are presented in a format like this: ``` [1, 2, 3 , -4, 5] [10, 11, -12, 13, 14 ] ``` Annoyingly, as I depicted, sometimes there are extra spaces between the numbers, sometimes not. I've attempted to use `CSV` to work around the commas, but the brackets and the random spaces are proving difficult to remove as well. Ideally I would append each number between the brackets as an `int` to a `list`, but of course the brackets are causing `int()` to fail. I've already looked into similar solutions suggested with [Removing unwanted characters from a string in Python](https://stackoverflow.com/questions/2780904/removing-unwanted-characters-from-a-string-in-python/ "this") and [Python Read File, Look up a String and Remove Characters](https://stackoverflow.com/questions/19201575/python-read-file-look-up-a-string-and-remove-characters), but unfortunately I keep falling short when I try to combine everything.
2016/01/20
[ "https://Stackoverflow.com/questions/34894096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5814412/" ]
Since each line already looks like a literal Python list, you can use the [ast](https://docs.python.org/2.7/library/ast.html) module: ``` import ast with open('myfile.txt') as fh: for line in fh: numbers_list = ast.literal_eval(line) ``` Note that you could obtain the same result using the builtin function [eval()](https://docs.python.org/2.7/library/functions.html#eval), but ast is more secure against malicious input.
Use the [`json`](https://docs.python.org/3/library/json.html#json.loads) module to parse each line as a [JSON](http://json.org/) array. ``` import json list_of_ints = [] for line in open("/tmp/so.txt").readlines(): a = json.loads(line) list_of_ints.extend(a) print(list_of_ints) ``` This collects all integers from all lines into `list_of_ints`. Output: ``` [1, 2, 3, -4, 5, 10, 11, -12, 13, 14] ```
34,894,096
What is the best way to read in a line of numbers from a file when they are presented in a format like this: ``` [1, 2, 3 , -4, 5] [10, 11, -12, 13, 14 ] ``` Annoyingly, as I depicted, sometimes there are extra spaces between the numbers, sometimes not. I've attempted to use `CSV` to work around the commas, but the brackets and the random spaces are proving difficult to remove as well. Ideally I would append each number between the brackets as an `int` to a `list`, but of course the brackets are causing `int()` to fail. I've already looked into similar solutions suggested with [Removing unwanted characters from a string in Python](https://stackoverflow.com/questions/2780904/removing-unwanted-characters-from-a-string-in-python/ "this") and [Python Read File, Look up a String and Remove Characters](https://stackoverflow.com/questions/19201575/python-read-file-look-up-a-string-and-remove-characters), but unfortunately I keep falling short when I try to combine everything.
2016/01/20
[ "https://Stackoverflow.com/questions/34894096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5814412/" ]
Using [`ast.literal_eval()`](https://docs.python.org/3/library/ast.html#ast.literal_eval) is another option: ``` from ast import literal_eval with open("your_file.txt") as file_obj: for line in file_obj: lst = literal_eval(line) do_stuff(lst) ```
Use the [`json`](https://docs.python.org/3/library/json.html#json.loads) module to parse each line as a [JSON](http://json.org/) array. ``` import json list_of_ints = [] for line in open("/tmp/so.txt").readlines(): a = json.loads(line) list_of_ints.extend(a) print(list_of_ints) ``` This collects all integers from all lines into `list_of_ints`. Output: ``` [1, 2, 3, -4, 5, 10, 11, -12, 13, 14] ```
34,894,096
What is the best way to read in a line of numbers from a file when they are presented in a format like this: ``` [1, 2, 3 , -4, 5] [10, 11, -12, 13, 14 ] ``` Annoyingly, as I depicted, sometimes there are extra spaces between the numbers, sometimes not. I've attempted to use `CSV` to work around the commas, but the brackets and the random spaces are proving difficult to remove as well. Ideally I would append each number between the brackets as an `int` to a `list`, but of course the brackets are causing `int()` to fail. I've already looked into similar solutions suggested with [Removing unwanted characters from a string in Python](https://stackoverflow.com/questions/2780904/removing-unwanted-characters-from-a-string-in-python/ "this") and [Python Read File, Look up a String and Remove Characters](https://stackoverflow.com/questions/19201575/python-read-file-look-up-a-string-and-remove-characters), but unfortunately I keep falling short when I try to combine everything.
2016/01/20
[ "https://Stackoverflow.com/questions/34894096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5814412/" ]
Use a regular expression to strip the unwanted characters from the string: ``` import re # replace everything except digits and minus signs with spaces text_ = re.sub(r"[^\d-]+", " ", text) ``` Second method: ``` >>> s = "h3110 23 cat 444.4 rabbit 11 2 dog" >>> [int(tok) for tok in s.split() if tok.isdigit()] [23, 11, 2] ```
Since each line already looks like a literal Python list, you can use the [ast](https://docs.python.org/2.7/library/ast.html) module: ``` import ast with open('myfile.txt') as fh: for line in fh: numbers_list = ast.literal_eval(line) ``` Note that you could obtain the same result using the builtin function [eval()](https://docs.python.org/2.7/library/functions.html#eval), but ast is more secure against malicious input.
34,894,096
What is the best way to read in a line of numbers from a file when they are presented in a format like this: ``` [1, 2, 3 , -4, 5] [10, 11, -12, 13, 14 ] ``` Annoyingly, as I depicted, sometimes there are extra spaces between the numbers, sometimes not. I've attempted to use `CSV` to work around the commas, but the brackets and the random spaces are proving difficult to remove as well. Ideally I would append each number between the brackets as an `int` to a `list`, but of course the brackets are causing `int()` to fail. I've already looked into similar solutions suggested with [Removing unwanted characters from a string in Python](https://stackoverflow.com/questions/2780904/removing-unwanted-characters-from-a-string-in-python/ "this") and [Python Read File, Look up a String and Remove Characters](https://stackoverflow.com/questions/19201575/python-read-file-look-up-a-string-and-remove-characters), but unfortunately I keep falling short when I try to combine everything.
2016/01/20
[ "https://Stackoverflow.com/questions/34894096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5814412/" ]
Use a regular expression to strip the unwanted characters from the string: ``` import re # replace everything except digits and minus signs with spaces text_ = re.sub(r"[^\d-]+", " ", text) ``` Second method: ``` >>> s = "h3110 23 cat 444.4 rabbit 11 2 dog" >>> [int(tok) for tok in s.split() if tok.isdigit()] [23, 11, 2] ```
Using [`ast.literal_eval()`](https://docs.python.org/3/library/ast.html#ast.literal_eval) is another option: ``` from ast import literal_eval with open("your_file.txt") as file_obj: for line in file_obj: lst = literal_eval(line) do_stuff(lst) ```
34,894,096
What is the best way to read in a line of numbers from a file when they are presented in a format like this: ``` [1, 2, 3 , -4, 5] [10, 11, -12, 13, 14 ] ``` Annoyingly, as I depicted, sometimes there are extra spaces between the numbers, sometimes not. I've attempted to use `CSV` to work around the commas, but the brackets and the random spaces are proving difficult to remove as well. Ideally I would append each number between the brackets as an `int` to a `list`, but of course the brackets are causing `int()` to fail. I've already looked into similar solutions suggested with [Removing unwanted characters from a string in Python](https://stackoverflow.com/questions/2780904/removing-unwanted-characters-from-a-string-in-python/ "this") and [Python Read File, Look up a String and Remove Characters](https://stackoverflow.com/questions/19201575/python-read-file-look-up-a-string-and-remove-characters), but unfortunately I keep falling short when I try to combine everything.
2016/01/20
[ "https://Stackoverflow.com/questions/34894096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5814412/" ]
Using [`ast.literal_eval()`](https://docs.python.org/3/library/ast.html#ast.literal_eval) is another option: ``` from ast import literal_eval with open("your_file.txt") as file_obj: for line in file_obj: lst = literal_eval(line) do_stuff(lst) ```
Since each line already looks like a literal Python list, you can use the [ast](https://docs.python.org/2.7/library/ast.html) module: ``` import ast with open('myfile.txt') as fh: for line in fh: numbers_list = ast.literal_eval(line) ``` Note that you could obtain the same result using the builtin function [eval()](https://docs.python.org/2.7/library/functions.html#eval), but ast is more secure against malicious input.
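Pulling the `ast` answers above together, a minimal end-to-end sketch (the input file name is hypothetical):

```python
from ast import literal_eval

numbers = []
with open("numbers.txt") as fh:  # hypothetical input file
    for line in fh:
        line = line.strip()
        if line:  # skip blank lines
            # literal_eval tolerates the stray spaces inside the brackets
            numbers.extend(int(n) for n in literal_eval(line))

print(numbers)  # [1, 2, 3, -4, 5, 10, 11, -12, 13, 14]
```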
30,798,447
I tried the following code, but I ran into problems. I think .values is the problem, but how do I encode this as a Theano object? The following is my data source ``` home_team,away_team,home_score,away_score Wales,Italy,23,15 France,England,26,24 Ireland,Scotland,28,6 Ireland,Wales,26,3 Scotland,England,0,20 France,Italy,30,10 Wales,France,27,6 Italy,Scotland,20,21 England,Ireland,13,10 Ireland,Italy,46,7 Scotland,France,17,19 England,Wales,29,18 Italy,England,11,52 Wales,Scotland,51,3 France,Ireland,20,22 ``` Here is the PyMC2 code, which works: `data_file = DATA_DIR + 'results_2014.csv'` ``` df = pd.read_csv(data_file, sep=',') # Or whatever it takes to get this into a data frame. teams = df.home_team.unique() teams = pd.DataFrame(teams, columns=['team']) teams['i'] = teams.index df = pd.merge(df, teams, left_on='home_team', right_on='team', how='left') df = df.rename(columns = {'i': 'i_home'}).drop('team', 1) df = pd.merge(df, teams, left_on='away_team', right_on='team', how='left') df = df.rename(columns = {'i': 'i_away'}).drop('team', 1) observed_home_goals = df.home_score.values observed_away_goals = df.away_score.values home_team = df.i_home.values away_team = df.i_away.values num_teams = len(df.i_home.drop_duplicates()) num_games = len(home_team) g = df.groupby('i_away') att_starting_points = np.log(g.away_score.mean()) g = df.groupby('i_home') def_starting_points = -np.log(g.away_score.mean()) #hyperpriors home = pymc.Normal('home', 0, .0001, value=0) tau_att = pymc.Gamma('tau_att', .1, .1, value=10) tau_def = pymc.Gamma('tau_def', .1, .1, value=10) intercept = pymc.Normal('intercept', 0, .0001, value=0) #team-specific parameters atts_star = pymc.Normal("atts_star", mu=0, tau=tau_att, size=num_teams, value=att_starting_points.values) defs_star = pymc.Normal("defs_star", mu=0, tau=tau_def, size=num_teams, value=def_starting_points.values) # trick to code the sum to zero constraint @pymc.deterministic def atts(atts_star=atts_star): atts = atts_star.copy() atts = atts - np.mean(atts_star) return atts @pymc.deterministic def defs(defs_star=defs_star): defs = defs_star.copy() defs = defs - np.mean(defs_star) return defs @pymc.deterministic def home_theta(home_team=home_team, away_team=away_team, home=home, atts=atts, defs=defs, intercept=intercept): return np.exp(intercept + home + atts[home_team] + defs[away_team]) @pymc.deterministic def away_theta(home_team=home_team, away_team=away_team, home=home, atts=atts, defs=defs, intercept=intercept): return np.exp(intercept + atts[away_team] + defs[home_team]) home_points = pymc.Poisson('home_points', mu=home_theta, value=observed_home_goals, observed=True) away_points = pymc.Poisson('away_points', mu=away_theta, value=observed_away_goals, observed=True) mcmc = pymc.MCMC([home, intercept, tau_att, tau_def, home_theta, away_theta, atts_star, defs_star, atts, defs, home_points, away_points]) map_ = pymc.MAP( mcmc ) map_.fit() mcmc.sample(200000, 40000, 20) ``` My attempt at porting to PyMC3 :) And I include the wrangling code. I defined my own data directory etc. ``` data_file = DATA_DIR + 'results_2014.csv' df = pd.read_csv(data_file, sep=',') # Or whatever it takes to get this into a data frame.
teams = df.home_team.unique() teams = pd.DataFrame(teams, columns=['team']) teams['i'] = teams.index df = pd.merge(df, teams, left_on='home_team', right_on='team', how='left') df = df.rename(columns = {'i': 'i_home'}).drop('team', 1) df = pd.merge(df, teams, left_on='away_team', right_on='team', how='left') df = df.rename(columns = {'i': 'i_away'}).drop('team', 1) observed_home_goals = df.home_score.values observed_away_goals = df.away_score.values home_team = df.i_home.values away_team = df.i_away.values num_teams = len(df.i_home.drop_duplicates()) num_games = len(home_team) g = df.groupby('i_away') att_starting_points = np.log(g.away_score.mean()) g = df.groupby('i_home') def_starting_points = -np.log(g.away_score.mean()) import theano.tensor as T import pymc3 as pm3 #hyperpriors x = att_starting_points.values y = def_starting_points.values model = pm.Model() with pm3.Model() as model: home3 = pm3.Normal('home', 0, .0001) tau_att3 = pm3.Gamma('tau_att', .1, .1) tau_def3 = pm3.Gamma('tau_def', .1, .1) intercept3 = pm3.Normal('intercept', 0, .0001) #team-specific parameters atts_star3 = pm3.Normal("atts_star", mu=0, tau=tau_att3, observed=x) defs_star3 = pm3.Normal("defs_star", mu=0, tau=tau_def3, observed=y) #Seems to be the error here. atts = pm3.Deterministic('regression', atts_star3 - np.mean(atts_star3)) home_theta3 = pm3.Deterministic('regression', T.exp(intercept3 + atts[away_team] + defs[home_team])) atts = pm3.Deterministic('regression', atts_star3 - np.mean(atts_star3)) home_theta3 = pm3.Deterministic('regression', T.exp(intercept3 + atts[away_team] + defs[home_team])) # Unknown model parameters home_points3 = pm3.Poisson('home_points', mu=home_theta3, observed=observed_home_goals) away_points3 = pm3.Poisson('away_points', mu=home_theta3, observed=observed_away_goals) start = pm3.find_MAP() step = pm3.NUTS(state=start) trace = pm3.sample(2000, step, start=start, progressbar=True) pm3.traceplot(trace) ``` And I get an error like values isn't a Theano object. I think this is the .values part above. But i'm confused about how to convert this into a Theano tensor. The tensors are confusing me :) And the error for clarity, because I've misunderstood something in PyMC3 syntax. ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-71-ce51c1a64412> in <module>() 23 24 #Seems to be the error here. ---> 25 atts = pm3.Deterministic('regression', atts_star3 - np.mean(atts_star3)) 26 home_theta3 = pm3.Deterministic('regression', T.exp(intercept3 + atts[away_team] + defs[home_team])) 27 /Users/peadarcoyle/anaconda/lib/python3.4/site-packages/numpy/core/fromnumeric.py in mean(a, axis, dtype, out, keepdims) 2733 2734 return _methods._mean(a, axis=axis, dtype=dtype, -> 2735 out=out, keepdims=keepdims) 2736 2737 def std(a, axis=None, dtype=None, out=None, ddof=0, keepdims=False): /Users/peadarcoyle/anaconda/lib/python3.4/site-packages/numpy/core/_methods.py in _mean(a, axis, dtype, out, keepdims) 71 ret = ret.dtype.type(ret / rcount) 72 else: ---> 73 ret = ret / rcount 74 75 return ret TypeError: unsupported operand type(s) for /: 'ObservedRV' and 'int' ```
2015/06/12
[ "https://Stackoverflow.com/questions/30798447", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2610971/" ]
Here is my translation of your PyMC2 model: ``` import theano.tensor as tt with pm.Model() as model: # global model parameters home = pm.Normal('home', 0, .0001) tau_att = pm.Gamma('tau_att', .1, .1) tau_def = pm.Gamma('tau_def', .1, .1) intercept = pm.Normal('intercept', 0, .0001) # team-specific model parameters atts_star = pm.Normal("atts_star", mu=0, tau=tau_att, shape=num_teams) defs_star = pm.Normal("defs_star", mu=0, tau=tau_def, shape=num_teams) atts = pm.Deterministic('atts', atts_star - tt.mean(atts_star)) defs = pm.Deterministic('defs', defs_star - tt.mean(defs_star)) home_theta = tt.exp(intercept + home + atts[home_team] + defs[away_team]) away_theta = tt.exp(intercept + atts[away_team] + defs[home_team]) # likelihood of observed data home_points = pm.Poisson('home_points', mu=home_theta, observed=observed_home_goals) away_points = pm.Poisson('away_points', mu=away_theta, observed=observed_away_goals) ``` The big difference, as I see it, between PyMC2 and PyMC3 model building is that the whole business of initial values in PyMC2 is not part of model building in PyMC3; it is pushed off into the model-fitting portion of the code. Here is a notebook that puts this model in context with your data and some fitting code: <http://nbviewer.ipython.org/gist/aflaxman/55e23195fe0a0b089103>
Your model is failing because you can't use NumPy functions on Theano tensors, so ``` np.mean(atts_star3) ``` will give you an error. You can remove `atts_star3 = pm3.Normal("atts_star",...)` and just use the NumPy array directly: `atts_star3 = x`. I don't think you need to explicitly model `tau_att3`, `tau_def3` or `defs_star` either. Alternatively, if you want to keep those variables you can replace `np.mean` with `theano.tensor.mean`, which should work.
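A sketch of the `theano.tensor.mean` suggestion in isolation (placeholder data; PyMC3-era imports assumed):

```python
import numpy as np
import pymc3 as pm3
import theano.tensor as tt

x = np.log([1.5, 2.0, 3.0])  # placeholder starting points

with pm3.Model():
    tau_att3 = pm3.Gamma('tau_att', .1, .1)
    atts_star3 = pm3.Normal('atts_star', mu=0, tau=tau_att3, shape=len(x))
    # tt.mean builds a symbolic graph node, so it works on the
    # free random variable where np.mean raised a TypeError.
    atts = pm3.Deterministic('atts', atts_star3 - tt.mean(atts_star3))
```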
30,798,447
I tried the following code, but I ran into problems. I think .values is the problem, but how do I encode this as a Theano object? The following is my data source ``` home_team,away_team,home_score,away_score Wales,Italy,23,15 France,England,26,24 Ireland,Scotland,28,6 Ireland,Wales,26,3 Scotland,England,0,20 France,Italy,30,10 Wales,France,27,6 Italy,Scotland,20,21 England,Ireland,13,10 Ireland,Italy,46,7 Scotland,France,17,19 England,Wales,29,18 Italy,England,11,52 Wales,Scotland,51,3 France,Ireland,20,22 ``` Here is the PyMC2 code, which works: `data_file = DATA_DIR + 'results_2014.csv'` ``` df = pd.read_csv(data_file, sep=',') # Or whatever it takes to get this into a data frame. teams = df.home_team.unique() teams = pd.DataFrame(teams, columns=['team']) teams['i'] = teams.index df = pd.merge(df, teams, left_on='home_team', right_on='team', how='left') df = df.rename(columns = {'i': 'i_home'}).drop('team', 1) df = pd.merge(df, teams, left_on='away_team', right_on='team', how='left') df = df.rename(columns = {'i': 'i_away'}).drop('team', 1) observed_home_goals = df.home_score.values observed_away_goals = df.away_score.values home_team = df.i_home.values away_team = df.i_away.values num_teams = len(df.i_home.drop_duplicates()) num_games = len(home_team) g = df.groupby('i_away') att_starting_points = np.log(g.away_score.mean()) g = df.groupby('i_home') def_starting_points = -np.log(g.away_score.mean()) #hyperpriors home = pymc.Normal('home', 0, .0001, value=0) tau_att = pymc.Gamma('tau_att', .1, .1, value=10) tau_def = pymc.Gamma('tau_def', .1, .1, value=10) intercept = pymc.Normal('intercept', 0, .0001, value=0) #team-specific parameters atts_star = pymc.Normal("atts_star", mu=0, tau=tau_att, size=num_teams, value=att_starting_points.values) defs_star = pymc.Normal("defs_star", mu=0, tau=tau_def, size=num_teams, value=def_starting_points.values) # trick to code the sum to zero constraint @pymc.deterministic def atts(atts_star=atts_star): atts = atts_star.copy() atts = atts - np.mean(atts_star) return atts @pymc.deterministic def defs(defs_star=defs_star): defs = defs_star.copy() defs = defs - np.mean(defs_star) return defs @pymc.deterministic def home_theta(home_team=home_team, away_team=away_team, home=home, atts=atts, defs=defs, intercept=intercept): return np.exp(intercept + home + atts[home_team] + defs[away_team]) @pymc.deterministic def away_theta(home_team=home_team, away_team=away_team, home=home, atts=atts, defs=defs, intercept=intercept): return np.exp(intercept + atts[away_team] + defs[home_team]) home_points = pymc.Poisson('home_points', mu=home_theta, value=observed_home_goals, observed=True) away_points = pymc.Poisson('away_points', mu=away_theta, value=observed_away_goals, observed=True) mcmc = pymc.MCMC([home, intercept, tau_att, tau_def, home_theta, away_theta, atts_star, defs_star, atts, defs, home_points, away_points]) map_ = pymc.MAP( mcmc ) map_.fit() mcmc.sample(200000, 40000, 20) ``` My attempt at porting to PyMC3 :) And I include the wrangling code. I defined my own data directory etc. ``` data_file = DATA_DIR + 'results_2014.csv' df = pd.read_csv(data_file, sep=',') # Or whatever it takes to get this into a data frame.
teams = df.home_team.unique() teams = pd.DataFrame(teams, columns=['team']) teams['i'] = teams.index df = pd.merge(df, teams, left_on='home_team', right_on='team', how='left') df = df.rename(columns = {'i': 'i_home'}).drop('team', 1) df = pd.merge(df, teams, left_on='away_team', right_on='team', how='left') df = df.rename(columns = {'i': 'i_away'}).drop('team', 1) observed_home_goals = df.home_score.values observed_away_goals = df.away_score.values home_team = df.i_home.values away_team = df.i_away.values num_teams = len(df.i_home.drop_duplicates()) num_games = len(home_team) g = df.groupby('i_away') att_starting_points = np.log(g.away_score.mean()) g = df.groupby('i_home') def_starting_points = -np.log(g.away_score.mean()) import theano.tensor as T import pymc3 as pm3 #hyperpriors x = att_starting_points.values y = def_starting_points.values model = pm.Model() with pm3.Model() as model: home3 = pm3.Normal('home', 0, .0001) tau_att3 = pm3.Gamma('tau_att', .1, .1) tau_def3 = pm3.Gamma('tau_def', .1, .1) intercept3 = pm3.Normal('intercept', 0, .0001) #team-specific parameters atts_star3 = pm3.Normal("atts_star", mu=0, tau=tau_att3, observed=x) defs_star3 = pm3.Normal("defs_star", mu=0, tau=tau_def3, observed=y) #Seems to be the error here. atts = pm3.Deterministic('regression', atts_star3 - np.mean(atts_star3)) home_theta3 = pm3.Deterministic('regression', T.exp(intercept3 + atts[away_team] + defs[home_team])) atts = pm3.Deterministic('regression', atts_star3 - np.mean(atts_star3)) home_theta3 = pm3.Deterministic('regression', T.exp(intercept3 + atts[away_team] + defs[home_team])) # Unknown model parameters home_points3 = pm3.Poisson('home_points', mu=home_theta3, observed=observed_home_goals) away_points3 = pm3.Poisson('away_points', mu=home_theta3, observed=observed_away_goals) start = pm3.find_MAP() step = pm3.NUTS(state=start) trace = pm3.sample(2000, step, start=start, progressbar=True) pm3.traceplot(trace) ``` And I get an error like values isn't a Theano object. I think this is the .values part above. But i'm confused about how to convert this into a Theano tensor. The tensors are confusing me :) And the error for clarity, because I've misunderstood something in PyMC3 syntax. ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-71-ce51c1a64412> in <module>() 23 24 #Seems to be the error here. ---> 25 atts = pm3.Deterministic('regression', atts_star3 - np.mean(atts_star3)) 26 home_theta3 = pm3.Deterministic('regression', T.exp(intercept3 + atts[away_team] + defs[home_team])) 27 /Users/peadarcoyle/anaconda/lib/python3.4/site-packages/numpy/core/fromnumeric.py in mean(a, axis, dtype, out, keepdims) 2733 2734 return _methods._mean(a, axis=axis, dtype=dtype, -> 2735 out=out, keepdims=keepdims) 2736 2737 def std(a, axis=None, dtype=None, out=None, ddof=0, keepdims=False): /Users/peadarcoyle/anaconda/lib/python3.4/site-packages/numpy/core/_methods.py in _mean(a, axis, dtype, out, keepdims) 71 ret = ret.dtype.type(ret / rcount) 72 else: ---> 73 ret = ret / rcount 74 75 return ret TypeError: unsupported operand type(s) for /: 'ObservedRV' and 'int' ```
2015/06/12
[ "https://Stackoverflow.com/questions/30798447", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2610971/" ]
Here is my translation of your PyMC2 model: ``` import theano.tensor as tt with pm.Model() as model: # global model parameters home = pm.Normal('home', 0, .0001) tau_att = pm.Gamma('tau_att', .1, .1) tau_def = pm.Gamma('tau_def', .1, .1) intercept = pm.Normal('intercept', 0, .0001) # team-specific model parameters atts_star = pm.Normal("atts_star", mu=0, tau=tau_att, shape=num_teams) defs_star = pm.Normal("defs_star", mu=0, tau=tau_def, shape=num_teams) atts = pm.Deterministic('atts', atts_star - tt.mean(atts_star)) defs = pm.Deterministic('defs', defs_star - tt.mean(defs_star)) home_theta = tt.exp(intercept + home + atts[home_team] + defs[away_team]) away_theta = tt.exp(intercept + atts[away_team] + defs[home_team]) # likelihood of observed data home_points = pm.Poisson('home_points', mu=home_theta, observed=observed_home_goals) away_points = pm.Poisson('away_points', mu=away_theta, observed=observed_away_goals) ``` The big difference, as I see it, between PyMC2 and PyMC3 model building is that the whole business of initial values in PyMC2 is not part of model building in PyMC3; it is pushed off into the model-fitting portion of the code. Here is a notebook that puts this model in context with your data and some fitting code: <http://nbviewer.ipython.org/gist/aflaxman/55e23195fe0a0b089103>
So I did this. It isn't a direct port of my previous version but it gives me an answer. Does anyone have any feedback? ``` import os import math import warnings import numpy as np import pandas as pd import matplotlib.pyplot as plt import pymc3 as pm3# I know folks are switching to "as pm" but I'm just not there yet %matplotlib inline import seaborn as sns from IPython.core.pylabtools import figsize import seaborn as sns import theano.tensor as T figsize(12, 12) DATA_DIR = os.path.join(os.getcwd(), 'data/') data_file = DATA_DIR + 'results_2014.csv' df = pd.read_csv(data_file, sep=',') # Or whatever it takes to get this into a data frame. teams = df.home_team.unique() teams = pd.DataFrame(teams, columns=['team']) teams['i'] = teams.index df = pd.merge(df, teams, left_on='home_team', right_on='team', how='left') df = df.rename(columns = {'i': 'i_home'}).drop('team', 1) df = pd.merge(df, teams, left_on='away_team', right_on='team', how='left') df = df.rename(columns = {'i': 'i_away'}).drop('team', 1) observed_home_goals = df.home_score.values observed_away_goals = df.away_score.values home_team = df.i_home.values away_team = df.i_away.values num_teams = len(df.i_home.drop_duplicates()) num_games = len(home_team) g = df.groupby('i_away') att_starting_points = np.log(g.away_score.mean()) g = df.groupby('i_home') def_starting_points = -np.log(g.away_score.mean()) import theano.tensor as T import pymc3 as pm3 #hyperpriors ''' def atts3(atts_star3=atts_star3): atts3 = atts_star.copy() atts3 = atts3 - np.mean(atts_star) return atts3 def defs3(defs_star3=defs_star3): defs3 = defs_star3.copy() defs3 = defs3 - np.mean(defs_star3) return defs ''' model = pm3.Model() with pm3.Model() as model: home3 = pm3.Normal('home', 0, .0001) tau_att3 = pm3.Gamma('tau_att', .1, .1) tau_def3 = pm3.Gamma('tau_def', .1, .1) intercept3 = pm3.Normal('intercept', 0, .0001) #team-specific parameters atts_star3 = pm3.Normal("atts_star", mu=0, tau=tau_att3, shape=num_teams, observed=att_starting_points.values) defs_star3 = pm3.Normal("defs_star", mu=0, tau=tau_def3, shape=num_teams, observed=def_starting_points.values) #home_theta3 = atts3 + defs3 #away_theta3 = atts3 + defs3 # Unknown model parameters home_points3 = pm3.Poisson('home_points', mu=1, observed=observed_home_goals) away_points3 = pm3.Poisson('away_points', mu=1, observed=observed_away_goals) start = pm3.find_MAP() step = pm3.NUTS(state=start) trace = pm3.sample(2000, step, start=start, progressbar=True) pm3.traceplot(trace) ```
61,195,729
I have been working with the Binance websocket. It worked well when the start/stop commands were in the main program. Now I wanted to start and stop the socket through a GUI, so I placed the start and stop commands each in a function. But it doesn't work: there is just no reaction when calling the function. Any idea what the problem is? Here are the relevant parts of my code (I am quite new to Python; any hints on this code are welcome): ``` def start_websocket(conn_key): bm.start() def stop_websocket(conn_key): bm.close() def process_message(msg): currentValues['text']= msg['p'] # --- main --- PUBLIC = '************************' SECRET = '************************' client = Client(api_key=PUBLIC, api_secret=SECRET) bm = BinanceSocketManager(client) conn_key = bm.start_trade_socket('BNBBTC', process_message) # create main window and set its title root = tk.Tk() root.title('Websocket') # create variable for displayed time and use it with Label label = tk.Label(root) label.grid(column=5, row=0) #root.geometry('500x500') bt_start_socket = tk.Button(root, text="Start Websocket", command=start_websocket(conn_key)) bt_start_socket.grid (column=1, row=1) bt_stop_socket = tk.Button(root, text="Sop Websocket", command=stop_websocket(conn_key)) bt_stop_socket.grid (column=1, row=10) ```
2020/04/13
[ "https://Stackoverflow.com/questions/61195729", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13305068/" ]
Whatever user is executing that code does not have permission to write to that file path. If you go to C:\Users\chris\Source\Repos\inventory2.0\PIC_Program_1.0\Content\images\Components, right-click, Properties, Security tab, you will see the users that have permissions and what those permissions are. You can add or edit your users' permissions there.
I think the problem is that your application user doesn't have permission to access the folder. If you are testing this in VS IIS Express, then you should grant permission to your current user. However, if you are receiving this error message from the IIS server, then you should grant permission to the application pool identity (IIS AppPool\apppoolname). Process Monitor can help you fix access-denied errors every time: you just need to create a filter for Result = "access is denied", and it will tell you who and what permissions are required. <https://learn.microsoft.com/en-us/sysinternals/downloads/procmon>
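Separate from the permissions discussion above, one detail in the question's GUI code is worth flagging: `command=start_websocket(conn_key)` calls the handler immediately while the button is being built, binding its return value (`None`) as the command. A minimal tkinter sketch of the usual fix (the key value is a placeholder):

```python
import tkinter as tk

def start_websocket(conn_key):
    print("starting socket with", conn_key)

root = tk.Tk()
# command expects a callable; wrapping the call in a lambda
# defers it until the button is actually clicked.
bt_start = tk.Button(root, text="Start Websocket",
                     command=lambda: start_websocket("demo-key"))
bt_start.grid(column=1, row=1)
root.mainloop()
```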
52,436,084
I have a word list and I need to find the count of words that are present in the string. e.g.: ``` text_string = 'I came, I saw, I conquered!' word_list=['I','saw','Britain'] ``` I require a Python script that prints ``` {'i': 3, 'saw': 1, 'britain': 0} ```
2018/09/21
[ "https://Stackoverflow.com/questions/52436084", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9635284/" ]
You can use a [property accessor](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Property_Accessors) to reference `mutableValue` from accessing the property `a` like this: ```js let mutableValue = 3 const obj = { get a() { return mutableValue } } console.log(obj.a) mutableValue = 4 console.log(obj.a) ```
Objects are reference values, so try: ``` let mutableValue = {aa: 3} const getText = () => mutableValue const obj = {a: getText()} ``` and then run: ``` obj.a // {aa: 3} mutableValue.aa = 4 obj.a // {aa: 4} ```
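For reference, a minimal Python sketch of the count the question itself asks for (case-insensitive, punctuation ignored):

```python
import re
from collections import Counter

text_string = 'I came, I saw, I conquered!'
word_list = ['I', 'saw', 'Britain']

# Counter of lowercased words; missing keys count as 0.
counts = Counter(re.findall(r"[a-z']+", text_string.lower()))
print({w.lower(): counts[w.lower()] for w in word_list})
# {'i': 3, 'saw': 1, 'britain': 0}
```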
52,436,084
I have a word list and I need to find the count of words that are present in the string. e.g.: ``` text_string = 'I came, I saw, I conquered!' word_list=['I','saw','Britain'] ``` I require a Python script that prints ``` {'i': 3, 'saw': 1, 'britain': 0} ```
2018/09/21
[ "https://Stackoverflow.com/questions/52436084", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9635284/" ]
In order to mutate, you have to keep the value in an object, for example: ``` let mutatingObject = { mutableValue: 3 }; const getText = () => mutatingObject; const obj12 = { a: getText() } mutatingObject.mutableValue = 4; console.log(obj12.a); ``` Now see the output: when you update a value inside the object, it mutates. [![enter image description here](https://i.stack.imgur.com/Y7rmM.png)](https://i.stack.imgur.com/Y7rmM.png) This is called **by reference**. In JavaScript, when you create an object, a new address is created in memory that records exactly where the object lives. When you assign Object 1 to another variable, Object 2, both share this common memory location. So updating a property or value on Object 1 also mutates Object 2. Primitive types such as boolean, string and number are **by value**. For example: ``` var a = 1; // a has an address location which knows where that primitive value sits in memory ``` Suppose you assign the value of **a** to another variable called **b**, like below: ``` var b = a; ``` With primitive types, the new variable points to a new address in memory and copies the value of **a** into it. So if you later change a = 2, it will not change the value of b, because the two sit in different memory locations. <https://codeburst.io/explaining-value-vs-reference-in-javascript-647a975e12a0>
Objects are reference values, so try: ``` let mutableValue = {aa: 3} const getText = () => mutableValue const obj = {a: getText()} ``` and then run: ``` obj.a // {aa: 3} mutableValue.aa = 4 obj.a // {aa: 4} ```
58,901,682
First of all I tried the command from their main page that they gave me: ``` pip3 install torch==1.3.1+cpu torchvision==0.4.2+cpu -f https://download.pytorch.org/whl/torch_stable.html ``` Could not find a version that satisfies the requirement torch==1.3.1+cpu (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2) ERROR: No matching distribution found for torch==1.3.1.+cpu After this I decided to pick something available from this list: <https://download.pytorch.org/whl/cpu/stable> So in the end I tried something like this ``` pip3 install torch-1.1.0-cp37-cp37m-win_amd64.whl -f https://download.pytorch.org/whl/torch_stable.html ``` And now it says that this is not a supported wheel on my platform. Wtf? (I use Windows 7, 64-bit Python, an AMD processor.) (Location of Python: C:\Python38, location of pip: C:\Python38\Scripts)
2019/11/17
[ "https://Stackoverflow.com/questions/58901682", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8037832/" ]
There are no wheels for Python 3.8 at <https://download.pytorch.org/whl/torch_stable.html>. > > not supported wheel on my platform > > > This is because the wheel is for Python 3.7. Advice: downgrade to Python 3.7.
Adding to @phd's answer, you could consider [installing from source](https://github.com/pytorch/pytorch#from-source). Note that I have built PyTorch from source in the past (and it was a mostly straightforward process), but I have not done this on Windows or for Python 3.8.
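A small sketch that shows why a `cp37`/`win_amd64` wheel can be rejected: the running interpreter's version and bitness must match the tags in the wheel's filename. All three calls below are standard library:

```python
import platform
import struct
import sys

print(sys.version_info[:2])      # (3, 8) on the asker's setup; cp37 wheels need (3, 7)
print(struct.calcsize("P") * 8)  # 64 for 64-bit Python; win_amd64 wheels need 64
print(platform.system())         # 'Windows' for win_* wheels
```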
58,901,682
First of all I tried the command from their main page that they gave me: ``` pip3 install torch==1.3.1+cpu torchvision==0.4.2+cpu -f https://download.pytorch.org/whl/torch_stable.html ``` Could not find a version that satisfies the requirement torch==1.3.1+cpu (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2) ERROR: No matching distribution found for torch==1.3.1.+cpu After this I decided to pick something available from this list: <https://download.pytorch.org/whl/cpu/stable> So in the end I tried something like this ``` pip3 install torch-1.1.0-cp37-cp37m-win_amd64.whl -f https://download.pytorch.org/whl/torch_stable.html ``` And now it says that this is not a supported wheel on my platform. Wtf? (I use Windows 7, 64-bit Python, an AMD processor.) (Location of Python: C:\Python38, location of pip: C:\Python38\Scripts)
2019/11/17
[ "https://Stackoverflow.com/questions/58901682", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8037832/" ]
There are no wheels for Python 3.8 at <https://download.pytorch.org/whl/torch_stable.html>. > > not supported wheel on my platform > > > This is because the wheel is for Python 3.7. Advice: downgrade to Python 3.7.
On Windows: check whether your system is 32- or 64-bit; check whether your Python is 32- or 64-bit, and match the Python build to your system; check that pip is installed with `pip --version`; install CUDA 10.2 (I didn't check CUDA 11, the latest at the time of writing); then go to pytorch.org, select your options, and copy-paste the generated command into cmd.
58,901,682
First of all I tried the command from their main page that they gave me: ``` pip3 install torch==1.3.1+cpu torchvision==0.4.2+cpu -f https://download.pytorch.org/whl/torch_stable.html ``` Could not find a version that satisfies the requirement torch==1.3.1+cpu (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2) ERROR: No matching distribution found for torch==1.3.1.+cpu After this I decided to pick something available from this list: <https://download.pytorch.org/whl/cpu/stable> So in the end I tried something like this ``` pip3 install torch-1.1.0-cp37-cp37m-win_amd64.whl -f https://download.pytorch.org/whl/torch_stable.html ``` And now it says that this is not a supported wheel on my platform. Wtf? (I use Windows 7, 64-bit Python, an AMD processor.) (Location of Python: C:\Python38, location of pip: C:\Python38\Scripts)
2019/11/17
[ "https://Stackoverflow.com/questions/58901682", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8037832/" ]
There are no wheels for Python 3.8 at <https://download.pytorch.org/whl/torch_stable.html>. > > not supported wheel on my platform > > > This is because the wheel is for Python 3.7. Advice: downgrade to Python 3.7.
``` pip install numpy pip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cu102/torch_nightly.html ```
58,901,682
First of all I tried the command from their main page that they gave me: ``` pip3 install torch==1.3.1+cpu torchvision==0.4.2+cpu -f https://download.pytorch.org/whl/torch_stable.html ``` Could not find a version that satisfies the requirement torch==1.3.1+cpu (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2) ERROR: No matching distribution found for torch==1.3.1.+cpu After this I decided to pick something available from this list: <https://download.pytorch.org/whl/cpu/stable> So in the end I tried something like this ``` pip3 install torch-1.1.0-cp37-cp37m-win_amd64.whl -f https://download.pytorch.org/whl/torch_stable.html ``` And now it says that this is not a supported wheel on my platform. Wtf? (I use Windows 7, 64-bit Python, an AMD processor.) (Location of Python: C:\Python38, location of pip: C:\Python38\Scripts)
2019/11/17
[ "https://Stackoverflow.com/questions/58901682", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8037832/" ]
There are no wheels for Python 3.8 at <https://download.pytorch.org/whl/torch_stable.html>. > > not supported wheel on my platform > > > This is because the wheel is for Python 3.7. Advice: downgrade to Python 3.7.
Also see this issue that currently affects Windows installations (even if you downgrade to Python 3.7): <https://github.com/pytorch/pytorch/issues/54172> TL;DR run this command instead: ``` pip install torch==1.8.0+cu101 torchvision==0.9.0+cu101 torchaudio===0.8.0 -f https://download.pytorch.org/whl/torch_stable.html ``` Note the change from cu102 to cu101 in the command generated by <https://pytorch.org/get-started/locally/>
58,901,682
First of all I tried the command from their main page that they gave me: ``` pip3 install torch==1.3.1+cpu torchvision==0.4.2+cpu -f https://download.pytorch.org/whl/torch_stable.html ``` Could not find a version that satisfies the requirement torch==1.3.1+cpu (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2) ERROR: No matching distribution found for torch==1.3.1.+cpu After this I decided to pick something available from this list: <https://download.pytorch.org/whl/cpu/stable> So in the end I tried something like this ``` pip3 install torch-1.1.0-cp37-cp37m-win_amd64.whl -f https://download.pytorch.org/whl/torch_stable.html ``` And now it says that this is not a supported wheel on my platform. Wtf? (I use Windows 7, 64-bit Python, an AMD processor.) (Location of Python: C:\Python38, location of pip: C:\Python38\Scripts)
2019/11/17
[ "https://Stackoverflow.com/questions/58901682", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8037832/" ]
Adding to @phd's answer, you could consider [installing from source](https://github.com/pytorch/pytorch#from-source). Note that I have built PyTorch from source in the past (and it was a mostly straightforward process), but I have not done this on Windows or for Python 3.8.
On Windows: check whether your system is 32- or 64-bit; check whether your Python is 32- or 64-bit, and match the Python build to your system; check that pip is installed with `pip --version`; install CUDA 10.2 (I didn't check CUDA 11, the latest at the time of writing); then go to pytorch.org, select your options, and copy-paste the generated command into cmd.
58,901,682
First of all I tried the command from their main page that they gave me: ``` pip3 install torch==1.3.1+cpu torchvision==0.4.2+cpu -f https://download.pytorch.org/whl/torch_stable.html ``` Could not find a version that satisfies the requirement torch==1.3.1+cpu (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2) ERROR: No matching distribution found for torch==1.3.1.+cpu After this I decided to pick something available from this list: <https://download.pytorch.org/whl/cpu/stable> So in the end I tried something like this ``` pip3 install torch-1.1.0-cp37-cp37m-win_amd64.whl -f https://download.pytorch.org/whl/torch_stable.html ``` And now it says that this is not a supported wheel on my platform. Wtf? (I use Windows 7, 64-bit Python, an AMD processor.) (Location of Python: C:\Python38, location of pip: C:\Python38\Scripts)
2019/11/17
[ "https://Stackoverflow.com/questions/58901682", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8037832/" ]
Adding to @phd's answer, you could consider [installing from source](https://github.com/pytorch/pytorch#from-source). Note that I have built PyTorch from source in the past (and it was a mostly straightforward process), but I have not done this on Windows or for Python 3.8.
``` pip install numpy pip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cu102/torch_nightly.html ```
58,901,682
First of all I tried the command from their main page that they gave me: ``` pip3 install torch==1.3.1+cpu torchvision==0.4.2+cpu -f https://download.pytorch.org/whl/torch_stable.html ``` Could not find a version that satisfies the requirement torch==1.3.1+cpu (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2) ERROR: No matching distribution found for torch==1.3.1.+cpu After this I decided to pick something available from this list: <https://download.pytorch.org/whl/cpu/stable> So in the end I tried something like this ``` pip3 install torch-1.1.0-cp37-cp37m-win_amd64.whl -f https://download.pytorch.org/whl/torch_stable.html ``` And now it says that this is not a supported wheel on my platform. Wtf? (I use Windows 7, 64-bit Python, an AMD processor.) (Location of Python: C:\Python38, location of pip: C:\Python38\Scripts)
2019/11/17
[ "https://Stackoverflow.com/questions/58901682", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8037832/" ]
Adding to @phd's answer, you could consider [installing from source](https://github.com/pytorch/pytorch#from-source). Note that I have built PyTorch from source in the past (and it was a mostly straightforward process), but I have not done this on Windows or for Python 3.8.
Also see this issue that currently affects Windows installations (even if you downgrade to Python 3.7): <https://github.com/pytorch/pytorch/issues/54172> TL;DR run this command instead: ``` pip install torch==1.8.0+cu101 torchvision==0.9.0+cu101 torchaudio===0.8.0 -f https://download.pytorch.org/whl/torch_stable.html ``` Note the change from cu102 to cu101 in the command generated by <https://pytorch.org/get-started/locally/>
12,938,786
I'm trying to run a SQL query (which works perfectly if I run it on the client) inside my Python script, but I receive the error "not enough arguments for format string". Following is the code: ``` sql = """ SELECT rr.iserver, foo.*, rr.queue_capacity, rr.queue_refill_level, rr.is_concurrent, rr.max_execution_threads, rr.retrieval_status, rr.processing_status FROM ( SELECT DISTINCT ip.package, it. TRIGGER FROM wip.info_package ip, wip.info_trigger it WHERE ip.service = it.service and ip.iserver = '%(iserver)s' and it.iserver = %(iserver)s' AND package = '%(package)s' UNION SELECT '%(package)s' AS package, TRIGGER FROM info_trigger WHERE TRIGGER LIKE '%(package)s%' ) AS foo, info_trigger rr WHERE rr. TRIGGER = foo. TRIGGER """ % {'iserver' : var_iserver,'package' : var_package} dcon = Database_connection() getResults = dcon.db_call(sql, dbHost, dbName, dbUser, dbPass) # more and more code to work the result.... ``` My main problem is how to pass `'%(iserver)s'` and `'%(package)s'` correctly. Usually, when I run selects or inserts on the database, I only use two variables, but I don't know how to do it with more than two. Thanks.
2012/10/17
[ "https://Stackoverflow.com/questions/12938786", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1323826/" ]
``` WHERE TRIGGER LIKE '%(package)s%' ``` you have an EXTRA '%'. If you want the actual character '%', you need to escape it with a double '%'. So it should be ``` WHERE TRIGGER LIKE '%(package)s%%' ``` if you want to match a literal '%', and ``` WHERE TRIGGER LIKE '%(package)s' ``` if you don't.
Don't build SQL like this using `%`: ``` "SELECT %(foo)s FROM bar WHERE %(baz)s" % {"foo": "FOO", "baz": "1=1;-- DROP TABLE bar;"} ``` This opens the door for nasty SQL injection attacks. Use the proper form of your [Python Database API Specification v2.0](http://www.python.org/dev/peps/pep-0249/) adapter. For Psycopg this form is described [here](http://initd.org/psycopg/docs/usage.html#passing-parameters-to-sql-queries). ``` cur.execute("SELECT %(foo)s FROM bar WHERE %(baz)s", {"foo": "FOO", "baz": "1=1;-- DROP TABLE bar;"}) ```
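A minimal sketch of that parameterized style against the question's own schema (the connection string and parameter values are placeholders; note that with psycopg2 a literal `%` in the SQL is written `%%` whenever parameters are passed):

```python
import psycopg2

conn = psycopg2.connect("dbname=wip")  # placeholder DSN
cur = conn.cursor()

params = {'iserver': 'srv01', 'package': 'pkg'}  # placeholder values
# Values travel separately from the SQL text, so quoting and
# stray '%' characters no longer fight with string formatting.
cur.execute(
    "SELECT trigger FROM info_trigger "
    "WHERE iserver = %(iserver)s AND trigger LIKE %(package)s || '%%'",
    params,
)
print(cur.fetchall())
```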
54,311,678
I have a UDP socket application where I am working on the server side. To test the server side I put together a simple Python client program that sends the message "hello world how are you". The server should then receive the message, convert it to uppercase, and send it back to the client. The problem lies here: I can observe while debugging that the server receives the message, applies the conversion, sends the response back, and eventually waits for another message. However, the Python client is not receiving the message but waits endlessly for the response from the server. I found (as one option) on the web that in order for the client to receive a response back it needs to bind to the server, which goes against what I have seen in a textbook (The Linux Programming Interface). Nevertheless, I tried to bind the client to the server and the Python program failed to connect at the binding line (I don't know if I did it correctly). The Python version is 2.7.5. The client program runs on RedHat and the server runs on a target module with Angstrom (it's cross-compiled for a 32-bit processor). Here is the code for the client: ``` import socket import os UDP_IP = "192.168.10.4" UDP_PORT = 50005 #dir_path = os.path.dirname(os.path.realpath(__file__)) MESSAGE = "hello world how are you" print "UDP target IP: ", UDP_IP print "UDP target port: ", UDP_PORT print "message: ", MESSAGE sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) #sock.bind((UDP_IP, UDP_PORT)) print "Sending message..." sock.sendto(MESSAGE, (UDP_IP, UDP_PORT)) print "Message sent!" print "Waiting for response..." data = sock.recv(1024) print "Received", repr(data) ``` And here is the code for the server: ``` void server_side(void) { printf("Server start up.\n"); struct sockaddr_in svaddr; struct sockaddr_in claddr; int sfd; int j; ssize_t numBytes; socklen_t len; char buf[BUF_SIZE]; char claddrStr[INET_ADDRSTRLEN]; //int output = open("test_output.txt", O_WRONLY|O_CREAT, 0664); printf("Creating new UDP socket...\n"); sfd = socket(AF_INET, SOCK_DGRAM, 0); /* Create Server Socket*/ if (sfd == -1) { errExit("socket"); } printf("Socket has been created!\n"); memset(&svaddr, 0, sizeof(struct sockaddr_in)); svaddr.sin_family = AF_INET; svaddr.sin_addr.s_addr = htonl(INADDR_ANY); svaddr.sin_port = htons(PORT_NUM); printf("Binding in process...\n"); if (bind(sfd, (struct sockaddr *) &svaddr, sizeof(struct sockaddr_in)) == -1) { errExit("bind"); } printf("Binded!\n"); /* Receive messages, convert to upper case, and return to client.*/ for(;;) { len = sizeof(struct sockaddr_in); numBytes = recvfrom(sfd, buf, BUF_SIZE, 0, (struct sockaddr *) &claddr, &len); if (numBytes == -1) { errExit("recvfrom"); } if (inet_ntop(AF_INET, &claddr.sin_addr, claddrStr, INET_ADDRSTRLEN) == NULL) { printf("Couldn't convert client address to string.\n"); } else { printf("Server received %ld bytes from (%s, %u).\n", (long) numBytes, claddrStr, ntohs(claddr.sin_port)); } claddr.sin_port = htons(PORT_NUM); for (j = 0; j< numBytes; j++) { buf[j] = toupper((unsigned char) buf[j]); } if (sendto(sfd, buf, numBytes, 0, (struct sockaddr *) &claddr, len) != numBytes) { fatal("sendto"); } } } ``` Again, the problem is that I am not receiving the response and printing the message back on the client terminal. I should receive the same message in all uppercase letters. I feel like I am missing a small detail. Thanks for the help!
2019/01/22
[ "https://Stackoverflow.com/questions/54311678", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5970879/" ]
**Quick and dirty:** Remove this line from your C code: ``` claddr.sin_port = htons(PORT_NUM); ``` **Now why:** When you send a message in your python script, your operating system will fill a [UDP packet](https://en.wikipedia.org/wiki/User_Datagram_Protocol) with the destination IP address and port you specified, the IP address of the machine you are using in the source IP field, and finally, a source port assigned "randomly" by the OS. Do notice, this is not the same port as the destination port, and even if it were, how would you know which program should receive the message? (Both would be listening on the same port.) Luckily, this is not possible. Now, when your C code receives this packet, it will know who sent the message, and you have access to this information through the sockaddr struct filled by recvfrom. If you want to send some information back, you must send the packet with a destination port (as seen by the server) equal to the source port as seen by the client, which again, is not the same port that you are listening on at the server. By doing `claddr.sin_port = htons(PORT_NUM)`, you overwrite the field that contained the source port of the client with the server port, and when you try to send this packet, 2 things may happen: * If the client ran from the same computer, the destination IP and source IP will be the same, and you've just set the destination port to be the port that the server is listening on, so you will have a message loop. * If running on different computers, the packet will be received by the client computer, but there probably won't be any program waiting for messages on that port, so it is discarded. A half-baked analogy: you receive a letter from a friend, but when writing back to him, you replace the number of his house with the number of your house... it does not make much sense. The only difference is that this friend of yours moves a lot, and each letter may have a different number, but that is not important. In theory, you must bind if you want to receive data back; in this case, bind is the equivalent of listening on that port. This answer clarifies why it was not necessary in this case: <https://stackoverflow.com/a/14243544/6253527> If you are on Linux, you can see which port your OS assigned for your UDP socket using `sudo ss -antup | grep python`
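To see the same rule from the Python side, here is a minimal sketch of an equivalent UDP echo server (the port number is taken from the question); the key point is that the `(ip, port)` tuple returned by `recvfrom` is passed back to `sendto` untouched:

```
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 50005))  # only the server needs to bind

while True:
    data, client_addr = sock.recvfrom(1024)
    # client_addr already contains the client's IP and the source port
    # the client's OS picked -- reply to it without modifying anything.
    sock.sendto(data.upper(), client_addr)
```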
N. Dijkhoffz, would love to hear how you fixed it and perhaps post the correct code.
36,965,951
I'm a beginner in python. I'm a bit confused about this basic python program and its output ``` for num in range(2,10): for i in range(2,num): if (num % i) == 0: break else: print(num) ``` output ``` Python 3.5.1 (v3.5.1:37a07cee5969, Dec 6 2015, 01:38:48) [MSC v.1900 32 bit (Intel)] on win32 Type "copyright", "credits" or "license()" for more information. >>> ================= RESTART: C:\Users\ms\Desktop\python\new.py ================= 2 3 5 7 >>> ``` As per the condition ``` if (2 %2) == 0: break ``` then how does 2 get printed to the output? Thanks for helping.
2016/05/01
[ "https://Stackoverflow.com/questions/36965951", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5483135/" ]
You can use `ajax`. **timestamp.php** ``` <?php date_default_timezone_set('YOUR TIMEZONE'); echo $timestamp = date('H:i:s'); ``` **jQuery** ``` $(document).ready(function() { setInterval(timestamp, 1000); }); function timestamp() { $.ajax({ url: 'http://localhost/timestamp.php', success: function(data) { $('#timestamp').html(data); }, }); } ``` **HTML** ``` <div id="timestamp"></div> ```
PHP is a server-side programming language, Javascript is a client-side programming language. The PHP code that fills the variables will only update when the webpage is loaded; after that you are left with Javascript code and nothing more. I recommend you look for a basic programming book that covers concepts such as client-side and server-side code because (not trying to be harsh) you seem to have a big misunderstanding about how those things work.
36,965,951
I'm a beginner in python. I'm a bit confused about this basic python program and its output ``` for num in range(2,10): for i in range(2,num): if (num % i) == 0: break else: print(num) ``` output ``` Python 3.5.1 (v3.5.1:37a07cee5969, Dec 6 2015, 01:38:48) [MSC v.1900 32 bit (Intel)] on win32 Type "copyright", "credits" or "license()" for more information. >>> ================= RESTART: C:\Users\ms\Desktop\python\new.py ================= 2 3 5 7 >>> ``` As per the condition ``` if (2 %2) == 0: break ``` then how does 2 get printed to the output? Thanks for helping.
2016/05/01
[ "https://Stackoverflow.com/questions/36965951", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5483135/" ]
You can use `ajax`. **timestamp.php** ``` <?php date_default_timezone_set('YOUR TIMEZONE'); echo $timestamp = date('H:i:s'); ``` **jQuery** ``` $(document).ready(function() { setInterval(timestamp, 1000); }); function timestamp() { $.ajax({ url: 'http://localhost/timestamp.php', success: function(data) { $('#timestamp').html(data); }, }); } ``` **HTML** ``` <div id="timestamp"></div> ```
I would send the timestamp across from the server just to get a snapshot of the current server time and then let JS take over from there. JS can keep the local clock pretty close to being synced with your server and you could run a new ajax call every x number of minutes/hours to resync with a fresh timestamp. I haven't tested this, but if accuracy isn't an issue this should work pretty well and keep your server work to a minimum. Edit: I actually ended up doing this for a project I'm working on and it works great. You just need to get your timestamp from your server into your JS - since my page is a .php page I'm able to just do this: ``` <h1 id="current-time-now" data-start="<?php echo time() ?>"></h1> ``` And then I can grab that timestamp and use it like this: ``` //get new date from timestamp in data-start attr var freshTime = new Date(parseInt($("#current-time-now").attr("data-start"))*1000); //loop to tick clock every second var func = function myFunc() { //set text of clock to show current time $("#current-time-now").text(freshTime.toLocaleTimeString()); //add a second to freshTime freshTime.setSeconds(freshTime.getSeconds() + 1); //wait for 1 second and go again setTimeout(myFunc, 1000); }; func(); ``` From there you can run ajax calls to get fresh timestamps however often you feel is needed for your project.
58,543,054
I am trying to use pyspark to preprocess data for a prediction model. I get an error when I try spark.createDataFrame on the output of my preprocessing. Is there a way to check what processedRDD looks like before making it into a dataframe? ``` import findspark findspark.init('/usr/local/spark') import pyspark from pyspark.sql import SQLContext import os import pandas as pd import geohash2 sc = pyspark.SparkContext('local', 'sentinel') spark = pyspark.SQLContext(sc) sql = SQLContext(sc) working_dir = os.getcwd() df = sql.createDataFrame(data) df = df.select(['starttime', 'latstart','lonstart', 'latfinish', 'lonfinish', 'trip_type']) df.show(10, False) processedRDD = df.rdd processedRDD = processedRDD \ .map(lambda row: (row, g, b, minutes_per_bin)) \ .map(data_cleaner) \ .filter(lambda row: row != None) print(processedRDD) featuredDf = spark.createDataFrame(processedRDD, ['year', 'month', 'day', 'time_cat', 'time_num', 'time_cos', \ 'time_sin', 'day_cat', 'day_num', 'day_cos', 'day_sin', 'weekend', \ 'x_start', 'y_start', 'z_start','location_start', 'location_end', 'trip_type']) ``` I am getting this error: ``` [Stage 1:> (0 + 1) / 1]2019-10-24 15:37:56 ERROR Executor:91 - Exception in task 0.0 in stage 1.0 (TID 1) raise AppRegistryNotReady("Apps aren't loaded yet.") django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet. at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:452) at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:588) at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:571) at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:406) at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37) at scala.collection.Iterator$class.foreach(Iterator.scala:891) at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28) at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59) at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104) at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48) at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310) at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28) at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302) at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28) at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289) at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28) at org.apache.spark.api.python.PythonRDD$$anonfun$3.apply(PythonRDD.scala:153) at org.apache.spark.api.python.PythonRDD$$anonfun$3.apply(PythonRDD.scala:153) at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101) at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:121) at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ... 1 more ``` I do not understand what this has to do with importing an app
2019/10/24
[ "https://Stackoverflow.com/questions/58543054", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9240223/" ]
I don't know what this script has to do with Django exactly, but adding the following lines at the top of the script will probably fix this issue: ``` import os os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings') import django django.setup() ```
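As a sketch of how the script would continue after that (the project name `myproject` and the model import below are hypothetical placeholders, not taken from the question):

```
import os
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')  # hypothetical project

import django
django.setup()  # populates the app registry before any ORM import

# Only after setup() is it safe to import models used inside the RDD lambdas
from myapp.models import Trip  # hypothetical app/model
print(Trip.objects.count())
```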
Basically, you need to load your settings and populate Django’s application registry before doing anything else. You have all the information required in the Django docs. <https://docs.djangoproject.com/en/2.2/topics/settings/#calling-django-setup-is-required-for-standalone-django-usage>
58,543,054
I am trying to use pyspark to preprocess data for a prediction model. I get an error when I try spark.createDataFrame on the output of my preprocessing. Is there a way to check what processedRDD looks like before making it into a dataframe? ``` import findspark findspark.init('/usr/local/spark') import pyspark from pyspark.sql import SQLContext import os import pandas as pd import geohash2 sc = pyspark.SparkContext('local', 'sentinel') spark = pyspark.SQLContext(sc) sql = SQLContext(sc) working_dir = os.getcwd() df = sql.createDataFrame(data) df = df.select(['starttime', 'latstart','lonstart', 'latfinish', 'lonfinish', 'trip_type']) df.show(10, False) processedRDD = df.rdd processedRDD = processedRDD \ .map(lambda row: (row, g, b, minutes_per_bin)) \ .map(data_cleaner) \ .filter(lambda row: row != None) print(processedRDD) featuredDf = spark.createDataFrame(processedRDD, ['year', 'month', 'day', 'time_cat', 'time_num', 'time_cos', \ 'time_sin', 'day_cat', 'day_num', 'day_cos', 'day_sin', 'weekend', \ 'x_start', 'y_start', 'z_start','location_start', 'location_end', 'trip_type']) ``` I am getting this error: ``` [Stage 1:> (0 + 1) / 1]2019-10-24 15:37:56 ERROR Executor:91 - Exception in task 0.0 in stage 1.0 (TID 1) raise AppRegistryNotReady("Apps aren't loaded yet.") django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet. at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:452) at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:588) at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:571) at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:406) at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37) at scala.collection.Iterator$class.foreach(Iterator.scala:891) at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28) at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59) at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104) at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48) at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310) at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28) at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302) at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28) at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289) at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28) at org.apache.spark.api.python.PythonRDD$$anonfun$3.apply(PythonRDD.scala:153) at org.apache.spark.api.python.PythonRDD$$anonfun$3.apply(PythonRDD.scala:153) at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101) at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:121) at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ... 1 more ``` I do not understand what this has to do with importing an app
2019/10/24
[ "https://Stackoverflow.com/questions/58543054", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9240223/" ]
I don't know what this script has to do with Django exactly, but adding the following lines at the top of the script will probably fix this issue: ``` import os os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings') import django django.setup() ```
Instead of manually running Hadoop I am making a python server which uses pyspark and computes heavy AI algorithms 10 times faster on a Django server. The problem I had came from SPARK_LOCAL_IP: a different IP was used (the one I use to connect to a remote database via sshtunnel). I import and use pyspark. I had to rename a file and add the correct IP. ``` cd /usr/local/spark/conf touch spark-env.sh.template mv -i spark-env.sh.template spark-env.sh nano spark-env.sh paste: SPARK_LOCAL_IP="127.0.1.1" ``` Then I had to add `sc.setLogLevel("ERROR")` to my views.py to see what the real problem was. Debugging Java errors from Python can be problematic sometimes. A column was datetime instead of string and I fixed it.
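If editing `spark-env.sh` is not an option, the same variable can be set from Python before the context is created; a minimal sketch (untested against the exact setup above):

```
import os

# Must be set before the SparkContext (and its JVM) starts up.
os.environ["SPARK_LOCAL_IP"] = "127.0.1.1"

import pyspark

sc = pyspark.SparkContext("local", "sentinel")
sc.setLogLevel("ERROR")  # quiets the Java noise so Python-side errors stand out
```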
149,474
This XML file contained archived news stories for all of last year. I was asked to sort these stories by story categor[y|ies] into new XML files. ``` big_story_export.xml ``` turns into ``` lifestyles.xml food.xml nascar.xml ``` ...and so on. I got the job done using a one-off python script, *however*, **I originally attempted this using XSLT**. This resulted in frustration as my XPATH selections were crapping the bed. Test files were transformed perfectly, but putting the big file up against my style sheet resulted in ...*nothing*. What strategies do you recommend for ensuring that files like this will run through XSLT? *This was handed to me by a vendor, so imagine that I don't have a lot of leverage when it comes to defining the structure of this file.* If you guys want code samples, I'll put some together. If anything, I'd be satisfied with some tips for making XML+XSLT work together smoothly. --- @Sklivvz I was using python's libxml2 & libxslt to process this. I'm looking into xsltproc now. It seems like a good tool for these one-off situations. Thanks! --- @diomidis-spinellis It's well-formed, though (as mentioned) I don't have the means to check its validity. As for writing a Schema, I like the idea. The amount of time I'd invest in getting this one file validated would be impractical if this were a one-time thing, though I foresee having to handle more files like this from our vendor. Writing a schema (and submitting it to the vendor) would be an excellent long-term strategy for managing XML funk like this. Thanks!
2008/09/29
[ "https://Stackoverflow.com/questions/149474", "https://Stackoverflow.com", "https://Stackoverflow.com/users/22491/" ]
This sounds like a bug in the large XML file or the XSLT processor. There are two things you should check on your file. 1. Is the file well-formed XML? That is, are all tags and attributes properly terminated and matched? An XML processor, like [xmlstarlet](http://xmlstar.sourceforge.net/), can tell you that. 2. Does the file contain valid XML? For this you need a schema and an XML validator ([xmlstarlet](http://xmlstar.sourceforge.net/) can do this trick as well). I suggest you invest some effort in writing the schema definition of your file. It will greatly simplify your debugging, because you can then easily pinpoint the exact source of any problems you may be having. If the file is well-formed and valid, but the XSLT processor still refuses to give you the results you would expect, you can be sure that the problem lies in the processor, and you should try a different one.
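Since the original attempt used Python, here is a minimal sketch of check 1 done with the stdlib instead of xmlstarlet; it streams the file, so even a very large export doesn't need to fit in memory at once:

```
import xml.etree.ElementTree as ET

def is_well_formed(path):
    try:
        for _, elem in ET.iterparse(path, events=("end",)):
            elem.clear()  # discard each element to keep memory use flat
        return True
    except ET.ParseError as err:  # reports the line and column of the defect
        print("Not well-formed:", err)
        return False

print(is_well_formed("big_story_export.xml"))
```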
What language/parser were you using? For large files I try to use Unix command line tools. They are usually much, much more efficient than other solutions and don't "crap out" on large files. Try using `xsltproc`.
149,474
This XML file contained archived news stories for all of last year. I was asked to sort these stories by story categor[y|ies] into new XML files. ``` big_story_export.xml ``` turns into ``` lifestyles.xml food.xml nascar.xml ``` ...and so on. I got the job done using a one-off python script, *however*, **I originally attempted this using XSLT**. This resulted in frustration as my XPATH selections were crapping the bed. Test files were transformed perfectly, but putting the big file up against my style sheet resulted in ...*nothing*. What strategies do you recommend for ensuring that files like this will run through XSLT? *This was handed to me by a vendor, so imagine that I don't have a lot of leverage when it comes to defining the structure of this file.* If you guys want code samples, I'll put some together. If anything, I'd be satisfied with some tips for making XML+XSLT work together smoothly. --- @Sklivvz I was using python's libxml2 & libxslt to process this. I'm looking into xsltproc now. It seems like a good tool for these one-off situations. Thanks! --- @diomidis-spinellis It's well-formed, though (as mentioned) I don't have the means to check its validity. As for writing a Schema, I like the idea. The amount of time I'd invest in getting this one file validated would be impractical if this were a one-time thing, though I foresee having to handle more files like this from our vendor. Writing a schema (and submitting it to the vendor) would be an excellent long-term strategy for managing XML funk like this. Thanks!
2008/09/29
[ "https://Stackoverflow.com/questions/149474", "https://Stackoverflow.com", "https://Stackoverflow.com/users/22491/" ]
The problem with using XSLT to process arbitrarily large XML documents is that XSLT processing begins by parsing the input document into a source tree. This tree is built entirely in memory. This means that eventually you'll encounter an input document large enough to cause problems even if you're using a robust XSLT processor like Saxon and you have plenty of virtual memory. (It may still work, but it'll be slow.) Another reason not to use XSLT for this is that you're producing multiple output documents, which (based on what you've said so far) means you're making multiple passes over your input document. It may (depending on a lot of factors about your situation that I don't know about) be better to take a SAX-based approach instead of using XSLT. Using a SAX processor, you may be able to write a method that makes a single, forward-only pass through the source document, parsing it as it goes, and writes all of the output documents as it encounters the elements that contain them.
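A minimal single-pass sketch of that idea in Python, using the stdlib's streaming parser (the `story`/`category` tag names are assumptions about the vendor file, not taken from it):

```
import xml.etree.ElementTree as ET

writers = {}  # one output file per category, opened lazily
for _, elem in ET.iterparse("big_story_export.xml", events=("end",)):
    if elem.tag == "story":  # assumed element name
        cat = (elem.findtext("category") or "uncategorized").lower()
        if cat not in writers:
            writers[cat] = open(cat + ".xml", "wb")
            writers[cat].write(b"<stories>")
        writers[cat].write(ET.tostring(elem))
        elem.clear()  # forward-only pass: free the element immediately
for f in writers.values():
    f.write(b"</stories>")
    f.close()
```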
What language/parser were you using? For large files I try to use Unix command line tools. They are usually much, much more efficient than other solutions and don't "crap out" on large files. Try using `xsltproc`.
149,474
This XML file contained archived news stories for all of last year. I was asked to sort these stories by story categor[y|ies] into new XML files. ``` big_story_export.xml ``` turns into ``` lifestyles.xml food.xml nascar.xml ``` ...and so on. I got the job done using a one-off python script, *however*, **I originally attempted this using XSLT**. This resulted in frustration as my XPATH selections were crapping the bed. Test files were transformed perfectly, but putting the big file up against my style sheet resulted in ...*nothing*. What strategies do you recommend for ensuring that files like this will run through XSLT? *This was handed to me by a vendor, so imagine that I don't have a lot of leverage when it comes to defining the structure of this file.* If you guys want code samples, I'll put some together. If anything, I'd be satisfied with some tips for making XML+XSLT work together smoothly. --- @Sklivvz I was using python's libxml2 & libxslt to process this. I'm looking into xsltproc now. It seems like a good tool for these one-off situations. Thanks! --- @diomidis-spinellis It's well-formed, though (as mentioned) I don't have the means to check its validity. As for writing a Schema, I like the idea. The amount of time I'd invest in getting this one file validated would be impractical if this were a one-time thing, though I foresee having to handle more files like this from our vendor. Writing a schema (and submitting it to the vendor) would be an excellent long-term strategy for managing XML funk like this. Thanks!
2008/09/29
[ "https://Stackoverflow.com/questions/149474", "https://Stackoverflow.com", "https://Stackoverflow.com/users/22491/" ]
What language/parser were you using? For large files I try to use Unix command line tools. They are usually much, much more efficient than other solutions and don't "crap out" on large files. Try using `xsltproc`.
Check out Apache's [Xalan C++](http://xml.apache.org/xalan-c/index.html). In my experience, where others (including Saxon) have failed on "large" XML files (>600 MB), this was able to run with memory to spare.
149,474
This XML file contained archived news stories for all of last year. I was asked to sort these stories by story categor[y|ies] into new XML files. ``` big_story_export.xml ``` turns into ``` lifestyles.xml food.xml nascar.xml ``` ...and so on. I got the job done using a one-off python script, *however*, **I originally attempted this using XSLT**. This resulted in frustration as my XPATH selections were crapping the bed. Test files were transformed perfectly, but putting the big file up against my style sheet resulted in ...*nothing*. What strategies do you recommend for ensuring that files like this will run through XSLT? *This was handed to me by a vendor, so imagine that I don't have a lot of leverage when it comes to defining the structure of this file.* If you guys want code samples, I'll put some together. If anything, I'd be satisfied with some tips for making XML+XSLT work together smoothly. --- @Sklivvz I was using python's libxml2 & libxslt to process this. I'm looking into xsltproc now. It seems like a good tool for these one-off situations. Thanks! --- @diomidis-spinellis It's well-formed, though (as mentioned) I don't have the means to check its validity. As for writing a Schema, I like the idea. The amount of time I'd invest in getting this one file validated would be impractical if this were a one-time thing, though I foresee having to handle more files like this from our vendor. Writing a schema (and submitting it to the vendor) would be an excellent long-term strategy for managing XML funk like this. Thanks!
2008/09/29
[ "https://Stackoverflow.com/questions/149474", "https://Stackoverflow.com", "https://Stackoverflow.com/users/22491/" ]
This sounds like a bug in the large XML file or the XSLT processor. There are two things you should check on your file. 1. Is the file well-formed XML? That is, are all tags and attributes properly terminated and matched? An XML processor, like [xmlstarlet](http://xmlstar.sourceforge.net/), can tell you that. 2. Does the file contain valid XML? For this you need a schema and an XML validator ([xmlstarlet](http://xmlstar.sourceforge.net/) can do this trick as well). I suggest you invest some effort in writing the schema definition of your file. It will greatly simplify your debugging, because you can then easily pinpoint the exact source of any problems you may be having. If the file is well-formed and valid, but the XSLT processor still refuses to give you the results you would expect, you can be sure that the problem lies in the processor, and you should try a different one.
Can I recommend the Saxon XSLT processor - I know for a fact it can handle large files, provided you give the Java JVM enough memory. Another thing is that there may be optimisations in your XSLT that could help, but it's hard to make blanket statements about things like that.
149,474
This XML file contained archived news stories for all of last year. I was asked to sort these stories by story categor[y|ies] into new XML files. ``` big_story_export.xml ``` turns into ``` lifestyles.xml food.xml nascar.xml ``` ...and so on. I got the job done using a one-off python script, *however*, **I originally attempted this using XSLT**. This resulted in frustration as my XPATH selections were crapping the bed. Test files were transformed perfectly, but putting the big file up against my style sheet resulted in ...*nothing*. What strategies do you recommend for ensuring that files like this will run through XSLT? *This was handed to me by a vendor, so imagine that I don't have a lot of leverage when it comes to defining the structure of this file.* If you guys want code samples, I'll put some together. If anything, I'd be satisfied with some tips for making XML+XSLT work together smoothly. --- @Sklivvz I was using python's libxml2 & libxslt to process this. I'm looking into xsltproc now. It seems like a good tool for these one-off situations. Thanks! --- @diomidis-spinellis It's well-formed, though (as mentioned) I don't have the means to check its validity. As for writing a Schema, I like the idea. The amount of time I'd invest in getting this one file validated would be impractical if this were a one-time thing, though I foresee having to handle more files like this from our vendor. Writing a schema (and submitting it to the vendor) would be an excellent long-term strategy for managing XML funk like this. Thanks!
2008/09/29
[ "https://Stackoverflow.com/questions/149474", "https://Stackoverflow.com", "https://Stackoverflow.com/users/22491/" ]
This sounds like a bug in the large XML file or the XSLT processor. There are two things you should check on your file. 1. Is the file well-formed XML? That is, are all tags and attributes properly terminated and matched? An XML processor, like [xmlstarlet](http://xmlstar.sourceforge.net/), can tell you that. 2. Does the file contain valid XML? For this you need a schema and an XML validator ([xmlstarlet](http://xmlstar.sourceforge.net/) can do this trick as well). I suggest you invest some effort in writing the schema definition of your file. It will greatly simplify your debugging, because you can then easily pinpoint the exact source of any problems you may be having. If the file is well-formed and valid, but the XSLT processor still refuses to give you the results you would expect, you can be sure that the problem lies in the processor, and you should try a different one.
Check out Apache's [Xalan C++](http://xml.apache.org/xalan-c/index.html). In my experience, where others (including Saxon) have failed on "large" XML files (>600 MB), this was able to run with memory to spare.
149,474
This XML file contained archived news stories for all of last year. I was asked to sort these stories by story categor[y|ies] into new XML files. ``` big_story_export.xml ``` turns into ``` lifestyles.xml food.xml nascar.xml ``` ...and so on. I got the job done using a one-off python script, *however*, **I originally attempted this using XSLT**. This resulted in frustration as my XPATH selections were crapping the bed. Test files were transformed perfectly, but putting the big file up against my style sheet resulted in ...*nothing*. What strategies do you recommend for ensuring that files like this will run through XSLT? *This was handed to me by a vendor, so imagine that I don't have a lot of leverage when it comes to defining the structure of this file.* If you guys want code samples, I'll put some together. If anything, I'd be satisfied with some tips for making XML+XSLT work together smoothly. --- @Sklivvz I was using python's libxml2 & libxslt to process this. I'm looking into xsltproc now. It seems like a good tool for these one-off situations. Thanks! --- @diomidis-spinellis It's well-formed, though (as mentioned) I don't have the means to check its validity. As for writing a Schema, I like the idea. The amount of time I'd invest in getting this one file validated would be impractical if this were a one-time thing, though I foresee having to handle more files like this from our vendor. Writing a schema (and submitting it to the vendor) would be an excellent long-term strategy for managing XML funk like this. Thanks!
2008/09/29
[ "https://Stackoverflow.com/questions/149474", "https://Stackoverflow.com", "https://Stackoverflow.com/users/22491/" ]
The problem with using XSLT to process arbitrarily large XML documents is that XSLT processing begins by parsing the input document into a source tree. This tree is built entirely in memory. This means that eventually you'll encounter an input document large enough to cause problems even if you're using a robust XSLT processor like Saxon and you have plenty of virtual memory. (It may still work, but it'll be slow.) Another reason not to use XSLT for this is that you're producing multiple output documents, which (based on what you've said so far) means you're making multiple passes over your input document. It may (depending on a lot of factors about your situation that I don't know about) be better to take a SAX-based approach instead of using XSLT. Using a SAX processor, you may be able to write a method that makes a single, forward-only pass through the source document, parsing it as it goes, and writes all of the output documents as it encounters the elements that contain them.
Can I recommend the Saxon XSLT processor - I know for a fact it can handle large files, provided you give the Java JVM enough memory. Another thing is that there may be optimisations in your XSLT that could help, but it's hard to make blanket statements about things like that.
149,474
This XML file contained archived news stories for all of last year. I was asked to sort these stories by story categor[y|ies] into new XML files. ``` big_story_export.xml ``` turns into ``` lifestyles.xml food.xml nascar.xml ``` ...and so on. I got the job done using a one-off python script, *however*, **I originally attempted this using XSLT**. This resulted in frustration as my XPATH selections were crapping the bed. Test files were transformed perfectly, but putting the big file up against my style sheet resulted in ...*nothing*. What strategies do you recommend for ensuring that files like this will run through XSLT? *This was handed to me by a vendor, so imagine that I don't have a lot of leverage when it comes to defining the structure of this file.* If you guys want code samples, I'll put some together. If anything, I'd be satisfied with some tips for making XML+XSLT work together smoothly. --- @Sklivvz I was using python's libxml2 & libxslt to process this. I'm looking into xsltproc now. It seems like a good tool for these one-off situations. Thanks! --- @diomidis-spinellis It's well-formed, though (as mentioned) I don't have the means to check its validity. As for writing a Schema, I like the idea. The amount of time I'd invest in getting this one file validated would be impractical if this were a one-time thing, though I foresee having to handle more files like this from our vendor. Writing a schema (and submitting it to the vendor) would be an excellent long-term strategy for managing XML funk like this. Thanks!
2008/09/29
[ "https://Stackoverflow.com/questions/149474", "https://Stackoverflow.com", "https://Stackoverflow.com/users/22491/" ]
Can I recommend the Saxon XSLT processor - I know for a fact it can handle large files, provided you give the Java JVM enough memory. Another thing is that there may be optimisations in your XSLT that could help, but it's hard to make blanket statements about things like that.
Check out Apache's [Xalan C++](http://xml.apache.org/xalan-c/index.html). In my experience, where others (including Saxon) have failed on "large" XML files (>600 MB), this was able to run with memory to spare.
149,474
This XML file contained archived news stories for all of last year. I was asked to sort these stories by story categor[y|ies] into new XML files. ``` big_story_export.xml ``` turns into ``` lifestyles.xml food.xml nascar.xml ``` ...and so on. I got the job done using a one-off python script, *however*, **I originally attempted this using XSLT**. This resulted in frustration as my XPATH selections were crapping the bed. Test files were transformed perfectly, but putting the big file up against my style sheet resulted in ...*nothing*. What strategies do you recommend for ensuring that files like this will run through XSLT? *This was handed to me by a vendor, so imagine that I don't have a lot of leverage when it comes to defining the structure of this file.* If you guys want code samples, I'll put some together. If anything, I'd be satisfied with some tips for making XML+XSLT work together smoothly. --- @Sklivvz I was using python's libxml2 & libxslt to process this. I'm looking into xsltproc now. It seems like a good tool for these one-off situations. Thanks! --- @diomidis-spinellis It's well-formed, though (as mentioned) I don't have the means to check its validity. As for writing a Schema, I like the idea. The amount of time I'd invest in getting this one file validated would be impractical if this were a one-time thing, though I foresee having to handle more files like this from our vendor. Writing a schema (and submitting it to the vendor) would be an excellent long-term strategy for managing XML funk like this. Thanks!
2008/09/29
[ "https://Stackoverflow.com/questions/149474", "https://Stackoverflow.com", "https://Stackoverflow.com/users/22491/" ]
The problem with using XSLT to process arbitrarily large XML documents is that XSLT processing begins by parsing the input document into a source tree. This tree is built entirely in memory. This means that eventually you'll encounter an input document large enough to cause problems even if you're using a robust XSLT processor like Saxon and you have plenty of virtual memory. (It may still work, but it'll be slow.) Another reason not to use XSLT for this is that you're producing multiple output documents, which (based on what you've said so far) means you're making multiple passes over your input document. It may (depending on a lot of factors about your situation that I don't know about) be better to take a SAX-based approach instead of using XSLT. Using a SAX processor, you may be able to write a method that makes a single, forward-only pass through the source document, parsing it as it goes, and writes all of the output documents as it encounters the elements that contain them.
Check out Apache's [Xalan C++](http://xml.apache.org/xalan-c/index.html). In my experience, where others (including Saxon) have failed on "large" XML files (>600 MB), this was able to run with memory to spare.
56,921,192
I have created a text file using file operations in python. I want the file to be pushed to my existing GitLab repository. I have tried the below code, where I get the created file in my local folders. ``` file_path = 'E:\My material\output.txt' k= 'Fail/Pass' with open (file_path, 'w+') as text: text.write('Test case :' +k) text.close() ``` What steps, or what modifications to file_path, are needed to push the created text file to the GitLab repository through python code?
2019/07/07
[ "https://Stackoverflow.com/questions/56921192", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7547718/" ]
You can use `.loc` and column names in the following way: ``` import pandas as pd import numpy as np np.random.seed(12) df = pd.DataFrame( { "df0" : np.random.choice(["a", "b"], 100), "df1" : np.random.randint(0, 15, 100), "df2" : np.random.randint(0, 15, 100), "df3" : np.random.randint(0, 15, 100), "df4" : np.random.randint(0, 15, 100), } ) print(df.head()) l = [2, 3, 1, 4] df.loc[:, ["df1", "df2", "df3", "df4"]] *= np.array(l) df.head() ``` Here is the output: ``` df0 df1 df2 df3 df4 0 b 5 10 7 13 1 b 3 2 13 3 2 a 5 0 11 14 3 b 11 1 7 10 4 b 0 4 1 12 df0 df1 df2 df3 df4 0 b 10 30 7 52 1 b 6 6 13 12 2 a 10 0 11 56 3 b 22 3 7 40 4 b 0 12 1 48 ```
I think you were doing it correctly; you just need to select all of the columns you want to multiply ``` df.iloc[:,1:] = df.iloc[:,1:]*l ```
56,921,192
I have created a text file using file operations in python. I want the file to be pushed to my existing GitLab repository. I have tried the below code, where I get the created file in my local folders. ``` file_path = 'E:\My material\output.txt' k= 'Fail/Pass' with open (file_path, 'w+') as text: text.write('Test case :' +k) text.close() ``` What steps, or what modifications to file_path, are needed to push the created text file to the GitLab repository through python code?
2019/07/07
[ "https://Stackoverflow.com/questions/56921192", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7547718/" ]
You can use `.loc` and column names in the following way: ``` import pandas as pd import numpy as np np.random.seed(12) df = pd.DataFrame( { "df0" : np.random.choice(["a", "b"], 100), "df1" : np.random.randint(0, 15, 100), "df2" : np.random.randint(0, 15, 100), "df3" : np.random.randint(0, 15, 100), "df4" : np.random.randint(0, 15, 100), } ) print(df.head()) l = [2, 3, 1, 4] df.loc[:, ["df1", "df2", "df3", "df4"]] *= np.array(l) df.head() ``` Here is the output: ``` df0 df1 df2 df3 df4 0 b 5 10 7 13 1 b 3 2 13 3 2 a 5 0 11 14 3 b 11 1 7 10 4 b 0 4 1 12 df0 df1 df2 df3 df4 0 b 10 30 7 52 1 b 6 6 13 12 2 a 10 0 11 56 3 b 22 3 7 40 4 b 0 12 1 48 ```
Use [`DataFrame.iloc`](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iloc.html) and multiply all columns except the first: ``` l = [2, 3, 1, 4] df.iloc[:, 1:] *= l print (df) Name c1 c2 c3 c4 0 a1 2 6 2 12 1 a2 4 3 1 8 2 a3 6 3 2 4 3 a4 4 9 3 16 ```
56,921,192
I have created a text file using file operations in python. I want the file to be pushed to my existing GitLab repository. I have tried the below code, where I get the created file in my local folders. ``` file_path = 'E:\My material\output.txt' k= 'Fail/Pass' with open (file_path, 'w+') as text: text.write('Test case :' +k) text.close() ``` What steps, or what modifications to file_path, are needed to push the created text file to the GitLab repository through python code?
2019/07/07
[ "https://Stackoverflow.com/questions/56921192", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7547718/" ]
I think you were doing it correctly; you just need to select all of the columns you want to multiply ``` df.iloc[:,1:] = df.iloc[:,1:]*l ```
Use [`DataFrame.iloc`](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iloc.html) and multiply all columns except the first: ``` l = [2, 3, 1, 4] df.iloc[:, 1:] *= l print (df) Name c1 c2 c3 c4 0 a1 2 6 2 12 1 a2 4 3 1 8 2 a3 6 3 2 4 3 a4 4 9 3 16 ```
31,745,613
I have the below mysql table. I need to pull out the first two rows as a dictionary using python. I am using python 2.7. ``` C1 C2 C3 C4 C5 C6 C7 25 33 76 87 56 76 47 67 94 90 56 77 32 84 53 66 24 93 33 88 99 73 34 52 85 67 82 77 ``` I use the following code ``` exp = MySQLdb.connect(host,port,user,passwd,db) exp_cur = exp.cursor(MySQLdb.cursors.DictCursor) exp_cur.execute("SELECT * FROM table;") data = exp_cur.fetchone() data_keys = data.keys() #print data_keys ``` The expected output (data\_keys) is ``` ['C1', 'C2', 'C3', 'C4', 'C5', 'C6', 'C7'] ``` But I get ``` ['C1', 'C3', 'C2', 'C5', 'C4', 'C7', 'C6'] ``` What is the mistake in my code?
2015/07/31
[ "https://Stackoverflow.com/questions/31745613", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5070767/" ]
[`dict` keys have no easily predictable order](https://stackoverflow.com/q/4458169/190597). To obtain the database table fields in the order in which they appear in the database, use the cursor's [description attribute](https://www.python.org/dev/peps/pep-0249/#description): ``` fields = [item[0] for item in cursor.description] ``` --- For example, ``` import MySQLdb import MySQLdb.cursors as cursors import config connection = MySQLdb.connect( host=config.HOST, user=config.USER, passwd=config.PASS, db=config.MYDB, cursorclass=cursors.DictCursor) with connection as cursor: cursor.execute('DROP TABLE IF EXISTS test') cursor.execute("""CREATE TABLE test (foo int, bar int, baz int)""") cursor.execute("""INSERT INTO test (foo, bar, baz) VALUES (%s,%s,%s)""", (1,2,3)) cursor.execute('SELECT * FROM test') data = cursor.fetchone() fields = [item[0] for item in cursor.description] ``` `data.keys()` may return the fields in any order: ``` print(data.keys()) # ['baz', 'foo', 'bar'] ``` But `fields` is always `('foo', 'bar', 'baz')`: ``` print(fields) # ('foo', 'bar', 'baz') ```
Instead of ``` data_keys = data.keys() ``` Try: ``` data_keys = exp_cur.column_names ``` Source: [10.5.11 Property MySQLCursor.column\_names](http://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-column-names.html)
31,745,613
I have the below mysql table. I need to pull out the first two rows as a dictionary using python. I am using python 2.7. ``` C1 C2 C3 C4 C5 C6 C7 25 33 76 87 56 76 47 67 94 90 56 77 32 84 53 66 24 93 33 88 99 73 34 52 85 67 82 77 ``` I use the following code ``` exp = MySQLdb.connect(host,port,user,passwd,db) exp_cur = exp.cursor(MySQLdb.cursors.DictCursor) exp_cur.execute("SELECT * FROM table;") data = exp_cur.fetchone() data_keys = data.keys() #print data_keys ``` The expected output (data\_keys) is ``` ['C1', 'C2', 'C3', 'C4', 'C5', 'C6', 'C7'] ``` But I get ``` ['C1', 'C3', 'C2', 'C5', 'C4', 'C7', 'C6'] ``` What is the mistake in my code?
2015/07/31
[ "https://Stackoverflow.com/questions/31745613", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5070767/" ]
Instead of ``` data_keys = data.keys() ``` Try: ``` data_keys = exp_cur.column_names ``` Source: [10.5.11 Property MySQLCursor.column\_names](http://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-column-names.html)
While creating the cursor, pass the argument `dictionary=True` (note: this option comes from the mysql-connector-python API; MySQLdb cursors don't accept it). Example: ``` exp = MySQLdb.connect(host,port,user,passwd,db) exp_cur = exp.cursor(dictionary=True) ``` Now when you fetch the data, you will get a dictionary as the result.
31,745,613
I have the below mysql table. I need to pull out the first two rows as a dictionary using python. I am using python 2.7. ``` C1 C2 C3 C4 C5 C6 C7 25 33 76 87 56 76 47 67 94 90 56 77 32 84 53 66 24 93 33 88 99 73 34 52 85 67 82 77 ``` I use the following code ``` exp = MySQLdb.connect(host,port,user,passwd,db) exp_cur = exp.cursor(MySQLdb.cursors.DictCursor) exp_cur.execute("SELECT * FROM table;") data = exp_cur.fetchone() data_keys = data.keys() #print data_keys ``` The expected output (data\_keys) is ``` ['C1', 'C2', 'C3', 'C4', 'C5', 'C6', 'C7'] ``` But I get ``` ['C1', 'C3', 'C2', 'C5', 'C4', 'C7', 'C6'] ``` What is the mistake in my code?
2015/07/31
[ "https://Stackoverflow.com/questions/31745613", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5070767/" ]
[`dict` keys have no easily predictable order](https://stackoverflow.com/q/4458169/190597). To obtain the database table fields in the order in which they appear in the database, use the cursor's [description attribute](https://www.python.org/dev/peps/pep-0249/#description): ``` fields = [item[0] for item in cursor.description] ``` --- For example, ``` import MySQLdb import MySQLdb.cursors as cursors import config connection = MySQLdb.connect( host=config.HOST, user=config.USER, passwd=config.PASS, db=config.MYDB, cursorclass=cursors.DictCursor) with connection as cursor: cursor.execute('DROP TABLE IF EXISTS test') cursor.execute("""CREATE TABLE test (foo int, bar int, baz int)""") cursor.execute("""INSERT INTO test (foo, bar, baz) VALUES (%s,%s,%s)""", (1,2,3)) cursor.execute('SELECT * FROM test') data = cursor.fetchone() fields = [item[0] for item in cursor.description] ``` `data.keys()` may return the fields in any order: ``` print(data.keys()) # ['baz', 'foo', 'bar'] ``` But `fields` is always `('foo', 'bar', 'baz')`: ``` print(fields) # ('foo', 'bar', 'baz') ```
While creating the cursor, pass the argument `dictionary=True` (note: this option comes from the mysql-connector-python API; MySQLdb cursors don't accept it). Example: ``` exp = MySQLdb.connect(host,port,user,passwd,db) exp_cur = exp.cursor(dictionary=True) ``` Now when you fetch the data, you will get a dictionary as the result.
67,384,831
Using this option in python it is possible to calculate the mean from multiple csv files. If file1.csv through file100.csv are all in the same directory, you can use this Python script: ``` #!/usr/bin/env python3 N = 100 mean_sum = 0 std_sum = 0 for i in range(1, N + 1): with open(f"file{i}.csv") as f: mean_sum += float(f.readline().split(",")[1]) std_sum += float(f.readline().split(",")[1]) print(f"Mean of means: {mean_sum / N}") print(f"Mean of stds: {std_sum / N}") ``` How can this be done in R?
2021/05/04
[ "https://Stackoverflow.com/questions/67384831", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14961922/" ]
Try the following **Solution 1, create a new axios instance in your plugins folder:** ``` export default function ({ $axios }, inject) { // Create a custom axios instance const api = $axios.create({ headers: { // headers you need } }) // Inject to context as $api inject('api', api) } ``` Declare this plugin in nuxt.config.js, then you can send your request: ``` this.$api.$put(...) ``` **Solution 2, declare axios as a plugin in plugins/axios.js and set the headers according to the request url:** ``` export default function({ $axios, redirect, app }) { const apiS3BaseUrl = // Your s3 base url here $axios.onRequest(config => { if (config.url.includes(apiS3BaseUrl)) { setToken(false) // Or delete $axios.defaults.headers.common['Authorization'] } else { // Your current axios config here } }); } ``` Declare this plugin in nuxt.config.js Personally I use the first solution; that way it doesn't matter if the s3 url changes someday. Here is the [doc](https://axios.nuxtjs.org/extend)
You can pass the below configuration to `nuxt-auth`. Beware, those `plugins` are not related to the root configuration, but related to the `nuxt-auth` package. `nuxt.config.js` ```js auth: { redirect: { login: '/login', home: '/', logout: '/login', callback: false, }, strategies: { ... }, plugins: ['~/plugins/config-file-for-nuxt-auth.js'], }, ``` Then, create a plugin file that will serve as configuration for `@nuxt/auth` (you need to have `@nuxt/axios` installed, of course). PS: in this file, `exampleBaseUrlForAxios` is used as an example to set the variable for the axios calls while using `@nuxt/auth`. `config-file-for-nuxt-auth.js` ```js export default ({ $axios, $config: { exampleBaseUrlForAxios } }) => { $axios.defaults.baseURL = exampleBaseUrlForAxios // I guess that any usual axios configuration can be done here } ``` This is the recommended way of doing things as explained in this [article](https://nuxtjs.org/blog/moving-from-nuxtjs-dotenv-to-runtime-config/). Basically, you can pass runtime variables to your project when you're using this. Hence, here we are passing an `EXAMPLE_BASE_URL_FOR_AXIOS` variable (located in `.env`) and renaming it to a name that we wish to use in our project. `nuxt.config.js` ```js export default { publicRuntimeConfig: { exampleBaseUrlForAxios: process.env.EXAMPLE_BASE_URL_FOR_AXIOS, } } ```
35,387,277
Is there a way in python with selenium to select an option from a drop down menu by its position instead of by value or name? For example, select option 1, or select option 2. This is because there's a possibility that the value or text of a drop down menu option can change, so to ensure an option is selected I just want to say "select the first option" (regardless of what it is), or, as another example, "select the fifth option", etc. Below is the code I have using value to select an option, which will be a problem if the value changes in the future: ``` pax_one_bags = Select(driver.find_element_by_id("ctl00_MainContent_passengerList_PassengerGridView_ctl02_baggageOutDropDown")) pax_one_bags.select_by_value("2") pax_two_bags = Select(driver.find_element_by_id("ctl00_MainContent_passengerList_PassengerGridView_ctl03_baggageOutDropDown")) pax_two_bags.select_by_value("5") ```
2016/02/14
[ "https://Stackoverflow.com/questions/35387277", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1096892/" ]
If this represents the way you have been trying to match output, it's your problem: ``` while(reader.readLine() != "\u001B") {} ``` Except in special cases, you have to use the `equals()` method on `String` instances: ``` while (true) { String line = reader.readLine(); if ((line == null) || "\u001B".equals(line)) break; } ``` I'm not sure why you expect `ESC` and a newline when a process exits though.
I believe you need to call the Process.waitFor() method. So you need something like: ``` Process p = build.start(); p.waitFor(); ``` If you are trying to simulate a bash shell (allowing input of a command, executing it, and processing the output without terminating), there is an open source project that may be a good reference for code on how to do this. It is available on GitHub: take a look at the [Jediterm](https://github.com/JetBrains/jediterm) pure Java terminal emulator. Thinking about simulating a bash shell, I also found this example on [piping between processes](https://blog.art-of-coding.eu/piping-between-processes/) to be relevant. It shows how to extract the output of an executing process and pipe that data as the input into another Java Process. Should be helpful.
60,959,688
I have a python2 script I want to run with the [pwntools python module](https://github.com/Gallopsled/pwntools) and I tried running it using: > > python test.py > > > But then I get: > > File "test.py", line 3, in > from pwn import \* > ImportError: No module named pwn > > > But when I try it with python3, it gets past that error but it runs into other errors because it's a python2 script. Why does pwntools not work when I run it with python2 and can I get my script to run without porting the whole thing to python3?
2020/03/31
[ "https://Stackoverflow.com/questions/60959688", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12941653/" ]
**Yes**, it's absolutely possible to include a `JavaScript` object in `makeStyles`, thanks to the `spread` operator. > > Advice is to spread over the object first, so that you can easily override any styles. > > > ```js const useStyles = makeStyles(theme => ({ textField: { ...stylesFromDatabase, // object width: "100%", color: "green", // this would override "red" (easier fine tuning) }, })); ```
For the benefit of future posters: the code in my original post worked perfectly, I just had something overriding it later! (Without the callback function it was undefined.) – H Capello
65,370,140
Thanks for looking into this. I have a python program for which I need to have `process_tweet` and `build_freqs` for some NLP task. `nltk` is installed already and `utils` **wasn't**, so I installed it via `pip install utils`, but the two functions mentioned above apparently weren't included. The error I got is the standard one: ``` ImportError: cannot import name 'process_tweet' from 'utils' (C:\Python\lib\site-packages\utils\__init__.py) ``` What have I done wrong, or is there anything missing? Also, I referred to [this stackoverflow answer](https://stackoverflow.com/questions/37096364/python-importerror-cannot-import-name-utils) but it didn't help.
2020/12/19
[ "https://Stackoverflow.com/questions/65370140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11779635/" ]
You can easily access any source code with ??, for example in this case: process\_tweet?? (the code below is from the deeplearning.ai NLP course's custom utils library): ``` import re import string from nltk.corpus import stopwords from nltk.stem import PorterStemmer from nltk.tokenize import TweetTokenizer def process_tweet(tweet): """Process tweet function. Input: tweet: a string containing a tweet Output: tweets_clean: a list of words containing the processed tweet """ stemmer = PorterStemmer() stopwords_english = stopwords.words('english') # remove stock market tickers like $GE tweet = re.sub(r'\$\w*', '', tweet) # remove old style retweet text "RT" tweet = re.sub(r'^RT[\s]+', '', tweet) # remove hyperlinks tweet = re.sub(r'https?:\/\/.*[\r\n]*', '', tweet) # remove hashtags # only removing the hash # sign from the word tweet = re.sub(r'#', '', tweet) # tokenize tweets tokenizer = TweetTokenizer(preserve_case=False, strip_handles=True, reduce_len=True) tweet_tokens = tokenizer.tokenize(tweet) tweets_clean = [] for word in tweet_tokens: if (word not in stopwords_english and # remove stopwords word not in string.punctuation): # remove punctuation # tweets_clean.append(word) stem_word = stemmer.stem(word) # stemming word tweets_clean.append(stem_word) return tweets_clean ```
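A quick sanity check once the function is defined (the tweet text is just an illustrative example, and the exact tokens depend on your nltk data):

```
print(process_tweet("RT @user: My first #NLP pipeline :) https://t.co/xyz"))
# roughly: ['first', 'nlp', 'pipelin', ':)']
```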
Try this code; it should work: ``` import re import string from nltk.corpus import stopwords from nltk.stem import PorterStemmer from nltk.tokenize import TweetTokenizer def process_tweet(tweet): stemmer = PorterStemmer() stopwords_english = stopwords.words('english') tweet = re.sub(r'\$\w*', '', tweet) tweet = re.sub(r'^RT[\s]+', '', tweet) tweet = re.sub(r'https?:\/\/.*[\r\n]*', '', tweet) tweet = re.sub(r'#', '', tweet) tokenizer = TweetTokenizer(preserve_case=False, strip_handles=True,reduce_len=True) tweet_tokens = tokenizer.tokenize(tweet) tweets_clean = [] for word in tweet_tokens: if (word not in stopwords_english and word not in string.punctuation): stem_word = stemmer.stem(word) # stemming word tweets_clean.append(stem_word) return tweets_clean ```
65,370,140
Thanks for looking into this. I have a python program for which I need to have `process_tweet` and `build_freqs` for some NLP task. `nltk` is installed already and `utils` **wasn't**, so I installed it via `pip install utils`, but the two functions mentioned above apparently weren't included. The error I got is the standard one: ``` ImportError: cannot import name 'process_tweet' from 'utils' (C:\Python\lib\site-packages\utils\__init__.py) ``` What have I done wrong, or is there anything missing? Also, I referred to [this stackoverflow answer](https://stackoverflow.com/questions/37096364/python-importerror-cannot-import-name-utils) but it didn't help.
2020/12/19
[ "https://Stackoverflow.com/questions/65370140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11779635/" ]
Try this code; it should work: ``` import re import string from nltk.corpus import stopwords from nltk.stem import PorterStemmer from nltk.tokenize import TweetTokenizer def process_tweet(tweet): stemmer = PorterStemmer() stopwords_english = stopwords.words('english') tweet = re.sub(r'\$\w*', '', tweet) tweet = re.sub(r'^RT[\s]+', '', tweet) tweet = re.sub(r'https?:\/\/.*[\r\n]*', '', tweet) tweet = re.sub(r'#', '', tweet) tokenizer = TweetTokenizer(preserve_case=False, strip_handles=True,reduce_len=True) tweet_tokens = tokenizer.tokenize(tweet) tweets_clean = [] for word in tweet_tokens: if (word not in stopwords_english and word not in string.punctuation): stem_word = stemmer.stem(word) # stemming word tweets_clean.append(stem_word) return tweets_clean ```
I guess you don't need to use `process_tweet` at all. The code in the course is just a shortcut that summarizes everything you do from the beginning through the stemming step; hence, skip that step and simply print out `tweet_stem` to see the difference between the original text and the preprocessed text.
65,370,140
Thanks for looking into this. I have a Python program for which I need `process_tweet` and `build_freqs` for an NLP task. `nltk` is already installed and `utils` **wasn't**, so I installed it via `pip install utils`, but the two functions mentioned above apparently weren't installed. The error I got is the standard one:

```
ImportError: cannot import name 'process_tweet' from 'utils' (C:\Python\lib\site-packages\utils\__init__.py)
```

What have I done wrong, or is there something missing? I also referred to [this Stack Overflow answer](https://stackoverflow.com/questions/37096364/python-importerror-cannot-import-name-utils), but it didn't help.
2020/12/19
[ "https://Stackoverflow.com/questions/65370140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11779635/" ]
Try this code; it should work:

```
import re
import string

from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import TweetTokenizer


def process_tweet(tweet):
    stemmer = PorterStemmer()
    stopwords_english = stopwords.words('english')
    tweet = re.sub(r'\$\w*', '', tweet)
    tweet = re.sub(r'^RT[\s]+', '', tweet)
    tweet = re.sub(r'https?:\/\/.*[\r\n]*', '', tweet)
    tweet = re.sub(r'#', '', tweet)
    tokenizer = TweetTokenizer(preserve_case=False, strip_handles=True,
                               reduce_len=True)
    tweet_tokens = tokenizer.tokenize(tweet)

    tweets_clean = []
    for word in tweet_tokens:
        if (word not in stopwords_english and
                word not in string.punctuation):
            stem_word = stemmer.stem(word)  # stemming word
            tweets_clean.append(stem_word)

    return tweets_clean
```
You can try this.

```
import re
import string

from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import TweetTokenizer


def preprocess_tweet(tweet):
    # cleaning
    tweet = re.sub(r'^RT[\s]+', '', tweet)
    tweet = re.sub(r'https?://[^\s\n\r]+', '', tweet)
    tweet = re.sub(r'#', '', tweet)
    tweet = re.sub(r'@', '', tweet)

    # tokenization
    token = TweetTokenizer(preserve_case=False, strip_handles=True,
                           reduce_len=True)
    tweet_tokenized = token.tokenize(tweet)

    # stop words
    stopwords_english = stopwords.words('english')
    tweet_processed = []
    for word in tweet_tokenized:
        if (word not in stopwords_english and
                word not in string.punctuation):
            tweet_processed.append(word)

    # stemming
    tweet_stem = []
    stem = PorterStemmer()
    for word in tweet_processed:
        stem_word = stem.stem(word)
        tweet_stem.append(stem_word)

    return tweet_stem
```

**Input and Output** [![Input and Expected Output](https://i.stack.imgur.com/Mkis9.png)](https://i.stack.imgur.com/Mkis9.png)
65,370,140
Thanks for looking into this. I have a Python program for which I need `process_tweet` and `build_freqs` for an NLP task. `nltk` is already installed and `utils` **wasn't**, so I installed it via `pip install utils`, but the two functions mentioned above apparently weren't installed. The error I got is the standard one:

```
ImportError: cannot import name 'process_tweet' from 'utils' (C:\Python\lib\site-packages\utils\__init__.py)
```

What have I done wrong, or is there something missing? I also referred to [this Stack Overflow answer](https://stackoverflow.com/questions/37096364/python-importerror-cannot-import-name-utils), but it didn't help.
2020/12/19
[ "https://Stackoverflow.com/questions/65370140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11779635/" ]
You can easily access any source code with `??`; for example, in this case: `process_tweet??` (the code below is from the deeplearning.ai NLP course's custom `utils` library):

```
def process_tweet(tweet):
    """Process tweet function.
    Input:
        tweet: a string containing a tweet
    Output:
        tweets_clean: a list of words containing the processed tweet
    """
    stemmer = PorterStemmer()
    stopwords_english = stopwords.words('english')
    # remove stock market tickers like $GE
    tweet = re.sub(r'\$\w*', '', tweet)
    # remove old style retweet text "RT"
    tweet = re.sub(r'^RT[\s]+', '', tweet)
    # remove hyperlinks
    tweet = re.sub(r'https?:\/\/.*[\r\n]*', '', tweet)
    # remove hashtags (only removing the hash # sign from the word)
    tweet = re.sub(r'#', '', tweet)
    # tokenize tweets
    tokenizer = TweetTokenizer(preserve_case=False, strip_handles=True,
                               reduce_len=True)
    tweet_tokens = tokenizer.tokenize(tweet)

    tweets_clean = []
    for word in tweet_tokens:
        if (word not in stopwords_english and    # remove stopwords
                word not in string.punctuation):  # remove punctuation
            stem_word = stemmer.stem(word)  # stemming word
            tweets_clean.append(stem_word)

    return tweets_clean
```
If you are following the NLP course on deeplearning.ai, then I believe the `utils.py` file was created by the instructors of that course for use within the lab sessions, and it shouldn't be confused with the unrelated `utils` package you get from `pip install utils`.
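As a minimal sketch of the intended setup (assuming you save the course's `utils.py` into the same directory as your own script; the sample tweet is just a placeholder):

```
# your_script.py, placed next to the course's utils.py
from utils import process_tweet, build_freqs  # now resolves to the course file, not the PyPI package

print(process_tweet("RT @user: great #NLP course!"))
```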
65,370,140
Thanks for looking into this. I have a Python program for which I need `process_tweet` and `build_freqs` for an NLP task. `nltk` is already installed and `utils` **wasn't**, so I installed it via `pip install utils`, but the two functions mentioned above apparently weren't installed. The error I got is the standard one:

```
ImportError: cannot import name 'process_tweet' from 'utils' (C:\Python\lib\site-packages\utils\__init__.py)
```

What have I done wrong, or is there something missing? I also referred to [this Stack Overflow answer](https://stackoverflow.com/questions/37096364/python-importerror-cannot-import-name-utils), but it didn't help.
2020/12/19
[ "https://Stackoverflow.com/questions/65370140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11779635/" ]
If you are following the NLP course on deeplearning.ai, then I believe the `utils.py` file was created by the instructors of that course for use within the lab sessions, and it shouldn't be confused with the unrelated `utils` package you get from `pip install utils`.
I guess you don't need to use `process_tweet` at all. The code in the course is just a shortcut that summarizes everything you do from the beginning through the stemming step; hence, skip that step and simply print out `tweet_stem` to see the difference between the original text and the preprocessed text.
65,370,140
Thanks for looking into this. I have a Python program for which I need `process_tweet` and `build_freqs` for an NLP task. `nltk` is already installed and `utils` **wasn't**, so I installed it via `pip install utils`, but the two functions mentioned above apparently weren't installed. The error I got is the standard one:

```
ImportError: cannot import name 'process_tweet' from 'utils' (C:\Python\lib\site-packages\utils\__init__.py)
```

What have I done wrong, or is there something missing? I also referred to [this Stack Overflow answer](https://stackoverflow.com/questions/37096364/python-importerror-cannot-import-name-utils), but it didn't help.
2020/12/19
[ "https://Stackoverflow.com/questions/65370140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11779635/" ]
If you are following the NLP course on deeplearning.ai, then I believe the `utils.py` file was created by the instructors of that course for use within the lab sessions, and it shouldn't be confused with the unrelated `utils` package you get from `pip install utils`.
You can try this.

```
import re
import string

from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import TweetTokenizer


def preprocess_tweet(tweet):
    # cleaning
    tweet = re.sub(r'^RT[\s]+', '', tweet)
    tweet = re.sub(r'https?://[^\s\n\r]+', '', tweet)
    tweet = re.sub(r'#', '', tweet)
    tweet = re.sub(r'@', '', tweet)

    # tokenization
    token = TweetTokenizer(preserve_case=False, strip_handles=True,
                           reduce_len=True)
    tweet_tokenized = token.tokenize(tweet)

    # stop words
    stopwords_english = stopwords.words('english')
    tweet_processed = []
    for word in tweet_tokenized:
        if (word not in stopwords_english and
                word not in string.punctuation):
            tweet_processed.append(word)

    # stemming
    tweet_stem = []
    stem = PorterStemmer()
    for word in tweet_processed:
        stem_word = stem.stem(word)
        tweet_stem.append(stem_word)

    return tweet_stem
```

**Input and Output** [![Input and Expected Output](https://i.stack.imgur.com/Mkis9.png)](https://i.stack.imgur.com/Mkis9.png)
65,370,140
Thanks for looking into this. I have a Python program for which I need `process_tweet` and `build_freqs` for an NLP task. `nltk` is already installed and `utils` **wasn't**, so I installed it via `pip install utils`, but the two functions mentioned above apparently weren't installed. The error I got is the standard one:

```
ImportError: cannot import name 'process_tweet' from 'utils' (C:\Python\lib\site-packages\utils\__init__.py)
```

What have I done wrong, or is there something missing? I also referred to [this Stack Overflow answer](https://stackoverflow.com/questions/37096364/python-importerror-cannot-import-name-utils), but it didn't help.
2020/12/19
[ "https://Stackoverflow.com/questions/65370140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11779635/" ]
You can easily access any source code with `??`; for example, in this case: `process_tweet??` (the code below is from the deeplearning.ai NLP course's custom `utils` library):

```
def process_tweet(tweet):
    """Process tweet function.
    Input:
        tweet: a string containing a tweet
    Output:
        tweets_clean: a list of words containing the processed tweet
    """
    stemmer = PorterStemmer()
    stopwords_english = stopwords.words('english')
    # remove stock market tickers like $GE
    tweet = re.sub(r'\$\w*', '', tweet)
    # remove old style retweet text "RT"
    tweet = re.sub(r'^RT[\s]+', '', tweet)
    # remove hyperlinks
    tweet = re.sub(r'https?:\/\/.*[\r\n]*', '', tweet)
    # remove hashtags (only removing the hash # sign from the word)
    tweet = re.sub(r'#', '', tweet)
    # tokenize tweets
    tokenizer = TweetTokenizer(preserve_case=False, strip_handles=True,
                               reduce_len=True)
    tweet_tokens = tokenizer.tokenize(tweet)

    tweets_clean = []
    for word in tweet_tokens:
        if (word not in stopwords_english and    # remove stopwords
                word not in string.punctuation):  # remove punctuation
            stem_word = stemmer.stem(word)  # stemming word
            tweets_clean.append(stem_word)

    return tweets_clean
```
I guess you don't need to use `process_tweet` at all. The code in the course is just a shortcut that summarizes everything you do from the beginning through the stemming step; hence, skip that step and simply print out `tweet_stem` to see the difference between the original text and the preprocessed text.
65,370,140
Thanks for looking into this. I have a Python program for which I need `process_tweet` and `build_freqs` for an NLP task. `nltk` is already installed and `utils` **wasn't**, so I installed it via `pip install utils`, but the two functions mentioned above apparently weren't installed. The error I got is the standard one:

```
ImportError: cannot import name 'process_tweet' from 'utils' (C:\Python\lib\site-packages\utils\__init__.py)
```

What have I done wrong, or is there something missing? I also referred to [this Stack Overflow answer](https://stackoverflow.com/questions/37096364/python-importerror-cannot-import-name-utils), but it didn't help.
2020/12/19
[ "https://Stackoverflow.com/questions/65370140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11779635/" ]
You can easily access any source code with `??`; for example, in this case: `process_tweet??` (the code below is from the deeplearning.ai NLP course's custom `utils` library):

```
def process_tweet(tweet):
    """Process tweet function.
    Input:
        tweet: a string containing a tweet
    Output:
        tweets_clean: a list of words containing the processed tweet
    """
    stemmer = PorterStemmer()
    stopwords_english = stopwords.words('english')
    # remove stock market tickers like $GE
    tweet = re.sub(r'\$\w*', '', tweet)
    # remove old style retweet text "RT"
    tweet = re.sub(r'^RT[\s]+', '', tweet)
    # remove hyperlinks
    tweet = re.sub(r'https?:\/\/.*[\r\n]*', '', tweet)
    # remove hashtags (only removing the hash # sign from the word)
    tweet = re.sub(r'#', '', tweet)
    # tokenize tweets
    tokenizer = TweetTokenizer(preserve_case=False, strip_handles=True,
                               reduce_len=True)
    tweet_tokens = tokenizer.tokenize(tweet)

    tweets_clean = []
    for word in tweet_tokens:
        if (word not in stopwords_english and    # remove stopwords
                word not in string.punctuation):  # remove punctuation
            stem_word = stemmer.stem(word)  # stemming word
            tweets_clean.append(stem_word)

    return tweets_clean
```
You can try this.

```
import re
import string

from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import TweetTokenizer


def preprocess_tweet(tweet):
    # cleaning
    tweet = re.sub(r'^RT[\s]+', '', tweet)
    tweet = re.sub(r'https?://[^\s\n\r]+', '', tweet)
    tweet = re.sub(r'#', '', tweet)
    tweet = re.sub(r'@', '', tweet)

    # tokenization
    token = TweetTokenizer(preserve_case=False, strip_handles=True,
                           reduce_len=True)
    tweet_tokenized = token.tokenize(tweet)

    # stop words
    stopwords_english = stopwords.words('english')
    tweet_processed = []
    for word in tweet_tokenized:
        if (word not in stopwords_english and
                word not in string.punctuation):
            tweet_processed.append(word)

    # stemming
    tweet_stem = []
    stem = PorterStemmer()
    for word in tweet_processed:
        stem_word = stem.stem(word)
        tweet_stem.append(stem_word)

    return tweet_stem
```

**Input and Output** [![Input and Expected Output](https://i.stack.imgur.com/Mkis9.png)](https://i.stack.imgur.com/Mkis9.png)
65,370,140
Thanks for looking into this. I have a Python program for which I need `process_tweet` and `build_freqs` for an NLP task. `nltk` is already installed and `utils` **wasn't**, so I installed it via `pip install utils`, but the two functions mentioned above apparently weren't installed. The error I got is the standard one:

```
ImportError: cannot import name 'process_tweet' from 'utils' (C:\Python\lib\site-packages\utils\__init__.py)
```

What have I done wrong, or is there something missing? I also referred to [this Stack Overflow answer](https://stackoverflow.com/questions/37096364/python-importerror-cannot-import-name-utils), but it didn't help.
2020/12/19
[ "https://Stackoverflow.com/questions/65370140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11779635/" ]
I guess you don't need to use `process_tweet` at all. The code in the course is just a shortcut that summarizes everything you do from the beginning through the stemming step; hence, skip that step and simply print out `tweet_stem` to see the difference between the original text and the preprocessed text.
You can try this.

```
import re
import string

from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import TweetTokenizer


def preprocess_tweet(tweet):
    # cleaning
    tweet = re.sub(r'^RT[\s]+', '', tweet)
    tweet = re.sub(r'https?://[^\s\n\r]+', '', tweet)
    tweet = re.sub(r'#', '', tweet)
    tweet = re.sub(r'@', '', tweet)

    # tokenization
    token = TweetTokenizer(preserve_case=False, strip_handles=True,
                           reduce_len=True)
    tweet_tokenized = token.tokenize(tweet)

    # stop words
    stopwords_english = stopwords.words('english')
    tweet_processed = []
    for word in tweet_tokenized:
        if (word not in stopwords_english and
                word not in string.punctuation):
            tweet_processed.append(word)

    # stemming
    tweet_stem = []
    stem = PorterStemmer()
    for word in tweet_processed:
        stem_word = stem.stem(word)
        tweet_stem.append(stem_word)

    return tweet_stem
```

**Input and Output** [![Input and Expected Output](https://i.stack.imgur.com/Mkis9.png)](https://i.stack.imgur.com/Mkis9.png)
25,916,444
I would like to test, using unittest, a method which reads from a file using a context manager: ``` with open(k_file, 'r') as content_file: content = content_file.read() ``` I don't want to have to create a file on my system so I wanted to mock it, but I'm not suceeding much at the moment. I've found [mock\_open](http://www.voidspace.org.uk/python/mock/helpers.html#mock-open) but I don't really understand how I'm supposed to use it and feed the mock as content\_file in my test case. There is for instance this [post](https://stackoverflow.com/a/19663055/914086) here, but I do not understand how one is supposed to write this in a test case without modifying the original code. Could anyone point me in the right direction?
2014/09/18
[ "https://Stackoverflow.com/questions/25916444", "https://Stackoverflow.com", "https://Stackoverflow.com/users/914086/" ]
`mock_open()` is the way to go; you patch `open` in your code-under-test with the result of a `mock_open()` call: ``` mocked_open = unittest.mock.mock_open(read_data='file contents\nas needed\n') with unittest.mock.patch('yourmodule.open', mocked_open, create=True): # tests calling your code; the open function will use the mocked_open object ``` The [`patch()` context manager](http://www.voidspace.org.uk/python/mock/patch.html#patch) will put a `open()` global into your module (I named it `yourmodule`), bound to the `mocked_open()`-produced object. This object will pretend to produce a file object when called. The only thing this mock file object *won't* do yet is iteration; you cannot do `for line in content_file` with it, at least not in current versions of the `mock` library. See [Customizing unittest.mock.mock\_open for iteration](https://stackoverflow.com/questions/24779893/customizing-unittest-mock-mock-open-for-iteration) for a work-around.
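For newer code, a self-contained Python 3 sketch of the same idea might look like this; here the function under test lives in the test module itself, so patching `builtins.open` is enough (in real code you would patch `open` in the module that calls it), and the file name `k_file` is just the question's placeholder:

```
import unittest
from unittest import mock


def read_content(path):
    with open(path, 'r') as content_file:
        return content_file.read()


class ReadContentTest(unittest.TestCase):
    def test_read_content_returns_file_text(self):
        m = mock.mock_open(read_data='some file contents')
        # On Python 3, patching builtins.open intercepts the open() call above.
        with mock.patch('builtins.open', m):
            self.assertEqual(read_content('k_file'), 'some file contents')
        m.assert_called_once_with('k_file', 'r')


if __name__ == '__main__':
    unittest.main()
```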
An alternative is [pyfakefs](http://github.com/jmcgeheeiv/pyfakefs). It allows you to create a fake file system, write and read files, set permissions and more without ever touching your real disk. It also contains a practical example and tutorial showing how to apply pyfakefs to both unittest and doctest.
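A minimal sketch of the pyfakefs approach, assuming a reasonably recent pyfakefs release (where the fake filesystem is exposed as `self.fs`):

```
from pyfakefs.fake_filesystem_unittest import TestCase


class FakeFileTest(TestCase):
    def setUp(self):
        self.setUpPyfakefs()  # from here on, the open() builtin hits the fake filesystem

    def test_read_fake_file(self):
        self.fs.create_file('/data/k_file', contents='fake contents')
        with open('/data/k_file', 'r') as content_file:
            self.assertEqual(content_file.read(), 'fake contents')
```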
26,679,011
I am trying to use mpl\_toolkits.basemap on python and everytime I use a function for plotting like drawcoastlines() or any other, the program automatically shows the plot on the screen. My problem is that I am trying to use those programs later on an external server and it returns 'SystemExit: Unable to access the X Display, is $DISPLAY set properly?' Is there any way I can avoid the plot to be shown when I use a Basemap function on it? I just want to save it to a file so later I can read it externally. My code is: ``` from mpl_toolkits.basemap import Basemap import numpy as np m = Basemap(projection='robin',lon_0=0) m.drawcoastlines() #m.fillcontinents(color='coral',lake_color='aqua') # draw parallels and meridians. m.drawparallels(np.arange(-90.,120.,10.)) m.drawmeridians(np.arange(0.,360.,60.)) ```
2014/10/31
[ "https://Stackoverflow.com/questions/26679011", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3510686/" ]
Use the `Agg` backend, it doesn't require a graphical environment: Do this at the very beginning of your script: ``` import matplotlib as mpl mpl.use('Agg') ``` See also the FAQ on [Generate images without having a window appear](http://matplotlib.org/faq/howto_faq.html#generate-images-without-having-a-window-appear).
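Applied to the question's script, the whole thing might look like the sketch below (`map.png` and the dpi are arbitrary choices):

```
import matplotlib as mpl
mpl.use('Agg')  # select the non-interactive backend before pyplot is imported

from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import numpy as np

m = Basemap(projection='robin', lon_0=0)
m.drawcoastlines()
# draw parallels and meridians
m.drawparallels(np.arange(-90., 120., 10.))
m.drawmeridians(np.arange(0., 360., 60.))
plt.savefig('map.png', dpi=150)  # written to disk; no window is opened
```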
The easiest way is to turn off matplotlib's interactive mode.

```
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import numpy as np

# do not show the plot interactively
plt.ioff()

m = Basemap(projection='robin', lon_0=0)
m.drawcoastlines()
#m.fillcontinents(color='coral', lake_color='aqua')
# draw parallels and meridians.
m.drawparallels(np.arange(-90., 120., 10.))
m.drawmeridians(np.arange(0., 360., 60.))
plt.savefig('map.png')  # write the figure to a file instead of showing it
```
32,567,357
I've been using [remote\_api](https://cloud.google.com/appengine/articles/remote_api) (Python) to access the datastore on GAE. I usually do `remote_api_shell.py -s <mydomain>`. Today I tried it and it fails; the error is:

> oauth2client.client.ApplicationDefaultCredentialsError: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE\_APPLICATION\_CREDENTIALS must be defined pointing to a file defining the credentials. See <https://developers.google.com/accounts/docs/application-default-credentials> for more information.

I cannot understand why it asks me that. The whole output is this:

```
stefano@~/gc$ remote_api_shell.py -s ....
Traceback (most recent call last):
  File "/usr/local/bin/remote_api_shell.py", line 133, in <module>
    run_file(__file__, globals())
  File "/usr/local/bin/remote_api_shell.py", line 129, in run_file
    execfile(_PATHS.script_file(script_name), globals_)
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/remote_api_shell.py", line 157, in <module>
    main(sys.argv)
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/remote_api_shell.py", line 153, in main
    appengine_rpc.HttpRpcServer)
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/remote_api_shell.py", line 74, in remote_api_shell
    secure=secure)
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/remote_api/remote_api_stub.py", line 734, in ConfigureRemoteApiForOAuth
    credentials = client.GoogleCredentials.get_application_default()
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/oauth2client/oauth2client/client.py", line 1204, in get_application_default
    return GoogleCredentials._get_implicit_credentials()
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/oauth2client/oauth2client/client.py", line 1194, in _get_implicit_credentials
    raise ApplicationDefaultCredentialsError(ADC_HELP_MSG)
oauth2client.client.ApplicationDefaultCredentialsError: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
```
2015/09/14
[ "https://Stackoverflow.com/questions/32567357", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1257185/" ]
You could try implementing [SignalR](http://www.asp.net/signalr/overview/deployment/tutorial-signalr-self-host). It is a great library that uses web sockets to push data to clients. Edit: SignalR can help you solve your problem by allowing you to set up Hubs on your console app (server) that the WPF applications (clients) can connect to. When the clients start up, you will register them with a specified Hub. When something changes on the server, you can push from the server Hub to the client. The client will receive the information from the server, and you can handle it as you see fit. Rough mockup of some code:

```
namespace Server {
    public class YourHub : Hub {
        public void SomeHubMethod(string userName) {
            // clientMethodToCall is a method in the WPF application that
            // will be called. The client needs to be registered with the hub first.
            Clients.User(userName).clientMethodToCall("This is a test.");

            // One issue you may face is mapping client connections.
            // There are a couple of different ways/methodologies to do this.
            // Just figure out what will work best for you.
        }
    }
}

namespace Client {
    public class HubService {
        public IHubProxy CreateHubProxy() {
            var hubConnection = new HubConnection("http://serverAddress:serverPort/");
            IHubProxy yourHubProxy = hubConnection.CreateHubProxy("YourHub");
            return yourHubProxy;
        }
    }
}
```

Then in your WPF window:

```
var hubService = new HubService();
var yourHubProxy = hubService.CreateHubProxy();
yourHubProxy.Start().Wait();
yourHubProxy.On("clientMethodToCall", () => DoSomethingWithServerData());
```
You need to create some kind of subscription model for the clients to the server to handle a Publish-Subscribe channel (see <http://www.enterpriseintegrationpatterns.com/patterns/messaging/PublishSubscribeChannel.html>). The basic architecture is this:

1. A client sends a request to the messaging channel to register itself as a subscriber to a certain kind of message/event/etc.
2. The server sends messages to the channel to be delivered to the subscribers to that message.

There are many ways to handle this. You could use some of the Azure services (like Event Hub or Topic) if you don't want to reinvent the wheel here. You could also have your server application track all of these things: updates to IP addresses, updates to subscription interest, making sure that messages don't get sent more than once, and taking care of message durability (making sure messages get delivered even if the client is offline when the message is created).
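To make the pattern concrete, here is a single-process Python sketch of such a channel; it is purely illustrative, since a real deployment would put a broker (for example an Azure Topic) between the publisher and the subscribers:

```
from collections import defaultdict


class Channel:
    """In-memory stand-in for a publish-subscribe messaging channel."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # Step 1: a client registers interest in a kind of message.
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Step 2: the server sends a message; the channel fans it out.
        for callback in self._subscribers[topic]:
            callback(message)


channel = Channel()
channel.subscribe("user.updated", lambda msg: print("client A got:", msg))
channel.subscribe("user.updated", lambda msg: print("client B got:", msg))
channel.publish("user.updated", {"id": 42})
```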
32,567,357
I've been using [remote\_api](https://cloud.google.com/appengine/articles/remote_api) (Python) to access the datastore on GAE. I usually do `remote_api_shell.py -s <mydomain>`. Today I tried it and it fails; the error is:

> oauth2client.client.ApplicationDefaultCredentialsError: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE\_APPLICATION\_CREDENTIALS must be defined pointing to a file defining the credentials. See <https://developers.google.com/accounts/docs/application-default-credentials> for more information.

I cannot understand why it asks me that. The whole output is this:

```
stefano@~/gc$ remote_api_shell.py -s ....
Traceback (most recent call last):
  File "/usr/local/bin/remote_api_shell.py", line 133, in <module>
    run_file(__file__, globals())
  File "/usr/local/bin/remote_api_shell.py", line 129, in run_file
    execfile(_PATHS.script_file(script_name), globals_)
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/remote_api_shell.py", line 157, in <module>
    main(sys.argv)
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/remote_api_shell.py", line 153, in main
    appengine_rpc.HttpRpcServer)
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/remote_api_shell.py", line 74, in remote_api_shell
    secure=secure)
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/remote_api/remote_api_stub.py", line 734, in ConfigureRemoteApiForOAuth
    credentials = client.GoogleCredentials.get_application_default()
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/oauth2client/oauth2client/client.py", line 1204, in get_application_default
    return GoogleCredentials._get_implicit_credentials()
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/oauth2client/oauth2client/client.py", line 1194, in _get_implicit_credentials
    raise ApplicationDefaultCredentialsError(ADC_HELP_MSG)
oauth2client.client.ApplicationDefaultCredentialsError: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
```
2015/09/14
[ "https://Stackoverflow.com/questions/32567357", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1257185/" ]
You could try implementing [SignalR](http://www.asp.net/signalr/overview/deployment/tutorial-signalr-self-host). It is a great library that uses web sockets to push data to clients. Edit: SignalR can help you solve your problem by allowing you to set up Hubs on your console app (server) that the WPF applications (clients) can connect to. When the clients start up, you will register them with a specified Hub. When something changes on the server, you can push from the server Hub to the client. The client will receive the information from the server, and you can handle it as you see fit. Rough mockup of some code:

```
namespace Server {
    public class YourHub : Hub {
        public void SomeHubMethod(string userName) {
            // clientMethodToCall is a method in the WPF application that
            // will be called. The client needs to be registered with the hub first.
            Clients.User(userName).clientMethodToCall("This is a test.");

            // One issue you may face is mapping client connections.
            // There are a couple of different ways/methodologies to do this.
            // Just figure out what will work best for you.
        }
    }
}

namespace Client {
    public class HubService {
        public IHubProxy CreateHubProxy() {
            var hubConnection = new HubConnection("http://serverAddress:serverPort/");
            IHubProxy yourHubProxy = hubConnection.CreateHubProxy("YourHub");
            return yourHubProxy;
        }
    }
}
```

Then in your WPF window:

```
var hubService = new HubService();
var yourHubProxy = hubService.CreateHubProxy();
yourHubProxy.Start().Wait();
yourHubProxy.On("clientMethodToCall", () => DoSomethingWithServerData());
```
Sounds like you want to track users à la <https://www.simple-talk.com/dotnet/asp.net/tracking-online-users-with-signalr/> , but in a desktop app in the sense of <http://www.codeproject.com/Articles/804770/Implementing-SignalR-in-Desktop-Applications> or damienbod.wordpress.com/2013/11/20/signalr-a-complete-wpf-client-using-mvvm/ .
32,567,357
I've been using [remote\_api](https://cloud.google.com/appengine/articles/remote_api) (Python) to access the datastore on GAE. I usually do `remote_api_shell.py -s <mydomain>`. Today I tried it and it fails; the error is:

> oauth2client.client.ApplicationDefaultCredentialsError: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE\_APPLICATION\_CREDENTIALS must be defined pointing to a file defining the credentials. See <https://developers.google.com/accounts/docs/application-default-credentials> for more information.

I cannot understand why it asks me that. The whole output is this:

```
stefano@~/gc$ remote_api_shell.py -s ....
Traceback (most recent call last):
  File "/usr/local/bin/remote_api_shell.py", line 133, in <module>
    run_file(__file__, globals())
  File "/usr/local/bin/remote_api_shell.py", line 129, in run_file
    execfile(_PATHS.script_file(script_name), globals_)
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/remote_api_shell.py", line 157, in <module>
    main(sys.argv)
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/remote_api_shell.py", line 153, in main
    appengine_rpc.HttpRpcServer)
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/remote_api_shell.py", line 74, in remote_api_shell
    secure=secure)
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/remote_api/remote_api_stub.py", line 734, in ConfigureRemoteApiForOAuth
    credentials = client.GoogleCredentials.get_application_default()
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/oauth2client/oauth2client/client.py", line 1204, in get_application_default
    return GoogleCredentials._get_implicit_credentials()
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/oauth2client/oauth2client/client.py", line 1194, in _get_implicit_credentials
    raise ApplicationDefaultCredentialsError(ADC_HELP_MSG)
oauth2client.client.ApplicationDefaultCredentialsError: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
```
2015/09/14
[ "https://Stackoverflow.com/questions/32567357", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1257185/" ]
You could try implementing [SignalR](http://www.asp.net/signalr/overview/deployment/tutorial-signalr-self-host). It is a great library that uses web sockets to push data to clients. Edit: SignalR can help you solve your problem by allowing you to set up Hubs on your console app (server) that the WPF applications (clients) can connect to. When the clients start up, you will register them with a specified Hub. When something changes on the server, you can push from the server Hub to the client. The client will receive the information from the server, and you can handle it as you see fit. Rough mockup of some code:

```
namespace Server {
    public class YourHub : Hub {
        public void SomeHubMethod(string userName) {
            // clientMethodToCall is a method in the WPF application that
            // will be called. The client needs to be registered with the hub first.
            Clients.User(userName).clientMethodToCall("This is a test.");

            // One issue you may face is mapping client connections.
            // There are a couple of different ways/methodologies to do this.
            // Just figure out what will work best for you.
        }
    }
}

namespace Client {
    public class HubService {
        public IHubProxy CreateHubProxy() {
            var hubConnection = new HubConnection("http://serverAddress:serverPort/");
            IHubProxy yourHubProxy = hubConnection.CreateHubProxy("YourHub");
            return yourHubProxy;
        }
    }
}
```

Then in your WPF window:

```
var hubService = new HubService();
var yourHubProxy = hubService.CreateHubProxy();
yourHubProxy.Start().Wait();
yourHubProxy.On("clientMethodToCall", () => DoSomethingWithServerData());
```
In general, whatever solution you choose is plagued with a common problem: clients hide behind firewalls and have dynamic IP addresses. This makes it difficult (I've heard of technologies claiming to overcome this but haven't seen any in action) for a server to push to a client. In reality, the client talks and the server listens and responds. However, you can use this approach to simulate a push by:

1. polling (the client periodically asks for information)
2. long polling (the client asks for information and the server holds onto the request until information arrives or a timeout occurs)
3. sockets (the client requests a server connection that is used for bi-directional communication for a period of time).

Knowing those terms, your next choice is to write your own or use a third-party service (Azure, Amazon, other) to deliver messages for you. I personally like long polling because it is easy to implement. In my application, I have the following setup.

* A web API server on Azure with an endpoint that listens for message requests
* A simple loop inside the server code that checks the database for new messages every 100ms.
* A client that calls the API, handling the response.

As mentioned, there are many ways to do this. In your particular case, one way would be as follows.

1. Client A calls the server API to listen for a message
2. Server holds onto the call, waiting for a new message entry in the database
3. Client B calls the server API to post a new message
4. Server saves the message to the database
5. Server instance from step 2 sees the new message
6. Server returns the message to Client A.

Also, the message doesn't have to be stored in a database; it just depends on your needs.
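As a rough illustration of the client side of that long-polling loop (sketched in Python; the endpoint URL and payload handling are hypothetical):

```
import requests

API_URL = "https://example.com/api/messages/listen"  # hypothetical endpoint


def handle(message):
    print("got message:", message)


def listen(url, timeout=30):
    # The server holds each GET open until a message arrives or the
    # timeout expires; either way, the client immediately re-polls.
    while True:
        try:
            response = requests.get(url, timeout=timeout)
            if response.ok:
                handle(response.json())
        except requests.exceptions.Timeout:
            continue  # no message this round, so ask again


if __name__ == "__main__":
    listen(API_URL)
```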
32,567,357
I've been using [remote\_api](https://cloud.google.com/appengine/articles/remote_api) (Python) to access the datastore on GAE. I usually do `remote_api_shell.py -s <mydomain>`. Today I tried it and it fails; the error is:

> oauth2client.client.ApplicationDefaultCredentialsError: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE\_APPLICATION\_CREDENTIALS must be defined pointing to a file defining the credentials. See <https://developers.google.com/accounts/docs/application-default-credentials> for more information.

I cannot understand why it asks me that. The whole output is this:

```
stefano@~/gc$ remote_api_shell.py -s ....
Traceback (most recent call last):
  File "/usr/local/bin/remote_api_shell.py", line 133, in <module>
    run_file(__file__, globals())
  File "/usr/local/bin/remote_api_shell.py", line 129, in run_file
    execfile(_PATHS.script_file(script_name), globals_)
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/remote_api_shell.py", line 157, in <module>
    main(sys.argv)
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/remote_api_shell.py", line 153, in main
    appengine_rpc.HttpRpcServer)
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/remote_api_shell.py", line 74, in remote_api_shell
    secure=secure)
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/remote_api/remote_api_stub.py", line 734, in ConfigureRemoteApiForOAuth
    credentials = client.GoogleCredentials.get_application_default()
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/oauth2client/oauth2client/client.py", line 1204, in get_application_default
    return GoogleCredentials._get_implicit_credentials()
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/oauth2client/oauth2client/client.py", line 1194, in _get_implicit_credentials
    raise ApplicationDefaultCredentialsError(ADC_HELP_MSG)
oauth2client.client.ApplicationDefaultCredentialsError: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
```
2015/09/14
[ "https://Stackoverflow.com/questions/32567357", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1257185/" ]
You need to create some kind of subscription model for the clients to the server to handle a Publish-Subscribe channel (see <http://www.enterpriseintegrationpatterns.com/patterns/messaging/PublishSubscribeChannel.html>). The basic architecture is this:

1. A client sends a request to the messaging channel to register itself as a subscriber to a certain kind of message/event/etc.
2. The server sends messages to the channel to be delivered to the subscribers to that message.

There are many ways to handle this. You could use some of the Azure services (like Event Hub or Topic) if you don't want to reinvent the wheel here. You could also have your server application track all of these things: updates to IP addresses, updates to subscription interest, making sure that messages don't get sent more than once, and taking care of message durability (making sure messages get delivered even if the client is offline when the message is created).
Sounds like you want to track users à la <https://www.simple-talk.com/dotnet/asp.net/tracking-online-users-with-signalr/> , but in a desktop app in the sense of <http://www.codeproject.com/Articles/804770/Implementing-SignalR-in-Desktop-Applications> or damienbod.wordpress.com/2013/11/20/signalr-a-complete-wpf-client-using-mvvm/ .
32,567,357
I've been using [remote\_api](https://cloud.google.com/appengine/articles/remote_api) (Python) to access the datastore on GAE. I usually do `remote_api_shell.py -s <mydomain>`. Today I tried it and it fails; the error is:

> oauth2client.client.ApplicationDefaultCredentialsError: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE\_APPLICATION\_CREDENTIALS must be defined pointing to a file defining the credentials. See <https://developers.google.com/accounts/docs/application-default-credentials> for more information.

I cannot understand why it asks me that. The whole output is this:

```
stefano@~/gc$ remote_api_shell.py -s ....
Traceback (most recent call last):
  File "/usr/local/bin/remote_api_shell.py", line 133, in <module>
    run_file(__file__, globals())
  File "/usr/local/bin/remote_api_shell.py", line 129, in run_file
    execfile(_PATHS.script_file(script_name), globals_)
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/remote_api_shell.py", line 157, in <module>
    main(sys.argv)
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/remote_api_shell.py", line 153, in main
    appengine_rpc.HttpRpcServer)
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/remote_api_shell.py", line 74, in remote_api_shell
    secure=secure)
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/remote_api/remote_api_stub.py", line 734, in ConfigureRemoteApiForOAuth
    credentials = client.GoogleCredentials.get_application_default()
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/oauth2client/oauth2client/client.py", line 1204, in get_application_default
    return GoogleCredentials._get_implicit_credentials()
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/oauth2client/oauth2client/client.py", line 1194, in _get_implicit_credentials
    raise ApplicationDefaultCredentialsError(ADC_HELP_MSG)
oauth2client.client.ApplicationDefaultCredentialsError: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
```
2015/09/14
[ "https://Stackoverflow.com/questions/32567357", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1257185/" ]
In general, whatever solution you choose is plagued with a common problem: clients hide behind firewalls and have dynamic IP addresses. This makes it difficult (I've heard of technologies claiming to overcome this but haven't seen any in action) for a server to push to a client. In reality, the client talks and the server listens and responds. However, you can use this approach to simulate a push by:

1. polling (the client periodically asks for information)
2. long polling (the client asks for information and the server holds onto the request until information arrives or a timeout occurs)
3. sockets (the client requests a server connection that is used for bi-directional communication for a period of time).

Knowing those terms, your next choice is to write your own or use a third-party service (Azure, Amazon, other) to deliver messages for you. I personally like long polling because it is easy to implement. In my application, I have the following setup.

* A web API server on Azure with an endpoint that listens for message requests
* A simple loop inside the server code that checks the database for new messages every 100ms.
* A client that calls the API, handling the response.

As mentioned, there are many ways to do this. In your particular case, one way would be as follows.

1. Client A calls the server API to listen for a message
2. Server holds onto the call, waiting for a new message entry in the database
3. Client B calls the server API to post a new message
4. Server saves the message to the database
5. Server instance from step 2 sees the new message
6. Server returns the message to Client A.

Also, the message doesn't have to be stored in a database; it just depends on your needs.
Sounds like you want to track users à la <https://www.simple-talk.com/dotnet/asp.net/tracking-online-users-with-signalr/> , but in a desktop app in the sense of <http://www.codeproject.com/Articles/804770/Implementing-SignalR-in-Desktop-Applications> or damienbod.wordpress.com/2013/11/20/signalr-a-complete-wpf-client-using-mvvm/ .
11,866,944
I would like to be able to pickle a function or class from within `__main__`, with the obvious problem (mentioned in other posts) that the pickled function/class is in the `__main__` namespace, so unpickling in another script/module will fail. I have the following solution, which works; is there a reason this should not be done? The following is in myscript.py:

```
import myscript
import pickle

if __name__ == "__main__":
    print pickle.dumps(myscript.myclass())
else:
    class myclass:
        pass
```

**edit**: The unpickling would be done in a script/module that *has access to* myscript.py and can do an `import myscript`. The aim is to use a solution like [parallel python](http://www.parallelpython.com/ "parallel python") to call functions remotely, and to be able to write a short, *standalone* script that contains the functions/classes that can be accessed remotely.
2012/08/08
[ "https://Stackoverflow.com/questions/11866944", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1068490/" ]
You can get a better handle on global objects by importing `__main__`, and using the methods available in that module. This is what [dill](http://pythonhosted.org/dill) does in order to serialize almost anything in Python. Basically, when dill serializes an interactively defined function, it uses some name mangling on `__main__` on both the serialization and deserialization side that makes `__main__` a valid module.

```
>>> import dill
>>>
>>> def bar(x):
...   return foo(x) + x
...
>>> def foo(x):
...   return x**2
...
>>> bar(3)
12
>>>
>>> _bar = dill.loads(dill.dumps(bar))
>>> _bar(3)
12
```

Actually, dill registers its types into the `pickle` registry, so if you have some black-box code that uses `pickle` and you can't really edit it, then just importing dill can magically make it work without monkeypatching the 3rd party code.

Or, if you want the whole interpreter session sent over as a "python image", dill can do that too.

```
>>> # continuing from above
>>> dill.dump_session('foobar.pkl')
>>>
>>> ^D
dude@sakurai>$ python
Python 2.7.5 (default, Sep 30 2013, 20:15:49)
[GCC 4.2.1 (Apple Inc. build 5566)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import dill
>>> dill.load_session('foobar.pkl')
>>> _bar(3)
12
```

You can easily send the image across ssh to another computer, and start where you left off there, as long as there's version compatibility of pickle and the usual caveats about python changing and things being installed apply.

I actually use dill to serialize objects and send them across parallel resources with [parallel python](http://www.parallelpython.com/), multiprocessing, and [mpi4py](https://bitbucket.org/mpi4py/mpi4py). I roll these up conveniently into the [pathos](http://pythonhosted.org/pathos) package (and [pyina](http://pythonhosted.org/pyina) for MPI), which provides a uniform `map` interface for different parallel batch processing backends.

```
>>> # continued from above
>>> from pathos.multiprocessing import ProcessingPool as Pool
>>> Pool(4).map(foo, range(10))
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
>>>
>>> from pyina.launchers import MpiPool
>>> MpiPool(4).map(foo, range(10))
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

There are also non-blocking and iterative maps as well as non-parallel pipe connections. I also have a pathos module for `pp`; however, it is somewhat unstable for functions defined in `__main__`. I'm working on improving that. If you like, fork [the code on github](https://www.github.com/uqfoundation) and help make `pp` better for functions defined in `__main__`. The reason `pp` doesn't pickle well is that `pp` does its serialization tricks through using temporary file objects and reading the interpreter session's history... so it doesn't serialize objects in the same way that multiprocessing or mpi4py do. I have a dill module, `dill.source`, that seamlessly does the same type of pickling that `pp` uses, but it's rather new.
If you are trying to pickle something so that you can use it somewhere else, separate from `test_script`, that's not going to work, because pickle (apparently) just tries to load the function from the module. Here's an example: test\_script.py

```
def my_awesome_function(x, y, z):
    return x + y + z
```

picklescript.py

```
import pickle
import test_script

with open("awesome.pickle", "wb") as f:
    pickle.dump(test_script.my_awesome_function, f)
```

If you run `python picklescript.py` and then change the filename of `test_script`, loading the function will fail. e.g. Running this:

```
import pickle
with open("awesome.pickle", "rb") as f:
    pickle.load(f)
```

will give you the following traceback:

```
Traceback (most recent call last):
  File "load_pickle.py", line 3, in <module>
    pickle.load(f)
  File "/Library/Frameworks/Python.framework/Versions/7.3/lib/python2.7/pickle.py", line 1378, in load
    return Unpickler(file).load()
  File "/Library/Frameworks/Python.framework/Versions/7.3/lib/python2.7/pickle.py", line 858, in load
    dispatch[key](self)
  File "/Library/Frameworks/Python.framework/Versions/7.3/lib/python2.7/pickle.py", line 1090, in load_global
    klass = self.find_class(module, name)
  File "/Library/Frameworks/Python.framework/Versions/7.3/lib/python2.7/pickle.py", line 1124, in find_class
    __import__(module)
ImportError: No module named test_script
```
11,866,944
I would like to be able to pickle a function or class from within `__main__`, with the obvious problem (mentioned in other posts) that the pickled function/class is in the `__main__` namespace, so unpickling in another script/module will fail. I have the following solution, which works; is there a reason this should not be done? The following is in myscript.py:

```
import myscript
import pickle

if __name__ == "__main__":
    print pickle.dumps(myscript.myclass())
else:
    class myclass:
        pass
```

**edit**: The unpickling would be done in a script/module that *has access to* myscript.py and can do an `import myscript`. The aim is to use a solution like [parallel python](http://www.parallelpython.com/ "parallel python") to call functions remotely, and to be able to write a short, *standalone* script that contains the functions/classes that can be accessed remotely.
2012/08/08
[ "https://Stackoverflow.com/questions/11866944", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1068490/" ]
Pickle seems to look at the `__main__` scope for definitions of classes and functions. From inside the module you're unpickling from, try this:

```
import myscript
import __main__

__main__.myclass = myscript.myclass
# unpickle anywhere after this
```
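A fuller sketch of the round trip, assuming `myscript.py` pickled an instance to a file named `obj.pkl` (the file name is illustrative):

```
# unpickler.py, run as its own script/module
import pickle
import __main__

import myscript

# Alias the class into __main__ before unpickling, so pickle's lookup
# of __main__.myclass succeeds.
__main__.myclass = myscript.myclass

with open("obj.pkl", "rb") as f:
    obj = pickle.load(f)

print(type(obj))  # an instance of myclass
```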
If you are trying to pickle something so that you can use it somewhere else, separate from `test_script`, that's not going to work, because pickle (apparently) just tries to load the function from the module. Here's an example: test\_script.py

```
def my_awesome_function(x, y, z):
    return x + y + z
```

picklescript.py

```
import pickle
import test_script

with open("awesome.pickle", "wb") as f:
    pickle.dump(test_script.my_awesome_function, f)
```

If you run `python picklescript.py` and then change the filename of `test_script`, loading the function will fail. e.g. Running this:

```
import pickle
with open("awesome.pickle", "rb") as f:
    pickle.load(f)
```

will give you the following traceback:

```
Traceback (most recent call last):
  File "load_pickle.py", line 3, in <module>
    pickle.load(f)
  File "/Library/Frameworks/Python.framework/Versions/7.3/lib/python2.7/pickle.py", line 1378, in load
    return Unpickler(file).load()
  File "/Library/Frameworks/Python.framework/Versions/7.3/lib/python2.7/pickle.py", line 858, in load
    dispatch[key](self)
  File "/Library/Frameworks/Python.framework/Versions/7.3/lib/python2.7/pickle.py", line 1090, in load_global
    klass = self.find_class(module, name)
  File "/Library/Frameworks/Python.framework/Versions/7.3/lib/python2.7/pickle.py", line 1124, in find_class
    __import__(module)
ImportError: No module named test_script
```
11,866,944
I would like to be able to pickle a function or class from within `__main__`, with the obvious problem (mentioned in other posts) that the pickled function/class is in the `__main__` namespace, so unpickling in another script/module will fail. I have the following solution, which works; is there a reason this should not be done? The following is in myscript.py:

```
import myscript
import pickle

if __name__ == "__main__":
    print pickle.dumps(myscript.myclass())
else:
    class myclass:
        pass
```

**edit**: The unpickling would be done in a script/module that *has access to* myscript.py and can do an `import myscript`. The aim is to use a solution like [parallel python](http://www.parallelpython.com/ "parallel python") to call functions remotely, and to be able to write a short, *standalone* script that contains the functions/classes that can be accessed remotely.
2012/08/08
[ "https://Stackoverflow.com/questions/11866944", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1068490/" ]
Pickle seems to look at the `__main__` scope for definitions of classes and functions. From inside the module you're unpickling from, try this:

```
import myscript
import __main__

__main__.myclass = myscript.myclass
# unpickle anywhere after this
```
You can get a better handle on global objects by importing `__main__`, and using the methods available in that module. This is what [dill](http://pythonhosted.org/dill) does in order to serialize almost anything in Python. Basically, when dill serializes an interactively defined function, it uses some name mangling on `__main__` on both the serialization and deserialization side that makes `__main__` a valid module.

```
>>> import dill
>>>
>>> def bar(x):
...   return foo(x) + x
...
>>> def foo(x):
...   return x**2
...
>>> bar(3)
12
>>>
>>> _bar = dill.loads(dill.dumps(bar))
>>> _bar(3)
12
```

Actually, dill registers its types into the `pickle` registry, so if you have some black-box code that uses `pickle` and you can't really edit it, then just importing dill can magically make it work without monkeypatching the 3rd party code.

Or, if you want the whole interpreter session sent over as a "python image", dill can do that too.

```
>>> # continuing from above
>>> dill.dump_session('foobar.pkl')
>>>
>>> ^D
dude@sakurai>$ python
Python 2.7.5 (default, Sep 30 2013, 20:15:49)
[GCC 4.2.1 (Apple Inc. build 5566)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import dill
>>> dill.load_session('foobar.pkl')
>>> _bar(3)
12
```

You can easily send the image across ssh to another computer, and start where you left off there, as long as there's version compatibility of pickle and the usual caveats about python changing and things being installed apply.

I actually use dill to serialize objects and send them across parallel resources with [parallel python](http://www.parallelpython.com/), multiprocessing, and [mpi4py](https://bitbucket.org/mpi4py/mpi4py). I roll these up conveniently into the [pathos](http://pythonhosted.org/pathos) package (and [pyina](http://pythonhosted.org/pyina) for MPI), which provides a uniform `map` interface for different parallel batch processing backends.

```
>>> # continued from above
>>> from pathos.multiprocessing import ProcessingPool as Pool
>>> Pool(4).map(foo, range(10))
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
>>>
>>> from pyina.launchers import MpiPool
>>> MpiPool(4).map(foo, range(10))
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

There are also non-blocking and iterative maps as well as non-parallel pipe connections. I also have a pathos module for `pp`; however, it is somewhat unstable for functions defined in `__main__`. I'm working on improving that. If you like, fork [the code on github](https://www.github.com/uqfoundation) and help make `pp` better for functions defined in `__main__`. The reason `pp` doesn't pickle well is that `pp` does its serialization tricks through using temporary file objects and reading the interpreter session's history... so it doesn't serialize objects in the same way that multiprocessing or mpi4py do. I have a dill module, `dill.source`, that seamlessly does the same type of pickling that `pp` uses, but it's rather new.
50,005,229
So the assignment is: take 2 lists and write a program that returns a list containing only the elements common to both lists, without duplicates, and it must work on lists of different sizes. My code is: ``` a = [1, 2, 4] b = [3, 1, 5, 2] for j < len(a): for i < len(b): if a(elem) == b(i): print (a(elem)) i=i+1 j=j+1 ``` An infinite loop is then generated, where it prints 1 and then never exits. Can someone tell me why the infinite loop occurs? I understand this is not the most "pythonic" way of doing things; my coding background is a small amount of brute-force C, and I do not know much Python. If there are simple alternatives to this, please let me know, as well as why it never exits.
2018/04/24
[ "https://Stackoverflow.com/questions/50005229", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9691751/" ]
The value is a regular JavaScript expression. This way, if you want to pass a string, say `'test'`, use: ``` v-my-directive="'test'" ``` Demo: ```js Vue.directive('my-directive', function (el, binding) { console.log('directive expression:', binding.value) // => "test" }) new Vue({ el: '#app', data: { message: 'Hello Vue.js!' } }) ``` ```html <script src="https://unpkg.com/vue"></script> <div id="app"> <p>{{ message }}</p> <div v-my-directive="'test'"></div> </div> ```
You have to quote the string, otherwise it will look for the `test` variable in your component context (its `props` or `data`): ``` v-my-directive="'test'" ``` Inside your custom directive, you can access the passed value as in the `binding.value`: ``` Vue.directive('demo', { bind: function (el, binding, vnode) { var s = JSON.stringify el.innerHTML = 'name: ' + s(binding.name) + '<br>' + 'value: ' + s(binding.value) + '<br>' + 'expression: ' + s(binding.expression) + '<br>' + 'argument: ' + s(binding.arg) + '<br>' + 'modifiers: ' + s(binding.modifiers) + '<br>' + 'vnode keys: ' + Object.keys(vnode).join(', ') } }) ``` See the [Custom Directives](https://v2.vuejs.org/v2/guide/custom-directive.html) chapter of the guide.
37,490,609
I'm working on a site for renting rooms. The user picks 2 dates (UserStartDate & UserEndDate). With this Python code I get the number of days in their date range: ``` user_date_range = [endUser - timedelta(i) for i in range((endUser - startUser).days+1)] user_range_num_days = len(user_date_range) ``` and I have a daily price for a room: $20. But due to a lack of proficiency in Django, I can't figure out how to calculate the user's price according to their date range, or where this should be done. Hope for your help.
2016/05/27
[ "https://Stackoverflow.com/questions/37490609", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6361942/" ]
It doesn't have anything to do with Django but rather plain Python. Assuming `user_start_date` and `user_end_date` are both Python `datetime.date` or `datetime.datetime` objects, you could do: ``` num_days = (user_end_date - user_start_date).days total_price = num_days * 20 ```
<https://docs.python.org/2/library/calendar.html> A calendar is necessary, as you should be aware that not all months have the same number of days in them. itermonthdates(year, month) returns an iterator for all days in the month. Run through that iterator and increment a count for every date match within the range. Of course, if the end date extends into the next month, keep the same counter.
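For illustration, a minimal sketch of that calendar-based counting; the function and variable names are my own, not from the answer. Note that `itermonthdates` also yields padding days from neighbouring months to fill out whole weeks, hence the extra month check:

```
from calendar import Calendar
from datetime import date

def days_in_range(start, end):
    """Count days from start to end (inclusive) by walking month calendars."""
    cal = Calendar()
    count = 0
    year, month = start.year, start.month
    while (year, month) <= (end.year, end.month):
        for d in cal.itermonthdates(year, month):
            # itermonthdates pads with days of adjacent months; skip those
            if d.month == month and start <= d <= end:
                count += 1
        year, month = (year, month + 1) if month < 12 else (year + 1, 1)
    return count

print(days_in_range(date(2016, 5, 27), date(2016, 6, 3)))  # 8
```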
6,831,685
I am learning Python via Dive Into Python. I have a few questions that I am unable to resolve, even with the documentation. 1) ***BaseClass*** 2) ***InheritClass*** What exactly happens when we assign an *InheritClass* instance to a variable, when *InheritClass* doesn't contain an `__init__` method and *BaseClass* does? * Is the *BaseClass* `__init__` method called automatically * Also, tell me other things that happen under the hood. Actually, the fileInfo.py example is giving me a serious headache; I am just unable to understand how things are working.
2011/07/26
[ "https://Stackoverflow.com/questions/6831685", "https://Stackoverflow.com", "https://Stackoverflow.com/users/570928/" ]
Yes, `BaseClass.__init__` will be called automatically. Same goes for any other methods defined in the parent class but not the child class. Observe: ``` >>> class Parent(object): ... def __init__(self): ... print 'Parent.__init__' ... def func(self, x): ... print x ... >>> class Child(Parent): ... pass ... >>> x = Child() Parent.__init__ >>> x.func(1) 1 ``` The child inherits its parent's methods. It can override them, but it doesn't have to.
@FogleBird has already answered your question, but I wanted to add something and can't comment on his post: You may also want to look at the [`super` function](http://docs.python.org/library/functions.html#super). It's a way to call a parent's method from inside a child. It's helpful when you want to extend a method, for example: ``` class ParentClass(object): def __init__(self, x): self.x = x class ChildClass(ParentClass): def __init__(self, x, y): self.y = y super(ChildClass, self).__init__(x) ``` This can of course encompass methods that are a lot more complicated, *not* just the `__init__` method, or even a method by the same name!
23,969,296
I want to get the number of indexes at which two strings differ. Things that are fixed: the string data will only have 0 or 1 at any index, i.e. the strings are binary representations of a number, and both strings will be of the same length. For this problem I wrote the below function in Python: ``` def foo(a,b): result = 0 for x,y in zip(a,b): if x != y: result += 1 return result ``` But the thing is these strings are huge, very large, so the above function is taking too much time. Is there anything I should do to make it super fast? This is how I did the same in C++. It's quite fast now, but I still can't understand how to do the packing into short integers and all that said by @Yves Daoust: ``` size_t diff(long long int n1, long long int n2) { long long int c = n1 ^ n2; // sizeof(long long), not sizeof(int), so all 64 bits get counted bitset<sizeof(long long) * CHAR_BIT> bits(c); string s = bits.to_string(); return std::count(s.begin(), s.end(), '1'); } ```
2014/05/31
[ "https://Stackoverflow.com/questions/23969296", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3694018/" ]
I'll walk through the options here, but basically you are calculating the hamming distance between two numbers. There are dedicated libraries that can make this really, really fast, but lets focus on the pure Python options first. Your approach, zipping ---------------------- `zip()` produces one big list *first*, then lets you loop. You could use `itertools.izip()` instead, and make it a generator expression: ``` from itertools import izip def foo(a, b): return sum(x != y for x, y in izip(a, b)) ``` This produces only one pair at a time, avoiding having to create a large list of tuples first. The Python boolean type is a subclass of `int`, where `True == 1` and `False == 0`, letting you sum them: ``` >>> True + True 2 ``` Using integers instead ---------------------- However, you probably want to rethink your input data. It's much more efficient to use integers to represent your binary data; integers can be operated on directly. Doing the conversion inline, then counting the number of 1s on the XOR result is: ``` def foo(a, b): return format(int(a, 2) ^ int(b, 2), 'b').count('1') ``` but not having to convert `a` and `b` to integers in the first place would be much more efficient. Time comparisons: ``` >>> from itertools import izip >>> import timeit >>> s1 = "0100010010" >>> s2 = "0011100010" >>> def foo_zipped(a, b): return sum(x != y for x, y in izip(a, b)) ... >>> def foo_xor(a, b): return format(int(a, 2) ^ int(b, 2), 'b').count('1') ... >>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_zipped as f') 1.7872788906097412 >>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_xor as f') 1.3399651050567627 >>> s1 = s1 * 1000 >>> s2 = s2 * 1000 >>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_zipped as f', number=1000) 1.0649528503417969 >>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_xor as f', number=1000) 0.0779869556427002 ``` The XOR approach is faster by orders of magnitude if the inputs get larger, and this is **with** converting the inputs to `int` first. Dedicated libraries for bitcounting ----------------------------------- The bit counting (`format(integer, 'b').count(1)`) is pretty fast, but can be made faster still if you installed the [`gmpy` extension library](https://pypi.python.org/pypi/gmpy) (a Python wrapper around the [GMP library](https://gmplib.org/)) and used the `gmpy.popcount()` function: ``` def foo(a, b): return gmpy.popcount(int(a, 2) ^ int(b, 2)) ``` `gmpy.popcount()` is about 20 times faster on my machine than the `str.count()` method. Again, not having to convert `a` and `b` to integers to begin with would remove another bottleneck, but even then there per-call performance is almost doubled: ``` >>> import gmpy >>> def foo_xor_gmpy(a, b): return gmpy.popcount(int(a, 2) ^ int(b, 2)) ... >>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_xor as f', number=10000) 0.7225301265716553 >>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_xor_gmpy as f', number=10000) 0.47731995582580566 ``` To illustrate the difference when `a` and `b` are integers to begin with: ``` >>> si1, si2 = int(s1, 2), int(s2, 2) >>> def foo_xor_int(a, b): return format(a ^ b, 'b').count('1') ... >>> def foo_xor_gmpy_int(a, b): return gmpy.popcount(a ^ b) ... 
>>> timeit.timeit('f(si1, si2)', 'from __main__ import si1, si2, foo_xor_int as f', number=100000) 3.0529568195343018 >>> timeit.timeit('f(si1, si2)', 'from __main__ import si1, si2, foo_xor_gmpy_int as f', number=100000) 0.15820622444152832 ``` Dedicated libraries for hamming distances ----------------------------------------- The `gmpy` library actually includes a `gmpy.hamdist()` function, which calculates this exact number (the number of 1 bits in the XOR result of the integers) *directly*: ``` def foo_gmpy_hamdist(a, b): return gmpy.hamdist(int(a, 2), int(b, 2)) ``` which'll blow your socks off *entirely* if you used integers to begin with: ``` def foo_gmpy_hamdist_int(a, b): return gmpy.hamdist(a, b) ``` Comparisons: ``` >>> def foo_gmpy_hamdist(a, b): ... return gmpy.hamdist(int(a, 2), int(b, 2)) ... >>> def foo_gmpy_hamdist_int(a, b): ... return gmpy.hamdist(a, b) ... >>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_xor as f', number=100000) 7.479684114456177 >>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_gmpy_hamdist as f', number=100000) 4.340585947036743 >>> timeit.timeit('f(si1, si2)', 'from __main__ import si1, si2, foo_gmpy_hamdist_int as f', number=100000) 0.22896099090576172 ``` That's 100.000 times the hamming distance between two 3k+ digit numbers. Another package that can calculate the distance is [`Distance`](https://pypi.python.org/pypi/Distance), which supports calculating the hamming distance between strings directly. Make sure you use the `--with-c` switch to have it compile the C optimisations; when installing with `pip` use `bin/pip install Distance --install-option --with-c` for example. Benchmarking this against the XOR-with-bitcount approach again: ``` >>> import distance >>> def foo_distance_hamming(a, b): ... return distance.hamming(a, b) ... >>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_xor as f', number=100000) 7.229060173034668 >>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_distance_hamming as f', number=100000) 0.7701470851898193 ``` It uses the naive approach; zip over both input strings and count the number of differences, but since it does this in C it is still plenty faster, about 10 times as fast. The `gmpy.hamdist()` function still beats it when you use integers, however.
Not tested, but how would this perform: ``` sum(x!=y for x,y in zip(a,b)) ```
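A quick way to check, as a sketch, using `timeit` with made-up input sizes:

```
import timeit

setup = "s1 = '01' * 50000; s2 = '10' * 50000"  # assumed test strings
stmt = "sum(x != y for x, y in zip(s1, s2))"
print(timeit.timeit(stmt, setup=setup, number=10))
```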
23,969,296
I want to get the number of indexes at which two strings differ. Things that are fixed: the string data will only have 0 or 1 at any index, i.e. the strings are binary representations of a number, and both strings will be of the same length. For this problem I wrote the below function in Python: ``` def foo(a,b): result = 0 for x,y in zip(a,b): if x != y: result += 1 return result ``` But the thing is these strings are huge, very large, so the above function is taking too much time. Is there anything I should do to make it super fast? This is how I did the same in C++. It's quite fast now, but I still can't understand how to do the packing into short integers and all that said by @Yves Daoust: ``` size_t diff(long long int n1, long long int n2) { long long int c = n1 ^ n2; // sizeof(long long), not sizeof(int), so all 64 bits get counted bitset<sizeof(long long) * CHAR_BIT> bits(c); string s = bits.to_string(); return std::count(s.begin(), s.end(), '1'); } ```
2014/05/31
[ "https://Stackoverflow.com/questions/23969296", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3694018/" ]
I'll walk through the options here, but basically you are calculating the hamming distance between two numbers. There are dedicated libraries that can make this really, really fast, but lets focus on the pure Python options first. Your approach, zipping ---------------------- `zip()` produces one big list *first*, then lets you loop. You could use `itertools.izip()` instead, and make it a generator expression: ``` from itertools import izip def foo(a, b): return sum(x != y for x, y in izip(a, b)) ``` This produces only one pair at a time, avoiding having to create a large list of tuples first. The Python boolean type is a subclass of `int`, where `True == 1` and `False == 0`, letting you sum them: ``` >>> True + True 2 ``` Using integers instead ---------------------- However, you probably want to rethink your input data. It's much more efficient to use integers to represent your binary data; integers can be operated on directly. Doing the conversion inline, then counting the number of 1s on the XOR result is: ``` def foo(a, b): return format(int(a, 2) ^ int(b, 2), 'b').count('1') ``` but not having to convert `a` and `b` to integers in the first place would be much more efficient. Time comparisons: ``` >>> from itertools import izip >>> import timeit >>> s1 = "0100010010" >>> s2 = "0011100010" >>> def foo_zipped(a, b): return sum(x != y for x, y in izip(a, b)) ... >>> def foo_xor(a, b): return format(int(a, 2) ^ int(b, 2), 'b').count('1') ... >>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_zipped as f') 1.7872788906097412 >>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_xor as f') 1.3399651050567627 >>> s1 = s1 * 1000 >>> s2 = s2 * 1000 >>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_zipped as f', number=1000) 1.0649528503417969 >>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_xor as f', number=1000) 0.0779869556427002 ``` The XOR approach is faster by orders of magnitude if the inputs get larger, and this is **with** converting the inputs to `int` first. Dedicated libraries for bitcounting ----------------------------------- The bit counting (`format(integer, 'b').count(1)`) is pretty fast, but can be made faster still if you installed the [`gmpy` extension library](https://pypi.python.org/pypi/gmpy) (a Python wrapper around the [GMP library](https://gmplib.org/)) and used the `gmpy.popcount()` function: ``` def foo(a, b): return gmpy.popcount(int(a, 2) ^ int(b, 2)) ``` `gmpy.popcount()` is about 20 times faster on my machine than the `str.count()` method. Again, not having to convert `a` and `b` to integers to begin with would remove another bottleneck, but even then there per-call performance is almost doubled: ``` >>> import gmpy >>> def foo_xor_gmpy(a, b): return gmpy.popcount(int(a, 2) ^ int(b, 2)) ... >>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_xor as f', number=10000) 0.7225301265716553 >>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_xor_gmpy as f', number=10000) 0.47731995582580566 ``` To illustrate the difference when `a` and `b` are integers to begin with: ``` >>> si1, si2 = int(s1, 2), int(s2, 2) >>> def foo_xor_int(a, b): return format(a ^ b, 'b').count('1') ... >>> def foo_xor_gmpy_int(a, b): return gmpy.popcount(a ^ b) ... 
>>> timeit.timeit('f(si1, si2)', 'from __main__ import si1, si2, foo_xor_int as f', number=100000) 3.0529568195343018 >>> timeit.timeit('f(si1, si2)', 'from __main__ import si1, si2, foo_xor_gmpy_int as f', number=100000) 0.15820622444152832 ``` Dedicated libraries for hamming distances ----------------------------------------- The `gmpy` library actually includes a `gmpy.hamdist()` function, which calculates this exact number (the number of 1 bits in the XOR result of the integers) *directly*: ``` def foo_gmpy_hamdist(a, b): return gmpy.hamdist(int(a, 2), int(b, 2)) ``` which'll blow your socks off *entirely* if you used integers to begin with: ``` def foo_gmpy_hamdist_int(a, b): return gmpy.hamdist(a, b) ``` Comparisons: ``` >>> def foo_gmpy_hamdist(a, b): ... return gmpy.hamdist(int(a, 2), int(b, 2)) ... >>> def foo_gmpy_hamdist_int(a, b): ... return gmpy.hamdist(a, b) ... >>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_xor as f', number=100000) 7.479684114456177 >>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_gmpy_hamdist as f', number=100000) 4.340585947036743 >>> timeit.timeit('f(si1, si2)', 'from __main__ import si1, si2, foo_gmpy_hamdist_int as f', number=100000) 0.22896099090576172 ``` That's 100.000 times the hamming distance between two 3k+ digit numbers. Another package that can calculate the distance is [`Distance`](https://pypi.python.org/pypi/Distance), which supports calculating the hamming distance between strings directly. Make sure you use the `--with-c` switch to have it compile the C optimisations; when installing with `pip` use `bin/pip install Distance --install-option --with-c` for example. Benchmarking this against the XOR-with-bitcount approach again: ``` >>> import distance >>> def foo_distance_hamming(a, b): ... return distance.hamming(a, b) ... >>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_xor as f', number=100000) 7.229060173034668 >>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_distance_hamming as f', number=100000) 0.7701470851898193 ``` It uses the naive approach; zip over both input strings and count the number of differences, but since it does this in C it is still plenty faster, about 10 times as fast. The `gmpy.hamdist()` function still beats it when you use integers, however.
If the strings represent binary numbers, you can convert to integers and use bitwise operators: ``` def foo(s1, s2): # return sum(map(int, format(int(a, 2) ^ int(b, 2), 'b'))) # one-liner a = int(s1, 2) # convert string to integer b = int(s2, 2) c = a ^ b # use xor to get differences s = format(c, 'b') # convert back to string of zeroes and ones return sum(map(int, s)) # sum all ones (count of differences) s1 = "0100010010" s2 = "0011100010" # 12345 assert foo(s1, s2) == 5 ```
23,969,296
I want to get the number of indexes at which two strings differ. Things that are fixed: the string data will only have 0 or 1 at any index, i.e. the strings are binary representations of a number, and both strings will be of the same length. For this problem I wrote the below function in Python: ``` def foo(a,b): result = 0 for x,y in zip(a,b): if x != y: result += 1 return result ``` But the thing is these strings are huge, very large, so the above function is taking too much time. Is there anything I should do to make it super fast? This is how I did the same in C++. It's quite fast now, but I still can't understand how to do the packing into short integers and all that said by @Yves Daoust: ``` size_t diff(long long int n1, long long int n2) { long long int c = n1 ^ n2; // sizeof(long long), not sizeof(int), so all 64 bits get counted bitset<sizeof(long long) * CHAR_BIT> bits(c); string s = bits.to_string(); return std::count(s.begin(), s.end(), '1'); } ```
2014/05/31
[ "https://Stackoverflow.com/questions/23969296", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3694018/" ]
I'll walk through the options here, but basically you are calculating the hamming distance between two numbers. There are dedicated libraries that can make this really, really fast, but lets focus on the pure Python options first. Your approach, zipping ---------------------- `zip()` produces one big list *first*, then lets you loop. You could use `itertools.izip()` instead, and make it a generator expression: ``` from itertools import izip def foo(a, b): return sum(x != y for x, y in izip(a, b)) ``` This produces only one pair at a time, avoiding having to create a large list of tuples first. The Python boolean type is a subclass of `int`, where `True == 1` and `False == 0`, letting you sum them: ``` >>> True + True 2 ``` Using integers instead ---------------------- However, you probably want to rethink your input data. It's much more efficient to use integers to represent your binary data; integers can be operated on directly. Doing the conversion inline, then counting the number of 1s on the XOR result is: ``` def foo(a, b): return format(int(a, 2) ^ int(b, 2), 'b').count('1') ``` but not having to convert `a` and `b` to integers in the first place would be much more efficient. Time comparisons: ``` >>> from itertools import izip >>> import timeit >>> s1 = "0100010010" >>> s2 = "0011100010" >>> def foo_zipped(a, b): return sum(x != y for x, y in izip(a, b)) ... >>> def foo_xor(a, b): return format(int(a, 2) ^ int(b, 2), 'b').count('1') ... >>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_zipped as f') 1.7872788906097412 >>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_xor as f') 1.3399651050567627 >>> s1 = s1 * 1000 >>> s2 = s2 * 1000 >>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_zipped as f', number=1000) 1.0649528503417969 >>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_xor as f', number=1000) 0.0779869556427002 ``` The XOR approach is faster by orders of magnitude if the inputs get larger, and this is **with** converting the inputs to `int` first. Dedicated libraries for bitcounting ----------------------------------- The bit counting (`format(integer, 'b').count(1)`) is pretty fast, but can be made faster still if you installed the [`gmpy` extension library](https://pypi.python.org/pypi/gmpy) (a Python wrapper around the [GMP library](https://gmplib.org/)) and used the `gmpy.popcount()` function: ``` def foo(a, b): return gmpy.popcount(int(a, 2) ^ int(b, 2)) ``` `gmpy.popcount()` is about 20 times faster on my machine than the `str.count()` method. Again, not having to convert `a` and `b` to integers to begin with would remove another bottleneck, but even then there per-call performance is almost doubled: ``` >>> import gmpy >>> def foo_xor_gmpy(a, b): return gmpy.popcount(int(a, 2) ^ int(b, 2)) ... >>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_xor as f', number=10000) 0.7225301265716553 >>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_xor_gmpy as f', number=10000) 0.47731995582580566 ``` To illustrate the difference when `a` and `b` are integers to begin with: ``` >>> si1, si2 = int(s1, 2), int(s2, 2) >>> def foo_xor_int(a, b): return format(a ^ b, 'b').count('1') ... >>> def foo_xor_gmpy_int(a, b): return gmpy.popcount(a ^ b) ... 
>>> timeit.timeit('f(si1, si2)', 'from __main__ import si1, si2, foo_xor_int as f', number=100000) 3.0529568195343018 >>> timeit.timeit('f(si1, si2)', 'from __main__ import si1, si2, foo_xor_gmpy_int as f', number=100000) 0.15820622444152832 ``` Dedicated libraries for hamming distances ----------------------------------------- The `gmpy` library actually includes a `gmpy.hamdist()` function, which calculates this exact number (the number of 1 bits in the XOR result of the integers) *directly*: ``` def foo_gmpy_hamdist(a, b): return gmpy.hamdist(int(a, 2), int(b, 2)) ``` which'll blow your socks off *entirely* if you used integers to begin with: ``` def foo_gmpy_hamdist_int(a, b): return gmpy.hamdist(a, b) ``` Comparisons: ``` >>> def foo_gmpy_hamdist(a, b): ... return gmpy.hamdist(int(a, 2), int(b, 2)) ... >>> def foo_gmpy_hamdist_int(a, b): ... return gmpy.hamdist(a, b) ... >>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_xor as f', number=100000) 7.479684114456177 >>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_gmpy_hamdist as f', number=100000) 4.340585947036743 >>> timeit.timeit('f(si1, si2)', 'from __main__ import si1, si2, foo_gmpy_hamdist_int as f', number=100000) 0.22896099090576172 ``` That's 100.000 times the hamming distance between two 3k+ digit numbers. Another package that can calculate the distance is [`Distance`](https://pypi.python.org/pypi/Distance), which supports calculating the hamming distance between strings directly. Make sure you use the `--with-c` switch to have it compile the C optimisations; when installing with `pip` use `bin/pip install Distance --install-option --with-c` for example. Benchmarking this against the XOR-with-bitcount approach again: ``` >>> import distance >>> def foo_distance_hamming(a, b): ... return distance.hamming(a, b) ... >>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_xor as f', number=100000) 7.229060173034668 >>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_distance_hamming as f', number=100000) 0.7701470851898193 ``` It uses the naive approach; zip over both input strings and count the number of differences, but since it does this in C it is still plenty faster, about 10 times as fast. The `gmpy.hamdist()` function still beats it when you use integers, however.
Pack your strings as short integers (16 bits). After XORing, look each 16-bit chunk up in a precomputed table of 65536 entries that gives the number of 1s per short. If pre-packing is not an option, switch to C++ with inline AVX2 intrinsics. They will allow you to load 32 characters in a single instruction, perform the comparisons, then pack the 32 results into 32 bits (if I am right).
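For illustration, a sketch of the 16-bit lookup-table idea in Python rather than C++, assuming the bit strings have already been packed into Python ints (e.g. with `int(s, 2)`):

```
# precompute the number of 1 bits for every 16-bit value (65536 entries)
POPCOUNT16 = [bin(i).count('1') for i in range(1 << 16)]

def hamming(a, b):
    c = a ^ b
    total = 0
    while c:
        total += POPCOUNT16[c & 0xFFFF]  # count 1s in the low 16 bits
        c >>= 16                         # move on to the next 16-bit chunk
    return total

print(hamming(int('0100010010', 2), int('0011100010', 2)))  # 5
```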
24,502,360
I am a Python newbie and am trying to write a numpy array into a format readable in Matlab, as an array [xi, yi, ti] with rows separated by semicolons. In Python, I am currently able to write it in the following form: a numpy array printed on screen/written to file as [[xi yi ti]]. Here is the code: ``` import math import random import numpy as np SPOT = [] f = open('data_dump.txt', 'a') for i in range(10): X = random.randrange(6) Y = random.randrange(10) T = random.randrange(5) SPOT.append([X,Y,T]) SPOT = np.array(SPOT) f.write(str(SPOT[:])) f.close() ``` Please suggest how I should proceed to be able to write this data in the Matlab-readable format mentioned above. Thanks in advance! Sree.
2014/07/01
[ "https://Stackoverflow.com/questions/24502360", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3792245/" ]
It is not really necessary to write your `array` into a special format. Write it into a normal `csv` and use [`dlmread`](http://www.mathworks.com/help/matlab/ref/dlmread.html) to open it in `matlab`. On the `numpy` side, write your `array` using `np.savetxt('some_name.txt', arr, delimiter=' ')`
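For illustration, a minimal sketch of the round trip under the question's variable names; the file name is an assumption:

```
import numpy as np

# Python side: one row per [X, Y, T] triple, comma-separated
np.savetxt('data_dump.txt', SPOT, fmt='%d', delimiter=',')

# Matlab side, for reference: SPOT = dlmread('data_dump.txt', ',');
```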
If you have scipy then you can do: ``` import scipy.io scipy.io.savemat('/tmp/test.mat', dict(SPOT=SPOT)) ``` And in matlab: ``` a=load('/tmp/test.mat'); a.SPOT % should have your data ```
24,502,360
I am a Python newbie and am trying to write a numpy array into a format readable in Matlab, as an array [xi, yi, ti] with rows separated by semicolons. In Python, I am currently able to write it in the following form: a numpy array printed on screen/written to file as [[xi yi ti]]. Here is the code: ``` import math import random import numpy as np SPOT = [] f = open('data_dump.txt', 'a') for i in range(10): X = random.randrange(6) Y = random.randrange(10) T = random.randrange(5) SPOT.append([X,Y,T]) SPOT = np.array(SPOT) f.write(str(SPOT[:])) f.close() ``` Please suggest how I should proceed to be able to write this data in the Matlab-readable format mentioned above. Thanks in advance! Sree.
2014/07/01
[ "https://Stackoverflow.com/questions/24502360", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3792245/" ]
It is not really necessary to write your `array` into a special format. Write it into a normal `csv` and use [`dlmread`](http://www.mathworks.com/help/matlab/ref/dlmread.html) to open it in `matlab`. On the `numpy` side, write your `array` using `np.savetxt('some_name.txt', arr, delimiter=' ')`
Thanks everyone and thanks @CT Zhu for letting me know! Since I am not using Scipy, I tried using np.savetxt and it seems to work! Added the following and it writes into a format that is readable in Matlab as an array directly. Thanks again! ``` np.savetxt('test.txt', SPOT, fmt = '%10.5f', delimiter=',', newline = ';\n', header='data =[...', footer=']', comments = '#') ``` Cheers! Sree.
24,502,360
I am a Python newbie and am trying to write a numpy array into a format readable in Matlab, as an array [xi, yi, ti] with rows separated by semicolons. In Python, I am currently able to write it in the following form: a numpy array printed on screen/written to file as [[xi yi ti]]. Here is the code: ``` import math import random import numpy as np SPOT = [] f = open('data_dump.txt', 'a') for i in range(10): X = random.randrange(6) Y = random.randrange(10) T = random.randrange(5) SPOT.append([X,Y,T]) SPOT = np.array(SPOT) f.write(str(SPOT[:])) f.close() ``` Please suggest how I should proceed to be able to write this data in the Matlab-readable format mentioned above. Thanks in advance! Sree.
2014/07/01
[ "https://Stackoverflow.com/questions/24502360", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3792245/" ]
Try `scipy.io` to export data for `Matlab` ``` import scipy.io as sio matlab_data = dict(SPOT=SPOT) sio.savemat('data_dump.mat', matlab_data) ``` `data_dump.mat` is `Matlab` data. For more detail, see <http://docs.scipy.org/doc/scipy/reference/tutorial/io.html>
If you have scipy than you can do: ``` import scipy.io scipy.io.savemat('/tmp/test.mat', dict(SPOT=SPOT)) ``` And in matlab: ``` a=load('/tmp/test.mat'); a.SPOT % should have your data ```
24,502,360
I am a Python newbie and am trying to write a numpy array into a format readable in Matlab, as an array [xi, yi, ti] with rows separated by semicolons. In Python, I am currently able to write it in the following form: a numpy array printed on screen/written to file as [[xi yi ti]]. Here is the code: ``` import math import random import numpy as np SPOT = [] f = open('data_dump.txt', 'a') for i in range(10): X = random.randrange(6) Y = random.randrange(10) T = random.randrange(5) SPOT.append([X,Y,T]) SPOT = np.array(SPOT) f.write(str(SPOT[:])) f.close() ``` Please suggest how I should proceed to be able to write this data in the Matlab-readable format mentioned above. Thanks in advance! Sree.
2014/07/01
[ "https://Stackoverflow.com/questions/24502360", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3792245/" ]
Try `scipy.io` to export data for `Matlab` ``` import scipy.io as sio matlab_data = dict(SPOT=SPOT) sio.savemat('data_dump.mat', matlab_data) ``` `data_dump.mat` is `Matlab` data. For more detail, see <http://docs.scipy.org/doc/scipy/reference/tutorial/io.html>
Thanks everyone and thanks @CT Zhu for letting me know! Since I am not using Scipy, I tried using np.savetxt and it seems to work! Added the following and it writes into a format that is readable in Matlab as an array directly. Thanks again! ``` np.savetxt('test.txt', SPOT, fmt = '%10.5f', delimiter=',', newline = ';\n', header='data =[...', footer=']', comments = '#') ``` Cheers! Sree.
24,502,360
I am a Python newbie and am trying to write a numpy array into a format readable in Matlab, as an array [xi, yi, ti] with rows separated by semicolons. In Python, I am currently able to write it in the following form: a numpy array printed on screen/written to file as [[xi yi ti]]. Here is the code: ``` import math import random import numpy as np SPOT = [] f = open('data_dump.txt', 'a') for i in range(10): X = random.randrange(6) Y = random.randrange(10) T = random.randrange(5) SPOT.append([X,Y,T]) SPOT = np.array(SPOT) f.write(str(SPOT[:])) f.close() ``` Please suggest how I should proceed to be able to write this data in the Matlab-readable format mentioned above. Thanks in advance! Sree.
2014/07/01
[ "https://Stackoverflow.com/questions/24502360", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3792245/" ]
If you have scipy then you can do: ``` import scipy.io scipy.io.savemat('/tmp/test.mat', dict(SPOT=SPOT)) ``` And in matlab: ``` a=load('/tmp/test.mat'); a.SPOT % should have your data ```
Thanks everyone and thanks @CT Zhu for letting me know! Since I am not using Scipy, I tried using np.savetxt and it seems to work! Added the following and it writes into a format that is readable in Matlab as an array directly. Thanks again! ``` np.savetxt('test.txt', SPOT, fmt = '%10.5f', delimiter=',', newline = ';\n', header='data =[...', footer=']', comments = '#') ``` Cheers! Sree.
65,753,830
I'm trying to train a Mask R-CNN model from cocoapi (<https://github.com/cocodataset/cocoapi>), and this error keeps coming up: ``` ModuleNotFoundError Traceback (most recent call last) <ipython-input-8-83356bb9cf95> in <module> 19 sys.path.append(os.path.join(ROOT_DIR, "samples/coco/")) # To find local version 20 ---> 21 from pycocotools.coco import coco 22 23 get_ipython().run_line_magic('matplotlib', 'inline ') ~/Desktop/coco/PythonAPI/pycocotools/coco.py in <module> 53 import copy 54 import itertools ---> 55 from . import mask as maskUtils 56 import os 57 from collections import defaultdict ~/Desktop/coco/PythonAPI/pycocotools/mask.py in <module> 1 __author__ = 'tsungyi' 2 ----> 3 import pycocotools._mask as _mask 4 5 # Interface for manipulating masks stored in RLE format. ModuleNotFoundError: No module named 'pycocotools._mask' ``` I tried all the methods on the GitHub 'issues' tab, but none of them work for me. Is there another solution for this? I'm using Python 3.6 on Linux.
2021/01/16
[ "https://Stackoverflow.com/questions/65753830", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14258016/" ]
The answer is summarised from [these](https://github.com/cocodataset/cocoapi/issues/172) [three](https://github.com/cocodataset/cocoapi/issues/168) [GitHub issues](https://github.com/cocodataset/cocoapi/issues/141#issuecomment-386606299): 1. Whether you have installed cython in the correct version. Namely, you should install cython for python2/3 if you use python2/3: ``` pip install cython ``` 2. Whether you have downloaded the whole .zip file from this GitHub project. Namely, you should download everything here even though you only need PythonAPI: ``` git clone https://github.com/cocodataset/cocoapi.git ``` or unzip the [zip file](https://github.com/cocodataset/cocoapi/archive/refs/heads/master.zip) 3. Whether you open a terminal and run "make" in the correct folder. The correct folder is the one the "Makefile" is located in (note: `cd` into the directory, not the Makefile itself): ``` cd path/to/coco/PythonAPI make ``` With that, the question can usually be solved. If not, 4 and 5 may help: 4. Whether you have already installed gcc in the correct version. 5. Whether you have already installed python-dev in the correct version. Namely, you should install python3-dev (you may try "sudo apt-get install python3-dev") if you use python3.
Try cloning the official repo and running the commands below: ``` python setup.py install make ```
53,469,976
I am using the osmnx library (python) to extract the road network of a city. I also have a separate data source that corresponds to GPS coordinates being sent by vehicles as they traverse the aforementioned road network. My issue is that I only have the GPS coordinates but I wish to also know which road they correspond to. I.e. I want to input a set of longitude, latitude coordinates and get the corresponding street on which that GPS coordinate lies. I believe the term for this is Map Matching. What is the best way to do this? Preferably the solution would be using osmnx but other solutions would also be appreciated. Note that the GPS coordinates may be noisy.
2018/11/25
[ "https://Stackoverflow.com/questions/53469976", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10702801/" ]
You can do map matching with OSMnx. See the `nearest_nodes` and `nearest_edges` functions in the OSMnx documentation: <https://osmnx.readthedocs.io/>
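For illustration, a minimal sketch of snapping GPS points to their nearest street with OSMnx; this assumes OSMnx 1.x, and the place name and coordinates are made up. Note this is simple nearest-edge snapping, not full trajectory map matching:

```
import osmnx as ox

G = ox.graph_from_place('Berlin, Germany', network_type='drive')

lons = [13.3888, 13.3901]  # X = longitude
lats = [52.5170, 52.5175]  # Y = latitude

# one (u, v, key) edge identifier per GPS point
edges = ox.distance.nearest_edges(G, X=lons, Y=lats)
for u, v, key in edges:
    print(G.edges[u, v, key].get('name'))  # street name, if tagged
```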
My suggestion would be to use the leuvenmapmatching package. You will get the details in the documentation of the package itself. <https://github.com/wannesm/LeuvenMapMatching>
13,793,973
I have a string in python 3 that has several unicode representations in it, for example: ``` t = 'R\\u00f3is\\u00edn' ``` and I want to convert t so that it has the proper representation when I print it, ie: ``` >>> print(t) Róisín ``` However I just get the original string back. I've tried re.sub and some others, but I can't seem to find a way that will change these characters without having to iterate over each one. What would be the easiest way to do so?
2012/12/10
[ "https://Stackoverflow.com/questions/13793973", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1205923/" ]
You want to use the built-in codec `unicode_escape`. If `t` is already a `bytes` (an 8-bit string), it's as simple as this: ``` >>> print(t.decode('unicode_escape')) Róisín ``` If `t` has already been decoded to Unicode, you will need to encode it back to a `bytes` and then `decode` it this way. If you're sure that all of your Unicode characters have been escaped, it actually doesn't matter what codec you use to do the encode. Otherwise, you could try to get your original byte string back, but it's simpler, and probably safer, to just force any non-encoded characters to get encoded, and then they'll get decoded along with the already-encoded ones: ``` >>> print(t.encode('unicode_escape').decode('unicode_escape')) Róisín ``` In case you want to know how to do this kind of thing with regular expressions in the future, note that [`sub`](http://docs.python.org/3/library/re.html?highlight=unicode_escape#re.sub) lets you pass a function instead of a pattern for the `repl`. And you can convert any hex string into an integer by calling `int(hexstring, 16)`, and any integer into the corresponding Unicode character with `chr` (note that this is the one bit that's different in Python 2: you need `unichr` instead). So: ``` >>> re.sub(r'(\\u[0-9A-Fa-f]+)', lambda matchobj: chr(int(matchobj.group(0)[2:], 16)), t) Róisín ``` Or, making it a bit more clear: ``` >>> def unescapematch(matchobj): ... escapesequence = matchobj.group(0) ... digits = escapesequence[2:] ... ordinal = int(digits, 16) ... char = chr(ordinal) ... return char >>> re.sub(r'(\\u[0-9A-Fa-f]+)', unescapematch, t) Róisín ``` The `unicode_escape` codec actually handles `\U`, `\x`, `\X`, octal (`\066`), and special-character (`\n`) sequences as well as just `\u`, and it implements the proper rules for reading only the appropriate max number of digits (4 for `\u`, 8 for `\U`, etc., so `r'\\u22222'` decodes to `'∢2'` rather than the single character `'\U00022222'`), and probably more things I haven't thought of. But this should give you the idea.
First of all, it is rather unclear what you want to convert to. Just imagine that you may want to convert to 'o' and 'i'. In this case you can just make a map: ``` mp = {u'\u00f3': 'o', u'\u00ed': 'i'} ``` Then you may apply the replacement like this (note that Python strings are immutable, so you have to build a new string rather than assign to `t[i]`): ``` t = u'R\u00f3is\u00edn' t = u''.join(mp.get(ch, ch) for ch in t) print t ```
13,793,973
I have a string in python 3 that has several unicode representations in it, for example: ``` t = 'R\\u00f3is\\u00edn' ``` and I want to convert t so that it has the proper representation when I print it, ie: ``` >>> print(t) Róisín ``` However I just get the original string back. I've tried re.sub and some others, but I can't seem to find a way that will change these characters without having to iterate over each one. What would be the easiest way to do so?
2012/12/10
[ "https://Stackoverflow.com/questions/13793973", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1205923/" ]
You want to use the built-in codec `unicode_escape`. If `t` is already a `bytes` (an 8-bit string), it's as simple as this: ``` >>> print(t.decode('unicode_escape')) Róisín ``` If `t` has already been decoded to Unicode, you will need to encode it back to a `bytes` and then `decode` it this way. If you're sure that all of your Unicode characters have been escaped, it actually doesn't matter what codec you use to do the encode. Otherwise, you could try to get your original byte string back, but it's simpler, and probably safer, to just force any non-encoded characters to get encoded, and then they'll get decoded along with the already-encoded ones: ``` >>> print(t.encode('unicode_escape').decode('unicode_escape')) Róisín ``` In case you want to know how to do this kind of thing with regular expressions in the future, note that [`sub`](http://docs.python.org/3/library/re.html?highlight=unicode_escape#re.sub) lets you pass a function instead of a pattern for the `repl`. And you can convert any hex string into an integer by calling `int(hexstring, 16)`, and any integer into the corresponding Unicode character with `chr` (note that this is the one bit that's different in Python 2: you need `unichr` instead). So: ``` >>> re.sub(r'(\\u[0-9A-Fa-f]+)', lambda matchobj: chr(int(matchobj.group(0)[2:], 16)), t) Róisín ``` Or, making it a bit more clear: ``` >>> def unescapematch(matchobj): ... escapesequence = matchobj.group(0) ... digits = escapesequence[2:] ... ordinal = int(digits, 16) ... char = chr(ordinal) ... return char >>> re.sub(r'(\\u[0-9A-Fa-f]+)', unescapematch, t) Róisín ``` The `unicode_escape` codec actually handles `\U`, `\x`, `\X`, octal (`\066`), and special-character (`\n`) sequences as well as just `\u`, and it implements the proper rules for reading only the appropriate max number of digits (4 for `\u`, 8 for `\U`, etc., so `r'\\u22222'` decodes to `'∢2'` rather than the single character `'\U00022222'`), and probably more things I haven't thought of. But this should give you the idea.
I apologize for posting as a second answer, I don't have the reputation to comment on abarnert's solution. After using his function to process approximately 50K android strings I noticed that there is yet another small improvement possible for certain use-cases. I changed the + to {1,4} to deal with the case where valid hex characters follow a 4-digit escape. I also changed int(escapesequence) to read int(digits) ``` >>> def unescapematch(matchobj): ... escapesequence = matchobj.group(0) ... digits = escapesequence[2:] ... ordinal = int(digits, 16) ... char = unichr(ordinal) ... return char >>> print re.sub(r'(\\u[0-9A-Fa-f]{1,4})', unescapematch, "Wi\u2011Fi") Wi‑Fi >>> print re.sub(r'(\\u[0-9A-Fa-f]+)', unescapematch, "Wi\u2011Fi") Traceback (most recent call last): File "<pyshell#102>", line 1, in <module> print re.sub(r'(\\u[0-9A-Fa-f]+)', unescapematch, "Wi\u2011Fi") File "C:\Python27\lib\re.py", line 151, in sub return _compile(pattern, flags).sub(repl, string, count) File "<pyshell#99>", line 5, in unescapematch char = unichr(ordinal) ValueError: unichr() arg not in range(0x10000) (narrow Python build) ```
62,502,606
I have the following python code which should be able to read a .csv file with cities and their coordinates. The .csv file is in the form of: ``` name,x,y name,x,y name,x,y ``` However, I am getting the error '**list index out of range**' at line 764: ``` 758 """function to calculate the route for files in data folder with coordinates""" 759 start_time = time.time() 760 f = open(csv_name, "r") 761 f.readline() 762 f.readline() 763 f.readline() 764 lines = int(f.readline().split()[2]) 765 f.readline() 766 f.readline() ``` The file has around 50 rows. What may be causing the problem? Thanks!
2020/06/21
[ "https://Stackoverflow.com/questions/62502606", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13722495/" ]
Change DateCutting to **DateTime** and adjust your criteria: ```vb Dim strCriteria As String strCriteria = "[DateCutting] >= #" & Format(Me.txtfrom, "yyyy\/mm\/dd") & "# And [DateCutting] <= #" & Format(Me.txtto, "yyyy\/mm\/dd") & "#" DoCmd.ApplyFilter strCriteria ``` To find a number: ``` strCriteria = "[Number] = " & Me.txtNumber & "" ``` as text: ``` strCriteria = "[TextNumber] = '" & Me.txtNumber & "'" ```
Try `Dim strCriteria as String` `dim task As String`
36,706,131
I am having trouble getting one of my functions in python to work. The code for my function is below: ``` def checkBlackjack(value, pot, player, wager): if (value == 21): print("Congratulations!! Blackjack!!") pot -= wager player += wager print ("The pot value is $", pot) print ("Your remaining balance is $",player) return (pot, player) ``` The function call is: ``` potValue, playerBalance = checkBlackjack(playerValue, potValue, playerBalance, wager) ``` And the error I get is: ``` potValue, playerBalance = checkBlackjack(playerValue, potValue, playerBalance, wager) TypeError: 'NoneType' object is not iterable ``` Since the error talks about not being able to iterate, I am not sure how to relate this to using the if condition. Any help will really be appreciated. Thanks!
2016/04/18
[ "https://Stackoverflow.com/questions/36706131", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4832091/" ]
You're only returning something if the condition in your function is met; otherwise the function returns `None` by default, and Python is then trying to unpack `None` into two values (your variables).
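For illustration, one minimal way to fix it, as a sketch keeping the question's names: return the pair unconditionally, so the caller never receives `None`:

```
def checkBlackjack(value, pot, player, wager):
    if value == 21:
        print("Congratulations!! Blackjack!!")
        pot -= wager
        player += wager
        print("The pot value is $", pot)
        print("Your remaining balance is $", player)
    return pot, player  # always return the pair, so unpacking never sees None
```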
Here is an [MCVE](https://stackoverflow.com/help/mcve) for this question: ``` >>> a, b = None Traceback (most recent call last): File "<pyshell#2>", line 1, in <module> a, b = None TypeError: 'NoneType' object is not iterable ``` At this point, the problem should be clear. If not, one could look up multiple assignment in the manual.
59,952,898
I am trying to install `python3-psycopg2` as a part of `postgresql` installation, but I get: ``` The following packages have unmet dependencies: python3-psycopg2 : Depends: python3 (>= 3.7~) but 3.6.7-1~18.04 is to be installed E: Unable to correct problems, you have held broken packages. ``` I installed `python3.8` and configured `python3` link to it: ``` sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 1 ``` But I still get the same error. I have an `Ubuntu 18.04` OS.
2020/01/28
[ "https://Stackoverflow.com/questions/59952898", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1626977/" ]
The `Psycopg2` library is built as a wrapper around `libpq` and mostly written in C. It is distributed as an `sdist` and is built during installation. For this reason it requires some `PostgreSQL` binaries and headers to be present during installation. Consider running these 2 commands: ``` sudo apt install python3-dev libpq-dev ``` The main goal of the command above is to provide all requirements for building `Psycopg2`. Then: ``` pip3 install psycopg2 ``` You should have `psycopg2` installed and working now.
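Once it builds, a quick smoke test; a successful import confirms the compiled C extension loads:

```
import psycopg2

print(psycopg2.__version__)
```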
While the explanation by Gitau is very good, you can simply install the psycopg2 binary instead, as mentioned by Maurice: `python3 -m pip install psycopg2-binary` or just `pip install psycopg2-binary`
59,952,898
I am trying to install `python3-psycopg2` as a part of `postgresql` installation, but I get: ``` The following packages have unmet dependencies: python3-psycopg2 : Depends: python3 (>= 3.7~) but 3.6.7-1~18.04 is to be installed E: Unable to correct problems, you have held broken packages. ``` I installed `python3.8` and configured `python3` link to it: ``` sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 1 ``` But I still get the same error. I have an `Ubuntu 18.04` OS.
2020/01/28
[ "https://Stackoverflow.com/questions/59952898", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1626977/" ]
The `Psycopg2` library is built as a wrapper around `libpq` and mostly written in C. It is distributed as an `sdist` and is built during installation. For this reason it requires some `PostgreSQL` binaries and headers to be present during installation. Consider running these 2 commands: ``` sudo apt install python3-dev libpq-dev ``` The main goal of the command above is to provide all requirements for building `Psycopg2`. Then: ``` pip3 install psycopg2 ``` You should have `psycopg2` installed and working now.
Sometimes psycopg2-binary is not a solution, because other packages require psycopg2 in their dependencies. As cited above, you need to install some libs on Ubuntu before installing psycopg2. For me, I had to install build-essential as well. ``` sudo apt install python3-dev libpq-dev build-essential ``` and then ``` pip install psycopg2 ```
60,140,174
I have a very basic flask app with dependencies installed from my requirements.txt. All of these dependencies are installed in my virtual environment. requirements.txt given below, ``` aniso8601==6.0.0 Click==7.0 Flask==1.0.3 Flask-Cors==3.0.7 Flask-RESTful==0.3.7 Flask-SQLAlchemy==2.4.0 itsdangerous==1.1.0 Jinja2==2.10.1 MarkupSafe==1.1.1 # psycopg2-binary==2.8.2 pytz==2019.1 six==1.12.0 # SQLAlchemy==1.3.4 Werkzeug==0.15.4 python-dotenv requests authlib ``` My code in NewTest.py file, ``` from flask import Flask, request, jsonify, abort, url_for app = Flask(__name__) @app.route('/') def index(): return jsonify({ 'success': True, 'index': 'Test Pass' }) if __name__ == '__main__': app.run(debug=True) ``` When I run the app through, ``` export FLASK_APP=NewTest.py export FLASK_ENV=development export FLASK_DEBUG=true flask run or flask run --reload ``` I get the following error, ``` 127.0.0.1 - - [09/Feb/2020 12:43:40] "GET / HTTP/1.1" 500 - Traceback (most recent call last): File "/projects/env/lib/python3.8/site-packages/flask/_compat.py", line 36, i n reraise raise value File "/projects/NewTest.py", line 3, in <module> app = Flask(__name__) File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 559, in _ _init__ self.add_url_rule( File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 67, in wr apper_func return f(self, *args, **kwargs) File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 1217, in add_url_rule self.url_map.add(rule) File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 1388, in add rule.bind(self) File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 730, in bind self.compile() File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 794, in compile self._build = self._compile_builder(False).__get__(self, None) File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 951, in _compile_builder code = compile(module, "<werkzeug routing>", "exec") TypeError: required field "type_ignores" missing from Module ``` Can anyone please point out what am I missing or doing wrong and how can I fix it? Thanks.
2020/02/09
[ "https://Stackoverflow.com/questions/60140174", "https://Stackoverflow.com", "https://Stackoverflow.com/users/727390/" ]
The bug was fixed in [werkzeug 0.15.5](https://werkzeug.palletsprojects.com/en/1.0.x/changes/#version-0-15-5). Upgrade from 0.15.4 to a later version.
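Concretely, that means bumping the pin in requirements.txt and upgrading; the version numbers follow the changelog linked above:

```
# in requirements.txt, replace the old pin:
#   Werkzeug==0.15.4  ->  Werkzeug>=0.15.5
pip install --upgrade "Werkzeug>=0.15.5"
```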
I had the error in the django shell; it seems there is a bug in ipython. Finally, I decided to remove ipython temporarily until the bug is fixed: ``` pip uninstall ipython ``` [more info](https://bugs.python.org/issue35894)
60,140,174
I have a very basic flask app with dependencies installed from my requirements.txt. All of these dependencies are installed in my virtual environment. requirements.txt given below, ``` aniso8601==6.0.0 Click==7.0 Flask==1.0.3 Flask-Cors==3.0.7 Flask-RESTful==0.3.7 Flask-SQLAlchemy==2.4.0 itsdangerous==1.1.0 Jinja2==2.10.1 MarkupSafe==1.1.1 # psycopg2-binary==2.8.2 pytz==2019.1 six==1.12.0 # SQLAlchemy==1.3.4 Werkzeug==0.15.4 python-dotenv requests authlib ``` My code in NewTest.py file, ``` from flask import Flask, request, jsonify, abort, url_for app = Flask(__name__) @app.route('/') def index(): return jsonify({ 'success': True, 'index': 'Test Pass' }) if __name__ == '__main__': app.run(debug=True) ``` When I run the app through, ``` export FLASK_APP=NewTest.py export FLASK_ENV=development export FLASK_DEBUG=true flask run or flask run --reload ``` I get the following error, ``` 127.0.0.1 - - [09/Feb/2020 12:43:40] "GET / HTTP/1.1" 500 - Traceback (most recent call last): File "/projects/env/lib/python3.8/site-packages/flask/_compat.py", line 36, i n reraise raise value File "/projects/NewTest.py", line 3, in <module> app = Flask(__name__) File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 559, in _ _init__ self.add_url_rule( File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 67, in wr apper_func return f(self, *args, **kwargs) File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 1217, in add_url_rule self.url_map.add(rule) File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 1388, in add rule.bind(self) File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 730, in bind self.compile() File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 794, in compile self._build = self._compile_builder(False).__get__(self, None) File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 951, in _compile_builder code = compile(module, "<werkzeug routing>", "exec") TypeError: required field "type_ignores" missing from Module ``` Can anyone please point out what am I missing or doing wrong and how can I fix it? Thanks.
2020/02/09
[ "https://Stackoverflow.com/questions/60140174", "https://Stackoverflow.com", "https://Stackoverflow.com/users/727390/" ]
The bug was fixed in [werkzeug 0.15.5](https://werkzeug.palletsprojects.com/en/1.0.x/changes/#version-0-15-5). Upgrade from 0.15.4 to a later version.
I solved the error by simply executing the following line of code on the terminal: ``` sudo pip3 install --upgrade ipython ```
60,140,174
I have a very basic flask app with dependencies installed from my requirements.txt. All of these dependencies are installed in my virtual environment. requirements.txt given below, ``` aniso8601==6.0.0 Click==7.0 Flask==1.0.3 Flask-Cors==3.0.7 Flask-RESTful==0.3.7 Flask-SQLAlchemy==2.4.0 itsdangerous==1.1.0 Jinja2==2.10.1 MarkupSafe==1.1.1 # psycopg2-binary==2.8.2 pytz==2019.1 six==1.12.0 # SQLAlchemy==1.3.4 Werkzeug==0.15.4 python-dotenv requests authlib ``` My code in NewTest.py file, ``` from flask import Flask, request, jsonify, abort, url_for app = Flask(__name__) @app.route('/') def index(): return jsonify({ 'success': True, 'index': 'Test Pass' }) if __name__ == '__main__': app.run(debug=True) ``` When I run the app through, ``` export FLASK_APP=NewTest.py export FLASK_ENV=development export FLASK_DEBUG=true flask run or flask run --reload ``` I get the following error, ``` 127.0.0.1 - - [09/Feb/2020 12:43:40] "GET / HTTP/1.1" 500 - Traceback (most recent call last): File "/projects/env/lib/python3.8/site-packages/flask/_compat.py", line 36, i n reraise raise value File "/projects/NewTest.py", line 3, in <module> app = Flask(__name__) File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 559, in _ _init__ self.add_url_rule( File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 67, in wr apper_func return f(self, *args, **kwargs) File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 1217, in add_url_rule self.url_map.add(rule) File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 1388, in add rule.bind(self) File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 730, in bind self.compile() File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 794, in compile self._build = self._compile_builder(False).__get__(self, None) File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 951, in _compile_builder code = compile(module, "<werkzeug routing>", "exec") TypeError: required field "type_ignores" missing from Module ``` Can anyone please point out what am I missing or doing wrong and how can I fix it? Thanks.
2020/02/09
[ "https://Stackoverflow.com/questions/60140174", "https://Stackoverflow.com", "https://Stackoverflow.com/users/727390/" ]
The bug was fixed in [werkzeug 0.15.5](https://werkzeug.palletsprojects.com/en/1.0.x/changes/#version-0-15-5). Upgrade from 0.15.4 to a later version.
The [werkzeug](https://github.com/pallets/werkzeug) library can have issues with different python versions. First of all, upgrade the werkzeug library to the latest version, then try again. ``` pip3 install --upgrade werkzeug ``` If that didn't work, I presume you may be using a python version that creates the whole issue. You can always create a virtualenv [[Install, Create, and Activate](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)] with a specific python version where werkzeug won't cause you this issue (a sketch of this follows below).
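For instance, a sketch of setting up such an environment; the Python version here is an assumption, so pick one that werkzeug supports:

```
python3.7 -m venv venv           # create the environment with a chosen interpreter
source venv/bin/activate         # activate it
pip install -r requirements.txt  # reinstall pinned dependencies inside it
```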
60,140,174
I have a very basic Flask app with dependencies installed from my requirements.txt. All of these dependencies are installed in my virtual environment. requirements.txt is given below,

```
aniso8601==6.0.0
Click==7.0
Flask==1.0.3
Flask-Cors==3.0.7
Flask-RESTful==0.3.7
Flask-SQLAlchemy==2.4.0
itsdangerous==1.1.0
Jinja2==2.10.1
MarkupSafe==1.1.1
# psycopg2-binary==2.8.2
pytz==2019.1
six==1.12.0
# SQLAlchemy==1.3.4
Werkzeug==0.15.4
python-dotenv
requests
authlib
```

My code in the NewTest.py file,

```
from flask import Flask, request, jsonify, abort, url_for

app = Flask(__name__)

@app.route('/')
def index():
    return jsonify({
        'success': True,
        'index': 'Test Pass'
    })

if __name__ == '__main__':
    app.run(debug=True)
```

When I run the app with,

```
export FLASK_APP=NewTest.py
export FLASK_ENV=development
export FLASK_DEBUG=true
flask run
or
flask run --reload
```

I get the following error,

```
127.0.0.1 - - [09/Feb/2020 12:43:40] "GET / HTTP/1.1" 500 -
Traceback (most recent call last):
  File "/projects/env/lib/python3.8/site-packages/flask/_compat.py", line 36, in reraise
    raise value
  File "/projects/NewTest.py", line 3, in <module>
    app = Flask(__name__)
  File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 559, in __init__
    self.add_url_rule(
  File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 67, in wrapper_func
    return f(self, *args, **kwargs)
  File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 1217, in add_url_rule
    self.url_map.add(rule)
  File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 1388, in add
    rule.bind(self)
  File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 730, in bind
    self.compile()
  File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 794, in compile
    self._build = self._compile_builder(False).__get__(self, None)
  File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 951, in _compile_builder
    code = compile(module, "<werkzeug routing>", "exec")
TypeError: required field "type_ignores" missing from Module
```

Can anyone please point out what I am missing or doing wrong, and how I can fix it? Thanks.
2020/02/09
[ "https://Stackoverflow.com/questions/60140174", "https://Stackoverflow.com", "https://Stackoverflow.com/users/727390/" ]
The bug was fixed in [werkzeug 0.15.5](https://werkzeug.palletsprojects.com/en/1.0.x/changes/#version-0-15-5). Upgrade from 0.15.4 to a later version.
Try

```
pip uninstall Flask
```

then

```
pip install Flask
```

and `pip uninstall Werkzeug` then `pip install Werkzeug`. Reinstalling without a version pin pulls in the latest releases, which include the fix.
60,140,174
I have a very basic Flask app with dependencies installed from my requirements.txt. All of these dependencies are installed in my virtual environment. requirements.txt is given below,

```
aniso8601==6.0.0
Click==7.0
Flask==1.0.3
Flask-Cors==3.0.7
Flask-RESTful==0.3.7
Flask-SQLAlchemy==2.4.0
itsdangerous==1.1.0
Jinja2==2.10.1
MarkupSafe==1.1.1
# psycopg2-binary==2.8.2
pytz==2019.1
six==1.12.0
# SQLAlchemy==1.3.4
Werkzeug==0.15.4
python-dotenv
requests
authlib
```

My code in the NewTest.py file,

```
from flask import Flask, request, jsonify, abort, url_for

app = Flask(__name__)

@app.route('/')
def index():
    return jsonify({
        'success': True,
        'index': 'Test Pass'
    })

if __name__ == '__main__':
    app.run(debug=True)
```

When I run the app with,

```
export FLASK_APP=NewTest.py
export FLASK_ENV=development
export FLASK_DEBUG=true
flask run
or
flask run --reload
```

I get the following error,

```
127.0.0.1 - - [09/Feb/2020 12:43:40] "GET / HTTP/1.1" 500 -
Traceback (most recent call last):
  File "/projects/env/lib/python3.8/site-packages/flask/_compat.py", line 36, in reraise
    raise value
  File "/projects/NewTest.py", line 3, in <module>
    app = Flask(__name__)
  File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 559, in __init__
    self.add_url_rule(
  File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 67, in wrapper_func
    return f(self, *args, **kwargs)
  File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 1217, in add_url_rule
    self.url_map.add(rule)
  File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 1388, in add
    rule.bind(self)
  File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 730, in bind
    self.compile()
  File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 794, in compile
    self._build = self._compile_builder(False).__get__(self, None)
  File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 951, in _compile_builder
    code = compile(module, "<werkzeug routing>", "exec")
TypeError: required field "type_ignores" missing from Module
```

Can anyone please point out what I am missing or doing wrong, and how I can fix it? Thanks.
2020/02/09
[ "https://Stackoverflow.com/questions/60140174", "https://Stackoverflow.com", "https://Stackoverflow.com/users/727390/" ]
I solved the error by simply running the following command in the terminal:

```
sudo pip3 install --upgrade ipython
```
I had the error in the Django shell; it seems there is a bug in IPython. Finally, I decided to remove IPython temporarily until the bug is fixed:

```
pip uninstall ipython
```

[more info](https://bugs.python.org/issue35894)
60,140,174
I have a very basic Flask app with dependencies installed from my requirements.txt. All of these dependencies are installed in my virtual environment. requirements.txt is given below,

```
aniso8601==6.0.0
Click==7.0
Flask==1.0.3
Flask-Cors==3.0.7
Flask-RESTful==0.3.7
Flask-SQLAlchemy==2.4.0
itsdangerous==1.1.0
Jinja2==2.10.1
MarkupSafe==1.1.1
# psycopg2-binary==2.8.2
pytz==2019.1
six==1.12.0
# SQLAlchemy==1.3.4
Werkzeug==0.15.4
python-dotenv
requests
authlib
```

My code in the NewTest.py file,

```
from flask import Flask, request, jsonify, abort, url_for

app = Flask(__name__)

@app.route('/')
def index():
    return jsonify({
        'success': True,
        'index': 'Test Pass'
    })

if __name__ == '__main__':
    app.run(debug=True)
```

When I run the app with,

```
export FLASK_APP=NewTest.py
export FLASK_ENV=development
export FLASK_DEBUG=true
flask run
or
flask run --reload
```

I get the following error,

```
127.0.0.1 - - [09/Feb/2020 12:43:40] "GET / HTTP/1.1" 500 -
Traceback (most recent call last):
  File "/projects/env/lib/python3.8/site-packages/flask/_compat.py", line 36, in reraise
    raise value
  File "/projects/NewTest.py", line 3, in <module>
    app = Flask(__name__)
  File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 559, in __init__
    self.add_url_rule(
  File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 67, in wrapper_func
    return f(self, *args, **kwargs)
  File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 1217, in add_url_rule
    self.url_map.add(rule)
  File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 1388, in add
    rule.bind(self)
  File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 730, in bind
    self.compile()
  File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 794, in compile
    self._build = self._compile_builder(False).__get__(self, None)
  File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 951, in _compile_builder
    code = compile(module, "<werkzeug routing>", "exec")
TypeError: required field "type_ignores" missing from Module
```

Can anyone please point out what I am missing or doing wrong, and how I can fix it? Thanks.
2020/02/09
[ "https://Stackoverflow.com/questions/60140174", "https://Stackoverflow.com", "https://Stackoverflow.com/users/727390/" ]
The [werkzeug](https://github.com/pallets/werkzeug) library can have issues with different Python versions. First of all, upgrade the werkzeug library to the latest version, then try again.

```
pip3 install --upgrade werkzeug
```

If that doesn't work, I presume you may be using a Python version that triggers the issue. You can always create a virtualenv [[Install, Create, and Activate](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)] with a specific Python version where werkzeug won't cause you this issue.
I had the error in the Django shell; it seems there is a bug in IPython. Finally, I decided to remove IPython temporarily until the bug is fixed:

```
pip uninstall ipython
```

[more info](https://bugs.python.org/issue35894)
60,140,174
I have a very basic Flask app with dependencies installed from my requirements.txt. All of these dependencies are installed in my virtual environment. requirements.txt is given below,

```
aniso8601==6.0.0
Click==7.0
Flask==1.0.3
Flask-Cors==3.0.7
Flask-RESTful==0.3.7
Flask-SQLAlchemy==2.4.0
itsdangerous==1.1.0
Jinja2==2.10.1
MarkupSafe==1.1.1
# psycopg2-binary==2.8.2
pytz==2019.1
six==1.12.0
# SQLAlchemy==1.3.4
Werkzeug==0.15.4
python-dotenv
requests
authlib
```

My code in the NewTest.py file,

```
from flask import Flask, request, jsonify, abort, url_for

app = Flask(__name__)

@app.route('/')
def index():
    return jsonify({
        'success': True,
        'index': 'Test Pass'
    })

if __name__ == '__main__':
    app.run(debug=True)
```

When I run the app with,

```
export FLASK_APP=NewTest.py
export FLASK_ENV=development
export FLASK_DEBUG=true
flask run
or
flask run --reload
```

I get the following error,

```
127.0.0.1 - - [09/Feb/2020 12:43:40] "GET / HTTP/1.1" 500 -
Traceback (most recent call last):
  File "/projects/env/lib/python3.8/site-packages/flask/_compat.py", line 36, in reraise
    raise value
  File "/projects/NewTest.py", line 3, in <module>
    app = Flask(__name__)
  File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 559, in __init__
    self.add_url_rule(
  File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 67, in wrapper_func
    return f(self, *args, **kwargs)
  File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 1217, in add_url_rule
    self.url_map.add(rule)
  File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 1388, in add
    rule.bind(self)
  File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 730, in bind
    self.compile()
  File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 794, in compile
    self._build = self._compile_builder(False).__get__(self, None)
  File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 951, in _compile_builder
    code = compile(module, "<werkzeug routing>", "exec")
TypeError: required field "type_ignores" missing from Module
```

Can anyone please point out what I am missing or doing wrong, and how I can fix it? Thanks.
2020/02/09
[ "https://Stackoverflow.com/questions/60140174", "https://Stackoverflow.com", "https://Stackoverflow.com/users/727390/" ]
Try

```
pip uninstall Flask
```

then

```
pip install Flask
```

and `pip uninstall Werkzeug` then `pip install Werkzeug`. Reinstalling without a version pin pulls in the latest releases, which include the fix.
I had the error in the Django shell; it seems there is a bug in IPython. Finally, I decided to remove IPython temporarily until the bug is fixed:

```
pip uninstall ipython
```

[more info](https://bugs.python.org/issue35894)
60,140,174
I have a very basic Flask app with dependencies installed from my requirements.txt. All of these dependencies are installed in my virtual environment. requirements.txt is given below,

```
aniso8601==6.0.0
Click==7.0
Flask==1.0.3
Flask-Cors==3.0.7
Flask-RESTful==0.3.7
Flask-SQLAlchemy==2.4.0
itsdangerous==1.1.0
Jinja2==2.10.1
MarkupSafe==1.1.1
# psycopg2-binary==2.8.2
pytz==2019.1
six==1.12.0
# SQLAlchemy==1.3.4
Werkzeug==0.15.4
python-dotenv
requests
authlib
```

My code in the NewTest.py file,

```
from flask import Flask, request, jsonify, abort, url_for

app = Flask(__name__)

@app.route('/')
def index():
    return jsonify({
        'success': True,
        'index': 'Test Pass'
    })

if __name__ == '__main__':
    app.run(debug=True)
```

When I run the app with,

```
export FLASK_APP=NewTest.py
export FLASK_ENV=development
export FLASK_DEBUG=true
flask run
or
flask run --reload
```

I get the following error,

```
127.0.0.1 - - [09/Feb/2020 12:43:40] "GET / HTTP/1.1" 500 -
Traceback (most recent call last):
  File "/projects/env/lib/python3.8/site-packages/flask/_compat.py", line 36, in reraise
    raise value
  File "/projects/NewTest.py", line 3, in <module>
    app = Flask(__name__)
  File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 559, in __init__
    self.add_url_rule(
  File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 67, in wrapper_func
    return f(self, *args, **kwargs)
  File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 1217, in add_url_rule
    self.url_map.add(rule)
  File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 1388, in add
    rule.bind(self)
  File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 730, in bind
    self.compile()
  File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 794, in compile
    self._build = self._compile_builder(False).__get__(self, None)
  File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 951, in _compile_builder
    code = compile(module, "<werkzeug routing>", "exec")
TypeError: required field "type_ignores" missing from Module
```

Can anyone please point out what I am missing or doing wrong, and how I can fix it? Thanks.
2020/02/09
[ "https://Stackoverflow.com/questions/60140174", "https://Stackoverflow.com", "https://Stackoverflow.com/users/727390/" ]
I solved the error by simply running the following command in the terminal:

```
sudo pip3 install --upgrade ipython
```
The [werkzeug](https://github.com/pallets/werkzeug) library can have issues with different Python versions. First of all, upgrade the werkzeug library to the latest version, then try again.

```
pip3 install --upgrade werkzeug
```

If that doesn't work, I presume you may be using a Python version that triggers the issue. You can always create a virtualenv [[Install, Create, and Activate](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)] with a specific Python version where werkzeug won't cause you this issue.
60,140,174
I have a very basic Flask app with dependencies installed from my requirements.txt. All of these dependencies are installed in my virtual environment. requirements.txt is given below,

```
aniso8601==6.0.0
Click==7.0
Flask==1.0.3
Flask-Cors==3.0.7
Flask-RESTful==0.3.7
Flask-SQLAlchemy==2.4.0
itsdangerous==1.1.0
Jinja2==2.10.1
MarkupSafe==1.1.1
# psycopg2-binary==2.8.2
pytz==2019.1
six==1.12.0
# SQLAlchemy==1.3.4
Werkzeug==0.15.4
python-dotenv
requests
authlib
```

My code in the NewTest.py file,

```
from flask import Flask, request, jsonify, abort, url_for

app = Flask(__name__)

@app.route('/')
def index():
    return jsonify({
        'success': True,
        'index': 'Test Pass'
    })

if __name__ == '__main__':
    app.run(debug=True)
```

When I run the app with,

```
export FLASK_APP=NewTest.py
export FLASK_ENV=development
export FLASK_DEBUG=true
flask run
or
flask run --reload
```

I get the following error,

```
127.0.0.1 - - [09/Feb/2020 12:43:40] "GET / HTTP/1.1" 500 -
Traceback (most recent call last):
  File "/projects/env/lib/python3.8/site-packages/flask/_compat.py", line 36, in reraise
    raise value
  File "/projects/NewTest.py", line 3, in <module>
    app = Flask(__name__)
  File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 559, in __init__
    self.add_url_rule(
  File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 67, in wrapper_func
    return f(self, *args, **kwargs)
  File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 1217, in add_url_rule
    self.url_map.add(rule)
  File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 1388, in add
    rule.bind(self)
  File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 730, in bind
    self.compile()
  File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 794, in compile
    self._build = self._compile_builder(False).__get__(self, None)
  File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 951, in _compile_builder
    code = compile(module, "<werkzeug routing>", "exec")
TypeError: required field "type_ignores" missing from Module
```

Can anyone please point out what I am missing or doing wrong, and how I can fix it? Thanks.
2020/02/09
[ "https://Stackoverflow.com/questions/60140174", "https://Stackoverflow.com", "https://Stackoverflow.com/users/727390/" ]
I solved the error by simply running the following command in the terminal:

```
sudo pip3 install --upgrade ipython
```
Try

```
pip uninstall Flask
```

then

```
pip install Flask
```

and `pip uninstall Werkzeug` then `pip install Werkzeug`. Reinstalling without a version pin pulls in the latest releases, which include the fix.
60,140,174
I have a very basic Flask app with dependencies installed from my requirements.txt. All of these dependencies are installed in my virtual environment. requirements.txt is given below,

```
aniso8601==6.0.0
Click==7.0
Flask==1.0.3
Flask-Cors==3.0.7
Flask-RESTful==0.3.7
Flask-SQLAlchemy==2.4.0
itsdangerous==1.1.0
Jinja2==2.10.1
MarkupSafe==1.1.1
# psycopg2-binary==2.8.2
pytz==2019.1
six==1.12.0
# SQLAlchemy==1.3.4
Werkzeug==0.15.4
python-dotenv
requests
authlib
```

My code in the NewTest.py file,

```
from flask import Flask, request, jsonify, abort, url_for

app = Flask(__name__)

@app.route('/')
def index():
    return jsonify({
        'success': True,
        'index': 'Test Pass'
    })

if __name__ == '__main__':
    app.run(debug=True)
```

When I run the app with,

```
export FLASK_APP=NewTest.py
export FLASK_ENV=development
export FLASK_DEBUG=true
flask run
or
flask run --reload
```

I get the following error,

```
127.0.0.1 - - [09/Feb/2020 12:43:40] "GET / HTTP/1.1" 500 -
Traceback (most recent call last):
  File "/projects/env/lib/python3.8/site-packages/flask/_compat.py", line 36, in reraise
    raise value
  File "/projects/NewTest.py", line 3, in <module>
    app = Flask(__name__)
  File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 559, in __init__
    self.add_url_rule(
  File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 67, in wrapper_func
    return f(self, *args, **kwargs)
  File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 1217, in add_url_rule
    self.url_map.add(rule)
  File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 1388, in add
    rule.bind(self)
  File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 730, in bind
    self.compile()
  File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 794, in compile
    self._build = self._compile_builder(False).__get__(self, None)
  File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 951, in _compile_builder
    code = compile(module, "<werkzeug routing>", "exec")
TypeError: required field "type_ignores" missing from Module
```

Can anyone please point out what I am missing or doing wrong, and how I can fix it? Thanks.
2020/02/09
[ "https://Stackoverflow.com/questions/60140174", "https://Stackoverflow.com", "https://Stackoverflow.com/users/727390/" ]
The [werkzeug](https://github.com/pallets/werkzeug) library can have issues with different Python versions. First of all, upgrade the werkzeug library to the latest version, then try again.

```
pip3 install --upgrade werkzeug
```

If that doesn't work, I presume you may be using a Python version that triggers the issue. You can always create a virtualenv [[Install, Create, and Activate](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)] with a specific Python version where werkzeug won't cause you this issue.
Try

```
pip uninstall Flask
```

then

```
pip install Flask
```

and `pip uninstall Werkzeug` then `pip install Werkzeug`. Reinstalling without a version pin pulls in the latest releases, which include the fix.
6,377,535
I have trouble setting up FunkLoad to work well with cookies. I turn on `fl-record` and perform a series of requests, each of which sends a cookie. If I use the command without supplying a folder path, the output is stored in TCPWatch-Proxy format and I can see the contents of all the cookies, so I know that they are sent. For example, this is the contents of `watch0003.request`:

```
GET http://mydomainnamehere.pl/api/world/me/ HTTP/1.1
Host: mydomainnamehere.pl
Proxy-Connection: keep-alive
Referer: http://mydomainnamehere.pl/test/engine/
X-Requested-With: XMLHttpRequest
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/534.24 (KHTML, like Gecko) Chrome/11.0.696.57 Safari/534.24
Accept: */*
Accept-Encoding: gzip,deflate,sdch
Accept-Language: pl,en-US;q=0.8,en;q=0.6,fr-FR;q=0.4,fr;q=0.2
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
Cookie: Beacon-ClientID=<<<some-beaconpush-id-here>>>; sessionid=<<<some-session-id>>>; fbs_<<<some-facebook-id>>>="access_token=<<<some-access-token>>>&expires=1308254400&secret=<<<some-secret>>>&session_key=<<<some-session-key>>>&sig=<<<some-signature>>>&uid=<<<some-user-id>>>"; Beacon-Preferred-Client=WebSocket
```

However, if I run `fl-record` with the name of a test case, and by doing so order FunkLoad to store the output as a Python test, all the cookies are omitted. There isn't a single line in the code that would have anything to do with them:

```
import unittest
from funkload.FunkLoadTestCase import FunkLoadTestCase
from webunit.utility import Upload
from funkload.utils import Data
#from funkload.utils import xmlrpc_get_credential

class Simple(FunkLoadTestCase):

    def setUp(self):
        """Setting up test."""
        self.logd("setUp")
        self.server_url = self.conf_get('main', 'url')
        # XXX here you can setup the credential access like this
        # credential_host = self.conf_get('credential', 'host')
        # credential_port = self.conf_getInt('credential', 'port')
        # self.login, self.password = xmlrpc_get_credential(credential_host,
        #                                                   credential_port,
        # XXX replace with a valid group
        #                                                   'members')

    def test_simple(self):
        # The description should be set in the configuration file
        server_url = self.server_url
        # begin of test ---------------------------------------------

        ...

        # /tmp/tmp5Nv5lW_funkload/watch0003.request
        self.get(server_url + "/api/world/me/",
                 description="Get /api/world/me/")

        ...

        # end of test -----------------------------------------------

    def tearDown(self):
        """Setting up test."""
        self.logd("tearDown.\n")

if __name__ in ('main', '__main__'):
    unittest.main()
```

There is also a configuration file, but nothing about cookies there either. On the other hand, the documentation states that FunkLoad has cookie support. I've also found some bugfixes concerning cookie support in previous releases, so I can assume this isn't just an empty statement. I've also found a point in one of the changelogs stating that "deleted cookies" are not included in the output. This got me wondering whether the problem is that the cookies, as they were recorded, are marked for deletion or are recognized as such by FunkLoad upon conversion from the TCPWatch format to an actual test case. This is just a wild guess, however.

I'd like to know:

* Whether you have ever had success with FunkLoad's cookie support. If so, which version were you using.
* About your general experience with FunkLoad and whether or not it is worth using in a more complex setup.

**EDIT**

Apparently some of the requests recorded by `TCPWatch` are totally ignored and not included in the output test case.
Does anybody have an idea why it would do that? Does it have anything to do with redirection?

**EDIT(2)**

OK, it does, and that part actually makes sense: it leaves out the results of redirection, as these will be generated by simply following `HTTP 302 Found`. However, the question of the cookies still remains unexplained.
2011/06/16
[ "https://Stackoverflow.com/questions/6377535", "https://Stackoverflow.com", "https://Stackoverflow.com/users/475763/" ]
I see this old post was not answered, so I thought I could post:

In Python: identify the name of the cookie you are sending. Mine is 'csrftoken' in the header, and the same one appears in the POST as 'csrfmiddlewaretoken'. Initially I get the value of the cookie, then pass the same value in the POST for authentication.

Example:

```
res = self.get(server_url + '/login/',
               description='Get url').cookies.itervalues().next()
morsel_str = res['/']['csrftoken']
csrftoken = morsel_str.value

# Once the cookie is found, include it in the params
params = [['csrfmiddlewaretoken', csrftoken],
          ['username', 'username..'],
          ['password', '********']]

self.setHeader('cookie', 'csrftoken={0}'.format(csrftoken))
resp = self.post(server_url + '/login/', params,
                 description="Post /login/")
```
I've found a bug in FunkLoad: it isn't correctly handling cookies with a leading '.' in the domain. At the moment all such cookies are being silently ignored.

Check this branch: <https://github.com/sbook/FunkLoad>

I've already sent a pull request: <https://github.com/nuxeo/FunkLoad/pull/32>
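For reference, the kind of response header that triggered the silent drop looks like this (a hypothetical example; the tell-tale part is the leading dot in the `Domain` attribute):

```
Set-Cookie: sessionid=<<<some-session-id>>>; Domain=.mydomainnamehere.pl; Path=/
```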
913,396
I'm using SWIG to wrap a class from a C++ library with Python. It works overall, but there is an exception that is thrown from within the library, and I can't seem to catch it in the SWIG interface, so it just crashes the Python application!

The class in PyMonitor.cc describes the SWIG interface to the desired class, Monitor.

Monitor's constructor throws an exception if it fails to connect. I'd like to handle this exception in PyMonitor, e.g.:

PyMonitor.cc:

```
#include "Monitor.h"

// ...

bool PyMonitor::connect() {
    try {
        _monitor = new Monitor(_host, _calibration);
        return true;
    } catch (...) {
        printf("oops!\n");
        return false;
    }
}

// ...
```

However, the connect() method never catches the exception; I just get a "terminate called after throwing ..." error, and the program aborts.

I don't know too much about SWIG, but it seems to me that this is all fine C++ and the exception should propagate to the connect() method before killing the program.

Any thoughts?
2009/05/27
[ "https://Stackoverflow.com/questions/913396", "https://Stackoverflow.com", "https://Stackoverflow.com/users/75827/" ]
You have to forward the exceptions to Python if you want to parse them there. See the [SWIG Documentation](http://www.swig.org/Doc1.3/Customization.html#exception).

In order to forward exceptions, you only have to add some code in the SWIG interface (.i) file. Basically, this can be anywhere in the .i file. All types of exceptions should be specified here, and SWIG **only** catches the listed exception types (in this case std::runtime\_error, std::invalid\_argument, std::out\_of\_range); all other exceptions are caught as unknown exceptions (and are thus forwarded correctly!).

```
// Handle standard exceptions.
// NOTE: needs to be before the %import!
%include "exception.i"
%exception
{
  try
  {
    $action
  }
  catch (const std::runtime_error& e) {
    SWIG_exception(SWIG_RuntimeError, e.what());
  }
  catch (const std::invalid_argument& e) {
    SWIG_exception(SWIG_ValueError, e.what());
  }
  catch (const std::out_of_range& e) {
    SWIG_exception(SWIG_IndexError, e.what());
  }
  catch (...) {
    SWIG_exception(SWIG_RuntimeError, "unknown exception");
  }
}
```
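Once the `%exception` block is compiled in, the Python side can treat the failing constructor like any other exception-raising call. A usage sketch — the module name `monitor` and the constructor arguments are placeholders; the real name comes from your `%module` directive:

```python
import monitor  # hypothetical module name from the %module directive

host, calibration = "localhost", "default.cal"  # placeholder arguments

try:
    m = monitor.Monitor(host, calibration)
except ValueError as e:
    # std::invalid_argument arrives here via SWIG_ValueError.
    print("Bad arguments:", e)
except RuntimeError as e:
    # std::runtime_error and unknown C++ exceptions arrive here via
    # SWIG_RuntimeError, carrying the original what() message.
    print("Failed to connect:", e)
```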
I'm not familiar with SWIG, or with using C++ and Python together, but if this is under a recent version of Microsoft Visual C++, then the `Monitor` class is probably throwing a C structured exception, rather than a C++ typed exception. C structured exceptions aren't caught by C++ exception handlers, even the `catch(...)` one.

If that's the case, you can use the `__try/__except` keywords (instead of `try/catch`), or use the `_set_se_translator` function to translate the C structured exception into a C++ typed exception. (Older versions of MSVC++ treated C structured exceptions as C++ `int` types, and those *were* caught by C++ handlers, if I remember correctly.)

If this *isn't* under Microsoft Visual C++, then I'm not sure how this could be happening.

EDIT: Since you say that this isn't MSVC, perhaps something else is catching the exception (and terminating the program) before your code gets it, or maybe there's something in your catch block that's throwing another exception? Without more detail to work with, those are the only cases I can think of that would cause those symptoms.
913,396
I'm using SWIG to wrap a class from a C++ library with Python. It works overall, but there is an exception that is thrown from within the library, and I can't seem to catch it in the SWIG interface, so it just crashes the Python application!

The class in PyMonitor.cc describes the SWIG interface to the desired class, Monitor.

Monitor's constructor throws an exception if it fails to connect. I'd like to handle this exception in PyMonitor, e.g.:

PyMonitor.cc:

```
#include "Monitor.h"

// ...

bool PyMonitor::connect() {
    try {
        _monitor = new Monitor(_host, _calibration);
        return true;
    } catch (...) {
        printf("oops!\n");
        return false;
    }
}

// ...
```

However, the connect() method never catches the exception; I just get a "terminate called after throwing ..." error, and the program aborts.

I don't know too much about SWIG, but it seems to me that this is all fine C++ and the exception should propagate to the connect() method before killing the program.

Any thoughts?
2009/05/27
[ "https://Stackoverflow.com/questions/913396", "https://Stackoverflow.com", "https://Stackoverflow.com/users/75827/" ]
You have to forward the exceptions to Python if you want to parse them there. See the [SWIG Documentation](http://www.swig.org/Doc1.3/Customization.html#exception).

In order to forward exceptions, you only have to add some code in the SWIG interface (.i) file. Basically, this can be anywhere in the .i file. All types of exceptions should be specified here, and SWIG **only** catches the listed exception types (in this case std::runtime\_error, std::invalid\_argument, std::out\_of\_range); all other exceptions are caught as unknown exceptions (and are thus forwarded correctly!).

```
// Handle standard exceptions.
// NOTE: needs to be before the %import!
%include "exception.i"
%exception
{
  try
  {
    $action
  }
  catch (const std::runtime_error& e) {
    SWIG_exception(SWIG_RuntimeError, e.what());
  }
  catch (const std::invalid_argument& e) {
    SWIG_exception(SWIG_ValueError, e.what());
  }
  catch (const std::out_of_range& e) {
    SWIG_exception(SWIG_IndexError, e.what());
  }
  catch (...) {
    SWIG_exception(SWIG_RuntimeError, "unknown exception");
  }
}
```
It's possible that a function called directly or indirectly by the `Monitor` constructor is violating its exception specification and doesn't allow `std::bad_exception` to be thrown. If you haven't replaced the standard function for trapping this, then it would explain the behaviour that you are seeing.

To test this hypothesis you could try defining your own handler:

```
#include <cstdio>
#include <exception>
#include <iostream>

void my_unexpected()
{
    std::cerr << "Bad things have happened!\n";
    std::terminate();
}

bool PyMonitor::connect()
{
    std::set_unexpected(my_unexpected);
    try {
        _monitor = new Monitor(_host, _calibration);
        return true;
    } catch (...) {
        printf("oops!\n");
        return false;
    }
}
```

If you get the "Bad things have happened!" error message then you have confirmed that this is the case, but unfortunately there may not be a lot that you can do. If you're 'lucky', you may be able to throw an exception from `my_unexpected` that is allowed by the exception specification of the function that is currently failing, but in any case your unexpected handler is not allowed to terminate normally. It must throw or otherwise terminate.

To fix this you really need to get into the called code and correct it so that the exception specification is not violated, either by fixing the specification itself or by fixing the code so that it doesn't throw the unexpected exception.

Another possibility is that an exception is being thrown during stack unwinding caused by the original exception being thrown. This also would cause termination of the process. In this case, although you can replace the standard terminate function, you have no option but to abort the program. A terminate handler isn't allowed to throw or return; it must terminate the program.
3,856,314
After using C# for a long time I finally decided to switch to Python. The question I am facing at the moment has to do with auto-completion. I guess I am spoiled by C#, and especially by ReSharper, and I was expecting something similar to exist for Python.

My editor of choice is Emacs, and after doing some research I found `autocomplete.pl`, `yasnippet`, and rope, although it is not clear to me if and how they can be installed on a Cygwin-based system, which is what I use, since all the related documentation appears to be Linux-specific...

The version of Emacs I currently use is 23.2.1, which bundles the python mode that, although useful, is far behind whatever research has to offer.

My question to Python users has to do with how common auto-completion is vs. manual typing (using `M-/` where possible). I am thinking about just memorizing Python built-in functions like len, append, extend, etc. and reverting to something close to a pre-autocomplete editing mode. How different is such an approach from what other Pythonistas are doing?
2010/10/04
[ "https://Stackoverflow.com/questions/3856314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/404984/" ]
I found this post

> [My Emacs Python environment](http://www.saltycrane.com/blog/2010/05/my-emacs-python-environment/)

to be the most useful and comprehensive list of instructions and references on how to set up a decent Python development environment in Emacs, regardless of OS platform. It is still a bit of work to set up, but at least it covers the popular packages and components generally recommended for Python in Emacs that provide auto-completion functionality.

I loosely used this post as a guide to do the setup on my Windows machine with Emacs 23.2.1 and Python 2.6.5. Although I also have Cygwin installed, in some cases, instead of running the \*nix shell commands mentioned in the post, I just downloaded the packages via a web browser, unzipped them with 7zip, and copied them to my Emacs plugin directory.

Also, to install Pymacs, Rope, and Ropemacs, I used Python's [EasyInstall](http://en.wikipedia.org/wiki/EasyInstall) package manager. To use it, I downloaded and installed [the `setuptools` package using the Windows install version](http://pypi.python.org/pypi/setuptools#windows). Once it was installed, at the command line I cd'd to each package's download location and ran the command `easy_install .` instead of the shell commands shown in the post.

Generally, I saved any `*.el` files in my `~\.emacs.d\plugins` directory (e.g. in `%USERPROFILE%\Application Data\.emacs.d\`) and then updated my `.emacs` file to reference them as documented in the post.

Despite all this, on occasion I've used [DreamPie](http://dreampie.sourceforge.net/), since it does have better out-of-the-box auto-completion than my Emacs setup.
I find that [PyDev](http://pydev.org/) + Eclipse can meet most of my needs. There is also [PyCharm](http://www.jetbrains.com/pycharm/) from the IntelliJ team. PyCharm has the added advantage of smooth integration with Git.
3,856,314
After using C# for a long time I finally decided to switch to Python. The question I am facing at the moment has to do with auto-completion. I guess I am spoiled by C#, and especially by ReSharper, and I was expecting something similar to exist for Python.

My editor of choice is Emacs, and after doing some research I found `autocomplete.pl`, `yasnippet`, and rope, although it is not clear to me if and how they can be installed on a Cygwin-based system, which is what I use, since all the related documentation appears to be Linux-specific...

The version of Emacs I currently use is 23.2.1, which bundles the python mode that, although useful, is far behind whatever research has to offer.

My question to Python users has to do with how common auto-completion is vs. manual typing (using `M-/` where possible). I am thinking about just memorizing Python built-in functions like len, append, extend, etc. and reverting to something close to a pre-autocomplete editing mode. How different is such an approach from what other Pythonistas are doing?
2010/10/04
[ "https://Stackoverflow.com/questions/3856314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/404984/" ]
I find that [PyDev](http://pydev.org/) + Eclipse can meet most of my needs. There is also [PyCharm](http://www.jetbrains.com/pycharm/) from the IntelliJ team. PyCharm has the added advantage of smooth integration with Git.
IMO, by far the easiest way to take advantage of the Python tools available for Emacs is to use the defaults that are all set up at:

<https://github.com/gabrielelanaro/emacs-for-python>

I actually took the time to get pymacs and ropemacs and python-mode all working independently before finding that little gem, and now I rely on it entirely for all my Python-based customizations. If you are new, I would definitely start there.