qid | question | date | metadata | response_j | response_k
---|---|---|---|---|---|
9,701,227 | I don't understand the answers for a [similar question.](https://stackoverflow.com/questions/9359677/why-python-reduce-skip-over-the-none-element)
It is clear that this should return `True`
```
l = [1,1,1]
reduce(lambda x,y: x== y, l)
```
However, how do you explain that this returns `False` (when `None == None` is `True`)?
```
l = [None,None,None]
reduce(lambda x,y: x== y, l)
``` | 2012/03/14 | [
"https://Stackoverflow.com/questions/9701227",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/94363/"
] | Consider the following:
```
In [214]: l = [None,None,None]
In [215]: reduce(lambda x,y: x== y, l)
Out[215]: False
In [216]: from functional import scanl
In [217]: scanl(lambda x,y: x== y, None, l)
Out[217]: <generator object _scanl at 0x0000000005770D38>
In [218]: list(scanl(lambda x,y: x== y, None, l))
Out[218]: [None, True, False, False]
```
`scanl` shows the intermediate results, starting from the initial element. What is happening is that first the initial value is returned, then the result of `None == None` (`True`), then `True == None` (`False`), and so on until the end: `False == None` (`False`).
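In Python 3, where `reduce` lives in `functools`, the standard-library `itertools.accumulate` gives a similar view of the intermediate fold results — a small sketch, not part of the original answer:

```py
from functools import reduce
from itertools import accumulate

l = [None, None, None]

# accumulate yields each intermediate fold result in turn
steps = list(accumulate(l, lambda x, y: x == y))
print(steps)   # [None, True, False]

# reduce returns only the last of those steps
print(reduce(lambda x, y: x == y, l))   # False
```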
**Reduce compares the result of the last calculation with the next element in the sequence.** | It's no different with `None`. What happens within `reduce` in the first case is:
* 1 compared with 1 (== `True`)
* `True` compared with 1 (== `True`)
In the second case, it's
* `None` compared with `None` (== `True`)
* `True` compared with `None` (== `False`)
The funny example would be:
```
>> from operator import eq
>> reduce(eq, [False, False, False])
False
>> reduce(eq, [False, False, False, False])
True
``` |
56,966,429 | I want to do this
```py
from some_cool_library import fancy_calculation
arr = [1,2,3,4,5]
for i, item in enumerate(arr):
the_rest = arr[:i] + arr[i+1:]
print(item, fancy_calculation(the_rest))
[Expected output:] # some fancy output from the fancy_calculation
12.13452134
2416245.4315432
542.343152
15150.1152
10.1591
```
But I wonder if there is a more pythonic way or existing library to get pairs as shown above.
The problem with the current implementation is that we need about O(n) more memory for the `the_rest` variable. Is there any way on how we can do this without additional memory allocation?
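(For what it's worth, one lazy sketch of such a `some_cool_function` — the name `leave_one_out` below is made up — where `itertools.chain` and `islice` iterate over the two halves instead of copying them, so only O(1) extra memory is used per pair:)

```py
from itertools import chain, islice

def leave_one_out(arr):
    # Yield (item, rest) pairs; rest is a lazy iterator over the
    # remaining elements, so no O(n) slice copies are built.
    for i, item in enumerate(arr):
        rest = chain(islice(arr, 0, i), islice(arr, i + 1, None))
        yield item, rest

for item, the_rest in leave_one_out([1, 2, 3, 4, 5]):
    print(item, list(the_rest))
```

Note that the iteration time is still O(n) per item; only the extra memory allocation is avoided.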
```py
for item, the_rest in some_cool_function(arr):
print(item, fancy_calculation(the_rest))
``` | 2019/07/10 | [
"https://Stackoverflow.com/questions/56966429",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8809992/"
] | Try this:
```
SELECT A.smanTeam,
A.TotalTarget,
B.TotalSales,
B.TotalSales*100/A.TotalTarget TotalPercentage
FROM
(
SELECT smanTeam,SUM(Target) TotalTarget
FROM Sman S
INNER JOIN SalesTarget ST ON S.smanID = ST.smanID
GROUP BY smanTeam
)A
LEFT JOIN
(
SELECT smanTeam, SUM(Amount) TotalSales
FROM Sman S
INNER JOIN Sales SA ON S.smanID = SA.smanID
GROUP BY smanTeam
)B ON A.smanTeam = B.smanTeam
``` | Try the query below:
```
select smanTeam, sum(Target) TotalTarget, sum(Amount) TotalSales , sum(Target)/sum(Amount) TotalPercentage from (
select smanTeam, Target, Amount from Sman sm
join
(select smanID, sum(Target) Target from SalesTarget group by smanID) st
on sm.smanID = st.smanID
join
(select smanID, sum(Amount) Amount from Sales group by smanID) s
on sm.smanID = s.smanID
) a group by smanTeam
``` |
48,435,417 | If I have Python code that requires indenting (`for`, `with`, functions, etc.), will a single-line comment potentially end the context of the construct if I place it incorrectly? For example, presuming `step1`, `step2` and `step3` are functions already defined, will:
```
def myFunc():
    step1()
# step2()
    step3()
```
(unintentionally) reduce the scope of `myFunc()` so that it only contains `step1`? If I only want to remove `step2` from the 3-step sequence, must I place the `#` at the same level of indentation as the statements within the scope of the construct? All the code I have seen so far suggests this is a requirement, but it might just be a coding habit. | 2018/01/25 | [
"https://Stackoverflow.com/questions/48435417",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1099237/"
] | Try it out:
```
def myFunc():
    print(1)
# print(2)
    print(3)
myFunc()
```
which outputs:
```
1
3
```
So yeah, the answer is "Line comments don't need to match indentation". That said, [PEP8 really prefers that they do, just for readability](https://www.python.org/dev/peps/pep-0008/#block-comments). | It doesn't really matter where you place the `#`.
Whether at the first indentation level or right next to the instruction, everything underneath it is still going to be executed.
I suggest you play with the code below and you'll figure it out yourself.
```
a = 1
b = 10
c = 100
d = 1000
if (a == 1):
    result = a+b
# result = result + c
    result = result + d
print(result)
``` |
48,435,417 | If I have Python code that requires indenting (`for`, `with`, functions, etc.), will a single-line comment potentially end the context of the construct if I place it incorrectly? For example, presuming `step1`, `step2` and `step3` are functions already defined, will:
```
def myFunc():
    step1()
# step2()
    step3()
```
(unintentionally) reduce the scope of `myFunc()` so that it only contains `step1`? If I only want to remove `step2` from the 3-step sequence, must I place the `#` at the same level of indentation as the statements within the scope of the construct? All the code I have seen so far suggests this is a requirement, but it might just be a coding habit. | 2018/01/25 | [
"https://Stackoverflow.com/questions/48435417",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1099237/"
] | Try it out:
```
def myFunc():
    print(1)
# print(2)
    print(3)
myFunc()
```
which outputs:
```
1
3
```
So yeah, the answer is "Line comments don't need to match indentation". That said, [PEP8 really prefers that they do, just for readability](https://www.python.org/dev/peps/pep-0008/#block-comments). | Python clearly considers comments when checking for indentation errors, which I hope the devs think of as a bug, and fix. I was just running a program that failed with errors, but suddenly worked when I deleted some of the comments (and changed nothing else). |
48,435,417 | If I have python code that requires indenting (`for`, `with`, function, etc), will a single line comment end potentially the context of the construct if I place it incorrectly? For example, presuming `step1`, `step2` and `step3` are functions already defined, will:
```
def myFunc():
    step1()
# step2()
    step3()
```
(unintentionally) reduce the scope of `myFunc()` so that it only contains `step1`? If I only want to remove `step2` from the 3-step sequence, must I place the `#` at the same level of indentation as the statements within the scope of the construct? All the code I have seen so far suggests this is a requirement, but it might just be a coding habit. | 2018/01/25 | [
"https://Stackoverflow.com/questions/48435417",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1099237/"
] | Syntax-wise, blank lines are ignored. Blank lines include lines that have any amount of white space followed by a comment.
<https://docs.python.org/2/reference/lexical_analysis.html#blank-lines>
Indenting a comment the way you show in your example does not change the block of code included in your function.
Convention-wise, PEP8 calls for comments indented to the same indentation as code. | It doesn't really matter where you place the `#`.
Whether at the first indentation level or right next to the instruction, everything underneath it is still going to be executed.
I suggest you play with the code below and you'll figure it out yourself.
```
a = 1
b = 10
c = 100
d = 1000
if (a == 1):
    result = a+b
# result = result + c
    result = result + d
print(result)
``` |
48,435,417 | If I have Python code that requires indenting (`for`, `with`, functions, etc.), will a single-line comment potentially end the context of the construct if I place it incorrectly? For example, presuming `step1`, `step2` and `step3` are functions already defined, will:
```
def myFunc():
    step1()
# step2()
    step3()
```
(unintentionally) reduce the scope of `myFunc()` so that it only contains `step1`? If I only want to remove `step2` from the 3-step sequence, must I place the `#` at the same level of indentation as the statements within the scope of the construct? All the code I have seen so far suggests this is a requirement, but it might just be a coding habit. | 2018/01/25 | [
"https://Stackoverflow.com/questions/48435417",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1099237/"
] | Syntax-wise, blank lines are ignored. Blank lines include lines that have any amount of white space followed by a comment.
<https://docs.python.org/2/reference/lexical_analysis.html#blank-lines>
Indenting a comment the way you show in your example does not change the block of code included in your function.
Convention-wise, PEP8 calls for comments indented to the same indentation as code. | Python clearly considers comments when checking for indentation errors, which I hope the devs think of as a bug, and fix. I was just running a program that failed with errors, but suddenly worked when I deleted some of the comments (and changed nothing else). |
32,270,272 | I need to get a particular attribute value from a tag whose inner word matches my query word. For example, consider a target html-
```html
<span data-attr="something" attr1="" ><i>other_word</i></span>
<span data-attr="required" attr1="" ><i>word_to_match</i></span>
<span data-attr="something1" attr1="" ><i>some_other_word</i></span>
```
Now, I need to get the '**required**' value from the **data-attr** attribute for the tag where the inner word (in this case **word\_to\_match**) matches my query word.
The problem is that the regexes I'm writing are returning the other spans as well. I haven't been able to make a non-greedy regex work in this case.
If it helps, I'm doing this in python and kindly no "don't use regex here" solutions. | 2015/08/28 | [
"https://Stackoverflow.com/questions/32270272",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2636802/"
] | You're not too far off. You need to iterate the words in each line and check if they are in the dictionary. Also, you need to call `read_words`, otherwise `ret` doesn't exist in the `for` loop.
```
dictionary = read_words(dictionary)
for paper in library:
    file = os.path.join(path, paper)
    text = open(file, "r")
    hit_count = 0
    for line in text:
        for word in line.split():
            if word in dictionary:
                hit_count = hit_count + 1
                print >> output, paper + "|" + line,
``` | If you want to check whether any element in the list is in the line,
**change from this:**
```
if re.match("(.*)(ret[])(.*)", line):
```
**To this:**
```
if any(word in line for word in ret):
``` |
34,048,316 | I have a sample file which looks like
```
emp_id(int),name(string),age(int)
1,hasa,34
2,dafa,45
3,fasa,12
8f,123Rag,12
8,fafl,12
```
Requirement: Column data types are specified as strings and integers. Emp\_id should be an integer, not a string. These conditions will be the same for the name and age columns.
**My output should look like:**
```
Actual column Emp_id type is INT but string was found at the position 4, value is 8f
Actual column name type is STRING but numbers were found at the position 4, value is 123Rag
```
and so on.
Here is my code.
Shell script:
```
read input
if [ $input -eq $input 2>/dev/null ]
then
echo "$input is an integer"
else
echo "$input is not an integer"
fi
```
In Python, I was trying `isinstance(obj, type)` but it is not serving the purpose.
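(For reference, a plain-Python sketch of this kind of per-column check — the header parsing and message wording below are illustrative, not an existing API:)

```py
import csv
import re

def check_types(path):
    # Validate each data row against the types declared in the header,
    # e.g. "emp_id(int),name(string),age(int)".
    msgs = []
    with open(path, newline='') as f:
        rows = csv.reader(f)
        header = next(rows)
        names = [h.split('(')[0] for h in header]
        types = ['INT' if '(int' in h else 'STRING' for h in header]
        for pos, row in enumerate(rows, start=1):
            for name, typ, val in zip(names, types, row):
                if typ == 'INT' and not re.fullmatch(r'\d+', val):
                    msgs.append(f"Actual column {name} type is INT but "
                                f"string was found at the position {pos}, value is {val}")
                elif typ == 'STRING' and re.search(r'\d', val):
                    msgs.append(f"Actual column {name} type is STRING but "
                                f"numbers were found at the position {pos}, value is {val}")
    return msgs
```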
Can anyone guide me in this regard? Any shell/python/perl script help would be appreciated! | 2015/12/02 | [
"https://Stackoverflow.com/questions/34048316",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1696853/"
] | Here is an awk-solution:
```
awk -F"," 'NR==1{for(i=1; i <= NF; i++){
split($i,a,"(");
name[i]=a[1];
type[i] = ($i ~ "int" ? "INT" : "String")}next}
{for(i=1; i <= NF; i++){
if($i != int($i) && type[i] == "INT"){error[i][NR] = $i}
if($i ~ /[0-9]+/ && type[i] == "String"){error[i][NR] = $i}
}}
END{for(i in error){
for(key in error[i]){
print "Actual column "name[i]" type is "type[i]\
" but string was found at the position "key-1\
", value is "error[i][key]}}}' inputFile
```
The output is - as desired:
```
Actual column emp_id type is INT but string was found at the position 4, value is 8f
Actual column name type is String but string was found at the position 4, value is 123Rag
```
However, in my opinion `123Rag` is a string and should not be indicated as an incorrect entry in the second column. | With `perl` I would tackle it like this:
* Define some regex patterns that match/don't match the string content.
* pick out the header row - separate it into names and types. (Optionally reporting if a type doesn't match).
* iterate your fields, matching by column, figuring out type and applying the regex to validate
Something like:
```
#!/usr/bin/env perl
use strict;
use warnings;
use Data::Dumper;
#define regex to apply for a given data type
my %pattern_for = (
    int    => qr/^\d+$/,
    string => qr/^[A-Z]+$/i,
);
print Dumper \%pattern_for;
#read the first line.
# <> is a magic filehandle, that reads files specified as arguments
# or piped input - like grep/sed do.
my $header_row = <>;
#extract just the names, in order.
my @headers = $header_row =~ m/(\w+)\(/g;
#create a type lookup for the named headers.
my %type_for = $header_row =~ m|(\w+)\((\w+)\)|g;
print Dumper \@headers;
print Dumper \%type_for;
#iterate input again
while (<>) {
    # remove trailing linefeed
    chomp;

    # parse incoming data into named fields based on ordering.
    my %fields;
    @fields{@headers} = split /,/;

    # print for diag
    print Dumper \%fields;

    # iterate the headers, applying the looked up 'type' regex
    foreach my $field_name (@headers) {
        if ( $fields{$field_name} =~ m/$pattern_for{$type_for{$field_name}}/ ) {
            print "$field_name => $fields{$field_name} is valid, $type_for{$field_name} matching $pattern_for{$type_for{$field_name}}\n";
        }
        else {
            print "$field_name $fields{$field_name} not valid $type_for{$field_name} matching $pattern_for{$type_for{$field_name}}\n";
        }
    }
}
```
This gives for your input (just the invalids for brevity):
```
name 123Rag not valid string matching (?^i:^[A-Z]+$)
emp_id 8f not valid int matching (?^:^\d+$)
```
Note - it only supports 'simple' CSV style (no nested commas or quotes) but could easily be adapted to use the `Text::CSV` module. |
71,448,461 | I was writing a python code in VS Code and somehow it's not detecting the input() function like it should.
Suppose, the code is as simple as
```
def main():
    x = int(input())
    print(x)

if __name__ == "__main__":
    main()
```
even then, for some reason it is throwing an error and I cannot figure out why.
The error being:
[](https://i.stack.imgur.com/gkMHA.png)
P.S. 1)I am using Python 3.10 2) I tried removing the int() and it still doesn't work. | 2022/03/12 | [
"https://Stackoverflow.com/questions/71448461",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16979277/"
] | The traceback shows you where to look. It's actually the `int` function throwing a `ValueError`. It looks as if you're feeding it a filepath whereas it's expecting a number.
You could add a check to repeat the input if incorrect like so:
```py
user_input = None
while not user_input:
    raw_input = input("Put in a number: ")
    try:
        user_input = int(raw_input)
    except ValueError:
        continue
print(f"Number is: {user_input}")
``` | It's working!
See the example below, which shows where the confusion comes from:
```py
>>> x1 = input('enter a number: ')
enter a number: 10
>>> x1
'10'
>>> x2 = int(x1)
>>> x2
10
>>> x1 = input() # no text
100
>>> # it takes
>>> x1
'100'
>>> # but how you try?
>>> x1 = input()
NOT-NUMBER OR EMPTY-TEXT
>>> x2 = int(x1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: invalid literal for int() with base 10: 'IS-NOT-NUMBER OR EMPTY TEXT'
>>>
```
I think this is enough. |
71,448,461 | I was writing a python code in VS Code and somehow it's not detecting the input() function like it should.
Suppose, the code is as simple as
```
def main():
    x = int(input())
    print(x)

if __name__ == "__main__":
    main()
```
even then, for some reason it is throwing an error and I cannot figure out why.
The error being:
[](https://i.stack.imgur.com/gkMHA.png)
P.S. 1)I am using Python 3.10 2) I tried removing the int() and it still doesn't work. | 2022/03/12 | [
"https://Stackoverflow.com/questions/71448461",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16979277/"
] | The traceback shows you where to look. It's actually the `int` function throwing a `ValueError`. It looks as if you're feeding it a filepath whereas it's expecting a number.
You could add a check to repeat the input if incorrect like so:
```py
user_input = None
while not user_input:
    raw_input = input("Put in a number: ")
    try:
        user_input = int(raw_input)
    except ValueError:
        continue
print(f"Number is: {user_input}")
``` | The `int()` function tries to convert your string to an integer, so the input should contain only digits. It seems you are giving numbers and characters as input, so it raises the `ValueError`. If you want, you can check whether the input is just digits:
```
x = input()
if x.isdigit():
    x = int(x)
``` |
37,124,504 | All,
I wrote a small python program to create a file which is used as an input file to run an external program called srce3d. Here it is:
```
fin = open('eff.pwr.template','r')
fout = open('eff.pwr','wr')
for line in fin:
    if 'li' in line:
        fout.write( line.replace('-2.000000E+00', `-15.0`) )
    else:
        fout.write(line)
fin.close
fout.close
os.chmod('eff.pwr',0744)
# call srce3d
os.system("srce3d -bat -pwr eff.pwr >& junk.out")
```
This does not work. The input file gets written properly but srce3d complains of an end of file during read. The os.system command works fine with a pre-existing file, without any need to open that file.
Thanks for your help | 2016/05/09 | [
"https://Stackoverflow.com/questions/37124504",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5590629/"
] | Firstly, you are missing the parentheses that actually call `close`.
```
fin.close() ## the round braces () were missing.
fout.close()
```
A better way to do the same is using contexts.
```
with open('eff.pwr.template','r') as fin, open('eff.pwr','wr') as fout:
    ## do all processing here
``` | You didn't actually close the file – you have to *call* `file.close`. So,
```
fin.close
fout.close
```
should be
```
fin.close()
fout.close()
``` |
29,411,952 | I need to delete all the rows in a csv file which have more than a certain number of columns.
This happens because sometimes the code, which generates the csv file, skips some values and prints the following on the same line.
Example: Consider the following file to parse. I want to remove all the rows which have more than 3 columns (i.e. the columns of the header):
```
timestamp,header2,header3
1,1val2,1val3
2,2val2,2val3
3,4,4val2,4val3
5val1,5val2,5val3
6,6val2,6val3
```
The output file I would like to have is:
```
timestamp,header2,header3
1,1val2,1val3
2,2val2,2val3
5val1,5val2,5val3
6,6val2,6val3
```
I don't care if the row with timestamp 4 is missing.
I would prefer a solution in bash or perhaps using awk, rather than a python one, so that I can learn how to use it. | 2015/04/02 | [
"https://Stackoverflow.com/questions/29411952",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3014331/"
] | This can be done straight forward with `awk`:
```
awk -F, 'NF<=3' file
```
This uses the `awk` variable `NF` that holds the number of fields in the current line. Since we have set the field separator to the comma (with `-F,` or, equivalent, `-v FS=","`), then it is just a matter of checking when the number of fields is not higher than 3. This is done with `NF<=3`: when this is true, the line will be printed automatically.
### Test
```
$ awk -F, 'NF<=3' a
timestamp,header2,header3
1,1val2,1val3
2,2val2,2val3
5val1,5val2,5val3
6,6val2,6val3
``` | Try the following (do not omit to replace your file path and your max column):
```bash
#! /bin/bash
filepath=test.csv
max_columns=3
# read line by line (a plain `for line in $(cat ...)` would split on every space)
while IFS= read -r line
do
    count=$(echo "$line" | grep -o "," | wc -l)
    if [ $((count + 1)) -le $max_columns ]
    then
        echo "$line"
    fi
done < "$filepath"
```
Copy this into a `.sh` file (*cropper.sh* for example), make it executable with `chmod +x cropper.sh`, and run `./cropper.sh`.
This will output only the valid lines. You can then catch the result in a file this way:
`./cropper.sh > result.txt` |
47,635,838 | I'm trying to use the LinearSVC of sklearn and export the decision tree to a .dot file. I can fit the classifier with sample data and then use it on some test data but the export to the .dot file gives a NotFittedError.
```
data = pd.read_csv("census-income-data.data", skipinitialspace=True, usecols=list(range(0, 41)))
data = data.fillna('Missing value').apply(pp.LabelEncoder().fit_transform)
target = pd.read_csv("census-income-data.data", skipinitialspace=True, usecols=[41])
dataTest = pd.read_csv("census-income-test.test", skipinitialspace=True, usecols=list(range(0, 41)))
dataTest = dataTest.fillna('Missing value').apply(pp.LabelEncoder().fit_transform)
targetTest = pd.read_csv("census-income-test.test", skipinitialspace=True, usecols=[41])
clfSVC = LinearSVC(random_state=0)
clfSVC = clfSVC.fit(data, target.target)
scoreSVC = clfSVC.score(dataTest, targetTest.target)
print(scoreSVC)
tree.export_graphviz(clfSVC, out_file='tree.dot')
```
Here is the output:
```
> Traceback (most recent call last):
File "D:\Documents\Telecom\IA\ai-person-income\project\sklearn_test.py", line 49, in <module>
tree.export_graphviz(clfSVC, out_file='tree.dot')
File "D:\Program Files\WinPython-64bit-3.6.3.0Qt5\python-3.6.3.amd64\lib\site-packages\sklearn\tree\export.py", line 392, in export_graphviz
check_is_fitted(decision_tree, 'tree_')
File "D:\Program Files\WinPython-64bit-3.6.3.0Qt5\python-3.6.3.amd64\lib\site-packages\sklearn\utils\validation.py", line 768, in check_is_fitted
raise NotFittedError(msg % {'name': type(estimator).__name__})
sklearn.exceptions.NotFittedError: This LinearSVC instance is not fitted yet. Call 'fit' with appropriate arguments before using this method.
```
What am I missing ? | 2017/12/04 | [
"https://Stackoverflow.com/questions/47635838",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6534294/"
] | You are using a [function](http://scikit-learn.org/stable/modules/generated/sklearn.tree.export_graphviz.html#sklearn-tree-export-graphviz) to plot a decision-tree. Look at the first argument: *decision\_tree*, like an object of [this](http://scikit-learn.org/stable/modules/tree.html).
An SVM is not a decision tree! It isn't any kind of tree, so you can't use this function, and it makes no sense in theory either.
The error itself is based on [this code](https://github.com/scikit-learn/scikit-learn/blob/a24c8b46/sklearn/tree/export.py#L392):
```
check_is_fitted(decision_tree, 'tree_')
```
where the internal tree of a decision-tree object is queried. This does not exist for SVMs. | In `sklearn.tree.export_graphviz`, the first parameter is a fitted decision tree.
You give a fitted estimator, but not a decision tree.
**Indeed, `LinearSVC` is not a decision tree.**
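For contrast, a minimal sketch of exporting an actually fitted decision tree — on a toy dataset, not the asker's census data:

```py
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_graphviz

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(random_state=0).fit(X, y)

# Works because clf now has the fitted tree_ attribute the export checks for
export_graphviz(clf, out_file='tree.dot')
```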
Try with `sklearn.tree.DecisionTreeClassifier` instead of `sklearn.svm.LinearSVC`. |
47,443,434 | I'm new to python/data science in general, trying to understand why the below isn't working:
```
import pandas as pd
url = 'https://s3.amazonaws.com/nyc-tlc/trip+data/fhv_tripdata_2017-06.csv'
trip_df = []
for chunk in pd.read_csv(url, chunksize=1000, nrows=10000):
    trip_df.append(chunk)
trip_df = pd.concat(trip_df, axis='rows')
```
It's returning a MemoryError, but I was under the impression that loading the file in chunks was a workaround for that. I'd prefer not to save the csv to my hard drive, plus I'm not entirely sure if that would help.
My computer's pretty limited, only 4GB of RAM (2.9 usable). I imagine that's a factor here, but if importing the whole file isn't an option, how would I just get part of it, say 5,000 rows? | 2017/11/22 | [
"https://Stackoverflow.com/questions/47443434",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8992936/"
] | Try this:
```
reader = pd.read_csv(url, chunksize=1000, nrows=10000)
df = pd.concat([x for x in reader], ignore_index=True)
```
>
> how would I just get part of it, say 5,000 rows?
>
```
df = pd.read_csv(url, nrows=5000)
``` | Just get the 5000 rows in the command line with
```
wget -q -O - https://s3.amazonaws.com/nyc-tlc/trip+data/fhv_tripdata_2017-06.csv | head -n 5000 > tmp.csv
```
and then proceed with tmp.csv |
47,443,434 | I'm new to python/data science in general, trying to understand why the below isn't working:
```
import pandas as pd
url = 'https://s3.amazonaws.com/nyc-tlc/trip+data/fhv_tripdata_2017-06.csv'
trip_df = []
for chunk in pd.read_csv(url, chunksize=1000, nrows=10000):
    trip_df.append(chunk)
trip_df = pd.concat(trip_df, axis='rows')
```
It's returning a MemoryError, but I was under the impression that loading the file in chunks was a workaround for that. I'd prefer not to save the csv to my hard drive, plus I'm not entirely sure if that would help.
My computer's pretty limited, only 4GB of RAM (2.9 usable). I imagine that's a factor here, but if importing the whole file isn't an option, how would I just get part of it, say 5,000 rows? | 2017/11/22 | [
"https://Stackoverflow.com/questions/47443434",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8992936/"
] | This works for me
```py
with contextlib.closing(urllib.request.urlopen(url=url)) as rd:
    for df in pd.read_csv(rd, chunksize=5, index_col=0):
        print(df)
```
while just
```py
pd.read_csv(url, chunksize=5, index_col=0)
```
crashes with MemoryError. | Just get the 5000 rows in the command line with
```
wget -q -O - https://s3.amazonaws.com/nyc-tlc/trip+data/fhv_tripdata_2017-06.csv | head -n 5000 > tmp.csv
```
and then proceed with tmp.csv |
10,931,150 | php has the [`strtr`](http://php.net/manual/en/function.strtr.php) function:
```
strtr('aa-bb-cc', array('aa' => 'bbz', 'bb' => 'x', 'cc' => 'y'));
# bbz-x-y
```
It replaces dictionary keys in a string with corresponding values and (important) doesn't replace already replaced strings. A naive attempt to write the same in python:
```
def strtr(strng, replace):
    for s, r in replace.items():
        strng = strng.replace(s, r)
    return strng

strtr('aa-bb-cc', {'aa': 'bbz', 'bb': 'x', 'cc': 'y'})
```
returns `xz-x-y`, which is not what we want (`bb` got replaced again). How can the above function be changed so that it behaves like its PHP counterpart?
(I would prefer an answer without regular expressions, if possible).
Upd: some great answers here. I timed them and found that for short strings Gumbo's version appears to be the fastest, on longer strings the winner is the `re` solution:
```
# 'aa-bb-cc'
0.0258 strtr_thg
0.0274 strtr_gumbo
0.0447 strtr_kojiro
0.0701 strtr_aix
# 'aa-bb-cc'*10
0.1474 strtr_aix
0.2261 strtr_thg
0.2366 strtr_gumbo
0.3226 strtr_kojiro
```
My own version (which is slightly optimized Gumbo's):
```
def strtr(strng, replace):
    buf, i = [], 0
    while i < len(strng):
        for s, r in replace.items():
            if strng[i:len(s)+i] == s:
                buf.append(r)
                i += len(s)
                break
        else:
            buf.append(strng[i])
            i += 1
    return ''.join(buf)
```
Complete codes and timings: <https://gist.github.com/2889181> | 2012/06/07 | [
"https://Stackoverflow.com/questions/10931150",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/989121/"
] | Here is a naive algorithm:
Use an index to walk the original string character by character, checking at each index whether one of the search strings matches the string from the current index on. If a match is found, push the replacement into a buffer and advance the index by the length of the matched string. If no match is found, advance the index by one. At the end, concatenate the strings in the buffer into a single string.
```
def strtr(strng, replace):
    buffer = []
    i, n = 0, len(strng)
    while i < n:
        match = False
        for s, r in replace.items():
            if strng[i:len(s)+i] == s:
                buffer.append(r)
                i = i + len(s)
                match = True
                break
        if not match:
            buffer.append(strng[i])
            i = i + 1
    return ''.join(buffer)
``` | [`str.translate`](http://docs.python.org/library/stdtypes.html#str.translate) is the equivalent, but can only map to single characters. |
10,931,150 | php has the [`strtr`](http://php.net/manual/en/function.strtr.php) function:
```
strtr('aa-bb-cc', array('aa' => 'bbz', 'bb' => 'x', 'cc' => 'y'));
# bbz-x-y
```
It replaces dictionary keys in a string with corresponding values and (important) doesn't replace already replaced strings. A naive attempt to write the same in python:
```
def strtr(strng, replace):
    for s, r in replace.items():
        strng = strng.replace(s, r)
    return strng

strtr('aa-bb-cc', {'aa': 'bbz', 'bb': 'x', 'cc': 'y'})
```
returns `xz-x-y`, which is not what we want (`bb` got replaced again). How can the above function be changed so that it behaves like its PHP counterpart?
(I would prefer an answer without regular expressions, if possible).
Upd: some great answers here. I timed them and found that for short strings Gumbo's version appears to be the fastest, on longer strings the winner is the `re` solution:
```
# 'aa-bb-cc'
0.0258 strtr_thg
0.0274 strtr_gumbo
0.0447 strtr_kojiro
0.0701 strtr_aix
# 'aa-bb-cc'*10
0.1474 strtr_aix
0.2261 strtr_thg
0.2366 strtr_gumbo
0.3226 strtr_kojiro
```
My own version (which is slightly optimized Gumbo's):
```
def strtr(strng, replace):
    buf, i = [], 0
    while i < len(strng):
        for s, r in replace.items():
            if strng[i:len(s)+i] == s:
                buf.append(r)
                i += len(s)
                break
        else:
            buf.append(strng[i])
            i += 1
    return ''.join(buf)
```
Complete codes and timings: <https://gist.github.com/2889181> | 2012/06/07 | [
"https://Stackoverflow.com/questions/10931150",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/989121/"
] | The following uses regular expressions to do it:
```
import re
def strtr(s, repl):
    pattern = '|'.join(map(re.escape, sorted(repl, key=len, reverse=True)))
    return re.sub(pattern, lambda m: repl[m.group()], s)

print(strtr('aa-bb-cc', {'aa': 'bbz', 'bb': 'x', 'cc': 'y'}))
```
Like the PHP version, this gives preference to longer matches. | [`str.translate`](http://docs.python.org/library/stdtypes.html#str.translate) is the equivalent, but can only map to single characters. |
10,931,150 | php has the [`strtr`](http://php.net/manual/en/function.strtr.php) function:
```
strtr('aa-bb-cc', array('aa' => 'bbz', 'bb' => 'x', 'cc' => 'y'));
# bbz-x-y
```
It replaces dictionary keys in a string with corresponding values and (important) doesn't replace already replaced strings. A naive attempt to write the same in python:
```
def strtr(strng, replace):
    for s, r in replace.items():
        strng = strng.replace(s, r)
    return strng

strtr('aa-bb-cc', {'aa': 'bbz', 'bb': 'x', 'cc': 'y'})
```
returns `xz-x-y`, which is not what we want (`bb` got replaced again). How can the above function be changed so that it behaves like its PHP counterpart?
(I would prefer an answer without regular expressions, if possible).
Upd: some great answers here. I timed them and found that for short strings Gumbo's version appears to be the fastest, on longer strings the winner is the `re` solution:
```
# 'aa-bb-cc'
0.0258 strtr_thg
0.0274 strtr_gumbo
0.0447 strtr_kojiro
0.0701 strtr_aix
# 'aa-bb-cc'*10
0.1474 strtr_aix
0.2261 strtr_thg
0.2366 strtr_gumbo
0.3226 strtr_kojiro
```
My own version (which is slightly optimized Gumbo's):
```
def strtr(strng, replace):
    buf, i = [], 0
    while i < len(strng):
        for s, r in replace.items():
            if strng[i:len(s)+i] == s:
                buf.append(r)
                i += len(s)
                break
        else:
            buf.append(strng[i])
            i += 1
    return ''.join(buf)
```
Complete codes and timings: <https://gist.github.com/2889181> | 2012/06/07 | [
"https://Stackoverflow.com/questions/10931150",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/989121/"
] | ```
def strtr(strng, replace):
if replace and strng:
s, r = replace.popitem()
return r.join(strtr(subs, dict(replace)) for subs in strng.split(s))
return strng
j=strtr('aa-bb-cc', {'aa': 'bbz', 'bb': 'x', 'cc': 'y'})
assert j=='bbz-x-y', j
``` | [`str.translate`](http://docs.python.org/library/stdtypes.html#str.translate) is the equivalent, but can only map to single characters. |
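To make the single-character limitation concrete, a quick hedged sketch (the exact error message may vary by Python version):

```python
# Multi-character keys such as 'aa' -> 'bbz' are rejected by str.maketrans,
# which is why str.translate alone cannot emulate PHP's strtr here.
try:
    str.maketrans({'aa': 'bbz'})
except ValueError as exc:
    print('rejected:', exc)
```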
10,931,150 | php has the [`strtr`](http://php.net/manual/en/function.strtr.php) function:
```
strtr('aa-bb-cc', array('aa' => 'bbz', 'bb' => 'x', 'cc' => 'y'));
# bbz-x-y
```
It replaces dictionary keys in a string with corresponding values and (important) doesn't replace already replaced strings. A naive attempt to write the same in python:
```
def strtr(strng, replace):
for s, r in replace.items():
strng = strng.replace(s, r)
return strng
strtr('aa-bb-cc', {'aa': 'bbz', 'bb': 'x', 'cc': 'y'})
```
returns `xz-x-y`, which is not what we want (`bb` got replaced again). How can I change the above function so that it behaves like its PHP counterpart?
(I would prefer an answer without regular expressions, if possible).
Upd: some great answers here. I timed them and found that for short strings Gumbo's version appears to be the fastest; on longer strings the winner is the `re` solution:
```
# 'aa-bb-cc'
0.0258 strtr_thg
0.0274 strtr_gumbo
0.0447 strtr_kojiro
0.0701 strtr_aix
# 'aa-bb-cc'*10
0.1474 strtr_aix
0.2261 strtr_thg
0.2366 strtr_gumbo
0.3226 strtr_kojiro
```
My own version (which is a slightly optimized version of Gumbo's):
```
def strtr(strng, replace):
buf, i = [], 0
while i < len(strng):
for s, r in replace.items():
if strng[i:len(s)+i] == s:
buf.append(r)
i += len(s)
break
else:
buf.append(strng[i])
i += 1
return ''.join(buf)
```
Complete codes and timings: <https://gist.github.com/2889181> | 2012/06/07 | [
"https://Stackoverflow.com/questions/10931150",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/989121/"
] | [`str.translate`](http://docs.python.org/library/stdtypes.html#str.translate) is the equivalent, but can only map to single characters. | The answers on this thread are so outdated. Here we go...
### Option #1: Use the str.format() function to handle this:
```
"Hello there {first_name} {last_name}".format(first_name="Bob", last_name="Roy")
```
### Option #2: Use the Template class
```
from string import Template
t = Template('Hello there $first_name $last_name')
t.substitute(first_name="Bob", last_name="Roy")
```
Reference:
[Python String Formatting Best Practices](https://realpython.com/python-string-formatting/) |
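As a related sketch (not from the original answer), `Template.safe_substitute` leaves unknown placeholders in place instead of raising:

```python
from string import Template

t = Template('Hello there $first_name $last_name')
# substitute() would raise KeyError for the missing name;
# safe_substitute() keeps the unresolved placeholder as-is.
print(t.safe_substitute(first_name="Bob"))  # Hello there Bob $last_name
```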
10,931,150 | php has the [`strtr`](http://php.net/manual/en/function.strtr.php) function:
```
strtr('aa-bb-cc', array('aa' => 'bbz', 'bb' => 'x', 'cc' => 'y'));
# bbz-x-y
```
It replaces dictionary keys in a string with corresponding values and (important) doesn't replace already replaced strings. A naive attempt to write the same in python:
```
def strtr(strng, replace):
for s, r in replace.items():
strng = strng.replace(s, r)
return strng
strtr('aa-bb-cc', {'aa': 'bbz', 'bb': 'x', 'cc': 'y'})
```
returns `xz-x-y`, which is not what we want (`bb` got replaced again). How can I change the above function so that it behaves like its PHP counterpart?
(I would prefer an answer without regular expressions, if possible).
Upd: some great answers here. I timed them and found that for short strings Gumbo's version appears to be the fastest; on longer strings the winner is the `re` solution:
```
# 'aa-bb-cc'
0.0258 strtr_thg
0.0274 strtr_gumbo
0.0447 strtr_kojiro
0.0701 strtr_aix
# 'aa-bb-cc'*10
0.1474 strtr_aix
0.2261 strtr_thg
0.2366 strtr_gumbo
0.3226 strtr_kojiro
```
My own version (which is a slightly optimized version of Gumbo's):
```
def strtr(strng, replace):
buf, i = [], 0
while i < len(strng):
for s, r in replace.items():
if strng[i:len(s)+i] == s:
buf.append(r)
i += len(s)
break
else:
buf.append(strng[i])
i += 1
return ''.join(buf)
```
Complete codes and timings: <https://gist.github.com/2889181> | 2012/06/07 | [
"https://Stackoverflow.com/questions/10931150",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/989121/"
] | Here is a naive algorithm:
Walk the original string with an index, character by character, and at each index check whether one of the search strings matches the string starting at that position. If a match is found, push the replacement onto a buffer and advance the index by the length of the matched string. If no match is found, append the current character to the buffer and advance the index by one. At the end, join the strings in the buffer into a single string.
```
def strtr(strng, replace):
buffer = []
i, n = 0, len(strng)
while i < n:
match = False
for s, r in replace.items():
if strng[i:len(s)+i] == s:
buffer.append(r)
i = i + len(s)
match = True
break
if not match:
buffer.append(strng[i])
i = i + 1
return ''.join(buffer)
``` | The answers on this thread are so outdated. Here we go...
### Option #1: Use the str.format() function to handle this:
```
"Hello there {first_name} {last_name}".format(first_name="Bob", last_name="Roy")
```
### Option #2: Use the Template class
```
from string import Template
t = Template('Hello there $first_name $last_name')
t.substitute(first_name="Bob", last_name="Roy")
```
Reference:
[Python String Formatting Best Practices](https://realpython.com/python-string-formatting/) |
10,931,150 | php has the [`strtr`](http://php.net/manual/en/function.strtr.php) function:
```
strtr('aa-bb-cc', array('aa' => 'bbz', 'bb' => 'x', 'cc' => 'y'));
# bbz-x-y
```
It replaces dictionary keys in a string with corresponding values and (important) doesn't replace already replaced strings. A naive attempt to write the same in python:
```
def strtr(strng, replace):
for s, r in replace.items():
strng = strng.replace(s, r)
return strng
strtr('aa-bb-cc', {'aa': 'bbz', 'bb': 'x', 'cc': 'y'})
```
returns `xz-x-y`, which is not what we want (`bb` got replaced again). How can I change the above function so that it behaves like its PHP counterpart?
(I would prefer an answer without regular expressions, if possible).
Upd: some great answers here. I timed them and found that for short strings Gumbo's version appears to be the fastest; on longer strings the winner is the `re` solution:
```
# 'aa-bb-cc'
0.0258 strtr_thg
0.0274 strtr_gumbo
0.0447 strtr_kojiro
0.0701 strtr_aix
# 'aa-bb-cc'*10
0.1474 strtr_aix
0.2261 strtr_thg
0.2366 strtr_gumbo
0.3226 strtr_kojiro
```
My own version (which is a slightly optimized version of Gumbo's):
```
def strtr(strng, replace):
buf, i = [], 0
while i < len(strng):
for s, r in replace.items():
if strng[i:len(s)+i] == s:
buf.append(r)
i += len(s)
break
else:
buf.append(strng[i])
i += 1
return ''.join(buf)
```
Complete codes and timings: <https://gist.github.com/2889181> | 2012/06/07 | [
"https://Stackoverflow.com/questions/10931150",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/989121/"
] | The following uses regular expressions to do it:
```
import re
def strtr(s, repl):
pattern = '|'.join(map(re.escape, sorted(repl, key=len, reverse=True)))
return re.sub(pattern, lambda m: repl[m.group()], s)
print(strtr('aa-bb-cc', {'aa': 'bbz', 'bb': 'x', 'cc': 'y'}))
```
Like PHP's version, this gives preference to longer matches. | The answers on this thread are so outdated. Here we go...
### Option #1: Use the str.format() function to handle this:
```
"Hello there {first_name} {last_name}".format(first_name="Bob", last_name="Roy")
```
### Option #2: Use the Template class
```
from string import Template
t = Template('Hello there $first_name $last_name')
t.substitute(first_name="Bob", last_name="Roy")
```
Reference:
[Python String Formatting Best Practices](https://realpython.com/python-string-formatting/) |
10,931,150 | php has the [`strtr`](http://php.net/manual/en/function.strtr.php) function:
```
strtr('aa-bb-cc', array('aa' => 'bbz', 'bb' => 'x', 'cc' => 'y'));
# bbz-x-y
```
It replaces dictionary keys in a string with corresponding values and (important) doesn't replace already replaced strings. A naive attempt to write the same in python:
```
def strtr(strng, replace):
for s, r in replace.items():
strng = strng.replace(s, r)
return strng
strtr('aa-bb-cc', {'aa': 'bbz', 'bb': 'x', 'cc': 'y'})
```
returns `xz-x-y`, which is not what we want (`bb` got replaced again). How can I change the above function so that it behaves like its PHP counterpart?
(I would prefer an answer without regular expressions, if possible).
Upd: some great answers here. I timed them and found that for short strings Gumbo's version appears to be the fastest; on longer strings the winner is the `re` solution:
```
# 'aa-bb-cc'
0.0258 strtr_thg
0.0274 strtr_gumbo
0.0447 strtr_kojiro
0.0701 strtr_aix
# 'aa-bb-cc'*10
0.1474 strtr_aix
0.2261 strtr_thg
0.2366 strtr_gumbo
0.3226 strtr_kojiro
```
My own version (which is a slightly optimized version of Gumbo's):
```
def strtr(strng, replace):
buf, i = [], 0
while i < len(strng):
for s, r in replace.items():
if strng[i:len(s)+i] == s:
buf.append(r)
i += len(s)
break
else:
buf.append(strng[i])
i += 1
return ''.join(buf)
```
Complete codes and timings: <https://gist.github.com/2889181> | 2012/06/07 | [
"https://Stackoverflow.com/questions/10931150",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/989121/"
] | ```
def strtr(strng, replace):
if replace and strng:
s, r = replace.popitem()
return r.join(strtr(subs, dict(replace)) for subs in strng.split(s))
return strng
j=strtr('aa-bb-cc', {'aa': 'bbz', 'bb': 'x', 'cc': 'y'})
assert j=='bbz-x-y', j
``` | The answers on this thread are so outdated. Here we go...
### Option #1: Use the str.format() function to handle this:
```
"Hello there {first_name} {last_name}".format(first_name="Bob", last_name="Roy")
```
### Option #2: Use the Template class
```
from string import Template
t = Template('Hello there $first_name $last_name')
t.substitute(first_name="Bob", last_name="Roy")
```
Reference:
[Python String Formatting Best Practices](https://realpython.com/python-string-formatting/) |
36,900,272 | Being a complete beginner in Python, I decided to install the Python 3.4.4 interpreter, along with the PyDev plugin for the Eclipse IDE. I am also using Windows 10.
I have encountered a problem regarding certain imports, namely: `from PIL import Image, ImageTk`, which is apparently an unresolved import.
I have looked at certain questions which were similar to my situation, but most of them gave a solution like installing packages from Linux's apt-get. Here are some topics I visited:
1. [Uninstalling and reinstalling Pillow and pip,](https://stackoverflow.com/questions/26505958/why-cant-python-import-image-from-pil)
2. [Installing pillow-imaging-tk on Ubuntu,](https://stackoverflow.com/questions/22788454/importerror-no-module-named-imagingtk/22788542#22788542)
3. [This topic which left me very confused,](https://stackoverflow.com/questions/8209819/unresolved-import-error-with-pydev-while-using-from-import-as)
4. [This one seemed helpful, but is on Unix OS again](https://stackoverflow.com/questions/6171749/how-do-i-add-pil-to-pydev-in-eclipse-so-i-could-import-it-and-use-it-in-my-proj)
So please, could someone explain to me why I am seeing this error, and how could I correct it if I absolutely want to use Eclipse, Pydev, windows 10 and Python 3. | 2016/04/27 | [
"https://Stackoverflow.com/questions/36900272",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4375983/"
] | Found the solution, here's what I did:
1. Set the PYTHONPATH [like it is shown in this article](https://stackoverflow.com/questions/3701646/how-to-add-to-the-pythonpath-in-windows-7/4855685#4855685), make sure python.exe is accessible via cmd,
2. Via cmd, type `pip install pillow`. Alternatively, you can enter the same command from Windows+R,
3. (Not sure if relevant step) Via Eclipse, `Windows->Preferences->PyDev->PythonInterpreter`, remove your interpreter to re-add it,
4. Restart eclipse. | For Python import problems in PyDev, the project web site has a page on [interpreter configuration](http://www.pydev.org/manual_101_interpreter.html) that is a good place to start. I recently had a similar problem that I solved by adding a module to the forced builtins tab. |
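One hedged way to sanity-check the interpreter itself, independent of Eclipse:

```python
# If find_spec returns None, the interpreter PyDev is configured with
# is not the one that pip installed Pillow into.
import importlib.util

spec = importlib.util.find_spec("PIL")
print("Pillow visible to this interpreter:", spec is not None)
```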
21,845,390 | Hello friends, I just started to use GitHub and I want to know whether it is possible to download a GitHub repository to my local computer by using the GitHub API or API libraries (i.e. the Python library "pygithub3" for the GitHub API) | 2014/02/18 | [
"https://Stackoverflow.com/questions/21845390",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3321823/"
] | Using [`github3.py`](http://github3py.rtfd.org/) you can clone all of your repositories (including forks and private repositories) by doing:
```
import github3
import subprocess
g = github3.login('username', 'password')
for repo in g.iter_repos(type='all'):
subprocess.call(['git', 'clone', repo.clone_url])
```
If you're looking to clone an arbitrary repository you can do this:
```
import github3
import subprocess
r = github3.repository('owner', 'repository_name')
subprocess.call(['git', 'clone', r.clone_url])
```
pygithub3 has not been actively developed in over a year. I would advise not using it, since it is unmaintained and missing a large number of the additions GitHub has made to their API since then. | As illustrated in [this Gist](https://gist.github.com/jharjono/1159239), the simplest solution is simply to call git clone.
```python
#!/usr/bin/env python
# Script to clone all the github repos that a user is watching
import requests
import json
import subprocess
# Grab all the URLs of the watched repo
user = 'jharjono'
r = requests.get("http://github.com/api/users/%s/subscriptions" % (user))
repos = json.loads(r.content)
urls = [repo['url'] for repo in repos['repositories']]
# Clone them all
for url in urls:
cmd = 'git clone ' + url
pipe = subprocess.Popen(cmd, shell=True)
pipe.wait()
print "Finished cloning %d watched repos!" % (len(urls))
```
[This gist](https://gist.github.com/decause/5777114#file-list-all-repos-py), which uses **[pygithub3](http://pygithub3.readthedocs.org/en/latest/)**, will call git clone on the repos it finds:
```python
#!/usr/bin/env python
import pygithub3
gh = None
def gather_clone_urls(organization, no_forks=True):
all_repos = gh.repos.list(user=organization).all()
for repo in all_repos:
# Don't print the urls for repos that are forks.
if no_forks and repo.fork:
continue
yield repo.clone_url
if __name__ == '__main__':
gh = pygithub3.Github()
clone_urls = gather_clone_urls("gittip")
for url in clone_urls:
print url
``` |
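As a hedged, modernized sketch of the same idea (endpoint and field names as in the public REST v3 API; unauthenticated, so rate-limited):

```python
import json
import urllib.request

REPOS_URL = "https://api.github.com/users/{user}/repos"

def clone_commands(repos):
    # Turn repo dicts (as returned by the API) into git clone commands.
    return ["git clone " + repo["clone_url"] for repo in repos]

def fetch_repos(user):
    # Network call: list the user's public repositories.
    with urllib.request.urlopen(REPOS_URL.format(user=user)) as resp:
        return json.load(resp)
```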
36,238,155 | I have a script in python that consists of multiple lists of functions, and at the end of every list I want to put a back function that will let me return to the beginning of the script and choose another list. For example:
```
list = ("1. List of all users",
"2. List of all groups",
"3. Reset password",
"4. Create new user",
"5. Create new group",
"6. List all kernel drivers",
"7. List all mounts",
"8. Mount a folder",
"9. Exit")
for i in list:
print(i)
```
And if I choose 1 another list opens:
```
list = "1) Show user Groups \n2) Show user ID \n3) Show user aliases \n4) Add new aliases \n5) Change password \n6) Back"
print
print list
```
A more specific example. | 2016/03/26 | [
"https://Stackoverflow.com/questions/36238155",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5974999/"
] | Maybe your EditTexts are not initialized, you need something like `quantity1 = (EditText) findViewById(R.id.YOUR_EDIT_TEXT_ID)` for both. | Check what you are passing...check my example.
```
package general;
public class TestNumberFormat {
public static void main(String[] args){
String addquantity = "40";
String subquantity = "30";
int final_ = Integer.parseInt(addquantity) - Integer.parseInt(subquantity);
System.out.println("PRINT :" + final_);
String addquantity1 = "D40";
String subquantity1 = "D30";
int final1_ = Integer.parseInt(addquantity1) - Integer.parseInt(subquantity1);
System.out.println("PRINT :" + final1_);
}
}
```
Output:
PRINT :10
```
Exception in thread "main" java.lang.NumberFormatException: For input string: "D40"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:580)
at java.lang.Integer.parseInt(Integer.java:615)
at general.TestNumberFormat.main(TestNumberFormat.java:13)
``` |
36,238,155 | I have a script in python that consists of multiple lists of functions, and at the end of every list I want to put a back function that will let me return to the beginning of the script and choose another list. For example:
```
list = ("1. List of all users",
"2. List of all groups",
"3. Reset password",
"4. Create new user",
"5. Create new group",
"6. List all kernel drivers",
"7. List all mounts",
"8. Mount a folder",
"9. Exit")
for i in list:
print(i)
```
And if I choose 1 another list opens:
```
list = "1) Show user Groups \n2) Show user ID \n3) Show user aliases \n4) Add new aliases \n5) Change password \n6) Back"
print
print list
```
A more specific example. | 2016/03/26 | [
"https://Stackoverflow.com/questions/36238155",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5974999/"
] | Maybe your EditTexts are not initialized, you need something like `quantity1 = (EditText) findViewById(R.id.YOUR_EDIT_TEXT_ID)` for both. | Maybe you have a space in your string.
Use the following:
```
String addquantity = quantity1.getText().toString().trim();
String subquantity = quantity2.getText().toString().trim();
``` |
66,602,480 | I am learning fastapi, and I am starting a uvicorn server on localhost. Whenever there is an error/exception, I am not getting the traceback.
All I am getting is : `INFO: 127.0.0.1:56914 - "POST /create/user/ HTTP/1.1" 500 Internal Server Error`
So, it is difficult to debug. I am trying out the logging module of Python:
```
import logging
log = logging.getLogger("uvicorn")
log.setLevel(logging.DEBUG)
```
I have also tried starting uvicorn with debug parameter
```
if __name__ == "__main__":
dev = 1
print("printing")
if dev == 1:
uvicorn.run('main:app', host="127.0.0.1", port=5000, log_level="info", reload=True, debug=True)
if dev == 2:
uvicorn.run('main:app', host="127.0.0.1", port=5000, log_level="info", workers=2)
```
Still the same problem persists. I am in development phase and I need the error traceback, please guide. | 2021/03/12 | [
"https://Stackoverflow.com/questions/66602480",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14354318/"
] | Solution / Fix
==============
Now, when you execute uvicorn via the in-Python command `uvicorn.run(app)`, this is your next move:
take the uvicorn default logging config and add the handler from your application to it:
```py
config = {}
# this is default (site-packages\uvicorn\main.py)
config['log_config'] = {
'version': 1, 'disable_existing_loggers': True,
'formatters': {'default': {'()': 'uvicorn.logging.DefaultFormatter', 'fmt': '%(levelprefix)s %(message)s', 'use_colors': None},
'access': {'()': 'uvicorn.logging.AccessFormatter', 'fmt': '%(levelprefix)s %(client_addr)s - "%(request_line)s" %(status_code)s'}},
'handlers': {'default': {'formatter': 'default', 'class': 'logging.StreamHandler', 'stream': 'ext://sys.stderr'},
'access': {'formatter': 'access', 'class': 'logging.StreamHandler', 'stream': 'ext://sys.stdout'}},
'loggers': {'uvicorn': {'handlers': ['default'], 'level': 'INFO'},
'uvicorn.error': {'level': 'INFO', 'handlers': ['default'], 'propagate': True},
'uvicorn.access': {'handlers': ['access'], 'level': 'INFO', 'propagate': False},
},
}
# add your handler to it (in my case, I'm working with quart, but you can do this with Flask etc. as well, they're all the same)
config['log_config']['loggers']['quart'] = {'handlers': ['default'], 'level': 'INFO'}
```
This will keep the logger from quart/Flask/etc. enabled when uvicorn starts. Alternatively, you can set `disable_existing_loggers` to False. But this will keep all loggers enabled, and then you will probably get more messages than you wish.
Finally, pass the config to uvicorn:
```
uvicorn.run(app, **config)
```
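The `disable_existing_loggers` effect itself can be reproduced with the stdlib alone (a hedged sketch, independent of uvicorn; the logger name is made up):

```python
import logging
import logging.config

app_logger = logging.getLogger("myframework")  # exists before dictConfig runs

logging.config.dictConfig({
    "version": 1,
    "disable_existing_loggers": True,  # uvicorn's default setting
    "handlers": {},
    "loggers": {},
})

print(app_logger.disabled)  # True: the pre-existing logger was disabled
```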
Explanation
===========
When uvicorn's logging config has `disable_existing_loggers` set to True, all other loggers will be disabled. This also means that the logger quart and Flask use (which prints the traceback) gets disabled. You can either set the config to NOT disable other loggers, or re-add them to the config so uvicorn doesn't disable them in the first place. | For "500 Internal Server Error" occurring during a post request, if you invoke FastAPI in debug mode:
```
app = FastAPI(debug=True)
```
Retry the request with Chrome dev tools Network tab open. When you see the failing request show up (note - my route url was '/rule' here):
[](https://i.stack.imgur.com/eHfvi.png)
Click on it, and you'll see the Traceback text in the Fetch/XHR / Response tab window.
[](https://i.stack.imgur.com/6ivcF.png)
You can quickly verify it by inserting "assert False" in your post handler. |
35,744,408 | I have an astrophysics project using a data file. It's a CSV data file.
I'm using the following code in Python:
```
#!/usr/bin/python
# coding: utf-8
import numpy as np
# Fichier contenant le champ 169 #
file = '/astromaster/home/xxx/Bureau/Stage/Champs/Field_169/Field169_combined_final_roughcal.csv'
###############################
# Lecture du fichier field169 #
###############################
field169 = np.loadtxt(fname = file,
dtype = [('ID',object),
('RA','f10'),
('DEC','f10'),
('NDET','i2'),
('DEPTHFAG','i2'),
('SEPINDX',object),
('SEPFINDX',object),
('U','f10'),
('UERR','f10'),
('G','f10'),
('GERR','f10'),
('R','f10'),
('RERR','f10'),
('I','f10'),
('IERR','f10'),
('Z','f10'),
('ZERR','f10'),
('CHI','f10'),
('SHARP','f10'),
('FLAG','i3'),
('PROB','f10'),
('EBV','f10')],
delimiter=",")
print field169
```
But when I print this array, I only get half of my array, and I don't understand why.
Thanks for your answers :) | 2016/03/02 | [
"https://Stackoverflow.com/questions/35744408",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | The [recommendation](https://docs.openshift.org/latest/creating_images/guidelines.html#openshift-specific-guidelines) of Red Hat is to make files group owned by GID 0 - the user in the container is always in the root group. You won't be able to chown, but you can selectively expose which files to write to.
A second option:
*In order to allow images that use either named users or the root (0) user to build in OpenShift, you can add the project’s builder service account (system:serviceaccount::builder) to the privileged security context constraint (SCC). Alternatively, you can allow all images to run as any user.* | Can you see the logs using
```
kubectl logs <podname> -p
```
This should give you the errors explaining why the pod failed.
35,744,408 | I have an astrophysics project using a data file. It's a CSV data file.
I'm using the following code in Python:
```
#!/usr/bin/python
# coding: utf-8
import numpy as np
# Fichier contenant le champ 169 #
file = '/astromaster/home/xxx/Bureau/Stage/Champs/Field_169/Field169_combined_final_roughcal.csv'
###############################
# Lecture du fichier field169 #
###############################
field169 = np.loadtxt(fname = file,
dtype = [('ID',object),
('RA','f10'),
('DEC','f10'),
('NDET','i2'),
('DEPTHFAG','i2'),
('SEPINDX',object),
('SEPFINDX',object),
('U','f10'),
('UERR','f10'),
('G','f10'),
('GERR','f10'),
('R','f10'),
('RERR','f10'),
('I','f10'),
('IERR','f10'),
('Z','f10'),
('ZERR','f10'),
('CHI','f10'),
('SHARP','f10'),
('FLAG','i3'),
('PROB','f10'),
('EBV','f10')],
delimiter=",")
print field169
```
But when I print this array, I only get half of my array, and I don't understand why.
Thanks for your answers :) | 2016/03/02 | [
"https://Stackoverflow.com/questions/35744408",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | You are seeing this because whatever process your image is starting isn't a long-running process and finds no TTY, so the container just exits and gets restarted repeatedly, which is a "crash loop" as far as OpenShift is concerned.
Your Dockerfile mentions the following:
ENTRYPOINT ["container-entrypoint"]
What is this "container-entrypoint" actually doing?
You need to check.
Did you use the -p or --previous flag with oc logs to see if the logs from the previous attempt to start the pod show anything? | The [recommendation](https://docs.openshift.org/latest/creating_images/guidelines.html#openshift-specific-guidelines) of Red Hat is to make files group owned by GID 0 - the user in the container is always in the root group. You won't be able to chown, but you can selectively expose which files to write to.
A second option:
*In order to allow images that use either named users or the root (0) user to build in OpenShift, you can add the project’s builder service account (system:serviceaccount::builder) to the privileged security context constraint (SCC). Alternatively, you can allow all images to run as any user.* |
35,744,408 | I have an astrophysics project using a data file. It's a CSV data file.
I'm using the following code in Python:
```
#!/usr/bin/python
# coding: utf-8
import numpy as np
# Fichier contenant le champ 169 #
file = '/astromaster/home/xxx/Bureau/Stage/Champs/Field_169/Field169_combined_final_roughcal.csv'
###############################
# Lecture du fichier field169 #
###############################
field169 = np.loadtxt(fname = file,
dtype = [('ID',object),
('RA','f10'),
('DEC','f10'),
('NDET','i2'),
('DEPTHFAG','i2'),
('SEPINDX',object),
('SEPFINDX',object),
('U','f10'),
('UERR','f10'),
('G','f10'),
('GERR','f10'),
('R','f10'),
('RERR','f10'),
('I','f10'),
('IERR','f10'),
('Z','f10'),
('ZERR','f10'),
('CHI','f10'),
('SHARP','f10'),
('FLAG','i3'),
('PROB','f10'),
('EBV','f10')],
delimiter=",")
print field169
```
But when I print this array, I only get half of my array, and I don't understand why.
Thanks for your answers :) | 2016/03/02 | [
"https://Stackoverflow.com/questions/35744408",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | The [recommendation](https://docs.openshift.org/latest/creating_images/guidelines.html#openshift-specific-guidelines) of Red Hat is to make files group owned by GID 0 - the user in the container is always in the root group. You won't be able to chown, but you can selectively expose which files to write to.
A second option:
*In order to allow images that use either named users or the root (0) user to build in OpenShift, you can add the project’s builder service account (system:serviceaccount::builder) to the privileged security context constraint (SCC). Alternatively, you can allow all images to run as any user.* | I am able to resolve this by creating a script as "run.sh" with the content at end:
```
while :; do
sleep 300
done
```
and in Dockerfile:
```
ADD run.sh /run.sh
RUN chmod +x /*.sh
CMD ["/run.sh"]
```
This way it works. Thanks everybody for pointing out the reason, which helped me find the resolution. But one doubt I still have: why does the process get exited in OpenShift only in this case? I have tried running a Tomcat server in the same way, and it just works fine without having sleep in the script.
35,744,408 | I have an astrophysics project using a data file. It's a CSV data file.
I'm using the following code in Python:
```
#!/usr/bin/python
# coding: utf-8
import numpy as np
# Fichier contenant le champ 169 #
file = '/astromaster/home/xxx/Bureau/Stage/Champs/Field_169/Field169_combined_final_roughcal.csv'
###############################
# Lecture du fichier field169 #
###############################
field169 = np.loadtxt(fname = file,
dtype = [('ID',object),
('RA','f10'),
('DEC','f10'),
('NDET','i2'),
('DEPTHFAG','i2'),
('SEPINDX',object),
('SEPFINDX',object),
('U','f10'),
('UERR','f10'),
('G','f10'),
('GERR','f10'),
('R','f10'),
('RERR','f10'),
('I','f10'),
('IERR','f10'),
('Z','f10'),
('ZERR','f10'),
('CHI','f10'),
('SHARP','f10'),
('FLAG','i3'),
('PROB','f10'),
('EBV','f10')],
delimiter=",")
print field169
```
But when I print this array, I only get half of my array, and I don't understand why.
Thanks for your answers :) | 2016/03/02 | [
"https://Stackoverflow.com/questions/35744408",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | You are seeing this because whatever process your image is starting isn't a long-running process and finds no TTY, so the container just exits and gets restarted repeatedly, which is a "crash loop" as far as OpenShift is concerned.
Your Dockerfile mentions the following:
ENTRYPOINT ["container-entrypoint"]
What is this "container-entrypoint" actually doing?
You need to check.
Did you use the -p or --previous flag with oc logs to see if the logs from the previous attempt to start the pod show anything? | Can you see the logs using
```
kubectl logs <podname> -p
```
This should give you the errors explaining why the pod failed.
35,744,408 | I have an astrophysics project using a data file. It's a CSV data file.
I'm using the following code in Python:
```
#!/usr/bin/python
# coding: utf-8
import numpy as np
# Fichier contenant le champ 169 #
file = '/astromaster/home/xxx/Bureau/Stage/Champs/Field_169/Field169_combined_final_roughcal.csv'
###############################
# Lecture du fichier field169 #
###############################
field169 = np.loadtxt(fname = file,
dtype = [('ID',object),
('RA','f10'),
('DEC','f10'),
('NDET','i2'),
('DEPTHFAG','i2'),
('SEPINDX',object),
('SEPFINDX',object),
('U','f10'),
('UERR','f10'),
('G','f10'),
('GERR','f10'),
('R','f10'),
('RERR','f10'),
('I','f10'),
('IERR','f10'),
('Z','f10'),
('ZERR','f10'),
('CHI','f10'),
('SHARP','f10'),
('FLAG','i3'),
('PROB','f10'),
('EBV','f10')],
delimiter=",")
print field169
```
But when I print this array, I only get half of my array, and I don't understand why.
Thanks for your answers :) | 2016/03/02 | [
"https://Stackoverflow.com/questions/35744408",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | You are seeing this because whatever process your image is starting isn't a long-running process and finds no TTY, so the container just exits and gets restarted repeatedly, which is a "crash loop" as far as OpenShift is concerned.
Your Dockerfile mentions the following:
ENTRYPOINT ["container-entrypoint"]
What is this "container-entrypoint" actually doing?
You need to check.
Did you use the -p or --previous flag with oc logs to see if the logs from the previous attempt to start the pod show anything? | I am able to resolve this by creating a script called "run.sh" with the following content at the end:
```
while :; do
sleep 300
done
```
and in Dockerfile:
```
ADD run.sh /run.sh
RUN chmod +x /*.sh
CMD ["/run.sh"]
```
This way it works. Thanks everybody for pointing out the reason, which helped me find the resolution. But one doubt I still have: why does the process get exited in OpenShift only in this case? I have tried running a Tomcat server in the same way, and it just works fine without having sleep in the script.
62,056,688 | ```
eleUserMessage = driver.find_element_by_id("xxxxxxx")
eleUserMessage.send_keys(email)
```
I'm trying to use Selenium with Python to auto-fill a form with my details. So far I have read in my info from a .txt file and stored it in variables for easy reference. When I find the element and try to fill it out with send_keys, after each send_keys line the form highlights the other fields that aren't filled in and says I need to fill them in before I submit. My code to submit the info comes well after this code segment.
Why does send_keys try to submit the form or even send the enter key when I didn't tell it to? And how do I stop this from happening?
The main issue this is causing is that the element ids change when they are highlighted in red, since they are required fields. I need to get around that somehow. Please let me know what I can do. | 2020/05/28 | [
"https://Stackoverflow.com/questions/62056688",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13631666/"
] | Because you are storing your details in a text file, it is likely that when you create the email variable there is a newline at the end of the string as this is how text files work. This would explain why the form gets submitted because it is the equivalent of typing the email followed by the enter key. You can try to fix this by using
```py
eleUserMessage.send_keys(email.rstrip())
```
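For reference, a quick sketch of that behavior (plain Python, no Selenium needed; the email value is made up):

```python
# Simulating a line read from a text file: the trailing newline
# is what effectively gets typed as "enter" by send_keys.
email = "user@example.com\n"

print(repr(email.rstrip()))  # 'user@example.com'
```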
rstrip() is a built-in string method; by default, with no arguments, it strips whitespace (including newlines) from the right side. | If you just want to fill out a form, then submit the finished form,
you can try:
```
eleUserMessage = driver.find_element_by_xpath("//select[@name='name']")
all_options = eleUserMessage.find_elements_by_tag_name("option")
for option in all_options:
print("Value is: %s" % option.get_attribute("value"))
option.click()
eleUserMessage.send_keys(email)
``` |
This question may seem very basic; however, I would like to improve the code I have written. I have a function that will need either 2 or 3 parameters, depending on some other conditions. I'm checking the length and passing either 2 or 3 with an if statement (see code). I'm sure there must be a better and more compact way to rewrite this in one line instead of using IIFs. Sorry, I'm fairly new to Python.
```py
dist = distr.gum
# ... or: dist = distr.gev
# parFitHist has length 2 for distr.gum or 3 for distr.gev
parFitHist = dist.fit(ppt_hist)
# now I check if
if len(parFitBase) > 2:
# 3 parameters for distr.gev
t_mp = dist.cdf(ppt_fut, parFitFut[0], parFitFut[1], parFitBase[2])
else:
# 2 parameters for distr.gum
t_mp = dist.cdf(ppt_fut, parFitFut[0], parFitFut[1])
``` | 2019/07/30 | [
"https://Stackoverflow.com/questions/57275797",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2255752/"
] | Not sure what data types you have or what your method looks like, but with \*args you can solve that:
```
def cdf(ppt_fut, *params):
print(ppt_fut)
print(params)
```
Then you can call it like that:
```
cdf(1, 2, 3, 4) # -> prints: 1 (2,3,4)
cdf(1, 2, 3) # -> prints: 1 (2,3)
```
The `params` is in this case a tuple with all arguments except the first one. | You can unpack elements of a list into a function call with `*`.
You don't need to know how many items there are in the list to do this. But this means you can introduce errors if the number of items doesn't match the number of function arguments. It's therefore a good idea to check your data for some basic sanity as well.
For example:
```
if 1 < len(parFitBase) < 4:
t_mp = dist.cdf(ppt_fut, *parFitFut)
else:
raise ValueError('Array must have length 2 or 3')
```
You can read more about that here: [Unpacking Argument Lists](https://docs.python.org/2/tutorial/controlflow.html#unpacking-argument-lists) |
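To make the unpacking concrete for the 2-vs-3 parameter case, here is a minimal sketch (the `cdf` below is a hypothetical stand-in with the same calling shape, not the real distribution object):

```python
def cdf(ppt_fut, loc, scale, shape=0.0):
    # hypothetical stand-in: just echoes what it was called with
    return (ppt_fut, loc, scale, shape)

par_fit_fut_gum = [10.0, 2.0]       # 2 parameters (like distr.gum)
par_fit_fut_gev = [10.0, 2.0, 0.5]  # 3 parameters (like distr.gev)

print(cdf(1.0, *par_fit_fut_gum))  # (1.0, 10.0, 2.0, 0.0)
print(cdf(1.0, *par_fit_fut_gev))  # (1.0, 10.0, 2.0, 0.5)
```

The same `cdf(ppt_fut, *par_fit_fut)` call works for both lengths, which is what removes the if/else.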
This question may seem very basic; however, I would like to improve the code I have written. I have a function that will need either 2 or 3 parameters, depending on some other conditions. I'm checking the length and passing either 2 or 3 with an if statement (see code). I'm sure there must be a better and more compact way to rewrite this in one line instead of using IIFs. Sorry, I'm fairly new to Python.
```py
dist = distr.gum
# ... or: dist = distr.gev
# parFitHist has length 2 for distr.gum or 3 for distr.gev
parFitHist = dist.fit(ppt_hist)
# now I check if
if len(parFitBase) > 2:
# 3 parameters for distr.gev
t_mp = dist.cdf(ppt_fut, parFitFut[0], parFitFut[1], parFitBase[2])
else:
# 2 parameters for distr.gum
t_mp = dist.cdf(ppt_fut, parFitFut[0], parFitFut[1])
``` | 2019/07/30 | [
"https://Stackoverflow.com/questions/57275797",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2255752/"
] | Not sure what data types you have or what your method looks like, but with \*args you can solve that:
```
def cdf(ppt_fut, *params):
print(ppt_fut)
print(params)
```
Then you can call it like that:
```
cdf(1, 2, 3, 4) # -> prints: 1 (2,3,4)
cdf(1, 2, 3) # -> prints: 1 (2,3)
```
The `params` is in this case a tuple with all arguments except the first one. | Welcome to Python. Please consider using [pep8](https://www.python.org/dev/peps/pep-0008/#introduction) for formatting your code. It is a great way to respect conventions and make your code more readable for other people.
[Variable names](https://www.python.org/dev/peps/pep-0008/#function-and-variable-names) should be lower case and separated by underscores.
Regarding your question, one way of improving it could be to pass `len(par_fit_base)` to your function and doing the if clause there. This would make it more fluent to read the code. This is assuming that `par_fit_fut` always has the same length, regardless of `par_fit_base`.
```
def my_function(par_fit_fut, n):
    if n > 2:
        pass  # one set of routines using par_fit_fut[0:3]
    else:
        pass  # another set of routines using par_fit_fut[0:2]

my_function(par_fit_fut, len(par_fit_base))
``` |
26,328,648 | [Answered first part, please scroll for second question edit]
Currently coding a web scraper in python.
I have the following example string:
`Columbus Blue Jackets at Buffalo Sabres - 10/09/2014`
I want to split it so that I have [Columbus Blue Jackets, Buffalo Sabres, 10/09/2014]
I read up on regular expressions including a few answers on this site but can't figure out how to format my particular example. The best I could come up with was something like this, although it doesn't work.
`re.split('\w+\s\w\w\s\w+\s\.\s\w+', teams)`
My second try is:
`re.split("\w+\s'at'\s\w+\s'-'\s\w+", teams)`, but I'm not sure if you can even enter exact strings like ['at' and '-'] inside a regex function.
Please let me know where I'm going wrong with the regex function or if there's another way to delimit my particular example in python.
(Also note that the team names can be either 2 or 3 words for each team, eg. `Montreal Canadiens at Buffalo Sabres`)
***EDIT:***
`re.split(r"\s+at\s+|\s+-\s+", teams)` seems to do the trick. However, I now have a second problem. Testing it in its own file, this works, but in my program for some reason it doesn't.
Code:
```
def getTable(url):
currentMatchup = Crawl.setup(url)
teams = currentMatchup.title.string
print(teams)
re.split(r"\s+at\s+|\s+-\s+", teams)
print(teams)
```
The output is:
```
Columbus Blue Jackets at Buffalo Sabres - 10/09/2014
Columbus Blue Jackets at Buffalo Sabres - 10/09/2014
```
Any ideas? | 2014/10/12 | [
"https://Stackoverflow.com/questions/26328648",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2621303/"
] | ```
print re.split(r"\s+at\s+|\s+-\s+",teams)
```
Output:`['Columbus Blue Jackets', 'Buffalo Sabres', '10/09/2014']`
Try this. You can do it in one line. Here `teams` is your string. This will give you the desired results.
Edit:
```
def getTable(url):
currentMatchup = Crawl.setup(url)
teams = currentMatchup.title.string
print(teams)
y=re.split(r"\s+at\s+|\s+-\s+", teams)
print(y)
``` | Capture them into groups with lazy dot-match-all repetition.
```
(.*?)\s+at\s+(.*?)\s+-\s+(\d{2}/\d{2}/\d{4})
```
[***Demo***](http://regex101.com/r/lU3wV3/1)
---
```
import re;
match = re.search(r"(.*?)\s+at\s+(.*?)\s+-\s+(\d{2}/\d{2}/\d{4})", "Columbus Blue Jackets at Buffalo Sabres - 10/09/2014")
print match.groups()
# ('Columbus Blue Jackets', 'Buffalo Sabres', '10/09/2014')
``` |
26,328,648 | [Answered first part, please scroll for second question edit]
Currently coding a web scraper in python.
I have the following example string:
`Columbus Blue Jackets at Buffalo Sabres - 10/09/2014`
I want to split it so that I have [Columbus Blue Jackets, Buffalo Sabres, 10/09/2014]
I read up on regular expressions including a few answers on this site but can't figure out how to format my particular example. The best I could come up with was something like this, although it doesn't work.
`re.split('\w+\s\w\w\s\w+\s\.\s\w+', teams)`
My second try is:
`re.split("\w+\s'at'\s\w+\s'-'\s\w+", teams)`, but I'm not sure if you can even enter exact strings like ['at' and '-'] inside a regex function.
Please let me know where I'm going wrong with the regex function or if there's another way to delimit my particular example in python.
(Also note that the team names can be either 2 or 3 words for each team, eg. `Montreal Canadiens at Buffalo Sabres`)
***EDIT:***
`re.split(r"\s+at\s+|\s+-\s+", teams)` seems to do the trick. However, I now have a second problem. Testing it in its own file, this works, but in my program for some reason it doesn't.
Code:
```
def getTable(url):
currentMatchup = Crawl.setup(url)
teams = currentMatchup.title.string
print(teams)
re.split(r"\s+at\s+|\s+-\s+", teams)
print(teams)
```
The output is:
```
Columbus Blue Jackets at Buffalo Sabres - 10/09/2014
Columbus Blue Jackets at Buffalo Sabres - 10/09/2014
```
Any ideas? | 2014/10/12 | [
"https://Stackoverflow.com/questions/26328648",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2621303/"
] | You could split the input string according to `<space>at<space>` or `<space>-<space>`,
```
>>> s = "Columbus Blue Jackets at Buffalo Sabres - 10/09/2014"
>>> re.split(r'\s+(?:at|-)\s+', s)
['Columbus Blue Jackets', 'Buffalo Sabres', '10/09/2014']
>>> s = 'Montreal Canadiens at Buffalo Sabres - 10/09/2014'
>>> re.split(r'\s+(?:at|-)\s+', s)
['Montreal Canadiens', 'Buffalo Sabres', '10/09/2014']
```
Through `re.findall` function,
```
>>> s = "Columbus Blue Jackets at Buffalo Sabres - 10/09/2014"
>>> re.findall(r'\b[A-Z]\S+(?:\s+[A-Z]\S+){1,}|(?<=-\s)\S+', s)
['Columbus Blue Jackets', 'Buffalo Sabres', '10/09/2014']
``` | ```
print re.split(r"\s+at\s+|\s+-\s+",teams)
```
Output:`['Columbus Blue Jackets', 'Buffalo Sabres', '10/09/2014']`
Try this. You can do it in one line. Here `teams` is your string. This will give you the desired results.
Edit:
```
def getTable(url):
currentMatchup = Crawl.setup(url)
teams = currentMatchup.title.string
print(teams)
y=re.split(r"\s+at\s+|\s+-\s+", teams)
print(y)
``` |
26,328,648 | [Answered first part, please scroll for second question edit]
Currently coding a web scraper in python.
I have the following example string:
`Columbus Blue Jackets at Buffalo Sabres - 10/09/2014`
I want to split it so that I have [Columbus Blue Jackets, Buffalo Sabres, 10/09/2014]
I read up on regular expressions including a few answers on this site but can't figure out how to format my particular example. The best I could come up with was something like this, although it doesn't work.
`re.split('\w+\s\w\w\s\w+\s\.\s\w+', teams)`
My second try is:
`re.split("\w+\s'at'\s\w+\s'-'\s\w+", teams)`, but I'm not sure if you can even enter exact strings like ['at' and '-'] inside a regex function.
Please let me know where I'm going wrong with the regex function or if there's another way to delimit my particular example in python.
(Also note that the team names can be either 2 or 3 words for each team, eg. `Montreal Canadiens at Buffalo Sabres`)
***EDIT:***
`re.split(r"\s+at\s+|\s+-\s+", teams)` seems to do the trick. However, I now have a second problem. Testing it in its own file, this works, but in my program for some reason it doesn't.
Code:
```
def getTable(url):
currentMatchup = Crawl.setup(url)
teams = currentMatchup.title.string
print(teams)
re.split(r"\s+at\s+|\s+-\s+", teams)
print(teams)
```
The output is:
```
Columbus Blue Jackets at Buffalo Sabres - 10/09/2014
Columbus Blue Jackets at Buffalo Sabres - 10/09/2014
```
Any ideas? | 2014/10/12 | [
"https://Stackoverflow.com/questions/26328648",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2621303/"
] | You could split the input string according to `<space>at<space>` or `<space>-<space>`,
```
>>> s = "Columbus Blue Jackets at Buffalo Sabres - 10/09/2014"
>>> re.split(r'\s+(?:at|-)\s+', s)
['Columbus Blue Jackets', 'Buffalo Sabres', '10/09/2014']
>>> s = 'Montreal Canadiens at Buffalo Sabres - 10/09/2014'
>>> re.split(r'\s+(?:at|-)\s+', s)
['Montreal Canadiens', 'Buffalo Sabres', '10/09/2014']
```
Through `re.findall` function,
```
>>> s = "Columbus Blue Jackets at Buffalo Sabres - 10/09/2014"
>>> re.findall(r'\b[A-Z]\S+(?:\s+[A-Z]\S+){1,}|(?<=-\s)\S+', s)
['Columbus Blue Jackets', 'Buffalo Sabres', '10/09/2014']
``` | Capture them into groups with lazy dot-match-all repetition.
```
(.*?)\s+at\s+(.*?)\s+-\s+(\d{2}/\d{2}/\d{4})
```
[***Demo***](http://regex101.com/r/lU3wV3/1)
---
```
import re;
match = re.search(r"(.*?)\s+at\s+(.*?)\s+-\s+(\d{2}/\d{2}/\d{4})", "Columbus Blue Jackets at Buffalo Sabres - 10/09/2014")
print match.groups()
# ('Columbus Blue Jackets', 'Buffalo Sabres', '10/09/2014')
``` |
63,557,957 | I am a beginner in python, pycharm and Linux, I want to open an existing Django project. But when I use "python manage.py runserver", I am getting a series of trace-back errors which I am attaching below.
I have installed all the LAMP stack i.e., Linux OS, Apache2 Web server,MariaDB and MYSQLclient with latest versions and have also tried updating the versions in requirements.txt. However, I haven't installed PhpMyAdmin yet, for time basis I would want to just use terminal for viewing my data tables.
Could you please explain me about the tracebacks and what can I do to run the program. It would be of great support if you can provide me with a solution.
Thank you.
Keep safe and kind regards,
SD.

 | 2020/08/24 | [
"https://Stackoverflow.com/questions/63557957",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14156220/"
] | The input pipeline of a dataset is always traced into a graph (as if you used [`@tf.function`](https://www.tensorflow.org/api_docs/python/tf/function)) to make it faster, which means, among other things, that you cannot use `.numpy()`. You can however use [`tf.numpy_function`](https://www.tensorflow.org/api_docs/python/tf/numpy_function) to access the data as a NumPy array within the graph:
```py
def transform(example):
# example will now by a NumPy array
str_example = example.decode("utf-8")
json_example = json.loads(str_example)
overall = json_example.get('overall', None)
text = json_example.get('reviewText', None)
return (overall, text)
line_dataset = tf.data.TextLineDataset(filenames = [file_path])
line_dataset = line_dataset.map(
    lambda row: tf.numpy_function(transform, [row], (tf.float32, tf.string)))  # inp is a list of tensors
for example in line_dataset.take(5):
print(example)
``` | A bit wordy, but try it like this:
```
def transform(example):
str_example = example.numpy().decode("utf-8")
json_example = json.loads(str_example)
overall = json_example.get('overall', None)
text = json_example.get('reviewText', None)
return (overall, text)
line_dataset = tf.data.TextLineDataset(filenames = [file_path])
line_dataset = line_dataset.map(
lambda input:
tf.py_function(transform, [input], (tf.float32, tf.string))
)
for example in line_dataset.take(5):
print(example)
```
This particular snippet works for any Python function, not only for numpy functions. So, if you need functions like `print`, `input` and so on, you can use this. You don't have to know all the details, but if you are interested, please ask me. :) |
2,293,968 | For my project, the role of the Lecturer (defined as a class) is to offer projects to students. Project itself is also a class. I have some global dictionaries, keyed by the unique numeric id's for lecturers and projects that map to objects.
Thus for the "lecturers" dictionary (currently):
```
lecturer[id] = Lecturer(lec_name, lec_id, max_students)
```
I'm currently reading in a white-space delimited text file that has been generated from a database. I have no direct access to the database so I haven't much say on how the file is formatted. Here's a fictionalised snippet that shows how the text file is structured. Please pardon the cheesiness.
```
0001 001 "Miyamoto, S." "Even Newer Super Mario Bros"
0002 001 "Miyamoto, S." "Legend of Zelda: Skies of Hyrule"
0003 002 "Molyneux, P." "Project Milo"
0004 002 "Molyneux, P." "Fable III"
0005 003 "Blow, J." "Ponytail"
```
The structure of each line is basically `proj_id, lec_id, lec_name, proj_name`.
Now, I'm currently reading the relevant data into the relevant objects. Thus, `proj_id` is stored in `class Project` whereas `lec_name` is a `class Lecturer` object, et al. The `Lecturer` and `Project` classes are not currently related.
However, as I read in each line from the text file, for that line, I wish to read in the project offered by the lecturer into the `Lecturer` class; I'm already reading the `proj_id` into the `Project` class. I'd like to create an object in `Lecturer` called `offered_proj` which should be a set or list of the projects offered by that lecturer. Thus whenever, for a line, I read in a new project under the same `lec_id`, `offered_proj` will be updated with that project. If I wanted to display a list of projects offered by a lecturer I'd ideally just want to use `print lecturers[lec_id].offered_proj`.
My Python isn't great and I'd appreciate it if someone could show me a way to do that. I'm not sure if it's better as a set or a list, as well.
`Update`
After the advice from [Alex Martelli](https://stackoverflow.com/questions/2293968/how-do-i-create-a-list-or-set-object-in-a-class-in-python/2293998#2293998) and [Oddthinking](https://stackoverflow.com/questions/2293968/how-do-i-create-a-list-or-set-object-in-a-class-in-python/2294002#2294002) I went back and made some changes and tried to print the results.
Here's the code snippet:
```
for line in csv_file:
proj_id = int(line[0])
lec_id = int(line[1])
lec_name = line[2]
proj_name = line[3]
projects[proj_id] = Project(proj_id, proj_name)
lecturers[lec_id] = Lecturer(lec_id, lec_name)
if lec_id in lecturers.keys():
lecturers[lec_id].offered_proj.add(proj_id)
print lec_id, lecturers[lec_id].offered_proj
```
The `print lecturers[lec_id].offered_proj` line prints the following output:
```
001 set([0001])
001 set([0002])
002 set([0003])
002 set([0004])
003 set([0005])
```
It basically feels like the set is being overwritten or somesuch. So if I try to print for a specific lecturer `print lec_id, lecturers[001].offered_proj` all I get is the last `proj_id` that has been read in. | 2010/02/19 | [
"https://Stackoverflow.com/questions/2293968",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/273875/"
] | `set` is better since you don't care about order and have no duplicate.
You can parse the file easily with the [csv](http://docs.python.org/library/csv.html?highlight=sv#module-csv) module (with a `delimiter` of `' '`).
Once you have the `lec_name` you must check if that lecturer's already known; for that purpose, keep a dictionary from `lec_name` to lecturer objects (that's just another reference to the same lecturer object which you also refer to from the `lecturer` dictionary). On finding a `lec_name` that's not in that dictionary you know it's a lecturer not previously seen, so make a new lecturer object (and stick it in both dicts) in that case only, with an empty set of offered courses. Finally, just `.add` the course to the current lecturer's `offered_proj`. It's really a pretty smooth flow.
Have you tried implementing this flow? If so, what problems have you had? Can you show us the relevant code -- should be a dozen lines or so, at most?
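For reference, that flow as a runnable toy sketch (a hypothetical minimal `Lecturer` class and made-up rows; keyed by `lec_id` for brevity):

```python
class Lecturer:
    def __init__(self, lec_id, lec_name):
        self.lec_id = lec_id
        self.lec_name = lec_name
        self.offered_proj = set()  # starts empty, filled as lines are read

# made-up (proj_id, lec_id, lec_name) rows standing in for the parsed file
rows = [(1, 1, "Miyamoto, S."), (2, 1, "Miyamoto, S."), (3, 2, "Molyneux, P.")]

lecturers = {}
for proj_id, lec_id, lec_name in rows:
    if lec_id not in lecturers:  # create only on first sight
        lecturers[lec_id] = Lecturer(lec_id, lec_name)
    lecturers[lec_id].offered_proj.add(proj_id)

print(lecturers[1].offered_proj)  # {1, 2}
print(lecturers[2].offered_proj)  # {3}
```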
**Edit**: since the OP has posted code now, I can spot the bug -- it's here:
```
lecturers[lec_id] = Lecturer(lec_id, lec_name)
if lec_id in lecturers.keys():
lecturers[lec_id].offered_proj.add(proj_id)
```
this is **unconditionally** creating a new lecturer object (trampling over the old one in the `lecturers` dict, if any) so of course the previous set gets tossed away. This is the code you need: **first** check, and create only if needed! (also, minor bug, **don't** check `in....keys()`, that's horribly inefficient - just check for presence in the dict). As follows:
```
if lec_id in lecturers:
thelec = lecturers[lec_id]
else:
thelec = lecturers[lec_id] = Lecturer(lec_id, lec_name)
thelec.offered_proj.add(proj_id)
```
You could express this in several different ways, but I hope this is clear enough. Just for completeness, the way I would normally phrase it (to avoid two lookups into the dictionary) is as follows:
```
thelec = lecturers.get(lec_id)
if thelec is None:
thelec = lecturers[lec_id] = Lecturer(lec_id, lec_name)
thelec.offered_proj.add(proj_id)
``` | Sets are useful when you want to guarantee you only have one instance of each item. They are also faster than a list at calculating whether an item is present in the collection.
Lists are faster at adding items, and also have an ordering.
This sounds like you would like a set. You sound like you are very close already.
In `Lecturer.__init__`, add a line:
```
self.offered_proj = set()
```
That will make an empty set.
When you read in the project, you can simply add to that set:
```
lecturer.offered_proj.add(project)
```
And you can print, just as you suggest (although you may like to pretty it up.) |
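A quick illustration of the difference between the two collection types (plain Python, made-up ids):

```python
offered_list = []
offered_set = set()

for proj_id in [101, 102, 102, 103]:
    offered_list.append(proj_id)  # keeps the duplicate
    offered_set.add(proj_id)      # duplicate add is a no-op

print(offered_list)                    # [101, 102, 102, 103]
print(offered_set == {101, 102, 103})  # True
print(102 in offered_set)              # True (fast membership test)
```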
2,293,968 | For my project, the role of the Lecturer (defined as a class) is to offer projects to students. Project itself is also a class. I have some global dictionaries, keyed by the unique numeric id's for lecturers and projects that map to objects.
Thus for the "lecturers" dictionary (currently):
```
lecturer[id] = Lecturer(lec_name, lec_id, max_students)
```
I'm currently reading in a white-space delimited text file that has been generated from a database. I have no direct access to the database so I haven't much say on how the file is formatted. Here's a fictionalised snippet that shows how the text file is structured. Please pardon the cheesiness.
```
0001 001 "Miyamoto, S." "Even Newer Super Mario Bros"
0002 001 "Miyamoto, S." "Legend of Zelda: Skies of Hyrule"
0003 002 "Molyneux, P." "Project Milo"
0004 002 "Molyneux, P." "Fable III"
0005 003 "Blow, J." "Ponytail"
```
The structure of each line is basically `proj_id, lec_id, lec_name, proj_name`.
Now, I'm currently reading the relevant data into the relevant objects. Thus, `proj_id` is stored in `class Project` whereas `lec_name` is a `class Lecturer` object, et al. The `Lecturer` and `Project` classes are not currently related.
However, as I read in each line from the text file, for that line, I wish to read in the project offered by the lecturer into the `Lecturer` class; I'm already reading the `proj_id` into the `Project` class. I'd like to create an object in `Lecturer` called `offered_proj` which should be a set or list of the projects offered by that lecturer. Thus whenever, for a line, I read in a new project under the same `lec_id`, `offered_proj` will be updated with that project. If I wanted to display a list of projects offered by a lecturer I'd ideally just want to use `print lecturers[lec_id].offered_proj`.
My Python isn't great and I'd appreciate it if someone could show me a way to do that. I'm not sure if it's better as a set or a list, as well.
`Update`
After the advice from [Alex Martelli](https://stackoverflow.com/questions/2293968/how-do-i-create-a-list-or-set-object-in-a-class-in-python/2293998#2293998) and [Oddthinking](https://stackoverflow.com/questions/2293968/how-do-i-create-a-list-or-set-object-in-a-class-in-python/2294002#2294002) I went back and made some changes and tried to print the results.
Here's the code snippet:
```
for line in csv_file:
proj_id = int(line[0])
lec_id = int(line[1])
lec_name = line[2]
proj_name = line[3]
projects[proj_id] = Project(proj_id, proj_name)
lecturers[lec_id] = Lecturer(lec_id, lec_name)
if lec_id in lecturers.keys():
lecturers[lec_id].offered_proj.add(proj_id)
print lec_id, lecturers[lec_id].offered_proj
```
The `print lecturers[lec_id].offered_proj` line prints the following output:
```
001 set([0001])
001 set([0002])
002 set([0003])
002 set([0004])
003 set([0005])
```
It basically feels like the set is being overwritten or somesuch. So if I try to print for a specific lecturer `print lec_id, lecturers[001].offered_proj` all I get is the last `proj_id` that has been read in. | 2010/02/19 | [
"https://Stackoverflow.com/questions/2293968",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/273875/"
] | `set` is better since you don't care about order and have no duplicate.
You can parse the file easily with the [csv](http://docs.python.org/library/csv.html?highlight=sv#module-csv) module (with a `delimiter` of `' '`).
Once you have the `lec_name` you must check if that lecturer's already known; for that purpose, keep a dictionary from `lec_name` to lecturer objects (that's just another reference to the same lecturer object which you also refer to from the `lecturer` dictionary). On finding a `lec_name` that's not in that dictionary you know it's a lecturer not previously seen, so make a new lecturer object (and stick it in both dicts) in that case only, with an empty set of offered courses. Finally, just `.add` the course to the current lecturer's `offered_proj`. It's really a pretty smooth flow.
Have you tried implementing this flow? If so, what problems have you had? Can you show us the relevant code -- should be a dozen lines or so, at most?
**Edit**: since the OP has posted code now, I can spot the bug -- it's here:
```
lecturers[lec_id] = Lecturer(lec_id, lec_name)
if lec_id in lecturers.keys():
lecturers[lec_id].offered_proj.add(proj_id)
```
this is **unconditionally** creating a new lecturer object (trampling over the old one in the `lecturers` dict, if any) so of course the previous set gets tossed away. This is the code you need: **first** check, and create only if needed! (also, minor bug, **don't** check `in....keys()`, that's horribly inefficient - just check for presence in the dict). As follows:
```
if lec_id in lecturers:
thelec = lecturers[lec_id]
else:
thelec = lecturers[lec_id] = Lecturer(lec_id, lec_name)
thelec.offered_proj.add(proj_id)
```
You could express this in several different ways, but I hope this is clear enough. Just for completeness, the way I would normally phrase it (to avoid two lookups into the dictionary) is as follows:
```
thelec = lecturers.get(lec_id)
if thelec is None:
thelec = lecturers[lec_id] = Lecturer(lec_id, lec_name)
thelec.offered_proj.add(proj_id)
``` | Thanks for the help [Alex](https://stackoverflow.com/users/95810/alex-martelli) and [Oddthinking](https://stackoverflow.com/users/8014/oddthinking)! I think I've figured out what was going on:
I modified the code snippet that I added to the question. Basically, every time it read the line I think it was recreating the lecturer object. Thus I put in another `if` statement that checks if `lec_id` already exists in the dictionary. If it does, then it skips the object creation and simply moves onto adding projects to the `offered_proj` set.
The change I made is:
```
if not lec_id in lecturers.keys():
projects[proj_id] = Project(proj_id, proj_name)
lecturers[lec_id] = Lecturer(lec_id, lec_name)
lecturers[lec_id].offered_proj.add(proj_id)
```
I only recently discovered the concept behind `if not` thanks to my friend Samir.
Now I get the following output:
```
001 set([0001])
001 set([0001, 0002])
002 set([0003])
002 set([0003, 0004])
003 set([0005])
```
If I `print` for a chosen `lec_id` I get the fully updated set. Glee. |
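As a side note picked up from the other answers here, the `.keys()` call isn't needed for the membership test, and `not ... in` is usually spelled `not in` (toy dict, made-up values):

```python
lecturers = {1: "Miyamoto, S.", 2: "Molyneux, P."}

print(1 in lecturers)      # True; same result as `1 in lecturers.keys()`, without building a view
print(3 not in lecturers)  # True; preferred over `not 3 in lecturers`
```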
2,293,968 | For my project, the role of the Lecturer (defined as a class) is to offer projects to students. Project itself is also a class. I have some global dictionaries, keyed by the unique numeric id's for lecturers and projects that map to objects.
Thus for the "lecturers" dictionary (currently):
```
lecturer[id] = Lecturer(lec_name, lec_id, max_students)
```
I'm currently reading in a white-space delimited text file that has been generated from a database. I have no direct access to the database so I haven't much say on how the file is formatted. Here's a fictionalised snippet that shows how the text file is structured. Please pardon the cheesiness.
```
0001 001 "Miyamoto, S." "Even Newer Super Mario Bros"
0002 001 "Miyamoto, S." "Legend of Zelda: Skies of Hyrule"
0003 002 "Molyneux, P." "Project Milo"
0004 002 "Molyneux, P." "Fable III"
0005 003 "Blow, J." "Ponytail"
```
The structure of each line is basically `proj_id, lec_id, lec_name, proj_name`.
Now, I'm currently reading the relevant data into the relevant objects. Thus, `proj_id` is stored in `class Project` whereas `lec_name` is a `class Lecturer` object, et al. The `Lecturer` and `Project` classes are not currently related.
However, as I read in each line from the text file, for that line, I wish to read in the project offered by the lecturer into the `Lecturer` class; I'm already reading the `proj_id` into the `Project` class. I'd like to create an object in `Lecturer` called `offered_proj` which should be a set or list of the projects offered by that lecturer. Thus whenever, for a line, I read in a new project under the same `lec_id`, `offered_proj` will be updated with that project. If I wanted to display a list of projects offered by a lecturer I'd ideally just want to use `print lecturers[lec_id].offered_proj`.
My Python isn't great and I'd appreciate it if someone could show me a way to do that. I'm not sure if it's better as a set or a list, as well.
`Update`
After the advice from [Alex Martelli](https://stackoverflow.com/questions/2293968/how-do-i-create-a-list-or-set-object-in-a-class-in-python/2293998#2293998) and [Oddthinking](https://stackoverflow.com/questions/2293968/how-do-i-create-a-list-or-set-object-in-a-class-in-python/2294002#2294002) I went back and made some changes and tried to print the results.
Here's the code snippet:
```
for line in csv_file:
proj_id = int(line[0])
lec_id = int(line[1])
lec_name = line[2]
proj_name = line[3]
projects[proj_id] = Project(proj_id, proj_name)
lecturers[lec_id] = Lecturer(lec_id, lec_name)
if lec_id in lecturers.keys():
lecturers[lec_id].offered_proj.add(proj_id)
print lec_id, lecturers[lec_id].offered_proj
```
The `print lecturers[lec_id].offered_proj` line prints the following output:
```
001 set([0001])
001 set([0002])
002 set([0003])
002 set([0004])
003 set([0005])
```
It basically feels like the set is being overwritten or somesuch. So if I try to print for a specific lecturer `print lec_id, lecturers[001].offered_proj` all I get is the last `proj_id` that has been read in. | 2010/02/19 | [
"https://Stackoverflow.com/questions/2293968",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/273875/"
] | Sets are useful when you want to guarantee you only have one instance of each item. They are also faster than a list at calculating whether an item is present in the collection.
Lists are faster at adding items, and also have an ordering.
This sounds like you would like a set. You sound like you are very close already.
In `Lecturer.__init__`, add a line:
```
self.offered_proj = set()
```
That will make an empty set.
When you read in the project, you can simply add to that set:
```
lecturer.offered_proj.add(project)
```
And you can print, just as you suggest (although you may like to pretty it up.) | Thanks for the help [Alex](https://stackoverflow.com/users/95810/alex-martelli) and [Oddthinking](https://stackoverflow.com/users/8014/oddthinking)! I think I've figured out what was going on:
I modified the code snippet that I added to the question. Basically, every time it read the line I think it was recreating the lecturer object. Thus I put in another `if` statement that checks if `lec_id` already exists in the dictionary. If it does, then it skips the object creation and simply moves onto adding projects to the `offered_proj` set.
The change I made is:
```
if not lec_id in lecturers.keys():
projects[proj_id] = Project(proj_id, proj_name)
lecturers[lec_id] = Lecturer(lec_id, lec_name)
lecturers[lec_id].offered_proj.add(proj_id)
```
I only recently discovered the concept behind `if not` thanks to my friend Samir.
Now I get the following output:
```
001 set([0001])
001 set([0001, 0002])
002 set([0003])
002 set([0003, 0004])
003 set([0005])
```
If I `print` for a chosen `lec_id` I get the fully updated set. Glee. |
2,112,632 | Is it possible to create a grid like below?
I didn't find anything in the forum.
```
#euler-project problem number 11
#In the 20 times 20 grid below,
#four numbers along a diagonal line have been marked in red.
#The product of these numbers is 26 times 63 times 78 times 14 = 1788696.
#What is the greatest product of four adjacent numbers in any direction
#(up, down, left, right, or diagonally) in the 20 times 20 grid?
import numpy
number = numpy.array([[08 02 22 97 38 15 00 40 00 75 04 05 07 78 52 12 50 77 91 08]
[49 49 99 40 17 81 18 57 60 87 17 40 98 43 69 48 04 56 62 00]
[81 49 31 73 55 79 14 29 93 71 40 67 53 88 30 03 49 13 36 65]
[52 70 95 23 04 60 11 42 69 24 68 56 01 32 56 71 37 02 36 91]
[22 31 16 71 51 67 63 89 41 92 36 54 22 40 40 28 66 33 13 80]
[24 47 32 60 99 03 45 02 44 75 33 53 78 36 84 20 35 17 12 50]
[32 98 81 28 64 23 67 10 26 38 40 67 59 54 70 66 18 38 64 70]
[67 26 20 68 02 62 12 20 95 63 94 39 63 08 40 91 66 49 94 21]
[24 55 58 05 66 73 99 26 97 17 78 78 96 83 14 88 34 89 63 72]
[21 36 23 09 75 00 76 44 20 45 35 14 00 61 33 97 34 31 33 95]
[78 17 53 28 22 75 31 67 15 94 03 80 04 62 16 14 09 53 56 92]
[16 39 05 42 96 35 31 47 55 58 88 24 00 17 54 24 36 29 85 57]
[86 56 00 48 35 71 89 07 05 44 44 37 44 60 21 58 51 54 17 58]
[19 80 81 68 05 94 47 69 28 73 92 13 86 52 17 77 04 89 55 40]
[04 52 08 83 97 35 99 16 07 97 57 32 16 26 26 79 33 27 98 66]
[88 36 68 87 57 62 20 72 03 46 33 67 46 55 12 32 63 93 53 69]
[04 42 16 73 38 25 39 11 24 94 72 18 08 46 29 32 40 62 76 36]
[20 69 36 41 72 30 23 88 34 62 99 69 82 67 59 85 74 04 36 16]
[20 73 35 29 78 31 90 01 74 31 49 71 48 86 81 16 23 57 05 54]
[01 70 54 71 83 51 54 69 16 92 33 48 61 43 52 01 89 19 67 48]])
```
EDIT no.1:
I found numpy-array now.
```
x = np.array([[1, 2, 3], [4, 5, 6]], np.int32)
```
Is there a way to do it without the commas?
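For what it's worth, numpy can parse whitespace-separated text directly, which avoids the commas entirely (a Python 3 sketch with a tiny stand-in grid):

```python
import numpy as np
from io import StringIO

text = "8 2 22 97\n49 49 99 40\n"  # two rows standing in for the 20x20 grid
grid = np.loadtxt(StringIO(text), dtype=np.int64)
```

The same call works on a filename, so the full grid could be read from a text file instead of being pasted in as literals.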
EDIT no.2:
I also found a new problem.
[Python: Invalid Token](https://stackoverflow.com/questions/336181/python-invalid-token)
Invalid token in number 08! :) | 2010/01/21 | [
"https://Stackoverflow.com/questions/2112632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/237934/"
] | Check out [NumPy](http://numpy.scipy.org/) - specifically, the [N-dimensional array](http://docs.scipy.org/doc/numpy/reference/arrays.ndarray.html) object. | Your code example won't compile unless you put commas between the list elements.
For example, this will compile:
```
value = [
[ 1, 2, 3, 4],
[ 5, 6, 7, 8],
[ 9,10,11,12]
]
```
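With the commas in place, the nested list can be handed straight to numpy (a small sketch):

```python
import numpy as np

value = [
    [1, 2, 3, 4],
    [5, 6, 7, 8],
    [9, 10, 11, 12],
]
grid = np.array(value)  # 3x4 integer array
```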
If you're interested in taking strings like you show, and **parsing** them into a list of lists (or numpy multi-dimensional array), or if you have a list of lists or numpy array and want to print them out like you describe, you can do that too with a clever couple of list comprehensions. |
2,112,632 | Is it possible to create a grid like below?
I didn't find anything in the forum.
```
#euler-project problem number 11
#In the 20 times 20 grid below,
#four numbers along a diagonal line have been marked in red.
#The product of these numbers is 26 times 63 times 78 times 14 = 1788696.
#What is the greatest product of four adjacent numbers in any direction
#(up, down, left, right, or diagonally) in the 20 times 20 grid?
import numpy
number = numpy.array([[08 02 22 97 38 15 00 40 00 75 04 05 07 78 52 12 50 77 91 08]
[49 49 99 40 17 81 18 57 60 87 17 40 98 43 69 48 04 56 62 00]
[81 49 31 73 55 79 14 29 93 71 40 67 53 88 30 03 49 13 36 65]
[52 70 95 23 04 60 11 42 69 24 68 56 01 32 56 71 37 02 36 91]
[22 31 16 71 51 67 63 89 41 92 36 54 22 40 40 28 66 33 13 80]
[24 47 32 60 99 03 45 02 44 75 33 53 78 36 84 20 35 17 12 50]
[32 98 81 28 64 23 67 10 26 38 40 67 59 54 70 66 18 38 64 70]
[67 26 20 68 02 62 12 20 95 63 94 39 63 08 40 91 66 49 94 21]
[24 55 58 05 66 73 99 26 97 17 78 78 96 83 14 88 34 89 63 72]
[21 36 23 09 75 00 76 44 20 45 35 14 00 61 33 97 34 31 33 95]
[78 17 53 28 22 75 31 67 15 94 03 80 04 62 16 14 09 53 56 92]
[16 39 05 42 96 35 31 47 55 58 88 24 00 17 54 24 36 29 85 57]
[86 56 00 48 35 71 89 07 05 44 44 37 44 60 21 58 51 54 17 58]
[19 80 81 68 05 94 47 69 28 73 92 13 86 52 17 77 04 89 55 40]
[04 52 08 83 97 35 99 16 07 97 57 32 16 26 26 79 33 27 98 66]
[88 36 68 87 57 62 20 72 03 46 33 67 46 55 12 32 63 93 53 69]
[04 42 16 73 38 25 39 11 24 94 72 18 08 46 29 32 40 62 76 36]
[20 69 36 41 72 30 23 88 34 62 99 69 82 67 59 85 74 04 36 16]
[20 73 35 29 78 31 90 01 74 31 49 71 48 86 81 16 23 57 05 54]
[01 70 54 71 83 51 54 69 16 92 33 48 61 43 52 01 89 19 67 48]])
```
EDIT no.1:
I found numpy-array now.
```
x = np.array([[1, 2, 3], [4, 5, 6]], np.int32)
```
Is there a way to do it without the commas?
EDIT no.2:
I also found a new problem.
[Python: Invalid Token](https://stackoverflow.com/questions/336181/python-invalid-token)
Invalid token in number 08! :) | 2010/01/21 | [
"https://Stackoverflow.com/questions/2112632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/237934/"
] | Check out [NumPy](http://numpy.scipy.org/) - specifically, the [N-dimensional array](http://docs.scipy.org/doc/numpy/reference/arrays.ndarray.html) object. | What you have above does not work, e.g. if pasted into a file and then run as a script, or pasted into the interpreter. I get:
```
SyntaxError: invalid token
```
Again, I suspect that what you have done is paste text (a string) containing these characters. They are not integers, and you will get nowhere unless you realize that fact.
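The distinction is easy to demonstrate: the literal `08` is rejected by the parser, but the same digits in a string convert fine, so the pasted text should be parsed rather than written as literals:

```python
# int("08") parses the string as decimal, so leading zeros are harmless
row = [int(tok) for tok in "08 02 22 97".split()]
```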
Edit: I see...we only get "invalid syntax" if we avoid the "invalid token" error caused by the "08"
```
>>> import numpy
>>> number = numpy.array([[08 02 22 97]])
File "<stdin>", line 1
number = numpy.array([[08 02 22 97]])
^
SyntaxError: invalid token
>>> number = numpy.array([[18 12 22 97]])
File "<stdin>", line 1
number = numpy.array([[18 12 22 97]])
^
SyntaxError: invalid syntax
``` |
2,112,632 | Is it possible to create a grid like below?
I didn't find anything in the forum.
```
#euler-project problem number 11
#In the 20 times 20 grid below,
#four numbers along a diagonal line have been marked in red.
#The product of these numbers is 26 times 63 times 78 times 14 = 1788696.
#What is the greatest product of four adjacent numbers in any direction
#(up, down, left, right, or diagonally) in the 20 times 20 grid?
import numpy
number = numpy.array([[08 02 22 97 38 15 00 40 00 75 04 05 07 78 52 12 50 77 91 08]
[49 49 99 40 17 81 18 57 60 87 17 40 98 43 69 48 04 56 62 00]
[81 49 31 73 55 79 14 29 93 71 40 67 53 88 30 03 49 13 36 65]
[52 70 95 23 04 60 11 42 69 24 68 56 01 32 56 71 37 02 36 91]
[22 31 16 71 51 67 63 89 41 92 36 54 22 40 40 28 66 33 13 80]
[24 47 32 60 99 03 45 02 44 75 33 53 78 36 84 20 35 17 12 50]
[32 98 81 28 64 23 67 10 26 38 40 67 59 54 70 66 18 38 64 70]
[67 26 20 68 02 62 12 20 95 63 94 39 63 08 40 91 66 49 94 21]
[24 55 58 05 66 73 99 26 97 17 78 78 96 83 14 88 34 89 63 72]
[21 36 23 09 75 00 76 44 20 45 35 14 00 61 33 97 34 31 33 95]
[78 17 53 28 22 75 31 67 15 94 03 80 04 62 16 14 09 53 56 92]
[16 39 05 42 96 35 31 47 55 58 88 24 00 17 54 24 36 29 85 57]
[86 56 00 48 35 71 89 07 05 44 44 37 44 60 21 58 51 54 17 58]
[19 80 81 68 05 94 47 69 28 73 92 13 86 52 17 77 04 89 55 40]
[04 52 08 83 97 35 99 16 07 97 57 32 16 26 26 79 33 27 98 66]
[88 36 68 87 57 62 20 72 03 46 33 67 46 55 12 32 63 93 53 69]
[04 42 16 73 38 25 39 11 24 94 72 18 08 46 29 32 40 62 76 36]
[20 69 36 41 72 30 23 88 34 62 99 69 82 67 59 85 74 04 36 16]
[20 73 35 29 78 31 90 01 74 31 49 71 48 86 81 16 23 57 05 54]
[01 70 54 71 83 51 54 69 16 92 33 48 61 43 52 01 89 19 67 48]])
```
EDIT no.1:
I found numpy-array now.
```
x = np.array([[1, 2, 3], [4, 5, 6]], np.int32)
```
Is there a way to do it without the commas?
EDIT no.2:
I also found a new problem.
[Python: Invalid Token](https://stackoverflow.com/questions/336181/python-invalid-token)
Invalid token in number 08! :) | 2010/01/21 | [
"https://Stackoverflow.com/questions/2112632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/237934/"
] | Check out [NumPy](http://numpy.scipy.org/) - specifically, the [N-dimensional array](http://docs.scipy.org/doc/numpy/reference/arrays.ndarray.html) object. | As for parsing the actual data, and you don't want to read it from a file or use sensible methods, there is always this:
```
s = """[[08 02 22 97 38 15 00 40 00 75 04 05 07 78 52 12 50 77 91 08]
...etc
"""
s = s.replace("]", "").replace("[", "").split()
numbers = [int(x) for x in s]
```
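From there, the two-dimensional shape can be restored with numpy's `reshape` (sketched with a tiny 2x3 stand-in for the real 20x20 grid):

```python
import numpy as np

s = "[[08 02 22] [97 38 15]]"  # tiny stand-in for the pasted grid
flat = [int(x) for x in s.replace("]", "").replace("[", "").split()]
grid = np.array(flat).reshape(2, 3)  # use (20, 20) for the real data
```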
Then you got a 1d array of numbers, which you can have fun with. |
2,112,632 | Is it possible to create a grid like below?
I didn't find anything in the forum.
```
#euler-project problem number 11
#In the 20 times 20 grid below,
#four numbers along a diagonal line have been marked in red.
#The product of these numbers is 26 times 63 times 78 times 14 = 1788696.
#What is the greatest product of four adjacent numbers in any direction
#(up, down, left, right, or diagonally) in the 20 times 20 grid?
import numpy
number = numpy.array([[08 02 22 97 38 15 00 40 00 75 04 05 07 78 52 12 50 77 91 08]
[49 49 99 40 17 81 18 57 60 87 17 40 98 43 69 48 04 56 62 00]
[81 49 31 73 55 79 14 29 93 71 40 67 53 88 30 03 49 13 36 65]
[52 70 95 23 04 60 11 42 69 24 68 56 01 32 56 71 37 02 36 91]
[22 31 16 71 51 67 63 89 41 92 36 54 22 40 40 28 66 33 13 80]
[24 47 32 60 99 03 45 02 44 75 33 53 78 36 84 20 35 17 12 50]
[32 98 81 28 64 23 67 10 26 38 40 67 59 54 70 66 18 38 64 70]
[67 26 20 68 02 62 12 20 95 63 94 39 63 08 40 91 66 49 94 21]
[24 55 58 05 66 73 99 26 97 17 78 78 96 83 14 88 34 89 63 72]
[21 36 23 09 75 00 76 44 20 45 35 14 00 61 33 97 34 31 33 95]
[78 17 53 28 22 75 31 67 15 94 03 80 04 62 16 14 09 53 56 92]
[16 39 05 42 96 35 31 47 55 58 88 24 00 17 54 24 36 29 85 57]
[86 56 00 48 35 71 89 07 05 44 44 37 44 60 21 58 51 54 17 58]
[19 80 81 68 05 94 47 69 28 73 92 13 86 52 17 77 04 89 55 40]
[04 52 08 83 97 35 99 16 07 97 57 32 16 26 26 79 33 27 98 66]
[88 36 68 87 57 62 20 72 03 46 33 67 46 55 12 32 63 93 53 69]
[04 42 16 73 38 25 39 11 24 94 72 18 08 46 29 32 40 62 76 36]
[20 69 36 41 72 30 23 88 34 62 99 69 82 67 59 85 74 04 36 16]
[20 73 35 29 78 31 90 01 74 31 49 71 48 86 81 16 23 57 05 54]
[01 70 54 71 83 51 54 69 16 92 33 48 61 43 52 01 89 19 67 48]])
```
EDIT no.1:
I found numpy-array now.
```
x = np.array([[1, 2, 3], [4, 5, 6]], np.int32)
```
Is there a way to do it without the commas?
EDIT no.2:
I also found a new problem.
[Python: Invalid Token](https://stackoverflow.com/questions/336181/python-invalid-token)
Invalid token in number 08! :) | 2010/01/21 | [
"https://Stackoverflow.com/questions/2112632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/237934/"
] | Check out [NumPy](http://numpy.scipy.org/) - specifically, the [N-dimensional array](http://docs.scipy.org/doc/numpy/reference/arrays.ndarray.html) object. | You can define the numbers in a string and split it easily in row/columns:
```
nums = """\
1 2 3
4 5 6
7 8 9 10
"""
rows = [map(int, row.split()) for row in nums.splitlines()]
print rows ##> [[1, 2, 3], [4, 5, 6], [7, 8, 9, 10]]
``` |
2,112,632 | Is it possible to create a grid like below?
I didn't find anything in the forum.
```
#euler-project problem number 11
#In the 20 times 20 grid below,
#four numbers along a diagonal line have been marked in red.
#The product of these numbers is 26 times 63 times 78 times 14 = 1788696.
#What is the greatest product of four adjacent numbers in any direction
#(up, down, left, right, or diagonally) in the 20 times 20 grid?
import numpy
number = numpy.array([[08 02 22 97 38 15 00 40 00 75 04 05 07 78 52 12 50 77 91 08]
[49 49 99 40 17 81 18 57 60 87 17 40 98 43 69 48 04 56 62 00]
[81 49 31 73 55 79 14 29 93 71 40 67 53 88 30 03 49 13 36 65]
[52 70 95 23 04 60 11 42 69 24 68 56 01 32 56 71 37 02 36 91]
[22 31 16 71 51 67 63 89 41 92 36 54 22 40 40 28 66 33 13 80]
[24 47 32 60 99 03 45 02 44 75 33 53 78 36 84 20 35 17 12 50]
[32 98 81 28 64 23 67 10 26 38 40 67 59 54 70 66 18 38 64 70]
[67 26 20 68 02 62 12 20 95 63 94 39 63 08 40 91 66 49 94 21]
[24 55 58 05 66 73 99 26 97 17 78 78 96 83 14 88 34 89 63 72]
[21 36 23 09 75 00 76 44 20 45 35 14 00 61 33 97 34 31 33 95]
[78 17 53 28 22 75 31 67 15 94 03 80 04 62 16 14 09 53 56 92]
[16 39 05 42 96 35 31 47 55 58 88 24 00 17 54 24 36 29 85 57]
[86 56 00 48 35 71 89 07 05 44 44 37 44 60 21 58 51 54 17 58]
[19 80 81 68 05 94 47 69 28 73 92 13 86 52 17 77 04 89 55 40]
[04 52 08 83 97 35 99 16 07 97 57 32 16 26 26 79 33 27 98 66]
[88 36 68 87 57 62 20 72 03 46 33 67 46 55 12 32 63 93 53 69]
[04 42 16 73 38 25 39 11 24 94 72 18 08 46 29 32 40 62 76 36]
[20 69 36 41 72 30 23 88 34 62 99 69 82 67 59 85 74 04 36 16]
[20 73 35 29 78 31 90 01 74 31 49 71 48 86 81 16 23 57 05 54]
[01 70 54 71 83 51 54 69 16 92 33 48 61 43 52 01 89 19 67 48]])
```
EDIT no.1:
I found numpy-array now.
```
x = np.array([[1, 2, 3], [4, 5, 6]], np.int32)
```
Is there a way to do it without the commas?
EDIT no.2:
I also found a new problem.
[Python: Invalid Token](https://stackoverflow.com/questions/336181/python-invalid-token)
Invalid token in number 08! :) | 2010/01/21 | [
"https://Stackoverflow.com/questions/2112632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/237934/"
] | Your code example won't compile unless you put commas between the list elements.
For example, this will compile:
```
value = [
[ 1, 2, 3, 4],
[ 5, 6, 7, 8],
[ 9,10,11,12]
]
```
If you're interested in taking strings like you show, and **parsing** them into a list of lists (or numpy multi-dimensional array), or if you have a list of lists or numpy array and want to print them out like you describe, you can do that too with a clever couple of list comprehensions. | What you have above does not work, e.g. if pasted into a file and then run as a script, or pasted into the interpreter. I get:
```
SyntaxError: invalid token
```
Again, I suspect that what you have done is paste text (a string) containing these characters. They are not integers, and you will get nowhere unless you realize that fact.
Edit: I see...we only get "invalid syntax" if we avoid the "invalid token" error caused by the "08"
```
>>> import numpy
>>> number = numpy.array([[08 02 22 97]])
File "<stdin>", line 1
number = numpy.array([[08 02 22 97]])
^
SyntaxError: invalid token
>>> number = numpy.array([[18 12 22 97]])
File "<stdin>", line 1
number = numpy.array([[18 12 22 97]])
^
SyntaxError: invalid syntax
``` |
2,112,632 | Is it possible to create a grid like below?
I didn't find anything in the forum.
```
#euler-project problem number 11
#In the 20 times 20 grid below,
#four numbers along a diagonal line have been marked in red.
#The product of these numbers is 26 times 63 times 78 times 14 = 1788696.
#What is the greatest product of four adjacent numbers in any direction
#(up, down, left, right, or diagonally) in the 20 times 20 grid?
import numpy
number = numpy.array([[08 02 22 97 38 15 00 40 00 75 04 05 07 78 52 12 50 77 91 08]
[49 49 99 40 17 81 18 57 60 87 17 40 98 43 69 48 04 56 62 00]
[81 49 31 73 55 79 14 29 93 71 40 67 53 88 30 03 49 13 36 65]
[52 70 95 23 04 60 11 42 69 24 68 56 01 32 56 71 37 02 36 91]
[22 31 16 71 51 67 63 89 41 92 36 54 22 40 40 28 66 33 13 80]
[24 47 32 60 99 03 45 02 44 75 33 53 78 36 84 20 35 17 12 50]
[32 98 81 28 64 23 67 10 26 38 40 67 59 54 70 66 18 38 64 70]
[67 26 20 68 02 62 12 20 95 63 94 39 63 08 40 91 66 49 94 21]
[24 55 58 05 66 73 99 26 97 17 78 78 96 83 14 88 34 89 63 72]
[21 36 23 09 75 00 76 44 20 45 35 14 00 61 33 97 34 31 33 95]
[78 17 53 28 22 75 31 67 15 94 03 80 04 62 16 14 09 53 56 92]
[16 39 05 42 96 35 31 47 55 58 88 24 00 17 54 24 36 29 85 57]
[86 56 00 48 35 71 89 07 05 44 44 37 44 60 21 58 51 54 17 58]
[19 80 81 68 05 94 47 69 28 73 92 13 86 52 17 77 04 89 55 40]
[04 52 08 83 97 35 99 16 07 97 57 32 16 26 26 79 33 27 98 66]
[88 36 68 87 57 62 20 72 03 46 33 67 46 55 12 32 63 93 53 69]
[04 42 16 73 38 25 39 11 24 94 72 18 08 46 29 32 40 62 76 36]
[20 69 36 41 72 30 23 88 34 62 99 69 82 67 59 85 74 04 36 16]
[20 73 35 29 78 31 90 01 74 31 49 71 48 86 81 16 23 57 05 54]
[01 70 54 71 83 51 54 69 16 92 33 48 61 43 52 01 89 19 67 48]])
```
EDIT no.1:
I found numpy-array now.
```
x = np.array([[1, 2, 3], [4, 5, 6]], np.int32)
```
Is there a way to do it without the commas?
EDIT no.2:
I also found a new problem.
[Python: Invalid Token](https://stackoverflow.com/questions/336181/python-invalid-token)
Invalid token in number 08! :) | 2010/01/21 | [
"https://Stackoverflow.com/questions/2112632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/237934/"
] | Your code example won't compile unless you put commas between the list elements.
For example, this will compile:
```
value = [
[ 1, 2, 3, 4],
[ 5, 6, 7, 8],
[ 9,10,11,12]
]
```
If you're interested in taking strings like you show, and **parsing** them into a list of lists (or numpy multi-dimensional array), or if you have a list of lists or numpy array and want to print them out like you describe, you can do that too with a clever couple of list comprehensions. | As for parsing the actual data, if you don't want to read it from a file or use sensible methods, there is always this:
```
s = """[[08 02 22 97 38 15 00 40 00 75 04 05 07 78 52 12 50 77 91 08]
...etc
"""
s = s.replace("]", "").replace("[", "").split()
numbers = [int(x) for x in s]
```
Then you got a 1d array of numbers, which you can have fun with. |
2,112,632 | Is it possible to create a grid like below?
I didn't find anything in the forum.
```
#euler-project problem number 11
#In the 20 times 20 grid below,
#four numbers along a diagonal line have been marked in red.
#The product of these numbers is 26 times 63 times 78 times 14 = 1788696.
#What is the greatest product of four adjacent numbers in any direction
#(up, down, left, right, or diagonally) in the 20 times 20 grid?
import numpy
number = numpy.array([[08 02 22 97 38 15 00 40 00 75 04 05 07 78 52 12 50 77 91 08]
[49 49 99 40 17 81 18 57 60 87 17 40 98 43 69 48 04 56 62 00]
[81 49 31 73 55 79 14 29 93 71 40 67 53 88 30 03 49 13 36 65]
[52 70 95 23 04 60 11 42 69 24 68 56 01 32 56 71 37 02 36 91]
[22 31 16 71 51 67 63 89 41 92 36 54 22 40 40 28 66 33 13 80]
[24 47 32 60 99 03 45 02 44 75 33 53 78 36 84 20 35 17 12 50]
[32 98 81 28 64 23 67 10 26 38 40 67 59 54 70 66 18 38 64 70]
[67 26 20 68 02 62 12 20 95 63 94 39 63 08 40 91 66 49 94 21]
[24 55 58 05 66 73 99 26 97 17 78 78 96 83 14 88 34 89 63 72]
[21 36 23 09 75 00 76 44 20 45 35 14 00 61 33 97 34 31 33 95]
[78 17 53 28 22 75 31 67 15 94 03 80 04 62 16 14 09 53 56 92]
[16 39 05 42 96 35 31 47 55 58 88 24 00 17 54 24 36 29 85 57]
[86 56 00 48 35 71 89 07 05 44 44 37 44 60 21 58 51 54 17 58]
[19 80 81 68 05 94 47 69 28 73 92 13 86 52 17 77 04 89 55 40]
[04 52 08 83 97 35 99 16 07 97 57 32 16 26 26 79 33 27 98 66]
[88 36 68 87 57 62 20 72 03 46 33 67 46 55 12 32 63 93 53 69]
[04 42 16 73 38 25 39 11 24 94 72 18 08 46 29 32 40 62 76 36]
[20 69 36 41 72 30 23 88 34 62 99 69 82 67 59 85 74 04 36 16]
[20 73 35 29 78 31 90 01 74 31 49 71 48 86 81 16 23 57 05 54]
[01 70 54 71 83 51 54 69 16 92 33 48 61 43 52 01 89 19 67 48]])
```
EDIT no.1:
I found numpy-array now.
```
x = np.array([[1, 2, 3], [4, 5, 6]], np.int32)
```
Is there a way to do it without the commas?
EDIT no.2:
I also found a new problem.
[Python: Invalid Token](https://stackoverflow.com/questions/336181/python-invalid-token)
Invalid token in number 08! :) | 2010/01/21 | [
"https://Stackoverflow.com/questions/2112632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/237934/"
] | You can define the numbers in a string and split it easily in row/columns:
```
nums = """\
1 2 3
4 5 6
7 8 9 10
"""
rows = [map(int, row.split()) for row in nums.splitlines()]
print rows ##> [[1, 2, 3], [4, 5, 6], [7, 8, 9, 10]]
``` | What you have above does not work, e.g. if pasted into a file and then run as a script, or pasted into the interpreter. I get:
```
SyntaxError: invalid token
```
Again, I suspect that what you have done is paste text (a string) containing these characters. They are not integers, and you will get nowhere unless you realize that fact.
Edit: I see...we only get "invalid syntax" if we avoid the "invalid token" error caused by the "08"
```
>>> import numpy
>>> number = numpy.array([[08 02 22 97]])
File "<stdin>", line 1
number = numpy.array([[08 02 22 97]])
^
SyntaxError: invalid token
>>> number = numpy.array([[18 12 22 97]])
File "<stdin>", line 1
number = numpy.array([[18 12 22 97]])
^
SyntaxError: invalid syntax
``` |
2,112,632 | Is it possible to create a grid like below?
I didn't find anything in the forum.
```
#euler-project problem number 11
#In the 20 times 20 grid below,
#four numbers along a diagonal line have been marked in red.
#The product of these numbers is 26 times 63 times 78 times 14 = 1788696.
#What is the greatest product of four adjacent numbers in any direction
#(up, down, left, right, or diagonally) in the 20 times 20 grid?
import numpy
number = numpy.array([[08 02 22 97 38 15 00 40 00 75 04 05 07 78 52 12 50 77 91 08]
[49 49 99 40 17 81 18 57 60 87 17 40 98 43 69 48 04 56 62 00]
[81 49 31 73 55 79 14 29 93 71 40 67 53 88 30 03 49 13 36 65]
[52 70 95 23 04 60 11 42 69 24 68 56 01 32 56 71 37 02 36 91]
[22 31 16 71 51 67 63 89 41 92 36 54 22 40 40 28 66 33 13 80]
[24 47 32 60 99 03 45 02 44 75 33 53 78 36 84 20 35 17 12 50]
[32 98 81 28 64 23 67 10 26 38 40 67 59 54 70 66 18 38 64 70]
[67 26 20 68 02 62 12 20 95 63 94 39 63 08 40 91 66 49 94 21]
[24 55 58 05 66 73 99 26 97 17 78 78 96 83 14 88 34 89 63 72]
[21 36 23 09 75 00 76 44 20 45 35 14 00 61 33 97 34 31 33 95]
[78 17 53 28 22 75 31 67 15 94 03 80 04 62 16 14 09 53 56 92]
[16 39 05 42 96 35 31 47 55 58 88 24 00 17 54 24 36 29 85 57]
[86 56 00 48 35 71 89 07 05 44 44 37 44 60 21 58 51 54 17 58]
[19 80 81 68 05 94 47 69 28 73 92 13 86 52 17 77 04 89 55 40]
[04 52 08 83 97 35 99 16 07 97 57 32 16 26 26 79 33 27 98 66]
[88 36 68 87 57 62 20 72 03 46 33 67 46 55 12 32 63 93 53 69]
[04 42 16 73 38 25 39 11 24 94 72 18 08 46 29 32 40 62 76 36]
[20 69 36 41 72 30 23 88 34 62 99 69 82 67 59 85 74 04 36 16]
[20 73 35 29 78 31 90 01 74 31 49 71 48 86 81 16 23 57 05 54]
[01 70 54 71 83 51 54 69 16 92 33 48 61 43 52 01 89 19 67 48]])
```
EDIT no.1:
I found numpy-array now.
```
x = np.array([[1, 2, 3], [4, 5, 6]], np.int32)
```
Is there a way to do it without the commas?
EDIT no.2:
I also found a new problem.
[Python: Invalid Token](https://stackoverflow.com/questions/336181/python-invalid-token)
Invalid token in number 08! :) | 2010/01/21 | [
"https://Stackoverflow.com/questions/2112632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/237934/"
] | You can define the numbers in a string and split it easily in row/columns:
```
nums = """\
1 2 3
4 5 6
7 8 9 10
"""
rows = [map(int, row.split()) for row in nums.splitlines()]
print rows ##> [[1, 2, 3], [4, 5, 6], [7, 8, 9, 10]]
``` | As for parsing the actual data, if you don't want to read it from a file or use sensible methods, there is always this:
```
s = """[[08 02 22 97 38 15 00 40 00 75 04 05 07 78 52 12 50 77 91 08]
...etc
"""
s = s.replace("]", "").replace("[", "").split()
numbers = [int(x) for x in s]
```
Then you got a 1d array of numbers, which you can have fun with. |
58,846,573 | I'm building a voice assistant using python. I want to make it available as a web application. How do I build the same?
Thanks | 2019/11/13 | [
"https://Stackoverflow.com/questions/58846573",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10441927/"
] | When you set
```
channel_shift_range=10,
brightness_range=(0.7, 1.3)
```
This modifies the RNG of this generator so that the Image RNG and the Mask RNG are not in sync anymore.
I propose you use a custom Sequence for this task until the new Keras Preprocessing (KP) API is released (see <https://github.com/keras-team/governance/blob/master/rfcs/20190729-keras-preprocessing-redesign.md>).
For an example of a custom Sequence, see: <https://dref360.github.io/deterministic-da/> | For anyone else struggling with this - concatenating the images and masks along the channel axis is a handy way to synchronise the augmentations
```
image_mask = np.concatenate([image, mask], axis=3)
image_mask = augmenter.flow(image_mask).next()
image = image_mask[:, :, :, 0]
mask = image_mask[:, :, :, 1]
``` |
56,047,365 | I need some Python code to extract the selected words using Python.
```
<a class="tel ttel">
<span class="mobilesv icon-hg"></span>
<span class="mobilesv icon-rq"></span>
<span class="mobilesv icon-ba"></span>
<span class="mobilesv icon-rq"></span>
<span class="mobilesv icon-ba"></span>
<span class="mobilesv icon-ikj"></span>
<span class="mobilesv icon-dc"></span>
<span class="mobilesv icon-acb"></span>
<span class="mobilesv icon-lk"></span>
<span class="mobilesv icon-ba"></span>
<span class="mobilesv icon-nm"></span>
<span class="mobilesv icon-ba"></span>
<span class="mobilesv icon-yz"></span>
</a>
```
I need to extract the words which start with the "icon"
The output which I require is
icon-hg, icon-rq, icon-ba, icon-rq, icon-ba, icon-ikj, icon-dc, icon-acb, icon-lk, icon-ba, icon-nm, icon-ba, icon-yz | 2019/05/08 | [
"https://Stackoverflow.com/questions/56047365",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8235643/"
] | You can change line 1 to `import Data.List hiding (find)`, assuming you never intend to use the `find` defined there. | In your situation your options are:
1. Rename your own `find` into something else.
2. Import `Data.List` as qualified: `import qualified Data.List`. You can add `as L` to shorten code that uses stuff from `Data.List`. |
35,811,400 | I have about 650 csv-based matrices. I plan on loading each one using Numpy as in the following example:
```
m1 = numpy.loadtxt(open("matrix1.txt", "rb"), delimiter=",", skiprows=1)
```
There are matrix2.txt, matrix3.txt, ..., matrix650.txt files that I need to process.
My end goal is to multiply each matrix by each other, meaning I don't necessarily have to maintain 650 matrices but rather just 2 (1 ongoing and 1 that I am currently multiplying my ongoing by.)
Here is an example of what I mean with matrices defined from 1 to n: M1, M2, M3, .., Mn.
M1\*M2\*M3\*...\*Mn
The dimensions on all the matrices are the same. **The matrices are not square. There are 197 rows and 11 columns.** None of the matrices are sparse and every cell comes into play.
What is the best/most efficient way to do this in python?
EDIT: I took what was suggested and got it to work by taking the transpose since it isn't a square matrix. As an addendum to the question, i**s there a way in Numpy to do element by element multiplication**? | 2016/03/05 | [
"https://Stackoverflow.com/questions/35811400",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3973851/"
] | **1. Variant: Nice code but reads all matrices at once**
```
import itertools
import numpy as np

matrixFileCount = 3
matrices = [np.loadtxt(open("matrix%s.txt" % i ), delimiter=",", skiprows=1) for i in range(1,matrixFileCount+1)]
allC = itertools.combinations([x for x in range(matrixFileCount)], 2)
allCMultiply = [np.dot(matrices[c[0]], matrices[c[1]]) for c in allC]
print allCMultiply
```
**2. Variant: Only load 2 Files at once, nice code but a lot of reloading**
```
allCMulitply = []
fileList = ["matrix%s.txt" % x for x in range(1,matrixFileCount+1)]
allC = itertools.combinations(fileList, 2)
for c in allC:
m = [np.loadtxt(open(file), delimiter=",", skiprows=1) for file in c]
allCMulitply.append(np.dot(m[0], m[1]))
print allCMulitply
```
**3. Variant: like the second but avoids reloading every time. Only 2 matrices in memory at any point**
Because the combinations created with itertools look like `(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)`, you can sometimes avoid loading both of the 2 matrices.
```
matrixFileCount = 3
allCMulitply = []
mLoaded = {'file' : None, 'matrix' : None}
fileList = ["matrix%s.txt" % x for x in range(1,matrixFileCount+1)]
allC = itertools.combinations(fileList, 2)
for c in allC:
    if c[0] == mLoaded['file']:  # compare string values with ==, not identity with "is"
m = [mLoaded['matrix'], np.loadtxt(open(c[1]), delimiter=",", skiprows=1)]
else:
mLoaded = {'file' : None, 'matrix' : None}
m = [np.loadtxt(open(file), delimiter=",", skiprows=1) for file in c]
mLoaded = {'file' : c[0], 'matrix' : m[0]}
allCMulitply.append(np.dot(m[0], m[1]))
print allCMulitply
```
**Performance**
If you can load all matrices at once into memory, the first variant is faster than the second, because in the second you reload matrices a lot. The third variant is slower than the first, but faster than the second, because it sometimes avoids reloading matrices.
```
0.943613052368 (Part 1: 10 Matrices a 2,2 with 1000 executions)
7.75622487068 (Part 2: 10 Matrices a 2,2 with 1000 executions)
4.83783197403 (Part 3: 10 Matrices a 2,2 with 1000 executions)
``` | Kordi's answer loads *all* of the matrices before doing the multiplication. And that's fine if you know the matrices are going to be small. If you want to conserve memory, however, I'd do the following:
```
import numpy as np
from functools import reduce  # built in on Python 2; must be imported on Python 3
def get_dot_product(fnames):
assert len(fnames) > 0
accum_val = np.loadtxt(fnames[0], delimiter=',', skiprows=1)
    # reduce takes the initial value positionally; it has no "initializer=" keyword
    return reduce(_product_from_file, fnames[1:], accum_val)
def _product_from_file(running_product, fname):
return running_product.dot(np.loadtxt(fname, delimiter=',', skiprows=1))
```
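The streaming fold can be sanity-checked without any files, using small in-memory stand-ins for the loaded matrices (same idea as the helper above, just self-contained):

```python
import numpy as np
from functools import reduce

mats = [np.full((2, 2), float(i)) for i in (1, 2, 3)]  # stand-ins for loaded files
running = reduce(np.dot, mats)  # only the running product stays in memory
```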
If the matrices are large and irregular in shape (not square), there are also optimization algorithms for determining the optimal associative groupings (i.e., where to put the parentheses), but in most cases I doubt it would be worth the overhead of loading and unloading each file twice, once to figure out the associative groupings and then once to carry it out. NumPy is surprisingly fast even on pretty big matrices. |
35,811,400 | I have about 650 csv-based matrices. I plan on loading each one using Numpy as in the following example:
```
m1 = numpy.loadtxt(open("matrix1.txt", "rb"), delimiter=",", skiprows=1)
```
There are matrix2.txt, matrix3.txt, ..., matrix650.txt files that I need to process.
My end goal is to multiply each matrix by each other, meaning I don't necessarily have to maintain 650 matrices but rather just 2 (1 ongoing and 1 that I am currently multiplying my ongoing by.)
Here is an example of what I mean with matrices defined from 1 to n: M1, M2, M3, .., Mn.
M1\*M2\*M3\*...\*Mn
The dimensions on all the matrices are the same. **The matrices are not square. There are 197 rows and 11 columns.** None of the matrices are sparse and every cell comes into play.
What is the best/most efficient way to do this in python?
EDIT: I took what was suggested and got it to work by taking the transpose since it isn't a square matrix. As an addendum to the question, i**s there a way in Numpy to do element by element multiplication**? | 2016/03/05 | [
"https://Stackoverflow.com/questions/35811400",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3973851/"
] | A Python3 solution, if "each matrix by each other" actually means just multiplying them in a row and *the matrices have compatible dimensions* ( (n, m) · (m, o) · (o, p) · ... ), which you hint at with "(1 ongoing and 1 that...)", then use (if available):
```
import numpy as np
from functools import partial

fnames = map("matrix{}.txt".format, range(1, 651))
# multi_dot needs a real sequence (it calls len()), so materialise the lazy map
np.linalg.multi_dot(list(map(partial(np.loadtxt, delimiter=',', skiprows=1), fnames)))
```
or:
```
import numpy as np
from functools import reduce, partial
fnames = map("matrix{}.txt".format, range(1, 651))
matrices = map(partial(np.loadtxt, delimiter=',', skiprows=1), fnames)
res = reduce(np.dot, matrices)
```
Maps etc. are lazy in python3, so files are read as needed. Loadtxt doesn't require a pre-opened file, a filename will do.
Doing all the combinations lazily, given that the matrices have the same shape (will do a lot of rereading of data):
```
import numpy as np
from functools import partial
from itertools import starmap, combinations
map_loadtxt = partial(map, partial(np.loadtxt, delimiter=',', skiprows=1))
fname_combs = combinations(map("matrix{}.txt".format, range(1, 651)), 2)
res = list(starmap(np.dot, map(map_loadtxt, fname_combs)))
```
Using a bit of grouping to reduce reloading of files:
```
from itertools import groupby, combinations, chain
from functools import partial
from operator import itemgetter
loader = partial(np.loadtxt, delimiter=',', skiprows=1)
fname_pairs = combinations(map("matrix{}.txt".format, range(1, 651)), 2)
groups = groupby(fname_pairs, itemgetter(0))
res = list(chain.from_iterable(
map(loader(k).dot, map(loader, map(itemgetter(1), g)))
for k, g in groups
))
```
Since the matrices are not square, but have the same dimensions, you would have to add transposes before multiplication to match the dimensions. For example either `loader(k).T.dot` or `map(np.transpose, map(loader, ...))`.
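As a concrete sketch of the transpose remark, with small stand-in shapes (3×2 in place of the 197×11 matrices; the data here is made up):

```python
import numpy as np

# Two matrices with the same non-square shape (more rows than columns),
# standing in for the 197x11 matrices from the question.
a = np.arange(6.0).reshape(3, 2)
b = np.arange(6.0, 12.0).reshape(3, 2)

# (3, 2) . (3, 2) does not work, but transposing the left operand
# gives (2, 3) . (3, 2) -> (2, 2), so the chain can continue.
result = a.T.dot(b)
print(result.shape)  # (2, 2)
```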
If on the other hand the question actually was meant to address element wise multiplication, replace `np.dot` with `np.multiply`. | **1. Variant: Nice code but reads all matrices at once**
```
matrixFileCount = 3
matrices = [np.loadtxt(open("matrix%s.txt" % i ), delimiter=",", skiprows=1) for i in range(1,matrixFileCount+1)]
allC = itertools.combinations([x for x in range(matrixFileCount)], 2)
allCMultiply = [np.dot(matrices[c[0]], matrices[c[1]]) for c in allC]
print allCMultiply
```
**2. Variant: Only load 2 Files at once, nice code but a lot of reloading**
```
allCMulitply = []
fileList = ["matrix%s.txt" % x for x in range(1,matrixFileCount+1)]
allC = itertools.combinations(fileList, 2)
for c in allC:
m = [np.loadtxt(open(file), delimiter=",", skiprows=1) for file in c]
allCMulitply.append(np.dot(m[0], m[1]))
print allCMulitply
```
**3. Variant: like the second but avoid loading every time. But only 2 matrix at one point in memory**
Because the combinations created with itertools come out like `(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)`, you can sometimes avoid reloading the first of the two matrices.
```
matrixFileCount = 3
allCMulitply = []
mLoaded = {'file' : None, 'matrix' : None}
fileList = ["matrix%s.txt" % x for x in range(1,matrixFileCount+1)]
allC = itertools.combinations(fileList, 2)
for c in allC:
if c[0] is mLoaded['file']:
m = [mLoaded['matrix'], np.loadtxt(open(c[1]), delimiter=",", skiprows=1)]
else:
mLoaded = {'file' : None, 'matrix' : None}
m = [np.loadtxt(open(file), delimiter=",", skiprows=1) for file in c]
mLoaded = {'file' : c[0], 'matrix' : m[0]}
allCMulitply.append(np.dot(m[0], m[1]))
print allCMulitply
```
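The cache in variant 3 works because of the order in which `itertools.combinations` yields its pairs; a quick check of that ordering (with placeholder names):

```python
import itertools

# Consecutive pairs frequently share the same first element, so keeping
# the most recently loaded first matrix in memory saves reloads.
pairs = list(itertools.combinations(["m1", "m2", "m3", "m4"], 2))
print(pairs)
# [('m1', 'm2'), ('m1', 'm3'), ('m1', 'm4'), ('m2', 'm3'), ('m2', 'm4'), ('m3', 'm4')]
```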
**Performance**
If you can load all the matrices into memory at once, the first variant is faster than the second, because the second reloads matrices a lot. The third variant is slower than the first but faster than the second, because it sometimes avoids reloading a matrix.
```
0.943613052368 (Part 1: 10 Matrices a 2,2 with 1000 executions)
7.75622487068 (Part 2: 10 Matrices a 2,2 with 1000 executions)
4.83783197403 (Part 3: 10 Matrices a 2,2 with 1000 executions)
``` |
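For reference, timings like the ones above can be reproduced with the standard `timeit` module; a sketch of such a measurement on small in-memory 2×2 matrices (the numbers you get will differ from the ones quoted):

```python
import timeit

# Total seconds for 1000 repetitions of a single 2x2 dot product.
setup = "import numpy as np; a = np.ones((2, 2)); b = np.ones((2, 2))"
print(timeit.timeit("np.dot(a, b)", setup=setup, number=1000))
```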
35,811,400 | I have about 650 csv-based matrices. I plan on loading each one using Numpy as in the following example:
```
m1 = numpy.loadtxt(open("matrix1.txt", "rb"), delimiter=",", skiprows=1)
```
There are matrix2.txt, matrix3.txt, ..., matrix650.txt files that I need to process.
My end goal is to multiply each matrix by each other, meaning I don't necessarily have to maintain 650 matrices but rather just 2 (1 ongoing and 1 that I am currently multiplying my ongoing by.)
Here is an example of what I mean with matrices defined from 1 to n: M1, M2, M3, .., Mn.
M1\*M2\*M3\*...\*Mn
The dimensions on all the matrices are the same. **The matrices are not square. There are 197 rows and 11 columns.** None of the matrices are sparse and every cell comes into play.
What is the best/most efficient way to do this in python?
EDIT: I took what was suggested and got it to work by taking the transpose since it isn't a square matrix. As an addendum to the question, **is there a way in Numpy to do element by element multiplication**? | 2016/03/05 | [
"https://Stackoverflow.com/questions/35811400",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3973851/"
] | **1. Variant: Nice code but reads all matrices at once**
```
matrixFileCount = 3
matrices = [np.loadtxt(open("matrix%s.txt" % i ), delimiter=",", skiprows=1) for i in range(1,matrixFileCount+1)]
allC = itertools.combinations([x for x in range(matrixFileCount)], 2)
allCMultiply = [np.dot(matrices[c[0]], matrices[c[1]]) for c in allC]
print allCMultiply
```
**2. Variant: Only load 2 Files at once, nice code but a lot of reloading**
```
allCMulitply = []
fileList = ["matrix%s.txt" % x for x in range(1,matrixFileCount+1)]
allC = itertools.combinations(fileList, 2)
for c in allC:
m = [np.loadtxt(open(file), delimiter=",", skiprows=1) for file in c]
allCMulitply.append(np.dot(m[0], m[1]))
print allCMulitply
```
**3. Variant: like the second but avoid loading every time. But only 2 matrix at one point in memory**
Because the combinations created with itertools come out like `(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)`, you can sometimes avoid reloading the first of the two matrices.
```
matrixFileCount = 3
allCMulitply = []
mLoaded = {'file' : None, 'matrix' : None}
fileList = ["matrix%s.txt" % x for x in range(1,matrixFileCount+1)]
allC = itertools.combinations(fileList, 2)
for c in allC:
if c[0] is mLoaded['file']:
m = [mLoaded['matrix'], np.loadtxt(open(c[1]), delimiter=",", skiprows=1)]
else:
mLoaded = {'file' : None, 'matrix' : None}
m = [np.loadtxt(open(file), delimiter=",", skiprows=1) for file in c]
mLoaded = {'file' : c[0], 'matrix' : m[0]}
allCMulitply.append(np.dot(m[0], m[1]))
print allCMulitply
```
**Performance**
If you can load all the matrices into memory at once, the first variant is faster than the second, because the second reloads matrices a lot. The third variant is slower than the first but faster than the second, because it sometimes avoids reloading a matrix.
```
0.943613052368 (Part 1: 10 Matrices a 2,2 with 1000 executions)
7.75622487068 (Part 2: 10 Matrices a 2,2 with 1000 executions)
4.83783197403 (Part 3: 10 Matrices a 2,2 with 1000 executions)
``` | How about a really simple solution avoiding `map`, `reduce` and the like? The default `numpy` array object does element-wise multiplication by default.
```
size = (197, 11)
result = numpy.ones(size)
for i in range(1, 651):
    result *= numpy.loadtxt(open("matrix{}.txt".format(i), "rb"),
                            delimiter=",", skiprows=1)
``` |
35,811,400 | I have about 650 csv-based matrices. I plan on loading each one using Numpy as in the following example:
```
m1 = numpy.loadtxt(open("matrix1.txt", "rb"), delimiter=",", skiprows=1)
```
There are matrix2.txt, matrix3.txt, ..., matrix650.txt files that I need to process.
My end goal is to multiply each matrix by each other, meaning I don't necessarily have to maintain 650 matrices but rather just 2 (1 ongoing and 1 that I am currently multiplying my ongoing by.)
Here is an example of what I mean with matrices defined from 1 to n: M1, M2, M3, .., Mn.
M1\*M2\*M3\*...\*Mn
The dimensions on all the matrices are the same. **The matrices are not square. There are 197 rows and 11 columns.** None of the matrices are sparse and every cell comes into play.
What is the best/most efficient way to do this in python?
EDIT: I took what was suggested and got it to work by taking the transpose since it isn't a square matrix. As an addendum to the question, **is there a way in Numpy to do element by element multiplication**? | 2016/03/05 | [
"https://Stackoverflow.com/questions/35811400",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3973851/"
] | A Python3 solution, if "each matrix by each other" actually means just multiplying them in a row and *the matrices have compatible dimensions* ( (n, m) · (m, o) · (o, p) · ... ), which you hint at with "(1 ongoing and 1 that...)", then use (if available):
```
from functools import partial
fnames = map("matrix{}.txt".format, range(1, 651))
np.linalg.multi_dot(list(map(partial(np.loadtxt, delimiter=',', skiprows=1), fnames)))  # multi_dot needs a sequence, not a lazy map
```
or:
```
from functools import reduce, partial
fnames = map("matrix{}.txt".format, range(1, 651))
matrices = map(partial(np.loadtxt, delimiter=',', skiprows=1), fnames)
res = reduce(np.dot, matrices)
```
Maps etc. are lazy in python3, so files are read as needed. Loadtxt doesn't require a pre-opened file, a filename will do.
Doing all the combinations lazily, given that the matrices have the same shape (will do a lot of rereading of data):
```
from functools import partial
from itertools import starmap, combinations
map_loadtxt = partial(map, partial(np.loadtxt, delimiter=',', skiprows=1))
fname_combs = combinations(map("matrix{}.txt".format, range(1, 651)), 2)
res = list(starmap(np.dot, map(map_loadtxt, fname_combs)))
```
Using a bit of grouping to reduce reloading of files:
```
from itertools import groupby, combinations, chain
from functools import partial
from operator import itemgetter
loader = partial(np.loadtxt, delimiter=',', skiprows=1)
fname_pairs = combinations(map("matrix{}.txt".format, range(1, 651)), 2)
groups = groupby(fname_pairs, itemgetter(0))
res = list(chain.from_iterable(
map(loader(k).dot, map(loader, map(itemgetter(1), g)))
for k, g in groups
))
```
Since the matrices are not square, but have the same dimensions, you would have to add transposes before multiplication to match the dimensions. For example either `loader(k).T.dot` or `map(np.transpose, map(loader, ...))`.
If on the other hand the question actually was meant to address element wise multiplication, replace `np.dot` with `np.multiply`. | Kordi's answer loads *all* of the matrices before doing the multiplication. And that's fine if you know the matrices are going to be small. If you want to conserve memory, however, I'd do the following:
```
import numpy as np
from functools import reduce  # reduce is no longer a builtin on Python 3
def get_dot_product(fnames):
    assert len(fnames) > 0
    accum_val = np.loadtxt(fnames[0], delimiter=',', skiprows=1)
    # reduce() takes its initial value as a positional argument
    return reduce(_product_from_file, fnames[1:], accum_val)
def _product_from_file(running_product, fname):
    return running_product.dot(np.loadtxt(fname, delimiter=',', skiprows=1))
```
If the matrices are large and irregular in shape (not square), there are also optimization algorithms for determining the optimal associative groupings (i.e., where to put the parentheses), but in most cases I doubt it would be worth the overhead of loading and unloading each file twice, once to figure out the associative groupings and then once to carry it out. NumPy is surprisingly fast even on pretty big matrices. |
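A minimal runnable version of that `reduce` pattern, using small in-memory arrays instead of files (hypothetical data; note that on Python 3 `reduce` lives in `functools`):

```python
import numpy as np
from functools import reduce  # builtin on Python 2, in functools on Python 3

# Stand-ins for the matrices that would be loaded from disk.
mats = [np.eye(2) * k for k in (1.0, 2.0, 3.0)]

# Folds the list into mats[0].dot(mats[1]).dot(mats[2]).
product = reduce(np.dot, mats)
print(product)  # a diagonal matrix with 6.0 on the diagonal
```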
35,811,400 | I have about 650 csv-based matrices. I plan on loading each one using Numpy as in the following example:
```
m1 = numpy.loadtxt(open("matrix1.txt", "rb"), delimiter=",", skiprows=1)
```
There are matrix2.txt, matrix3.txt, ..., matrix650.txt files that I need to process.
My end goal is to multiply each matrix by each other, meaning I don't necessarily have to maintain 650 matrices but rather just 2 (1 ongoing and 1 that I am currently multiplying my ongoing by.)
Here is an example of what I mean with matrices defined from 1 to n: M1, M2, M3, .., Mn.
M1\*M2\*M3\*...\*Mn
The dimensions on all the matrices are the same. **The matrices are not square. There are 197 rows and 11 columns.** None of the matrices are sparse and every cell comes into play.
What is the best/most efficient way to do this in python?
EDIT: I took what was suggested and got it to work by taking the transpose since it isn't a square matrix. As an addendum to the question, **is there a way in Numpy to do element by element multiplication**? | 2016/03/05 | [
"https://Stackoverflow.com/questions/35811400",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3973851/"
] | A Python3 solution, if "each matrix by each other" actually means just multiplying them in a row and *the matrices have compatible dimensions* ( (n, m) · (m, o) · (o, p) · ... ), which you hint at with "(1 ongoing and 1 that...)", then use (if available):
```
from functools import partial
fnames = map("matrix{}.txt".format, range(1, 651))
np.linalg.multi_dot(list(map(partial(np.loadtxt, delimiter=',', skiprows=1), fnames)))  # multi_dot needs a sequence, not a lazy map
```
or:
```
from functools import reduce, partial
fnames = map("matrix{}.txt".format, range(1, 651))
matrices = map(partial(np.loadtxt, delimiter=',', skiprows=1), fnames)
res = reduce(np.dot, matrices)
```
Maps etc. are lazy in python3, so files are read as needed. Loadtxt doesn't require a pre-opened file, a filename will do.
Doing all the combinations lazily, given that the matrices have the same shape (will do a lot of rereading of data):
```
from functools import partial
from itertools import starmap, combinations
map_loadtxt = partial(map, partial(np.loadtxt, delimiter=',', skiprows=1))
fname_combs = combinations(map("matrix{}.txt".format, range(1, 651)), 2)
res = list(starmap(np.dot, map(map_loadtxt, fname_combs)))
```
Using a bit of grouping to reduce reloading of files:
```
from itertools import groupby, combinations, chain
from functools import partial
from operator import itemgetter
loader = partial(np.loadtxt, delimiter=',', skiprows=1)
fname_pairs = combinations(map("matrix{}.txt".format, range(1, 651)), 2)
groups = groupby(fname_pairs, itemgetter(0))
res = list(chain.from_iterable(
map(loader(k).dot, map(loader, map(itemgetter(1), g)))
for k, g in groups
))
```
Since the matrices are not square, but have the same dimensions, you would have to add transposes before multiplication to match the dimensions. For example either `loader(k).T.dot` or `map(np.transpose, map(loader, ...))`.
If on the other hand the question actually was meant to address element wise multiplication, replace `np.dot` with `np.multiply`. | How about a really simple solution avoiding `map`, `reduce` and the like? The default `numpy` array object does element-wise multiplication by default.
```
size = (197, 11)
result = numpy.ones(size)
for i in range(1, 651):
    result *= numpy.loadtxt(open("matrix{}.txt".format(i), "rb"),
                            delimiter=",", skiprows=1)
``` |
43,814,236 | Example dataset columns: ["A","B","C","D","num1","num2"]. So I have 6 columns - first 4 for grouping and last 2 are numeric and means will be calculated based on groupBy statements.
I want to groupBy all possible combinations of the 4 grouping columns.
I wish to avoid explicitly typing all possible groupBy's such as groupBy["A","B","C","D"] then groupBy["A","B","D","C"] etc.
I'm new to Python - in python how can I automate group by in a loop so that it does a groupBy calc for all possible combinations - in this case 4\*3\*2\*1 = 24 combinations?
Ta.
Thanks for your help so far. Any idea why the 'a =' part isn't working?
```
import itertools
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randint(0,10,size=(100, 5)), columns=list('ABCDE'))
group_by_vars = list(df.columns)[0:4]
perms = [perm for perm in itertools.permutations(group_by_vars)]
print list(itertools.combinations(group_by_vars,2))
a = [x for x in itertools.combinations(group_by_vars,group_by_n+1) for group_by_n in range(len(group_by_vars))]
```
a doesn't error I just get an empty object. Why???
Something like [comb for comb in itertools.combinations(group\_by\_vars,2)] is easy enough but how to get a = [x for x in itertools.combinations(group\_by\_vars,group\_by\_n+1) for group\_by\_n in range(len(group\_by\_vars))]?? | 2017/05/05 | [
"https://Stackoverflow.com/questions/43814236",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6647085/"
] | Syntax errors occur when the computer is not able to process the input.
Like this:
`answer = 1 +/ 6`
The computer does not recognize the `+/`
A semantic error is a human error: the computer will execute the code, but it will not behave as wanted.
Like this:
```
if(player = win){
print "You Lose"
}
```
It will print "You Lose" even if the player won.
A logical error is a synonym for a semantic error. | A syntax error is an error which makes your code "unprocessable": the computer cannot parse it.
```
if true {}
```
instead of
```
if (true) {}
```
for example
Semantic errors and logical errors are the same thing. Your code is syntactically correct, but doesn't do what you think it does.
```
while(c = true) {}
```
instead of
```
while (c == true) {}
```
for example |
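The same distinction can be shown in runnable Python (an illustration added here, not part of either answer above):

```python
# Syntax error: the interpreter cannot even parse the source,
# so the code never runs at all.
try:
    compile("answer = 1 +/ 6", "<example>", "exec")
except SyntaxError:
    print("syntax error: the code could not be parsed")

# Semantic (logical) error: the code parses and runs fine,
# but computes the wrong thing -- here an off-by-one mistake.
def sum_up_to(n):
    return sum(range(n))   # meant range(n + 1); silently misses n

print(sum_up_to(3))        # prints 3, but the intended answer was 6
```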
38,686,830 | I'm using python to input data to my script
then trying to return it back
on demand to show the results
I tried to write it as simple as possible since it's only practicing and trying to get the hang of python
here's what my script looks like
```
#!/usr/python
## imports #####
##################
import os
import sys
## functions
##################
# GET INSERT DATA
def getdata():
clientname = raw_input(" *** Enter Client Name > ")
phone = raw_input(" *** Enter Client Phone > ")
location = raw_input(" *** Enter Client Location > ")
email = raw_input(" *** Enter Client email > ")
website = raw_input(" *** Enter Client Website > ")
return clientname, phone, location, email, website
# VIEW DATA
def showdata():
print "==================="
print ""
print clientname
print ""
print phone
print ""
print location
print ""
print email
print ""
print website
print ""
print "==================="
# CLEAR
def clear():
os.system("clear") #linux
os.system("cls") #windows
# SHOW INSTRUCTIONS
def welcome():
clear()
while True:
choice = raw_input(" Select Option > ")
# INSERT DATA
if choice == "1":
getdata()
# VIEW DATA
elif choice == "2":
showdata()
else:
print "Invalid Selection.. "
print "Terminating... "
#exit()
welcome()
```
what am i doing wrong ? what am i missing? | 2016/07/31 | [
"https://Stackoverflow.com/questions/38686830",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2310584/"
] | You are getting an error because your JSON data is an array and what you have done is:
```
XmlNode xml = JsonConvert.DeserializeXmlNode(sBody, "BiddingHistory");
```
the above line of code will only work for JSON objects.
So, if your JSON is an Array, then try this:
```
XmlNode xml = JsonConvert.DeserializeXmlNode("{\"Row\":" + sBody + "}", "BiddingHistory");
``` | Use service stack from nuget [Service Stack](https://www.nuget.org/packages/ServiceStack/)
add reference to your program
```
using ServiceStack;
```
Convert your json to object
```
var jRst = JsonConvert.DeserializeObject(body);
```
after that you can get xml using service stack like below
```
var xml = jRst.ToXml();
``` |
9,372,672 | I want to use vlc.py to play mpeg2 stream <http://wiki.videolan.org/Python_bindings>.
There are some examples here: <http://git.videolan.org/?p=vlc/bindings/python.git;a=tree;f=examples;hb=HEAD>
When I run the examples, they can only play a video file; I want to know whether there are any examples that play a video stream. | 2012/02/21 | [
"https://Stackoverflow.com/questions/9372672",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/335499/"
] | According to [this](http://pastebin.com/edncPpW0) Pastebin entry, linked to in [this](https://mailman.videolan.org/pipermail/vlc-devel/2012-September/090310.html) mailing list, it can be solved using a method like this:
```
import vlc
i = vlc.Instance('--verbose 2'.split())
p = i.media_player_new()
p.set_mrl('rtp://@224.1.1.1')
p.play()
```
I haven't tried it though, so please let me know if it works. | This is a bare bones solution:
```
import vlc
Instance = vlc.Instance()
player = Instance.media_player_new()
Media = Instance.media_new('http://localhost/postcard/GWPE.avi')
Media.get_mrl()
player.set_media(Media)
player.play()
```
if the media is a local file you will have to alter:
```
Media = Instance.media_new('http://localhost/postcard/GWPE.avi')
Media.get_mrl()
```
to:
```
Media = Instance.media_new_path('/path/to_your/file/filename.avi')
```
note that you must lose the `get_mrl()` as well as changing the function. |
9,372,672 | I want to use vlc.py to play mpeg2 stream <http://wiki.videolan.org/Python_bindings>.
There are some examples here: <http://git.videolan.org/?p=vlc/bindings/python.git;a=tree;f=examples;hb=HEAD>
When I run the examples, they can only play a video file; I want to know whether there are any examples that play a video stream. | 2012/02/21 | [
"https://Stackoverflow.com/questions/9372672",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/335499/"
] | According to [this](http://pastebin.com/edncPpW0) Pastebin entry, linked to in [this](https://mailman.videolan.org/pipermail/vlc-devel/2012-September/090310.html) mailing list, it can be solved using a method like this:
```
import vlc
i = vlc.Instance('--verbose 2'.split())
p = i.media_player_new()
p.set_mrl('rtp://@224.1.1.1')
p.play()
```
I haven't tried it though, so please let me know if it works. | ```
import vlc
vlcInstance = vlc.Instance()
player = vlcInstance.media_player_new()
player.set_mrl("rtsp://URL_PATH")
player.play()
```
I was able to open a stream with the following code, combining the previous answers.
Tested this with a network webcam |
9,372,672 | I want to use vlc.py to play mpeg2 stream <http://wiki.videolan.org/Python_bindings>.
There are some examples here: <http://git.videolan.org/?p=vlc/bindings/python.git;a=tree;f=examples;hb=HEAD>
When I run the examples, they can only play a video file; I want to know whether there are any examples that play a video stream. | 2012/02/21 | [
"https://Stackoverflow.com/questions/9372672",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/335499/"
] | This is a bare bones solution:
```
import vlc
Instance = vlc.Instance()
player = Instance.media_player_new()
Media = Instance.media_new('http://localhost/postcard/GWPE.avi')
Media.get_mrl()
player.set_media(Media)
player.play()
```
if the media is a local file you will have to alter:
```
Media = Instance.media_new('http://localhost/postcard/GWPE.avi')
Media.get_mrl()
```
to:
```
Media = Instance.media_new_path('/path/to_your/file/filename.avi')
```
note that you must lose the `get_mrl()` as well as changing the function. | ```
import vlc
vlcInstance = vlc.Instance()
player = vlcInstance.media_player_new()
player.set_mrl("rtsp://URL_PATH")
player.play()
```
I was able to open a stream with the following code, combining the previous answers.
Tested this with a network webcam |
40,890,768 | Tensorflow is now available on Windows:
```
https://developers.googleblog.com/2016/11/tensorflow-0-12-adds-support-for-windows.html
```
I used pip install tensorflow.
I try running the intro code:
```
https://www.tensorflow.org/versions/r0.12/get_started/index.html
```
I get this error:
```
C:\Python\Python35-32\python.exe "C:/tensorflow_tutorial.py"
Traceback (most recent call last):
File "C:\Python\Python35-32\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 18, in swig_import_helper
return importlib.import_module(mname)
File "C:\Python\Python35-32\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 986, in _gcd_import
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 666, in _load_unlocked
File "<frozen importlib._bootstrap>", line 577, in module_from_spec
File "<frozen importlib._bootstrap_external>", line 903, in create_module
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
ImportError: DLL load failed: %1 is not a valid Win32 application.
```
This site had some suggestions on DLL's that were missing but anyones guess is good:
```
https://github.com/tensorflow/tensorflow/issues/5949
```
Update: I switched to x64 I get this error:
```
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:119] Couldn't open CUDA library cublas64_80.dll
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\cuda\cuda_blas.cc:2294] Unable to load cuBLAS DSO.
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:119] Couldn't open CUDA library cudnn64_5.dll
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\cuda\cuda_dnn.cc:3459] Unable to load cuDNN DSO
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:119] Couldn't open CUDA library cufft64_80.dll
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\cuda\cuda_fft.cc:344] Unable to load cuFFT DSO.
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:119] Couldn't open CUDA library nvcuda.dll
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\cuda\cuda_diagnostics.cc:165] hostname: ����
```
Update2:
I installed the nvidia cuda files
```
https://developer.nvidia.com/cuda-downloads
```
Im getting these errors:
```
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:128] successfully opened CUDA library cublas64_80.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:119] Couldn't open CUDA library cudnn64_5.dll
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\cuda\cuda_dnn.cc:3459] Unable to load cuDNN DSO
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:128] successfully opened CUDA library cufft64_80.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:119] Couldn't open CUDA library nvcuda.dll
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\cuda\cuda_diagnostics.cc:165] hostname: �٩��
```
Update3:
I registered and downloaded the cudnn packge from NVIDIA
```
https://developer.nvidia.com/cudnn
```
Moved the file location
```
C:\Desktop\cudnn-8.0-windows10-x64-v5.1.zip\cuda\bin\cudnn64_5.dll to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\bin\cudnn64_5.dll
```
Still get this error:
```
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:119] Couldn't open CUDA library nvcuda.dll
```
Update 4:
Downloaded nvcuda.dll into C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\bin
This resolved my issue. | 2016/11/30 | [
"https://Stackoverflow.com/questions/40890768",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1239984/"
] | From the path of your Python interpreter (`C:\Python\Python35-32`), it appears that you are using the 32-bit version of Python 3.5. The official TensorFlow packages are only available for 64-bit architectures (`x64`/`amd64`), so you have two options:
1. Install the [64-bit version](https://www.python.org/ftp/python/3.5.2/python-3.5.2-amd64.exe) of Python 3.5 (either from Python.org or Anaconda), *or*
2. Compile the PIP package yourself for 32-bit Python 3.5. You may be able to do this using the experimental CMake build (see [here](https://github.com/tensorflow/tensorflow/tree/r0.12/tensorflow/contrib/cmake) for details), but this is not a supported or tested configuration.
(Note that I'm not sure how you installed the package on a 32-bit version of Python, because when I tried to do that I got an error message: "Not a supported wheel on this platform.") | The problem is not with the platform (amd64) but with the GPU drivers. You need to either install the CPU-only package, or keep the GPU package you already installed and also install the CUDA drivers. |
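A quick way to confirm whether the interpreter you are running is 32-bit or 64-bit before installing the wheel (standard-library calls only):

```python
import struct
import platform

# Prints 32 on a 32-bit interpreter and 64 on a 64-bit one; the
# official TensorFlow wheels require a 64-bit build.
print(struct.calcsize("P") * 8)
print(platform.architecture()[0])  # e.g. '64bit'
```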
40,890,768 | Tensorflow is now available on Windows:
```
https://developers.googleblog.com/2016/11/tensorflow-0-12-adds-support-for-windows.html
```
I used pip install tensorflow.
I try running the intro code:
```
https://www.tensorflow.org/versions/r0.12/get_started/index.html
```
I get this error:
```
C:\Python\Python35-32\python.exe "C:/tensorflow_tutorial.py"
Traceback (most recent call last):
File "C:\Python\Python35-32\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 18, in swig_import_helper
return importlib.import_module(mname)
File "C:\Python\Python35-32\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 986, in _gcd_import
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 666, in _load_unlocked
File "<frozen importlib._bootstrap>", line 577, in module_from_spec
File "<frozen importlib._bootstrap_external>", line 903, in create_module
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
ImportError: DLL load failed: %1 is not a valid Win32 application.
```
This site had some suggestions on DLL's that were missing but anyones guess is good:
```
https://github.com/tensorflow/tensorflow/issues/5949
```
Update: I switched to x64 I get this error:
```
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:119] Couldn't open CUDA library cublas64_80.dll
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\cuda\cuda_blas.cc:2294] Unable to load cuBLAS DSO.
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:119] Couldn't open CUDA library cudnn64_5.dll
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\cuda\cuda_dnn.cc:3459] Unable to load cuDNN DSO
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:119] Couldn't open CUDA library cufft64_80.dll
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\cuda\cuda_fft.cc:344] Unable to load cuFFT DSO.
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:119] Couldn't open CUDA library nvcuda.dll
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\cuda\cuda_diagnostics.cc:165] hostname: ����
```
Update2:
I installed the nvidia cuda files
```
https://developer.nvidia.com/cuda-downloads
```
Im getting these errors:
```
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:128] successfully opened CUDA library cublas64_80.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:119] Couldn't open CUDA library cudnn64_5.dll
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\cuda\cuda_dnn.cc:3459] Unable to load cuDNN DSO
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:128] successfully opened CUDA library cufft64_80.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:119] Couldn't open CUDA library nvcuda.dll
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\cuda\cuda_diagnostics.cc:165] hostname: �٩��
```
Update3:
I registered and downloaded the cudnn packge from NVIDIA
```
https://developer.nvidia.com/cudnn
```
Moved the file location
```
C:\Desktop\cudnn-8.0-windows10-x64-v5.1.zip\cuda\bin\cudnn64_5.dll to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\bin\cudnn64_5.dll
```
Still get this error:
```
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:119] Couldn't open CUDA library nvcuda.dll
```
Update 4:
Downloaded nvcuda.dll into C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\bin
This resolved my issue. | 2016/11/30 | [
"https://Stackoverflow.com/questions/40890768",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1239984/"
] | From the path of your Python interpreter (`C:\Python\Python35-32`), it appears that you are using the 32-bit version of Python 3.5. The official TensorFlow packages are only available for 64-bit architectures (`x64`/`amd64`), so you have two options:
1. Install the [64-bit version](https://www.python.org/ftp/python/3.5.2/python-3.5.2-amd64.exe) of Python 3.5 (either from Python.org or Anaconda), *or*
2. Compile the PIP package yourself for 32-bit Python 3.5. You may be able to do this using the experimental CMake build (see [here](https://github.com/tensorflow/tensorflow/tree/r0.12/tensorflow/contrib/cmake) for details), but this is not a supported or tested configuration.
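To confirm which interpreter you actually have on the path, a quick check (a generic sketch, not TensorFlow-specific):

```python
import struct
import sys

# A 64-bit interpreter uses 8-byte pointers; a 32-bit one uses 4.
bits = struct.calcsize("P") * 8
print(f"Python {sys.version.split()[0]} is a {bits}-bit build")
```

If this prints 32, the official `tensorflow` wheels will refuse to install.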
(Note that I'm not sure how you installed the package on a 32-bit version of Python, because when I tried to do that I got an error message: "Not a supported wheel on this platform.") | You can find the cudnn64_5.dll file inside **"cuda\bin"** (in the cudnn-8.0-windows7-x64-v5.1_4 zip extraction folder). Then copy that file into **"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\bin"**.
This works for the Python 3.5 64-bit version in a Windows 7 64-bit environment.
40,890,768 | Tensorflow is now available on Windows:
```
https://developers.googleblog.com/2016/11/tensorflow-0-12-adds-support-for-windows.html
```
I used pip install tensorflow.
I try running the intro code:
```
https://www.tensorflow.org/versions/r0.12/get_started/index.html
```
I get this error:
```
C:\Python\Python35-32\python.exe "C:/tensorflow_tutorial.py"
Traceback (most recent call last):
File "C:\Python\Python35-32\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 18, in swig_import_helper
return importlib.import_module(mname)
File "C:\Python\Python35-32\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 986, in _gcd_import
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 666, in _load_unlocked
File "<frozen importlib._bootstrap>", line 577, in module_from_spec
File "<frozen importlib._bootstrap_external>", line 903, in create_module
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
ImportError: DLL load failed: %1 is not a valid Win32 application.
```
This site had some suggestions on DLL's that were missing but anyones guess is good:
```
https://github.com/tensorflow/tensorflow/issues/5949
```
Update: I switched to x64 and I get this error:
```
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:119] Couldn't open CUDA library cublas64_80.dll
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\cuda\cuda_blas.cc:2294] Unable to load cuBLAS DSO.
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:119] Couldn't open CUDA library cudnn64_5.dll
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\cuda\cuda_dnn.cc:3459] Unable to load cuDNN DSO
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:119] Couldn't open CUDA library cufft64_80.dll
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\cuda\cuda_fft.cc:344] Unable to load cuFFT DSO.
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:119] Couldn't open CUDA library nvcuda.dll
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\cuda\cuda_diagnostics.cc:165] hostname: ����
```
Update2:
I installed the nvidia cuda files
```
https://developer.nvidia.com/cuda-downloads
```
I'm getting these errors:
```
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:128] successfully opened CUDA library cublas64_80.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:119] Couldn't open CUDA library cudnn64_5.dll
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\cuda\cuda_dnn.cc:3459] Unable to load cuDNN DSO
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:128] successfully opened CUDA library cufft64_80.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:119] Couldn't open CUDA library nvcuda.dll
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\cuda\cuda_diagnostics.cc:165] hostname: �٩��
```
Update3:
I registered and downloaded the cuDNN package from NVIDIA
```
https://developer.nvidia.com/cudnn
```
Moved the file location
```
C:\Desktop\cudnn-8.0-windows10-x64-v5.1.zip\cuda\bin\cudnn64_5.dll to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\bin\cudnn64_5.dll
```
Still get this error:
```
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:119] Couldn't open CUDA library nvcuda.dll
```
Update 4:
Downloaded nvcuda.dll into C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\bin
This resolved my issue. | 2016/11/30 | [
"https://Stackoverflow.com/questions/40890768",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1239984/"
] | From the path of your Python interpreter (`C:\Python\Python35-32`), it appears that you are using the 32-bit version of Python 3.5. The official TensorFlow packages are only available for 64-bit architectures (`x64`/`amd64`), so you have two options:
1. Install the [64-bit version](https://www.python.org/ftp/python/3.5.2/python-3.5.2-amd64.exe) of Python 3.5 (either from Python.org or Anaconda), *or*
2. Compile the PIP package yourself for 32-bit Python 3.5. You may be able to do this using the experimental CMake build (see [here](https://github.com/tensorflow/tensorflow/tree/r0.12/tensorflow/contrib/cmake) for details), but this is not a supported or tested configuration.
(Note that I'm not sure how you installed the package on a 32-bit version of Python, because when I tried to do that I got an error message: "Not a supported wheel on this platform.") | If you are installing the GPU version, please make sure you have the following on your system:
* CUDA® Toolkit 9.0. For details, see NVIDIA's documentation. Ensure
that you append the relevant Cuda pathnames to the %PATH% environment
variable as described in the NVIDIA documentation.
* The NVIDIA drivers associated with CUDA Toolkit 9.0.
* cuDNN v7.0. For details, see NVIDIA's documentation. Note that cuDNN
is typically installed in a different location from the other CUDA
DLLs. Ensure that you add the directory where you installed the cuDNN
DLL to your %PATH% environment variable.
* GPU card with CUDA Compute Capability 3.0 or higher for building from
source and 3.5 or higher for our binaries. See NVIDIA documentation
for a list of supported GPU cards. |
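As a rough sanity check that these DLLs are discoverable via `%PATH%`, a small sketch (the DLL base names below are assumptions for CUDA 9.0 / cuDNN 7; adjust for your versions):

```python
import ctypes.util

# find_library returns None when the loader cannot locate a library;
# on non-Windows systems these names will simply not be found.
names = ["cudart64_90", "cublas64_90", "cudnn64_7", "nvcuda"]
for name in names:
    found = ctypes.util.find_library(name)
    print(f"{name}: {found or 'NOT FOUND'}")
```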
55,681,488 | There is an existing question [How to write binary data to stdout in python 3?](https://stackoverflow.com/questions/908331/how-to-write-binary-data-to-stdout-in-python-3) but all of the answers suggest `sys.stdout.buffer` or variants thereof (e.g., manually rewrapping the file descriptor), which have a problem: they don't respect buffering:
```
MacBook-Pro-116:~ ezyang$ cat test.py
import sys
sys.stdout.write("A")
sys.stdout.buffer.write(b"B")
MacBook-Pro-116:~ ezyang$ python3 test.py | cat
BA
```
Is there a way to write binary data to stdout while respecting buffering with respect to `sys.stdout` and unadorned `print` statements? (The actual use-case is, I have "text-like" data of an unknown encoding and I just want to pass it straight to stdout without making a commitment to a particular encoding.) | 2019/04/15 | [
"https://Stackoverflow.com/questions/55681488",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/23845/"
] | Can't you interleave calls to `write` with `flush` ?
```
sys.stdout.write("A")
sys.stdout.buffer.write(b"B")
```
Results in:
>
> BA
>
>
>
---
```
sys.stdout.write("A")
sys.stdout.flush()
sys.stdout.buffer.write(b"B")
sys.stdout.flush()
```
Results in:
>
> AB
>
>
> | You can define a local function called `_print` (or even override the system `print` function by naming it `print`) as follows:
```
import sys
def _print(data):
"""
If data is bytes, write to stdout using sys.stdout.buffer.write,
otherwise, assume it's str and convert to bytes with utf-8
encoding before writing.
"""
if type(data) != bytes:
data = bytes(data, 'utf-8')
sys.stdout.buffer.write(data)
_print('A')
_print(b'B')
```
The output should be `AB`.
Note: normally the system `print` function adds a newline to the output. The above `_print` just outputs the data (either `bytes` or by assuming it's `str`) without the newline.
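If `print`-like trailing-newline behaviour is wanted, a hedged variant (the `stream` parameter is only there to make it testable):

```python
import io
import sys

def _println(data, stream=None, end=b"\n"):
    # Accept str or bytes; always finish with a newline, like print().
    if not isinstance(data, bytes):
        data = data.encode("utf-8")
    (stream or sys.stdout.buffer).write(data + end)

buf = io.BytesIO()
_println("A", stream=buf)
_println(b"B", stream=buf)
print(buf.getvalue())  # b'A\nB\n'
```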
### buffered implementation
If you want buffered I/O, you can manage that by using the tools from the `io` library.
Simple example:
```
import io
import sys
output_buffer = None
text_wrapper = None
def init_buffer():
global output_buffer, text_wrapper
if not output_buffer:
output_buffer = io.BytesIO()
text_wrapper = io.TextIOWrapper(
output_buffer,
encoding='utf-8',
write_through=True)
def write(data):
if type(data) == bytes:
output_buffer.write(data)
else:
text_wrapper.write(data)
def flush():
sys.stdout.buffer.write(output_buffer.getvalue())
# initialize buffer, write some data, and then flush to stdout
init_buffer()
write("A")
write(b"B")
write("foo")
write(b"bar")
flush()
```
If you are performing all the output writes in a function, for example, you can use `contextlib.contextmanager` to create a factory function that allows you to use the `with ...` statement:
```
# This uses the vars and functions in the example above.
import contextlib
@contextlib.contextmanager
def buffered_stdout():
"""
Create a factory function for using the `with` statement
to write to the output buffer.
"""
global output_buffer
init_buffer()
fh = sys.stdout.buffer
try:
yield fh
finally:
try:
fh.write(output_buffer.getvalue())
except AttributeError:
pass
# open the buffered output stream and write some data to it
with buffered_stdout():
write("A")
write(b"B")
write("foo")
write(b"bar")
```
See:
* [PyMOTW-3 - io — Text, Binary, and Raw Stream I/O Tools](https://pymotw.com/3/io/)
* [Python 3 - io — Core tools for working with streams](https://docs.python.org/3/library/io.html)
* [Python 3 - contextlib — Utilities for with-statement contexts](https://docs.python.org/3/library/contextlib.html)
* [This answer on stackoverflow: Difference between `open` and `io.BytesIO` in binary streams](https://stackoverflow.com/a/42800629/220783) |
48,935,995 | I am a newbie in python. I have a question about the dimension of array.
I have a (10,192,192,1) array whose type is class 'numpy.ndarray'.
I would like to divide this array into 10 separate arrays like 10 \* (1,192,192,1), but I always get a (192,192,1) array when I separate it.
How can I get each separate array with the same number of dimensions as the original one?
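For reference, a length-1 slice keeps the axis that plain integer indexing drops - a quick sketch:

```python
import numpy as np

a = np.zeros((10, 192, 192, 1))

print(a[0].shape)    # (192, 192, 1)   - integer index drops the axis
print(a[0:1].shape)  # (1, 192, 192, 1) - slice keeps it

# Ten separate (1, 192, 192, 1) views:
parts = [a[i:i + 1] for i in range(a.shape[0])]
print(len(parts), parts[0].shape)
```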
below is my code.
```
b = np.ndarray((a.shape[0],a.shape[1],a.shape[2],a.shape[3]))
print(b.shape) # (10,192,192,1)
for i in range(a.shape[0]):
b[i] = a[i]
print(b[i].shape) # (192,192,1), but I want to get (1,192,192,1)
``` | 2018/02/22 | [
"https://Stackoverflow.com/questions/48935995",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8032125/"
] | An Intent shouldn't be started after `finish()` has been called. Start the activity before `finish();`:
```
Intent intent = new Intent(MainActivity.this, LoginActivity.class);
startActivity(intent);
finish();
```
**Update:** In your case import the intent like this.
```
import android.content.Intent
``` | As your code implies, you are calling the `finish()` method before starting your new activity. In other words, the following lines of code will never run:
```
// Opening the Login Activity using Intent.
Intent intent = new Intent(MainActivity.this, LoginActivity.class);
startActivity(intent);
```
In order to solve the problem, add `finish()` method after above block of code. |
48,935,995 | I am a newbie in python. I have a question about the dimension of array.
I have (10,192,192,1) array which type is (class 'numpy.ndarray').
I would like to divid this array to 10 separated array like 10 \* (1,192,192,1). but I always got (192,192,1) array when I separate.
How can I get separated arrays as a same dimension type of original one?
below is my code.
```
b = np.ndarray((a.shape[0],a.shape[1],a.shape[2],a.shape[3]))
print(b.shape) # (10,192,192,1)
for i in range(a.shape[0]):
b[i] = a[i]
print(b[i].shape) # (192,192,1), but I want to get (1,192,192,1)
``` | 2018/02/22 | [
"https://Stackoverflow.com/questions/48935995",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8032125/"
] | An intent object couldn't be created after the finish. Try it before `finish();`
```
Intent intent = new Intent(MainActivity.this, LoginActivity.class);
startActivity(intent);
finish();
```
**Update:** In your case import the intent like this.
```
import android.content.Intent
``` | You should try to clean and rebuild your project. Also delete the already build apks and then the issue might be resolved and you will be allowed to import
import android.content.Intent for using Intent. |
15,512,741 | I have a .txt file that is UTF-8 formatted and have problems to read it into Python. I have a large number of files and a conversion would be cumbersome.
So if I read the file in via
```
for line in file_obj:
...
```
I get the following error:
```
File "/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/encodings/ascii.py", line 26, in decode
return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 291: ordinal not in range(128)
```
I guess `x.decode("utf-8")` wouldn't work since the error occurs before the line is even read in. | 2013/03/19 | [
"https://Stackoverflow.com/questions/15512741",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | There are two choices.
1. Specify the encoding when opening the file, instead of using the default.
2. Open the file in binary mode, and explicitly `decode` from `bytes` to `str`.
The first is obviously the simpler one. You don't show how you're opening the file, but assuming your code looks like this:
```
with open(path) as file_obj:
for line in file_obj:
```
Do this:
```
with open(path, encoding='utf-8') as file_obj:
for line in file_obj:
```
That's it.
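The second choice (binary mode plus an explicit `decode`) would look roughly like this self-contained sketch (using a temporary file so it runs anywhere):

```python
import os
import tempfile

# Write a small UTF-8 file, then read it back in binary mode and
# decode each line explicitly.
with tempfile.NamedTemporaryFile("wb", suffix=".txt", delete=False) as f:
    f.write("caf\u00e9\n".encode("utf-8"))
    path = f.name

lines = []
with open(path, "rb") as file_obj:
    for raw in file_obj:
        lines.append(raw.decode("utf-8"))

os.unlink(path)
print(lines)  # ['café\n']
```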
As [the docs](http://docs.python.org/3/library/functions.html#open) explain, if you don't specify an encoding in text mode:
>
> The default encoding is platform dependent (whatever `locale.getpreferredencoding()` returns), but any encoding supported by Python can be used.
>
>
>
In some cases (e.g., any OS X, or linux with an appropriate configuration), `locale.getpreferredencoding()` will always be 'UTF-8'. But it'll obviously never be "automatically whatever's right for any file I might open". So if you know a file is UTF-8, you should specify it explicitly. | For Python 2 and 3 solution, use codecs:
```
import codecs
file_obj = codecs.open('ur file', "r", "utf-8")
for line in file_obj:
...
```
Otherwise -- Python 3 -- use abarnert's [solution](https://stackoverflow.com/a/15512760/298607) |
45,414,796 | I have a list of objects with multiple attributes. I want to filter the list based on one attribute of the object (country\_code), i.e.
Current list
```
elems = [{'region_code': 'EUD', 'country_code': 'ROM', 'country_desc': 'Romania', 'event_number': '6880'},
{'region_code': 'EUD', 'country_code': 'ROM', 'country_desc':'Romania', 'event_number': '3200'},
{'region_code': 'EUD', 'country_code': 'ROM', 'country_desc': 'Romania', 'event_number': '4000'},
{'region_code': 'EUD', 'country_code': 'SVN', 'country_desc': 'Slovenia', 'event_number': '6880'},
{'region_code': 'EUD', 'country_code': 'NLD', 'country_desc':'Netherlands', 'event_number': '6880'},
{'region_code': 'EUD', 'country_code': 'BEL', 'country_desc':'Belgium', 'event_number': '6880'}]
```
Desired list
```
elems = [{'region_code': 'EUD', 'country_code': 'ROM', 'country_desc': 'Romania', 'event_number': '6880'},
{'region_code': 'EUD', 'country_code': 'SVN', 'country_desc': 'Slovenia', 'event_number': '6880'},
{'region_code': 'EUD', 'country_code': 'NLD', 'country_desc': 'Netherlands', 'event_number': '6880'},
{'region_code': 'EUD', 'country_code': 'BEL', 'country_desc': 'Belgium', 'event_number': '6880'}]
```
I can achieve this by creating a dictionary and a for-loop, but I feel like there's an easier way in python using the filter() or reduce() functions, I just can't figure out how.
**Can anyone simplify the below code using in-built python functions? Performance is a big factor because the real data will be substantial.**
Working code:
```
unique = {}
for elem in elems:
if elem['country_code'] not in unique.keys():
unique[elem['country_code']] = elem
print(unique.values())
```
Worth noting I have also tried the code below, but it performs worse than the current working code:
```
unique = []
for elem in elems:
if not any(u['country_code'] == elem['country_code'] for u in unique):
unique.append(elem)
``` | 2017/07/31 | [
"https://Stackoverflow.com/questions/45414796",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5142595/"
] | I think your first approach is already pretty close to being optimal. Dictionary lookup is fast (just as fast as in a `set`) and the loop is easy to understand, even though a bit lengthy (by Python standards), but you should not sacrifice readability for brevity.
You can, however, shave off one line using `setdefault`, and you might want to use [`collections.OrderedDict()`](https://docs.python.org/3/library/collections.html#collections.OrderedDict) so that the elements in the resulting list are in their original order. Also, note that in Python 3, `unique.values()` is not a list but a view on the dict.
```
unique = collections.OrderedDict()
for elem in elems:
unique.setdefault(elem["country_code"], elem)
```
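A self-contained check of this approach on a trimmed-down version of the sample data:

```python
import collections

# Trimmed-down sample: two 'ROM' rows, one 'SVN' row.
elems = [
    {'country_code': 'ROM', 'event_number': '6880'},
    {'country_code': 'ROM', 'event_number': '3200'},
    {'country_code': 'SVN', 'event_number': '6880'},
]

unique = collections.OrderedDict()
for elem in elems:
    unique.setdefault(elem['country_code'], elem)

result = list(unique.values())
print(result)  # the first ROM row wins; later duplicates are ignored
```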
If you really, *really* want to use `reduce`, you can use the empty dict as an initializer and then use `d.setdefault(k,v) and d` to set the value (if not present) and return the modified dict.
```
unique = reduce(lambda unique, elem: unique.setdefault(elem["country_code"], elem) and unique,
elems, collections.OrderedDict())
```
I would just use the loop, though. | I think that your approach is just fine. It would be slightly better to check `elem['country_code'] not in unique` instead of `elem['country_code'] not in unique.keys()`.
However, here is another way to do it with a list comprehension:
```
visited = set()
res = [e for e in elems
if e['country_code'] not in visited
and not visited.add(e['country_code'])]
```
The last bit abuses the fact that `not None == True` and `set.add` returns `None`. |
58,997,105 | Fatal Python error: failed to get random numbers to initialize Python
Environment windows 10, VSC 15
Using the CreateProcessA WinAPI and passing the lpEnvironment variable to run Python with scripts.
When lpEnvironment is passed NULL, it works fine.
If I set the environment variables PATH and PYTHONPATH = "paths" and pass that as LPSTR(env.c_str()), it throws the above error on running.
The python version is 3.5.6
Any help?
---
Some more details.
1. I run child process python.exe "C:\Program Files\endpoint\Python\_ML\mlprocessor\_server.py" using CreateProcessA WINAPI.
2. I want to run child process with two environmental variables "PYTHONPATH" and "PATH".
PYTHONPATH="C:\Program Files\endpoint\Python";"C:\Program Files\endpoint\Python\Scripts";"C:\Program Files\endpoint\Python\include";"C:\Program Files\endpoint\Python\Lib";"C:\Program Files\endpoint\Python\libs";"C:\Program Files\endpoint\Python\Lib\site-packages";"C:\Program Files\endpoint\Python\_ML"
PATH="C:\Program Files\endpoint\Python";"C:\Program Files\endpoint\Python\Lib";"C:\Program Files\endpoint\Python\Scripts";"C:\Program Files\endpoint\Python\libs"
For some reason, the 7th parameter in CreateProcessA fails, the python.exe runs successfully if it is null, or else it prints "Fatal Python error: failed to get random numbers to initialize Python".
The way I set the parameter as follows...
-----------------------------------------
std::string Base = Configuration::getBasePath();
```
std::string environPython = Base;
environPython.append("\\Python;");
environPython.append(Base);
environPython.append("\\Python\\Scripts;");
environPython.append(Base);
environPython.append("\\Python\\include;");
environPython.append(Base);
environPython.append("\\Python\\Lib;");
environPython.append(Base);
environPython.append("\\Python\\libs;");
environPython.append(Base);
environPython.append("\\Python\\Lib\\site-packages;");
environPython.append(Base);
environPython.append("\\Python\\_ML;");
environPython.push_back('\0');
std::string environPath = Base;
environPath.append("\\Python;");
environPath.append(Base);
environPath.append("\\Python\\Lib;");
environPath.append(Base);
environPath.append("\\Python\\Scripts;");
environPath.append(Base);
environPath.append("\\Python\\libs;");
environPath.push_back('\0');
std::string cmd = Base;
cmd.append("\\Python\\python.exe");
std::string params = "\"";
params.append(cmd);
params.append("\" \"");
params.append(Base);
params.append("\\Python\\_ML\\mlprocessor_server.py\"");
std::map<std::string, std::string> env = { { "PYTHONPATH", environPython.data() },
                                           { "PATH", environPath.data() } };
// example for generating block of strings
std::vector<char> envBlock;
std::for_each(env.begin(), env.end(),
[&envBlock](const std::pair<std::string, std::string> & p) {
std::copy(p.first.begin(), p.first.end(), std::back_inserter(envBlock));
envBlock.push_back('=');
std::copy(p.second.begin(), p.second.end(), std::back_inserter(envBlock));
envBlock.push_back('\0');
}
);
envBlock.push_back('\0');
// feed this into ::CreateProcess()
LPVOID lpEnvironment = (LPVOID)envBlock.data();
bool result = CreateProcessA(cmd.c_str(), (LPSTR)params.c_str(), NULL, NULL, FALSE, CREATE_NO_WINDOW, lpEnvironment, NULL, &info, &pi);
```
---
The result is always true, but python.exe does not show up in Task Manager and it gives Fatal Python error: failed to get random numbers to initialize Python.
If lpEnvironment is NULL, python.exe shows up in Task Manager. | 2019/11/22 | [
"https://Stackoverflow.com/questions/58997105",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9758247/"
] | The environment you pass to `CreateProcessA` must include `SYSTEMROOT`, otherwise the Win32 API call `CryptAcquireContext` will fail when called inside python during initialization.
When passing in NULL as lpEnvironment, your new process inherits the environment of the calling process, which has `SYSTEMROOT` already defined. | To follow up with an example how this can very easily be triggered in pure Python software out in the real world, there are times where it is useful for Python to open up an instance of itself to do some task, where the sub-task need a specific `PYTHONPATH` be set. Often times this may be done lazily on less fussy platforms (i.e. not Windows) like so:
```
import sys
from subprocess import Popen
p = Popen([sys.executable, '-c', 'print("hello world")'], env={
'PYTHONPATH': '', # set it to somewhere
})
```
However, doing so on Windows, will lead to the following perplexing failure:
```
Python 3.8.10 (tags/v3.8.10:3d8993a, May 3 2021, 11:34:34) [MSC v.1928 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> from subprocess import Popen
>>> p = Popen([sys.executable, '-c', 'print("hello world")'], env={
... 'PYTHONPATH': ''
... })
Fatal Python error: _Py_HashRandomization_Init: failed to get random numbers to initialize Python
Python runtime state: preinitialized
```
The fix is obvious: clone the `os.environ` to ensure `SYSTEMROOT` is in place such that the issue pointed out by @Joe Savage's answer be averted, e.g.:
```
>>> import os
>>> env = os.environ.copy()
>>> env['PYTHONPATH'] = ''
>>> p = Popen([sys.executable, '-c', 'print("hello world")'], env=env)
hello world
```
A real world example where this type of fix was needed:
* [Glean SDK](https://github.com/mozilla/glean/pull/1908/commits/b44f9f7c0c10fde9b495f61794253b051a0d3f62) |
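A quick way to see how much of the inherited environment a hand-built `env={...}` throws away (on Windows, `SYSTEMROOT` is the critical casualty):

```python
import os

# A minimal hand-built environment silently drops everything else the
# parent process had - including SYSTEMROOT on Windows.
minimal = {"PYTHONPATH": ""}
dropped = sorted(set(os.environ) - set(minimal))
print(f"{len(dropped)} inherited variables would be dropped")
print("SYSTEMROOT among them:", "SYSTEMROOT" in dropped)
```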
28,859,295 | If I am in **/home/usr** and I call python **/usr/local/rcom/bin/something.py**
How can I make the script inside **something.py** know it resides in **/usr/local/rcom/bin**?
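For reference, the usual pattern is to resolve the module's own `__file__` instead of relying on the cwd - a sketch:

```python
import os

# __file__ holds the path the script was invoked with; abspath resolves
# it once against the cwd, and dirname then gives the script's directory.
script_dir = os.path.dirname(os.path.abspath(__file__))
print(script_dir)  # e.g. /usr/local/rcom/bin, regardless of the cwd
```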
The `os.path.abspath` is calculated with the `cwd` which is **/home/usr** in this case. | 2015/03/04 | [
"https://Stackoverflow.com/questions/28859295",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/67153/"
] | Assign the result of `df.groupby('User_ID')['Datetime'].apply(lambda g: len(g)>1)` to a variable so you can perform boolean indexing and then use the index from this to call `isin` and filter your orig df:
```
In [366]:
users = df.groupby('User_ID')['Datetime'].apply(lambda g: len(g)>1)
users
Out[366]:
User_ID
189757330 False
222583401 False
287280509 False
329757763 False
414673119 True
624921653 False
Name: Datetime, dtype: bool
In [367]:
users[users]
Out[367]:
User_ID
414673119 True
Name: Datetime, dtype: bool
In [368]:
users[users].index
Out[368]:
Int64Index([414673119], dtype='int64')
In [361]:
df[df['User_ID'].isin(users[users].index)]
Out[361]:
User_ID Latitude Longitude Datetime
5 414673119 41.555014 2.096583 2014-02-24 20:15:30
6 414673119 41.555014 2.097583 2014-02-24 20:16:30
7 414673119 41.555014 2.098583 2014-02-24 20:17:30
```
You can then call `to_csv` on the above as normal | first, make sure you have no duplicate entries:
```
df = df.drop_duplicates()
```
then, figure out the counts for each:
```
counts = df.groupby('User_ID').Datetime.count()
```
finally, figure out where the indexes overlap:
```
df[df.User_ID.isin(counts[counts > 1].index)]
``` |
4,585,776 | I have been trying for a while to install the [Hg-Git addon](http://hg-git.github.com/) on my Windows 7 operating system
1. I have crossed several difficulties like installing Python and other utilities described in [this blog](http://blog.sadphaeton.com/2009/01/20/python-development-windows-part-2-installing-easyinstallcould-be-easier.html).
2. I had even after this manual problems and could not install this addon because of errors described in [this log](http://pastebin.com/C3i1hMJB).
3. I searched google and I have found [this manual](http://blog.nlift.com/2009/06/python-on-windows-c-extensions-and.html)
4. Now I have the errors shown in [this log](http://pastebin.com/iQYp4n5C).
Any help with these errors? Thank you for your help. | 2011/01/03 | [
"https://Stackoverflow.com/questions/4585776",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/559792/"
] | OK, I got it. For others - you need to clone this repo:
**HTTPS:**
```
git clone https://github.com/jelmer/dulwich.git
```
**SSH:**
```
git clone git@github.com:jelmer/dulwich.git
```
or just download the source. After that, go to the folder where you downloaded it and in the command line type:
```
python setup.py install
```
After that you are done - you can simply do:
```
easy_install hg-git
```
On Windows and make it work - so congratulations :-) | I created a PowerShell script which does the installation in one step. The prerequisite is that you have some build tools and Python already installed:
<http://ig2600.blogspot.com/2013/02/using-git-via-hg-on-windows.html> |
4,585,776 | I have been trying for a while to install the [Hg-Git addon](http://hg-git.github.com/) on my Windows 7 operating system
1. I have crossed several difficulties like installing Python and other utilities described in [this blog](http://blog.sadphaeton.com/2009/01/20/python-development-windows-part-2-installing-easyinstallcould-be-easier.html).
2. I had even after this manual problems and could not install this addon because of errors described in [this log](http://pastebin.com/C3i1hMJB).
3. I searched google and I have found [this manual](http://blog.nlift.com/2009/06/python-on-windows-c-extensions-and.html)
4. Now I have the errors shown in [this log](http://pastebin.com/iQYp4n5C).
Any help with these errors? Thank you for your help. | 2011/01/03 | [
"https://Stackoverflow.com/questions/4585776",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/559792/"
] | In case it helps anyone, I have Windows 7 64bit and TortoiseHg and following their [instructions](http://tortoisehg.bitbucket.io/manual/1.0/nonhg.html#hg-git-git) was easy and worked without issue.
>
> **Installation**
>
>
> TortoiseHg Windows installers come with the python-git bindings (named
> dulwich) that hg-git requires, so one only needs to clone the hg-git
> repository to your local computer:
>
>
> `hg clone http://bitbucket.org/durin42/hg-git/ C:\hg-git`
>
>
> Then enable
> hggit and bookmarks in your Mercurial.ini file:
>
>
> `[extensions]`
>
>
> `bookmarks =`
>
>
> `hggit = C:\hg-git\hggit`
>
>
> You can verify that
> worked by typing hg help hggit
>
>
> | I created a PowerShell script which does the installation in one step. The prerequisite is that you have some build tools and Python already installed:
<http://ig2600.blogspot.com/2013/02/using-git-via-hg-on-windows.html> |
25,343,981 | I'm writing a preprocessor in python, part of which works with an AST.
There is a `render()` method that takes care of converting various statements to source code.
Now, I have it like this (shortened):
```
def render(self, s):
""" Render a statement by type. """
# code block (used in structures)
if isinstance(s, S_Block):
# delegate to private method that does the work
return self._render_block(s)
# empty statement
if isinstance(s, S_Empty):
return self._render_empty(s)
# a function declaration
if isinstance(s, S_Function):
return self._render_function(s)
# ...
```
As you can see, it's tedious, prone to errors and the code is quite long (I have many more kinds of statements).
The ideal solution would be (in Java syntax):
```
String render(S_Block s)
{
// render block
}
String render(S_Empty s)
{
// render empty statement
}
String render(S_Function s)
{
// render function statement
}
// ...
```
Of course, Python can't do this, because it has dynamic typing. When I searched for how to mimic method overloading, all the answers just said "You don't want to do that in Python". I guess that is true in some cases, but here `kwargs` is really not useful at all.
How would I do this in Python, without the hideous kilometre-long sequence of type-checking ifs shown above? Also, preferably in a "pythonic" way?
**Note:** There can be multiple "Renderer" implementations, which render the statements in different manners. I can't therefore move the rendering code to the statements and just call `s.render()`. It must be done in the renderer class.
(I've found some [interesting "visitor" code](http://curtis.schlak.com/2013/06/20/follow-up-to-python-visitor-pattern.html), but I'm not sure if it's really the thing I want). | 2014/08/16 | [
"https://Stackoverflow.com/questions/25343981",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2180189/"
] | Would something like this work?
```
self.map = {
S_Block : self._render_block,
S_Empty : self._render_empty,
S_Function: self._render_function
}
def render(self, s):
return self.map[type(s)](s)
```
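A runnable sketch of the same idea, with stand-in `S_*` classes (hypothetical, purely for illustration):

```python
class S_Block:
    pass

class S_Empty:
    pass

class Renderer:
    def __init__(self):
        # Map statement classes directly to their render methods.
        self.map = {
            S_Block: self._render_block,
            S_Empty: self._render_empty,
        }

    def render(self, s):
        return self.map[type(s)](s)

    def _render_block(self, s):
        return "{ ... }"

    def _render_empty(self, s):
        return ";"

r = Renderer()
print(r.render(S_Block()))  # { ... }
print(r.render(S_Empty()))  # ;
```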
Keeping a reference to a class object as a key in a dictionary and having its value be the function object you want to call will make your code shorter and less error-prone. The only place an error could occur here would be in the definition of the dictionary. Or in one of your internal functions, of course. | The overloading syntax you are looking for can be achieved using [Guido van Rossum's multimethod decorator](http://www.artima.com/weblogs/viewpost.jsp?thread=101605).
Here is a variant of the multimethod decorator which can decorate class methods (the original decorates plain functions). I've named the variant `multidispatch` to disambiguate it from the original:
```
import functools
def multidispatch(*types):
def register(function):
name = function.__name__
mm = multidispatch.registry.get(name)
if mm is None:
@functools.wraps(function)
def wrapper(self, *args):
types = tuple(arg.__class__ for arg in args)
function = wrapper.typemap.get(types)
if function is None:
raise TypeError("no match")
return function(self, *args)
wrapper.typemap = {}
mm = multidispatch.registry[name] = wrapper
if types in mm.typemap:
raise TypeError("duplicate registration")
mm.typemap[types] = function
return mm
return register
multidispatch.registry = {}
```
and it can be used like this:
```
class Foo(object):
@multidispatch(str)
def render(self, s):
print('string: {}'.format(s))
@multidispatch(float)
def render(self, s):
print('float: {}'.format(s))
@multidispatch(float, int)
def render(self, s, t):
print('float, int: {}, {}'.format(s, t))
foo = Foo()
foo.render('text')
# string: text
foo.render(1.234)
# float: 1.234
foo.render(1.234, 2)
# float, int: 1.234, 2
```
The demo code above shows how to overload the `Foo.render` method based on the types of its arguments.
This code searches for exact matching types as opposed to checking for `isinstance` relationships. It could be modified to handle that (at the expense of making the lookups O(n) instead of O(1)) but since it sounds like you don't need this anyway, I'll leave the code in this simpler form. |
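The `isinstance`-aware variant that the last paragraph alludes to could look roughly like this (a standalone sketch using the same typemap shape, not a tested patch to the decorator above):

```python
def lookup(typemap, args):
    # Exact-type hit stays O(1); otherwise scan registrations with isinstance.
    key = tuple(type(a) for a in args)
    if key in typemap:
        return typemap[key]
    for sig, func in typemap.items():
        if len(sig) == len(args) and all(isinstance(a, t) for a, t in zip(args, sig)):
            return func
    raise TypeError('no match')

typemap = {(int,): lambda x: 'int', (object,): lambda x: 'object'}
print(lookup(typemap, (1,))(1))
print(lookup(typemap, ('hi',))('hi'))
```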
25,343,981 | I'm writing a preprocessor in python, part of which works with an AST.
There is a `render()` method that takes care of converting various statements to source code.
Now, I have it like this (shortened):
```
def render(self, s):
""" Render a statement by type. """
# code block (used in structures)
if isinstance(s, S_Block):
# delegate to private method that does the work
return self._render_block(s)
# empty statement
if isinstance(s, S_Empty):
return self._render_empty(s)
# a function declaration
if isinstance(s, S_Function):
return self._render_function(s)
# ...
```
As you can see, it's tedious, prone to errors and the code is quite long (I have many more kinds of statements).
The ideal solution would be (in Java syntax):
```
String render(S_Block s)
{
// render block
}
String render(S_Empty s)
{
// render empty statement
}
String render(S_Function s)
{
// render function statement
}
// ...
```
Of course, python can't do this, because it has dynamic typing. When I searched for how to mimic method overloading, all answers just said "You don't want to do that in python". I guess that is true in some cases, but here `kwargs` is really not useful at all.
How would I do this in python, without the hideous kilometre-long sequence of type-checking ifs, as shown above? Also, preferably a "pythonic" way to do so?
**Note:** There can be multiple "Renderer" implementations, which render the statements in different manners. I can't therefore move the rendering code to the statements and just call `s.render()`. It must be done in the renderer class.
(I've found some [interesting "visitor" code](http://curtis.schlak.com/2013/06/20/follow-up-to-python-visitor-pattern.html), but I'm not sure if it's really the thing I want). | 2014/08/16 | [
"https://Stackoverflow.com/questions/25343981",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2180189/"
] | Would something like this work?
```
self.map = {
S_Block : self._render_block,
S_Empty : self._render_empty,
S_Function: self._render_function
}
def render(self, s):
return self.map[type(s)](s)
```
Keeping a reference to a class object as a key in a dictionary and having its value be the function object you want to call will make your code shorter and less error-prone. The only place an error could occur here would be in the definition of the dictionary. Or one of your internal functions of course. | If you're using Python 3.4 (or are willing to install the [backport](https://pypi.python.org/pypi/singledispatch) for Python 2.6+), you can use [`functools.singledispatch`](https://docs.python.org/3/library/functools.html#functools.singledispatch) for this\*:
```
from functools import singledispatch
class S_Block(object): pass
class S_Empty(object): pass
class S_Function(object): pass
class Test(object):
def __init__(self):
self.render = singledispatch(self.render)
self.render.register(S_Block, self._render_block)
self.render.register(S_Empty, self._render_empty)
self.render.register(S_Function, self._render_function)
def render(self, s):
raise TypeError("This type isn't supported: {}".format(type(s)))
def _render_block(self, s):
print("render block")
def _render_empty(self, s):
print("render empty")
def _render_function(self, s):
print("render function")
if __name__ == "__main__":
t = Test()
b = S_Block()
f = S_Function()
e = S_Empty()
t.render(b)
t.render(f)
t.render(e)
```
Output:
```
render block
render function
render empty
```
\*Code based on [this gist.](https://gist.github.com/ambv/5682351) |
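On Python 3.8+ the wrapping done in `__init__` above is built in as `functools.singledispatchmethod`, which dispatches on the first argument after `self` (a minimal sketch; the class names are the question's own):

```python
from functools import singledispatchmethod

class S_Block: pass
class S_Empty: pass

class Renderer:
    @singledispatchmethod
    def render(self, s):
        raise TypeError("This type isn't supported: {}".format(type(s)))

    @render.register
    def _(self, s: S_Block):
        # Registered by the type annotation on s
        return 'render block'

    @render.register
    def _(self, s: S_Empty):
        return 'render empty'

print(Renderer().render(S_Block()))
```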
25,343,981 | I'm writing a preprocessor in python, part of which works with an AST.
There is a `render()` method that takes care of converting various statements to source code.
Now, I have it like this (shortened):
```
def render(self, s):
""" Render a statement by type. """
# code block (used in structures)
if isinstance(s, S_Block):
# delegate to private method that does the work
return self._render_block(s)
# empty statement
if isinstance(s, S_Empty):
return self._render_empty(s)
# a function declaration
if isinstance(s, S_Function):
return self._render_function(s)
# ...
```
As you can see, it's tedious, prone to errors and the code is quite long (I have many more kinds of statements).
The ideal solution would be (in Java syntax):
```
String render(S_Block s)
{
// render block
}
String render(S_Empty s)
{
// render empty statement
}
String render(S_Function s)
{
// render function statement
}
// ...
```
Of course, python can't do this, because it has dynamic typing. When I searched for how to mimic method overloading, all answers just said "You don't want to do that in python". I guess that is true in some cases, but here `kwargs` is really not useful at all.
How would I do this in python, without the hideous kilometre-long sequence of type-checking ifs, as shown above? Also, preferably a "pythonic" way to do so?
**Note:** There can be multiple "Renderer" implementations, which render the statements in different manners. I can't therefore move the rendering code to the statements and just call `s.render()`. It must be done in the renderer class.
(I've found some [interesting "visitor" code](http://curtis.schlak.com/2013/06/20/follow-up-to-python-visitor-pattern.html), but I'm not sure if it's really the thing I want). | 2014/08/16 | [
"https://Stackoverflow.com/questions/25343981",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2180189/"
] | Would something like this work?
```
self.map = {
S_Block : self._render_block,
S_Empty : self._render_empty,
S_Function: self._render_function
}
def render(self, s):
return self.map[type(s)](s)
```
Keeping a reference to a class object as a key in a dictionary and having its value be the function object you want to call will make your code shorter and less error-prone. The only place an error could occur here would be in the definition of the dictionary. Or one of your internal functions of course. | To add some performance measurements to @unutbu's answer:
```
@multimethod(float)
def foo(bar: float) -> str:
return 'float: {}'.format(bar)
def foo_simple(bar):
return 'string: {}'.format(bar)
```
---
```
import time
string_type = "test"
iterations = 10000000
start_time1 = time.time()
for i in range(iterations):
foo(string_type)
end_time1 = time.time() - start_time1
start_time2 = time.time()
for i in range(iterations):
foo_simple(string_type)
end_time2 = time.time() - start_time2
print("multimethod: " + str(end_time1))
print("standard: " + str(end_time2))
```
---
Returns:
```
> multimethod: 16.846999883651733
> standard: 4.509999990463257
``` |
25,343,981 | I'm writing a preprocessor in python, part of which works with an AST.
There is a `render()` method that takes care of converting various statements to source code.
Now, I have it like this (shortened):
```
def render(self, s):
""" Render a statement by type. """
# code block (used in structures)
if isinstance(s, S_Block):
# delegate to private method that does the work
return self._render_block(s)
# empty statement
if isinstance(s, S_Empty):
return self._render_empty(s)
# a function declaration
if isinstance(s, S_Function):
return self._render_function(s)
# ...
```
As you can see, it's tedious, prone to errors and the code is quite long (I have many more kinds of statements).
The ideal solution would be (in Java syntax):
```
String render(S_Block s)
{
// render block
}
String render(S_Empty s)
{
// render empty statement
}
String render(S_Function s)
{
// render function statement
}
// ...
```
Of course, python can't do this, because it has dynamic typing. When I searched for how to mimic method overloading, all answers just said "You don't want to do that in python". I guess that is true in some cases, but here `kwargs` is really not useful at all.
How would I do this in python, without the hideous kilometre-long sequence of type-checking ifs, as shown above? Also, preferably a "pythonic" way to do so?
**Note:** There can be multiple "Renderer" implementations, which render the statements in different manners. I can't therefore move the rendering code to the statements and just call `s.render()`. It must be done in the renderer class.
(I've found some [interesting "visitor" code](http://curtis.schlak.com/2013/06/20/follow-up-to-python-visitor-pattern.html), but I'm not sure if it's really the thing I want). | 2014/08/16 | [
"https://Stackoverflow.com/questions/25343981",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2180189/"
] | Would something like this work?
```
self.map = {
S_Block : self._render_block,
S_Empty : self._render_empty,
S_Function: self._render_function
}
def render(self, s):
return self.map[type(s)](s)
```
Keeping a reference to a class object as a key in a dictionary and having its value be the function object you want to call will make your code shorter and less error-prone. The only place an error could occur here would be in the definition of the dictionary. Or one of your internal functions of course. | An alternate implementation with [functools.singledispatch](https://docs.python.org/3/library/functools.html#functools.singledispatch), using the decorators as defined in [PEP-443](https://www.python.org/dev/peps/pep-0443/):
```python
from functools import singledispatch
class S_Unknown: pass
class S_Block: pass
class S_Empty: pass
class S_Function: pass
class S_SpecialBlock(S_Block): pass
@singledispatch
def render(s, **kwargs):
print('Rendering an unknown type')
@render.register(S_Block)
def _(s, **kwargs):
print('Rendering an S_Block')
@render.register(S_Empty)
def _(s, **kwargs):
print('Rendering an S_Empty')
@render.register(S_Function)
def _(s, **kwargs):
print('Rendering an S_Function')
if __name__ == '__main__':
for t in [S_Unknown, S_Block, S_Empty, S_Function, S_SpecialBlock]:
print(f'Passing an {t.__name__}')
render(t())
```
This outputs
```none
Passing an S_Unknown
Rendering an unknown type
Passing an S_Block
Rendering an S_Block
Passing an S_Empty
Rendering an S_Empty
Passing an S_Function
Rendering an S_Function
Passing an S_SpecialBlock
Rendering an S_Block
```
I like this version better than the one with the map because it has the same behavior as the implementation that uses `isinstance()`: when you pass an S\_SpecialBlock, it passes it to the renderer that takes an S\_Block.
### Availability
As mentioned by dano in [another answer](https://stackoverflow.com/a/25344445/3898322), this works in Python 3.4+ and there is a [backport](https://pypi.org/project/singledispatch/) for Python 2.6+.
If you have Python 3.7+, the `register()` attribute supports using type annotations:
```python
@render.register
def _(s: S_Block, **kwargs):
print('Rendering an S_Block')
```
### Note
The one problem I can see is that you have to pass `s` as a positional argument, which means you can't do `render(s=S_Block())`.
Since `single_dispatch` uses the type of the first argument to figure out which version of `render()` to call, that would result in a TypeError - "render requires at least 1 positional argument" (cf [source code](https://github.com/python/cpython/blob/445f1b35ce8461268438c8a6b327ddc764287e05/Lib/functools.py#L819-L824))
Actually, I think it should be possible to use the keyword argument if there is only one... If you really need that then you can do something similar to [this answer](https://stackoverflow.com/a/24602374/3898322), which creates a custom decorator with a different wrapper.
It would be a nice feature of Python as well. |
25,343,981 | I'm writing a preprocessor in python, part of which works with an AST.
There is a `render()` method that takes care of converting various statements to source code.
Now, I have it like this (shortened):
```
def render(self, s):
""" Render a statement by type. """
# code block (used in structures)
if isinstance(s, S_Block):
# delegate to private method that does the work
return self._render_block(s)
# empty statement
if isinstance(s, S_Empty):
return self._render_empty(s)
# a function declaration
if isinstance(s, S_Function):
return self._render_function(s)
# ...
```
As you can see, it's tedious, prone to errors and the code is quite long (I have many more kinds of statements).
The ideal solution would be (in Java syntax):
```
String render(S_Block s)
{
// render block
}
String render(S_Empty s)
{
// render empty statement
}
String render(S_Function s)
{
// render function statement
}
// ...
```
Of course, python can't do this, because it has dynamic typing. When I searched for how to mimic method overloading, all answers just said "You don't want to do that in python". I guess that is true in some cases, but here `kwargs` is really not useful at all.
How would I do this in python, without the hideous kilometre-long sequence of type-checking ifs, as shown above? Also, preferably a "pythonic" way to do so?
**Note:** There can be multiple "Renderer" implementations, which render the statements in different manners. I can't therefore move the rendering code to the statements and just call `s.render()`. It must be done in the renderer class.
(I've found some [interesting "visitor" code](http://curtis.schlak.com/2013/06/20/follow-up-to-python-visitor-pattern.html), but I'm not sure if it's really the thing I want). | 2014/08/16 | [
"https://Stackoverflow.com/questions/25343981",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2180189/"
] | The overloading syntax you are looking for can be achieved using [Guido van Rossum's multimethod decorator](http://www.artima.com/weblogs/viewpost.jsp?thread=101605).
Here is a variant of the multimethod decorator which can decorate class methods (the original decorates plain functions). I've named the variant `multidispatch` to disambiguate it from the original:
```
import functools
def multidispatch(*types):
def register(function):
name = function.__name__
mm = multidispatch.registry.get(name)
if mm is None:
@functools.wraps(function)
def wrapper(self, *args):
types = tuple(arg.__class__ for arg in args)
function = wrapper.typemap.get(types)
if function is None:
raise TypeError("no match")
return function(self, *args)
wrapper.typemap = {}
mm = multidispatch.registry[name] = wrapper
if types in mm.typemap:
raise TypeError("duplicate registration")
mm.typemap[types] = function
return mm
return register
multidispatch.registry = {}
```
and it can be used like this:
```
class Foo(object):
@multidispatch(str)
def render(self, s):
print('string: {}'.format(s))
@multidispatch(float)
def render(self, s):
print('float: {}'.format(s))
@multidispatch(float, int)
def render(self, s, t):
print('float, int: {}, {}'.format(s, t))
foo = Foo()
foo.render('text')
# string: text
foo.render(1.234)
# float: 1.234
foo.render(1.234, 2)
# float, int: 1.234, 2
```
The demo code above shows how to overload the `Foo.render` method based on the types of its arguments.
This code searches for exact matching types as opposed to checking for `isinstance` relationships. It could be modified to handle that (at the expense of making the lookups O(n) instead of O(1)) but since it sounds like you don't need this anyway, I'll leave the code in this simpler form. | To add some performance measurements to @unutbu's answer:
```
@multimethod(float)
def foo(bar: float) -> str:
return 'float: {}'.format(bar)
def foo_simple(bar):
return 'string: {}'.format(bar)
```
---
```
import time
string_type = "test"
iterations = 10000000
start_time1 = time.time()
for i in range(iterations):
foo(string_type)
end_time1 = time.time() - start_time1
start_time2 = time.time()
for i in range(iterations):
foo_simple(string_type)
end_time2 = time.time() - start_time2
print("multimethod: " + str(end_time1))
print("standard: " + str(end_time2))
```
---
Returns:
```
> multimethod: 16.846999883651733
> standard: 4.509999990463257
``` |
25,343,981 | I'm writing a preprocessor in python, part of which works with an AST.
There is a `render()` method that takes care of converting various statements to source code.
Now, I have it like this (shortened):
```
def render(self, s):
""" Render a statement by type. """
# code block (used in structures)
if isinstance(s, S_Block):
# delegate to private method that does the work
return self._render_block(s)
# empty statement
if isinstance(s, S_Empty):
return self._render_empty(s)
# a function declaration
if isinstance(s, S_Function):
return self._render_function(s)
# ...
```
As you can see, it's tedious, prone to errors and the code is quite long (I have many more kinds of statements).
The ideal solution would be (in Java syntax):
```
String render(S_Block s)
{
// render block
}
String render(S_Empty s)
{
// render empty statement
}
String render(S_Function s)
{
// render function statement
}
// ...
```
Of course, python can't do this, because it has dynamic typing. When I searched for how to mimic method overloading, all answers just said "You don't want to do that in python". I guess that is true in some cases, but here `kwargs` is really not useful at all.
How would I do this in python, without the hideous kilometre-long sequence of type-checking ifs, as shown above? Also, preferably a "pythonic" way to do so?
**Note:** There can be multiple "Renderer" implementations, which render the statements in different manners. I can't therefore move the rendering code to the statements and just call `s.render()`. It must be done in the renderer class.
(I've found some [interesting "visitor" code](http://curtis.schlak.com/2013/06/20/follow-up-to-python-visitor-pattern.html), but I'm not sure if it's really the thing I want). | 2014/08/16 | [
"https://Stackoverflow.com/questions/25343981",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2180189/"
] | The overloading syntax you are looking for can be achieved using [Guido van Rossum's multimethod decorator](http://www.artima.com/weblogs/viewpost.jsp?thread=101605).
Here is a variant of the multimethod decorator which can decorate class methods (the original decorates plain functions). I've named the variant `multidispatch` to disambiguate it from the original:
```
import functools
def multidispatch(*types):
def register(function):
name = function.__name__
mm = multidispatch.registry.get(name)
if mm is None:
@functools.wraps(function)
def wrapper(self, *args):
types = tuple(arg.__class__ for arg in args)
function = wrapper.typemap.get(types)
if function is None:
raise TypeError("no match")
return function(self, *args)
wrapper.typemap = {}
mm = multidispatch.registry[name] = wrapper
if types in mm.typemap:
raise TypeError("duplicate registration")
mm.typemap[types] = function
return mm
return register
multidispatch.registry = {}
```
and it can be used like this:
```
class Foo(object):
@multidispatch(str)
def render(self, s):
print('string: {}'.format(s))
@multidispatch(float)
def render(self, s):
print('float: {}'.format(s))
@multidispatch(float, int)
def render(self, s, t):
print('float, int: {}, {}'.format(s, t))
foo = Foo()
foo.render('text')
# string: text
foo.render(1.234)
# float: 1.234
foo.render(1.234, 2)
# float, int: 1.234, 2
```
The demo code above shows how to overload the `Foo.render` method based on the types of its arguments.
This code searches for exact matching types as opposed to checking for `isinstance` relationships. It could be modified to handle that (at the expense of making the lookups O(n) instead of O(1)) but since it sounds like you don't need this anyway, I'll leave the code in this simpler form. | An alternate implementation with [functools.singledispatch](https://docs.python.org/3/library/functools.html#functools.singledispatch), using the decorators as defined in [PEP-443](https://www.python.org/dev/peps/pep-0443/):
```python
from functools import singledispatch
class S_Unknown: pass
class S_Block: pass
class S_Empty: pass
class S_Function: pass
class S_SpecialBlock(S_Block): pass
@singledispatch
def render(s, **kwargs):
print('Rendering an unknown type')
@render.register(S_Block)
def _(s, **kwargs):
print('Rendering an S_Block')
@render.register(S_Empty)
def _(s, **kwargs):
print('Rendering an S_Empty')
@render.register(S_Function)
def _(s, **kwargs):
print('Rendering an S_Function')
if __name__ == '__main__':
for t in [S_Unknown, S_Block, S_Empty, S_Function, S_SpecialBlock]:
print(f'Passing an {t.__name__}')
render(t())
```
This outputs
```none
Passing an S_Unknown
Rendering an unknown type
Passing an S_Block
Rendering an S_Block
Passing an S_Empty
Rendering an S_Empty
Passing an S_Function
Rendering an S_Function
Passing an S_SpecialBlock
Rendering an S_Block
```
I like this version better than the one with the map because it has the same behavior as the implementation that uses `isinstance()`: when you pass an S\_SpecialBlock, it passes it to the renderer that takes an S\_Block.
### Availability
As mentioned by dano in [another answer](https://stackoverflow.com/a/25344445/3898322), this works in Python 3.4+ and there is a [backport](https://pypi.org/project/singledispatch/) for Python 2.6+.
If you have Python 3.7+, the `register()` attribute supports using type annotations:
```python
@render.register
def _(s: S_Block, **kwargs):
print('Rendering an S_Block')
```
### Note
The one problem I can see is that you have to pass `s` as a positional argument, which means you can't do `render(s=S_Block())`.
Since `single_dispatch` uses the type of the first argument to figure out which version of `render()` to call, that would result in a TypeError - "render requires at least 1 positional argument" (cf [source code](https://github.com/python/cpython/blob/445f1b35ce8461268438c8a6b327ddc764287e05/Lib/functools.py#L819-L824))
Actually, I think it should be possible to use the keyword argument if there is only one... If you really need that then you can do something similar to [this answer](https://stackoverflow.com/a/24602374/3898322), which creates a custom decorator with a different wrapper.
It would be a nice feature of Python as well. |
25,343,981 | I'm writing a preprocessor in python, part of which works with an AST.
There is a `render()` method that takes care of converting various statements to source code.
Now, I have it like this (shortened):
```
def render(self, s):
""" Render a statement by type. """
# code block (used in structures)
if isinstance(s, S_Block):
# delegate to private method that does the work
return self._render_block(s)
# empty statement
if isinstance(s, S_Empty):
return self._render_empty(s)
# a function declaration
if isinstance(s, S_Function):
return self._render_function(s)
# ...
```
As you can see, it's tedious, prone to errors and the code is quite long (I have many more kinds of statements).
The ideal solution would be (in Java syntax):
```
String render(S_Block s)
{
// render block
}
String render(S_Empty s)
{
// render empty statement
}
String render(S_Function s)
{
// render function statement
}
// ...
```
Of course, python can't do this, because it has dynamic typing. When I searched for how to mimic method overloading, all answers just said "You don't want to do that in python". I guess that is true in some cases, but here `kwargs` is really not useful at all.
How would I do this in python, without the hideous kilometre-long sequence of type-checking ifs, as shown above? Also, preferably a "pythonic" way to do so?
**Note:** There can be multiple "Renderer" implementations, which render the statements in different manners. I can't therefore move the rendering code to the statements and just call `s.render()`. It must be done in the renderer class.
(I've found some [interesting "visitor" code](http://curtis.schlak.com/2013/06/20/follow-up-to-python-visitor-pattern.html), but I'm not sure if it's really the thing I want). | 2014/08/16 | [
"https://Stackoverflow.com/questions/25343981",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2180189/"
] | If you're using Python 3.4 (or are willing to install the [backport](https://pypi.python.org/pypi/singledispatch) for Python 2.6+), you can use [`functools.singledispatch`](https://docs.python.org/3/library/functools.html#functools.singledispatch) for this\*:
```
from functools import singledispatch
class S_Block(object): pass
class S_Empty(object): pass
class S_Function(object): pass
class Test(object):
def __init__(self):
self.render = singledispatch(self.render)
self.render.register(S_Block, self._render_block)
self.render.register(S_Empty, self._render_empty)
self.render.register(S_Function, self._render_function)
def render(self, s):
raise TypeError("This type isn't supported: {}".format(type(s)))
def _render_block(self, s):
print("render block")
def _render_empty(self, s):
print("render empty")
def _render_function(self, s):
print("render function")
if __name__ == "__main__":
t = Test()
b = S_Block()
f = S_Function()
e = S_Empty()
t.render(b)
t.render(f)
t.render(e)
```
Output:
```
render block
render function
render empty
```
\*Code based on [this gist.](https://gist.github.com/ambv/5682351) | To add some performance measurements to @unutbu's answer:
```
@multimethod(float)
def foo(bar: float) -> str:
return 'float: {}'.format(bar)
def foo_simple(bar):
return 'string: {}'.format(bar)
```
---
```
import time
string_type = "test"
iterations = 10000000
start_time1 = time.time()
for i in range(iterations):
foo(string_type)
end_time1 = time.time() - start_time1
start_time2 = time.time()
for i in range(iterations):
foo_simple(string_type)
end_time2 = time.time() - start_time2
print("multimethod: " + str(end_time1))
print("standard: " + str(end_time2))
```
---
Returns:
```
> multimethod: 16.846999883651733
> standard: 4.509999990463257
``` |
25,343,981 | I'm writing a preprocessor in python, part of which works with an AST.
There is a `render()` method that takes care of converting various statements to source code.
Now, I have it like this (shortened):
```
def render(self, s):
""" Render a statement by type. """
# code block (used in structures)
if isinstance(s, S_Block):
# delegate to private method that does the work
return self._render_block(s)
# empty statement
if isinstance(s, S_Empty):
return self._render_empty(s)
# a function declaration
if isinstance(s, S_Function):
return self._render_function(s)
# ...
```
As you can see, it's tedious, prone to errors and the code is quite long (I have many more kinds of statements).
The ideal solution would be (in Java syntax):
```
String render(S_Block s)
{
// render block
}
String render(S_Empty s)
{
// render empty statement
}
String render(S_Function s)
{
// render function statement
}
// ...
```
Of course, python can't do this, because it has dynamic typing. When I searched for how to mimic method overloading, all answers just said "You don't want to do that in python". I guess that is true in some cases, but here `kwargs` is really not useful at all.
How would I do this in python, without the hideous kilometre-long sequence of type-checking ifs, as shown above? Also, preferably a "pythonic" way to do so?
**Note:** There can be multiple "Renderer" implementations, which render the statements in different manners. I can't therefore move the rendering code to the statements and just call `s.render()`. It must be done in the renderer class.
(I've found some [interesting "visitor" code](http://curtis.schlak.com/2013/06/20/follow-up-to-python-visitor-pattern.html), but I'm not sure if it's really the thing I want). | 2014/08/16 | [
"https://Stackoverflow.com/questions/25343981",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2180189/"
] | If you're using Python 3.4 (or are willing to install the [backport](https://pypi.python.org/pypi/singledispatch) for Python 2.6+), you can use [`functools.singledispatch`](https://docs.python.org/3/library/functools.html#functools.singledispatch) for this\*:
```
from functools import singledispatch
class S_Block(object): pass
class S_Empty(object): pass
class S_Function(object): pass
class Test(object):
def __init__(self):
self.render = singledispatch(self.render)
self.render.register(S_Block, self._render_block)
self.render.register(S_Empty, self._render_empty)
self.render.register(S_Function, self._render_function)
def render(self, s):
raise TypeError("This type isn't supported: {}".format(type(s)))
def _render_block(self, s):
print("render block")
def _render_empty(self, s):
print("render empty")
def _render_function(self, s):
print("render function")
if __name__ == "__main__":
t = Test()
b = S_Block()
f = S_Function()
e = S_Empty()
t.render(b)
t.render(f)
t.render(e)
```
Output:
```
render block
render function
render empty
```
\*Code based on [this gist.](https://gist.github.com/ambv/5682351) | An alternate implementation with [functools.singledispatch](https://docs.python.org/3/library/functools.html#functools.singledispatch), using the decorators as defined in [PEP-443](https://www.python.org/dev/peps/pep-0443/):
```python
from functools import singledispatch
class S_Unknown: pass
class S_Block: pass
class S_Empty: pass
class S_Function: pass
class S_SpecialBlock(S_Block): pass
@singledispatch
def render(s, **kwargs):
print('Rendering an unknown type')
@render.register(S_Block)
def _(s, **kwargs):
print('Rendering an S_Block')
@render.register(S_Empty)
def _(s, **kwargs):
print('Rendering an S_Empty')
@render.register(S_Function)
def _(s, **kwargs):
print('Rendering an S_Function')
if __name__ == '__main__':
for t in [S_Unknown, S_Block, S_Empty, S_Function, S_SpecialBlock]:
print(f'Passing an {t.__name__}')
render(t())
```
This outputs
```none
Passing an S_Unknown
Rendering an unknown type
Passing an S_Block
Rendering an S_Block
Passing an S_Empty
Rendering an S_Empty
Passing an S_Function
Rendering an S_Function
Passing an S_SpecialBlock
Rendering an S_Block
```
I like this version better than the one with the map because it has the same behavior as the implementation that uses `isinstance()`: when you pass an S\_SpecialBlock, it passes it to the renderer that takes an S\_Block.
### Availability
As mentioned by dano in [another answer](https://stackoverflow.com/a/25344445/3898322), this works in Python 3.4+ and there is a [backport](https://pypi.org/project/singledispatch/) for Python 2.6+.
If you have Python 3.7+, the `register()` attribute supports using type annotations:
```python
@render.register
def _(s: S_Block, **kwargs):
print('Rendering an S_Block')
```
### Note
The one problem I can see is that you have to pass `s` as a positional argument, which means you can't do `render(s=S_Block())`.
Since `single_dispatch` uses the type of the first argument to figure out which version of `render()` to call, that would result in a TypeError - "render requires at least 1 positional argument" (cf [source code](https://github.com/python/cpython/blob/445f1b35ce8461268438c8a6b327ddc764287e05/Lib/functools.py#L819-L824))
Actually, I think it should be possible to use the keyword argument if there is only one... If you really need that then you can do something similar to [this answer](https://stackoverflow.com/a/24602374/3898322), which creates a custom decorator with a different wrapper.
It would be a nice feature of Python as well. |
25,343,981 | I'm writing a preprocessor in python, part of which works with an AST.
There is a `render()` method that takes care of converting various statements to source code.
Now, I have it like this (shortened):
```
def render(self, s):
""" Render a statement by type. """
# code block (used in structures)
if isinstance(s, S_Block):
# delegate to private method that does the work
return self._render_block(s)
# empty statement
if isinstance(s, S_Empty):
return self._render_empty(s)
# a function declaration
if isinstance(s, S_Function):
return self._render_function(s)
# ...
```
As you can see, it's tedious, prone to errors and the code is quite long (I have many more kinds of statements).
The ideal solution would be (in Java syntax):
```
String render(S_Block s)
{
// render block
}
String render(S_Empty s)
{
// render empty statement
}
String render(S_Function s)
{
// render function statement
}
// ...
```
Of course, python can't do this, because it has dynamic typing. When I searched for how to mimic method overloading, all answers just said "You don't want to do that in python". I guess that is true in some cases, but here `kwargs` is really not useful at all.
How would I do this in python, without the hideous kilometre-long sequence of type-checking ifs, as shown above? Also, preferably in a "pythonic" way?
**Note:** There can be multiple "Renderer" implementations, which render the statements in different manners. I can't therefore move the rendering code to the statements and just call `s.render()`. It must be done in the renderer class.
(I've found some [interesting "visitor" code](http://curtis.schlak.com/2013/06/20/follow-up-to-python-visitor-pattern.html), but I'm not sure if it's really the thing I want). | 2014/08/16 | [
"https://Stackoverflow.com/questions/25343981",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2180189/"
] | An alternate implementation with [functools.singledispatch](https://docs.python.org/3/library/functools.html#functools.singledispatch), using the decorators as defined in [PEP-443](https://www.python.org/dev/peps/pep-0443/):
```python
from functools import singledispatch
class S_Unknown: pass
class S_Block: pass
class S_Empty: pass
class S_Function: pass
class S_SpecialBlock(S_Block): pass
@singledispatch
def render(s, **kwargs):
print('Rendering an unknown type')
@render.register(S_Block)
def _(s, **kwargs):
print('Rendering an S_Block')
@render.register(S_Empty)
def _(s, **kwargs):
print('Rendering an S_Empty')
@render.register(S_Function)
def _(s, **kwargs):
print('Rendering an S_Function')
if __name__ == '__main__':
for t in [S_Unknown, S_Block, S_Empty, S_Function, S_SpecialBlock]:
print(f'Passing an {t.__name__}')
render(t())
```
This outputs
```none
Passing an S_Unknown
Rendering an unknown type
Passing an S_Block
Rendering an S_Block
Passing an S_Empty
Rendering an S_Empty
Passing an S_Function
Rendering an S_Function
Passing an S_SpecialBlock
Rendering an S_Block
```
I like this version better than the one with the map because it has the same behavior as the implementation that uses `isinstance()`: when you pass an S\_SpecialBlock, it passes it to the renderer that takes an S\_Block.
### Availability
As mentioned by dano in [another answer](https://stackoverflow.com/a/25344445/3898322), this works in Python 3.4+ and there is a [backport](https://pypi.org/project/singledispatch/) for Python 2.6+.
If you have Python 3.7+, the `register()` attribute supports using type annotations:
```python
@render.register
def _(s: S_Block, **kwargs):
print('Rendering an S_Block')
```
### Note
The one problem I can see is that you have to pass `s` as a positional argument, which means you can't do `render(s=S_Block())`.
Since `singledispatch` uses the type of the first argument to figure out which version of `render()` to call, that would result in a TypeError - "render requires at least 1 positional argument" (cf [source code](https://github.com/python/cpython/blob/445f1b35ce8461268438c8a6b327ddc764287e05/Lib/functools.py#L819-L824))
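To make that limitation concrete — a minimal sketch with the standard `functools` module (Python 3.4+):

```python
from functools import singledispatch

@singledispatch
def render(s, **kwargs):
    return "unknown"

@render.register(int)
def _(s, **kwargs):
    return "int"

print(render(3))        # int -- dispatch works on the positional argument

try:
    render(s=3)         # keyword-only call: nothing to dispatch on
    failed = False
except TypeError as err:
    failed = True
    print(err)          # render requires at least 1 positional argument

print(failed)           # True
```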
Actually, I think it should be possible to use the keyword argument if there is only one... If you really need that then you can do something similar to [this answer](https://stackoverflow.com/a/24602374/3898322), which creates a custom decorator with a different wrapper.
It would be a nice feature of Python as well. | To add some performance measurements to the @unutbu 's answer:
```
@multimethod(float)
def foo(bar: float) -> str:
return 'float: {}'.format(bar)
def foo_simple(bar):
return 'string: {}'.format(bar)
```
---
```
import time
string_type = "test"
iterations = 10000000
start_time1 = time.time()
for i in range(iterations):
foo(string_type)
end_time1 = time.time() - start_time1
start_time2 = time.time()
for i in range(iterations):
foo_simple(string_type)
end_time2 = time.time() - start_time2
print("multimethod: " + str(end_time1))
print("standard: " + str(end_time2))
```
---
Returns:
```
> multimethod: 16.846999883651733
> standard: 4.509999990463257
``` |
13,621,095 | I want to use mmseg in Go as in other languages, for example, in python you can use pymmseg
So, is there is a similar Go package?
Or, how can I call the C functions from Go?
Or, how can I call shared library (\*.so) from Go like in python? | 2012/11/29 | [
"https://Stackoverflow.com/questions/13621095",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/860788/"
] | you can call C libs as described in this tutorial:
<http://cheesesun.blogspot.de/2009/12/basic-cgo.html>
and:
<http://siganakis.com/linking-c-libraries-with-go-lang-and-stdc99>
or you may have a look at this library:
<https://github.com/awsong/MMSEGO>
it's for Chinese, but you could use it as a base for your own algorithm. | Use `cgo` to call C code from Go.
[Command cgo](http://golang.org/cmd/cgo/)
[C? Go? Cgo!](http://blog.golang.org/2011/03/c-go-cgo.html) |
64,143,930 | I'm trying to read a json file as a pandas dataframe and convert it to a numpy array:
```
sample.json = [[["1", "2"], ["3", "4"]], [["7", "8"], ["9", "10"]]]
-------------------------------------------------------------------
df = pd.read_json('sample.json', dtype=float)
data = df.to_numpy()
print(df)
print(data)
```
However, this yields a numpy array of python lists:
```
0 1
0 [1, 2] [3, 4]
1 [7, 8] [9, 10]
[[list(['1', '2']) list(['3', '4'])]
[list(['7', '8']) list(['9', '10'])]]
```
When I want it to look like this:
```
[[1, 2], [3, 4]],
[[7, 8], [9, 10]]
```
I understand this can be accomplished by iterating over the array manually, but I'd rather avoid doing that as the data set is quite large. I have read that using `df.values()` is not encouraged. Any help appreciated | 2020/09/30 | [
"https://Stackoverflow.com/questions/64143930",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14061296/"
] | Why not load the JSON file with the builtin `json` module and convert to a numpy array?
```
import json
import numpy as np
data = json.loads("""[[["1", "2"], ["3", "4"]], [["7", "8"], ["9", "10"]]]""")
np.array(data, dtype=float)
array([[[ 1., 2.],
[ 3., 4.]],
[[ 7., 8.],
[ 9., 10.]]])
``` | Your data is 3-dimensional, not 2-dimensional. DataFrames are 2-dimensional, so the only way that it can convert your `sample.json` to a dataframe is by having a 2-dimensional table containing 1-dimensional items.
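A quick shape check (numpy only) makes the dimensionality point concrete:

```python
import json
import numpy as np

data = json.loads('[[["1", "2"], ["3", "4"]], [["7", "8"], ["9", "10"]]]')
arr = np.array(data, dtype=float)
print(arr.shape)   # (2, 2, 2) -- three axes, one more than any 2-D table has
print(arr.ndim)    # 3
```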
The easiest is to skip the pandas part completely:
```
import json
with open('/home/robby/temp/sample.json', 'r') as f:
jsonarray = json.load(f)
np.array(jsonarray, dtype=float)
``` |
52,090,461 | I want to use firebase-admin on GAE.
So I installed firebase-admin following method.
<https://cloud.google.com/appengine/docs/standard/python/tools/using-libraries-python-27>
appengine\_config.py
```
from google.appengine.ext import vendor
# Add any libraries install in the "lib" folder.
vendor.add('lib')
```
requirements.txt
```
firebase-admin
```
and install it.
```
pip install -t lib -r requirements.txt
```
Then I checked the "lib" folder, and six exists there.
Its version is 1.11.0.
But I already use the built-in six.
app.yaml
```
libraries:
- name: six
version: latest
```
Built-in six version is "1.9.0".
Do these version differences have any effect on the GAE process?
If there is any effect, how do I solve this?
"https://Stackoverflow.com/questions/52090461",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8118439/"
The `firebase-admin` package [requires `six>=1.6.1`](https://github.com/firebase/firebase-admin-python/blob/master/setup.py#L45), so manually copying in version `1.11.0` to your app won't cause problems with that library.
However, you should ensure that the code in your app that you originally added the `six` dependency for will work with this later version, as copied-in libraries will take precedence over any [built-in libraries](https://cloud.google.com/appengine/docs/standard/python/tools/built-in-libraries-27) (thus specifying it in `app.yaml` is unnecessary as well).
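To see the precedence mechanism outside of GAE — a generic sketch of how a prepended `sys.path` entry (which is what `vendor.add('lib')` boils down to) shadows any other copy; `mylib` is a throwaway module created only for this demo:

```python
import os
import sys
import tempfile

# Create a fake "vendored" library directory with its own mylib module.
vendor_dir = tempfile.mkdtemp()
with open(os.path.join(vendor_dir, "mylib.py"), "w") as f:
    f.write("__version__ = '1.11.0'\n")

# vendor.add('lib') effectively does this: prepend the directory to sys.path.
sys.path.insert(0, vendor_dir)
import mylib

print(mylib.__version__)   # 1.11.0 -- the vendored copy wins the import
```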
It's worth mentioning that copied-in libraries count towards file quotas, because the library is uploaded to App Engine along with your application code. If you're concerned about hitting this quota, [you can use this this technique to only install the dependencies that aren't already built-in](https://stackoverflow.com/questions/33441033/pip-install-to-custom-target-directory-and-exclude-specific-dependencies), which will cut down on the overall file size. | If there's a different version of a library in the lib directory and in the app.yaml, the one in the lib directory is the one which will be available to your app.
So, effectively, your app will be using six 1.11.0. You can verify that by logging `six.__version__` and seeing what version you get.
To avoid confusion, I would probably delete the six library entry in app.yaml.
Numbers that do not contain 4 convert just fine, but once a number that contains 4 is tested, it does not convert properly.
I am new to python and I am struggling to see what is wrong in the code. The code for converting Arabic numbers to Roman numerals works for numbers that do not contain a 4. I have tried to test with different combinations of numbers. The code before the part shown below pretty much determines how many thousands, five hundreds, hundreds, etc. are in the number inputted. Could anyone help me?
```
def display_roman(M, D, C, L, X, V, I):
CM = 0
CD = 0
XC = 0
XL = 0
IX = 0
IV = 0
if D == 2:
M += 1
D -= 2
elif L == 2:
C += 1
L -= 2
elif V == 2:
X += 1
V -= 2
if V == 1 and I == 4:
V = 0
I = 0
IX = 1
elif I == 4:
I == 0
IV == 1
if X == 4:
X == 0
XL == 1
if L == 1 and X == 4:
L == 0
X == 0
XC == 1
if C == 4:
C == 0
CD == 1
if D == 1 and C == 4:
D == 0
C == 0
CM == 1
print("The roman numeral of your number is: ")
print("M" * M, "CM" * CM, "D" * D, "CD" * CD, "C" * C,"XC" * XC, "L" * L, "XL" * XL, "X" * X, "IX" * IX, "V" * V, "IV" * IV, "I" * I)
```
If I input numbers like 4 or 14, I expect to get IV and XIV respectively. But the actual outputs are IIII and XIIII respectively.
Please help. I'm sorry if there is something wrong with the format of my question as I am also new to stackoverflow. Thank you in advance. | 2018/12/24 | [
"https://Stackoverflow.com/questions/53908319",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10827443/"
] | Welcome to SO!
The problem is the way you are trying to define and change your variables. For example, this piece of code:
```
elif I == 4:
I == 0
IV == 1
```
should look like this instead:
```
elif I == 4:
I = 0
IV = 1
```
`==` is a comparison operator that returns `True` if two values are equal and `False` if they are not; `=` is the correct way to assign a new value to a variable. After changing this, all works as intended.
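The difference in a few runnable lines (reusing the question's variable name `I`):

```python
I = 4
print(I == 0)   # False -- a comparison only; the value bound to I is unchanged
print(I)        # 4
I = 0           # an assignment actually rebinds the name
print(I)        # 0
```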
---
```
display_roman(0, 0, 0, 0, 0, 0, 4)
display_roman(0, 0, 0, 0, 0, 1, 4)
The roman numeral of your number is:
IV
The roman numeral of your number is:
IX
``` | This converts any positive integer to roman numeral string:
```
def roman(num: int) -> str:
chlist = "VXLCDM"
rev = [int(ch) for ch in reversed(str(num))]
chlist = ["I"] + [chlist[i % len(chlist)] + "\u0304" * (i // len(chlist))
for i in range(0, len(rev) * 2)]
def period(p: int, ten: str, five: str, one: str) -> str:
if p == 9:
return one + ten
elif p >= 5:
return five + one * (p - 5)
elif p == 4:
return one + five
else:
return one * p
return "".join(reversed([period(rev[i], chlist[i * 2 + 2], chlist[i * 2 + 1], chlist[i * 2])
for i in range(0, len(rev))]))
```
Test code:
```
print(roman(6))
print(roman(78))
print(roman(901))
print(roman(2345))
print(roman(67890))
print(roman(123456))
print(roman(7890123))
print(roman(45678901))
print(roman(234567890))
```
Output:
```
VI
LXXVIII
CMI
MMCCCXLV
L̄X̄V̄MMDCCCXC
C̄X̄X̄MMMCDLVI
V̄̄M̄M̄D̄C̄C̄C̄X̄C̄CXXIII
X̄̄L̄̄V̄̄D̄C̄L̄X̄X̄V̄MMMCMI
C̄̄C̄̄X̄̄X̄̄X̄̄M̄V̄̄D̄L̄X̄V̄MMDCCCXC
```
Note that integers greater than 9 million are represented by characters that contain 2 or more macrons, which are very hard to read unless they are scaled up considerably.
Numbers that do not contain 4 convert just fine, but once a number that contains 4 is tested, it does not convert properly.
I am new to python and I am struggling to see what is wrong in the code. The code for converting Arabic numbers to Roman numerals works for numbers that do not contain a 4. I have tried to test with different combinations of numbers. The code before the part shown below pretty much determines how many thousands, five hundreds, hundreds, etc. are in the number inputted. Could anyone help me?
```
def display_roman(M, D, C, L, X, V, I):
CM = 0
CD = 0
XC = 0
XL = 0
IX = 0
IV = 0
if D == 2:
M += 1
D -= 2
elif L == 2:
C += 1
L -= 2
elif V == 2:
X += 1
V -= 2
if V == 1 and I == 4:
V = 0
I = 0
IX = 1
elif I == 4:
I == 0
IV == 1
if X == 4:
X == 0
XL == 1
if L == 1 and X == 4:
L == 0
X == 0
XC == 1
if C == 4:
C == 0
CD == 1
if D == 1 and C == 4:
D == 0
C == 0
CM == 1
print("The roman numeral of your number is: ")
print("M" * M, "CM" * CM, "D" * D, "CD" * CD, "C" * C,"XC" * XC, "L" * L, "XL" * XL, "X" * X, "IX" * IX, "V" * V, "IV" * IV, "I" * I)
```
If I input numbers like 4 or 14, I expect to get IV and XIV respectively. But the actual outputs are IIII and XIIII respectively.
Please help. I'm sorry if there is something wrong with the format of my question as I am also new to stackoverflow. Thank you in advance. | 2018/12/24 | [
"https://Stackoverflow.com/questions/53908319",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10827443/"
] | Welcome to SO!
The problem is the way you are trying to define and change your variables. For example, this piece of code:
```
elif I == 4:
I == 0
IV == 1
```
should look like this instead:
```
elif I == 4:
I = 0
IV = 1
```
`==` is a comparison operator that returns `True` if two values are equal and `False` if they are not; `=` is the correct way to assign a new value to a variable. After changing this, all works as intended.
---
```
display_roman(0, 0, 0, 0, 0, 0, 4)
display_roman(0, 0, 0, 0, 0, 1, 4)
The roman numeral of your number is:
IV
The roman numeral of your number is:
IX
``` | ```
print("ARABIC TO ROMAN CONVERTER [1-3999]:- \n \n")
x=int(input("ENTER THE ARABIC NUMBER: "))
b=["",'I','II','III','IV','V','VI','VII','VIII','IX','X','XX','XXX','XL','L','LX','LXX','LXXX','XC','C','CX','CXX','CXXX','CXL','CL','CLX','CLXX','CLXXX','CXC','CC','CCC','CD','D','DC','DCC','DCCC','CM','M']
d=["",'X','XX','XXX','XL','L','LX','LXX','LXXX','XC']
e=["",'C','CC','CCC','CD','D','DC','DCC','DCCC','CM']
if x in range(1,1000):
print(e[int((((int(x/100))*100)-1000)/100)]+d[int((x-(int(x/100)*100))/10)]+b[((x%1000)%100)%10])
if x==1000:
print(b[37])
if x in range(1001,2000):
print(b[37]+e[int((((int(x/100))*100)-1000)/100)]+d[int((x-(int(x/100)*100))/10)]+b[((x%1000)%100)%10])
if x==2000:
print("MM")
if x in range(2001,3000):
print(b[37]+b[37]+e[int((((int(x/100))*100)-2000)/100)]+d[int((x-(int(x/100)*100))/10)]+b[((x%1000)%100)%10])
if x==3000:
print("MMM")
if x in range(3001,4000):
print(b[37]+b[37]+b[37]+e[int((((int(x/100))*100)-3000)/100)]+d[int((x-(int(x/100)*100))/10)]+b[((x%1000)%100)%10])
if x not in range(1,4000):
print("Error")
``` |
Numbers that do not contain 4 convert just fine, but once a number that contains 4 is tested, it does not convert properly.
I am new to python and I am struggling to see what is wrong in the code. The code for converting Arabic numbers to Roman numerals works for numbers that do not contain a 4. I have tried to test with different combinations of numbers. The code before the part shown below pretty much determines how many thousands, five hundreds, hundreds, etc. are in the number inputted. Could anyone help me?
```
def display_roman(M, D, C, L, X, V, I):
CM = 0
CD = 0
XC = 0
XL = 0
IX = 0
IV = 0
if D == 2:
M += 1
D -= 2
elif L == 2:
C += 1
L -= 2
elif V == 2:
X += 1
V -= 2
if V == 1 and I == 4:
V = 0
I = 0
IX = 1
elif I == 4:
I == 0
IV == 1
if X == 4:
X == 0
XL == 1
if L == 1 and X == 4:
L == 0
X == 0
XC == 1
if C == 4:
C == 0
CD == 1
if D == 1 and C == 4:
D == 0
C == 0
CM == 1
print("The roman numeral of your number is: ")
print("M" * M, "CM" * CM, "D" * D, "CD" * CD, "C" * C,"XC" * XC, "L" * L, "XL" * XL, "X" * X, "IX" * IX, "V" * V, "IV" * IV, "I" * I)
```
If I input numbers like 4 or 14, I expect to get IV and XIV respectively. But the actual outputs are IIII and XIIII respectively.
Please help. I'm sorry if there is something wrong with the format of my question as I am also new to stackoverflow. Thank you in advance. | 2018/12/24 | [
"https://Stackoverflow.com/questions/53908319",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10827443/"
] | Welcome to SO!
The problem is the way you are trying to define and change your variables. For example, this piece of code:
```
elif I == 4:
I == 0
IV == 1
```
should look like this instead:
```
elif I == 4:
I = 0
IV = 1
```
`==` is a comparison operator that returns `True` if two values are equal and `False` if they are not; `=` is the correct way to assign a new value to a variable. After changing this, all works as intended.
---
```
display_roman(0, 0, 0, 0, 0, 0, 4)
display_roman(0, 0, 0, 0, 0, 1, 4)
The roman numeral of your number is:
IV
The roman numeral of your number is:
IX
``` | Done in 2 lines! (Has an upper limit of 4000)
Here's the code:
```
n,k = {0:'',1:'I',2:'II',3:'III',4:'IV',5:'V',6:'VI',7:'VII',8:'VIII',9:'IX',10:'X',20:'XX',30:'XXX',40:'XL',50:'L',60:'LX',70:'LXX',80:'LXXX',90:'XC',100:'C',200:'CC',300:'CCC',400:'CD',500:'D',600:'DC',700:'DCC',800:'DCCC',900:'CM',1000:'M',2000:'MM',3000:'MMM',4000:'MMMM'},int(input('ARABIC TO ROMAN CONVERTER:-\n\n>> Enter Arabic Number: '))
print('>> Arabic Equivalent:',n[(k//1000)*1000] + n[((k-((k//1000)*1000))//100)*100] + n[((k-(((k-((k//1000)*1000))//100)*100)-(k//1000)*1000)//10)*10] + n[k%10])
``` |
Numbers that do not contain 4 convert just fine, but once a number that contains 4 is tested, it does not convert properly.
I am new to python and I am struggling to see what is wrong in the code. The code for converting Arabic numbers to Roman numerals works for numbers that do not contain a 4. I have tried to test with different combinations of numbers. The code before the part shown below pretty much determines how many thousands, five hundreds, hundreds, etc. are in the number inputted. Could anyone help me?
```
def display_roman(M, D, C, L, X, V, I):
CM = 0
CD = 0
XC = 0
XL = 0
IX = 0
IV = 0
if D == 2:
M += 1
D -= 2
elif L == 2:
C += 1
L -= 2
elif V == 2:
X += 1
V -= 2
if V == 1 and I == 4:
V = 0
I = 0
IX = 1
elif I == 4:
I == 0
IV == 1
if X == 4:
X == 0
XL == 1
if L == 1 and X == 4:
L == 0
X == 0
XC == 1
if C == 4:
C == 0
CD == 1
if D == 1 and C == 4:
D == 0
C == 0
CM == 1
print("The roman numeral of your number is: ")
print("M" * M, "CM" * CM, "D" * D, "CD" * CD, "C" * C,"XC" * XC, "L" * L, "XL" * XL, "X" * X, "IX" * IX, "V" * V, "IV" * IV, "I" * I)
```
If I input numbers like 4 or 14, I expect to get IV and XIV respectively. But the actual outputs are IIII and XIIII respectively.
Please help. I'm sorry if there is something wrong with the format of my question as I am also new to stackoverflow. Thank you in advance. | 2018/12/24 | [
"https://Stackoverflow.com/questions/53908319",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10827443/"
] | This converts any positive integer to roman numeral string:
```
def roman(num: int) -> str:
chlist = "VXLCDM"
rev = [int(ch) for ch in reversed(str(num))]
chlist = ["I"] + [chlist[i % len(chlist)] + "\u0304" * (i // len(chlist))
for i in range(0, len(rev) * 2)]
def period(p: int, ten: str, five: str, one: str) -> str:
if p == 9:
return one + ten
elif p >= 5:
return five + one * (p - 5)
elif p == 4:
return one + five
else:
return one * p
return "".join(reversed([period(rev[i], chlist[i * 2 + 2], chlist[i * 2 + 1], chlist[i * 2])
for i in range(0, len(rev))]))
```
Test code:
```
print(roman(6))
print(roman(78))
print(roman(901))
print(roman(2345))
print(roman(67890))
print(roman(123456))
print(roman(7890123))
print(roman(45678901))
print(roman(234567890))
```
Output:
```
VI
LXXVIII
CMI
MMCCCXLV
L̄X̄V̄MMDCCCXC
C̄X̄X̄MMMCDLVI
V̄̄M̄M̄D̄C̄C̄C̄X̄C̄CXXIII
X̄̄L̄̄V̄̄D̄C̄L̄X̄X̄V̄MMMCMI
C̄̄C̄̄X̄̄X̄̄X̄̄M̄V̄̄D̄L̄X̄V̄MMDCCCXC
```
Note that integers greater than 9 million are represented by characters that contain 2 or more macrons, which are very hard to read unless they are scaled up considerably.
print("ARABIC TO ROMAN CONVERTER [1-3999]:- \n \n")
x=int(input("ENTER THE ARABIC NUMBER: "))
b=["",'I','II','III','IV','V','VI','VII','VIII','IX','X','XX','XXX','XL','L','LX','LXX','LXXX','XC','C','CX','CXX','CXXX','CXL','CL','CLX','CLXX','CLXXX','CXC','CC','CCC','CD','D','DC','DCC','DCCC','CM','M']
d=["",'X','XX','XXX','XL','L','LX','LXX','LXXX','XC']
e=["",'C','CC','CCC','CD','D','DC','DCC','DCCC','CM']
if x in range(1,1000):
print(e[int((((int(x/100))*100)-1000)/100)]+d[int((x-(int(x/100)*100))/10)]+b[((x%1000)%100)%10])
if x==1000:
print(b[37])
if x in range(1001,2000):
print(b[37]+e[int((((int(x/100))*100)-1000)/100)]+d[int((x-(int(x/100)*100))/10)]+b[((x%1000)%100)%10])
if x==2000:
print("MM")
if x in range(2001,3000):
print(b[37]+b[37]+e[int((((int(x/100))*100)-2000)/100)]+d[int((x-(int(x/100)*100))/10)]+b[((x%1000)%100)%10])
if x==3000:
print("MMM")
if x in range(3001,4000):
print(b[37]+b[37]+b[37]+e[int((((int(x/100))*100)-3000)/100)]+d[int((x-(int(x/100)*100))/10)]+b[((x%1000)%100)%10])
if x not in range(1,4000):
print("Error")
``` |
Numbers that do not contain 4 convert just fine, but once a number that contains 4 is tested, it does not convert properly.
I am new to python and I am struggling to see what is wrong in the code. The code for converting Arabic numbers to Roman numerals works for numbers that do not contain a 4. I have tried to test with different combinations of numbers. The code before the part shown below pretty much determines how many thousands, five hundreds, hundreds, etc. are in the number inputted. Could anyone help me?
```
def display_roman(M, D, C, L, X, V, I):
CM = 0
CD = 0
XC = 0
XL = 0
IX = 0
IV = 0
if D == 2:
M += 1
D -= 2
elif L == 2:
C += 1
L -= 2
elif V == 2:
X += 1
V -= 2
if V == 1 and I == 4:
V = 0
I = 0
IX = 1
elif I == 4:
I == 0
IV == 1
if X == 4:
X == 0
XL == 1
if L == 1 and X == 4:
L == 0
X == 0
XC == 1
if C == 4:
C == 0
CD == 1
if D == 1 and C == 4:
D == 0
C == 0
CM == 1
print("The roman numeral of your number is: ")
print("M" * M, "CM" * CM, "D" * D, "CD" * CD, "C" * C,"XC" * XC, "L" * L, "XL" * XL, "X" * X, "IX" * IX, "V" * V, "IV" * IV, "I" * I)
```
If I input numbers like 4 or 14, I expect to get IV and XIV respectively. But the actual outputs are IIII and XIIII respectively.
Please help. I'm sorry if there is something wrong with the format of my question as I am also new to stackoverflow. Thank you in advance. | 2018/12/24 | [
"https://Stackoverflow.com/questions/53908319",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10827443/"
] | Done in 2 lines! (Has an upper limit of 4000)
Here's the code:
```
n,k = {0:'',1:'I',2:'II',3:'III',4:'IV',5:'V',6:'VI',7:'VII',8:'VIII',9:'IX',10:'X',20:'XX',30:'XXX',40:'XL',50:'L',60:'LX',70:'LXX',80:'LXXX',90:'XC',100:'C',200:'CC',300:'CCC',400:'CD',500:'D',600:'DC',700:'DCC',800:'DCCC',900:'CM',1000:'M',2000:'MM',3000:'MMM',4000:'MMMM'},int(input('ARABIC TO ROMAN CONVERTER:-\n\n>> Enter Arabic Number: '))
print('>> Arabic Equivalent:',n[(k//1000)*1000] + n[((k-((k//1000)*1000))//100)*100] + n[((k-(((k-((k//1000)*1000))//100)*100)-(k//1000)*1000)//10)*10] + n[k%10])
``` | ```
print("ARABIC TO ROMAN CONVERTER [1-3999]:- \n \n")
x=int(input("ENTER THE ARABIC NUMBER: "))
b=["",'I','II','III','IV','V','VI','VII','VIII','IX','X','XX','XXX','XL','L','LX','LXX','LXXX','XC','C','CX','CXX','CXXX','CXL','CL','CLX','CLXX','CLXXX','CXC','CC','CCC','CD','D','DC','DCC','DCCC','CM','M']
d=["",'X','XX','XXX','XL','L','LX','LXX','LXXX','XC']
e=["",'C','CC','CCC','CD','D','DC','DCC','DCCC','CM']
if x in range(1,1000):
print(e[int((((int(x/100))*100)-1000)/100)]+d[int((x-(int(x/100)*100))/10)]+b[((x%1000)%100)%10])
if x==1000:
print(b[37])
if x in range(1001,2000):
print(b[37]+e[int((((int(x/100))*100)-1000)/100)]+d[int((x-(int(x/100)*100))/10)]+b[((x%1000)%100)%10])
if x==2000:
print("MM")
if x in range(2001,3000):
print(b[37]+b[37]+e[int((((int(x/100))*100)-2000)/100)]+d[int((x-(int(x/100)*100))/10)]+b[((x%1000)%100)%10])
if x==3000:
print("MMM")
if x in range(3001,4000):
print(b[37]+b[37]+b[37]+e[int((((int(x/100))*100)-3000)/100)]+d[int((x-(int(x/100)*100))/10)]+b[((x%1000)%100)%10])
if x not in range(1,4000):
print("Error")
``` |
33,797,793 | Here is part of program 'Trackbar as the Color Palette' in python which is include with opencv. I want to use it in c++.
My problem is the last line.
```
r = cv2.getTrackbarPos('R','image')
g = cv2.getTrackbarPos('G','image')
b = cv2.getTrackbarPos('B','image')
img[:] = [b,g,r]
```
Without this command I just have a black image. | 2015/11/19 | [
"https://Stackoverflow.com/questions/33797793",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5580005/"
] | You probably want to set all pixels of the `CV_8UC3` image `img` to the color given by `b`, `g` and `r`;
You can do this in OpenCV like:
```
img.setTo(Vec3b(b, g, r));
```
or equivalently:
```
img.setTo(Scalar(b, g, r));
```
---
In your code you're missing basically all the important parts:
* the infinite loop (so you exit the program without refreshing the image color)
* the assignment of the new color
* you are mixing obsolete C syntax and C++ syntax.
This is what you need:
```
#include <opencv2/opencv.hpp>
using namespace cv;
int main(int argc, char** argv)
{
// Initialize a black image
Mat3b imgScribble(256, 512, Vec3b(0,0,0));
namedWindow("Image", WINDOW_AUTOSIZE);
createTrackbar("R", "Image", 0, 255);
createTrackbar("G", "Image", 0, 255);
createTrackbar("B", "Image", 0, 255);
while (true)
{
int r = getTrackbarPos("R", "Image");
int g = getTrackbarPos("G", "Image");
int b = getTrackbarPos("B", "Image");
// Fill image with new color
imgScribble.setTo(Vec3b(b, g, r));
imshow("Image", imgScribble);
if (waitKey(1) == 27 /*ESC*/) break;
}
return 0;
}
``` | I think you are looking for std::for_each(). This code is untested; it is intended to show the concept and might contain bugs:
```
// color type - use whatever you have
using color = std::array<char, 3>;
// prepare image, wherever you get that from
auto img = std::vector<color>{width * height, color{{0x00, 0x00, 0x00}}};
// define the color we want to have: white
char r = 0xff;
char g = 0xff;
char b = 0xff;
std::for_each(std::begin(img), std::end(img),
[&](color& i){
i = {r, g, b};
});
```
By choosing iterators different from `std::begin()` and `std::end()` you can of course select any slice of your vector.
Yes, the idioms in C++ are different to the ones in Python. |
36,308,537 | I am trying to create a variable in python with the prefix `list` followed by a number that is generated dynamically in the script. For example,
I am trying to have a `list10` where `list` is the prefix and 10 is the dynamically generated number.
In TCL we give like
```
list${i}
```
Is there a similar way in python too?
"https://Stackoverflow.com/questions/36308537",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1336962/"
] | The pythonic way to do this would be to make a dictionary to store your lists with the generated names as the dictionary's keys:
```
d = {}
d['list1'] = [1, 2, 3]
d['list2'] = ['a', 'b', 'c']
```
EDIT: generate key names
You can create keys like this:
```
key = 'list' + str(1) # or whatever number you are using
d[key] = [your list]
```
END EDIT
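Putting both steps together — a sketch with run-time generated suffixes, mirroring TCL's `list${i}` (the `list10`/`list11`/`list12` names are just example suffixes):

```python
d = {}
for i in range(10, 13):          # 10, 11, 12 -- numbers generated at run time
    d['list' + str(i)] = []      # one list per dynamic key

d['list10'].append('hello')
print(d['list10'])               # ['hello']
print(sorted(d))                 # ['list10', 'list11', 'list12']
```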
Or if you don't really need to know the names, store your lists in a list and retrieve them by index:
`lists = [[1, 2, 3], ['a', 'b', 'c']]` | You can use `locals()`, `vars()`, or `globals()` and inject your variable name there. For eg.
```
>>> list10
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'list10' is not defined
>>> locals()
{'__builtins__': <module '__builtin__' (built-in)>, '__package__': None, 'x': [], '__name__': '__main__', '__doc__': None}
>>> locals()['list10'] = []
>>> locals()
{'__builtins__': <module '__builtin__' (built-in)>, 'list10': [], '__package__': None, 'x': [], '__name__': '__main__', '__doc__': None}
>>> list10
[]
```
Generally, if you're doing something like this, you'd probably be better off with using a dictionary to store the variable name and the value(s).
For eg.
```
>>> my_lists = {}
>>> my_lists['list10'] = []
```
And then when you want to look it up, you can `.get()` it if you want robustness against a variable name not existing, or directly accessing it if you're going to guard against non-existence yourself.
```
>>> the_list_i_want = my_lists.get('list10')
>>> the_list_i_want = my_lists['list10'] # Will raise a KeyError if it does not exist
``` |
5,882,362 | What is the difference between `thread.start_new_thread` and `threading.Thread.start` in python?
I have noticed that when `start_new_thread` is called, the new thread terminates as soon as the calling thread terminates. `threading.Thread.start` is the opposite: the calling thread waits for other threads to terminate. | 2011/05/04 | [
"https://Stackoverflow.com/questions/5882362",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/322419/"
] | The `thread` module is the low-level threading API of Python. Its direct usage isn't recommended, unless you really need to. The `threading` module is a high-level API, built on top of `thread`. The `Thread.start` method is actually implemented using `thread.start_new_thread`.
The `daemon` attribute of `Thread` must be set before calling `start`, specifying whether the thread should be a daemon. The entire Python program exits when no alive non-daemon threads are left. By default, `daemon` is `False`, so the thread is not a daemon, and hence the process will wait for all its non-daemon thread to exit, which is the behavior you're observing.
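A small sketch of that rule with the standard library — the `daemon` flag is writable only before `start()`; CPython raises `RuntimeError` afterwards:

```python
import threading
import time

def worker():
    time.sleep(0.1)

t = threading.Thread(target=worker)
t.daemon = True            # fine: the thread has not been started yet

t2 = threading.Thread(target=worker)
t2.start()
try:
    t2.daemon = True       # too late: the thread is already active
    raised = False
except RuntimeError as err:
    raised = True
    print(err)             # cannot set daemon status of active thread
t2.join()
print(raised)              # True
```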
---
P.S. `start_new_thread` really is very low-level. It's just a thin wrapper around the Python core thread launcher, which itself calls the OS thread spawning function. | See the [threading.Thread.daemon](http://docs.python.org/library/threading.html#threading.Thread.daemon) flag - basically whenever no non-daemon threads are running, the interpreter terminates. |
53,748,145 | I'm trying to containerize my django file, and I keep running into the issue: `(2006, 'Can\'t connect to local MySQL server through socket \'/var/run/mysqld/mysqld.sock\' (2 "No such file or directory")')`
I found out later that mysql.sock is at `/tmp/mysql.sock` instead of `/var/run/mysqld/mysqld.sock`; how do I change the location so Docker sees `/tmp/mysql.sock`?
Here is my docker-compose.yml:
```
version: '3'
services:
db:
image: mysql
command: --default-authentication-plugin=mysql_native_password
restart: always
environment:
MYSQL_ROOT_PASSWORD: somepassword
adminer:
image: adminer
restart: always
ports:
- 8080:8080
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
```
I have followed the instructions on the MySQL Docker website to link a MySQL instance to a container.
EDIT: I read another Stack Overflow question similar to this and changed my Django code to `'HOST': '127.0.0.1'` in `DATABASES`; now I get: `(2006, 'Can\'t connect to MySQL server on \'127.0.0.1\' (111 "Connection refused")')` | 2018/12/12 | [
"https://Stackoverflow.com/questions/53748145",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10241176/"
] | Without external dependencies, you can use [`filter`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/filter) to extract elements from `A` that don't have ids in `B` and `concat` that with `B`:
```js
const A = [{id: 1, name: 'x'}, {id: 2, name: 'y'}, {id: 3, name: 'z'}];
const B = [{id: 2, name: 'hello'}];
let ids = new Set(B.map(e => e.id));
let newState = A.filter(a => !ids.has(a.id)).concat(B);
console.log(newState);
``` | Since you are already using lodash, you can use `_.unionBy` which merges the arrays using a criterion by which uniqueness is computed:
```
let result = _.unionBy(B, A, "id");
```
Start with `B` before `A`, so that in case of duplicates, `B` values are taken instead of `A` ones.
**Example:**
```js
let A = [
{ id: "a", arr: "A" },
{ id: "b", arr: "A" },
{ id: "c", arr: "A" },
{ id: "d", arr: "A" }
];
let B = [
{ id: "b", arr: "B" },
{ id: "d", arr: "B" }
];
let result = _.unionBy(B, A, "id");
console.log(result);
```
```html
<script src="https://cdn.jsdelivr.net/npm/lodash@4.17.11/lodash.min.js"></script>
```
**Note:** This messes up the order of the items: the duplicates come first, then the rest.
53,748,145 | I'm trying to containerize my Django project, and I keep running into the issue: `(2006, 'Can\'t connect to local MySQL server through socket \'/var/run/mysqld/mysqld.sock\' (2 "No such file or directory")')`
I found out later that mysql.sock is actually at `/tmp/mysql.sock` instead of `/var/run/mysqld/mysqld.sock`. How do I change the location so Docker sees `/tmp/mysql.sock`?
Here is my docker-compose.yml:
```
version: '3'
services:
db:
image: mysql
command: --default-authentication-plugin=mysql_native_password
restart: always
environment:
MYSQL_ROOT_PASSWORD: somepassword
adminer:
image: adminer
restart: always
ports:
- 8080:8080
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
```
I have followed the instructions on the MySQL Docker website to link a MySQL instance to a container.
EDIT: I read another Stack Overflow question similar to this and changed my Django code to `'HOST': '127.0.0.1'` in `DATABASES`; now I get: `(2006, 'Can\'t connect to MySQL server on \'127.0.0.1\' (111 "Connection refused")')` | 2018/12/12 | [
"https://Stackoverflow.com/questions/53748145",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10241176/"
] | Without external dependencies, you can use [`filter`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/filter) to extract elements from `A` that don't have ids in `B` and `concat` that with `B`:
```js
const A = [{id: 1, name: 'x'}, {id: 2, name: 'y'}, {id: 3, name: 'z'}];
const B = [{id: 2, name: 'hello'}];
let ids = new Set(B.map(e => e.id));
let newState = A.filter(a => !ids.has(a.id)).concat(B);
console.log(newState);
``` | Use lodash's `_.differenceBy(A, B)` to remove all items that exist in `B` from `A`, and then combine with `B` items. This will preserve the order of A items before B items.
```js
const A = [{"id":"a","arr":"A"},{"id":"b","arr":"A"},{"id":"c","arr":"A"},{"id":"d","arr":"A"}];
const B = [{"id":"c","arr":"B"},{"id":"d","arr":"B"}];
const result = [..._.differenceBy(A, B, 'id'), ...B];
console.log(result);
```
```html
<script src="https://cdn.jsdelivr.net/npm/lodash@4.17.11/lodash.min.js"></script>
``` |
53,748,145 | I'm trying to containerize my Django project, and I keep running into the issue: `(2006, 'Can\'t connect to local MySQL server through socket \'/var/run/mysqld/mysqld.sock\' (2 "No such file or directory")')`
I found out later that mysql.sock is actually at `/tmp/mysql.sock` instead of `/var/run/mysqld/mysqld.sock`. How do I change the location so Docker sees `/tmp/mysql.sock`?
Here is my docker-compose.yml:
```
version: '3'
services:
db:
image: mysql
command: --default-authentication-plugin=mysql_native_password
restart: always
environment:
MYSQL_ROOT_PASSWORD: somepassword
adminer:
image: adminer
restart: always
ports:
- 8080:8080
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
```
I have followed the instructions on the MySQL Docker website to link a MySQL instance to a container.
EDIT: I read another Stack Overflow question similar to this and changed my Django code to `'HOST': '127.0.0.1'` in `DATABASES`; now I get: `(2006, 'Can\'t connect to MySQL server on \'127.0.0.1\' (111 "Connection refused")')` | 2018/12/12 | [
"https://Stackoverflow.com/questions/53748145",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10241176/"
] | Without external dependencies, you can use [`filter`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/filter) to extract elements from `A` that don't have ids in `B` and `concat` that with `B`:
```js
const A = [{id: 1, name: 'x'}, {id: 2, name: 'y'}, {id: 3, name: 'z'}];
const B = [{id: 2, name: 'hello'}];
let ids = new Set(B.map(e => e.id));
let newState = A.filter(a => !ids.has(a.id)).concat(B);
console.log(newState);
``` | Here is a solution that follows more of the redux styled approach...
```
// imported redux actions (these are simply strings)
import {
SOME_DEFINED_ACTION_CASE
} from '../actions/someActions';
const initialState = {
reduxList: []
}
// reducer function
export default function someReducer(state = initialState, action) {
  switch (action.type) {
    case SOME_DEFINED_ACTION_CASE: {
      let ids = new Set(action.payload.map(e => e.id));
      let newState = state.reduxList.filter(a => !ids.has(a.id)).concat(action.payload);
      return Object.assign({}, state, {
        reduxList: newState
      });
    }
    // a reducer must return the existing state for any unhandled action
    default:
      return state;
  }
}
``` |
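For illustration, here is a self-contained sketch of the same merge logic (hypothetical: the action type string, the payload shape, and the standalone constant are assumptions added so it runs outside the original project):

```javascript
// Hypothetical action type; in the original snippet it is imported
// from '../actions/someActions'.
const SOME_DEFINED_ACTION_CASE = 'SOME_DEFINED_ACTION_CASE';

const initialState = { reduxList: [] };

function someReducer(state = initialState, action) {
  switch (action.type) {
    case SOME_DEFINED_ACTION_CASE: {
      // Drop items from the current list whose id also appears in the
      // payload, then append the payload items (payload wins on conflict).
      const ids = new Set(action.payload.map(e => e.id));
      const merged = state.reduxList
        .filter(a => !ids.has(a.id))
        .concat(action.payload);
      return Object.assign({}, state, { reduxList: merged });
    }
    default:
      // Redux expects the current state back for unhandled actions.
      return state;
  }
}

const s1 = someReducer(undefined, {
  type: SOME_DEFINED_ACTION_CASE,
  payload: [{ id: 'a', arr: 'A' }, { id: 'b', arr: 'A' }],
});
const s2 = someReducer(s1, {
  type: SOME_DEFINED_ACTION_CASE,
  payload: [{ id: 'b', arr: 'B' }],
});
// s2.reduxList is now [{id: 'a', arr: 'A'}, {id: 'b', arr: 'B'}]
```

Calling the reducer twice like this shows that re-dispatching an item with an existing `id` replaces it rather than duplicating it.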
53,748,145 | I'm trying to containerize my Django project, and I keep running into the issue: `(2006, 'Can\'t connect to local MySQL server through socket \'/var/run/mysqld/mysqld.sock\' (2 "No such file or directory")')`
I found out later that mysql.sock is actually at `/tmp/mysql.sock` instead of `/var/run/mysqld/mysqld.sock`. How do I change the location so Docker sees `/tmp/mysql.sock`?
Here is my docker-compose.yml:
```
version: '3'
services:
db:
image: mysql
command: --default-authentication-plugin=mysql_native_password
restart: always
environment:
MYSQL_ROOT_PASSWORD: somepassword
adminer:
image: adminer
restart: always
ports:
- 8080:8080
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
```
I have followed the instructions on the MySQL Docker website to link a MySQL instance to a container.
EDIT: I read another Stack Overflow question similar to this and changed my Django code to `'HOST': '127.0.0.1'` in `DATABASES`; now I get: `(2006, 'Can\'t connect to MySQL server on \'127.0.0.1\' (111 "Connection refused")')` | 2018/12/12 | [
"https://Stackoverflow.com/questions/53748145",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10241176/"
] | Since you are already using lodash, you can use `_.unionBy` which merges the arrays using a criterion by which uniqueness is computed:
```
let result = _.unionBy(B, A, "id");
```
Start with `B` before `A`, so that in case of duplicates, `B` values are taken instead of `A` ones.
**Example:**
```js
let A = [
{ id: "a", arr: "A" },
{ id: "b", arr: "A" },
{ id: "c", arr: "A" },
{ id: "d", arr: "A" }
];
let B = [
{ id: "b", arr: "B" },
{ id: "d", arr: "B" }
];
let result = _.unionBy(B, A, "id");
console.log(result);
```
```html
<script src="https://cdn.jsdelivr.net/npm/lodash@4.17.11/lodash.min.js"></script>
```
**Note:** This messes up the order of the items: the duplicates come first, then the rest. | Use lodash's `_.differenceBy(A, B)` to remove all items that exist in `B` from `A`, and then combine with `B` items. This will preserve the order of A items before B items.
```js
const A = [{"id":"a","arr":"A"},{"id":"b","arr":"A"},{"id":"c","arr":"A"},{"id":"d","arr":"A"}];
const B = [{"id":"c","arr":"B"},{"id":"d","arr":"B"}];
const result = [..._.differenceBy(A, B, 'id'), ...B];
console.log(result);
```
```html
<script src="https://cdn.jsdelivr.net/npm/lodash@4.17.11/lodash.min.js"></script>
``` |
53,748,145 | I'm trying to containerize my Django project, and I keep running into the issue: `(2006, 'Can\'t connect to local MySQL server through socket \'/var/run/mysqld/mysqld.sock\' (2 "No such file or directory")')`
I found out later that mysql.sock is actually at `/tmp/mysql.sock` instead of `/var/run/mysqld/mysqld.sock`. How do I change the location so Docker sees `/tmp/mysql.sock`?
Here is my docker-compose.yml:
```
version: '3'
services:
db:
image: mysql
command: --default-authentication-plugin=mysql_native_password
restart: always
environment:
MYSQL_ROOT_PASSWORD: somepassword
adminer:
image: adminer
restart: always
ports:
- 8080:8080
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
```
I have followed the instructions on the MySQL Docker website to link a MySQL instance to a container.
EDIT: I read another Stack Overflow question similar to this and changed my Django code to `'HOST': '127.0.0.1'` in `DATABASES`; now I get: `(2006, 'Can\'t connect to MySQL server on \'127.0.0.1\' (111 "Connection refused")')` | 2018/12/12 | [
"https://Stackoverflow.com/questions/53748145",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10241176/"
] | Since you are already using lodash, you can use `_.unionBy` which merges the arrays using a criterion by which uniqueness is computed:
```
let result = _.unionBy(B, A, "id");
```
Start with `B` before `A`, so that in case of duplicates, `B` values are taken instead of `A` ones.
**Example:**
```js
let A = [
{ id: "a", arr: "A" },
{ id: "b", arr: "A" },
{ id: "c", arr: "A" },
{ id: "d", arr: "A" }
];
let B = [
{ id: "b", arr: "B" },
{ id: "d", arr: "B" }
];
let result = _.unionBy(B, A, "id");
console.log(result);
```
```html
<script src="https://cdn.jsdelivr.net/npm/lodash@4.17.11/lodash.min.js"></script>
```
**Note:** This messes up the order of the items: the duplicates come first, then the rest. | Here is a solution that follows more of the redux styled approach...
```
// imported redux actions (these are simply strings)
import {
SOME_DEFINED_ACTION_CASE
} from '../actions/someActions';
const initialState = {
reduxList: []
}
// reducer function
export default function someReducer(state = initialState, action) {
  switch (action.type) {
    case SOME_DEFINED_ACTION_CASE: {
      let ids = new Set(action.payload.map(e => e.id));
      let newState = state.reduxList.filter(a => !ids.has(a.id)).concat(action.payload);
      return Object.assign({}, state, {
        reduxList: newState
      });
    }
    // a reducer must return the existing state for any unhandled action
    default:
      return state;
  }
}
``` |
55,577,893 | I want to run a recursive function in Numba, using nopython mode. Until now I'm only getting errors. This is very simple code: the user gives a tuple with fewer than five elements, and the function creates another tuple with a new value added (in this case, the number 3). This is repeated until the final tuple has length 5. For some reason this is not working, and I don't know why.
```py
from numba import njit

@njit
def tup(a):
if len(a) == 5:
return a
else:
b = a + (3,)
b = tup(b)
return b
```
For example, if `a = (0,1)`, I would expect the final result to be tuple `(0,1,3,3,3)`.
EDIT: I'm using Numba 0.41.0 and the error I'm getting is the kernel dying, 'The kernel appears to have died. It will restart automatically.' | 2019/04/08 | [
"https://Stackoverflow.com/questions/55577893",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2136601/"
] | There are several reasons why you shouldn't do that:
* This kind of approach will generally be faster in pure Python than in a numba-decorated function.
* Iteration would be simpler and probably faster; however, beware that concatenating tuples is generally an `O(n)` operation, even in numba. So the overall performance of the function will be `O(n**2)`. This can be improved by using a data structure that supports `O(1)` appends or one that supports pre-allocating the size, or simply by not using a "loopy" or "recursive" approach.
* Have you tried what happens if you leave out the `njit` decorator and pass in a tuple that contains 6 elements? (hint: it will hit the recursion limit because it never fulfills the end condition of the recursion).
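For illustration, the point above about a data structure with `O(1)` appends could look like this in plain Python (a hedged sketch, not the answerer's code; it builds the result with a list and converts back to a tuple at the end):

```python
def tup_iterative(a, fill=3, target_len=5):
    # list.append is amortized O(1), so building the result is O(n)
    # overall instead of O(n**2) from repeated tuple concatenation.
    items = list(a)
    while len(items) < target_len:
        items.append(fill)
    return tuple(items)

print(tup_iterative((0, 1)))  # (0, 1, 3, 3, 3)
```

Unlike the recursive version, this also terminates gracefully when the input already has five or more elements.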
Numba (0.43.1 at the time of writing) only supports simple recursion when the types of the arguments don't change between recursive calls. In your case the type does change: you pass in a `tuple(int64 x 2)`, but the recursive call tries to pass in a `tuple(int64 x 3)`, which is a different type. Strangely, it runs into a stack overflow on my computer, which seems like a bug in numba.
My suggestion would be to use this (no numba, no recursion):
```
def tup(a):
if len(a) < 5:
a += (3, ) * (5 - len(a))
return a
```
Which also returns the expected result:
```
>>> tup((1,))
(1, 3, 3, 3, 3)
>>> tup((1, 2))
(1, 2, 3, 3, 3)
``` | According to [this list of proposals](https://numba.pydata.org/numba-doc/latest/proposals/typing_recursion.html) in the current releases:
>
> Recursion support in numba is currently limited to self-recursion with
> explicit type annotation for the function. This limitation comes from
> the inability to determine the return type of a recursive call.
>
>
>
So, instead try:
```
from numba import jit
@jit()
def tup(a: tuple) -> tuple:
if len(a) == 5:
return a
return tup(a + (3,))
print(tup((0, 1)))
```
To see if that works any better for you. |