1,584,864 | Say that we have a multilayered iterable with some strings at the "final" level, yes strings are iterable, but I think that you get my meaning:
```
['something',
('Diff',
('diff', 'udiff'),
('*.diff', '*.patch'),
('text/x-diff', 'text/x-patch')),
('Delphi',
('delphi', 'pas', 'pascal', 'objectpascal'),
('*.pas',),
('text/x-pascal',['lets', 'put one here'], )),
('JavaScript+Mako',
('js+mako', 'javascript+mako'),
('application/x-javascript+mako',
'text/x-javascript+mako',
'text/javascript+mako')),
...
]
```
Is there any convenient way that I could implement a search that would give me the indices of the matching strings? I would like something that would act something like this (where the above list is `data`):
```
>>> grep('javascript', data)
```
and it would return [ (2,1,1), (2,2,0), (2,2,1), (2,2,2) ] perhaps. Maybe I'm missing a comparable solution that returns nothing of the sort but can help me find some strings within a multi-layered list of iterables of iterables of .... strings.
I wrote a little bit but it was seeming juvenile and inelegant so I thought I would ask here. I guess that I could just keep nesting the exception the way I started here to the number of levels that the function would then support, but I was hoping to get something neat, abstract, pythonic.
```
import re

def rgrep(s, data):
    ''' given an iterable of strings or an iterable of iterables of strings,
    returns the index/indices of strings that contain the search string.
    Args:
        s - the string that you are searching for
        data - the iterable of strings or iterable of iterables of strings
    '''
    results = []
    expr = re.compile(s)
    for item in data:
        try:
            match = expr.search(item)
            if match != None:
                results.append(data.index(item))
        except TypeError:
            for t in item:
                try:
                    m = expr.search(t)
                    if m != None:
                        results.append((data.index(item), item.index(t)))
                except TypeError:
                    ''' you can only go 2 deep! '''
                    pass
    return results
``` | 2009/10/18 | [
"https://Stackoverflow.com/questions/1584864",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/177293/"
] | Here is a grep that uses recursion to search the data structure.
Note that good data structures lead the way to elegant solutions.
Bad data structures make you bend over backwards to accommodate.
This feels to me like one of those cases where a bad data structure is obstructing
rather than helping you.
Having a simple data structure with a more uniform structure
(instead of using this grep) might be worth investigating.
```
#!/usr/bin/env python
data=['something',
('Diff',
('diff', 'udiff'),
('*.diff', '*.patch'),
('text/x-diff', 'text/x-patch',['find','java deep','down'])),
('Delphi',
('delphi', 'pas', 'pascal', 'objectpascal'),
('*.pas',),
('text/x-pascal',['lets', 'put one here'], )),
('JavaScript+Mako',
('js+mako', 'javascript+mako'),
('application/x-javascript+mako',
'text/x-javascript+mako',
'text/javascript+mako')),
]
def grep(astr, data, prefix=[]):
    result = []
    for idx, elt in enumerate(data):
        if isinstance(elt, basestring):
            if astr in elt:
                result.append(tuple(prefix + [idx]))
        else:
            result.extend(grep(astr, elt, prefix + [idx]))
    return result

def pick(data, idx):
    if idx:
        return pick(data[idx[0]], idx[1:])
    else:
        return data

idxs = grep('java', data)
print(idxs)
for idx in idxs:
    print('data[%s] = %s' % (idx, pick(data, idx)))
``` | To get the position use `enumerate()`
```
>>> data = [('foo', 'bar', 'frrr', 'baz'), ('foo/bar', 'baz/foo')]
>>>
>>> for l1, v1 in enumerate(data):
...     for l2, v2 in enumerate(v1):
...         if 'f' in v2:
...             print l1, l2, v2
...
0 0 foo
1 0 foo/bar
1 1 baz/foo
```
In this example I am using a simple substring match (`'foo' in bar`), but you would probably use a regex for the job.
Obviously, `enumerate()` can support more than 2 levels of nesting, as in your edited post. |
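The nested-`enumerate()` pattern above extends one level deeper by branching on whether an element is still a string. A minimal sketch against a trimmed-down sample shaped like the question's data (hypothetical, not from either answer):

```python
data = [('JavaScript+Mako',
         ('js+mako', 'javascript+mako'),
         ('application/x-javascript+mako',))]

hits = []
for i, group in enumerate(data):
    for j, sub in enumerate(group):
        if isinstance(sub, str):          # a leaf string at level 2
            if 'javascript' in sub:
                hits.append((i, j))
        else:                             # one more level of nesting
            for k, leaf in enumerate(sub):
                if 'javascript' in leaf:
                    hits.append((i, j, k))

print(hits)  # [(0, 1, 1), (0, 2, 0)]
```

Note the match is case-sensitive, so `'JavaScript+Mako'` itself is not a hit.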
1,584,864 | Say that we have a multilayered iterable with some strings at the "final" level, yes strings are iterable, but I think that you get my meaning:
```
['something',
('Diff',
('diff', 'udiff'),
('*.diff', '*.patch'),
('text/x-diff', 'text/x-patch')),
('Delphi',
('delphi', 'pas', 'pascal', 'objectpascal'),
('*.pas',),
('text/x-pascal',['lets', 'put one here'], )),
('JavaScript+Mako',
('js+mako', 'javascript+mako'),
('application/x-javascript+mako',
'text/x-javascript+mako',
'text/javascript+mako')),
...
]
```
Is there any convenient way that I could implement a search that would give me the indices of the matching strings? I would like something that would act something like this (where the above list is `data`):
```
>>> grep('javascript', data)
```
and it would return [ (2,1,1), (2,2,0), (2,2,1), (2,2,2) ] perhaps. Maybe I'm missing a comparable solution that returns nothing of the sort but can help me find some strings within a multi-layered list of iterables of iterables of .... strings.
I wrote a little bit but it was seeming juvenile and inelegant so I thought I would ask here. I guess that I could just keep nesting the exception the way I started here to the number of levels that the function would then support, but I was hoping to get something neat, abstract, pythonic.
```
import re

def rgrep(s, data):
    ''' given an iterable of strings or an iterable of iterables of strings,
    returns the index/indices of strings that contain the search string.
    Args:
        s - the string that you are searching for
        data - the iterable of strings or iterable of iterables of strings
    '''
    results = []
    expr = re.compile(s)
    for item in data:
        try:
            match = expr.search(item)
            if match != None:
                results.append(data.index(item))
        except TypeError:
            for t in item:
                try:
                    m = expr.search(t)
                    if m != None:
                        results.append((data.index(item), item.index(t)))
                except TypeError:
                    ''' you can only go 2 deep! '''
                    pass
    return results
``` | 2009/10/18 | [
"https://Stackoverflow.com/questions/1584864",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/177293/"
] | I'd split recursive enumeration from grepping:
```
def enumerate_recursive(iter, base=()):
    for index, item in enumerate(iter):
        if isinstance(item, basestring):
            yield (base + (index,)), item
        else:
            for pair in enumerate_recursive(item, (base + (index,))):
                yield pair

def grep_index(filt, iter):
    return (index for index, text in iter if filt in text)
```
This way you can do both non-recursive and recursive grepping:
```
l = list(grep_index('opt1', enumerate(sys.argv))) # non-recursive
r = list(grep_index('diff', enumerate_recursive(your_data))) # recursive
```
Also note that we're using iterators here, saving RAM for longer sequences if necessary.
An even more generic solution would be to pass a callable instead of a string to grep\_index. But that might not be necessary for you. | To get the position use `enumerate()`
```
>>> data = [('foo', 'bar', 'frrr', 'baz'), ('foo/bar', 'baz/foo')]
>>>
>>> for l1, v1 in enumerate(data):
...     for l2, v2 in enumerate(v1):
...         if 'f' in v2:
...             print l1, l2, v2
...
0 0 foo
1 0 foo/bar
1 1 baz/foo
```
In this example I am using a simple substring match (`'foo' in bar`), but you would probably use a regex for the job.
Obviously, `enumerate()` can support more than 2 levels of nesting, as in your edited post. |
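The "callable instead of a string" generalization of `grep_index` mentioned earlier might look like this on Python 3 (a sketch: `str` replaces Python 2's `basestring`, and `yield from` replaces the inner loop):

```python
import re

def enumerate_recursive(items, base=()):
    # walk nested iterables, yielding (index-tuple, leaf-string) pairs
    for index, item in enumerate(items):
        if isinstance(item, str):
            yield base + (index,), item
        else:
            yield from enumerate_recursive(item, base + (index,))

def grep_index(pred, pairs):
    # pred is any callable on the string, e.g. a bound regex search
    return (index for index, text in pairs if pred(text))

data = [('js+mako', 'javascript+mako'), 'diff']
hits = list(grep_index(re.compile('javascript').search,
                       enumerate_recursive(data)))
print(hits)  # [(0, 1)]
```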
1,584,864 | Say that we have a multilayered iterable with some strings at the "final" level, yes strings are iterable, but I think that you get my meaning:
```
['something',
('Diff',
('diff', 'udiff'),
('*.diff', '*.patch'),
('text/x-diff', 'text/x-patch')),
('Delphi',
('delphi', 'pas', 'pascal', 'objectpascal'),
('*.pas',),
('text/x-pascal',['lets', 'put one here'], )),
('JavaScript+Mako',
('js+mako', 'javascript+mako'),
('application/x-javascript+mako',
'text/x-javascript+mako',
'text/javascript+mako')),
...
]
```
Is there any convenient way that I could implement a search that would give me the indices of the matching strings? I would like something that would act something like this (where the above list is `data`):
```
>>> grep('javascript', data)
```
and it would return [ (2,1,1), (2,2,0), (2,2,1), (2,2,2) ] perhaps. Maybe I'm missing a comparable solution that returns nothing of the sort but can help me find some strings within a multi-layered list of iterables of iterables of .... strings.
I wrote a little bit but it was seeming juvenile and inelegant so I thought I would ask here. I guess that I could just keep nesting the exception the way I started here to the number of levels that the function would then support, but I was hoping to get something neat, abstract, pythonic.
```
import re

def rgrep(s, data):
    ''' given an iterable of strings or an iterable of iterables of strings,
    returns the index/indices of strings that contain the search string.
    Args:
        s - the string that you are searching for
        data - the iterable of strings or iterable of iterables of strings
    '''
    results = []
    expr = re.compile(s)
    for item in data:
        try:
            match = expr.search(item)
            if match != None:
                results.append(data.index(item))
        except TypeError:
            for t in item:
                try:
                    m = expr.search(t)
                    if m != None:
                        results.append((data.index(item), item.index(t)))
                except TypeError:
                    ''' you can only go 2 deep! '''
                    pass
    return results
``` | 2009/10/18 | [
"https://Stackoverflow.com/questions/1584864",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/177293/"
] | I'd split recursive enumeration from grepping:
```
def enumerate_recursive(iter, base=()):
    for index, item in enumerate(iter):
        if isinstance(item, basestring):
            yield (base + (index,)), item
        else:
            for pair in enumerate_recursive(item, (base + (index,))):
                yield pair

def grep_index(filt, iter):
    return (index for index, text in iter if filt in text)
```
This way you can do both non-recursive and recursive grepping:
```
l = list(grep_index('opt1', enumerate(sys.argv))) # non-recursive
r = list(grep_index('diff', enumerate_recursive(your_data))) # recursive
```
Also note that we're using iterators here, saving RAM for longer sequences if necessary.
An even more generic solution would be to pass a callable instead of a string to grep\_index. But that might not be necessary for you. | Here is a grep that uses recursion to search the data structure.
Note that good data structures lead the way to elegant solutions.
Bad data structures make you bend over backwards to accommodate.
This feels to me like one of those cases where a bad data structure is obstructing
rather than helping you.
Having a simple data structure with a more uniform structure
(instead of using this grep) might be worth investigating.
```
#!/usr/bin/env python
data=['something',
('Diff',
('diff', 'udiff'),
('*.diff', '*.patch'),
('text/x-diff', 'text/x-patch',['find','java deep','down'])),
('Delphi',
('delphi', 'pas', 'pascal', 'objectpascal'),
('*.pas',),
('text/x-pascal',['lets', 'put one here'], )),
('JavaScript+Mako',
('js+mako', 'javascript+mako'),
('application/x-javascript+mako',
'text/x-javascript+mako',
'text/javascript+mako')),
]
def grep(astr, data, prefix=[]):
    result = []
    for idx, elt in enumerate(data):
        if isinstance(elt, basestring):
            if astr in elt:
                result.append(tuple(prefix + [idx]))
        else:
            result.extend(grep(astr, elt, prefix + [idx]))
    return result

def pick(data, idx):
    if idx:
        return pick(data[idx[0]], idx[1:])
    else:
        return data

idxs = grep('java', data)
print(idxs)
for idx in idxs:
    print('data[%s] = %s' % (idx, pick(data, idx)))
``` |
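Both recursive answers above are Python 2 (`basestring`). A Python 3 adaptation of the `grep`/`pick` pair might look like this sketch (tuple prefixes instead of list prefixes, otherwise the same idea):

```python
data = ['something',
        ('Delphi',
         ('delphi', 'pas'),
         ('text/x-pascal', ['lets', 'put one here']))]

def grep(astr, data, prefix=()):
    result = []
    for idx, elt in enumerate(data):
        if isinstance(elt, str):      # Python 3: str, not basestring
            if astr in elt:
                result.append(prefix + (idx,))
        else:
            result.extend(grep(astr, elt, prefix + (idx,)))
    return result

def pick(data, idx):
    # follow an index tuple back down into the structure
    return pick(data[idx[0]], idx[1:]) if idx else data

idxs = grep('pas', data)
print(idxs)                 # [(1, 1, 1), (1, 2, 0)]
print(pick(data, idxs[0]))  # pas
```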
36,831,274 | I want to connect my Django web app database to my postgresql database I have on my Pythonanywhere paid account. Before coding anything, I just wanted to get everything talking to each other. This is the settings.py DATABASE section from my django app. I'm running Python 3.5 and Django 1.9.
```
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': '[myDatabaseName]',
'USER': '[myUsername]',
'PASSWORD': '[myPassword]',
'HOST': 'xxxxxxxx-xxx.postgres.pythonanywhere-services.com',
'PORT': '10130',
}
}
```
The HOST and PORT were both provided by the pythonanywhere.com site under the DATABASE tab, Postgres section. I did create my database, username, and password on the postgres console.
I then created a checkedb.py script I found that would check if the connection with the postgres database works.
```
from django.db import connections
db_conn = connections['default']
try:
    c = db_conn.cursor()
except OperationalError:
    connected = False
else:
    connected = True
```
This is the error I receive after running this code.
```
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/django/conf/__init__.py", line 38, in _setup
settings_module = os.environ[ENVIRONMENT_VARIABLE]
File "/usr/lib/python3.4/os.py", line 633, in __getitem__
raise KeyError(key) from None
KeyError: 'DJANGO_SETTINGS_MODULE'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/giraldez/golf/golf/dbcheck.py", line 2, in <module>
db_conn = connections['default']
File "/usr/local/lib/python3.4/dist-packages/django/db/utils.py", line 196, in __getitem__
self.ensure_defaults(alias)
File "/usr/local/lib/python3.4/dist-packages/django/db/utils.py", line 170, in ensure_defaults
conn = self.databases[alias]
File "/usr/local/lib/python3.4/dist-packages/django/utils/functional.py", line 49, in __get__
res = instance.__dict__[self.func.__name__] = self.func(instance)
File "/usr/local/lib/python3.4/dist-packages/django/db/utils.py", line 153, in databases
self._databases = settings.DATABASES
File "/usr/local/lib/python3.4/dist-packages/django/conf/__init__.py", line 54, in __getattr__
self._setup(name)
File "/usr/local/lib/python3.4/dist-packages/django/conf/__init__.py", line 47, in _setup
% (desc, ENVIRONMENT_VARIABLE))
django.core.exceptions.ImproperlyConfigured: Requested setting DATABASES, but settings are not configured. You must either define the environment variable D
JANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.
```
The directory for my project looks like this
```
golf/
---golf/
------__init.py__
------dbcheck.py
------settings.py
------urls.py
------wsgi.py
---media/
---static/
---manage.py
``` | 2016/04/25 | [
"https://Stackoverflow.com/questions/36831274",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3314523/"
] | You need to set up Django first if you are using it as a standalone script. It would have been easier to try with `./manage.py shell`, but if you want to test with a standalone script, here goes:
```
import sys, os

if __name__ == '__main__':  # pragma nocover
    # Setup environ
    sys.path.append(os.getcwd())
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "main.settings_dev")
    import django
    django.setup()

    from django.db import connections
    db_conn = connections['default']
    try:
        c = db_conn.cursor()
    except OperationalError:
        connected = False
    else:
        connected = True
``` | The error you are getting is because you need to properly initialize the django environment before you can write custom scripts against it.
The easiest way to solve this is to run a python shell that already has the django configuration loaded, you can do this with `python manage.py shell`.
Once this shell has loaded, enter your code and it should tell you if the connection is valid or not. |
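For reference, the single line both answers hinge on is pointing `DJANGO_SETTINGS_MODULE` at the project's settings before anything touches them. Adapted to the question's `golf/` layout it would be the following sketch (the value must match your own package path, and it has to run before `from django.db import connections` or any other settings access):

```python
import os

# tell Django where settings.py lives; a no-op if the variable is already set
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "golf.settings")
print(os.environ["DJANGO_SETTINGS_MODULE"])
```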
56,652,022 | I am working with an Altera DE1-SoC board where I am reading data from a sensor using a C program. The data is being read continually, in a while loop and written to a text file. I want to read this data using a python program and display the data.
The problem is that I am not sure how to avoid collision during the read/write from the file as these need to happen simultaneously. I was thinking of creating a mutex, but I am not sure how to implement it so the two different program languages can work with it.
Is there a simple way to do this? Thanks. | 2019/06/18 | [
"https://Stackoverflow.com/questions/56652022",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6649616/"
] | Loop the *TableDefs* collection.
For each *TableDef*, loop the *Fields* collection.
For each *Field*, check the property *Type* (= 101, as I recall) or *IsComplex* = True.
IsComplex is also True for *Multi-Value* fields, but if you don't use these, you should be fine. | Here is an example on VBA. It prints in immediate (open VBA editor by `Alt` + `F11`, then press `Ctrl` + `G`) messages about tables with Attachment type field.
```vb
Public Sub subTest()
    Dim db As DAO.Database
    Dim td As DAO.TableDef
    Dim fld As DAO.Field
    Dim boolIsAttachmentFieldPresent As Boolean

    Set db = CurrentDb()
    For Each td In db.TableDefs
        If Left(td.Name, 4) <> "MSys" Then
            'Debug.Print "Contents of: " & td.Name
            boolIsAttachmentFieldPresent = False
            For Each fld In td.Fields
                'Debug.Print fld.Name & " of type " & fld.Type
                If fld.Type = 101 Then
                    boolIsAttachmentFieldPresent = True
                End If
            Next fld
            If boolIsAttachmentFieldPresent Then
                Debug.Print "Table " & td.Name & " contains attachment field"
            End If
        End If
    Next td
End Sub
```
All as @Gustav described. |
56,652,022 | I am working with an Altera DE1-SoC board where I am reading data from a sensor using a C program. The data is being read continually, in a while loop and written to a text file. I want to read this data using a python program and display the data.
The problem is that I am not sure how to avoid collision during the read/write from the file as these need to happen simultaneously. I was thinking of creating a mutex, but I am not sure how to implement it so the two different program languages can work with it.
Is there a simple way to do this? Thanks. | 2019/06/18 | [
"https://Stackoverflow.com/questions/56652022",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6649616/"
] | Loop the *TableDefs* collection.
For each *TableDef*, loop the *Fields* collection.
For each *Field*, check the property *Type* (= 101, as I recall) or *IsComplex* = True.
IsComplex is also True for *Multi-Value* fields, but if you don't use these, you should be fine. | This is what I came up with:
```vb
Public Function ListAttachmentTables()
    Dim tdf As TableDef
    Dim fld As Field
    Dim FldsCnt As Long
    Dim lngCountLoop As Long

    CurrentDb.TableDefs.Refresh
    For Each tdf In CurrentDb.TableDefs
        If Not tdf.Name Like "MSys*" Then
            For Each fld In tdf.Fields
                If fld.Type = 101 Or fld.IsComplex = True Then
                    Debug.Print tdf.Name & " / " & fld.Name
                End If
            Next fld
        End If
    Next tdf
    Set tdf = Nothing
End Function
```
When running `ListAttachmentTables` in the immediate window, the result was:
```
ASSOC_CLOSING_INFO / Attachments
ASSOC_NAME2 / Attachments
Backup Closing Sharepoint / Documents
Backup Closing Sharepoint / Attachments
CC_Card / Field1
Closing_requests1 / Documents
Closing_requests1 / Delivery_Dates
Closing_requests1 / Total_Package
Closing_requests1 / Attachments
Mail_Requests / group1
Mail_Requests / group2
Mail_Requests / Attachments
MSysResources / Data
UserInfo / Attachments
```
Close to what I needed, thanks! |
60,494,341 | I have a large csv data file, sample of the data as below.
```
name year value
China 1997 481970
Japan 1997 8491480
Germany 1997 4678022
China 1998 589759
Japan 1998 7912546
Germany 1998 5426582
```
After several attempts with no success, I would like to interpolate my data to monthly, and then change the format of data to be as in the example below,
```
date China Japan Germany
1997-01-31 40164.17 707623.33 389835.17
1997-02-28 80328.33 1415246.67 779670.33
1997-03-30 1204925 2122870 1169505.50
1997-04-30 160656.67 2830493.33 1559340.67
1997-05-31 200820.83 3538116.67 1949175.83
. . . .
. . . .
. . . .
1997-12-31 481970 8491480 4678022
1998-01-31 49146.58 659378.83 452215.17
1998-02-28 98293.17 1318757.67 904430.33
1998-03-30 147439.75 1978136.5 1356645.5
1998-04-30 196586.33 2637515.33 1808860.67
1998-05-31 245732.97 3296894.17 2261075.83
. . . .
. . . .
. . . .
1998-12-31 589759 7912546 5426582
```
someone suggested this [How to pivot a dataframe](https://stackoverflow.com/questions/47152691/how-to-pivot-a-dataframe) though it proved hard for me to reach the desired results. Maybe I'm not that good in python.
I would like to do it in R.
Thoughts? | 2020/03/02 | [
"https://Stackoverflow.com/questions/60494341",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12807398/"
] | Assuming the input shown reproducibly in the Note at the end convert it to a zoo object `z` which, by specifying `split=`, will also convert it to wide form at the same time. Then expand it using `merge` and use linear interpolation with `na.approx`. Alternately replace `na.approx` with `na.spline`. Finally convert the time index to `Date` class. The result is a zoo object `m`. If you need a data frame use `fortify.zoo(m)`.
```
library(zoo)
z <- read.zoo(DF, split = 1, index = 2, FUN = as.numeric)
m <- na.approx(merge(z, zoo(, c(kronecker(time(z), 0:11/12, "+")))))
time(m) <- as.Date(as.yearmon(time(m)), frac = 1)
m
```
giving:
```
China Germany Japan
1997-01-31 481970.0 4678022 8491480
1997-02-28 490952.4 4740402 8443236
1997-03-31 499934.8 4802782 8394991
1997-04-30 508917.2 4865162 8346747
1997-05-31 517899.7 4927542 8298502
1997-06-30 526882.1 4989922 8250257
1997-07-31 535864.5 5052302 8202013
1997-08-31 544846.9 5114682 8153769
1997-09-30 553829.3 5177062 8105524
1997-10-31 562811.8 5239442 8057280
1997-11-30 571794.2 5301822 8009035
1997-12-31 580776.6 5364202 7960790
1998-01-31 589759.0 5426582 7912546
```
Note
----
```
Lines <- "name year value
China 1997 481970
Japan 1997 8491480
Germany 1997 4678022
China 1998 589759
Japan 1998 7912546
Germany 1998 5426582"
DF <- read.table(text = Lines, header = TRUE, as.is = TRUE)
``` | An option using `data.table`:
```
DT[, date := as.IDate(paste0(year, "-12-31"))][,
c("y0", "y1") := .(value, shift(value, -1L, fill=value[.N])), name]
longDT <- DT[, {
  eom <- seq(min(date)+1L, max(date)+1L, by="1 month") - 1L
  v <- unlist(mapply(function(a, d) a + (0:11) * d, y0, (y1 - y0)/12, SIMPLIFY=FALSE))
  .(eom, v=v[seq_along(eom)])
}, name]
dcast(longDT, eom ~ name, sum, value.var="v")
```
output:
```
eom China Germany Japan
1: 1996-12-31 40164.17 389835.2 707623.3
2: 1997-01-31 76981.32 747184.1 1356278.1
3: 1997-02-28 113798.48 1104533.0 2004932.8
4: 1997-03-31 150615.63 1461881.9 2653587.5
5: 1997-04-30 187432.78 1819230.8 3302242.2
6: 1997-05-31 224249.93 2176579.7 3950896.9
7: 1997-06-30 261067.09 2533928.6 4599551.7
8: 1997-07-31 297884.24 2891277.5 5248206.4
9: 1997-08-31 334701.39 3248626.4 5896861.1
10: 1997-09-30 371518.54 3605975.3 6545515.8
11: 1997-10-31 408335.70 3963324.2 7194170.6
12: 1997-11-30 445152.85 4320673.1 7842825.3
13: 1997-12-31 481970.00 4678022.0 8491480.0
14: 1998-01-31 490952.42 4740402.0 8443235.5
15: 1998-02-28 499934.83 4802782.0 8394991.0
16: 1998-03-31 508917.25 4865162.0 8346746.5
17: 1998-04-30 517899.67 4927542.0 8298502.0
18: 1998-05-31 526882.08 4989922.0 8250257.5
19: 1998-06-30 535864.50 5052302.0 8202013.0
20: 1998-07-31 544846.92 5114682.0 8153768.5
21: 1998-08-31 553829.33 5177062.0 8105524.0
22: 1998-09-30 562811.75 5239442.0 8057279.5
23: 1998-10-31 571794.17 5301822.0 8009035.0
24: 1998-11-30 580776.58 5364202.0 7960790.5
25: 1998-12-31 589759.00 5426582.0 7912546.0
eom China Germany Japan
```
data:
```
library(data.table)
DT <- fread("name year value
China 1996 40164.17
Japan 1996 707623.33
Germany 1996 389835.17
China 1997 481970
Japan 1997 8491480
Germany 1997 4678022
China 1998 589759
Japan 1998 7912546
Germany 1998 5426582")
```
I have taken the liberty to add in the data for 1996. |
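For readers coming from Python: the same year-end-to-month-end interpolation can be sketched with pandas (not part of either answer above; `reindex` over month ends plus `interpolate` plays the role of zoo's `merge` + `na.approx`, and positional linear interpolation reproduces the equal-twelfths values in the `data.table` output):

```python
import pandas as pd

df = pd.DataFrame({
    "name":  ["China", "Japan", "Germany", "China", "Japan", "Germany"],
    "year":  [1997, 1997, 1997, 1998, 1998, 1998],
    "value": [481970, 8491480, 4678022, 589759, 7912546, 5426582],
})

# one column per country, anchored at year-end dates
wide = df.pivot(index="year", columns="name", values="value")
wide.index = pd.to_datetime(wide.index.astype(str) + "-12-31")

# upsample to month ends and fill the eleven gaps linearly
months = pd.date_range(wide.index.min(), wide.index.max(),
                       freq=pd.offsets.MonthEnd())
monthly = wide.reindex(months).interpolate("linear")
print(monthly.round(2))
```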
49,687,860 | After upgrade pycharm to 2018.1, and upgrade python to 3.6.5, pycharm reports "unresolved reference 'join'". The last version of pycharm doesn't show any warning for the line below:
```
from os.path import join, expanduser
```
May I know why?
(I used python 3.6.? before)
I tried almost everything I can find, such as delete and recreate interpreter, invalidate cache and restart, delete and recreate virtualenv... how do I fix this?
(I can run my program without any error.) | 2018/04/06 | [
"https://Stackoverflow.com/questions/49687860",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8335451/"
] | Sadly, it seems that PyCharm will try to evaluate the path to an existing file/folder, which in some cases will not exist and thus create this warning.
It's not very useful when you are building a path for something that's supposed to be created, because obviously it will not exist yet, but PyCharm will still complain.
You could try clicking on File > Invalidate Caches > Invalidate and Restart. This worked for me.
[edit] It will come back tho, not much else to do. | Check that pycharms is using the correct interpreter. |
63,404,899 | I'm trying to write a highly modular Python logging system (using the logging module) and include information from the trace module in the log message.
For example, I want to be able to write a line of code like:
```
my_logger.log_message(MyLogFilter, "this is a message")
```
and have it include the trace of where the "log\_message" call was made, instead of the actual logger call itself.
I almost have the following code working except for the fact that the trace information is from the `logging.debug()` call rather than the `my_logger.log_message()` one.
```
class MyLogFilter(logging.Filter):
    def __init__(self):
        self.extra = {"error_code": 999}
        self.level = "debug"

    def filter(self, record):
        for key in self.extra.keys():
            setattr(record, key, self.extra[key])
        return True


class myLogger(object):
    def __init__(self):
        fid = logging.FileHandler("test.log")
        formatter = logging.Formatter('%(pathname)s:%(lineno)i, %(error_code)i, %(message)s')
        fid.setFormatter(formatter)
        self.my_logger = logging.getLogger(name="test")
        self.my_logger.setLevel(logging.DEBUG)
        self.my_logger.addHandler(fid)

    def log_message(self, lfilter, message):
        xfilter = lfilter()
        self.my_logger.addFilter(xfilter)
        log_funct = getattr(self.my_logger, xfilter.level)
        log_funct(message)


if __name__ == "__main__":
    logger = myLogger()
    logger.log_message(MyLogFilter, "debugging")
```
This is a lot of trouble to go through in order to make a simple `logging.debug` call but in reality, I will have a list of many different versions of `MyLogFilter` at different logging levels that contain different values of the "error\_code" attribute and I'm trying to make the `log_message()` call as short and sweet as possible because it will be repeated numerous times.
I would appreciate any information about how to do what I want to, or if I'm completely off on the wrong track and if that's the case, what I should be doing instead.
I would like to stick to the internal python modules of "logging" and "trace" if that's possible instead of using any external solutions. | 2020/08/14 | [
"https://Stackoverflow.com/questions/63404899",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3014653/"
] | >
> or if I'm completely off on the wrong track and if that's the case, what I should be doing instead.
>
>
>
My strong suggestion is that you view logging as a solved problem and avoid reinventing the wheel.
If you need more than the standard library's `logging` module provides, it's probably something like [structlog](https://www.structlog.org/en/stable/) (`pip install structlog`)
Structlog will give you:
* data binding
* cloud native structured logging
* pipelines
* ...and more
It will handle most local and cloud use cases.
Below is one common configuration that will output colorized logging to a .log file, to stdout, and can be extended further to log to eg AWS CloudWatch.
Notice there is an included processor, `StackInfoRenderer` -- this will add stack information to all logging calls that pass a 'truthy' value for stack\_info (this is also in stdlib's logging, btw). If you only want stack info for exceptions, then you'd want to do something like exc\_info=True in your logging calls.
**main.py**
```py
from structlog import get_logger
from logging_config import configure_local_logging

configure_local_logging()
logger = get_logger()

logger.info("Some random info")
logger.debug("Debugging info with stack", stack_info=True)

try:
    assert 'foo' == 'bar'
except Exception as e:
    logger.error("Error info with an exc", exc_info=e)
```
**logging\_config.py**
```py
import logging
import logging.config  # dictConfig lives in logging.config

import structlog

def configure_local_logging(filename=__name__):
    """Provides a structlog colorized console and file renderer for logging in eg ING tickets"""
    timestamper = structlog.processors.TimeStamper(fmt="%Y-%m-%d %H:%M:%S")
    pre_chain = [
        structlog.stdlib.add_log_level,
        timestamper,
    ]
    logging.config.dictConfig({
        "version": 1,
        "disable_existing_loggers": False,
        "formatters": {
            "plain": {
                "()": structlog.stdlib.ProcessorFormatter,
                "processor": structlog.dev.ConsoleRenderer(colors=False),
                "foreign_pre_chain": pre_chain,
            },
            "colored": {
                "()": structlog.stdlib.ProcessorFormatter,
                "processor": structlog.dev.ConsoleRenderer(colors=True),
                "foreign_pre_chain": pre_chain,
            },
        },
        "handlers": {
            "default": {
                "level": "DEBUG",
                "class": "logging.StreamHandler",
                "formatter": "colored",
            },
            "file": {
                "level": "DEBUG",
                "class": "logging.handlers.WatchedFileHandler",
                "filename": filename + ".log",
                "formatter": "plain",
            },
        },
        "loggers": {
            "": {
                "handlers": ["default", "file"],
                "level": "DEBUG",
                "propagate": True,
            },
        }
    })
    structlog.configure_once(
        processors=[
            structlog.stdlib.add_log_level,
            structlog.stdlib.PositionalArgumentsFormatter(),
            timestamper,
            structlog.processors.StackInfoRenderer(),
            structlog.processors.format_exc_info,
            structlog.stdlib.ProcessorFormatter.wrap_for_formatter,
        ],
        context_class=dict,
        logger_factory=structlog.stdlib.LoggerFactory(),
        wrapper_class=structlog.stdlib.BoundLogger,
        cache_logger_on_first_use=True,
    )
```
Structlog can do quite a bit more than this. I suggest you check it out. | It turns out the missing piece to the puzzle is using the "traceback" module rather than the "trace" one. It's simple enough to parse the output of traceback to pull out the source filename and line number of the ".log\_message()" call.
If my logging needs become any more complicated then I'll definitely look into struct\_log. Thank you for that information as I'd never heard about it before. |
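The traceback approach mentioned above can be done without string parsing: `traceback.extract_stack()` already yields structured frames, so the caller of `log_message()` is just the second-to-last entry. A minimal sketch (the `%s:%d` format mirrors the question's formatter):

```python
import traceback

def log_message(message):
    # [-1] is this frame; [-2] is whoever called log_message()
    caller = traceback.extract_stack()[-2]
    return "%s:%d, %s" % (caller.filename, caller.lineno, message)

print(log_message("debugging"))
```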
51,074,335 | Want to find the delimiter in the text file.
The text looks:
```
ID; Name
1; John Mak
2; David H
4; Herry
```
The file consists of tabs with the delimiter.
I tried the following, [by referring](https://stackoverflow.com/questions/21407993/find-delimiter-in-txt-to-convert-to-csv-using-python) to a related question:
```
with open(filename, 'r') as f1:
dialect = csv.Sniffer().sniff(f1.read(1024), "\t")
print 'Delimiter:', dialect.delimiter
```
The result shows: `Delimiter:`
Expected result: `Delimiter: ;` | 2018/06/28 | [
"https://Stackoverflow.com/questions/51074335",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4268241/"
] | `sniff` can conclude with only one single character as the delimiter. Since your CSV file contains two characters as the delimiter, `sniff` will simply pick one of them. But since you also pass in the optional second argument to `sniff`, it will only pick what's contained in that value as a possible delimiter, which in your case, is `'\t'` (which is not visible from your `print` output).
From [`sniff`'s documentation](https://docs.python.org/2/library/csv.html#csv.Sniffer.sniff):
>
> If the optional *delimiters* parameter is given, it is interpreted as a
> string containing possible valid delimiter characters.
>
>
> | Sniffing is not guaranteed to work.
Here is one approach that will work with any kind of delimiter.
You start with what you assume is the most common delimiter `;` if that fails, then you try others until you manage to parse the row.
```
import csv

with open('sample.csv') as f:
    reader = csv.reader(f, delimiter=';')
    for row in reader:
        try:
            a, b = row
        except ValueError:
            try:
                a, b = row[0].split(None, 1)
            except ValueError:
                a, b = row[0].split('\t', 1)
        print('{} - {}'.format(a.strip(), b.strip()))
```
You can play around with this at [this repl.it link](https://repl.it/repls/KindQuizzicalMultiprocessing); play with the `sample.csv` file if you want to try out different delimiters.
You can combine sniffing with this to catch any odd delimiters that are not known to you. |
40,523,328 | I have code below for a simple test of `sympy.solve`:
```
#!/usr/bin/python
from sympy import *
x = Symbol('x', real=True)
#expr = sympify('exp(1 - 10*x) - 15')
expr = exp(1 - x) - 15
print "Expressiong:", expr
out = solve(expr)
for item in out:
print "Answer:", item
expr = exp(1 - 10*x) - 15
print expr
out = solve(expr)
for item in out:
print "Answer:", item
```
output is as follows:
```
Expressiong: exp(-x + 1) - 15
Answer: -log(15) + 1
exp(-10*x + 1) - 15
Answer: log(15**(9/10)*exp(1/10)/15)
```
The equation `exp(1 - x) = 15` is solved correctly (`x = -15log(15) + 1`).
But when I change `x` to `10*x`, the result is weird.
1. Why would there be a lot of complex answers if I initialize the symbol `x` without `real=True`?
2. Even with `real=True` when initializing the symbol `x`, the answer still is not correct. Comparing to the first equation, the result should be `-3/2*log(15) + 1/10`. Did I write the equation wrong?
Thanks in advance. | 2016/11/10 | [
"https://Stackoverflow.com/questions/40523328",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2298014/"
] | I can also confirm that the `solve` output for the equation `exp(1 - 10*x) - 15 == 0` appears unnecessarily complicated. For univariate equations I would suggest first considering `sympy.solveset`. For this example, it gives the following nicely formatted solutions.
```
import sympy as sp
sp.init_printing(pretty_print=True)
x = sp.symbols('x')
sp.solveset(sp.exp(1 - 10*x) - 15,x)
```
[](https://i.stack.imgur.com/z2Tbt.png)
Note that there are complex roots due to the exponential function being multi-valued (in complex domain). If you want to restrict the domain of the solution to reals, `solveset` has the convenient option `domain` for this purpose.
```
sp.solveset(sp.exp(1 - 10*x) - 15,x, domain = sp.S.Reals)
```
[](https://i.stack.imgur.com/aeyZS.png) | `solve` gives real and complex roots if symbols allow. An equation like `exp(2*x)-4` can be thought of as `y**2 - 4` with `y = exp(x)`, and `y` (thus `x`) will have two solutions. There are 10 solutions if the 2 is replaced with 10. (But there are actually many more solutions besides, as `solveset` indicates.)
You based your expectation of the 2nd case on a misstatement of the solution for the first case which was actually `-log(15) + 1`; the second case correctly gives 1/10th that value. |
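A quick numeric sanity check with the standard `math` module backs this up: sympy's answer `log(15**(9/10)*exp(1/10)/15)` simplifies to `(1 - log(15))/10`, exactly one tenth of the first root.

```python
import math

x1 = 1 - math.log(15)          # root of exp(1 - x) - 15
x2 = (1 - math.log(15)) / 10   # root of exp(1 - 10*x) - 15

# Both substitute back to 15, and x2 is x1 / 10.
print(math.exp(1 - x1), math.exp(1 - 10 * x2))
```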
53,289,402 | I have a windows setup file (.exe), which is used to install a software. This is a third party executable. During installation, it expects certain values and has a UI.
I want to run this setup .exe silently without any manual intervention (even for providing the parameter values).
After spending some time googling about the approach, I feel powershell should be able to help me with my requirements.
Can anyone suggest whether PowerShell would be the right tool for this, or does a better tool exist for this requirement?
Can python be used to implement this requirement?
Please note: Since this is a third party executable, I don't have the names of the parameters for which values must be provided to the UI during installation
Thank you | 2018/11/13 | [
"https://Stackoverflow.com/questions/53289402",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1773169/"
] | ***Deployment***: Note that it is not always possible to run a setup.exe silently with full control of parameters and with reliable silent running. It depends on how the installer was designed. In these cases I normally resort to repackaging - some more detail below on this.
Some general tips for dealing with deployment:
1. ***Software Library Tip***: Maybe try to look up the software to see if others have dealt with it for silent installation and deployment: **<https://www.itninja.com/software>**
2. ***Extract Files***: Is this an embedded MSI (Windows Installer) file or a legacy style `setup.exe`? Maybe try to extract the files first: **[Programmatically extract contents of InstallShield setup.exe](https://stackoverflow.com/questions/8681252/programmatically-extract-contents-of-installshield-setup-exe/8694205#8694205)** (Installshield setup.exe files). More elaborate details:
* [How to run an installation in /silent mode with adjusted settings](https://stackoverflow.com/questions/52327442/how-to-run-an-installation-in-silent-mode-with-adjusted-settings/52338626#52338626) (extraction of non-Installshield setup.exe files - for example Advanced Installer or WiX setup.exe files)
* [Extract MSI from EXE](https://stackoverflow.com/questions/1547809/extract-msi-from-exe/24987512#24987512)
3. ***Setup.exe***: Just adding for completeness. You can try **`setup.exe /?`** or **`setup.exe /help`** or similar at the command line to check for embedded help in the exe.
---
***MSI Transforms***: If you discover and embedded MSI file in the setup.exe, then you can customize the installation parameters in a standardized way. details here: **[How to make better use of MSI files](https://stackoverflow.com/questions/458857/how-to-make-better-use-of-msi-files/1055861#1055861)**. Light weight customization is by **command line**, heavy weight customization via **transforms**.
***Legacy Setup.exe***: Legacy **`setup.exe`** are often created with [Inno Setup](http://www.jrsoftware.org/isinfo.htm), [NSIS](http://nsis.sourceforge.net/), or a [few other non-MSI setup authoring tools](http://www.installsite.org/pages/en/tt_nonmsi.htm). Each with their own quirks for command line. Here is an old source for some samples: <http://unattended.sourceforge.net/installers.php>.
***Repackaging***: Corporate users often repackage such legacy setup.exe files and turn them into **MSI** or **App-V** packages (or the brand new **MSIX** format). On the topic of repackaging and also a piece on PowerShell and the availability of [Windows Installer PowerShell Modules](https://github.com/heaths/psmsi): **[How can I use powershell to run through an installer?](https://stackoverflow.com/questions/46221983/how-can-i-use-powershell-to-run-through-an-installer/46224987#46224987)**.
---
***Some Further Links***:
* [How to create windows installer](https://stackoverflow.com/questions/49624070/how-to-create-windows-installer/49632260#49632260)
* [System Administrator deployment tools, including repackaging tools](http://www.installsite.org/pages/en/msi/admins.htm) | You can also try creating a shortcut to the exe and adding (one at a time) common help parameters in the shortcut target and see if one gives you a help dialog. Some common parameters are
* `/?`
* `/help`
* `-help`
* `--help`
This also depends on the developer implementing a help parameter, but most installer builders implement one by default, so more often than not you will get something. Also, try an internet search for "SOFTWARE NAME silent install". Quite often the developer has some documentation on their web site. But, if it's a really small developer or freeware or the like, you may not find much. |
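If you want to script that probe, here is a small sketch (the installer path is hypothetical, and the flag list is just the common parameters above; flags that crash or hang are skipped):

```python
import subprocess

COMMON_HELP_FLAGS = ["/?", "/help", "-help", "--help"]

def probe_help(installer):
    """Return the first help flag the installer accepts (exit code 0), else None."""
    for flag in COMMON_HELP_FLAGS:
        try:
            result = subprocess.run([installer, flag], timeout=30,
                                    capture_output=True)
        except (OSError, subprocess.TimeoutExpired):
            continue  # missing binary or a flag that hangs: try the next one
        if result.returncode == 0:
            return flag
    return None

print(probe_help(r"C:\Downloads\setup.exe"))  # hypothetical path
```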
62,515,497 | I have a directory with quite some files. I have `n` search patterns and would like to list all files that match `m` of those.
Example: From the files below, list the ones that contain at least *two* of `str1`, `str2`, `str3` and `str4`.
```sh
$ ls -l dir/
total 16
-rw-r--r--. 1 me me 10 Jun 22 14:22 a
-rw-r--r--. 1 me me 5 Jun 22 14:22 b
-rw-r--r--. 1 me me 10 Jun 22 14:22 c
-rw-r--r--. 1 me me 9 Jun 22 14:22 d
-rw-r--r--. 1 me me 10 Jun 22 14:22 e
$ cat dir/a
str1
str2
$ cat dir/b
str2
$ cat dir/c
str2
str3
$ cat dir/d
str
str4
$ cat dir/e
str2
str4
```
I managed to achieve this with a rather ugly `for` loop on `find` results that spawns `n` `grep` processes for each file, which obviously is super inefficient and would take ages on directories with a lot of files:
```sh
for f in $(find dir/ -type f); do
c=0
grep -qs 'str1' $f && let c++
grep -qs 'str2' $f && let c++
grep -qs 'str3' $f && let c++
grep -qs 'str4' $f && let c++
[[ $c -ge 2 ]] && echo $f
done
```
I am quite sure I could achieve this in a far better performing way, but I am not sure how to tackle it. From what I understand from the man page (i.e. on `-e` and `-m`) this is not possible with `grep` alone.
What would be the right tool to use? Is this possible with `awk`?
Bonus: By using `find` I can define the files to search more precisely (i.e. `-prune` certain sub directories or only search files with `-iname '*.txt'`), which I would like to do with other solutions, too.
---
UPDATE
======
Some statistics about the performance of different implementations below.
---
### `find` + `awk`
(Script from [this](https://stackoverflow.com/a/62516128/2656118) answer)
```
real 0m0,006s
user 0m0,002s
sys 0m0,004s
```
---
### `python`
(I'm a `python` noob, please advise if this could be optimized):
```py
import os
patterns = []
patterns = ["str1", "str2", "str3", "str4"]
for root, dirs, files in os.walk("dir"):
for file in files:
c = int(0)
filepath = os.path.join(root, file)
with open(filepath, 'r') as input:
for pattern in patterns:
for line in input:
if pattern in line:
c += 1
break
if ( c >= 2 ):
print(filepath)
```
```
real 0m0,025s
user 0m0,019s
sys 0m0,006s
```
---
### `c++`
(Script from [this](https://stackoverflow.com/a/62519890/2656118) answer)
```
real 0m0,002s
user 0m0,001s
sys 0m0,001s
``` | 2020/06/22 | [
"https://Stackoverflow.com/questions/62515497",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2656118/"
] | ```
$ cat reg.txt
str1
str2
str3
str4
```
```
$ cat prog.awk
# reads regexps from the first input file
# parameterized by `m'
# requires gawk or mawk for `nextfile'
FNR == NR {
reg[NR] = $0
next
}
FNR == 1 {
for (i in reg)
tst[i]
cnt = 0
}
{
for (i in tst) {
if ($0 ~ reg[i]) {
if (++cnt == m) {
print FILENAME
nextfile
}
delete tst[i]
}
}
}
```
```
$ find dir -type f -exec awk -v m=2 -f prog.awk reg.txt {} +
dir/a
dir/c
``` | Here's an option using `awk` since you tagged it with that too:
```
find dir -type f -exec \
awk '/str1|str2|str3|str4/{c++} END{if(c>=2) print FILENAME;}' {} \;
```
It will however count duplicates, so a file containing
```
str1
str1
```
will be listed. |
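A sketch of the same m-of-n search in Python avoids that duplicate-counting problem by tracking *distinct* patterns per file (plain substring matching, as in the question's Python attempt):

```python
import os

def files_matching(root, patterns, m=2):
    """Yield files under root that contain at least m distinct patterns."""
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            matched = set()
            with open(path, errors="ignore") as fh:
                for line in fh:
                    # A set means each pattern counts at most once per file.
                    matched.update(p for p in patterns if p in line)
                    if len(matched) >= m:
                        yield path
                        break
```

Because `matched` is a set, a file containing `str1` twice still needs a second distinct pattern before it is listed.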
62,515,497 | I have a directory with quite some files. I have `n` search patterns and would like to list all files that match `m` of those.
Example: From the files below, list the ones that contain at least *two* of `str1`, `str2`, `str3` and `str4`.
```sh
$ ls -l dir/
total 16
-rw-r--r--. 1 me me 10 Jun 22 14:22 a
-rw-r--r--. 1 me me 5 Jun 22 14:22 b
-rw-r--r--. 1 me me 10 Jun 22 14:22 c
-rw-r--r--. 1 me me 9 Jun 22 14:22 d
-rw-r--r--. 1 me me 10 Jun 22 14:22 e
$ cat dir/a
str1
str2
$ cat dir/b
str2
$ cat dir/c
str2
str3
$ cat dir/d
str
str4
$ cat dir/e
str2
str4
```
I managed to achieve this with a rather ugly `for` loop on `find` results that spawns `n` `grep` processes for each file, which obviously is super inefficient and would take ages on directories with a lot of files:
```sh
for f in $(find dir/ -type f); do
c=0
grep -qs 'str1' $f && let c++
grep -qs 'str2' $f && let c++
grep -qs 'str3' $f && let c++
grep -qs 'str4' $f && let c++
[[ $c -ge 2 ]] && echo $f
done
```
I am quite sure I could achieve this in a far better performing way, but I am not sure how to tackle it. From what I understand from the man page (i.e. on `-e` and `-m`) this is not possible with `grep` alone.
What would be the right tool to use? Is this possible with `awk`?
Bonus: By using `find` I can define the files to search more precisely (i.e. `-prune` certain sub directories or only search files with `-iname '*.txt'`), which I would like to do with other solutions, too.
---
UPDATE
======
Some statistics about the performance of different implementations below.
---
### `find` + `awk`
(Script from [this](https://stackoverflow.com/a/62516128/2656118) answer)
```
real 0m0,006s
user 0m0,002s
sys 0m0,004s
```
---
### `python`
(I'm a `python` noob, please advise if this could be optimized):
```py
import os
patterns = []
patterns = ["str1", "str2", "str3", "str4"]
for root, dirs, files in os.walk("dir"):
for file in files:
c = int(0)
filepath = os.path.join(root, file)
with open(filepath, 'r') as input:
for pattern in patterns:
for line in input:
if pattern in line:
c += 1
break
if ( c >= 2 ):
print(filepath)
```
```
real 0m0,025s
user 0m0,019s
sys 0m0,006s
```
---
### `c++`
(Script from [this](https://stackoverflow.com/a/62519890/2656118) answer)
```
real 0m0,002s
user 0m0,001s
sys 0m0,001s
``` | 2020/06/22 | [
"https://Stackoverflow.com/questions/62515497",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2656118/"
] | Since the programming language doesn't matter as much as the performance, here's a version in C++. I haven't compared it with `awk` myself though.
```
#include <cstddef>
#include <filesystem>
#include <fstream>
#include <iostream>
#include <string>
#include <string_view>
#include <utility>
#include <vector>
namespace fs = std::filesystem;
int main() {
const fs::path dir = "dir";
std::vector<std::string_view> strs{ // or: std::array<std::string_view, 4>
"str1",
"str2",
"str3",
"str4",
};
std::string line;
int count; // matches in a file
size_t strsco; // number of strings to check in strs
// a lambda to find a match on a line
auto matcher = [&](const fs::directory_entry& de) {
for(size_t idx = 0; idx < strsco; ++idx) {
if(line.find(strs[idx]) != std::string::npos) {
// a match was found
if(++count >= 2) {
std::cout << de.path() << '\n';
// or the below if the quotation marks surrounding the path are
// unwanted:
// std::cout << de.path().native() << '\n';
return false;
}
// swap the found string_view with the last in the vector
// to remove it from future matches in this file.
--strsco;
std::swap(strs[idx], strs[strsco]);
}
}
return true;
};
// do a "find dir -type f"
for(const fs::directory_entry& de : fs::recursive_directory_iterator(dir)) {
if(de.is_regular_file()) { // -type f
// open the found file
if(std::ifstream file(de.path()); file) {
// reset counters
count = 0;
strsco = strs.size();
        // read line by line until the file stream is depleted or matcher()
// returns false
while(std::getline(file, line) && matcher(de));
}
}
}
}
```
Save it to `prog.cpp` and compile like this (if you have `g++`):
```
g++ -std=c++17 -O3 -o prog prog.cpp
```
If you use another compiler, be sure to turn on optimization for speed and that it requires C++17. | ```
$ cat reg.txt
str1
str2
str3
str4
```
```
$ cat prog.awk
# reads regexps from the first input file
# parameterized by `m'
# requires gawk or mawk for `nextfile'
FNR == NR {
reg[NR] = $0
next
}
FNR == 1 {
for (i in reg)
tst[i]
cnt = 0
}
{
for (i in tst) {
if ($0 ~ reg[i]) {
if (++cnt == m) {
print FILENAME
nextfile
}
delete tst[i]
}
}
}
```
```
$ find dir -type f -exec awk -v m=2 -f prog.awk reg.txt {} +
dir/a
dir/c
``` |
62,515,497 | I have a directory with quite some files. I have `n` search patterns and would like to list all files that match `m` of those.
Example: From the files below, list the ones that contain at least *two* of `str1`, `str2`, `str3` and `str4`.
```sh
$ ls -l dir/
total 16
-rw-r--r--. 1 me me 10 Jun 22 14:22 a
-rw-r--r--. 1 me me 5 Jun 22 14:22 b
-rw-r--r--. 1 me me 10 Jun 22 14:22 c
-rw-r--r--. 1 me me 9 Jun 22 14:22 d
-rw-r--r--. 1 me me 10 Jun 22 14:22 e
$ cat dir/a
str1
str2
$ cat dir/b
str2
$ cat dir/c
str2
str3
$ cat dir/d
str
str4
$ cat dir/e
str2
str4
```
I managed to achieve this with a rather ugly `for` loop on `find` results that spawns `n` `grep` processes for each file, which obviously is super inefficient and would take ages on directories with a lot of files:
```sh
for f in $(find dir/ -type f); do
c=0
grep -qs 'str1' $f && let c++
grep -qs 'str2' $f && let c++
grep -qs 'str3' $f && let c++
grep -qs 'str4' $f && let c++
[[ $c -ge 2 ]] && echo $f
done
```
I am quite sure I could achieve this in a far better performing way, but I am not sure how to tackle it. From what I understand from the man page (i.e. on `-e` and `-m`) this is not possible with `grep` alone.
What would be the right tool to use? Is this possible with `awk`?
Bonus: By using `find` I can define the files to search more precisely (i.e. `-prune` certain sub directories or only search files with `-iname '*.txt'`), which I would like to do with other solutions, too.
---
UPDATE
======
Some statistics about the performance of different implementations below.
---
### `find` + `awk`
(Script from [this](https://stackoverflow.com/a/62516128/2656118) answer)
```
real 0m0,006s
user 0m0,002s
sys 0m0,004s
```
---
### `python`
(I'm a `python` noob, please advise if this could be optimized):
```py
import os
patterns = []
patterns = ["str1", "str2", "str3", "str4"]
for root, dirs, files in os.walk("dir"):
for file in files:
c = int(0)
filepath = os.path.join(root, file)
with open(filepath, 'r') as input:
for pattern in patterns:
for line in input:
if pattern in line:
c += 1
break
if ( c >= 2 ):
print(filepath)
```
```
real 0m0,025s
user 0m0,019s
sys 0m0,006s
```
---
### `c++`
(Script from [this](https://stackoverflow.com/a/62519890/2656118) answer)
```
real 0m0,002s
user 0m0,001s
sys 0m0,001s
``` | 2020/06/22 | [
"https://Stackoverflow.com/questions/62515497",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2656118/"
] | Since the programming language doesn't matter as much as the performance, here's a version in C++. I haven't compared it with `awk` myself though.
```
#include <cstddef>
#include <filesystem>
#include <fstream>
#include <iostream>
#include <string>
#include <string_view>
#include <utility>
#include <vector>
namespace fs = std::filesystem;
int main() {
const fs::path dir = "dir";
std::vector<std::string_view> strs{ // or: std::array<std::string_view, 4>
"str1",
"str2",
"str3",
"str4",
};
std::string line;
int count; // matches in a file
size_t strsco; // number of strings to check in strs
// a lambda to find a match on a line
auto matcher = [&](const fs::directory_entry& de) {
for(size_t idx = 0; idx < strsco; ++idx) {
if(line.find(strs[idx]) != std::string::npos) {
// a match was found
if(++count >= 2) {
std::cout << de.path() << '\n';
// or the below if the quotation marks surrounding the path are
// unwanted:
// std::cout << de.path().native() << '\n';
return false;
}
// swap the found string_view with the last in the vector
// to remove it from future matches in this file.
--strsco;
std::swap(strs[idx], strs[strsco]);
}
}
return true;
};
// do a "find dir -type f"
for(const fs::directory_entry& de : fs::recursive_directory_iterator(dir)) {
if(de.is_regular_file()) { // -type f
// open the found file
if(std::ifstream file(de.path()); file) {
// reset counters
count = 0;
strsco = strs.size();
        // read line by line until the file stream is depleted or matcher()
// returns false
while(std::getline(file, line) && matcher(de));
}
}
}
}
```
Save it to `prog.cpp` and compile like this (if you have `g++`):
```
g++ -std=c++17 -O3 -o prog prog.cpp
```
If you use another compiler, be sure to turn on optimization for speed and that it requires C++17. | Here's an option using `awk` since you tagged it with that too:
```
find dir -type f -exec \
awk '/str1|str2|str3|str4/{c++} END{if(c>=2) print FILENAME;}' {} \;
```
It will however count duplicates, so a file containing
```
str1
str1
```
will be listed. |
44,282,257 | I am new to Python.
I have a scrapy project. I am using conda virtual environment where I have written a pipeline class like this:
```
from cassandra.cqlengine import connection
from cassandra.cqlengine.management import sync_table, create_keyspace_network_topology
from recentnews.cassandra.model.NewsPaperDataModel import NewspaperDataModel
from recentnews.common.Constants import DEFAULT_KEYSPACE
class RecentNewsPipeline(object):
def __init__(self):
connection.setup(["192.168.99.100"], DEFAULT_KEYSPACE, protocol_version=3, port=9042)
create_keyspace_network_topology(DEFAULT_KEYSPACE, {'DC1': 2})
sync_table(NewspaperDataModel)
def process_item(self, item, spider):
NewspaperDataModel.create(
title=item.title,
url=item.url,
domain=item.domain
)
return item
```
When I run the scrapy crawler like `scrapy crawl author`, it gives me this error:
```
(news) (C:\Miniconda2\envs\news) E:\Shoshi\Python Projects\recentnews-scrapy\recentnews>scrapy crawl author
2017-05-31 15:56:29 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: recentnews)
2017-05-31 15:56:29 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'recentnews.spiders', 'SPIDER_MODULES': ['recentnews.spiders'], 'ROBOTSTXT_OBEY': True, 'BOT_NAME': 'recentnews'}
2017-05-31 15:56:29 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.corestats.CoreStats']
2017-05-31 15:56:30 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-05-31 15:56:30 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
Unhandled error in Deferred:
2017-05-31 15:56:30 [twisted] CRITICAL: Unhandled error in Deferred:
2017-05-31 15:56:30 [twisted] CRITICAL:
Traceback (most recent call last):
File "C:\Miniconda2\envs\news\lib\site-packages\twisted\internet\defer.py", line 1301, in _inlineCallbacks
result = g.send(result)
File "C:\Miniconda2\envs\news\lib\site-packages\scrapy\crawler.py", line 95, in crawl
six.reraise(*exc_info)
File "C:\Miniconda2\envs\news\lib\site-packages\scrapy\crawler.py", line 77, in crawl
self.engine = self._create_engine()
File "C:\Miniconda2\envs\news\lib\site-packages\scrapy\crawler.py", line 102, in _create_engine
return ExecutionEngine(self, lambda _: self.stop())
File "C:\Miniconda2\envs\news\lib\site-packages\scrapy\core\engine.py", line 70, in __init__
self.scraper = Scraper(crawler)
File "C:\Miniconda2\envs\news\lib\site-packages\scrapy\core\scraper.py", line 71, in __init__
self.itemproc = itemproc_cls.from_crawler(crawler)
File "C:\Miniconda2\envs\news\lib\site-packages\scrapy\middleware.py", line 58, in from_crawler
return cls.from_settings(crawler.settings, crawler)
File "C:\Miniconda2\envs\news\lib\site-packages\scrapy\middleware.py", line 34, in from_settings
mwcls = load_object(clspath)
File "C:\Miniconda2\envs\news\lib\site-packages\scrapy\utils\misc.py", line 44, in load_object
mod = import_module(module)
File "C:\Miniconda2\envs\news\lib\importlib\__init__.py", line 37, in import_module
__import__(name)
File "E:\Shoshi\Python Projects\recentnews-scrapy\recentnews\recentnews\pipelines.py", line 7, in <module>
from cassandra.cqlengine import connection
ImportError: No module named cqlengine
```
I am using conda virtual environment.
But, when I run this code from python command line it works fine. no error:
```
(news) (C:\Miniconda2\envs\news) E:\Shoshi\Python Projects\recentnews-scrapy\recentnews>python
Python 2.7.13 |Continuum Analytics, Inc.| (default, May 11 2017, 13:17:26) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> from cassandra.cqlengine import connection
>>> from cassandra.cqlengine.management import sync_table, create_keyspace_network_topology
>>> from recentnews.cassandra.model.NewsPaperDataModel import NewspaperDataModel
>>> from recentnews.common.Constants import DEFAULT_KEYSPACE
>>> connection.setup(["192.168.99.100"], DEFAULT_KEYSPACE, protocol_version=3, port=9042)
>>> create_keyspace_network_topology(DEFAULT_KEYSPACE, {'DC1': 2})
C:\Miniconda2\envs\news\lib\site-packages\cassandra\cqlengine\management.py:545: UserWarning: CQLENG_ALLOW_SCHEMA_MANAGEMENT environment variable is not set. Future versions of this package will require this variable to enable management functions.
warnings.warn(msg)
>>> sync_table(NewspaperDataModel)
......
```
You can see that `from cassandra.cqlengine import connection` is imported perfectly.
What am I missing? Why not this code is working when I run this using `scrapy crawl author`? | 2017/05/31 | [
"https://Stackoverflow.com/questions/44282257",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1427144/"
] | So it appeared that [there was a folder named `recentnews/cassandra/`](https://stackoverflow.com/questions/44282257/importerror-no-module-named-cqlengine-but-worked-on-python-command?noredirect=1#comment75573040_44282257) in the OP's scrapy project (namespace `recentnews.cassandra`).
When scrapy imports the item pipeline class `recentnews.pipelines.RecentNewsPipeline`, `importlib`'s handling of `from cassandra.cqlengine import connection` (at the beginning of `recentnews/pipeline.py`) found the local `recentnews.cassandra` module before the virtualenv-installed `cassandra` package.
One way to check which module is being imported is to add `import cassandra; print(cassandra.__file__)` before the `import` statement that fails. | When you create a virtual environment, by default the user-installed packages are not copied. You would therefore have to run `pip install cassandra` (or whatever the package is called) in your virtual environment. That will probably fix this problem. |
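The shadowing check described above can be generalized with `importlib.util.find_spec`, which reports where a module *would* be loaded from without executing it (a sketch; the module names are just examples):

```python
import importlib.util

def locate(modname):
    """Return the file a module would be loaded from, or None if not found."""
    spec = importlib.util.find_spec(modname)
    return getattr(spec, "origin", None)

# If this prints a path inside your project rather than site-packages,
# a local package is shadowing the installed one.
print(locate("csv"))
```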
50,877,817 | The first column corresponds to a single process and the second column are the components that go into the process. I want to have a loop that can examine all the processes and evaluate what other processes have the same individual components. Ultimately, I want a loop to find what processes have 50% or more of their components match 50% or more of another process.
For example, process 1 has 4 of its 7 components in common with process 2, so more than 50% of their components pair and I would want a function to identify this process pairing. The same goes for processes 2 and 3.
```
Process Comp.
1 511
1 233
1 712
1 606
1 4223
1 123
1 456
2 511
2 233
2 606
2 4223
2 222
2 309
2 708
3 309
3 412
3 299
3 511
3 712
3 222
3 708
```
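The question asks for Excel, but the pairing logic itself is compact; here is a reference sketch in Python (data hard-coded from the table above) that an Excel/VBA port could mirror:

```python
from collections import defaultdict
from itertools import combinations

# (process, component) rows from the table above
rows = [(1, 511), (1, 233), (1, 712), (1, 606), (1, 4223), (1, 123), (1, 456),
        (2, 511), (2, 233), (2, 606), (2, 4223), (2, 222), (2, 309), (2, 708),
        (3, 309), (3, 412), (3, 299), (3, 511), (3, 712), (3, 222), (3, 708)]

comps = defaultdict(set)
for proc, comp in rows:
    comps[proc].add(comp)

def paired(a, b, threshold=0.5):
    """True when the shared components cover >= threshold of both processes."""
    shared = len(comps[a] & comps[b])
    return (shared >= threshold * len(comps[a])
            and shared >= threshold * len(comps[b]))

pairs = [(a, b) for a, b in combinations(sorted(comps), 2) if paired(a, b)]
print(pairs)
```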
I feel like I could use a network library for this in python or maybe run it in matlab with an iterative fucntion, but I need to do it in excel, and I am new to coding in excel so any help would be appreciated! | 2018/06/15 | [
"https://Stackoverflow.com/questions/50877817",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9946677/"
] | Here is what I ended up doing:
```
// Get the tile's cartesian center.
var cartesian = new Cesium.Cartesian3(1525116.05769, -4463608.36127, 4278734.88048);
// Get the tile's cartographic center.
var cartographic = Cesium.Cartographic.fromCartesian(cartesian);
// Rotate the model.
model.rotation.x = -cartographic.latitude;
model.rotation.z = cartographic.longitude + Math.PI / 2;
``` | Just convert "gltfUpAxis" to "Z" would work fine. Or you can try "Y" too.
```
"asset": {
"gltfUpAxis": "Z",
"version": "1.0"
},
``` |
67,022,905 | I have a simple text file that has groups of key:value pairs with a blank row between each group of key:values. The number of key:value pairs can vary from group to group. Sample data and my code so far.
```
key1: value1
key2: value2
key3: value3
key1: value4
key2: value5
key3: value6
```
The code is close to what I am looking for, but it is missing one part: when it gets to a blank line, it needs to close out the current JSON object and start a new one for the next group.
```
#!/usr/bin/python
import json
f = open("sample.txt", "r")
content = f.read()
splitcontent = content.splitlines()
d = []
for v in splitcontent:
l = v.split('\n')
print(l)
if l == ['']:
continue
d.append(dict(s.split(': ',1) for s in l))
with open("dump.json", 'w') as file:
file.write((json.dumps(d, indent=4, sort_keys= False)))
```
I tried using `l == ['']` to detect the end of a group; it does skip the blank line, but then just continues with the same dictionary, which is not what I need.
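One way to express the missing piece — a sketch that closes the current dict whenever a blank line is hit and starts a new one:

```python
import json

def parse_groups(text):
    """Turn blank-line-separated 'key: value' lines into a list of dicts."""
    groups, current = [], {}
    for line in text.splitlines():
        line = line.strip()
        if not line:            # blank line: close out the current group
            if current:
                groups.append(current)
                current = {}
            continue
        key, _, value = line.partition(': ')
        current[key] = value
    if current:                 # last group has no trailing blank line
        groups.append(current)
    return groups

sample = "key1: value1\nkey2: value2\n\nkey1: value4\nkey2: value5\n"
print(json.dumps(parse_groups(sample), indent=4))
```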
Thanks for the help and if you recognize the code above an extra thank you. | 2021/04/09 | [
"https://Stackoverflow.com/questions/67022905",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3123307/"
] | Using [`Array.from()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/from) and [`Array.reduce()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/Reduce), this could be done as follows:
```
Array.from(map.entries()).reduce((a, b) => a[1] < b[1] ? b : a)[0];
``` | Here is one approach
* Convert it to an array of key/value pairs
* Sort the array by the value
* Extract the second item of the first pair
Like so
```
let map: Map<string, number> = new Map();
map.set("a", 12);
map.set("b", 124);
map.set("c", 14);
map.set("d", 155);
const key = Array.from(map).sort((a, b) => (a[1] > b[1] ? -1 : 1))[0][0];
console.log(key);
``` |
2,319,495 | I need to keep a large number of Windows XP machines running the same version of python, with an assortment of modules, one of which is python-win32. I thought about installing python on a network drive that is mounted by all the client machines, and just adjust the path on the clients. Python starts up fine from the network, but when importing win32com I get a pop-up error saying:
>
> The procedure entry point ?PyWinObject_AsHANDLE@@YAHPAU_object@@PAPAXH@Z could not be located in the dynamic link library pywintypes24.dll
>
>
>
after dismissing the message dialog I get in the console:
>
> ImportError: DLL load failed: The specified procedure could not be found.
>
>
>
I searched the python directory for the pywintypes24.dll and it is present in "Lib\site-packages\pywin32_system32".
What am I missing and is there another way in which I can install Python + Python-Win32 + additional module once and have them running on many machines? I don't have access to the Microsoft systems management tools, so I need to be a bit more low-tech than that. | 2010/02/23 | [
"https://Stackoverflow.com/questions/2319495",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18308/"
] | On every machine you basically have to run `pywin32_postinstall.py -install` once. Assuming your python installation on the network is `N:\Python26`, run the following command on every client:
```
N:\Python26\python.exe N:\Python26\Scripts\pywin32_postinstall.py -install
```
Another important thing is `Good Luck!`. The reason is that you might need to do this as `admin`. In my case such setup worked for all but one computer. I still did not figure out why. | You could use [batch files running at boot](http://isg.ee.ethz.ch/tools/realmen/index.en.html) to
* Mount the network share (`net use \\server\share`)
* Copy the Python and packages installers from the network share to a local folder
* Check version of the msi installer against the installed version
* If different, uninstall Python and all version dependent packages
* Reinstall all packages
This would be pretty much a roll your own central management system for that software. |
2,319,495 | I need to keep a large number of Windows XP machines running the same version of python, with an assortment of modules, one of which is python-win32. I thought about installing python on a network drive that is mounted by all the client machines, and just adjust the path on the clients. Python starts up fine from the network, but when importing win32com I get a pop-up error saying:
>
> The procedure entry point ?PyWinObject\_AsHANDLE@@YAHPAU\_object@@PAPAXH@Z could not be located in the dynamic link library pywintypes24.dll
>
>
>
after dismissing the message dialog I get in the console:
>
> ImportError: DLL load failed: The specified procedure could not be found.
>
>
>
I searched the python directory for the pywintypes24.dll and it is present in "Lib\site-packages\pywin32\_system32" .
What am I missing and is there another way in which I can install Python + Python-Win32 + additional module once and have them running on many machines? I don't have access to the Microsoft systems management tools, so I need to be a bit more low-tech than that. | 2010/02/23 | [
"https://Stackoverflow.com/questions/2319495",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18308/"
] | Python (or precisely, the OS) searches the DLLs using os.environ["PATH"] and not by searching sys.path.
So you could start Python using a simple .cmd file instead which adds `\\server\share\python26` to the path (given that the installer, or you, copied the DLLs from `\\server\share\python26\Lib\site-packages\pywin32_system32` to `\\server\share\python26`).
Or, you can add the following code to your scripts before they try to import win32api etc:
```
# Add Python installation directory to the path,
# because on Windows 7 the pywin32 installer fails to copy
# the required DLLs to the %WINDIR%\System32 directory and
# copies them to the Python installation directory instead.
# Fortunately, in Python it is possible to modify the PATH
# before loading the DLLs.
os.environ["PATH"] = sys.prefix + ";" + os.environ.get("PATH")
import win32gui
import win32con
``` | You could use [batch files running at boot](http://isg.ee.ethz.ch/tools/realmen/index.en.html) to
* Mount the network share (`net use \\server\share`)
* Copy the Python and packages installers from the network share to a local folder
* Check version of the msi installer against the installed version
* If different, uninstall Python and all version dependent packages
* Reinstall all packages
This would be pretty much a roll your own central management system for that software. |
2,319,495 | I need to keep a large number of Windows XP machines running the same version of python, with an assortment of modules, one of which is python-win32. I thought about installing python on a network drive that is mounted by all the client machines, and just adjust the path on the clients. Python starts up fine from the network, but when importing win32com I get a pop-up error saying:
>
> The procedure entry point ?PyWinObject\_AsHANDLE@@YAHPAU\_object@@PAPAXH@Z could not be located in the dynamic link library pywintypes24.dll
>
>
>
after dismissing the message dialog I get in the console:
>
> ImportError: DLL load failed: The specified procedure could not be found.
>
>
>
I searched the python directory for the pywintypes24.dll and it is present in "Lib\site-packages\pywin32\_system32" .
What am I missing and is there another way in which I can install Python + Python-Win32 + additional module once and have them running on many machines? I don't have access to the Microsoft systems management tools, so I need to be a bit more low-tech than that. | 2010/02/23 | [
"https://Stackoverflow.com/questions/2319495",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18308/"
] | On every machine you have to basically run following `pywin32_postinstall.py -install` once. Assuming your python installation on the network is `N:\Python26`, run following command on every client:
```
N:\Python26\python.exe N:\Python26\Scripts\pywin32_postinstall.py -install
```
Another important thing is `Good Luck!`. The reason is that you might need to do this as `admin`. In my case such setup worked for all but one computer. I still did not figure out why. | Python (or precisely, the OS) searches the DLLs using os.environ["PATH"] and not by searching sys.path.
So you could start Python using a simple .cmd file instead which adds `\\server\share\python26` to the path (given that the installer, or you, copied the DLLs from `\\server\share\python26\Lib\site-packages\pywin32_system32` to `\\server\share\python26`).
Or, you can add the following code to your scripts before they try to import win32api etc:
```
# Add Python installation directory to the path,
# because on Windows 7 the pywin32 installer fails to copy
# the required DLLs to the %WINDIR%\System32 directory and
# copies them to the Python installation directory instead.
# Fortunately, in Python it is possible to modify the PATH
# before loading the DLLs.
os.environ["PATH"] = sys.prefix + ";" + os.environ.get("PATH")
import win32gui
import win32con
``` |
9,776,351 | This question describes my conclusion after researching available options for creating a headless Chrome instance in Python and asks for confirmation or resources that describe a 'better way'.
From what I've seen it seems that the quickest way to get started with a headless instance of Chrome in a Python application is to use CEF (<http://code.google.com/p/chromiumembedded/>) with CEFPython (<http://code.google.com/p/cefpython/>). CEFPython seems premature though, so using it would likely mean further customization before I'm able to load a headless Chrome instance that loads web pages (and required files), resolves a completed DOM and then lets me run arbitrary JS against it from Python.
Have I missed any other projects that are more mature or would make this easier for me? | 2012/03/19 | [
"https://Stackoverflow.com/questions/9776351",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/193601/"
] | Any reason you haven't considered Selenium with the Chrome Driver?
<http://code.google.com/p/selenium/wiki/ChromeDriver>
<http://code.google.com/p/selenium/wiki/PythonBindings> | [casperjs](http://casperjs.org/) is a headless webkit, but it wouldn't give you python bindings that I know of; it seems command-line oriented, but that doesn't mean you couldn't run it from python in such a way that satisfies what you are after. When you run casperjs, you provide a path to the javascript you want to execute; so you would need to emit that from Python.
But all that aside, I bring up casperjs because it seems to satisfy the lightweight, headless requirement very nicely. |
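As the answer notes, you would emit the script from Python and point the `casperjs` binary at it. Below is a minimal sketch of that hand-off; the binary name, the `shutil.which` guard, and the idea of writing the script to a temp file are assumptions, and no real casperjs script content is shown:

```python
import shutil
import subprocess
import tempfile

def build_casperjs_command(script_path, binary="casperjs"):
    # casperjs takes the path of the JS file to execute as its argument
    return [binary, script_path]

def run_casperjs(script_source):
    # write the JS emitted from Python to disk, then shell out to casperjs
    with tempfile.NamedTemporaryFile("w", suffix=".js", delete=False) as f:
        f.write(script_source)
        script_path = f.name
    cmd = build_casperjs_command(script_path)
    if shutil.which(cmd[0]) is None:
        return None  # casperjs is not on PATH, so skip the call
    return subprocess.run(cmd, capture_output=True, text=True).stdout

print(build_casperjs_command("scrape.js"))  # ['casperjs', 'scrape.js']
```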
9,776,351 | This question describes my conclusion after researching available options for creating a headless Chrome instance in Python and asks for confirmation or resources that describe a 'better way'.
From what I've seen it seems that the quickest way to get started with a headless instance of Chrome in a Python application is to use CEF (<http://code.google.com/p/chromiumembedded/>) with CEFPython (<http://code.google.com/p/cefpython/>). CEFPython seems premature though, so using it would likely mean further customization before I'm able to load a headless Chrome instance that loads web pages (and required files), resolves a completed DOM and then lets me run arbitrary JS against it from Python.
Have I missed any other projects that are more mature or would make this easier for me? | 2012/03/19 | [
"https://Stackoverflow.com/questions/9776351",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/193601/"
] | Any reason you haven't considered Selenium with the Chrome Driver?
<http://code.google.com/p/selenium/wiki/ChromeDriver>
<http://code.google.com/p/selenium/wiki/PythonBindings> | While I'm the author of [CasperJS](http://casperjs.org/), I invite you to check out [Ghost.py](http://jeanphix.me/Ghost.py/), *a webkit web client written in Python*.
While it's heavily inspired by CasperJS, it's not based on [PhantomJS](http://phantomjs.org/) — it still uses [PyQt](http://www.riverbankcomputing.co.uk/software/pyqt/intro) bindings and Webkit though. |
9,776,351 | This question describes my conclusion after researching available options for creating a headless Chrome instance in Python and asks for confirmation or resources that describe a 'better way'.
From what I've seen it seems that the quickest way to get started with a headless instance of Chrome in a Python application is to use CEF (<http://code.google.com/p/chromiumembedded/>) with CEFPython (<http://code.google.com/p/cefpython/>). CEFPython seems premature though, so using it would likely mean further customization before I'm able to load a headless Chrome instance that loads web pages (and required files), resolves a completed DOM and then lets me run arbitrary JS against it from Python.
Have I missed any other projects that are more mature or would make this easier for me? | 2012/03/19 | [
"https://Stackoverflow.com/questions/9776351",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/193601/"
] | Any reason you haven't considered Selenium with the Chrome Driver?
<http://code.google.com/p/selenium/wiki/ChromeDriver>
<http://code.google.com/p/selenium/wiki/PythonBindings> | I use this to get the driver:
```
def get_browser(storage_dir, headless=False):
"""
Get the browser (a "driver").
Parameters
----------
storage_dir : str
headless : bool
Results
-------
browser : selenium webdriver object
"""
# find the path with 'which chromedriver'
path_to_chromedriver = '/usr/local/bin/chromedriver'
from selenium.webdriver.chrome.options import Options
chrome_options = Options()
if headless:
chrome_options.add_argument("--headless")
chrome_options.add_experimental_option('prefs', {
"plugins.plugins_list": [{"enabled": False,
"name": "Chrome PDF Viewer"}],
"download": {
"prompt_for_download": False,
"default_directory": storage_dir,
"directory_upgrade": False,
"open_pdf_in_system_reader": False
}
})
browser = webdriver.Chrome(path_to_chromedriver,
chrome_options=chrome_options)
return browser
```
By switching the `headless` parameter you can either watch it or not. |
9,776,351 | This question describes my conclusion after researching available options for creating a headless Chrome instance in Python and asks for confirmation or resources that describe a 'better way'.
From what I've seen it seems that the quickest way to get started with a headless instance of Chrome in a Python application is to use CEF (<http://code.google.com/p/chromiumembedded/>) with CEFPython (<http://code.google.com/p/cefpython/>). CEFPython seems premature though, so using it would likely mean further customization before I'm able to load a headless Chrome instance that loads web pages (and required files), resolves a completed DOM and then lets me run arbitrary JS against it from Python.
Have I missed any other projects that are more mature or would make this easier for me? | 2012/03/19 | [
"https://Stackoverflow.com/questions/9776351",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/193601/"
] | Any reason you haven't considered Selenium with the Chrome Driver?
<http://code.google.com/p/selenium/wiki/ChromeDriver>
<http://code.google.com/p/selenium/wiki/PythonBindings> | This question is 5 years old now and at the time it was a big challenge to run a headless chrome using python, but the good news is:
**Starting from version 59, released in June 2017, Chrome comes with a headless driver**, meaning we can use it in a non-graphical server environment and run tests without having pages visually rendered etc which saves a lot of time and memory for testing or scraping. Setting Selenium for that is very easy:
(I assume that you have installed selenium and chrome driver):
```
from selenium import webdriver
#set a headless browser
options = webdriver.ChromeOptions()
options.add_argument('headless')
browser = webdriver.Chrome(chrome_options=options)
```
and now your Chrome will run headlessly; if you take `options` out of the last line, it will show you the browser. |
9,776,351 | This question describes my conclusion after researching available options for creating a headless Chrome instance in Python and asks for confirmation or resources that describe a 'better way'.
From what I've seen it seems that the quickest way to get started with a headless instance of Chrome in a Python application is to use CEF (<http://code.google.com/p/chromiumembedded/>) with CEFPython (<http://code.google.com/p/cefpython/>). CEFPython seems premature though, so using it would likely mean further customization before I'm able to load a headless Chrome instance that loads web pages (and required files), resolves a completed DOM and then lets me run arbitrary JS against it from Python.
Have I missed any other projects that are more mature or would make this easier for me? | 2012/03/19 | [
"https://Stackoverflow.com/questions/9776351",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/193601/"
] | While I'm the author of [CasperJS](http://casperjs.org/), I invite you to check out [Ghost.py](http://jeanphix.me/Ghost.py/), *a webkit web client written in Python*.
While it's heavily inspired by CasperJS, it's not based on [PhantomJS](http://phantomjs.org/) — it still uses [PyQt](http://www.riverbankcomputing.co.uk/software/pyqt/intro) bindings and Webkit though. | [casperjs](http://casperjs.org/) is a headless webkit, but it wouldn't give you python bindings that I know of; it seems command-line oriented, but that doesn't mean you couldn't run it from python in such a way that satisfies what you are after. When you run casperjs, you provide a path to the javascript you want to execute; so you would need to emit that from Python.
But all that aside, I bring up casperjs because it seems to satisfy the lightweight, headless requirement very nicely. |
9,776,351 | This question describes my conclusion after researching available options for creating a headless Chrome instance in Python and asks for confirmation or resources that describe a 'better way'.
From what I've seen it seems that the quickest way to get started with a headless instance of Chrome in a Python application is to use CEF (<http://code.google.com/p/chromiumembedded/>) with CEFPython (<http://code.google.com/p/cefpython/>). CEFPython seems premature though, so using it would likely mean further customization before I'm able to load a headless Chrome instance that loads web pages (and required files), resolves a completed DOM and then lets me run arbitrary JS against it from Python.
Have I missed any other projects that are more mature or would make this easier for me? | 2012/03/19 | [
"https://Stackoverflow.com/questions/9776351",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/193601/"
] | I use this to get the driver:
```
def get_browser(storage_dir, headless=False):
"""
Get the browser (a "driver").
Parameters
----------
storage_dir : str
headless : bool
Results
-------
browser : selenium webdriver object
"""
# find the path with 'which chromedriver'
path_to_chromedriver = '/usr/local/bin/chromedriver'
from selenium.webdriver.chrome.options import Options
chrome_options = Options()
if headless:
chrome_options.add_argument("--headless")
chrome_options.add_experimental_option('prefs', {
"plugins.plugins_list": [{"enabled": False,
"name": "Chrome PDF Viewer"}],
"download": {
"prompt_for_download": False,
"default_directory": storage_dir,
"directory_upgrade": False,
"open_pdf_in_system_reader": False
}
})
browser = webdriver.Chrome(path_to_chromedriver,
chrome_options=chrome_options)
return browser
```
By switching the `headless` parameter you can either watch it or not. | [casperjs](http://casperjs.org/) is a headless webkit, but it wouldn't give you python bindings that I know of; it seems command-line oriented, but that doesn't mean you couldn't run it from python in such a way that satisfies what you are after. When you run casperjs, you provide a path to the javascript you want to execute; so you would need to emit that from Python.
But all that aside, I bring up casperjs because it seems to satisfy the lightweight, headless requirement very nicely. |
9,776,351 | This question describes my conclusion after researching available options for creating a headless Chrome instance in Python and asks for confirmation or resources that describe a 'better way'.
From what I've seen it seems that the quickest way to get started with a headless instance of Chrome in a Python application is to use CEF (<http://code.google.com/p/chromiumembedded/>) with CEFPython (<http://code.google.com/p/cefpython/>). CEFPython seems premature though, so using it would likely mean further customization before I'm able to load a headless Chrome instance that loads web pages (and required files), resolves a completed DOM and then lets me run arbitrary JS against it from Python.
Have I missed any other projects that are more mature or would make this easier for me? | 2012/03/19 | [
"https://Stackoverflow.com/questions/9776351",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/193601/"
] | This question is 5 years old now and at the time it was a big challenge to run a headless chrome using python, but the good news is:
**Starting from version 59, released in June 2017, Chrome comes with a headless driver**, meaning we can use it in a non-graphical server environment and run tests without having pages visually rendered etc which saves a lot of time and memory for testing or scraping. Setting Selenium for that is very easy:
(I assume that you have installed selenium and chrome driver):
```
from selenium import webdriver
#set a headless browser
options = webdriver.ChromeOptions()
options.add_argument('headless')
browser = webdriver.Chrome(chrome_options=options)
```
and now your chrome will run headlessly, if you take out options from the last line, it will show you the browser. | [casperjs](http://casperjs.org/) is a headless webkit, but it wouldn't give you python bindings that I know of; it seems command-line oriented, but that doesn't mean you couldn't run it from python in such a way that satisfies what you are after. When you run casperjs, you provide a path to the javascript you want to execute; so you would need to emit that from Python.
But all that aside, I bring up casperjs because it seems to satisfy the lightweight, headless requirement very nicely. |
9,776,351 | This question describes my conclusion after researching available options for creating a headless Chrome instance in Python and asks for confirmation or resources that describe a 'better way'.
From what I've seen it seems that the quickest way to get started with a headless instance of Chrome in a Python application is to use CEF (<http://code.google.com/p/chromiumembedded/>) with CEFPython (<http://code.google.com/p/cefpython/>). CEFPython seems premature though, so using it would likely mean further customization before I'm able to load a headless Chrome instance that loads web pages (and required files), resolves a completed DOM and then lets me run arbitrary JS against it from Python.
Have I missed any other projects that are more mature or would make this easier for me? | 2012/03/19 | [
"https://Stackoverflow.com/questions/9776351",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/193601/"
] | While I'm the author of [CasperJS](http://casperjs.org/), I invite you to check out [Ghost.py](http://jeanphix.me/Ghost.py/), *a webkit web client written in Python*.
While it's heavily inspired by CasperJS, it's not based on [PhantomJS](http://phantomjs.org/) — it still uses [PyQt](http://www.riverbankcomputing.co.uk/software/pyqt/intro) bindings and Webkit though. | I use this to get the driver:
```
def get_browser(storage_dir, headless=False):
"""
Get the browser (a "driver").
Parameters
----------
storage_dir : str
headless : bool
Results
-------
browser : selenium webdriver object
"""
# find the path with 'which chromedriver'
path_to_chromedriver = '/usr/local/bin/chromedriver'
from selenium.webdriver.chrome.options import Options
chrome_options = Options()
if headless:
chrome_options.add_argument("--headless")
chrome_options.add_experimental_option('prefs', {
"plugins.plugins_list": [{"enabled": False,
"name": "Chrome PDF Viewer"}],
"download": {
"prompt_for_download": False,
"default_directory": storage_dir,
"directory_upgrade": False,
"open_pdf_in_system_reader": False
}
})
browser = webdriver.Chrome(path_to_chromedriver,
chrome_options=chrome_options)
return browser
```
By switching the `headless` parameter you can either watch it or not. |
9,776,351 | This question describes my conclusion after researching available options for creating a headless Chrome instance in Python and asks for confirmation or resources that describe a 'better way'.
From what I've seen it seems that the quickest way to get started with a headless instance of Chrome in a Python application is to use CEF (<http://code.google.com/p/chromiumembedded/>) with CEFPython (<http://code.google.com/p/cefpython/>). CEFPython seems premature though, so using it would likely mean further customization before I'm able to load a headless Chrome instance that loads web pages (and required files), resolves a completed DOM and then lets me run arbitrary JS against it from Python.
Have I missed any other projects that are more mature or would make this easier for me? | 2012/03/19 | [
"https://Stackoverflow.com/questions/9776351",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/193601/"
] | This question is 5 years old now and at the time it was a big challenge to run a headless chrome using python, but the good news is:
**Starting from version 59, released in June 2017, Chrome comes with a headless driver**, meaning we can use it in a non-graphical server environment and run tests without having pages visually rendered etc which saves a lot of time and memory for testing or scraping. Setting Selenium for that is very easy:
(I assume that you have installed selenium and chrome driver):
```
from selenium import webdriver
#set a headless browser
options = webdriver.ChromeOptions()
options.add_argument('headless')
browser = webdriver.Chrome(chrome_options=options)
```
and now your Chrome will run headlessly; if you take `options` out of the last line, it will show you the browser. | While I'm the author of [CasperJS](http://casperjs.org/), I invite you to check out [Ghost.py](http://jeanphix.me/Ghost.py/), *a webkit web client written in Python*.
While it's heavily inspired by CasperJS, it's not based on [PhantomJS](http://phantomjs.org/) — it still uses [PyQt](http://www.riverbankcomputing.co.uk/software/pyqt/intro) bindings and Webkit though. |
9,776,351 | This question describes my conclusion after researching available options for creating a headless Chrome instance in Python and asks for confirmation or resources that describe a 'better way'.
From what I've seen it seems that the quickest way to get started with a headless instance of Chrome in a Python application is to use CEF (<http://code.google.com/p/chromiumembedded/>) with CEFPython (<http://code.google.com/p/cefpython/>). CEFPython seems premature though, so using it would likely mean further customization before I'm able to load a headless Chrome instance that loads web pages (and required files), resolves a completed DOM and then lets me run arbitrary JS against it from Python.
Have I missed any other projects that are more mature or would make this easier for me? | 2012/03/19 | [
"https://Stackoverflow.com/questions/9776351",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/193601/"
] | This question is 5 years old now and at the time it was a big challenge to run a headless chrome using python, but the good news is:
**Starting from version 59, released in June 2017, Chrome comes with a headless driver**, meaning we can use it in a non-graphical server environment and run tests without having pages visually rendered etc which saves a lot of time and memory for testing or scraping. Setting Selenium for that is very easy:
(I assume that you have installed selenium and chrome driver):
```
from selenium import webdriver
#set a headless browser
options = webdriver.ChromeOptions()
options.add_argument('headless')
browser = webdriver.Chrome(chrome_options=options)
```
and now your Chrome will run headlessly; if you take `options` out of the last line, it will show you the browser. | I use this to get the driver:
```
def get_browser(storage_dir, headless=False):
"""
Get the browser (a "driver").
Parameters
----------
storage_dir : str
headless : bool
Results
-------
browser : selenium webdriver object
"""
# find the path with 'which chromedriver'
path_to_chromedriver = '/usr/local/bin/chromedriver'
from selenium.webdriver.chrome.options import Options
chrome_options = Options()
if headless:
chrome_options.add_argument("--headless")
chrome_options.add_experimental_option('prefs', {
"plugins.plugins_list": [{"enabled": False,
"name": "Chrome PDF Viewer"}],
"download": {
"prompt_for_download": False,
"default_directory": storage_dir,
"directory_upgrade": False,
"open_pdf_in_system_reader": False
}
})
browser = webdriver.Chrome(path_to_chromedriver,
chrome_options=chrome_options)
return browser
```
By switching the `headless` parameter you can either watch it or not. |
27,916,891 | I wrote some python code like:
```
if platform == 'ios':
do_something()
elif platform == 'android':
do_something_else()
```
And this piece of code was strongly criticized by one of my colleagues. He asked why I don't have an else block to handle the remaining cases.
In my opinion, all I need to do is deal with these two situations, which is why I did not write an else block: for all other situations, I mean for the program to just pass through this check and continue.
Is an else block necessary (per some PEP rule) for every if/elif block?
If it is morally compulsory, should I write my code as:
```
if platform == 'ios':
do_something()
if platform == 'android':
do_something_else()
```
But I still feel my first block is OK. Am I wrong? | 2015/01/13 | [
"https://Stackoverflow.com/questions/27916891",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3480199/"
] | `else` is not required from a syntax point of view and is not enforced by `PEP8`. If you intend to do nothing when `platform` is not `ios` or `android`, then this is perfectly OK.
Alternatively, you can have a mapping "platform > function", something along these lines:
```
mapping = {'ios': do_something, 'android': do_something_else}
f = mapping.get(platform)
if f:
f()
``` | It depends on your code but in this case either way would've been fine. There was a logical reason that your code needed to be that way and that's fine. You do not have to follow the rules all the time, you have to be able to try different stuff all the time. |
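The dictionary-dispatch idea from the first answer can be made self-contained; the handler bodies below are stand-ins, and using `.get` with a default callable is one way to drop the explicit `None` check:

```python
def do_something():
    return "ios path"

def do_something_else():
    return "android path"

mapping = {'ios': do_something, 'android': do_something_else}

def handle(platform):
    # .get with a no-op default removes the need to test for None
    return mapping.get(platform, lambda: None)()

print(handle('ios'))      # ios path
print(handle('windows'))  # None
```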
27,955,947 | We are trying to write an automated test for our iOS app using the Appium python client.
We want to imitate Swipe event on an element but none of the APIs from appium.webdriver.common.touch\_action seem to be behaving the way we want.
Basically we want to break down swipe in three events (KEYDOWN, MOVE, KEYUP).
The flow goes as below
1. Find the element.
2. Hold it, swipe it from point A to B, and hold it there. (KEYDOWN and MOVE)
3. Do something.
4. Do something more.
5. Release the element. (KEYUP)
* How can we achieve it on iOS ?
We have it working on Android using monkeyrunner. It works as below
```
X=50
Y=50
hr = MonkeyRunner.waitForConnection(timeout = 60, deviceId = dev_2)
hr.touch(X, Y, MonkeyDevice.DOWN)
for i in range(1, 13):
hr.touch(X, Y + 20*i, hr.MOVE)
time.sleep(0.1)
MonkeyRunner.sleep(2)
# Do something
hr.touch(X, Y, MonkeyDevice.UP)
```
Thanks! | 2015/01/15 | [
"https://Stackoverflow.com/questions/27955947",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/631679/"
] | Your 'data' contains 4 contours. Each contour has one point that was drawn on the image. What you need is 1 contour with 4 points. Push all your points to data[0].
On a side note, you don't need to call drawContours() in a loop. If you provide a negative contour index (third parameter), all contours will be drawn.
```
vector<vector<Point> > data(1);
data[0].push_back(Point(0,0));
data[0].push_back(Point(0,120));
data[0].push_back(Point(180,100));
data[0].push_back(Point(70,0));
drawContours(input, data, -1, Scalar(0,255,0), 10, LINE_8);
``` | If you have only 4 points, I suggest you to use cv::Rectangle. If you can have a lot of points, you have to write a function using [cv::Line](http://docs.opencv.org/2.4.2/modules/core/doc/drawing_functions.html#line). |
49,981,741 | I am writing a python application and trying to manage the code in a structure.
The directory structure that I have is something like the following:-
```
package/
A/
__init__.py
base.py
B/
__init__.py
base.py
app.py
__init__.py
```
so I have a line in A/`__init__.py` that says
```
from .base import *
```
No problem there, but when I put the same line in B/`__init__.py`
```
from .base import *
```
I get an error
```
E0402: Attempted relative import beyond top-level package.
```
Isn't the two supposed to be identical? what exactly am I doing wrong here?
I am using Python 3.6, and I run the application from the terminal with
```
> python app.py
```
Thanks
UPDATE:
Sorry, the error is from somewhere else.
In A/base.py I have
```
class ClassA():
...
```
In B/base.py I have
```
from ..A import ClassA
class ClassB(ClassA):
...
```
The error came from the import statement in B/base.py
```
from ..A import ClassA
```
UPDATE #2
@JOHN\_16 app.py is as follows:-
```
from A import ClassA
from B import ClassB
if __name__ == "__main__":
...
```
Also updated the directory to include an empty `__init__.py` as suggested. | 2018/04/23 | [
"https://Stackoverflow.com/questions/49981741",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7877397/"
] | This occurs because you have two packages: *A* and *B*. Package *B* can't access the content of package *A* via a relative import because it can't move outside its top-level package. In your case both packages are top-level.
You need to reorganize your project, for example like this:
```
.
├── TL
│ ├── A
│ │ ├── __init__.py
│ │ └── base.py
│ ├── B
│ │ ├── __init__.py
│ │ └── base.py
│ └── __init__.py
└── app.py
```
and change the content of your `app.py` to use the package TL:
```
from TL.A import ClassA
from TL.B import ClassB
if __name__ == "__main__":
``` | My problem was forgetting `__init__.py` in my top level directory. This allowed me to use relative imports for folders in that directory. |
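The proposed layout can be verified end to end. The sketch below builds it in a temporary directory with minimal stand-in classes and confirms that the relative import in `TL/B/base.py` now resolves, since both packages live under the common top-level package `TL`:

```python
import importlib
import os
import sys
import tempfile

def build_layout(root):
    # TL/, TL/A/, TL/B/ each need an __init__.py to be packages
    for pkg in ("TL", os.path.join("TL", "A"), os.path.join("TL", "B")):
        os.makedirs(os.path.join(root, pkg), exist_ok=True)
        open(os.path.join(root, pkg, "__init__.py"), "w").close()
    with open(os.path.join(root, "TL", "A", "base.py"), "w") as f:
        f.write("class ClassA:\n    pass\n")
    with open(os.path.join(root, "TL", "B", "base.py"), "w") as f:
        # the relative import now stays inside the top-level package TL
        f.write("from ..A.base import ClassA\n\nclass ClassB(ClassA):\n    pass\n")

root = tempfile.mkdtemp()
build_layout(root)
sys.path.insert(0, root)
base_b = importlib.import_module("TL.B.base")
print(issubclass(base_b.ClassB, base_b.ClassA))  # True
```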
62,209,746 | Still fairly new to python.
I was wondering what would be a good way of detecting what output response a python program were to choose.
As an example, if you were to make a speed/distance/time calculator, if only 2 input were ever given, how would you detect which was the missing input and therefore the output? I can think of some fairly crude ways but I was wondering if there was anything else if more complex tasks were to come into play.
I guess something like:
```
def sdf(speed=0, distance=0, time=0):
# detect which parameter has no input / equals 0
# calculate result
# return result
sdf(speed=10, distance=2)
```
Any ideas? | 2020/06/05 | [
"https://Stackoverflow.com/questions/62209746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11098113/"
] | Python allows you to change types of variables on the fly. Since you are working with integers and `0` could be a useful value in your calculations, your default 'not present' value should be `None`:
```
def sdf(speed=None, time=None, distance=None):
if speed is None:
return calculate_speed(time, distance), time, distance
if time is None:
return speed, calculate_time(speed, distance), distance
if distance is None:
return speed, time, calculate_distance(speed, time)
# All paramters have been set! Maybe check if all three are correct
return speed, time, distance
speed, time, distance = sdf(speed=1, distance=2)
```
This way you don't have to find out what happened afterwards. This function will give you all three values, given you gave it at least 2 out of the 3.
If your program flow allows multiple values to be `None`, your `calculate_XY` functions should throw an exception if they detect it. So in this case:
```
def calculate_distance(speed, time):
return speed * time
```
It will throw an unsupported operand exception (TypeError), so there is no need to clutter your code with useless asserts.
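A quick check of that claim (a sketch, not part of the original answer): calling the function with a `None` argument raises the `TypeError` by itself.

```python
def calculate_distance(speed, time):
    return speed * time  # None * int raises TypeError on its own

try:
    calculate_distance(None, 2)
except TypeError as e:
    print("TypeError:", e)
```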
If you really don't know how many parameters will be set, do something like this:
```
try:
retval = sdf(None, None, x)
except TypeError as e:
print(e)
handle_exception(e)
```
Also, just a heads up: the `is` operator in Python checks whether the objects are the same object, not whether their values are equal. Since names that are assigned `None` are just a 'pointer to the global `None` object' (a simplification), checking whether a value 'contains' `None` with `is` is preferred. However, be aware of this:
```
a = b = list()
a is b
True
# a and b are 'pointers' to the same list object
a = list()
b = list()
a is b
False
a == b
True
# a and b contain 2 different list objects, but their contents are identical
```
Just be aware that to compare values use `==` and to check if they are the same object, use `is`.
HTH | You should use multiple functions and call the one needed.
```
def CalculateTravelTime(distance, speed)
def CalculateTravelSpeed(distance, time)
def CalculateTravelDistance(speed, time)
``` |
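A minimal sketch of that three-function approach (the answer only gives the signatures; the bodies below are assumptions):

```python
def CalculateTravelTime(distance, speed):
    return distance / speed

def CalculateTravelSpeed(distance, time):
    return distance / time

def CalculateTravelDistance(speed, time):
    return speed * time

print(CalculateTravelTime(100, 50))    # -> 2.0
print(CalculateTravelDistance(50, 2))  # -> 100
```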
62,209,746 | Still fairly new to python.
I was wondering what would be a good way of detecting what output response a python program were to choose.
As an example, if you were to make a speed/distance/time calculator, if only 2 inputs were ever given, how would you detect which was the missing input and therefore the output? I can think of some fairly crude ways, but I was wondering if there was anything else if more complex tasks were to come into play.
I guess something like:
```
def sdf(speed=0, distance=0, time=0):
# detect which parameter has no input / equals 0
# calculate result
# return result
sdf(speed=10, distance=2)
```
Any ideas? | 2020/06/05 | [
"https://Stackoverflow.com/questions/62209746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11098113/"
] | This is what I would do:
```
def sdf(distance=None, speed=None, time=None):
"""Calculate the missing speed, distance time value
returns a 3-tuple (speed, distance, time)
raises ValueError if more than one or no unknowns are given"""
if (distance, speed,time).count(None) > 1:
raise ValueError('Error - more than one unknown provided')
if (distance, speed,time).count(None) == 0:
raise ValueError('Not sure what to calculate - all parameters provided')
if speed is None:
return distance/time, distance, time
if time is None:
return speed, distance, distance/speed
if distance is None:
return speed, speed*time, time
``` | You should use multiple functions and call the one needed.
```
def CalculateTravelTime(distance, speed)
def CalculateTravelSpeed(distance, time)
def CalculateTravelDistance(speed, time)
``` |
62,209,746 | Still fairly new to python.
I was wondering what would be a good way of detecting what output response a python program were to choose.
As an example, if you were to make a speed/distance/time calculator, if only 2 inputs were ever given, how would you detect which was the missing input and therefore the output? I can think of some fairly crude ways, but I was wondering if there was anything else if more complex tasks were to come into play.
I guess something like:
```
def sdf(speed=0, distance=0, time=0):
# detect which parameter has no input / equals 0
# calculate result
# return result
sdf(speed=10, distance=2)
```
Any ideas? | 2020/06/05 | [
"https://Stackoverflow.com/questions/62209746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11098113/"
] | Python allows you to change types of variables on the fly. Since you are working with integers and `0` could be a useful value in your calculations, your default 'not present' value should be `None`:
```
def sdf(speed=None, time=None, distance=None):
if speed is None:
return calculate_speed(time, distance), time, distance
if time is None:
return speed, calculate_time(speed, distance), distance
if distance is None:
return speed, time, calculate_distance(speed, time)
# All parameters have been set! Maybe check if all three are correct
return speed, time, distance
speed, time, distance = sdf(speed=1, distance=2)
```
This way you don't have to find out what happened afterwards. This function will give you all three values, given you gave it at least 2 out of the 3.
If your program flow allows multiple values to be `None`, your functions `calculate_XY` should throw an exception if they detect it. So in this case:
```
def calculate_distance(speed, time):
return speed * time
```
It will throw an unsupported operand exception (TypeError), so there is no need to clutter your code with useless asserts.
If you really don't know how many parameters will be set, do something like this:
```
try:
retval = sdf(None, None, x)
except TypeError as e:
print(e)
handle_exception(e)
```
Also, just a heads up: the `is` operator in Python checks whether the objects are the same object, not whether their values are equal. Since names that are assigned `None` are just a 'pointer to the global `None` object' (a simplification), checking whether a value 'contains' `None` with `is` is preferred. However, be aware of this:
```
a = b = list()
a is b
True
# a and b are 'pointers' to the same list object
a = list()
b = list()
a is b
False
a == b
True
# a and b contain 2 different list objects, but their contents are identical
```
Just be aware that to compare values use `==` and to check if they are the same object, use `is`.
HTH | This is what I would do:
```
def sdf(distance=None, speed=None, time=None):
"""Calculate the missing speed, distance time value
returns a 3-tuple (speed, distance, time)
raises ValueError if more than one or no unknowns are given"""
if (distance, speed,time).count(None) > 1:
raise ValueError('Error - more than one unknown provided')
if (distance, speed,time).count(None) == 0:
raise ValueError('Not sure what to calculate - all parameters provided')
if speed is None:
return distance/time, distance, time
if time is None:
return speed, distance, distance/speed
if distance is None:
return speed, speed*time, time
``` |
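A usage sketch for the `ValueError`-raising `sdf()` above (the function body is reproduced from the answer):

```python
def sdf(distance=None, speed=None, time=None):
    """Calculate the missing speed/distance/time value (from the answer above)."""
    if (distance, speed, time).count(None) > 1:
        raise ValueError('Error - more than one unknown provided')
    if (distance, speed, time).count(None) == 0:
        raise ValueError('Not sure what to calculate - all parameters provided')
    if speed is None:
        return distance / time, distance, time
    if time is None:
        return speed, distance, distance / speed
    if distance is None:
        return speed, speed * time, time

print(sdf(distance=100, time=4))  # -> (25.0, 100, 4)
try:
    sdf(distance=100)  # two unknowns -> ValueError
except ValueError as e:
    print(e)
```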
48,788,169 | I was doing cs231n assignment 2 and encountered this problem.
I'm using tensorflow-gpu 1.5.0
Code as following
```
# define our input (e.g. the data that changes every batch)
# The first dim is None, and gets sets automatically based on batch size fed in
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)
# define model
def complex_model(X,y,is_training):
pass
y_out = complex_model(X,y,is_training)
# Now we're going to feed a random batch into the model
# and make sure the output is the right size
x = np.random.randn(64, 32, 32,3)
with tf.Session() as sess:
with tf.device("/cpu:0"): #"/cpu:0" or "/gpu:0"
tf.global_variables_initializer().run()
ans = sess.run(y_out,feed_dict={X:x,is_training:True})
%timeit sess.run(y_out,feed_dict={X:x,is_training:True})
print(ans.shape)
print(np.array_equal(ans.shape, np.array([64, 10])))
```
Complete traceback
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-97f0b6c5a72e> in <module>()
6 tf.global_variables_initializer().run()
7
----> 8 ans = sess.run(y_out,feed_dict={X:x,is_training:True})
9 get_ipython().run_line_magic('timeit', 'sess.run(y_out,feed_dict={X:x,is_training:True})')
10 print(ans.shape)
c:\users\kasper\appdata\local\programs\python\python36\lib\site- packages\tensorflow\python\client\session.py in run(self, fetches, feed_dict, options, run_metadata)
893 try:
894 result = self._run(None, fetches, feed_dict, options_ptr,
--> 895 run_metadata_ptr)
896 if run_metadata:
897 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
c:\users\kasper\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\client\session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
1111 # Create a fetch handler to take care of the structure of fetches.
1112 fetch_handler = _FetchHandler(
-> 1113 self._graph, fetches, feed_dict_tensor, feed_handles=feed_handles)
1114
1115 # Run request and get response.
c:\users\kasper\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\client\session.py in __init__(self, graph, fetches, feeds, feed_handles)
419 with graph.as_default():
--> 420 self._fetch_mapper = _FetchMapper.for_fetch(fetches)
421 self._fetches = []
422 self._targets = []
c:\users\kasper\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\client\session.py in for_fetch(fetch)
235 if fetch is None:
236 raise TypeError('Fetch argument %r has invalid type %r' %
--> 237 (fetch, type(fetch)))
238 elif isinstance(fetch, (list, tuple)):
239 # NOTE(touts): This is also the code path for namedtuples.
TypeError: Fetch argument None has invalid type <class 'NoneType'>
```
I saw that similar questions have been asked on this site before, but those don't seem to solve mine.
Any help would be appreciated, thanks! | 2018/02/14 | [
"https://Stackoverflow.com/questions/48788169",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5856554/"
] | The problem is that the `y_out` argument to `sess.run()` is `None`, whereas it must be a `tf.Tensor` (or tensor-like object, such as a `tf.Variable`) or a `tf.Operation`.
In your example, `y_out` is defined by the following code:
```
# define model
def complex_model(X,y,is_training):
pass
y_out = complex_model(X,y,is_training)
```
`complex_model()` doesn't return a value, so `y_out = complex_model(...)` will set `y_out` to `None`. I'm not sure if this function is representative of your real code, but it's possible that your real `complex_model()` function is also missing a `return` statement. | I believe that **mrry** is right.
If you take a second look at the notebook [Assignment 2 - Tensorflow.ipynb](https://github.com/BedirYilmaz/cs231-stanford/blob/master/assignment2/TensorFlow.ipynb), you will notice the description cell as follows:
>
> Training a specific model
>
>
> In this section, we're going to specify a model for you to construct.
> The goal here isn't to get good performance (that'll be next), but
> instead to get comfortable with understanding the TensorFlow
> documentation and configuring your own model.
>
>
> Using the code provided above as guidance, and using the following
> TensorFlow documentation, specify a model with the following
> architecture:
>
>
>
> ```
> 7x7 Convolutional Layer with 32 filters and stride of 1
> ReLU Activation Layer
> Spatial Batch Normalization Layer (trainable parameters, with scale and centering)
> 2x2 Max Pooling layer with a stride of 2
> Affine layer with 1024 output units
> ReLU Activation Layer
> Affine layer from 1024 input units to 10 outputs
>
> ```
>
>
Which is **asking you to define a model** inside the function
```
# define model
def complex_model(X,y,is_training):
pass
```
Just like they did in
```
def simple_model(X,y):
# define our weights (e.g. init_two_layer_convnet)
# setup variables
Wconv1 = tf.get_variable("Wconv1", shape=[7, 7, 3, 32])
bconv1 = tf.get_variable("bconv1", shape=[32])
W1 = tf.get_variable("W1", shape=[5408, 10])
b1 = tf.get_variable("b1", shape=[10])
# define our graph (e.g. two_layer_convnet)
a1 = tf.nn.conv2d(X, Wconv1, strides=[1,2,2,1], padding='VALID') + bconv1
h1 = tf.nn.relu(a1)
h1_flat = tf.reshape(h1,[-1,5408])
y_out = tf.matmul(h1_flat,W1) + b1
return y_out
```
Hope this helps! |
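mrry's point can be reproduced without TensorFlow at all: a Python function whose body is just `pass` implicitly returns `None`, and that `None` is what ended up being passed to `sess.run()` as the fetch.

```python
def complex_model(X, y, is_training):
    pass  # no return statement, so every call evaluates to None

y_out = complex_model(None, None, True)
print(y_out is None)  # -> True
```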
11,100,380 | I've been studying tkinter in python3 and find it very hard to find good documentation and answers online. To help others struggling with the same problems I decided to post a solution for a simple problem that there seems to be no documentation for online.
Problem: Create a wizard-like program, that presents the user with a series of windows and the user can move between the windows clicking next and back - buttons.
The solution is:
* Create one root window.
* Create as many frames as you have windows to present to the user. Attach all frames to the root window.
* Populate each frame with all the widgets it needs.
* When all the frames have been populated, hide each frame with the `grid_forget()` method but leave the first frame unhidden so that it becomes the visible one. All the child widgets on the frame will be hidden with the frame.
* When the user clicks on Next or Back buttons on a window, call a subroutine that hides other frames (with `grid_forget()`) and makes the one that is needed visible (with `grid()`).
* When you want the program to end, use the destroy - method for the root window.
So you will be creating a single window and showing different frames on it.
(By the way, the best place to start studying tkinter is: <http://www.tkdocs.com/tutorial/index.html>)
Here is a sample implementation in Python3. It has 3 simple windows, each with a text label and two buttons to navigate through different windows.
```
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
#
# Creates three "windows" that the user can navigate through using Back and Next - buttons.
import tkinter
import tkinter.ttk
def create_widgets_in_first_frame():
# Create the label for the frame
first_window_label = tkinter.ttk.Label(first_frame, text='Window 1')
first_window_label.grid(column=0, row=0, pady=10, padx=10, sticky=(tkinter.N))
# Create the button for the frame
first_window_quit_button = tkinter.Button(first_frame, text = "Quit", command = quit_program)
first_window_quit_button.grid(column=0, row=1, pady=10, sticky=(tkinter.N))
first_window_next_button = tkinter.Button(first_frame, text = "Next", command = call_second_frame_on_top)
first_window_next_button.grid(column=1, row=1, pady=10, sticky=(tkinter.N))
def create_widgets_in_second_frame():
# Create the label for the frame
second_window_label = tkinter.ttk.Label(second_frame, text='Window 2')
second_window_label.grid(column=0, row=0, pady=10, padx=10, sticky=(tkinter.N))
# Create the button for the frame
second_window_back_button = tkinter.Button(second_frame, text = "Back", command = call_first_frame_on_top)
second_window_back_button.grid(column=0, row=1, pady=10, sticky=(tkinter.N))
second_window_next_button = tkinter.Button(second_frame, text = "Next", command = call_third_frame_on_top)
second_window_next_button.grid(column=1, row=1, pady=10, sticky=(tkinter.N))
def create_widgets_in_third_frame():
# Create the label for the frame
third_window_label = tkinter.ttk.Label(third_frame, text='Window 3')
third_window_label.grid(column=0, row=0, pady=10, padx=10, sticky=(tkinter.N))
# Create the button for the frame
third_window_back_button = tkinter.Button(third_frame, text = "Back", command = call_second_frame_on_top)
third_window_back_button.grid(column=0, row=1, pady=10, sticky=(tkinter.N))
third_window_quit_button = tkinter.Button(third_frame, text = "Quit", command = quit_program)
third_window_quit_button.grid(column=1, row=1, pady=10, sticky=(tkinter.N))
def call_first_frame_on_top():
# This function can be called only from the second window.
# Hide the second window and show the first window.
second_frame.grid_forget()
first_frame.grid(column=0, row=0, padx=20, pady=5, sticky=(tkinter.W, tkinter.N, tkinter.E))
def call_second_frame_on_top():
# This function can be called from the first and third windows.
# Hide the first and third windows and show the second window.
first_frame.grid_forget()
third_frame.grid_forget()
second_frame.grid(column=0, row=0, padx=20, pady=5, sticky=(tkinter.W, tkinter.N, tkinter.E))
def call_third_frame_on_top():
# This function can only be called from the second window.
# Hide the second window and show the third window.
second_frame.grid_forget()
third_frame.grid(column=0, row=0, padx=20, pady=5, sticky=(tkinter.W, tkinter.N, tkinter.E))
def quit_program():
root_window.destroy()
###############################
# Main program starts here :) #
###############################
# Create the root GUI window.
root_window = tkinter.Tk()
# Define window size
window_width = 200
window_heigth = 100
# Create frames inside the root window to hold other GUI elements. All frames must be created in the main program, otherwise they are not accessible in functions.
first_frame=tkinter.ttk.Frame(root_window, width=window_width, height=window_heigth)
first_frame['borderwidth'] = 2
first_frame['relief'] = 'sunken'
first_frame.grid(column=0, row=0, padx=20, pady=5, sticky=(tkinter.W, tkinter.N, tkinter.E))
second_frame=tkinter.ttk.Frame(root_window, width=window_width, height=window_heigth)
second_frame['borderwidth'] = 2
second_frame['relief'] = 'sunken'
second_frame.grid(column=0, row=0, padx=20, pady=5, sticky=(tkinter.W, tkinter.N, tkinter.E))
third_frame=tkinter.ttk.Frame(root_window, width=window_width, height=window_heigth)
third_frame['borderwidth'] = 2
third_frame['relief'] = 'sunken'
third_frame.grid(column=0, row=0, padx=20, pady=5, sticky=(tkinter.W, tkinter.N, tkinter.E))
# Create all widgets to all frames
create_widgets_in_third_frame()
create_widgets_in_second_frame()
create_widgets_in_first_frame()
# Hide all frames in reverse order, but leave first frame visible (unhidden).
third_frame.grid_forget()
second_frame.grid_forget()
# Start tkinter event - loop
root_window.mainloop()
``` | 2012/06/19 | [
"https://Stackoverflow.com/questions/11100380",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1466253/"
] | As you've taken the liberty to post an answer as a question. I'd like to post a comment as an answer and suggest that perhaps you should contribute this to TkDocs (click their [About tab](http://www.tkdocs.com/about.html) and they talk about contributing to the site).
I think it'd be better if that site were improved with more examples than to turn this site into a cookbook. I think you can also contribute to the [Active State recipes](http://code.activestate.com/recipes/), and they seem to be the carriers of the torch for Tcl/Tk, so Tkinter stuff makes a lot of sense there too. | Thanks for your work - I used it as inspiration for this example that, while extremely light in terms of the content, is a cool way to make an arbitrary number of windows that you can switch between. You could move the location of the next and back buttons, turn them into arrows, whatever you want.
```
from tkinter import *
master=Tk()
class makeframe(object):
def __init__(self,i):
self.i=i
self.frame=Frame(master)
self.nextbutton=Button(self.frame,text='next',command=self.next)
self.nextbutton.grid(column=2,row=0)
self.backbutton=Button(self.frame,text='back',command=self.back)
self.backbutton.grid(column=0,row=0)
self.label=Label(self.frame,text='%i'%(self.i+1)).grid(column=1,row=0)
def next(self):
self.frame.grid_forget()
p[self.i+1].frame.grid()
def back(self):
self.frame.grid_forget()
p[self.i-1].frame.grid()
n=7
p=[0]*n
for i in range(n):
p[i]=makeframe(i)
p[0].frame.grid()
p[0].backbutton.config(state=DISABLED)
p[-1].nextbutton.config(state=DISABLED)
``` |
3,974,211 | i saw a javascript implementation of sha-256.
I want to ask if it is safe (pros/cons, whatever) to use the SHA-256 algorithm (using a JavaScript implementation or maybe Python standard modules) as a password generator:
I remember one password, put it in followed (etc.) by the website address, and use the generated text as the password for that website.
repeat the process every time I need a password
same for other websites | 2010/10/20 | [
"https://Stackoverflow.com/questions/3974211",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/481170/"
] | I think you are describing the approach used by [SuperGenPass](http://supergenpass.com/):
Take a master password (same for every site), concatenate it with the site's domain name, and then hash the thing.
Yes, SHA-256 would be secure for that, likely more secure than what SuperGenPass uses. However, you will end up with very long passwords, too long for many sites to accept, and also not guaranteed to contain numbers and letters and special characters at the same time, which some sites require.
Also, the general problem remains that if somehow (not by breaking the algorithm, but by other means) your master password does get leaked, all your passwords are belong to us.
Completely random passwords are most secure (if we ignore the problem of storing them securely somewhere). | SHA-256 generates *very* long strings. You're better off using `random.choice()` with a string a fixed number of times. |
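For illustration, here is a minimal sketch of the scheme being discussed (an assumption for demonstration, not SuperGenPass's actual algorithm): hash master password plus domain with SHA-256 and truncate. It shows both the determinism the approach relies on and the length problem mentioned above.

```python
import hashlib

def site_password(master, domain, length=16):
    # Deterministic: the same master + domain always yields the same password.
    digest = hashlib.sha256((master + domain).encode("utf-8")).hexdigest()
    return digest[:length]  # truncate; the full hex digest is 64 chars

pw = site_password("my master secret", "example.com")
print(len(pw))  # -> 16
print(pw == site_password("my master secret", "example.com"))  # -> True
```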
52,236,797 | Built Python 3.7 on my Raspberry pi zero in a attempt to upgrade from Python 3.5.3
The build was successful. I ran into "module not found" for smbus and switched that to smbus2; now when I import gpiozero I get "module not found". My DungeonCube.py program was working fine under Python 3.5.3, but now Python 3.7 seems to have trouble finding gpiozero
this is what I did to test:
```
python3
Python 3.7.0 (default, Sept 7 2018, 14:22:04)
[GCC 6.3.0 20170516] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import gpiozero
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'gpiozero'
>>>
```
anyone know how to get python 3.7 to see gpiozero module? | 2018/09/08 | [
"https://Stackoverflow.com/questions/52236797",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10333299/"
] | I had the same problem: I realized that I had used `pip3` to install the package but was trying to use it with `python`, which invokes Python 2. Running it with `python3` works just fine. | Did you download the gpiozero module onto the Raspberry Pi? It does not come preinstalled with Python.
You could try "sudo python3 -m pip install gpiozero". If that doesn't work, replace `python3` with `python` @GarryOsborne .
15,976,639 | First of all: Please keep in mind that I'm very much a beginner at programming.
I'm trying to write a simple program in Python that will replace the consonants in a string with consonant+"o"+consonant. For example "b" would be replaced with "bob" and "d" would be replaced with "dod" (so the word "python" would be changed to "popytothohonon").
To do this I created a dictionary, that contained the pairs b:bob,c:coc,d:dod etc. Then I used the replace() command to read through the word and replace the consonants with their translation in the dictionary. The full piece of code looks like this:
```
def replacer(text):
consonant='bcdfghjklmnpqrstvwxz'
lexicon={}
for x in range(0,len(consonant)):
lexicon[x]=(consonant[x]),(consonant[x]+'o'+consonant[x])
for i,j in lexicon.items():
text=(text.replace(i,j))
return text
```
Now, when I try to call this function I get the following error:
```
Traceback (most recent call last):
File "D:\x\x.py", line 37, in <module>
print(replacer("python"))
File "D:\x\x.py", line 17, in replacer
text=(text.replace(i,j))
TypeError: Can't convert 'int' object to str implicitly
```
But I'm not using any ints! There's got to be something wrong with the dictionary, because everything works when I make it "by hand" like this:
```
list={'b':'bob', 'c':'coc', 'd':'dod', 'f':'fof', 'g':'gog', 'h':'hoh'......}
```
But when I print the "non-hand-made" dictionary everything seems to be in order:
```
{0: ('b', 'bob'), 1: ('c', 'coc'), 2: ('d', 'dod'), 3: ('f', 'fof')........
```
What am I doing wrong? | 2013/04/12 | [
"https://Stackoverflow.com/questions/15976639",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2275161/"
] | `lexicon` is a dictionary with integers as keys and tuples as values. When you iterate over its items, you're getting tuples of the form `(integer, tuple)`. You're then passing that integer and tuple to `text.replace` as `i` and `j`, which is why it's complaining. Perhaps you meant:
```
for i,j in lexicon.values():
...
```
For this simple replacement, `str.replace` is fine, but for more complicated replacements, the code will probably be more robust (and possibly execute faster!) if you [use `re.sub` instead](https://stackoverflow.com/a/15324369/748858).
---
Also, as pointed out in the comments, for this case, a better data structure would be to use a `list`:
```
lexicon = [ (x,'{0}o{0}'.format(x)) for x in chars ]
```
Now you can build your dict from this list if you really want:
```
lexicon = dict(enumerate(lexicon))
```
but there's probably no need. And in this case, you'd iterate over `lexicon` directly:
```
for i,j in lexicon:
...
```
If you're only going to do this once, you could even do it lazily without ever materializing the list by using a generator expression:
```
lexicon = ( (x,'{0}o{0}'.format(x)) for x in chars )
``` | no ... your keys in the handmade version are strings ... your keys in the other version are ints ... ints have no replace method |
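Putting the correction together, here is a runnable version of the asker's function, iterating over `.values()` so that `i` and `j` are the consonant and its replacement:

```python
def replacer(text):
    consonant = 'bcdfghjklmnpqrstvwxz'
    # keys are positions (as in the question), values are (consonant, replacement)
    lexicon = {x: (c, c + 'o' + c) for x, c in enumerate(consonant)}
    for i, j in lexicon.values():  # .values(), not .items()
        text = text.replace(i, j)
    return text

print(replacer("python"))  # -> popytothohonon
```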
15,976,639 | First of all: Please keep in mind that I'm very much a beginner at programming.
I'm trying to write a simple program in Python that will replace the consonants in a string with consonant+"o"+consonant. For example "b" would be replaced with "bob" and "d" would be replaced with "dod" (so the word "python" would be changed to "popytothohonon").
To do this I created a dictionary, that contained the pairs b:bob,c:coc,d:dod etc. Then I used the replace() command to read through the word and replace the consonants with their translation in the dictionary. The full piece of code looks like this:
```
def replacer(text):
consonant='bcdfghjklmnpqrstvwxz'
lexicon={}
for x in range(0,len(consonant)):
lexicon[x]=(consonant[x]),(consonant[x]+'o'+consonant[x])
for i,j in lexicon.items():
text=(text.replace(i,j))
return text
```
Now, when I try to call this function I get the following error:
```
Traceback (most recent call last):
File "D:\x\x.py", line 37, in <module>
print(replacer("python"))
File "D:\x\x.py", line 17, in replacer
text=(text.replace(i,j))
TypeError: Can't convert 'int' object to str implicitly
```
But I'm not using any ints! There's got to be something wrong with the dictionary, because everything works when I make it "by hand" like this:
```
list={'b':'bob', 'c':'coc', 'd':'dod', 'f':'fof', 'g':'gog', 'h':'hoh'......}
```
But when I print the "non-hand-made" dictionary everything seems to be in order:
```
{0: ('b', 'bob'), 1: ('c', 'coc'), 2: ('d', 'dod'), 3: ('f', 'fof')........
```
What am I doing wrong? | 2013/04/12 | [
"https://Stackoverflow.com/questions/15976639",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2275161/"
] | `lexicon` is a dictionary with integers as keys and tuples as values. When you iterate over its items, you're getting tuples of the form `(integer, tuple)`. You're then passing that integer and tuple to `text.replace` as `i` and `j`, which is why it's complaining. Perhaps you meant:
```
for i,j in lexicon.values():
...
```
For this simple replacement, `str.replace` is fine, but for more complicated replacements, the code will probably be more robust (and possibly execute faster!) if you [use `re.sub` instead](https://stackoverflow.com/a/15324369/748858).
---
Also, as pointed out in the comments, for this case, a better data structure would be to use a `list`:
```
lexicon = [ (x,'{0}o{0}'.format(x)) for x in chars ]
```
Now you can build your dict from this list if you really want:
```
lexicon = dict(enumerate(lexicon))
```
but there's probably no need. And in this case, you'd iterate over `lexicon` directly:
```
for i,j in lexicon:
...
```
If you're only going to do this once, you could even do it lazily without ever materializing the list by using a generator expression:
```
lexicon = ( (x,'{0}o{0}'.format(x)) for x in chars )
``` | No need for dictionaries this time, just iterate over the characters of the text, append vowels or consonant+o+consonant to a result array, and join it into a string at the end:
```
def replacer(text):
consonants = set('bcdfghjklmnpqrstvwxz')
result = []
for c in text:
if c in consonants:
result.append(c+"o"+c)
else:
result.append(c)
return "".join(result)
print(replacer("python"))
```
---
For advanced users:
```
import re

def replacer(text):
return re.sub(r"[bcdfghjklmnpqrstvwxz]", r"\g<0>o\g<0>", text)
```
---
And to answer "What am I doing wrong?" - dictionaries are useful for arbitrary keys; usually `list.append()` is preferred over using keys 0-n in a dict. And since you are not interested in a consonant's position, you can iterate over strings directly like this:
```
for x in consonant:
lexicon[x] = x+"o"+x
``` |
15,976,639 | First of all: Please keep in mind that I'm very much a beginner at programming.
I'm trying to write a simple program in Python that will replace the consonants in a string with consonant+"o"+consonant. For example "b" would be replaced with "bob" and "d" would be replaced with "dod" (so the word "python" would be changed to "popytothohonon").
To do this I created a dictionary, that contained the pairs b:bob,c:coc,d:dod etc. Then I used the replace() command to read through the word and replace the consonants with their translation in the dictionary. The full piece of code looks like this:
```
def replacer(text):
consonant='bcdfghjklmnpqrstvwxz'
lexicon={}
for x in range(0,len(consonant)):
lexicon[x]=(consonant[x]),(consonant[x]+'o'+consonant[x])
for i,j in lexicon.items():
text=(text.replace(i,j))
return text
```
Now, when I try to call this function I get the following error:
```
Traceback (most recent call last):
File "D:\x\x.py", line 37, in <module>
print(replacer("python"))
File "D:\x\x.py", line 17, in replacer
text=(text.replace(i,j))
TypeError: Can't convert 'int' object to str implicitly
```
But I'm not using any ints! There's got to be something wrong with the dictionary, because everything works when I make it "by hand" like this:
```
list={'b':'bob', 'c':'coc', 'd':'dod', 'f':'fof', 'g':'gog', 'h':'hoh'......}
```
But when I print the "non-hand-made" dictionary everything seems to be in order:
```
{0: ('b', 'bob'), 1: ('c', 'coc'), 2: ('d', 'dod'), 3: ('f', 'fof')........
```
What am I doing wrong? | 2013/04/12 | [
"https://Stackoverflow.com/questions/15976639",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2275161/"
] | `lexicon` is a dictionary with integers as keys and tuples as values. When you iterate over its items, you're getting tuples of the form `(integer, tuple)`. You're then passing that integer and tuple to `text.replace` as `i` and `j`, which is why it's complaining. Perhaps you meant:
```
for i,j in lexicon.values():
...
```
For this simple replacement, `str.replace` is fine, but for more complicated replacements, the code will probably be more robust (and possibly execute faster!) if you [use `re.sub` instead](https://stackoverflow.com/a/15324369/748858).
---
Also, as pointed out in the comments, for this case, a better data structure would be to use a `list`:
```
lexicon = [ (x,'{0}o{0}'.format(x)) for x in chars ]
```
Now you can build your dict from this list if you really want:
```
lexicon = dict(enumerate(lexicon))
```
but there's probably no need. And in this case, you'd iterate over `lexicon` directly:
```
for i,j in lexicon:
...
```
If you're only going to do this once, you could even do it lazily without ever materializing the list by using a generator expression:
```
lexicon = ( (x,'{0}o{0}'.format(x)) for x in chars )
``` | I guess what you wanted to achieve is a lexicon mapping each consonant to its replacement. It may be done this way:
```
lexicon = { c: c+'o'+c for c in consonant }
```
which is equivalent to:
```
for c in consonant:
lexicon[c] = c+'o'+c
``` |
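With the comprehension above, the asker's original `.items()` loop works unchanged, because the keys are now the consonant strings themselves. A sketch:

```python
consonant = 'bcdfghjklmnpqrstvwxz'
lexicon = {c: c + 'o' + c for c in consonant}

text = 'python'
for i, j in lexicon.items():  # keys are strings now, so str.replace is happy
    text = text.replace(i, j)
print(text)  # -> popytothohonon
```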
15,976,639 | First of all: Please keep in mind that I'm very much a beginner at programming.
I'm trying to write a simple program in Python that will replace the consonants in a string with consonant+"o"+consonant. For example "b" would be replaced with "bob" and "d" would be replaced with "dod" (so the word "python" would be changed to "popytothohonon").
To do this I created a dictionary, that contained the pairs b:bob,c:coc,d:dod etc. Then I used the replace() command to read through the word and replace the consonants with their translation in the dictionary. The full piece of code looks like this:
```
def replacer(text):
consonant='bcdfghjklmnpqrstvwxz'
lexicon={}
for x in range(0,len(consonant)):
lexicon[x]=(consonant[x]),(consonant[x]+'o'+consonant[x])
for i,j in lexicon.items():
text=(text.replace(i,j))
return text
```
Now, when I try to call this function I get the following error:
```
Traceback (most recent call last):
File "D:\x\x.py", line 37, in <module>
print(replacer("python"))
File "D:\x\x.py", line 17, in replacer
text=(text.replace(i,j))
TypeError: Can't convert 'int' object to str implicitly
```
But I'm not using any ints! There's got to be something wrong with the dictionary, because everything works when I make it "by hand" like this:
```
list={'b':'bob', 'c':'coc', 'd':'dod', 'f':'fof', 'g':'gog', 'h':'hoh'......}
```
But when I print the "non-hand-made" dictionary everything seems to be in order:
```
{0: ('b', 'bob'), 1: ('c', 'coc'), 2: ('d', 'dod'), 3: ('f', 'fof')........
```
What am I doing wrong? | 2013/04/12 | [
"https://Stackoverflow.com/questions/15976639",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2275161/"
] | No need for dictionaries this time: just iterate over the characters of `text`, append vowels as-is or consonant+o+consonant for consonants to a result array, and join it into a string at the end:
```
def replacer(text):
consonants = set('bcdfghjklmnpqrstvwxz')
result = []
for c in text:
if c in consonants:
result.append(c+"o"+c)
else:
result.append(c)
return "".join(result)
print(replacer("python"))
```
---
For advanced users:
```
def replacer(text):
return re.sub(r"[bcdfghjklmnpqrstvwxz]", r"\g<0>o\g<0>", text)
```
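As a quick sanity check (illustrative only), the regex version run against the word from the question:

```python
import re

def replacer(text):
    # \g<0> re-inserts the matched consonant on both sides of the 'o'.
    return re.sub(r"[bcdfghjklmnpqrstvwxz]", r"\g<0>o\g<0>", text)

print(replacer("python"))  # popytothohonon
```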
---
And to answer "What am I doing wrong?" - dictionaries are useful for arbitrary keys; `list.append()` is usually preferred over using keys 0-n in a dict. And since you are not interested in a consonant's position, you can iterate over strings directly like this:
```
for x in consonant:
lexicon[x] = x+"o"+x
``` | no ... your keys in the handmade version are strings ... your keys in the other version are ints ... ints have no replace method
15,976,639 | First of all: Please keep in mind that I'm very much a beginner at programming.
I'm trying to write a simple program in Python that will replace the consonants in a string with consonant+"o"+consonant. For example "b" would be replaced with "bob" and "d" would be replaced with "dod" (so the word "python" would be changed to "popytothohonon").
To do this I created a dictionary, that contained the pairs b:bob,c:coc,d:dod etc. Then I used the replace() command to read through the word and replace the consonants with their translation in the dictionary. The full piece of code looks like this:
```
def replacer(text):
consonant='bcdfghjklmnpqrstvwxz'
lexicon={}
for x in range(0,len(consonant)):
lexicon[x]=(consonant[x]),(consonant[x]+'o'+consonant[x])
for i,j in lexicon.items():
text=(text.replace(i,j))
return text
```
Now, when I try to call this function I get the following error:
```
Traceback (most recent call last):
File "D:\x\x.py", line 37, in <module>
print(replacer("python"))
File "D:\x\x.py", line 17, in replacer
text=(text.replace(i,j))
TypeError: Can't convert 'int' object to str implicitly
```
But I'm not using any ints! There's got to be something wrong with the dictionary, because everything works when I make it "by hand" like this:
```
list={'b':'bob', 'c':'coc', 'd':'dod', 'f':'fof', 'g':'gog', 'h':'hoh'......}
```
But when I print the "non-hand-made" dictionary everything seems to be in order:
```
{0: ('b', 'bob'), 1: ('c', 'coc'), 2: ('d', 'dod'), 3: ('f', 'fof')........
```
What am I doing wrong? | 2013/04/12 | [
"https://Stackoverflow.com/questions/15976639",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2275161/"
] | I guess what you wanted to achieve is a lexicon mapping each consonant to its replacement. It may be done this way:
```
lexicon = { c: c+'o'+c for c in consonant }
```
which is equivalent of:
```
for c in consonant:
lexicon[c] = c+'o'+c
``` | no ... your keys in the handmade version are strings ... your keys in the other version are ints ... ints have no replace method
59,457,595 | I am taking the data science course from Udemy. After running the code to show the iris data set, it does not show. Instead, it downloads a data file.
I am running the following code:
```py
from IPython.display import HTML
HTML('<iframe src=http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data></iframe>')
```
Is the code correct? Could you please help me show the iris dataset in Python using an iframe?
link to the course: <https://www.udemy.com/course/introduction-to-data-science-using-python/learn/lecture/9387344#questions> | 2019/12/23 | [
"https://Stackoverflow.com/questions/59457595",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4998064/"
] | If plot is not in the first line in the file, you could do this:
```
sed '1,/plot/!s/plot//'
```
If it can be on the first line, I see no other way but to loop it:
```
sed ':a;/plot/!{n;ba;};:b;n;s///;bb'
``` | In case you are ok with an `awk` solution, could you please try the following.
```
awk '/plot/ && ++count==1{print;next} !/plot/' Input_file
```
***Explanation:*** Adding an explanation of the above code.
```
awk ' ##Starting awk program from here.
/plot/ && ++count==1{ ##Checking condition if string plot is present and variable count value is 1 then do following.
print ##Printing the current line.
next ##next will skip all further statements from here.
} ##Closing BLOCK for above condition.
!/plot/ ##Checking condition if string plot is NOT present then do print of that line.
' Input_file ##Mentioning Input_file name here.
```
***NOTE:*** In case you want to save output into Input\_file itself then append `> temp && mv temp Input_file` to above code. |
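A small runnable sketch of the one-liner (the name `Input_file` and its contents are invented for this demo):

```shell
# Hypothetical sample file: two "plot" lines with a non-matching line between.
printf 'first plot line\nno match here\nsecond plot line\n' > Input_file

# Prints the first "plot" line, drops later "plot" lines, keeps the rest.
awk '/plot/ && ++count==1{print;next} !/plot/' Input_file
```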
32,879,614 | I was following the instructions [here](https://people.csail.mit.edu/hubert/pyaudio/compilation.html) and I'm having trouble getting the installation to work. Basically, the first part works fine. I downloaded portaudio, followed the instructions, and it all seemed to work.
However, when I tried `python3 setup.py install`, I got an error. The error came from the /src/\_portaudiomodule.c file, and it said that "The file Python.h could not be found". I don't really understand what's going on because there was no Python.h file when I extracted the PyAudio archive. I don't know where the Python.h file was supposed to come from.
I'm kind of a noob to unix systems so I could have easily made a mistake somewhere. I've been trying to solve this for hours and I've had no luck so far. Thanks in advance for your help! | 2015/10/01 | [
"https://Stackoverflow.com/questions/32879614",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3047641/"
] | To install the latest version of pyaudio using conda:
```
source activate -your environment name-
pip install pyaudio
```
You may run into the following error when installing from pip:
```
src/_portaudiomodule.c:29:23: fatal error: portaudio.h: No such file or directory
#include "portaudio.h"
compilation terminated.
error: command 'gcc' failed with exit status 1
```
That is because you don't have the PortAudio development package installed. Install it with:
```
sudo apt-get install portaudio19-dev
``` | I was able to get it installed with [anaconda](https://www.continuum.io/downloads), using [this package](https://anaconda.org/bokeh/pyaudio).
Follow the install instructions for Linux [here](https://www.continuum.io/downloads#_unix), then do:
```
conda install -c bokeh pyaudio=0.2.7
``` |
32,879,614 | I was following the instructions [here](https://people.csail.mit.edu/hubert/pyaudio/compilation.html) and I'm having trouble getting the installation to work. Basically, the first part works fine. I downloaded portaudio, followed the instructions, and it all seemed to work.
However, when I tried `python3 setup.py install`, I got an error. The error came from the /src/\_portaudiomodule.c file, and it said that "The file Python.h could not be found". I don't really understand what's going on because there was no Python.h file when I extracted the PyAudio archive. I don't know where the Python.h file was supposed to come from.
I'm kind of a noob to unix systems so I could have easily made a mistake somewhere. I've been trying to solve this for hours and I've had no luck so far. Thanks in advance for your help! | 2015/10/01 | [
"https://Stackoverflow.com/questions/32879614",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3047641/"
] | To install the latest version of pyaudio using conda:
```
source activate -your environment name-
pip install pyaudio
```
You may run into the following error when installing from pip:
```
src/_portaudiomodule.c:29:23: fatal error: portaudio.h: No such file or directory
#include "portaudio.h"
compilation terminated.
error: command 'gcc' failed with exit status 1
```
That is because you don't have the PortAudio development package installed. Install it with:
```
sudo apt-get install portaudio19-dev
``` | Try to install using the below command:
```
pip install pyaudio
```
After that, install the required Microsoft Visual C++ 14.0;
refer to the image below.
[](https://i.stack.imgur.com/FFtJ5.jpg)
Then restart the system and run the same command again:
```
pip install pyaudio
``` |
32,879,614 | I was following the instructions [here](https://people.csail.mit.edu/hubert/pyaudio/compilation.html) and I'm having trouble getting the installation to work. Basically, the first part works fine. I downloaded portaudio, followed the instructions, and it all seemed to work.
However, when I tried `python3 setup.py install`, I got an error. The error came from the /src/\_portaudiomodule.c file, and it said that "The file Python.h could not be found". I don't really understand what's going on because there was no Python.h file when I extracted the PyAudio archive. I don't know where the Python.h file was supposed to come from.
I'm kind of a noob to unix systems so I could have easily made a mistake somewhere. I've been trying to solve this for hours and I've had no luck so far. Thanks in advance for your help! | 2015/10/01 | [
"https://Stackoverflow.com/questions/32879614",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3047641/"
] | To install the latest version of pyaudio using conda:
```
source activate -your environment name-
pip install pyaudio
```
You may run into the following error when installing from pip:
```
src/_portaudiomodule.c:29:23: fatal error: portaudio.h: No such file or directory
#include "portaudio.h"
compilation terminated.
error: command 'gcc' failed with exit status 1
```
That is because you don't have the PortAudio development package installed. Install it with:
```
sudo apt-get install portaudio19-dev
``` | I have found a workaround for Mac.
Please refer to the steps below to install PyAudio on Python 3.5.
Follow these steps:
* export HOMEBREW\_NO\_ENV\_FILTERING=1
* xcode-select --install
* brew update
* brew upgrade
* brew install portaudio
* pip install pyaudio |
32,879,614 | I was following the instructions [here](https://people.csail.mit.edu/hubert/pyaudio/compilation.html) and I'm having trouble getting the installation to work. Basically, the first part works fine. I downloaded portaudio, followed the instructions, and it all seemed to work.
However, when I tried `python3 setup.py install`, I got an error. The error came from the /src/\_portaudiomodule.c file, and it said that "The file Python.h could not be found". I don't really understand what's going on because there was no Python.h file when I extracted the PyAudio archive. I don't know where the Python.h file was supposed to come from.
I'm kind of a noob to unix systems so I could have easily made a mistake somewhere. I've been trying to solve this for hours and I've had no luck so far. Thanks in advance for your help! | 2015/10/01 | [
"https://Stackoverflow.com/questions/32879614",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3047641/"
] | I have found a workaround for Mac.
Please refer to the steps below to install PyAudio on Python 3.5.
Follow these steps:
* export HOMEBREW\_NO\_ENV\_FILTERING=1
* xcode-select --install
* brew update
* brew upgrade
* brew install portaudio
* pip install pyaudio | Try to install using the below command:
```
pip install pyaudio
```
After that, install the required Microsoft Visual C++ 14.0;
refer to the image below.
[](https://i.stack.imgur.com/FFtJ5.jpg)
Then restart the system and run the same command again:
```
pip install pyaudio
``` |
32,879,614 | I was following the instructions [here](https://people.csail.mit.edu/hubert/pyaudio/compilation.html) and I'm having trouble getting the installation to work. Basically, the first part works fine. I downloaded portaudio, followed the instructions, and it all seemed to work.
However, when I tried `python3 setup.py install`, I got an error. The error came from the /src/\_portaudiomodule.c file, and it said that "The file Python.h could not be found". I don't really understand what's going on because there was no Python.h file when I extracted the PyAudio archive. I don't know where the Python.h file was supposed to come from.
I'm kind of a noob to unix systems so I could have easily made a mistake somewhere. I've been trying to solve this for hours and I've had no luck so far. Thanks in advance for your help! | 2015/10/01 | [
"https://Stackoverflow.com/questions/32879614",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3047641/"
] | You don't need to compile pyaudio. To [install PyAudio](https://people.csail.mit.edu/hubert/pyaudio/#downloads), run:
```
$ sudo add-apt-repository universe
$ sudo apt-get install python-pyaudio python3-pyaudio
```
The first command [enables Universe Ubuntu repository](https://askubuntu.com/q/148638/3712).
If you want to compile it e.g., to use the latest version from git; install build dependencies:
```
$ sudo apt-get build-dep python-pyaudio python3-pyaudio
```
After that, you could install it from sources using `pip`:
```
$ python3 -mpip install pyaudio
```
Or to install the current version from git:
```
$ pip install -e git+http://people.csail.mit.edu/hubert/git/pyaudio.git#egg=pyaudio
```
Run `pip` commands inside a virtualenv or add `--user` command-line option, to avoid modifying the global `python3` installation (leave it to the package manager).
I've tested it on Ubuntu. Let me know if it fails on Mint. | I have found a workaround for Mac.
Please refer to the steps below to install PyAudio on Python 3.5.
Follow these steps:
* export HOMEBREW\_NO\_ENV\_FILTERING=1
* xcode-select --install
* brew update
* brew upgrade
* brew install portaudio
* pip install pyaudio |
32,879,614 | I was following the instructions [here](https://people.csail.mit.edu/hubert/pyaudio/compilation.html) and I'm having trouble getting the installation to work. Basically, the first part works fine. I downloaded portaudio, followed the instructions, and it all seemed to work.
However, when I tried `python3 setup.py install`, I got an error. The error came from the /src/\_portaudiomodule.c file, and it said that "The file Python.h could not be found". I don't really understand what's going on because there was no Python.h file when I extracted the PyAudio archive. I don't know where the Python.h file was supposed to come from.
I'm kind of a noob to unix systems so I could have easily made a mistake somewhere. I've been trying to solve this for hours and I've had no luck so far. Thanks in advance for your help! | 2015/10/01 | [
"https://Stackoverflow.com/questions/32879614",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3047641/"
] | To install the latest version of pyaudio using conda:
```
source activate -your environment name-
pip install pyaudio
```
You may run into the following error when installing from pip:
```
src/_portaudiomodule.c:29:23: fatal error: portaudio.h: No such file or directory
#include "portaudio.h"
compilation terminated.
error: command 'gcc' failed with exit status 1
```
That is because you don't have the PortAudio development package installed. Install it with:
```
sudo apt-get install portaudio19-dev
``` | You don't need to compile pyaudio. To [install PyAudio](https://people.csail.mit.edu/hubert/pyaudio/#downloads), run:
```
$ sudo add-apt-repository universe
$ sudo apt-get install python-pyaudio python3-pyaudio
```
The first command [enables Universe Ubuntu repository](https://askubuntu.com/q/148638/3712).
If you want to compile it e.g., to use the latest version from git; install build dependencies:
```
$ sudo apt-get build-dep python-pyaudio python3-pyaudio
```
After that, you could install it from sources using `pip`:
```
$ python3 -mpip install pyaudio
```
Or to install the current version from git:
```
$ pip install -e git+http://people.csail.mit.edu/hubert/git/pyaudio.git#egg=pyaudio
```
Run `pip` commands inside a virtualenv or add `--user` command-line option, to avoid modifying the global `python3` installation (leave it to the package manager).
I've tested it on Ubuntu. Let me know if it fails on Mint. |
32,879,614 | I was following the instructions [here](https://people.csail.mit.edu/hubert/pyaudio/compilation.html) and I'm having trouble getting the installation to work. Basically, the first part works fine. I downloaded portaudio, followed the instructions, and it all seemed to work.
However, when I tried `python3 setup.py install`, I got an error. The error came from the /src/\_portaudiomodule.c file, and it said that "The file Python.h could not be found". I don't really understand what's going on because there was no Python.h file when I extracted the PyAudio archive. I don't know where the Python.h file was supposed to come from.
I'm kind of a noob to unix systems so I could have easily made a mistake somewhere. I've been trying to solve this for hours and I've had no luck so far. Thanks in advance for your help! | 2015/10/01 | [
"https://Stackoverflow.com/questions/32879614",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3047641/"
] | I have found a workaround for Mac.
Please refer to the steps below to install PyAudio on Python 3.5.
Follow these steps:
* export HOMEBREW\_NO\_ENV\_FILTERING=1
* xcode-select --install
* brew update
* brew upgrade
* brew install portaudio
* pip install pyaudio | Python.h is nothing but a header file. It is used by gcc to build applications. You need to install a package called python-dev. This package includes header files, a static library and development tools for building Python modules, extending the Python interpreter or embedding Python in applications. To install this package, enter:
```
sudo apt-get install python3-dev
``` |
32,879,614 | I was following the instructions [here](https://people.csail.mit.edu/hubert/pyaudio/compilation.html) and I'm having trouble getting the installation to work. Basically, the first part works fine. I downloaded portaudio, followed the instructions, and it all seemed to work.
However, when I tried `python3 setup.py install`, I got an error. The error came from the /src/\_portaudiomodule.c file, and it said that "The file Python.h could not be found". I don't really understand what's going on because there was no Python.h file when I extracted the PyAudio archive. I don't know where the Python.h file was supposed to come from.
I'm kind of a noob to unix systems so I could have easily made a mistake somewhere. I've been trying to solve this for hours and I've had no luck so far. Thanks in advance for your help! | 2015/10/01 | [
"https://Stackoverflow.com/questions/32879614",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3047641/"
] | I have found a workaround for Mac.
Please refer to the steps below to install PyAudio on Python 3.5.
Follow these steps:
* export HOMEBREW\_NO\_ENV\_FILTERING=1
* xcode-select --install
* brew update
* brew upgrade
* brew install portaudio
* pip install pyaudio | I was able to get it installed with [anaconda](https://www.continuum.io/downloads), using [this package](https://anaconda.org/bokeh/pyaudio).
Follow the install instructions for Linux [here](https://www.continuum.io/downloads#_unix), then do:
```
conda install -c bokeh pyaudio=0.2.7
``` |
32,879,614 | I was following the instructions [here](https://people.csail.mit.edu/hubert/pyaudio/compilation.html) and I'm having trouble getting the installation to work. Basically, the first part works fine. I downloaded portaudio, followed the instructions, and it all seemed to work.
However, when I tried `python3 setup.py install`, I got an error. The error came from the /src/\_portaudiomodule.c file, and it said that "The file Python.h could not be found". I don't really understand what's going on because there was no Python.h file when I extracted the PyAudio archive. I don't know where the Python.h file was supposed to come from.
I'm kind of a noob to unix systems so I could have easily made a mistake somewhere. I've been trying to solve this for hours and I've had no luck so far. Thanks in advance for your help! | 2015/10/01 | [
"https://Stackoverflow.com/questions/32879614",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3047641/"
] | You don't need to compile pyaudio. To [install PyAudio](https://people.csail.mit.edu/hubert/pyaudio/#downloads), run:
```
$ sudo add-apt-repository universe
$ sudo apt-get install python-pyaudio python3-pyaudio
```
The first command [enables Universe Ubuntu repository](https://askubuntu.com/q/148638/3712).
If you want to compile it e.g., to use the latest version from git; install build dependencies:
```
$ sudo apt-get build-dep python-pyaudio python3-pyaudio
```
After that, you could install it from sources using `pip`:
```
$ python3 -mpip install pyaudio
```
Or to install the current version from git:
```
$ pip install -e git+http://people.csail.mit.edu/hubert/git/pyaudio.git#egg=pyaudio
```
Run `pip` commands inside a virtualenv or add `--user` command-line option, to avoid modifying the global `python3` installation (leave it to the package manager).
I've tested it on Ubuntu. Let me know if it fails on Mint. | Python.h is nothing but a header file. It is used by gcc to build applications. You need to install a package called python-dev. This package includes header files, a static library and development tools for building Python modules, extending the Python interpreter or embedding Python in applications. To install this package, enter:
```
sudo apt-get install python3-dev
``` |
32,879,614 | I was following the instructions [here](https://people.csail.mit.edu/hubert/pyaudio/compilation.html) and I'm having trouble getting the installation to work. Basically, the first part works fine. I downloaded portaudio, followed the instructions, and it all seemed to work.
However, when I tried `python3 setup.py install`, I got an error. The error came from the /src/\_portaudiomodule.c file, and it said that "The file Python.h could not be found". I don't really understand what's going on because there was no Python.h file when I extracted the PyAudio archive. I don't know where the Python.h file was supposed to come from.
I'm kind of a noob to unix systems so I could have easily made a mistake somewhere. I've been trying to solve this for hours and I've had no luck so far. Thanks in advance for your help! | 2015/10/01 | [
"https://Stackoverflow.com/questions/32879614",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3047641/"
] | To install the latest version of pyaudio using conda:
```
source activate -your environment name-
pip install pyaudio
```
You may run into the following error when installing from pip:
```
src/_portaudiomodule.c:29:23: fatal error: portaudio.h: No such file or directory
#include "portaudio.h"
compilation terminated.
error: command 'gcc' failed with exit status 1
```
That is because you don't have the PortAudio development package installed. Install it with:
```
sudo apt-get install portaudio19-dev
``` | Python.h is nothing but a header file. It is used by gcc to build applications. You need to install a package called python-dev. This package includes header files, a static library and development tools for building Python modules, extending the Python interpreter or embedding Python in applications. To install this package, enter:
```
sudo apt-get install python3-dev
``` |
23,507,902 | I use gsutil to transfer files from a Windows machine to Google Cloud Storage.
I have not used it for more than 6 months and now when I try it I get:
Failure: invalid\_grant
From researching this I suspect the access token is no longer valid as it has not been used for 6 months, and I need a refresh token?
I cannot seem to find how to get and use this.
thanks
Running `gsutil -DD config` produces the following output:
```
C:\Python27>python c:/gsutil/gsutil -DD config
DEBUG:boto:path=/pub/gsutil.tar.gz
DEBUG:boto:auth_path=/pub/gsutil.tar.gz
DEBUG:boto:Method: HEAD
DEBUG:boto:Path: /pub/gsutil.tar.gz
DEBUG:boto:Data:
DEBUG:boto:Headers: {}
DEBUG:boto:Host: storage.googleapis.com
DEBUG:boto:Params: {}
DEBUG:boto:establishing HTTPS connection: host=storage.googleapis.com, kwargs={'timeout': 70}
DEBUG:boto:Token: None
DEBUG:oauth2_client:GetAccessToken: checking cache for key *******************************
DEBUG:oauth2_client:FileSystemTokenCache.GetToken: key=******************************* not present (cache_file= c:\users\admini~1\appdata\local\temp\2\oauth2_client-tokencache._.ea******************************)
DEBUG:oauth2_client:GetAccessToken: token from cache: None
DEBUG:oauth2_client:GetAccessToken: fetching fresh access token...
INFO:oauth2client.client:Refreshing access_token
connect: (accounts.google.com, 443)
send: 'POST /o/oauth2/token HTTP/1.1\r\nHost: accounts.google.com\r\nContent-Length: 177\r\ncontent-type: application/x-www-form-urlencoded\r\naccept-encoding: gzip, deflate\r\nuser-agent: Python-httplib2/0.7.7 (gzip)\r\n\r\nclient_secret=******************&grant_type=refresh_token&refresh_token=****************************************&client_id=****************.apps.googleusercontent.com'
reply: 'HTTP/1.1 400 Bad Request\r\n'
header: Content-Type: application/json; charset=utf-8
header: Cache-Control: no-cache, no-store, max-age=0, must-revalidate
header: Pragma: no-cache
header: Expires: Fri, 01 Jan 1990 00:00:00 GMT
header: Date: Thu, 08 May 2014 02:02:21 GMT
header: Content-Disposition: attachment; filename="json.txt"; filename*=UTF-8''json.txt
header: Content-Encoding: gzip
header: X-Content-Type-Options: nosniff
header: X-Frame-Options: SAMEORIGIN
header: X-XSS-Protection: 1; mode=block
header: Server: GSE
header: Alternate-Protocol: 443:quic
header: Transfer-Encoding: chunked
INFO:oauth2client.client:Failed to retrieve access token: { "error" : "invalid_grant" }
Traceback (most recent call last):
  File "c:/gsutil/gsutil", line 83, in <module>
    gslib.__main__.main()
  File "c:\gsutil\gslib\__main__.py", line 151, in main
    command_runner.RunNamedCommand('ver', ['-l'])
  File "c:\gsutil\gslib\command_runner.py", line 95, in RunNamedCommand
    self._MaybeCheckForAndOfferSoftwareUpdate(command_name, debug)):
  File "c:\gsutil\gslib\command_runner.py", line 181, in _MaybeCheckForAndOfferSoftwareUpdate
    cur_ver = LookUpGsutilVersion(suri_builder.StorageUri(GSUTIL_PUB_TARBALL))
  File "c:\gsutil\gslib\util.py", line 299, in LookUpGsutilVersion
    obj = uri.get_key(False)
  File "c:\gsutil\third_party\boto\boto\storage_uri.py", line 342, in get_key
    generation=self.generation)
  File "c:\gsutil\third_party\boto\boto\gs\bucket.py", line 102, in get_key
    query_args_l=query_args_l)
  File "c:\gsutil\third_party\boto\boto\s3\bucket.py", line 176, in _get_key_internal
    query_args=query_args)
  File "c:\gsutil\third_party\boto\boto\s3\connection.py", line 547, in make_request
    retry_handler=retry_handler
  File "c:\gsutil\third_party\boto\boto\connection.py", line 947, in make_request
    retry_handler=retry_handler)
  File "c:\gsutil\third_party\boto\boto\connection.py", line 838, in _mexe
    request.authorize(connection=self)
  File "c:\gsutil\third_party\boto\boto\connection.py", line 377, in authorize
    connection._auth_handler.add_auth(self, *********)
  File "c:\gsutil\gslib\third_party\oauth2_plugin\oauth2_plugin.py", line 22, in add_auth
    self.oauth2_client.GetAuthorizationHeader()
  File "c:\gsutil\gslib\third_party\oauth2_plugin\oauth2_client.py", line 338, in GetAuthorizationHeader
    return 'Bearer %s' % self.GetAccessToken().token
  File "c:\gsutil\gslib\third_party\oauth2_plugin\oauth2_client.py", line 309, in GetAccessToken
    access_token = self.FetchAccessToken()
  File "c:\gsutil\gslib\third_party\oauth2_plugin\oauth2_client.py", line 435, in FetchAccessToken
    credentials.refresh(http)
  File "c:\gsutil\third_party\google-api-python-client\oauth2client\client.py", line 516, in refresh
    self._refresh(http.request)
  File "c:\gsutil\third_party\google-api-python-client\oauth2client\client.py", line 653, in _refresh
    self._do_refresh_request(http_request)
  File "c:\gsutil\third_party\google-api-python-client\oauth2client\client.py", line 710, in _do_refresh_request
    raise AccessTokenRefreshError(error_msg)
oauth2client.client.AccessTokenRefreshError: invalid_grant
``` | 2014/05/07 | [
"https://Stackoverflow.com/questions/23507902",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3610488/"
] | You can ask gsutil to configure itself. Go to the directory with gsutil and run this:
```
c:\gsutil> python gsutil config
```
Gsutil will lead you through the steps of setting up your credentials.
That said, access tokens normally only last about a half hour. It's more likely that the previously-configured refresh token was revoked for some reason. Alternatively, you can only request new tokens at a certain rate. It's possible your account has been requesting many, many refresh tokens for some reason and has been temporarily rate limited by the access service. | Brandon Yarbrough gave me suggestions which solved this problem. He suspected that the .boto file was corrupted and suggested I delete it and run gsutil config again. I did this and it solved the problem.
23,507,902 | I use gsutil to transfer files from a Windows machine to Google Cloud Storage.
I have not used it for more than 6 months and now when I try it I get:
Failure: invalid\_grant
From researching this I suspect the access token is no longer valid as it has not been used for 6 months, and I need a refresh token?
I cannot seem to find how to get and use this.
thanks
Running `gsutil -DD config` produces the following output:
```
C:\Python27>python c:/gsutil/gsutil -DD config
DEBUG:boto:path=/pub/gsutil.tar.gz
DEBUG:boto:auth_path=/pub/gsutil.tar.gz
DEBUG:boto:Method: HEAD
DEBUG:boto:Path: /pub/gsutil.tar.gz
DEBUG:boto:Data:
DEBUG:boto:Headers: {}
DEBUG:boto:Host: storage.googleapis.com
DEBUG:boto:Params: {}
DEBUG:boto:establishing HTTPS connection: host=storage.googleapis.com, kwargs={'timeout': 70}
DEBUG:boto:Token: None
DEBUG:oauth2_client:GetAccessToken: checking cache for key *******************************
DEBUG:oauth2_client:FileSystemTokenCache.GetToken: key=******************************* not present (cache_file= c:\users\admini~1\appdata\local\temp\2\oauth2_client-tokencache._.ea******************************)
DEBUG:oauth2_client:GetAccessToken: token from cache: None
DEBUG:oauth2_client:GetAccessToken: fetching fresh access token...
INFO:oauth2client.client:Refreshing access_token
connect: (accounts.google.com, 443)
send: 'POST /o/oauth2/token HTTP/1.1\r\nHost: accounts.google.com\r\nContent-Length: 177\r\ncontent-type: application/x-www-form-urlencoded\r\naccept-encoding: gzip, deflate\r\nuser-agent: Python-httplib2/0.7.7 (gzip)\r\n\r\nclient_secret=******************&grant_type=refresh_token&refresh_token=****************************************&client_id=****************.apps.googleusercontent.com'
reply: 'HTTP/1.1 400 Bad Request\r\n'
header: Content-Type: application/json; charset=utf-8
header: Cache-Control: no-cache, no-store, max-age=0, must-revalidate
header: Pragma: no-cache
header: Expires: Fri, 01 Jan 1990 00:00:00 GMT
header: Date: Thu, 08 May 2014 02:02:21 GMT
header: Content-Disposition: attachment; filename="json.txt"; filename*=UTF-8''json.txt
header: Content-Encoding: gzip
header: X-Content-Type-Options: nosniff
header: X-Frame-Options: SAMEORIGIN
header: X-XSS-Protection: 1; mode=block
header: Server: GSE
header: Alternate-Protocol: 443:quic
header: Transfer-Encoding: chunked
INFO:oauth2client.client:Failed to retrieve access token: { "error" : "invalid_grant" }
Traceback (most recent call last):
File "c:/gsutil/gsutil", line 83, in <module> gslib.__main__.main() File "c:\gsutil\gslib_main_.py", line 151, in main command_runner.RunNamedCommand('ver', ['-l'])
File "c:\gsutil\gslib\command_runner.py", line 95, in RunNamedCommand self._MaybeCheckForAndOfferSoftwareUpdate(command_name, debug)):
File "c:\gsutil\gslib\command_runner.py", line 181, in _MaybeCheckForAndOfferSoftwareUpdate cur_ver = LookUpGsutilVersion(suri_builder.StorageUri(GSUTIL_PUB_TARBALL))
File "c:\gsutil\gslib\util.py", line 299, in LookUpGsutilVersion obj = uri.get_key(False)
File "c:\gsutil\third_party\boto\boto\storage_uri.py", line 342, in get_key generation=self.generation)
File "c:\gsutil\third_party\boto\boto\gs\bucket.py", line 102, in get_key query_args_l=query_args_l)
File "c:\gsutil\third_party\boto\boto\s3\bucket.py", line 176, in _get_key_internal query_args=query_args)
File "c:\gsutil\third_party\boto\boto\s3\connection.py", line 547, in make_request retry_handler=retry_handler
File "c:\gsutil\third_party\boto\boto\connection.py", line 947, in make_request retry_handler=retry_handler)
File "c:\gsutil\third_party\boto\boto\connection.py", line 838, in _mexe request.authorize(connection=self)
File "c:\gsutil\third_party\boto\boto\connection.py", line 377, in authorize connection._auth_handler.add_auth(self, *********)
File "c:\gsutil\gslib\third_party\oauth2_plugin\oauth2_plugin.py", line 22, in add_auth self.oauth2_client.GetAuthorizationHeader()
File "c:\gsutil\gslib\third_party\oauth2_plugin\oauth2_client.py", line 338, in GetAuthorizationHeader return 'Bearer %s' % self.GetAccessToken().token
File "c:\gsutil\gslib\third_party\oauth2_plugin\oauth2_client.py", line 309, in GetAccessToken access_token = self.FetchAccessToken()
File "c:\gsutil\gslib\third_party\oauth2_plugin\oauth2_client.py", line 435, in FetchAccessToken credentials.refresh(http)
File "c:\gsutil\third_party\google-api-python-client\oauth2client\client.py", line 516, in refresh self._refresh(http.request)
File "c:\gsutil\third_party\google-api-python-client\oauth2client\client.py", line 653, in _refresh self._do_refresh_request(http_request)
File "c:\gsutil\third_party\google-api-python-client\oauth2client\client.py", line 710, in _do_refresh_request raise AccessTokenRefreshError(error_msg) oauth2client.client.AccessTokenRefreshError: invalid_grant
``` | 2014/05/07 | [
"https://Stackoverflow.com/questions/23507902",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3610488/"
] | The command to authenticate is now
```
$ gcloud auth login
```
That should refresh your grant and get you going again.
You may also want to run
```
$ gcloud components update
```
to update your installation. | Brandon Yarbrough gave me suggestions which solved this problem. He suspected that the .boto file was corrupted and suggested I delete it and run gsutil config again. I did this and it solved the problem. |
32,775,258 | Trying to write `to_csv` with the following code:
```
file_name = time.strftime("Box_Office_Data_%Y/%m/%d_%H:%M.csv")
allFilms.to_csv(file_name)
```
But am getting the following error:
```
FileNotFoundError Traceback (most recent call last)
<ipython-input-36-aa2d6e13e9af> in <module>()
9
10 file_name = time.strftime("Box_Office_Data_%Y/%m/%d_%H:%M.csv")
---> 11 allFilms.to_csv(file_name)
/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/pandas/core/frame.py in to_csv(self, path_or_buf, sep, na_rep, float_format, columns, header, index, index_label, mode, encoding, quoting, quotechar, line_terminator, chunksize, tupleize_cols, date_format, doublequote, escapechar, decimal, **kwds)
1187 escapechar=escapechar,
1188 decimal=decimal)
-> 1189 formatter.save()
1190
1191 if path_or_buf is None:
/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/pandas/core/format.py in save(self)
1440 else:
1441 f = com._get_handle(self.path_or_buf, self.mode,
-> 1442 encoding=self.encoding)
1443 close = True
1444
/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/pandas/core/common.py in _get_handle(path, mode, encoding, compression)
2827 f = open(path, mode, encoding=encoding)
2828 else:
-> 2829 f = open(path, mode, errors='replace')
2830 else:
2831 f = open(path, mode)
FileNotFoundError: [Errno 2] No such file or directory: 'Box_Office_Data_2015/09/24_22:11.csv'
```
Since I'm writing to a csv, why would it be searching for a file/directory that is not yet created?
Anyone's help would be greatly appreciated :) | 2015/09/25 | [
"https://Stackoverflow.com/questions/32775258",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5314975/"
] | The error is clear -
```
FileNotFoundError: [Errno 2] No such file or directory: 'Box_Office_Data_2015/09/24_22:11.csv'
```
If you get this error when trying to do `.to_csv()`, it means that the directory in which you are trying to save the file does not exist. So in your case, the directory `Box_Office_Data_2015/09/` does not exist. It seems like you actually meant `Box_Office_Data_2015/09/24_22:11.csv` to be a single filename (with no directory), but that is not possible: everything before a `/` is interpreted as a directory.
A simple solution would be to use something other than `/` in between the year/month/day. Example:
```
file_name = time.strftime("Box_Office_Data_%Y_%m_%d_%H:%M.csv")
allFilms.to_csv(file_name)
``` | In your code `file_name = time.strftime("Box_Office_Data_%Y/%m/%d_%H:%M.csv")`.
The file name was `Box_Office_Data_2015/09/24_22:11.csv`, which is interpreted as a path containing directories that do not exist.
Try replacing the `/` with something like `_`.
Try this:
`file_name = time.strftime("Box_Office_Data_%Y_%m_%d_%H:%M.csv")` |
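As an addendum to both answers: if the date-based folder layout is actually what you want, one option (just a sketch using `os.makedirs`; the small frame below is a hypothetical stand-in for `allFilms`) is to create the missing directories first, and keep `/` and `:` out of the file name itself:

```python
import os
import time

import pandas as pd

# Hypothetical stand-in for the allFilms frame in the question
allFilms = pd.DataFrame({"title": ["A", "B"], "gross": [100, 200]})

# Create the year/month directories first, then build a safe file name
dir_name = time.strftime("Box_Office_Data_%Y/%m")
os.makedirs(dir_name, exist_ok=True)  # no error if it already exists
file_name = os.path.join(dir_name, time.strftime("%d_%H-%M.csv"))

allFilms.to_csv(file_name, index=False)
print(os.path.exists(file_name))  # True
```

Note that `%H-%M` is used instead of `%H:%M`, since `:` is not a valid file-name character on some filesystems.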
56,507,997 | I did `!pip install tree` in a Google Colab notebook. It showed that `Pillow in /usr/local/lib/python3.6/dist-packages (from tree) (4.3.0)`. But when I use `!tree`, the notebook responds with `bin/bash: tree: command not found`. How can I solve this?
I tried several times but all failed.
It showed:
```
Collecting tree
Downloading https://files.pythonhosted.org/packages/29/3f/63cbed2909786f0e5ac30a4ae5791ad597c6b5fec7167e161c55bba511ce/Tree-0.2.4.tar.gz
Requirement already satisfied: Pillow in /usr/local/lib/python3.6/dist-packages (from tree) (4.3.0)
Collecting svgwrite (from tree)
Downloading https://files.pythonhosted.org/packages/87/ce/3259f75aebb12d8c7dd9e8c479ad4968db5ed18e03f24ee4f6be9d9aed23/svgwrite-1.2.1-py2.py3-none-any.whl (66kB)
|████████████████████████████████| 71kB 23.9MB/s
Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from tree) (41.0.1)
Requirement already satisfied: click in /usr/local/lib/python3.6/dist-packages (from tree) (7.0)
Requirement already satisfied: olefile in /usr/local/lib/python3.6/dist-packages (from Pillow->tree) (0.46)
Requirement already satisfied: pyparsing>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from svgwrite->tree) (2.4.0)
Building wheels for collected packages: tree
Building wheel for tree (setup.py) ... done
Stored in directory: /root/.cache/pip/wheels/c7/08/aa/42261411808c634cd1d0e9fe6cde5e78bf47c2c8028f3930af
Successfully built tree
Installing collected packages: svgwrite, tree
Successfully installed svgwrite-1.2.1 tree-0.2.4
!pip install tree
!tree
```
I expected it to show the structure of the files in the directory. | 2019/06/08 | [
"https://Stackoverflow.com/questions/56507997",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11618792/"
] | You seem to have confused pip with the local package manager?
`!apt-get install tree` does what you want:
```
.
└── sample_data
├── anscombe.json
├── california_housing_test.csv
├── california_housing_train.csv
├── mnist_test.csv
├── mnist_train_small.csv
└── README.md
1 directory, 6 files
``` | I think you have installed the wrong `tree` with pip: <https://pypi.org/project/Tree/> is a Python package, not the command-line tool.
The right way to install the command-line tool on a Mac is `brew install tree`; on Debian / Ubuntu / Mint it is:
```
sudo apt-get install tree
```
(or equivalently `sudo apt install tree`). |
56,507,997 | I did `!pip install tree` in a Google Colab notebook. It showed that `Pillow in /usr/local/lib/python3.6/dist-packages (from tree) (4.3.0)`. But when I use `!tree`, the notebook responds with `bin/bash: tree: command not found`. How can I solve this?
I tried several times but all failed.
It showed:
```
Collecting tree
Downloading https://files.pythonhosted.org/packages/29/3f/63cbed2909786f0e5ac30a4ae5791ad597c6b5fec7167e161c55bba511ce/Tree-0.2.4.tar.gz
Requirement already satisfied: Pillow in /usr/local/lib/python3.6/dist-packages (from tree) (4.3.0)
Collecting svgwrite (from tree)
Downloading https://files.pythonhosted.org/packages/87/ce/3259f75aebb12d8c7dd9e8c479ad4968db5ed18e03f24ee4f6be9d9aed23/svgwrite-1.2.1-py2.py3-none-any.whl (66kB)
|████████████████████████████████| 71kB 23.9MB/s
Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from tree) (41.0.1)
Requirement already satisfied: click in /usr/local/lib/python3.6/dist-packages (from tree) (7.0)
Requirement already satisfied: olefile in /usr/local/lib/python3.6/dist-packages (from Pillow->tree) (0.46)
Requirement already satisfied: pyparsing>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from svgwrite->tree) (2.4.0)
Building wheels for collected packages: tree
Building wheel for tree (setup.py) ... done
Stored in directory: /root/.cache/pip/wheels/c7/08/aa/42261411808c634cd1d0e9fe6cde5e78bf47c2c8028f3930af
Successfully built tree
Installing collected packages: svgwrite, tree
Successfully installed svgwrite-1.2.1 tree-0.2.4
!pip install tree
!tree
```
I expected it to show the structure of the files in the directory. | 2019/06/08 | [
"https://Stackoverflow.com/questions/56507997",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11618792/"
] | You seem to have confused pip with the local package manager?
`!apt-get install tree` does what you want:
```
.
└── sample_data
├── anscombe.json
├── california_housing_test.csv
├── california_housing_train.csv
├── mnist_test.csv
├── mnist_train_small.csv
└── README.md
1 directory, 6 files
``` | That also didn't work for me, but here is alternative code to list the directory tree:
```
import os
for path, dirs, files in os.walk('/content/sample_data'):
print (path)
for f in files:
print (f)
```
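If you want output closer to `tree`'s indented layout, a small extension of the same `os.walk` idea (just a sketch; the `demo` directory below is created only so there is something to print) is:

```python
import os

def print_tree(root):
    """Return a tree-like listing of root, indented by directory depth."""
    lines = []
    for path, dirs, files in os.walk(root):
        rel = os.path.relpath(path, root)
        depth = 0 if rel == "." else rel.count(os.sep) + 1
        indent = "    " * depth
        lines.append(f"{indent}{os.path.basename(path)}/")
        for f in sorted(files):
            lines.append(f"{indent}    {f}")
    return "\n".join(lines)

# Build a tiny sample directory so there is something to print
os.makedirs("demo/sub", exist_ok=True)
open("demo/a.txt", "w").close()
open("demo/sub/b.txt", "w").close()
print(print_tree("demo"))
```

This prints `demo/`, then `a.txt`, `sub/`, and `b.txt`, each level indented one step further.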
[](https://i.stack.imgur.com/dq1eB.png) |
56,507,997 | I did `!pip install tree` in a Google Colab notebook. It showed that `Pillow in /usr/local/lib/python3.6/dist-packages (from tree) (4.3.0)`. But when I use `!tree`, the notebook responds with `bin/bash: tree: command not found`. How can I solve this?
I tried several times but all failed.
It showed:
```
Collecting tree
Downloading https://files.pythonhosted.org/packages/29/3f/63cbed2909786f0e5ac30a4ae5791ad597c6b5fec7167e161c55bba511ce/Tree-0.2.4.tar.gz
Requirement already satisfied: Pillow in /usr/local/lib/python3.6/dist-packages (from tree) (4.3.0)
Collecting svgwrite (from tree)
Downloading https://files.pythonhosted.org/packages/87/ce/3259f75aebb12d8c7dd9e8c479ad4968db5ed18e03f24ee4f6be9d9aed23/svgwrite-1.2.1-py2.py3-none-any.whl (66kB)
|████████████████████████████████| 71kB 23.9MB/s
Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from tree) (41.0.1)
Requirement already satisfied: click in /usr/local/lib/python3.6/dist-packages (from tree) (7.0)
Requirement already satisfied: olefile in /usr/local/lib/python3.6/dist-packages (from Pillow->tree) (0.46)
Requirement already satisfied: pyparsing>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from svgwrite->tree) (2.4.0)
Building wheels for collected packages: tree
Building wheel for tree (setup.py) ... done
Stored in directory: /root/.cache/pip/wheels/c7/08/aa/42261411808c634cd1d0e9fe6cde5e78bf47c2c8028f3930af
Successfully built tree
Installing collected packages: svgwrite, tree
Successfully installed svgwrite-1.2.1 tree-0.2.4
!pip install tree
!tree
```
I expected it to show the structure of the files in the directory. | 2019/06/08 | [
"https://Stackoverflow.com/questions/56507997",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11618792/"
] | I think you have installed the wrong `tree` with pip: <https://pypi.org/project/Tree/> is a Python package, not the command-line tool.
The right way to install the command-line tool on a Mac is `brew install tree`; on Debian / Ubuntu / Mint it is:
```
sudo apt-get install tree
```
(or equivalently `sudo apt install tree`). | That also didn't work for me, but here is alternative code to list the directory tree:
```
import os
for path, dirs, files in os.walk('/content/sample_data'):
print (path)
for f in files:
print (f)
```
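If you want output closer to `tree`'s indented layout, a small extension of the same `os.walk` idea (just a sketch; the `demo` directory below is created only so there is something to print) is:

```python
import os

def print_tree(root):
    """Return a tree-like listing of root, indented by directory depth."""
    lines = []
    for path, dirs, files in os.walk(root):
        rel = os.path.relpath(path, root)
        depth = 0 if rel == "." else rel.count(os.sep) + 1
        indent = "    " * depth
        lines.append(f"{indent}{os.path.basename(path)}/")
        for f in sorted(files):
            lines.append(f"{indent}    {f}")
    return "\n".join(lines)

# Build a tiny sample directory so there is something to print
os.makedirs("demo/sub", exist_ok=True)
open("demo/a.txt", "w").close()
open("demo/sub/b.txt", "w").close()
print(print_tree("demo"))
```

This prints `demo/`, then `a.txt`, `sub/`, and `b.txt`, each level indented one step further.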
[](https://i.stack.imgur.com/dq1eB.png) |
68,160,205 | Before I describe the problem, here is a basic run-down of the overall process to give you a clearer picture. Additionally, I am a novice at PHP:
1. I have a WordPress website that uses CPanel as its web hosting software
2. The WordPress website has a form (made by UFB) that has the user upload an image
3. The image gets directed to the upload folder (/uploads) by using `image_upload.php`
4. The image is then downloaded onto a computer, and a program is run which generates numbers about the picture(the number generator program is in python)
5. After the numbers are generated, it calls on `report.php` and `template.xlsm`
6. Report.php gets those generated numbers and then puts them into their designated places on the xlsm file
7. The xlsm file is then converted into a pdf, which is then emailed to the user that submitted the picture.
I inherited all of this code from someone else who wanted me to help them on this project. Here is my problem:
*I don't understand how the PHP files are being called. I have python code ready to run the number generator online, however, I can't do this without figuring how the PHP files are being called.*
I understand what the PHP files do, I just don't understand how they are being called. I tried doing a `grep` search for both `image_upload.php` and `report.php`, but I come up empty. There aren't any other PHP files that seem to do an `include(xyz.php)`, which is supposed to be how PHP files are called. I don't understand what calls `image_upload.php` to get the pictures moved into the /uploads folder. I also don't understand what calls `report.php` to make it run. I tried looking in `functions.php`, where most of the other PHP files are called, but `report.php` and `image_upload.php` aren't.
Please help me! If any clarification is needed, just comment, and I will try to provide any help I can! | 2021/06/28 | [
"https://Stackoverflow.com/questions/68160205",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16332203/"
] | Set the [Request.URL](https://pkg.go.dev/net/http#Request.URL) to an [opaque URL](https://pkg.go.dev/net/url#URL.Opaque). The opaque URL is written to the request line as is.
```
request := &http.Request{
URL: &url.URL{Opaque: "http://127.0.0.1:10019/system?action=add_servers"},
Body: requestBody, //io.ReadCloser containing the body
Method: http.MethodPost,
ContentLength: int64(len(postBody)),
Header: make(http.Header),
Proto: "HTTP/1.1",
ProtoMajor: 1,
ProtoMinor: 1,
}
```
The [http.NewRequest](https://pkg.go.dev/net/http#NewRequest) and [http.NewRequestWithContext](https://pkg.go.dev/net/http#NewRequestWithContext) functions are the preferred way to create a request value. Set Request.URL to the opaque URL after creating the request with one of these functions:
```
u := "http://127.0.0.1:10019/system?action=add_servers"
request, err := http.NewRequest("POST", u, requestBody)
if err != nil {
// handle error
}
request.URL = &url.URL{Opaque: u}
res, err := http.DefaultClient.Do(request)
``` | What is the value of the URL variable?
I think you can define the URL variable using a specific host:
```
var url = "http://127.0.0.1:10019/system?action=add_servers"
```
In case your path comes dynamically from another variable, you can use `fmt.Sprintf`, like below:
```
// assume url value
var path = "/system?action=add_servers"
url = fmt.Sprintf("http://127.0.0.1:10019/%s", path)
``` |
45,948,854 | I have this situation :
* *File1* named **source.txt**
* *File2* named **destination.txt**
**source.txt** contains these strings:
```
MSISDN=213471001120
MSISDN=213471001121
MSISDN=213471001122
```
I want **destination.txt** to contain these cases:
MSISDN=213471001120 **only** for the first execution of the python code
MSISDN=213471001121 **only** for the second execution of the python code
MSISDN=213471001122 **only** for the third execution of the python code
I have this code:
```
F1 = open("source.txt", "r")
txt = F1.read(19)
#print txt
F2 = open("destination.txt", "w")
F2.write(txt)
F3=open("source.txt", "w")
for ligne in F1:
if ligne==txt:
F3.write("")
break
F1.close()
F2.close()
F3.close()
```
The **source.txt** file is empty after the first execution of the code.
Thanks in advance. | 2017/08/29 | [
"https://Stackoverflow.com/questions/45948854",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6056999/"
] | You have to read the whole file, before writing again, because mode `w` empties the file:
```
with open('source.txt') as lines:
lines = list(lines)
with open('destination.txt', 'w') as first:
first.write(lines[0])
with open('source.txt', 'w') as other:
other.writelines(lines[1:])
``` | You're gonna need an external file to store the state of "how many times have I run before"
```
with open('source.txt', 'r') as source, open('counter.txt', 'r') as counter, open('destination.txt', 'w') as destination:
num_to_read = int(counter.readline().strip())
for _ in range(num_to_read):
line_to_write = source.readline()
destination.write(line_to_write)
with open('counter.txt', 'w') as counter:
counter.write(num_to_read + 1)
```
I've changed your calls to `open` to use context managers so you don't need to call `close` at the end.
I haven't run this code, so there might be some bugs. In particular, the case of `counter.txt` not existing isn't handled. I'll leave that up to you. |
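To handle the missing `counter.txt` case this answer leaves open, one sketch (assuming the counter should simply start at 1 when the file is absent or unreadable) is:

```python
import os

COUNTER_FILE = "counter.txt"

# Start from a clean state for this demonstration
if os.path.exists(COUNTER_FILE):
    os.remove(COUNTER_FILE)

def read_counter(path=COUNTER_FILE):
    """Return the stored counter, defaulting to 1 if the file is missing or bad."""
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (FileNotFoundError, ValueError):
        return 1

def write_counter(value, path=COUNTER_FILE):
    with open(path, "w") as f:
        f.write(str(value))

n = read_counter()      # 1 on the very first run
write_counter(n + 1)
print(read_counter())   # 2 on the next run
```

The `try`/`except` means the very first run behaves the same as a run with a corrupted counter file: both start over at line one.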
45,948,854 | I have this situation :
* *File1* named **source.txt**
* *File2* named **destination.txt**
**source.txt** contains these strings:
```
MSISDN=213471001120
MSISDN=213471001121
MSISDN=213471001122
```
I want **destination.txt** to contain these cases:
MSISDN=213471001120 **only** for the first execution of the python code
MSISDN=213471001121 **only** for the second execution of the python code
MSISDN=213471001122 **only** for the third execution of the python code
I have this code:
```
F1 = open("source.txt", "r")
txt = F1.read(19)
#print txt
F2 = open("destination.txt", "w")
F2.write(txt)
F3=open("source.txt", "w")
for ligne in F1:
if ligne==txt:
F3.write("")
break
F1.close()
F2.close()
F3.close()
```
The **source.txt** file is empty after the first execution of the code.
Thanks in advance. | 2017/08/29 | [
"https://Stackoverflow.com/questions/45948854",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6056999/"
] | You need to compare the current content of `destination.txt` before deciding what to write next.
This code worked for me:
```
#!/usr/bin/env python
file_src = open('source.txt', 'r')
data_src = file_src.readlines()
file_des = open('destination.txt', 'r+') # 'r+' opens file for RW
data_des = file_des.read()
if data_des == '':
new_value = data_src[0]
elif data_des == data_src[0]:
new_value = data_src[1]
elif data_des == data_src[1]:
new_value = data_src[2]
else:
new_value = None
if new_value:
file_des.seek(0) # rewind destination.txt
file_des.write(new_value)
``` | You're gonna need an external file to store the state of "how many times have I run before"
```
with open('source.txt', 'r') as source, open('counter.txt', 'r') as counter, open('destination.txt', 'w') as destination:
num_to_read = int(counter.readline().strip())
for _ in range(num_to_read):
line_to_write = source.readline()
destination.write(line_to_write)
with open('counter.txt', 'w') as counter:
counter.write(num_to_read + 1)
```
I've changed your calls to `open` to use context managers so you don't need to call `close` at the end.
I haven't run this code, so there might be some bugs. In particular, the case of `counter.txt` not existing isn't handled. I'll leave that up to you. |
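The chain of `elif`s in the first answer above only covers three source lines; a more general sketch of the same idea (look up the current destination content in the source list and return the following line) could be:

```python
def next_line(source_lines, current):
    """Return the line after `current` in source_lines; the first line if
    `current` is empty or unknown; None once the last line has been used."""
    if current not in source_lines:
        return source_lines[0] if source_lines else None
    i = source_lines.index(current)
    return source_lines[i + 1] if i + 1 < len(source_lines) else None

src = ["MSISDN=213471001120", "MSISDN=213471001121", "MSISDN=213471001122"]
print(next_line(src, ""))                     # first run: first line
print(next_line(src, "MSISDN=213471001121"))  # after the second run
print(next_line(src, "MSISDN=213471001122"))  # past the end: None
```

The caller would read `destination.txt`, pass its content as `current`, and overwrite the file with the returned line (or stop on `None`).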
61,877,065 | I am trying to implement Okapi BM25 in Python. While I have seen some tutorials on how to do it, it seems I am stuck in the process.
So I have a collection of documents (with columns 'id' and 'text') and queries (with columns 'id' and 'text'). I have done the pre-processing steps and I have my documents and queries as a list:
```
documents = list(train_docs['text']) #put the documents text to list
queries = list(train_queries_all['text']) #put the queries text to list
```
Then for BM25 I do this:
```
pip install rank_bm25
```
#calculate BM25
```
from rank_bm25 import BM25Okapi
bm25 = BM25Okapi(documents)
```
#compute the score
`bm_score = BM25Okapi.get_scores(documents, query=queries)`
But it wouldn't work.
---
Then I tried to do this:
```
import math
import numpy as np
from multiprocessing import Pool, cpu_count
```
`nd = len(documents) # corpus_size = 3612` (I am not sure if this is necessary)
```
class BM25:
def __init__(self, documents, tokenizer=None):
self.corpus_size = len(documents)
self.avgdl = 0
self.doc_freqs = []
self.idf = {}
self.doc_len = []
self.tokenizer = tokenizer
if tokenizer:
documents = self._tokenize_corpus(documents)
nd = self._initialize(documents)
self._calc_idf(nd)
def _initialize(self, documents):
nd = {} # word -> number of documents with word
num_doc = 0
for document in documents:
self.doc_len.append(len(document))
num_doc += len(document)
frequencies = {}
for word in document:
if word not in frequencies:
frequencies[word] = 0
frequencies[word] += 1
self.doc_freqs.append(frequencies)
for word, freq in frequencies.items():
if word not in nd:
nd[word] = 0
nd[word] += 1
self.avgdl = num_doc / self.corpus_size
return nd
def _tokenize_corpus(self, documents):
pool = Pool(cpu_count())
tokenized_corpus = pool.map(self.tokenizer, documents)
return tokenized_corpus
def _calc_idf(self, nd):
raise NotImplementedError()
def get_scores(self, queries):
raise NotImplementedError()
def get_top_n(self, queries, documents, n=5):
assert self.corpus_size == len(documents), "The documents given don't match the index corpus!"
scores = self.get_scores(queries)
top_n = np.argsort(scores)[::-1][:n]
return [documents[i] for i in top_n]
class BM25T(BM25):
def __init__(self, documents, k1=1.5, b=0.75, delta=1):
# Algorithm specific parameters
self.k1 = k1
self.b = b
self.delta = delta
super().__init__(documents)
def _calc_idf(self, nd):
for word, freq in nd.items():
idf = math.log((self.corpus_size + 1) / freq)
self.idf[word] = idf
def get_scores(self, queries):
score = np.zeros(self.corpus_size)
doc_len = np.array(self.doc_len)
for q in queries:
q_freq = np.array([(doc.get(q) or 0) for doc in self.doc_freqs])
score += (self.idf.get(q) or 0) * (self.delta + (q_freq * (self.k1 + 1)) /
(self.k1 * (1 - self.b + self.b * doc_len / self.avgdl) + q_freq))
return score
```
and then I try to get the scores:
```
score = BM25.get_scores(self=documents, queries)
```
But I get this message:
score = BM25.get_scores(self=documents, queries)
SyntaxError: positional argument follows keyword argument
---
Does anyone have an idea why this error occurs? Thank you in advance. | 2020/05/18 | [
"https://Stackoverflow.com/questions/61877065",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13498967/"
] | `**kwargs` expects arguments to be passed by keyword, not by position. Once you do that, you can access the individual kwargs like you would in any other dictionary:
```
class Student:
def __init__(self, **kwargs):
self.name = kwargs.get('name')
self.age = kwargs.get('age')
self.salary = kwargs.get('salary')
def show_name(self):
print("Name is : " + self.name)
def show_age(self):
print("Age is : " + str(self.age))
def show_salary(self):
print(f"Salary of {self.name} is : " + str(self.salary))
st = Student(name='John', age=25, salary=15000)
st2 = Student(name='Doe', age=25, salary=1500000)
st.show_salary()
st2.show_salary()
```
If you want to pass these arguments by position, you should use `*args` instead. | **kwargs** is collected as a dictionary inside the scope of the function. You need to pass keyword arguments; their names become the keys in that dictionary. (Try running the commented-out print statement below.)
```
class Student:
def __init__(self, **kwargs):
#print(kwargs)
self.name = kwargs["name"]
self.age = kwargs["age"]
self.salary = kwargs["salary"]
def show_name(self):
print("Name is : " + self.name)
def show_age(self):
print("Age is : " + str(self.age))
def show_salary(self):
print(f"Salary of {self.name} is : " + str(self.salary))
st = Student(name = 'John',age = 25, salary = 15000)
st2 = Student(name = 'Doe',age = 25,salary = 1500000)
st.show_salary()
st2.show_salary()
``` |
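Following up on the `*args` remark at the end of the first answer, a minimal positional sketch (argument order is then fixed purely by convention) could look like:

```python
class Student:
    def __init__(self, *args):
        # Positional convention assumed here: name, age, salary
        self.name, self.age, self.salary = args

    def show_salary(self):
        print(f"Salary of {self.name} is : {self.salary}")

st = Student("John", 25, 15000)
st.show_salary()  # Salary of John is : 15000
```

Unlike the `**kwargs` version, callers must remember the order of the three values, since there are no keyword names to anchor them.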
61,877,065 | I am trying to implement Okapi BM25 in Python. While I have seen some tutorials on how to do it, it seems I am stuck in the process.
So I have a collection of documents (with columns 'id' and 'text') and queries (with columns 'id' and 'text'). I have done the pre-processing steps and I have my documents and queries as a list:
```
documents = list(train_docs['text']) #put the documents text to list
queries = list(train_queries_all['text']) #put the queries text to list
```
Then for BM25 I do this:
```
pip install rank_bm25
```
#calculate BM25
```
from rank_bm25 import BM25Okapi
bm25 = BM25Okapi(documents)
```
#compute the score
`bm_score = BM25Okapi.get_scores(documents, query=queries)`
But it wouldn't work.
---
Then I tried to do this:
```
import math
import numpy as np
from multiprocessing import Pool, cpu_count
```
`nd = len(documents) # corpus_size = 3612` (I am not sure if this is necessary)
```
class BM25:
def __init__(self, documents, tokenizer=None):
self.corpus_size = len(documents)
self.avgdl = 0
self.doc_freqs = []
self.idf = {}
self.doc_len = []
self.tokenizer = tokenizer
if tokenizer:
documents = self._tokenize_corpus(documents)
nd = self._initialize(documents)
self._calc_idf(nd)
def _initialize(self, documents):
nd = {} # word -> number of documents with word
num_doc = 0
for document in documents:
self.doc_len.append(len(document))
num_doc += len(document)
frequencies = {}
for word in document:
if word not in frequencies:
frequencies[word] = 0
frequencies[word] += 1
self.doc_freqs.append(frequencies)
for word, freq in frequencies.items():
if word not in nd:
nd[word] = 0
nd[word] += 1
self.avgdl = num_doc / self.corpus_size
return nd
def _tokenize_corpus(self, documents):
pool = Pool(cpu_count())
tokenized_corpus = pool.map(self.tokenizer, documents)
return tokenized_corpus
def _calc_idf(self, nd):
raise NotImplementedError()
def get_scores(self, queries):
raise NotImplementedError()
def get_top_n(self, queries, documents, n=5):
assert self.corpus_size == len(documents), "The documents given don't match the index corpus!"
scores = self.get_scores(queries)
top_n = np.argsort(scores)[::-1][:n]
return [documents[i] for i in top_n]
class BM25T(BM25):
def __init__(self, documents, k1=1.5, b=0.75, delta=1):
# Algorithm specific parameters
self.k1 = k1
self.b = b
self.delta = delta
super().__init__(documents)
def _calc_idf(self, nd):
for word, freq in nd.items():
idf = math.log((self.corpus_size + 1) / freq)
self.idf[word] = idf
def get_scores(self, queries):
score = np.zeros(self.corpus_size)
doc_len = np.array(self.doc_len)
for q in queries:
q_freq = np.array([(doc.get(q) or 0) for doc in self.doc_freqs])
score += (self.idf.get(q) or 0) * (self.delta + (q_freq * (self.k1 + 1)) /
(self.k1 * (1 - self.b + self.b * doc_len / self.avgdl) + q_freq))
return score
```
and then I try to get the scores:
```
score = BM25.get_scores(self=documents, queries)
```
But I get this message:
score = BM25.get_scores(self=documents, queries)
SyntaxError: positional argument follows keyword argument
---
Does anyone have an idea why this error occurs? Thank you in advance. | 2020/05/18 | [
"https://Stackoverflow.com/questions/61877065",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13498967/"
] | `**kwargs` expects arguments to be passed by keyword, not by position. Once you do that, you can access the individual kwargs like you would in any other dictionary:
```
class Student:
def __init__(self, **kwargs):
self.name = kwargs.get('name')
self.age = kwargs.get('age')
self.salary = kwargs.get('salary')
def show_name(self):
print("Name is : " + self.name)
def show_age(self):
print("Age is : " + str(self.age))
def show_salary(self):
print(f"Salary of {self.name} is : " + str(self.salary))
st = Student(name='John', age=25, salary=15000)
st2 = Student(name='Doe', age=25, salary=1500000)
st.show_salary()
st2.show_salary()
```
If you want to pass these arguments by position, you should use `*args` instead. | Though you can do this as some of the answers here have shown, this is not really a great idea (at least not for the code you are showing here). So I am not going to answer the subject line question you have asked, but show you what the code you seem to be trying to write should be doing (and that is not using `kwargs`). There are plenty of places where using kwargs is the best solution to a coding problem, but the constructor of a class is *usually* not one of those. This is attempting to be teaching, not preaching. I just do not want others coming along later, seeing this question and thinking this is a good idea for a constructor.
The constructor for your class, the `__init__()`, generally should be defining the parameters that it needs and expects to set up the class. It is unlikely that you really want it to take an arbitrary dictionary to use as its parameter list. It would be relatively rare that this is actually what you want in your constructor, especially when there is no inheritance involved that might suggest you do not know what the parameters are for some reason.
In your `__init__()` itself you clearly want the parameters `name`, `age` and `salary`, yet without them in the parameter list it is not clear to the caller that you do. Also, your usage of it does not seem to imply that is how you expect to use it. You call it like this:
```
st = Student('John',25,15000)
```
and so you do not even seem to want named parameters.
To handle the call structure you have shown the `__init__()` would look like this:
```
def __init__(self, name, age, salary):
self.name = name
self.age = age
self.salary = salary
```
If you want to be able to call it without some parameters such that it uses defaults for the ones left out, then it should be like this:
```
def __init__(self, name=None, age=None, salary=None):
self.name = name
self.age = age
self.salary = salary
```
It seems very unlikely that the kwargs approach is really what you want here, though obviously you can code it that way as other answers have shown.
Perhaps you are just trying to figure out how to use kwargs, and that is fine, but a different example would be better if that is the case. |
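One place where `**kwargs` does earn its keep, as this answer hints, is forwarding options you do not care about up an inheritance chain; a small sketch (not tied to the question's class) is:

```python
class Base:
    def __init__(self, name):
        self.name = name

class Child(Base):
    def __init__(self, extra, **kwargs):
        super().__init__(**kwargs)  # forward whatever Base needs, untouched
        self.extra = extra

c = Child(extra=1, name="John")
print(c.name, c.extra)  # John 1
```

Here `Child` does not need to know (or repeat) `Base`'s parameter list, which is the kind of situation where an arbitrary keyword dictionary is genuinely useful.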
59,596,957 | First, let me say: I know I shouldn't be iterating over a dataframe per:
[How to iterate over rows - Don't!](https://stackoverflow.com/questions/16476924/how-to-iterate-over-rows-in-a-dataframe-in-pandas/55557758#55557758)
[How to iterate over rows...](https://stackoverflow.com/questions/16476924/how-to-iterate-over-rows-in-a-dataframe-in-pandas)
etc.
However, for my application I don't think I have a better option, although I am relatively new to python & pandas and may simply lack the knowledge. However, with my iteration, as I am iterating over rows, I need to access an adjacent row's data, which I can't figure out how to do with vectorization or list comprehension.
Which leaves me with iteration. I have seen several posts on iterrows() and itertuples() which will work. Before I found out about these though, I tried:
```
for i in workingDF.index:
if i==0:
list2Add = ['NaN']
compareItem = workingDF.at[0,'name']
else:
if (workingDF.at[i,'name'] != compareItem):
list2Add.append('NaN')
compareItem = workingDF.at[i,'name']
else:
currentValue = workingDF.at[i,'value']
yesterdayValue = workingDF.at[(i-1),'value']
r = currentValue - yesterdayValue
list2Add.append(r)
```
Anyway, my naive code seemed to work fine/as intended (so far).
So the question is: Is there some inherent reason not to use "for i in workingDF.index" in favor of the standard iterrows() and itertuples()? (Presumably there must be, since those are the "recommended" methods...)
Thanks in advance.
Jim
EDIT:
An example was requested. In this example each row contains a name, testNumber, and score. The example code creates a new column labelled "change" which represents the change of the current score compared to the most recent prior score. Example code:
```
import pandas as pd
def createDF():
# list of name, testNo, score
nme2 = ["bob", "bob", "bob", "bob", "jim", "jim", "jim" ,"jim" ,"ed" ,"ed" ,"ed" ,"ed"]
tstNo2 = [1,2,3,4,1,2,3,4,1,2,3,4]
scr2 = [82, 81, 80, 79,93,94,95,98,78,85,90,92]
# dictionary of lists
dict = {'name': nme2, 'TestNo': tstNo2, 'score': scr2}
workingDF = pd.DataFrame(dict)
return workingDF
def addChangeColumn(workingDF):
"""
returns a Dataframe object with an added column named
"change" which represents the change in score compared to
most recent prior test result
"""
for i in workingDF.index:
if i==0:
list2Add = ['NaN']
compareItem = workingDF.at[0,'name']
else:
if (workingDF.at[i,'name'] != compareItem):
list2Add.append('NaN')
compareItem = workingDF.at[i,'name']
else:
currentScore = workingDF.at[i,'score']
yesterdayScore = workingDF.at[(i-1),'score']
r = currentScore - yesterdayScore
list2Add.append(r)
modifiedDF = pd.concat([workingDF, pd.Series(list2Add, name ='change')], axis=1)
return(modifiedDF)
if __name__ == '__main__':
myDF = createDF()
print('myDF is:')
print(myDF)
print()
newDF = addChangeColumn(myDF)
print('newDF is:')
print(newDF)
```
Example Output:
```
myDF is:
name TestNo score
0 bob 1 82
1 bob 2 81
2 bob 3 80
3 bob 4 79
4 jim 1 93
5 jim 2 94
6 jim 3 95
7 jim 4 98
8 ed 1 78
9 ed 2 85
10 ed 3 90
11 ed 4 92
newDF is:
name TestNo score change
0 bob 1 82 NaN
1 bob 2 81 -1
2 bob 3 80 -1
3 bob 4 79 -1
4 jim 1 93 NaN
5 jim 2 94 1
6 jim 3 95 1
7 jim 4 98 3
8 ed 1 78 NaN
9 ed 2 85 7
10 ed 3 90 5
11 ed 4 92 2
```
Thank you. | 2020/01/05 | [
"https://Stackoverflow.com/questions/59596957",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/554517/"
] | In short, the answer is the performance benefit of using `iterrows()`/`itertuples()`. This [post](https://engineering.upside.com/a-beginners-guide-to-optimizing-pandas-code-for-speed-c09ef2c6a4d6) explains the differences between the various options in more detail. | My problem was that I wanted to create a new column which was the difference of a value in the current row and a value in a prior row, without using iteration.
I think the more "panda-esque" way of doing this (without iteration) would be to use `DataFrame.shift()` to create a new column which contains the prior row's data shifted into the current row, so all necessary data is available in the current row. |
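The shift-based approach described above can be sketched like this (column names follow the question's example; grouping by `name` is an assumption so that scores are never differenced across two different people):

```python
import pandas as pd

df = pd.DataFrame({
    "name": ["bob", "bob", "bob", "jim", "jim"],
    "score": [82, 81, 80, 93, 94],
})

# shift(1) within each name group moves the prior row's score into the
# current row, so the subtraction lines up "current" with "most recent prior"
df["change"] = df["score"] - df.groupby("name")["score"].shift(1)
print(df)
```

The first row of each group gets `NaN` automatically, matching the `'NaN'` sentinel the looped version appends, and there is no explicit iteration at all.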
36,190,757 | I am trying to use the One Million Song Dataset; for this I had to install python tables, numpy, cython, hdf5, numexpr, and so on.
Yesterday I managed to install all I needed, and after having some troubles with hdf5, I downloaded the precompiled binary packages and saved them in my /bin folder, and the respective libraries in /lib. After that I tested this python script: `http://labrosa.ee.columbia.edu/millionsong/sites/default/files/tutorial1.py.txt`
and it worked fine. To be clear, the way I made it work was to first run the script and install the needed dependencies as they came up, but today I restarted my laptop and it didn't work; now it throws this error on the console:
```
python2.7 script.py
```
returns :
```
import numpy as np # get it at: http://numpy.scipy.org/
from . import random
from .mtrand import *
ImportError: /home/francisco/.local/lib/python2.7/site-packages/numpy/random/mtrand.so: undefined symbol: PyFPE_jbuf
```
It seems to me that there is a missing symbol in that file. My guess is that the script is looking for the numpy library in the wrong place; since I made so many failed installations, maybe I broke something, and it only worked before because it was still loaded in the computer's temporary memory.
I tried installing Anaconda: I created a new environment and installed the packages with the Anaconda package manager, and even though listing all the packages returns:
```
# packages in environment at /home/francisco/anaconda2/envs/Music:
#
biopython 1.66 np110py27_0
cython 0.23.4 <pip>
hdf5 1.8.15.1 2
mkl 11.3.1 0
numexpr 2.5 np110py27_0
numpy 1.10.4 py27_1
openssl 1.0.2g 0
pillow 3.1.1 <pip>
pip 8.1.1 py27_0
pytables 3.2.2 np110py27_1
python 2.7.11 0
python-ldap 2.4.25 <pip>
readline 6.2 2
reportlab 3.3.0 <pip>
requirements 0.1 <pip>
setuptools 20.3 py27_0
sqlite 3.9.2 0
tables 3.2.2 <pip>
tk 8.5.18 0
wheel 0.29.0 py27_0
zlib 1.2.8 0
```
I still get the same error. I really need help and don't know what else to try. Thanks. | 2016/03/23 | [
"https://Stackoverflow.com/questions/36190757",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2869143/"
] | I had the same problem. You have probably installed numpy both with and without Anaconda, so there is a conflict over which numpy to use: the one installed with pip or the one installed with conda. When I removed the non-Anaconda numpy, the error was gone.
```
pip uninstall numpy
``` | First remove `numpy` from `/usr/local/lib/python2.7/dist-packages/numpy-1.11.0-py2.7-linux-x86_64.egg`
and then use the following command
`sudo pip install numpy scipy`
This solved the error in my case. |
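Both answers above boil down to making sure only one copy of numpy is importable. A quick way to see which copy the interpreter would actually pick up is to resolve the module path before importing it (a pure-stdlib sketch; `json` stands in for `numpy` here so the snippet runs anywhere):

```python
import importlib.util

# find_spec resolves a module the same way `import` would,
# without actually importing it
spec = importlib.util.find_spec("json")  # swap in "numpy" when debugging
print(spec.origin)  # path of the file that would actually be loaded
```

If the printed path points at a stale location such as `~/.local/lib/python2.7/site-packages` while you expected the Anaconda environment, that stale copy is shadowing the one you installed.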
36,190,757 | I am trying to use the One Million Song Dataset; for this I had to install python tables, numpy, cython, hdf5, numexpr, and so on.
Yesterday I managed to install all I needed, and after having some troubles with hdf5, I downloaded the precompiled binary packages and saved them in my /bin folder, and the respective libraries in /lib. After that I tested this python script: `http://labrosa.ee.columbia.edu/millionsong/sites/default/files/tutorial1.py.txt`
and it worked fine. To be clear, the way I made it work was to first run the script and install the needed dependencies as they came up, but today I restarted my laptop and it didn't work; now it throws this error on the console:
```
python2.7 script.py
```
returns :
```
import numpy as np # get it at: http://numpy.scipy.org/
from . import random
from .mtrand import *
ImportError: /home/francisco/.local/lib/python2.7/site-packages/numpy/random/mtrand.so: undefined symbol: PyFPE_jbuf
```
It seems to me that there is a missing symbol in that file. My guess is that the script is looking for the numpy library in the wrong place; since I made so many failed installations, maybe I broke something, and it only worked before because it was still loaded in the computer's temporary memory.
I tried installing Anaconda: I created a new environment and installed the packages with the Anaconda package manager, and even though listing all the packages returns:
```
# packages in environment at /home/francisco/anaconda2/envs/Music:
#
biopython 1.66 np110py27_0
cython 0.23.4 <pip>
hdf5 1.8.15.1 2
mkl 11.3.1 0
numexpr 2.5 np110py27_0
numpy 1.10.4 py27_1
openssl 1.0.2g 0
pillow 3.1.1 <pip>
pip 8.1.1 py27_0
pytables 3.2.2 np110py27_1
python 2.7.11 0
python-ldap 2.4.25 <pip>
readline 6.2 2
reportlab 3.3.0 <pip>
requirements 0.1 <pip>
setuptools 20.3 py27_0
sqlite 3.9.2 0
tables 3.2.2 <pip>
tk 8.5.18 0
wheel 0.29.0 py27_0
zlib 1.2.8 0
```
I still get the same error. I really need help and don't know what else to try. Thanks. | 2016/03/23 | [
"https://Stackoverflow.com/questions/36190757",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2869143/"
] | For cython users:
[This](https://github.com/numpy/numpy/issues/8415) post was helpful. It explains that there is a flag (`--with-fpectl`) which is either set or not when cpython is compiled. When a library has been compiled with a cpython that lacks that flag, it is incompatible with a build that has the flag set. This effect only shows up when you use cython, as numpy itself does not use this extension.
As further stated in that post, my Ubuntu 16.04 python was built with this flag and Conda's without it. For me, it was specifically the module *hmmlearn* throwing the undefined symbol error. This must be because it was shipped by Ubuntu with the flag set, and not by Anaconda. So I uninstalled hmmlearn and manually reinstalled it from [source code](https://github.com/hmmlearn/hmmlearn) (Anaconda, unfortunately, does not offer hmmlearn). --> Works! | First remove `numpy` from `/usr/local/lib/python2.7/dist-packages/numpy-1.11.0-py2.7-linux-x86_64.egg`
and then use the following command
`sudo pip install numpy scipy`
This solved the error in my case. |
36,190,757 | I am trying to use the One Million Song Dataset; for this I had to install python tables, numpy, cython, hdf5, numexpr, and so on.
Yesterday I managed to install all I needed, and after having some troubles with hdf5, I downloaded the precompiled binary packages and saved them in my /bin folder, and the respective libraries in /lib. After that I tested this python script: `http://labrosa.ee.columbia.edu/millionsong/sites/default/files/tutorial1.py.txt`
and it worked fine. To be clear, the way I made it work was to first run the script and install the needed dependencies as they came up, but today I restarted my laptop and it didn't work; now it throws this error on the console:
```
python2.7 script.py
```
returns :
```
import numpy as np # get it at: http://numpy.scipy.org/
from . import random
from .mtrand import *
ImportError: /home/francisco/.local/lib/python2.7/site-packages/numpy/random/mtrand.so: undefined symbol: PyFPE_jbuf
```
It seems to me that there is a missing symbol in that file. My guess is that the script is looking for the numpy library in the wrong place; since I made so many failed installations, maybe I broke something, and it only worked before because it was still loaded in the computer's temporary memory.
I tried installing Anaconda: I created a new environment and installed the packages with the Anaconda package manager, and even though listing all the packages returns:
```
# packages in environment at /home/francisco/anaconda2/envs/Music:
#
biopython 1.66 np110py27_0
cython 0.23.4 <pip>
hdf5 1.8.15.1 2
mkl 11.3.1 0
numexpr 2.5 np110py27_0
numpy 1.10.4 py27_1
openssl 1.0.2g 0
pillow 3.1.1 <pip>
pip 8.1.1 py27_0
pytables 3.2.2 np110py27_1
python 2.7.11 0
python-ldap 2.4.25 <pip>
readline 6.2 2
reportlab 3.3.0 <pip>
requirements 0.1 <pip>
setuptools 20.3 py27_0
sqlite 3.9.2 0
tables 3.2.2 <pip>
tk 8.5.18 0
wheel 0.29.0 py27_0
zlib 1.2.8 0
```
I still get the same error. I really need help and don't know what else to try. Thanks. | 2016/03/23 | [
"https://Stackoverflow.com/questions/36190757",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2869143/"
] | For cython users:
[This](https://github.com/numpy/numpy/issues/8415) post was helpful. It explains that there is a flag (`--with-fpectl`) which is either set or not when cpython is compiled. When a library has been compiled with a cpython that lacks that flag, it is incompatible with a build that has the flag set. This effect only shows up when you use cython, as numpy itself does not use this extension.
As further stated in that post, my Ubuntu 16.04 python was built with this flag and Conda's without it. For me, it was specifically the module *hmmlearn* throwing the undefined symbol error. This must be because it was shipped by Ubuntu with the flag set, and not by Anaconda. So I uninstalled hmmlearn and manually reinstalled it from [source code](https://github.com/hmmlearn/hmmlearn) (Anaconda, unfortunately, does not offer hmmlearn). --> Works! | I agree with previous posts that this seems to be caused by having multiple versions of numpy installed. For me, it wasn't enough to just use pip, as I also had multiple versions of pip installed.
Specifying the specific pip solved the problem:
```
/usr/bin/pip3 uninstall numpy
``` |
36,190,757 | I am trying to use the One Million Song Dataset; for this I had to install python tables, numpy, cython, hdf5, numexpr, and so on.
Yesterday I managed to install all I needed, and after having some troubles with hdf5, I downloaded the precompiled binary packages and saved them in my /bin folder, and the respective libraries in /lib. After that I tested this python script: `http://labrosa.ee.columbia.edu/millionsong/sites/default/files/tutorial1.py.txt`
and it worked fine. To be clear, the way I made it work was to first run the script and install the needed dependencies as they came up, but today I restarted my laptop and it didn't work; now it throws this error on the console:
```
python2.7 script.py
```
returns :
```
import numpy as np # get it at: http://numpy.scipy.org/
from . import random
from .mtrand import *
ImportError: /home/francisco/.local/lib/python2.7/site-packages/numpy/random/mtrand.so: undefined symbol: PyFPE_jbuf
```
It seems to me that there is a missing symbol in that file. My guess is that the script is looking for the numpy library in the wrong place; since I made so many failed installations, maybe I broke something, and it only worked before because it was still loaded in the computer's temporary memory.
I tried installing Anaconda: I created a new environment and installed the packages with the Anaconda package manager, and even though listing all the packages returns:
```
# packages in environment at /home/francisco/anaconda2/envs/Music:
#
biopython 1.66 np110py27_0
cython 0.23.4 <pip>
hdf5 1.8.15.1 2
mkl 11.3.1 0
numexpr 2.5 np110py27_0
numpy 1.10.4 py27_1
openssl 1.0.2g 0
pillow 3.1.1 <pip>
pip 8.1.1 py27_0
pytables 3.2.2 np110py27_1
python 2.7.11 0
python-ldap 2.4.25 <pip>
readline 6.2 2
reportlab 3.3.0 <pip>
requirements 0.1 <pip>
setuptools 20.3 py27_0
sqlite 3.9.2 0
tables 3.2.2 <pip>
tk 8.5.18 0
wheel 0.29.0 py27_0
zlib 1.2.8 0
```
I still get the same error. I really need help and don't know what else to try. Thanks. | 2016/03/23 | [
"https://Stackoverflow.com/questions/36190757",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2869143/"
] | I had the same problem. You have probably installed numpy both with and without Anaconda, so there is a conflict over which numpy to use: the one installed with pip or the one installed with conda. When I removed the non-Anaconda numpy, the error was gone.
```
pip uninstall numpy
``` | Initially, I installed cython using the system /usr/bin/pip and the Anaconda pip. I uninstalled the system cython using the system pip and reinstalled it using
`conda install cython`. Works now. |
36,190,757 | I am trying to use the One Million Song Dataset; for this I had to install python tables, numpy, cython, hdf5, numexpr, and so on.
Yesterday I managed to install all I needed, and after having some troubles with hdf5, I downloaded the precompiled binary packages and saved them in my /bin folder, and the respective libraries in /lib. After that I tested this python script: `http://labrosa.ee.columbia.edu/millionsong/sites/default/files/tutorial1.py.txt`
and it worked fine. To be clear, the way I made it work was to first run the script and install the needed dependencies as they came up, but today I restarted my laptop and it didn't work; now it throws this error on the console:
```
python2.7 script.py
```
returns :
```
import numpy as np # get it at: http://numpy.scipy.org/
from . import random
from .mtrand import *
ImportError: /home/francisco/.local/lib/python2.7/site-packages/numpy/random/mtrand.so: undefined symbol: PyFPE_jbuf
```
It seems to me that there is a missing symbol in that file. My guess is that the script is looking for the numpy library in the wrong place; since I made so many failed installations, maybe I broke something, and it only worked before because it was still loaded in the computer's temporary memory.
I tried installing Anaconda: I created a new environment and installed the packages with the Anaconda package manager, and even though listing all the packages returns:
```
# packages in environment at /home/francisco/anaconda2/envs/Music:
#
biopython 1.66 np110py27_0
cython 0.23.4 <pip>
hdf5 1.8.15.1 2
mkl 11.3.1 0
numexpr 2.5 np110py27_0
numpy 1.10.4 py27_1
openssl 1.0.2g 0
pillow 3.1.1 <pip>
pip 8.1.1 py27_0
pytables 3.2.2 np110py27_1
python 2.7.11 0
python-ldap 2.4.25 <pip>
readline 6.2 2
reportlab 3.3.0 <pip>
requirements 0.1 <pip>
setuptools 20.3 py27_0
sqlite 3.9.2 0
tables 3.2.2 <pip>
tk 8.5.18 0
wheel 0.29.0 py27_0
zlib 1.2.8 0
```
I still get the same error. I really need help and don't know what else to try. Thanks. | 2016/03/23 | [
"https://Stackoverflow.com/questions/36190757",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2869143/"
] | I agree with previous posts that this seems to be caused by having multiple versions of numpy installed. For me, it wasn't enough to just use pip, as I also had multiple versions of pip installed.
Specifying the specific pip solved the problem:
```
/usr/bin/pip3 uninstall numpy
``` | First remove `numpy` from `/usr/local/lib/python2.7/dist-packages/numpy-1.11.0-py2.7-linux-x86_64.egg`
and then use the following command
`sudo pip install numpy scipy`
This solved the error in my case. |
36,190,757 | I am trying to use the One Million Song Dataset; for this I had to install python tables, numpy, cython, hdf5, numexpr, and so on.
Yesterday I managed to install all I needed, and after having some troubles with hdf5, I downloaded the precompiled binary packages and saved them in my /bin folder, and the respective libraries in /lib. After that I tested this python script: `http://labrosa.ee.columbia.edu/millionsong/sites/default/files/tutorial1.py.txt`
and it worked fine. To be clear, the way I made it work was to first run the script and install the needed dependencies as they came up, but today I restarted my laptop and it didn't work; now it throws this error on the console:
```
python2.7 script.py
```
returns :
```
import numpy as np # get it at: http://numpy.scipy.org/
from . import random
from .mtrand import *
ImportError: /home/francisco/.local/lib/python2.7/site-packages/numpy/random/mtrand.so: undefined symbol: PyFPE_jbuf
```
It seems to me that there is a missing symbol in that file. My guess is that the script is looking for the numpy library in the wrong place; since I made so many failed installations, maybe I broke something, and it only worked before because it was still loaded in the computer's temporary memory.
I tried installing Anaconda: I created a new environment and installed the packages with the Anaconda package manager, and even though listing all the packages returns:
```
# packages in environment at /home/francisco/anaconda2/envs/Music:
#
biopython 1.66 np110py27_0
cython 0.23.4 <pip>
hdf5 1.8.15.1 2
mkl 11.3.1 0
numexpr 2.5 np110py27_0
numpy 1.10.4 py27_1
openssl 1.0.2g 0
pillow 3.1.1 <pip>
pip 8.1.1 py27_0
pytables 3.2.2 np110py27_1
python 2.7.11 0
python-ldap 2.4.25 <pip>
readline 6.2 2
reportlab 3.3.0 <pip>
requirements 0.1 <pip>
setuptools 20.3 py27_0
sqlite 3.9.2 0
tables 3.2.2 <pip>
tk 8.5.18 0
wheel 0.29.0 py27_0
zlib 1.2.8 0
```
I still get the same error. I really need help and don't know what else to try. Thanks. | 2016/03/23 | [
"https://Stackoverflow.com/questions/36190757",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2869143/"
] | For cython users:
[This](https://github.com/numpy/numpy/issues/8415) post was helpful. It explains that there is a flag (`--with-fpectl`) which is either set or not when cpython is compiled. When a library has been compiled with a cpython that lacks that flag, it is incompatible with a build that has the flag set. This effect only shows up when you use cython, as numpy itself does not use this extension.
As further stated in that post, my Ubuntu 16.04 python was built with this flag and Conda's without it. For me, it was specifically the module *hmmlearn* throwing the undefined symbol error. This must be because it was shipped by Ubuntu with the flag set, and not by Anaconda. So I uninstalled hmmlearn and manually reinstalled it from [source code](https://github.com/hmmlearn/hmmlearn) (Anaconda, unfortunately, does not offer hmmlearn). --> Works! | Initially, I installed cython using the system /usr/bin/pip and the Anaconda pip. I uninstalled the system cython using the system pip and reinstalled it using
`conda install cython`. Works now. |
36,190,757 | I am trying to use the One Million Song Dataset; for this I had to install python tables, numpy, cython, hdf5, numexpr, and so on.
Yesterday I managed to install all I needed, and after having some troubles with hdf5, I downloaded the precompiled binary packages and saved them in my /bin folder, and the respective libraries in /lib. After that I tested this python script: `http://labrosa.ee.columbia.edu/millionsong/sites/default/files/tutorial1.py.txt`
and it worked fine. To be clear, the way I made it work was to first run the script and install the needed dependencies as they came up, but today I restarted my laptop and it didn't work; now it throws this error on the console:
```
python2.7 script.py
```
returns :
```
import numpy as np # get it at: http://numpy.scipy.org/
from . import random
from .mtrand import *
ImportError: /home/francisco/.local/lib/python2.7/site-packages/numpy/random/mtrand.so: undefined symbol: PyFPE_jbuf
```
It seems to me that there is a missing symbol in that file. My guess is that the script is looking for the numpy library in the wrong place; since I made so many failed installations, maybe I broke something, and it only worked before because it was still loaded in the computer's temporary memory.
I tried installing Anaconda: I created a new environment and installed the packages with the Anaconda package manager, and even though listing all the packages returns:
```
# packages in environment at /home/francisco/anaconda2/envs/Music:
#
biopython 1.66 np110py27_0
cython 0.23.4 <pip>
hdf5 1.8.15.1 2
mkl 11.3.1 0
numexpr 2.5 np110py27_0
numpy 1.10.4 py27_1
openssl 1.0.2g 0
pillow 3.1.1 <pip>
pip 8.1.1 py27_0
pytables 3.2.2 np110py27_1
python 2.7.11 0
python-ldap 2.4.25 <pip>
readline 6.2 2
reportlab 3.3.0 <pip>
requirements 0.1 <pip>
setuptools 20.3 py27_0
sqlite 3.9.2 0
tables 3.2.2 <pip>
tk 8.5.18 0
wheel 0.29.0 py27_0
zlib 1.2.8 0
```
I still get the same error. I really need help and don't know what else to try. Thanks. | 2016/03/23 | [
"https://Stackoverflow.com/questions/36190757",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2869143/"
] | Irony at its best: I restarted my laptop without doing anything, and it worked. Can't understand why. | I ran into this problem in a particular situation. Using **Anaconda** (3, I think) I was creating a new environment. Previously I had created a py3 env with numpy; not sure if related. But when creating my new py2.7 environment I went to install a particular package, Ta-lib, via pip, and then hit this same question's import error relating to numpy in the particular case of Ta-lib.
[From this post Gaurav suggested](https://stackoverflow.com/a/49199601/927972) **the use of pip flag --no-cache-dir** to ensure a rebuild during install of numpy. I uninstalled my Ta-lib and numpy and then reinstalled them with this flag via pip and everything worked great. |
36,190,757 | I am trying to use the One Million Song Dataset; for this I had to install python tables, numpy, cython, hdf5, numexpr, and so on.
Yesterday I managed to install all I needed, and after having some troubles with hdf5, I downloaded the precompiled binary packages and saved them in my /bin folder, and the respective libraries in /lib. After that I tested this python script: `http://labrosa.ee.columbia.edu/millionsong/sites/default/files/tutorial1.py.txt`
and it worked fine. To be clear, the way I made it work was to first run the script and install the needed dependencies as they came up, but today I restarted my laptop and it didn't work; now it throws this error on the console:
```
python2.7 script.py
```
returns :
```
import numpy as np # get it at: http://numpy.scipy.org/
from . import random
from .mtrand import *
ImportError: /home/francisco/.local/lib/python2.7/site-packages/numpy/random/mtrand.so: undefined symbol: PyFPE_jbuf
```
It seems to me that there is a missing symbol in that file. My guess is that the script is looking for the numpy library in the wrong place; since I made so many failed installations, maybe I broke something, and it only worked before because it was still loaded in the computer's temporary memory.
I tried installing Anaconda: I created a new environment and installed the packages with the Anaconda package manager, and even though listing all the packages returns:
```
# packages in environment at /home/francisco/anaconda2/envs/Music:
#
biopython 1.66 np110py27_0
cython 0.23.4 <pip>
hdf5 1.8.15.1 2
mkl 11.3.1 0
numexpr 2.5 np110py27_0
numpy 1.10.4 py27_1
openssl 1.0.2g 0
pillow 3.1.1 <pip>
pip 8.1.1 py27_0
pytables 3.2.2 np110py27_1
python 2.7.11 0
python-ldap 2.4.25 <pip>
readline 6.2 2
reportlab 3.3.0 <pip>
requirements 0.1 <pip>
setuptools 20.3 py27_0
sqlite 3.9.2 0
tables 3.2.2 <pip>
tk 8.5.18 0
wheel 0.29.0 py27_0
zlib 1.2.8 0
```
I still get the same error. I really need help and don't know what else to try. Thanks. | 2016/03/23 | [
"https://Stackoverflow.com/questions/36190757",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2869143/"
] | Irony at its best: I restarted my laptop without doing anything, and it worked. Can't understand why. | Initially, I installed cython using the system /usr/bin/pip and the Anaconda pip. I uninstalled the system cython using the system pip and reinstalled it using
`conda install cython`. Works now. |
36,190,757 | I am trying to use the One Million Song Dataset; for this I had to install python tables, numpy, cython, hdf5, numexpr, and so on.
Yesterday I managed to install all I needed, and after having some troubles with hdf5, I downloaded the precompiled binary packages and saved them in my /bin folder, and the respective libraries in /lib. After that I tested this python script: `http://labrosa.ee.columbia.edu/millionsong/sites/default/files/tutorial1.py.txt`
and it worked fine. To be clear, the way I made it work was to first run the script and install the needed dependencies as they came up, but today I restarted my laptop and it didn't work; now it throws this error on the console:
```
python2.7 script.py
```
returns :
```
import numpy as np # get it at: http://numpy.scipy.org/
from . import random
from .mtrand import *
ImportError: /home/francisco/.local/lib/python2.7/site-packages/numpy/random/mtrand.so: undefined symbol: PyFPE_jbuf
```
It seems to me that there is a missing symbol in that file. My guess is that the script is looking for the numpy library in the wrong place; since I made so many failed installations, maybe I broke something, and it only worked before because it was still loaded in the computer's temporary memory.
I tried installing Anaconda: I created a new environment and installed the packages with the Anaconda package manager, and even though listing all the packages returns:
```
# packages in environment at /home/francisco/anaconda2/envs/Music:
#
biopython 1.66 np110py27_0
cython 0.23.4 <pip>
hdf5 1.8.15.1 2
mkl 11.3.1 0
numexpr 2.5 np110py27_0
numpy 1.10.4 py27_1
openssl 1.0.2g 0
pillow 3.1.1 <pip>
pip 8.1.1 py27_0
pytables 3.2.2 np110py27_1
python 2.7.11 0
python-ldap 2.4.25 <pip>
readline 6.2 2
reportlab 3.3.0 <pip>
requirements 0.1 <pip>
setuptools 20.3 py27_0
sqlite 3.9.2 0
tables 3.2.2 <pip>
tk 8.5.18 0
wheel 0.29.0 py27_0
zlib 1.2.8 0
```
I still get the same error. I really need help and don't know what else to try. Thanks. | 2016/03/23 | [
"https://Stackoverflow.com/questions/36190757",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2869143/"
] | I had the same problem. You have probably installed numpy both with and without Anaconda, so there is a conflict over which numpy to use: the one installed with pip or the one installed with conda. When I removed the non-Anaconda numpy, the error was gone.
```
pip uninstall numpy
``` | Irony at its best: I restarted my laptop without doing anything, and it worked. Can't understand why. |
36,190,757 | I am trying to use the One Million Song Dataset; for this I had to install python tables, numpy, cython, hdf5, numexpr, and so on.
Yesterday I managed to install all I needed, and after having some troubles with hdf5, I downloaded the precompiled binary packages and saved them in my /bin folder, and the respective libraries in /lib. After that I tested this python script: `http://labrosa.ee.columbia.edu/millionsong/sites/default/files/tutorial1.py.txt`
and it worked fine. To be clear, the way I made it work was to first run the script and install the needed dependencies as they came up, but today I restarted my laptop and it didn't work; now it throws this error on the console:
```
python2.7 script.py
```
returns :
```
import numpy as np # get it at: http://numpy.scipy.org/
from . import random
from .mtrand import *
ImportError: /home/francisco/.local/lib/python2.7/site-packages/numpy/random/mtrand.so: undefined symbol: PyFPE_jbuf
```
It seems to me that there is a missing symbol in that file. My guess is that the script is looking for the numpy library in the wrong place; since I made so many failed installations, maybe I broke something, and it only worked before because it was still loaded in the computer's temporary memory.
I tried installing Anaconda: I created a new environment and installed the packages with the Anaconda package manager, and even though listing all the packages returns:
```
# packages in environment at /home/francisco/anaconda2/envs/Music:
#
biopython 1.66 np110py27_0
cython 0.23.4 <pip>
hdf5 1.8.15.1 2
mkl 11.3.1 0
numexpr 2.5 np110py27_0
numpy 1.10.4 py27_1
openssl 1.0.2g 0
pillow 3.1.1 <pip>
pip 8.1.1 py27_0
pytables 3.2.2 np110py27_1
python 2.7.11 0
python-ldap 2.4.25 <pip>
readline 6.2 2
reportlab 3.3.0 <pip>
requirements 0.1 <pip>
setuptools 20.3 py27_0
sqlite 3.9.2 0
tables 3.2.2 <pip>
tk 8.5.18 0
wheel 0.29.0 py27_0
zlib 1.2.8 0
```
I still get the same error. I really need help and don't know what else to try. Thanks. | 2016/03/23 | [
"https://Stackoverflow.com/questions/36190757",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2869143/"
] | Irony at its best: I restarted my laptop without doing anything, and it worked. Can't understand why. | First remove `numpy` from `/usr/local/lib/python2.7/dist-packages/numpy-1.11.0-py2.7-linux-x86_64.egg`
and then use the following command
`sudo pip install numpy scipy`
This solved the error in my case. |
59,227,170 | I run a python program using `beautifulsoup` and `requests` to scrape embedded video URLs, but to download these videos I need to bypass ads popups and a `javascript` reload; only then do the `m3u8` files start to appear in the network traffic;
so I need to simulate the clicks to get to the `javascript` reload (if there's a method better than selenium; I am trying to reduce script dependencies) and then, when the `m3u8` files appear, I need to get their URLs. | 2019/12/07 | [
"https://Stackoverflow.com/questions/59227170",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12496538/"
] | There is no rule against using selenium side by side with beautifulsoup and requests. You can use selenium to bypass the clicks, popups and ads, and use beautifulsoup and requests to download the videos after the urls have appeared. You can redirect selenium to different urls using the results you get from running a `requests.get()` or similar, or you could resort to using **[scrapy](https://scrapy.org/)** (a full-blown scraping framework); with a couple of third-party plugins to handle the JavaScript and ads you should be able to get those videos in no time. | >
> i run a python program using beautifulsoup and requests to scrape embedded videos URL, but to download theses videos i need to bypass a ads popups and javascript reload only then the m3u8 files start to appear in the network traffic; so i need to simulate the clicks to get to the javascript reload (if there's a method better than selenium, trying to reduce script dependencies) and then when the m3u8 files appear i need to get their url

Can I have your code please? |
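For what it's worth, the selenium-plus-requests split suggested in the first answer above can be sketched like this. Only the URL-filtering helper is real, runnable code; the selenium part is shown as comments because it needs a browser, and the site URL and CSS selector in it are made-up placeholders rather than values from the question.

```python
import re

def find_m3u8_urls(urls):
    """Keep only the .m3u8 playlist URLs from a list of request URLs."""
    return [u for u in urls if re.search(r"\.m3u8(\?|$)", u)]

# Hypothetical selenium driving (extra dependency; URL and selector are placeholders):
#
#   from selenium import webdriver
#   driver = webdriver.Chrome()
#   driver.get("https://video-site.example/player")              # placeholder
#   driver.find_element("css selector", "#play-button").click()  # placeholder
#   urls = [entry["name"] for entry in driver.execute_script(
#       "return performance.getEntriesByType('resource')")]
#   print(find_m3u8_urls(urls))

print(find_m3u8_urls([
    "https://cdn.example/master.m3u8",
    "https://cdn.example/seg0.ts",
    "https://cdn.example/index.m3u8?token=abc",
]))
```

Any mechanism that yields the browser's request URLs (selenium logs, a proxy, or mitmproxy output) can feed `find_m3u8_urls`.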
24,853,027 | I have installed Django 1.6.5 with PIP and Python 2.7.8 from the website.
I ran `django-admin.py startproject test123`, switched to `test123` directory, and ran the command `python manage.py runserver`, then i get this:
```
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/core/management/__init__.py", line 399, in execute_from_command_line
utility.execute()
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/core/management/__init__.py", line 392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/core/management/__init__.py", line 261, in fetch_command
commands = get_commands()
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/core/management/__init__.py", line 107, in get_commands
apps = settings.INSTALLED_APPS
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/conf/__init__.py", line 54, in __getattr__
self._setup(name)
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/conf/__init__.py", line 50, in _setup
self._configure_logging()
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/conf/__init__.py", line 72, in _configure_logging
from django.utils.log import DEFAULT_LOGGING
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/utils/log.py", line 7, in <module>
from django.views.debug import ExceptionReporter, get_exception_reporter_filter
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/views/debug.py", line 10, in <module>
from django.http import (HttpResponse, HttpResponseServerError,
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/http/__init__.py", line 2, in <module>
from django.http.request import (HttpRequest, QueryDict, UnreadablePostError,
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/http/request.py", line 11, in <module>
from django.core import signing
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/core/signing.py", line 45, in <module>
from django.utils.crypto import constant_time_compare, salted_hmac
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/utils/crypto.py", line 6, in <module>
import hmac
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hmac.py", line 8, in <module>
from operator import _compare_digest as compare_digest
ImportError: cannot import name _compare_digest
```
I found out that `operator` is a standard Python library. Why can't it import it?
P.S. I did try it in the command line, I can import the operator module, but I get an error on this statement: `from operator import _compare_digest as compare_digest` | 2014/07/20 | [
"https://Stackoverflow.com/questions/24853027",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/637619/"
] | Followed this SO answer:
[Uninstall python.org version of python2.7 in favor of default OS X python2.7](https://stackoverflow.com/questions/13538586/uninstall-python-org-version-of-python2-7-in-favor-of-default-os-x-python2-7)
Then changed my `.bash_profile` Python path to `/usr/lib/python` for the default OSX python path.
Uninstalled Django and MySQL-Python:
```
sudo pip uninstall django
sudo pip uninstall MySQL-Python
```
And then reinstalled everything again, installing `MySQL-Python` first and Django second.
After these steps, everything is working. | You most likely have another file named `operator.py` on your `PYTHONPATH` (probably in the current working directory), which shadows the standard library `operator` module.
Remove or rename the file. |
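A quick way to confirm this diagnosis (a small sketch, not part of the original answer) is to ask Python's import machinery where the name `operator` would actually be loaded from:

```python
import importlib.util

def module_origin(name):
    """Return the file path (or loader origin) a module name resolves to."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec is not None else None

# If this prints a path inside your project directory rather than the
# standard library, a local operator.py is shadowing the stdlib module.
print(module_origin("operator"))
```

If the printed path points into your project, rename the offending file and also delete any stale `operator.pyc` next to it.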
24,853,027 | I have installed Django 1.6.5 with PIP and Python 2.7.8 from the website.
I ran `django-admin.py startproject test123`, switched to `test123` directory, and ran the command `python manage.py runserver`, then i get this:
```
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/core/management/__init__.py", line 399, in execute_from_command_line
utility.execute()
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/core/management/__init__.py", line 392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/core/management/__init__.py", line 261, in fetch_command
commands = get_commands()
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/core/management/__init__.py", line 107, in get_commands
apps = settings.INSTALLED_APPS
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/conf/__init__.py", line 54, in __getattr__
self._setup(name)
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/conf/__init__.py", line 50, in _setup
self._configure_logging()
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/conf/__init__.py", line 72, in _configure_logging
from django.utils.log import DEFAULT_LOGGING
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/utils/log.py", line 7, in <module>
from django.views.debug import ExceptionReporter, get_exception_reporter_filter
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/views/debug.py", line 10, in <module>
from django.http import (HttpResponse, HttpResponseServerError,
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/http/__init__.py", line 2, in <module>
from django.http.request import (HttpRequest, QueryDict, UnreadablePostError,
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/http/request.py", line 11, in <module>
from django.core import signing
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/core/signing.py", line 45, in <module>
from django.utils.crypto import constant_time_compare, salted_hmac
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/utils/crypto.py", line 6, in <module>
import hmac
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hmac.py", line 8, in <module>
from operator import _compare_digest as compare_digest
ImportError: cannot import name _compare_digest
```
I found out that `operator` is a standard Python library. Why can't it import it?
P.S. I did try it in the command line, I can import the operator module, but I get an error on this statement: `from operator import _compare_digest as compare_digest` | 2014/07/20 | [
"https://Stackoverflow.com/questions/24853027",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/637619/"
] | I get this error with anaconda as my default python and django1.7 while trying to use startproject.
I deleted the venv and recreated it with
```
virtualenv -p /usr/bin/python2.7 venv
```
startproject was working again. | You most likely have another file named `operator.py` on your `PYTHONPATH` (probably in the current working directory), which shadows the standard library `operator` module.
Remove or rename the file. |
24,853,027 | I have installed Django 1.6.5 with PIP and Python 2.7.8 from the website.
I ran `django-admin.py startproject test123`, switched to `test123` directory, and ran the command `python manage.py runserver`, then i get this:
```
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/core/management/__init__.py", line 399, in execute_from_command_line
utility.execute()
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/core/management/__init__.py", line 392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/core/management/__init__.py", line 261, in fetch_command
commands = get_commands()
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/core/management/__init__.py", line 107, in get_commands
apps = settings.INSTALLED_APPS
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/conf/__init__.py", line 54, in __getattr__
self._setup(name)
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/conf/__init__.py", line 50, in _setup
self._configure_logging()
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/conf/__init__.py", line 72, in _configure_logging
from django.utils.log import DEFAULT_LOGGING
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/utils/log.py", line 7, in <module>
from django.views.debug import ExceptionReporter, get_exception_reporter_filter
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/views/debug.py", line 10, in <module>
from django.http import (HttpResponse, HttpResponseServerError,
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/http/__init__.py", line 2, in <module>
from django.http.request import (HttpRequest, QueryDict, UnreadablePostError,
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/http/request.py", line 11, in <module>
from django.core import signing
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/core/signing.py", line 45, in <module>
from django.utils.crypto import constant_time_compare, salted_hmac
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/utils/crypto.py", line 6, in <module>
import hmac
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hmac.py", line 8, in <module>
from operator import _compare_digest as compare_digest
ImportError: cannot import name _compare_digest
```
I found out that `operator` is a standard Python library. Why can't it import it?
P.S. I did try it in the command line, I can import the operator module, but I get an error on this statement: `from operator import _compare_digest as compare_digest` | 2014/07/20 | [
"https://Stackoverflow.com/questions/24853027",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/637619/"
] | Followed this SO answer:
[Uninstall python.org version of python2.7 in favor of default OS X python2.7](https://stackoverflow.com/questions/13538586/uninstall-python-org-version-of-python2-7-in-favor-of-default-os-x-python2-7)
Then changed my `.bash_profile` Python path to `/usr/lib/python` for the default OSX python path.
Uninstalled Django and MySQL-Python:
```
sudo pip uninstall django
sudo pip uninstall MySQL-Python
```
And then reinstalled everything again, installing `MySQL-Python` first and Django second.
After these steps, everything is working. | For those not wanting to switch to Apple's python, simply [deleting the virtualenv and rebuilding it](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=749491#10) worked fine for me.
Tip: Don't forget to `pip freeze > requirements.txt` first if you aren't already tracking your package requirements. That way you can `pip install -r requirements.txt` to get up and running again quickly. |
24,853,027 | I have installed Django 1.6.5 with PIP and Python 2.7.8 from the website.
I ran `django-admin.py startproject test123`, switched to `test123` directory, and ran the command `python manage.py runserver`, then i get this:
```
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/core/management/__init__.py", line 399, in execute_from_command_line
utility.execute()
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/core/management/__init__.py", line 392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/core/management/__init__.py", line 261, in fetch_command
commands = get_commands()
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/core/management/__init__.py", line 107, in get_commands
apps = settings.INSTALLED_APPS
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/conf/__init__.py", line 54, in __getattr__
self._setup(name)
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/conf/__init__.py", line 50, in _setup
self._configure_logging()
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/conf/__init__.py", line 72, in _configure_logging
from django.utils.log import DEFAULT_LOGGING
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/utils/log.py", line 7, in <module>
from django.views.debug import ExceptionReporter, get_exception_reporter_filter
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/views/debug.py", line 10, in <module>
from django.http import (HttpResponse, HttpResponseServerError,
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/http/__init__.py", line 2, in <module>
from django.http.request import (HttpRequest, QueryDict, UnreadablePostError,
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/http/request.py", line 11, in <module>
from django.core import signing
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/core/signing.py", line 45, in <module>
from django.utils.crypto import constant_time_compare, salted_hmac
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/utils/crypto.py", line 6, in <module>
import hmac
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hmac.py", line 8, in <module>
from operator import _compare_digest as compare_digest
ImportError: cannot import name _compare_digest
```
I found out that `operator` is a standard Python library. Why can't it import it?
P.S. I did try it in the command line, I can import the operator module, but I get an error on this statement: `from operator import _compare_digest as compare_digest` | 2014/07/20 | [
"https://Stackoverflow.com/questions/24853027",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/637619/"
] | Followed this SO answer:
[Uninstall python.org version of python2.7 in favor of default OS X python2.7](https://stackoverflow.com/questions/13538586/uninstall-python-org-version-of-python2-7-in-favor-of-default-os-x-python2-7)
Then changed my `.bash_profile` Python path to `/usr/lib/python` for the default OSX python path.
Uninstalled Django and MySQL-Python:
```
sudo pip uninstall django
sudo pip uninstall MySQL-Python
```
And then reinstalled everything again, installing `MySQL-Python` first and Django second.
After these steps, everything is working. | I get this error with anaconda as my default python and django1.7 while trying to use startproject.
I deleted the venv and recreated it with
```
virtualenv -p /usr/bin/python2.7 venv
```
startproject was working again. |
24,853,027 | I have installed Django 1.6.5 with PIP and Python 2.7.8 from the website.
I ran `django-admin.py startproject test123`, switched to `test123` directory, and ran the command `python manage.py runserver`, then i get this:
```
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/core/management/__init__.py", line 399, in execute_from_command_line
utility.execute()
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/core/management/__init__.py", line 392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/core/management/__init__.py", line 261, in fetch_command
commands = get_commands()
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/core/management/__init__.py", line 107, in get_commands
apps = settings.INSTALLED_APPS
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/conf/__init__.py", line 54, in __getattr__
self._setup(name)
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/conf/__init__.py", line 50, in _setup
self._configure_logging()
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/conf/__init__.py", line 72, in _configure_logging
from django.utils.log import DEFAULT_LOGGING
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/utils/log.py", line 7, in <module>
from django.views.debug import ExceptionReporter, get_exception_reporter_filter
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/views/debug.py", line 10, in <module>
from django.http import (HttpResponse, HttpResponseServerError,
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/http/__init__.py", line 2, in <module>
from django.http.request import (HttpRequest, QueryDict, UnreadablePostError,
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/http/request.py", line 11, in <module>
from django.core import signing
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/core/signing.py", line 45, in <module>
from django.utils.crypto import constant_time_compare, salted_hmac
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/utils/crypto.py", line 6, in <module>
import hmac
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hmac.py", line 8, in <module>
from operator import _compare_digest as compare_digest
ImportError: cannot import name _compare_digest
```
I found out that `operator` is a standard Python library. Why can't it import it?
P.S. I did try it in the command line, I can import the operator module, but I get an error on this statement: `from operator import _compare_digest as compare_digest` | 2014/07/20 | [
"https://Stackoverflow.com/questions/24853027",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/637619/"
] | I get this error with anaconda as my default python and django1.7 while trying to use startproject.
I deleted the venv and recreated it with
```
virtualenv -p /usr/bin/python2.7 venv
```
startproject was working again. | For those not wanting to switch to Apple's python, simply [deleting the virtualenv and rebuilding it](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=749491#10) worked fine for me.
Tip: Don't forget to `pip freeze > requirements.txt` first if you aren't already tracking your package requirements. That way you can `pip install -r requirements.txt` to get up and running again quickly. |
45,718,546 | In python 3, you can now open a file safely using the `with` clause like this:
```
with open("stuff.txt") as f:
data = f.read()
```
Using this method, I don't need to worry about closing the file.
I was wondering if I could do the same for the multiprocessing. For example, my current code looks like:
```
pool = multiprocessing.Pool(processes=multiprocessing.cpu_count())
pool.starmap(function,list)
pool.close()
pool.join()
```
Is there any way I could use a with clause to simplify this? | 2017/08/16 | [
"https://Stackoverflow.com/questions/45718546",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2208112/"
] | ```
with multiprocessing.Pool( ... ) as pool:
pool.starmap( ... )
```
<https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.Pool>
>
> New in version 3.3: Pool objects now support the context management protocol – see Context Manager Types. `__enter__()` returns the pool object, and `__exit__()` calls `terminate()`.
>
>
>
You can see an example at the bottom of the `Pool` section. | Although it's more than what the OP asked, if you want something that will work for both Python 2 and Python 3, you can use:
```py
# For python 2/3 compatibility, define pool context manager
# to support the 'with' statement in Python 2
if sys.version_info[0] == 2:
from contextlib import contextmanager
@contextmanager
def multiprocessing_context(*args, **kwargs):
pool = multiprocessing.Pool(*args, **kwargs)
yield pool
pool.terminate()
else:
multiprocessing_context = multiprocessing.Pool
```
After that, you can use multiprocessing the regular Python 3 way, regardless of which version of Python you are using. For example:
```
def _function_to_run_for_each(x):
return x.lower()
with multiprocessing_context(processes=3) as pool:
results = pool.map(_function_to_run_for_each, ['Bob', 'Sue', 'Tim'])
print(results)
```
Now, this will work in Python 2 or Python 3. |
48,512,269 | Hi guys I am trying to read from subprocess.PIPE without blocking the main process. I have found this code:
```
import sys
from subprocess import PIPE, Popen
from threading import Thread
try:
from Queue import Queue, Empty
except ImportError:
from queue import Queue, Empty # python 3.x
ON_POSIX = 'posix' in sys.builtin_module_names
def enqueue_output(out, queue):
for line in iter(out.readline, b''):
queue.put(line)
out.close()
p = Popen(['myprogram.exe'], stdout=PIPE, bufsize=1, close_fds=ON_POSIX)
q = Queue()
t = Thread(target=enqueue_output, args=(p.stdout, q))
t.daemon = True # thread dies with the program
t.start()
# ... do other things here
# read line without blocking
try: line = q.get_nowait() # or q.get(timeout=.1)
except Empty:
print('no output yet')
else: # got line
# ... do something with line
```
The code does not return anything. I am using Python 3 on Windows.
Do you have any ideas what might be the problem? | 2018/01/30 | [
"https://Stackoverflow.com/questions/48512269",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7395188/"
] | I was away from my project for a long time, but finally I managed to solve the issue.
```
from subprocess import PIPE, Popen
from threading import Thread

p = Popen(['myprogram.exe'], stdout=PIPE)

# define results() before handing it to Thread, otherwise a NameError is raised
def results():
    a = p.stdout.readline()

t = Thread(target=results)
t.daemon = True
t.start()
```
Maybe this is not exactly the right way to do it, but it is working for me. I am only posting it because I personally believe that whoever asks a question should post the solution once they have found it. | On a Unix environment you can simply make the stdout/stderr/stdin file descriptors non-blocking like so:
```
import os, fcntl
from subprocess import Popen, PIPE
def nonblock(stream):
fcntl.fcntl(stream, fcntl.F_SETFL, fcntl.fcntl(stream, fcntl.F_GETFL) | os.O_NONBLOCK)
proc = Popen("for ((;;)) { date; sleep 1; }", shell=True, stdout=PIPE, stderr=PIPE, universal_newlines=True,
executable='/bin/bash')
nonblock(proc.stdout)
while True:
for line in proc.stdout.readlines():
print(line, end="")
``` |
28,371,555 | I have written this script to test a single ip address for probing specific user names on smtp servers for a pentest. I am trying now to port this script to run the same tests, but to a range of ip addresses instead of a single one. Can anyone shed some light as to how that can be achieved?
```
#!/usr/bin/python
import socket
import sys
users= []
for line in sys.stdin:
line = line.strip()
if line != '':
users.append(line)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((sys.argv[1], 25))
fp = s.makefile('rwb')
fp.readline()
fp.write('HELO test.example.com\r\n')
fp.flush()
fp.readline
for user in users:
fp.write('VRFY %s\r\n\ ' % user)
fp.flush()
print '%s: %s' % (user, fp.readline().strip())
fp.write('QUIT\r\n')
fp.flush()
s.close()
``` | 2015/02/06 | [
"https://Stackoverflow.com/questions/28371555",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4283164/"
] | I would implement this by turning your code as it stands into a function to probe a single host, taking the host name/ip as an argument. Then, loop over your list of hosts (either from the command line, a file, interactive querying of a user, or wherever) and make a call to your single host probe for each host in the loop. | Ok, so here is what I have done to get this going.
The solution is not elegant at all, but it does the trick. Also, I could not spend more time trying to find a purely Python solution, so after reading the answer from bmhkim above (thanks for the tips) I decided to write a bash script that iterates over a range of IP addresses and, for each one, calls my Python script to do its magic.
```
#!/bin/bash
for ip in $(seq 1 254); do
python smtp-probe.py 192.168.1.$ip <users.txt
done
```
I had some problems with the output, since it was giving me the servers' responses to my probing attempts but not the actual IP addresses that were sending those responses, so I adapted the original script to this:
```
#!/usr/bin/python
import socket
import sys
users= []
for line in sys.stdin:
line = line.strip()
if line != '':
users.append(line)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((sys.argv[1], 25))
print sys.argv[1] #Notice the printing of the script arguments/ip addresses for my output
fp = s.makefile('rwb')
fp.readline()
fp.write('HELO test.example.com\r\n')
fp.flush()
fp.readline()
for user in users:
    fp.write('VRFY %s\r\n' % user)
fp.flush()
print '%s: %s' % (user, fp.readline().strip())
fp.write('QUIT\r\n')
fp.flush()
s.close()
```
Like I said above, that is a tricky way out, I know, but I am not a programmer, so that is the way out I was able to find (*if you have a way to do it purely in Python, I would very much like to see it*). I will definitely revisit this issue once I have a bit more time, and I will keep studying Python until I get this right.
Thanks all for the support to my question!! |
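As a sketch of the first answer's suggestion (one probe function per host, called in a loop), the original script could be restructured roughly as below in Python 3. The address range mirrors the bash `seq` loop, `test.example.com` is the placeholder hostname from the original script, and the network-touching part is shown only as a commented usage note.

```python
import socket

def build_hosts(prefix="192.168.1.", start=1, end=254):
    """Build the address list, mirroring `seq 1 254` in the bash script."""
    return ["%s%d" % (prefix, i) for i in range(start, end + 1)]

def probe(host, users, port=25, timeout=5):
    """Run the VRFY checks from the original script against a single host."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        fp = s.makefile("rwb")
        fp.readline()                        # server banner
        fp.write(b"HELO test.example.com\r\n")
        fp.flush()
        fp.readline()                        # HELO response
        for user in users:
            fp.write(("VRFY %s\r\n" % user).encode())
            fp.flush()
            print("%s %s: %s" % (host, user, fp.readline().strip()))
        fp.write(b"QUIT\r\n")
        fp.flush()
    except OSError as exc:
        print("%s: %s" % (host, exc))        # unreachable host, timeout, etc.
    finally:
        s.close()

# Usage (opens real network connections, so shown only as a comment):
#   users = [line.strip() for line in sys.stdin if line.strip()]
#   for host in build_hosts():
#       probe(host, users)
```

With this shape, swapping the hard-coded range for command-line arguments or the `ipaddress` module becomes a small, local change.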
28,371,555 | I have written this script to test a single ip address for probing specific user names on smtp servers for a pentest. I am trying now to port this script to run the same tests, but to a range of ip addresses instead of a single one. Can anyone shed some light as to how that can be achieved?
```
#!/usr/bin/python
import socket
import sys
users= []
for line in sys.stdin:
line = line.strip()
if line != '':
users.append(line)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((sys.argv[1], 25))
fp = s.makefile('rwb')
fp.readline()
fp.write('HELO test.example.com\r\n')
fp.flush()
fp.readline
for user in users:
fp.write('VRFY %s\r\n\ ' % user)
fp.flush()
print '%s: %s' % (user, fp.readline().strip())
fp.write('QUIT\r\n')
fp.flush()
s.close()
``` | 2015/02/06 | [
"https://Stackoverflow.com/questions/28371555",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4283164/"
] | If you're using Python3.3+, this is mostly simple
```
import ipaddress # new in Python3.3
start_ip, end_ip = however_you_get_these_as_strings()
ip_networks = ipaddress.summarize_address_range(
ipaddress.IPv4Address(start_ip),
ipaddress.IPv4Address(end_ip))
# list of networks between those two IPs
for network in ip_networks:
for ip in network:
# ip is an ipaddress.IPv4Address object
probe(str(ip))
# which converts nicely to str
``` | Ok, so here is what I have done to get this going.
The solution is not elegant at all, but it does the trick. Also, I could not spend more time trying to find a purely Python solution, so after reading the answer from bmhkim above (thanks for the tips) I decided to write a bash script that iterates over a range of IP addresses and, for each one, calls my Python script to do its magic.
```
#!/bin/bash
for ip in $(seq 1 254); do
python smtp-probe.py 192.168.1.$ip <users.txt
done
```
I had some problems with the output, since it was giving me the servers' responses to my probing attempts but not the actual IP addresses that were sending those responses, so I adapted the original script to this:
```
#!/usr/bin/python
import socket
import sys
users= []
for line in sys.stdin:
line = line.strip()
if line != '':
users.append(line)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((sys.argv[1], 25))
print sys.argv[1] #Notice the printing of the script arguments/ip addresses for my output
fp = s.makefile('rwb')
fp.readline()
fp.write('HELO test.example.com\r\n')
fp.flush()
fp.readline()
for user in users:
    fp.write('VRFY %s\r\n' % user)
fp.flush()
print '%s: %s' % (user, fp.readline().strip())
fp.write('QUIT\r\n')
fp.flush()
s.close()
```
Like I said above, that is a tricky way out, I know, but I am not a programmer, so that is the way out I was able to find (*if you have a way to do it purely in Python, I would very much like to see it*). I will definitely revisit this issue once I have a bit more time, and I will keep studying Python until I get this right.
Thanks all for the support to my question!! |
57,331,667 | I'm using `poetry` library to manage project dependencies, so when I use
`docker build --tag=helloworld .`
I got this error
```
[AttributeError]
'NoneType' object has no attribute 'group'
```
The install breaks on the `umongo (2.1.0)` package.
Here is my `pyproject.toml` file
```
[tool.poetry.dependencies]
python = "^3.7.0"
asyncio = "^3.4"
aiohttp = "^3.4"
motor = "^2.0"
umongo = "^2.0"
pyyaml = "^3.13"
[tool.poetry.dev-dependencies]
pytest = "^3.4"
black = {version = "^18.3-alpha.0",allows-prereleases = true}
mypy = "^0.650.0"
wemake-python-styleguide = "^0.5.1"
pytest-mock = "^1.10"
pytest-asyncio = "^0.9.0"
pytest-aiohttp = "^0.3.0"
```
And `poetry.lock`
<https://pastebin.com/kUjAKJHM>
Dockerfile:
```
FROM python:3.7.1-alpine
RUN mkdir -p /opt/project/todo_api
RUN pip --no-cache-dir install poetry
COPY ./pyproject.toml /opt/project
COPY poetry.lock /opt/project
RUN cd /opt/project && poetry install --no-dev
COPY ./todo_api /opt/project/todo_api
COPY ./todo_api.yml /opt/project/todo_api.yml
WORKDIR /opt/project
ENTRYPOINT poetry run python -m aiohttp.web todo_api.main:main
``` | 2019/08/02 | [
"https://Stackoverflow.com/questions/57331667",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6804296/"
] | The following works for me:
```
FROM python:3.7.1-alpine
WORKDIR /opt/project
RUN pip install --upgrade pip && pip --no-cache-dir install poetry
COPY ./pyproject.toml .
RUN poetry install --no-dev
```
with pyproject.toml:
```
[tool.poetry]
name = "57331667"
version = "0.0.1"
authors = ["skufler <skufler@email.com>"]
[tool.poetry.dependencies]
python = "^3.7.0"
asyncio = "^3.4"
aiohttp = "^3.4"
motor = "^2.0"
umongo = "^2.0"
pyyaml = "^3.13"
[tool.poetry.dev-dependencies]
pytest = "^3.4"
black = {version = "^18.3-alpha.0",allows-prereleases = true}
mypy = "^0.650.0"
wemake-python-styleguide = "^0.5.1"
pytest-mock = "^1.10"
pytest-asyncio = "^0.9.0"
pytest-aiohttp = "^0.3.0"
```
Then:
```sh
docker build --tag=57331667 --file=./Dockerfile .
```
results:
```sh
...
Creating virtualenv 57331667-py3.7 in /root/.cache/pypoetry/virtualenvs
Updating dependencies
Resolving dependencies...
Writing lock file
Package operations: 15 installs, 0 updates, 0 removals
- Installing idna (2.8)
- Installing multidict (4.5.2)
- Installing six (1.12.0)
- Installing async-timeout (3.0.1)
- Installing attrs (18.2.0)
- Installing chardet (3.0.4)
- Installing marshmallow (2.19.5)
- Installing pymongo (3.8.0)
- Installing python-dateutil (2.8.0)
- Installing yarl (1.3.0)
- Installing aiohttp (3.5.4)
- Installing asyncio (3.4.3)
- Installing motor (2.0.0)
- Installing pyyaml (3.13)
- Installing umongo (2.1.0)
Removing intermediate container c6a9c7652b5c
---> 89354562cf16
Successfully built 89354562cf16
Successfully tagged 57331667:latest
``` | If you want to install it with pip3 in production, here's how the latest version of Poetry (late 2021) can export a requirements.txt file:
```sh
# Production with no development dependencies
poetry export --no-interaction --no-ansi --without-hashes --format requirements.txt --output ./requirements.prod.txt
# For development, including development dependencies
poetry export --no-interaction --no-ansi --without-hashes --format requirements.txt --dev --output ./requirements.dev.txt
``` |
57,331,667 | I'm using `poetry` library to manage project dependencies, so when I use
`docker build --tag=helloworld .`
I got this error
```
[AttributeError]
'NoneType' object has no attribute 'group'
```
Installing breaks on `umongo (2.1.0)` package
Here is my `pyproject.toml` file
```
[tool.poetry.dependencies]
python = "^3.7.0"
asyncio = "^3.4"
aiohttp = "^3.4"
motor = "^2.0"
umongo = "^2.0"
pyyaml = "^3.13"
[tool.poetry.dev-dependencies]
pytest = "^3.4"
black = {version = "^18.3-alpha.0",allows-prereleases = true}
mypy = "^0.650.0"
wemake-python-styleguide = "^0.5.1"
pytest-mock = "^1.10"
pytest-asyncio = "^0.9.0"
pytest-aiohttp = "^0.3.0"
```
And `poetry.lock`
<https://pastebin.com/kUjAKJHM>
Dockerfile:
```
FROM python:3.7.1-alpine
RUN mkdir -p /opt/project/todo_api
RUN pip --no-cache-dir install poetry
COPY ./pyproject.toml /opt/project
COPY poetry.lock /opt/project
RUN cd /opt/project && poetry install --no-dev
COPY ./todo_api /opt/project/todo_api
COPY ./todo_api.yml /opt/project/todo_api.yml
WORKDIR /opt/project
ENTRYPOINT poetry run python -m aiohttp.web todo_api.main:main
``` | 2019/08/02 | [
"https://Stackoverflow.com/questions/57331667",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6804296/"
] | Alternative approach
--------------------
Don't install `poetry` into your deployment environment. It's a package management tool, which aims to improve development of and collaboration on libraries. If you want to deploy an application, you only need a package installer (read: `pip`) - and the opinionated stance of `poetry` regarding the build process and virtual environments is harmful rather than helpful there.
In this case, the artifacts you want to copy into your docker image are **1)** your most recent build of the library you work on and **2)** a wheelhouse of tested dependencies, as defined by `poetry.lock`.
The first one is easy, run `poetry build -f wheel` and you have a nicely portable wheel. The second one is not yet easy, because `poetry` doesn't support building wheelhouses (and maybe never will), and `pip wheel` does not accept `poetry.lock`'s file format. So if you want to go down this road, you need to work on a beta build of `poetry` (`v1.0.0b7` is rather stable) that supports `poetry export -f requirements.txt > requirements.txt`, which lets you create a `requirements.txt` file equivalent to your current lockfile.
Once you got that, you can run `pip wheel -w dist -r requirements.txt`, and *finally*, you're done creating all the artifacts for the docker image. Now, the following will work:
```
FROM python:3.7.1-alpine
WORKDIR /opt/project
COPY dist dist
RUN pip install --no-index --find-links dist todo_api
ENTRYPOINT python -m aiohttp.web todo_api.main:main
```
Pros
----
* no unnecessary dependency on `poetry` in your server (might be relevant, since it's still `<v1.0`)
* you skip the virtualenv in your server and install everything right into the system (you might still choose to create a virtualenv on your own and install your app into that, since [installing your application into the system python's side-packages can lead to problems](https://hynek.me/articles/virtualenv-lives/))
* your installation step doesn't run against pypi, so this deployment is guaranteed to work as far as you tested it (this is a very important point in many business settings)
Cons
----
* it's a bit of a pain if you do it by hand each time; the target executor here should be a CI/CD pipeline, not a human
* if the architecture of your workstation and the docker image differ, the wheels you build and copy over might not be compatible | If you want to install it with pip3 in production, here's how the latest version of Poetry (late 2021) can export a requirements.txt file:
```sh
# Production with no development dependencies
poetry export --no-interaction --no-ansi --without-hashes --format requirements.txt --output ./requirements.prod.txt
# For development, including development dependencies
poetry export --no-interaction --no-ansi --without-hashes --format requirements.txt --dev --output ./requirements.dev.txt
``` |
48,301,318 | I have a Python script where I import `datadog` module. When I run `python datadog.py`, it fails with `ImportError: cannot import name statsd`. The script starts with following lines:
```
import os
import mysql.connector
from time import time
from datadog import statsd
```
Actual error messages are following:
```
$ python /mnt/datadog.py
Traceback (most recent call last):
File "/mnt/datadog.py", line 5, in <module>
from datadog import statsd
File "/mnt/datadog.py", line 5, in <module>
from datadog import statsd
ImportError: cannot import name statsd
```
But when I'm in Python shell (started by `python` command), I can successfully run `from datadog import statsd`. What's the difference here?
By the way, I have proper Python packages installed in my computer:
```
$ pip freeze | egrep 'datadog|mysql'
datadog==0.17.0
mysql-connector==2.1.6
$ python --version
Python 2.7.5
``` | 2018/01/17 | [
"https://Stackoverflow.com/questions/48301318",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8495751/"
] | The problem is that your script is named `datadog.py`. So when it imports the module `datadog`, it imports itself. | First install statsd by
```
pip install statsd
```
then do
```
import statsd
``` |
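The name-shadowing problem described in the first answer above can be reproduced without `datadog` at all. This is a minimal sketch (hypothetical filenames): it creates a script named `json.py` that shadows the stdlib `json` module, which fails the same way a script named `datadog.py` does:

```python
# Demonstrate stdlib shadowing: a script named json.py imports itself
# instead of the standard library's json module.
import pathlib
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as d:
    script = pathlib.Path(d) / "json.py"
    script.write_text(
        'import json\n'
        'print(json.loads(\'{"ok": true}\'))\n'
    )
    result = subprocess.run(
        [sys.executable, str(script)], capture_output=True, text=True
    )

# The script's own directory is first on sys.path, so `import json`
# finds json.py itself, and json.loads does not exist yet.
shadowed = "AttributeError" in result.stderr
print(shadowed)
```

Renaming the script (and deleting any stale compiled `.pyc` left next to it) makes the real module importable again.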
69,970,902 | s =[(1, 2), (2, 3), (3, 4), (1, 3)]
Output should be:
1 2
2 3
3 4
1 3
#in python only
**"WITHOUT USING FOR LOOP"**
In below code
```
ns=[[4, 4], [5, 4], [3, 3]]
for x in ns:
n=x[0]
m=x[1]
f=list(range(1,n+1))
l=list(range(2,n+1))
permut = itertools.permutations(f, 2)
permut=list(permut)
s=list(filter(lambda x: x[1]==x[0]+1 , permut))
#print(s)
m=m-len(s)
#print(m)
t=list(filter(lambda x: x[1]==x[0]+2 , permut))
#print(t)
for y in range(0,m):
s.append(t.pop(0))
print(*s, sep = "\n")
``` | 2021/11/15 | [
"https://Stackoverflow.com/questions/69970902",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17415916/"
] | I had this issue last night, tried with php 7.3 and 7.4 in the end i just used the latest php 8.1 and this issue went away. | You could try going to `illuminate/log/Logger.php` and adding `use Monolog\Logger as Monolog;` at the beginning of the file. After that, change the constructor from this:
```
/**
* Create a new log writer instance.
*
* @param \Psr\Log\LoggerInterface $logger
* @param \Illuminate\Contracts\Events\Dispatcher|null $dispatcher
* @return void
*/
public function __construct(LoggerInterface $logger, Dispatcher $dispatcher = null)
{
$this->logger = $logger;
$this->dispatcher = $dispatcher;
}
```
to this:
```
/**
* Create a new log writer instance.
*
* @param \Monolog\Logger $logger
* @param \Illuminate\Contracts\Events\Dispatcher|null $dispatcher
* @return void
*/
public function __construct(Monolog $logger, Dispatcher $dispatcher = null)
{
$this->logger = $logger;
$this->dispatcher = $dispatcher;
}
``` |
14,142,144 | I have a custom field located in my `/app/models.py` . My question is...
What is the best practice here? Should I have a separate file, i.e. `customField.py`, and import it into `models.py`, or should it all be in the same `models.py` file?
best practice
```
class HibernateBooleanField(models.BooleanField):
__metaclass__ = models.SubfieldBase
def get_internal_type(self):
return "HibernateBooleanField"
def db_type(self):
return 'bit(1)'
def to_python(self, value):
if value in (True, False): return value
if value in ('t', 'True', '1', '\x01'): return True
if value in ('f', 'False', '0', '\x00'): return False
def get_db_prep_value(self, value, *args, **kwargs):
return 0x01 if value else 0x00
``` | 2013/01/03 | [
"https://Stackoverflow.com/questions/14142144",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/578822/"
] | If you're on Oracle 11g you can use the DBMS\_PARALLEL\_EXECUTE package to run your procedure in multiple threads. [Find out more](http://docs.oracle.com/cd/E11882_01/appdev.112/e25788/d_parallel_ex.htm#CHDIJACH).
If you're on an earlier version you can implement DIY parallelism using a technique from Tom Kyte. The Hungry DBA provides [a good explanation on his blog here](http://hungrydba.blogspot.co.uk/2007/12/tom-kytes-do-it-yourself-parallelism.html). | Sounds like you need a set of queries using the MySql `LIMIT` clause to implement paging (e.g. a query would get the first 1000, another would get the second 1000 etc..).
You could form these queries and submit as `Callables` to an [Executor service](http://docs.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/Executor.html) with a set number of threads. The `Executor` will manage the threads. I suspect it may be more efficient to both query and write your records within each `Callable`, but this is an assumption that would likely require testing. |
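The LIMIT-based paging plus worker-pool idea from the second answer can be sketched in Python with `concurrent.futures` (the answer uses Java's `Executor`; this is an analogue). `fetch_page`, the page size, and the row count are all hypothetical stand-ins for the real MySQL query:

```python
# Sketch: page through a large table with LIMIT/OFFSET-style queries,
# processing each page in its own worker thread.
from concurrent.futures import ThreadPoolExecutor

PAGE_SIZE = 1000
TOTAL_ROWS = 3500  # hypothetical table size

def fetch_page(offset):
    # Stand-in for: SELECT ... ORDER BY id LIMIT PAGE_SIZE OFFSET offset
    return list(range(offset, min(offset + PAGE_SIZE, TOTAL_ROWS)))

def process(offset):
    # Query and write within the same task, as the answer suggests.
    rows = fetch_page(offset)
    return len(rows)

with ThreadPoolExecutor(max_workers=4) as pool:
    counts = list(pool.map(process, range(0, TOTAL_ROWS, PAGE_SIZE)))

print(sum(counts))  # total rows processed across all pages
```

Each offset becomes one task, so the pool keeps at most four pages in flight at a time regardless of table size.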
6,367,014 | In my `settings.py`, I have the following:
```
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
# Host for sending e-mail.
EMAIL_HOST = 'localhost'
# Port for sending e-mail.
EMAIL_PORT = 1025
# Optional SMTP authentication information for EMAIL_HOST.
EMAIL_HOST_USER = ''
EMAIL_HOST_PASSWORD = ''
EMAIL_USE_TLS = False
```
My email code:
```
from django.core.mail import EmailMessage
email = EmailMessage('Hello', 'World', to=['user@gmail.com'])
email.send()
```
Of course, if I setup a debugging server via `python -m smtpd -n -c DebuggingServer localhost:1025`, I can see the email in my terminal.
However, how do I actually send the email not to the debugging server but to user@gmail.com?
After reading your answers, let me get something straight:
1. Can't you use localhost (a simple Ubuntu PC) to send e-mails?
2. I thought in django 1.3 `send_mail()` is somewhat deprecated and `EmailMessage.send()` is used instead? | 2011/06/16 | [
"https://Stackoverflow.com/questions/6367014",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/749477/"
] | I had actually done this from Django a while back. Open up a legitimate GMail account & enter the credentials here. Here's my code -
```
import os
import smtplib
from django.conf import settings
from email import Encoders
from email.MIMEBase import MIMEBase
from email.MIMEText import MIMEText
from email.MIMEMultipart import MIMEMultipart
def sendmail(to, subject, text, attach=[], mtype='html'):
ok = True
gmail_user = settings.EMAIL_HOST_USER
gmail_pwd = settings.EMAIL_HOST_PASSWORD
msg = MIMEMultipart('alternative')
msg['From'] = gmail_user
msg['To'] = to
msg['Cc'] = 'you@gmail.com'
msg['Subject'] = subject
msg.attach(MIMEText(text, mtype))
for a in attach:
part = MIMEBase('application', 'octet-stream')
        part.set_payload(open(a, 'rb').read())  # open each attachment, not the list
Encoders.encode_base64(part)
part.add_header('Content-Disposition','attachment; filename="%s"' % os.path.basename(a))
msg.attach(part)
try:
        mailServer = smtplib.SMTP("smtp.gmail.com", 587)  # 587 is Gmail's TLS submission port
mailServer.ehlo()
mailServer.starttls()
mailServer.ehlo()
mailServer.login(gmail_user, gmail_pwd)
mailServer.sendmail(gmail_user, [to,msg['Cc']], msg.as_string())
mailServer.close()
except:
ok = False
return ok
``` | The following format worked for me:
```
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_USE_TLS = True
EMAIL_HOST = 'mail.xxxxxxx.xxx'
EMAIL_PORT = 465
EMAIL_HOST_USER = 'support@xxxxx.xxx'
EMAIL_HOST_PASSWORD = 'xxxxxxx'
``` |
6,367,014 | In my `settings.py`, I have the following:
```
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
# Host for sending e-mail.
EMAIL_HOST = 'localhost'
# Port for sending e-mail.
EMAIL_PORT = 1025
# Optional SMTP authentication information for EMAIL_HOST.
EMAIL_HOST_USER = ''
EMAIL_HOST_PASSWORD = ''
EMAIL_USE_TLS = False
```
My email code:
```
from django.core.mail import EmailMessage
email = EmailMessage('Hello', 'World', to=['user@gmail.com'])
email.send()
```
Of course, if I setup a debugging server via `python -m smtpd -n -c DebuggingServer localhost:1025`, I can see the email in my terminal.
However, how do I actually send the email not to the debugging server but to user@gmail.com?
After reading your answers, let me get something straight:
1. Can't you use localhost (a simple Ubuntu PC) to send e-mails?
2. I thought in django 1.3 `send_mail()` is somewhat deprecated and `EmailMessage.send()` is used instead? | 2011/06/16 | [
"https://Stackoverflow.com/questions/6367014",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/749477/"
] | I use Gmail as my SMTP server for Django. Much easier than dealing with postfix or whatever other server. I'm not in the business of managing email servers.
In settings.py:
```
EMAIL_USE_TLS = True
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_PORT = 587
EMAIL_HOST_USER = 'me@gmail.com'
EMAIL_HOST_PASSWORD = 'password'
```
**NOTE**: In 2016 Gmail is not allowing this anymore by default. You can either use an external service like Sendgrid, or you can follow this tutorial from Google to reduce security but allow this option: <https://support.google.com/accounts/answer/6010255> | You could use **"Test Mail Server Tool"** to test email sending on your machine or localhost. Google and Download "Test Mail Server Tool" and set it up.
Then in your settings.py:
```
EMAIL_BACKEND= 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'localhost'
EMAIL_PORT = 25
```
From shell:
```
from django.core.mail import send_mail
send_mail('subject','message','sender email',['receipient email'], fail_silently=False)
``` |
6,367,014 | In my `settings.py`, I have the following:
```
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
# Host for sending e-mail.
EMAIL_HOST = 'localhost'
# Port for sending e-mail.
EMAIL_PORT = 1025
# Optional SMTP authentication information for EMAIL_HOST.
EMAIL_HOST_USER = ''
EMAIL_HOST_PASSWORD = ''
EMAIL_USE_TLS = False
```
My email code:
```
from django.core.mail import EmailMessage
email = EmailMessage('Hello', 'World', to=['user@gmail.com'])
email.send()
```
Of course, if I setup a debugging server via `python -m smtpd -n -c DebuggingServer localhost:1025`, I can see the email in my terminal.
However, how do I actually send the email not to the debugging server but to user@gmail.com?
After reading your answers, let me get something straight:
1. Can't you use localhost (a simple Ubuntu PC) to send e-mails?
2. I thought in django 1.3 `send_mail()` is somewhat deprecated and `EmailMessage.send()` is used instead? | 2011/06/16 | [
"https://Stackoverflow.com/questions/6367014",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/749477/"
] | I found using SendGrid to be the easiest way to set up sending email with Django. Here's how it works:
1. [Create a SendGrid account](https://app.sendgrid.com/signup) (and verify your email)
2. Add the following to your `settings.py`:
`EMAIL_HOST = 'smtp.sendgrid.net'
EMAIL_HOST_USER = '<your sendgrid username>'
EMAIL_HOST_PASSWORD = '<your sendgrid password>'
EMAIL_PORT = 587
EMAIL_USE_TLS = True`
And you're all set!
To send email:
```
from django.core.mail import send_mail
send_mail('<Your subject>', '<Your message>', 'from@example.com', ['to@example.com'])
```
If you want Django to email you whenever there's a 500 internal server error, add the following to your `settings.py`:
```
DEFAULT_FROM_EMAIL = 'your.email@example.com'
ADMINS = [('<Your name>', 'your.email@example.com')]
```
Sending email with SendGrid is free up to 12k emails per month. | The following format worked for me:
```
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_USE_TLS = True
EMAIL_HOST = 'mail.xxxxxxx.xxx'
EMAIL_PORT = 465
EMAIL_HOST_USER = 'support@xxxxx.xxx'
EMAIL_HOST_PASSWORD = 'xxxxxxx'
``` |
6,367,014 | In my `settings.py`, I have the following:
```
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
# Host for sending e-mail.
EMAIL_HOST = 'localhost'
# Port for sending e-mail.
EMAIL_PORT = 1025
# Optional SMTP authentication information for EMAIL_HOST.
EMAIL_HOST_USER = ''
EMAIL_HOST_PASSWORD = ''
EMAIL_USE_TLS = False
```
My email code:
```
from django.core.mail import EmailMessage
email = EmailMessage('Hello', 'World', to=['user@gmail.com'])
email.send()
```
Of course, if I setup a debugging server via `python -m smtpd -n -c DebuggingServer localhost:1025`, I can see the email in my terminal.
However, how do I actually send the email not to the debugging server but to user@gmail.com?
After reading your answers, let me get something straight:
1. Can't you use localhost (a simple Ubuntu PC) to send e-mails?
2. I thought in django 1.3 `send_mail()` is somewhat deprecated and `EmailMessage.send()` is used instead? | 2011/06/16 | [
"https://Stackoverflow.com/questions/6367014",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/749477/"
] | 1. Create a project: `django-admin.py startproject gmail`
2. Edit settings.py with code below:
```
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_USE_TLS = True
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_HOST_USER = 'youremail@gmail.com'
EMAIL_HOST_PASSWORD = 'email_password'
EMAIL_PORT = 587
```
3. Run interactive mode: `python manage.py shell`
4. Import the EmailMessage module:
```
from django.core.mail import EmailMessage
```
5. Send the email:
```
email = EmailMessage('Subject', 'Body', to=['your@email.com'])
email.send()
```
For more informations, check [`send_mail`](https://docs.djangoproject.com/en/2.1/topics/email/#send-mail) and [`EmailMessage`](https://docs.djangoproject.com/en/2.1/topics/email/#the-emailmessage-class) features in [documents](https://docs.djangoproject.com/en/2.1/topics/email/).
**UPDATE for Gmail**
Also if you have problems sending email via gmail remember to check [this guides](https://support.google.com/mail/answer/7126229?visit_id=1-636656345878819046-1400238651&rd=1#cantsignin) from google.
In your Google account settings, go to `Security > Account permissions > Access for less secure apps` and enable this option.
Also [create an App specific password for your gmail](https://www.lifewire.com/get-a-password-to-access-gmail-by-pop-imap-2-1171882) after you've [turned on *2-step-verification*](https://support.google.com/accounts/answer/185839?authuser=2) for it.
Then you should use *app specific password* in settings. So change the following line:
```
EMAIL_HOST_PASSWORD = 'your_email_app_specific_password'
```
Also if you're interested to send HTML email, [check this out](https://stackoverflow.com/a/53494247/2263683). | You could use **"Test Mail Server Tool"** to test email sending on your machine or localhost. Google and Download "Test Mail Server Tool" and set it up.
Then in your settings.py:
```
EMAIL_BACKEND= 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'localhost'
EMAIL_PORT = 25
```
From shell:
```
from django.core.mail import send_mail
send_mail('subject','message','sender email',['receipient email'], fail_silently=False)
``` |
6,367,014 | In my `settings.py`, I have the following:
```
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
# Host for sending e-mail.
EMAIL_HOST = 'localhost'
# Port for sending e-mail.
EMAIL_PORT = 1025
# Optional SMTP authentication information for EMAIL_HOST.
EMAIL_HOST_USER = ''
EMAIL_HOST_PASSWORD = ''
EMAIL_USE_TLS = False
```
My email code:
```
from django.core.mail import EmailMessage
email = EmailMessage('Hello', 'World', to=['user@gmail.com'])
email.send()
```
Of course, if I setup a debugging server via `python -m smtpd -n -c DebuggingServer localhost:1025`, I can see the email in my terminal.
However, how do I actually send the email not to the debugging server but to user@gmail.com?
After reading your answers, let me get something straight:
1. Can't you use localhost (a simple Ubuntu PC) to send e-mails?
2. I thought in django 1.3 `send_mail()` is somewhat deprecated and `EmailMessage.send()` is used instead? | 2011/06/16 | [
"https://Stackoverflow.com/questions/6367014",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/749477/"
] | For Django version 1.7, if above solutions dont work then try the following
in **settings.py** add
```
#For email
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_USE_TLS = True
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_HOST_USER = 'sender@gmail.com'
#Must generate specific password for your app in [gmail settings][1]
EMAIL_HOST_PASSWORD = 'app_specific_password'
EMAIL_PORT = 587
#This did the trick
DEFAULT_FROM_EMAIL = EMAIL_HOST_USER
```
The last line did the trick for django 1.7 | You could use **"Test Mail Server Tool"** to test email sending on your machine or localhost. Google and Download "Test Mail Server Tool" and set it up.
Then in your settings.py:
```
EMAIL_BACKEND= 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'localhost'
EMAIL_PORT = 25
```
From shell:
```
from django.core.mail import send_mail
send_mail('subject','message','sender email',['receipient email'], fail_silently=False)
``` |
6,367,014 | In my `settings.py`, I have the following:
```
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
# Host for sending e-mail.
EMAIL_HOST = 'localhost'
# Port for sending e-mail.
EMAIL_PORT = 1025
# Optional SMTP authentication information for EMAIL_HOST.
EMAIL_HOST_USER = ''
EMAIL_HOST_PASSWORD = ''
EMAIL_USE_TLS = False
```
My email code:
```
from django.core.mail import EmailMessage
email = EmailMessage('Hello', 'World', to=['user@gmail.com'])
email.send()
```
Of course, if I setup a debugging server via `python -m smtpd -n -c DebuggingServer localhost:1025`, I can see the email in my terminal.
However, how do I actually send the email not to the debugging server but to user@gmail.com?
After reading your answers, let me get something straight:
1. Can't you use localhost (a simple Ubuntu PC) to send e-mails?
2. I thought in django 1.3 `send_mail()` is somewhat deprecated and `EmailMessage.send()` is used instead? | 2011/06/16 | [
"https://Stackoverflow.com/questions/6367014",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/749477/"
] | 1. Create a project: `django-admin.py startproject gmail`
2. Edit settings.py with code below:
```
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_USE_TLS = True
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_HOST_USER = 'youremail@gmail.com'
EMAIL_HOST_PASSWORD = 'email_password'
EMAIL_PORT = 587
```
3. Run interactive mode: `python manage.py shell`
4. Import the EmailMessage module:
```
from django.core.mail import EmailMessage
```
5. Send the email:
```
email = EmailMessage('Subject', 'Body', to=['your@email.com'])
email.send()
```
For more informations, check [`send_mail`](https://docs.djangoproject.com/en/2.1/topics/email/#send-mail) and [`EmailMessage`](https://docs.djangoproject.com/en/2.1/topics/email/#the-emailmessage-class) features in [documents](https://docs.djangoproject.com/en/2.1/topics/email/).
**UPDATE for Gmail**
Also if you have problems sending email via gmail remember to check [this guides](https://support.google.com/mail/answer/7126229?visit_id=1-636656345878819046-1400238651&rd=1#cantsignin) from google.
In your Google account settings, go to `Security > Account permissions > Access for less secure apps` and enable this option.
Also [create an App specific password for your gmail](https://www.lifewire.com/get-a-password-to-access-gmail-by-pop-imap-2-1171882) after you've [turned on *2-step-verification*](https://support.google.com/accounts/answer/185839?authuser=2) for it.
Then you should use *app specific password* in settings. So change the following line:
```
EMAIL_HOST_PASSWORD = 'your_email_app_specific_password'
```
Also if you're interested to send HTML email, [check this out](https://stackoverflow.com/a/53494247/2263683). | I had actually done this from Django a while back. Open up a legitimate GMail account & enter the credentials here. Here's my code -
```
import os
import smtplib
from django.conf import settings
from email import Encoders
from email.MIMEBase import MIMEBase
from email.MIMEText import MIMEText
from email.MIMEMultipart import MIMEMultipart
def sendmail(to, subject, text, attach=[], mtype='html'):
ok = True
gmail_user = settings.EMAIL_HOST_USER
gmail_pwd = settings.EMAIL_HOST_PASSWORD
msg = MIMEMultipart('alternative')
msg['From'] = gmail_user
msg['To'] = to
msg['Cc'] = 'you@gmail.com'
msg['Subject'] = subject
msg.attach(MIMEText(text, mtype))
for a in attach:
part = MIMEBase('application', 'octet-stream')
        part.set_payload(open(a, 'rb').read())  # open each attachment, not the list
Encoders.encode_base64(part)
part.add_header('Content-Disposition','attachment; filename="%s"' % os.path.basename(a))
msg.attach(part)
try:
        mailServer = smtplib.SMTP("smtp.gmail.com", 587)  # 587 is Gmail's TLS submission port
mailServer.ehlo()
mailServer.starttls()
mailServer.ehlo()
mailServer.login(gmail_user, gmail_pwd)
mailServer.sendmail(gmail_user, [to,msg['Cc']], msg.as_string())
mailServer.close()
except:
ok = False
return ok
``` |
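For local development, an alternative to the `DebuggingServer` mentioned in the question is Django's built-in console email backend, which prints outgoing messages to stdout instead of delivering them. A minimal settings sketch (standard Django setting name):

```python
# settings.py (development only): write outgoing email to the console
# instead of an SMTP server.
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
```

With this backend, `send_mail()` and `EmailMessage.send()` work unchanged, but nothing leaves the machine.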