qid (int64) | question (string) | date (string) | metadata (sequence) | response_j (string) | response_k (string) |
---|---|---|---|---|---|
9,696,495 | My code :
```
locs = [ [1], [2] ]
for loc in locs:
    loc = []
print locs
# prints => [ [1], [2] ]
```
Why is `loc` not a reference to the elements of `locs`?
Python: everything is passed as a reference unless explicitly copied [is this not true?]
Please explain: how does Python decide between **referencing and copying**?
**Update:**
How can I do the following?
```
def compute(ob):
    if isinstance(ob, list): return process_list(ob)
    if isinstance(ob, dict): return process_dict(ob)
for loc in locs:
    loc = compute(loc)  # What to change here to make loc a reference of actual locs iteration ?
```
* locs must contain the final processed response !
* I don't want to use `enumerate`, is it possible without it ? | 2012/03/14 | [
"https://Stackoverflow.com/questions/9696495",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/731963/"
] | Effbot (aka Fredrik Lundh) has described Python's variable passing style as call-by-object: <http://effbot.org/zone/call-by-object.htm>
Objects are allocated on the heap and pointers to them can be passed around anywhere.
* When you make an assignment such as `x = 1000`, a dictionary entry is created that maps the string "x" in the current namespace to a pointer to the integer object containing one thousand.
* When you update "x" with `x = 2000`, a new integer object is created and the dictionary is updated to point at the new object. The old one thousand object is unchanged (and may or may not be alive depending on whether anything else refers to the object).
* When you do a new assignment such as `y = x`, a new dictionary entry "y" is created that points to the same object as the entry for "x".
* Objects like strings and integers are *immutable*. This simply means that there are no methods that can change the object after it has been created. For example, once the integer object one-thousand is created, it will never change. Math is done by creating new integer objects.
* Objects like lists are *mutable*. This means that the contents of the object can be changed by anything pointing to the object. For example, `x = []; y = x; x.append(10); print y` will print `[10]`. The empty list was created. Both "x" and "y" point to the same list. The *append* method mutates (updates) the list object (like adding a record to a database) and the result is visible to both "x" and "y" (just as a database update would be visible to every connection to that database).
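To make the rebinding-versus-mutation distinction concrete, here is a minimal sketch (added for illustration; it is not part of the original answer):
```
x = [1, 2]
y = x          # "y" now names the same list object as "x"
y.append(3)    # mutation: the shared object changes
print(x)       # [1, 2, 3]

y = [9]        # rebinding: "y" now names a different object; "x" is untouched
print(x)       # still [1, 2, 3]
```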
Hope that clarifies the issue for you. | It doesn't help in Python to think in terms of references or values. Neither is correct.
In Python, variables are just names. In your for loop, `loc` is just a name that points to the current element in the list. Doing `loc = []` simply *rebinds* the name `loc` to a different list, leaving the original version alone.
But since in your example, each element is a list, you could actually *mutate* that element, and that would be reflected in the original list:
```
for loc in locs:
    loc[0] = loc[0] * 2
``` |
9,696,495 | My code :
```
locs = [ [1], [2] ]
for loc in locs:
    loc = []
print locs
# prints => [ [1], [2] ]
```
Why is `loc` not a reference to the elements of `locs`?
Python: everything is passed as a reference unless explicitly copied [is this not true?]
Please explain: how does Python decide between **referencing and copying**?
**Update:**
How can I do the following?
```
def compute(ob):
    if isinstance(ob, list): return process_list(ob)
    if isinstance(ob, dict): return process_dict(ob)
for loc in locs:
    loc = compute(loc)  # What to change here to make loc a reference of actual locs iteration ?
```
* locs must contain the final processed response !
* I don't want to use `enumerate`, is it possible without it ? | 2012/03/14 | [
"https://Stackoverflow.com/questions/9696495",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/731963/"
] | Everything in Python is passed and assigned by value, in the same way that everything is passed and assigned by value in Java. Every value in Python is a reference (pointer) to an object. Objects cannot be values. Assignment always copies the value (which is a pointer); two such pointers can thus point to the same object. Objects are never copied unless you're doing something explicit to copy them.
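As a small illustration of the "explicit copy" point (a sketch added here, not part of the original answer):
```
import copy

a = [[1], [2]]
b = a                  # no copy: "b" names the same object as "a"
c = copy.deepcopy(a)   # explicit copy: a brand-new structure

a[0].append(99)
print(a[0] is b[0])    # True  -- "a" and "b" share the same inner list
print(c[0])            # [1]   -- the deep copy is unaffected
```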
For your case, every iteration of the loop assigns an element of the list into the variable `loc`. You then assign something else to the variable `loc`. All these values are pointers; you're assigning pointers; but you do not affect any objects in any way. | >
> Why is loc not reference of elements of locs ?
>
>
>
**It is**. Or at least, it is in the same sense that every other variable in Python is. Python variables are **names, not storage**. `loc` is a name that is used to refer to elements of `[[1,2], [3,4]]`, while `locs` is a name that refers to the entire structure.
```
loc = []
```
This **does not mean** "look at the thing that `loc` names, and cause it to turn into `[]`". It **cannot** mean that, because Python objects are **not capable** of such a thing.
Instead, it means "cause `loc` to stop being a name for the thing that it's currently a name for, and start instead being a name for `[]`". (Of course, it means the specific `[]` that's provided there, since in general there may be several objects in memory that are the same.)
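Addressing the question's **Update** (getting the processed results back into `locs` without `enumerate`), one hedged sketch is to assign back through the list object itself; `compute` is the asker's own function and is assumed to exist:
```
# slice assignment mutates the existing list object in place
locs[:] = [compute(loc) for loc in locs]

# or, with an explicit index loop (no enumerate needed)
for i in range(len(locs)):
    locs[i] = compute(locs[i])
```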
With the plain rebinding in the original loop, by contrast, the contents of `locs` are naturally unchanged. |
11,157,157 | This is a duplicate of my [question on the SWIG mailing list](http://sourceforge.net/mailarchive/forum.php?thread_name=102641339869909@webcorp7.yandex-team.ru&forum_name=swig-user).
I am trying to use stl containers in my SWIG bindings. Everything works perfectly except for stl map handling in Perl. On C++ side, I have
```
std::map<std::string, std::string> TryMap(const std::map<std::string, std::string> &map) {
  std::map<std::string, std::string> modified(map);
  modified["7"] = "!";
  return modified;
}
```
The SWIG config looks like this:
```
%module stl
%include "std_string.i"
%include "std_map.i"
%template(StringStringMap) std::map<std::string, std::string>;
%{
#include "stl.h"
%}
%include "stl.h"
```
In my Python script I can call TryMap this way
```
print dict(stl.TryMap({'a': '4'}))
```
and get beautiful output
```
{'a': '4', '7': '!'}
```
but in Perl I call
```
print Dumper stl::TryMap({'a' => '4'});
```
and get an error
```
TypeError in method 'TryMap', argument 1 of type 'std::map< std::string,std::string > const &' at perl.pl line 7.
```
I can actually do something like
```
my $map = stl::TryMap(stl::StringStringMap->new());
print $map->get('7');
```
and get '!', but this is not an option because there is a lot of legacy code using "TryMap" that expects a normal Perl hash as its output.
I believe there is a way to work this out, because SWIG solves this particular problem nicely in Python, and even in Perl if I use STL vectors and strings, but not maps.
Is there any way to handle an STL map with Perl in SWIG? I am using the latest SWIG 2.0.7.
**UPDATE** Maybe there is something wrong with `perl5/std_map.i`. It is too short =)
```
$ wc -l perl5/std_map.i python/std_map.i
74 perl5/std_map.i
305 python/std_map.i
``` | 2012/06/22 | [
"https://Stackoverflow.com/questions/11157157",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/482770/"
] | I put your C++ function into a header file as an inline function for testing.
I was then able to construct a SWIG interface that does what you are looking for. It has two key parts. Firstly I wrote a typemap that will allow either a `std::map`, *or* a perl hash to be given as input to C++ functions that expect a `std::map`. In the case of the latter it builds a temporary map from the perl hash to use as the argument. (Which is convenient but potentially slow). The typemap picks the correct behaviour by checking what it was actually passed in.
The second part of the solution is to map some of the C++ map's member functions onto the special functions that perl uses for overloading operations on hashes. Most of these are implemented simply with `%rename` where the C++ function and perl functions are compatible however `FIRSTKEY` and `NEXTKEY` don't map well onto C++'s iterators, so these were implemented using `%extend` and (internally) another `std::map` to store the iteration state of the maps we're wrapping.
There are no special typemaps implemented here for returning the maps, however there is extra behaviour via the special operations that are now implemented.
The SWIG interface looks like:
```c
%module stl
%include <std_string.i>
%include <exception.i>
%rename(FETCH) std::map<std::string, std::string>::get;
%rename(STORE) std::map<std::string, std::string>::set;
%rename(EXISTS) std::map<std::string, std::string>::has_key;
%rename(DELETE) std::map<std::string, std::string>::del;
%rename(SCALAR) std::map<std::string, std::string>::size;
%rename(CLEAR) std::map<std::string, std::string>::clear;
%{
#include <map>
#include <string>
// For iteration support, will leak if iteration stops before the end ever.
static std::map<void*, std::map<std::string, std::string>::const_iterator> iterstate;
const char *current(std::map<std::string, std::string>& map) {
std::map<void*, std::map<std::string, std::string>::const_iterator>::iterator it = iterstate.find(&map);
if (it != iterstate.end() && map.end() == it->second) {
// clean up entry in the global map
iterstate.erase(it);
it = iterstate.end();
}
if (it == iterstate.end())
return NULL;
else
return it->second->first.c_str();
}
%}
%extend std::map<std::string, std::string> {
std::map<std::string, std::string> *TIEHASH() {
return $self;
}
const char *FIRSTKEY() {
iterstate[$self] = $self->begin();
return current(*$self);
}
const char *NEXTKEY(const std::string&) {
++iterstate[$self];
return current(*$self);
}
}
%include <std_map.i>
%typemap(in,noblock=1) const std::map<std::string, std::string>& (void *argp=0, int res=0, $1_ltype tempmap=0) {
res = SWIG_ConvertPtr($input, &argp, $descriptor, %convertptr_flags);
if (!SWIG_IsOK(res)) {
if (SvROK($input) && SvTYPE(SvRV($input)) == SVt_PVHV) {
fprintf(stderr, "Convert HV to map\n");
tempmap = new $1_basetype;
HV *hv = (HV*)SvRV($input);
HE *hentry;
hv_iterinit(hv);
while ((hentry = hv_iternext(hv))) {
std::string *val=0;
// TODO: handle errors here
SWIG_AsPtr_std_string SWIG_PERL_CALL_ARGS_2(HeVAL(hentry), &val);
fprintf(stderr, "%s => %s\n", HeKEY(hentry), val->c_str());
(*tempmap)[HeKEY(hentry)] = *val;
delete val;
}
argp = tempmap;
}
else {
%argument_fail(res, "$type", $symname, $argnum);
}
}
if (!argp) { %argument_nullref("$type", $symname, $argnum); }
$1 = %reinterpret_cast(argp, $ltype);
}
%typemap(freearg,noblock=1) const std::map<std::string, std::string>& {
delete tempmap$argnum;
}
%template(StringStringMap) std::map<std::string, std::string>;
%{
#include "stl.h"
%}
%include "stl.h"
```
I then adapted your sample perl to test:
```perl
use Data::Dumper;
use stl;
my $v = stl::TryMap(stl::StringStringMap->new());
$v->{'a'} = '1';
print Dumper $v;
print Dumper stl::TryMap({'a' => '4'});
print Dumper stl::TryMap($v);
foreach my $key (keys %{$v}) {
print "$key => $v->{$key}\n";
}
print $v->{'7'}."\n";
```
Which I was able to run successfully:
```none
Got map: 0x22bfb80
$VAR1 = bless( {
'7' => '!',
'a' => '1'
}, 'stl::StringStringMap' );
Convert HV to map
a => 4
Got map: 0x22af710
In C++ map: a => 4
$VAR1 = bless( {
'7' => '!',
'a' => '4'
}, 'stl::StringStringMap' );
Got map: 0x22bfb20
In C++ map: 7 => !
In C++ map: a => 1
$VAR1 = bless( {
'7' => '!',
'a' => '1'
}, 'stl::StringStringMap' );
7 => !
a => 1
!
```
You can also tie this object to a hash, for example:
```perl
use stl;
my $v = stl::TryMap(stl::StringStringMap->new());
print "$v\n";
tie %foo, "stl::StringStringMap", $v;
print $foo{'a'}."\n";
print tied(%foo)."\n";
```
In theory you can write an out typemap to set up this tie automatically on return from every function call, but so far I've not succeeded in writing a typemap that works with both the tying and the SWIG runtime type system.
It should be noted that this isn't production ready code. There's a thread safety issue for the internal map and some error handling missing too that I know of. I've also not fully tested all of hash operations work from the perl side beyond what you see above. It would also be nice to make it more generic, by interacting with the `swig_map_common` macro. Finally I'm not a perl guru by any means and I've not used the C API much so some caution in that area would be in order. | Have you tried:
```
print Dumper stl::TryMap(('a' => '4'));
``` |
56,289,389 | I have a pandas dataframe:
```
df2 = pd.DataFrame({'c':[1,1,1,2,2,2,2,3],
'type':['m','n','o','m','m','n','n', 'p']})
```
And I would like to find which values of `c` have more than one unique type and for those return the `c` value, the number of unique types and all the unique types concatenated in one string.
I have used those two questions to get so far:
[pandas add column to groupby dataframe](https://stackoverflow.com/questions/37189878/pandas-add-column-to-groupby-dataframe)
[Python Pandas: concatenate rows with unique values](https://stackoverflow.com/questions/27174009/python-pandas-concatenate-rows-with-unique-values)
```
df2['Unique counts'] = df2.groupby('c')['type'].transform('nunique')
df2[df2['Unique counts'] > 1].groupby(['c', 'Unique counts']).\
agg(lambda x: '-'.join(x))
Out[226]:
type
c Unique counts
1 3 m-n-o
2 2 m-m-n-n
```
This works, but I cannot get the unique values (so, for example, in the second row I would like to have only one `m` and one `n`).
My questions would be the following:
1. Can I skip the in between step for creating the 'Unique counts' and
create something temporary?
2. How can I filter for only unique values
in the second step? | 2019/05/24 | [
"https://Stackoverflow.com/questions/56289389",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4767610/"
] | A solution that removes the groups with only one unique type first and then counts values - create a helper Series `s`, and use a `set` so only unique strings are joined:
```
s= df2.groupby('c')['type'].transform('nunique').rename('Unique counts')
a = df2[s > 1].groupby(['c', s]).agg(lambda x: '-'.join(set(x)))
print (a)
type
c Unique counts
1 3 o-m-n
2 2 m-n
```
Another idea is removing duplicates first by [`DataFrame.duplicated`](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.duplicated.html):
```
df3 = df2[df2.duplicated(['c'],keep=False) & ~df2.duplicated(['c','type'])]
print (df3)
c type
0 1 m
1 1 n
2 1 o
3 2 m
5 2 n
```
And then aggregate counts with join:
```
a = df3.groupby('c')['type'].agg([('Unique Counts', 'size'), ('Type', '-'.join)])
print (a)
Unique Counts Type
c
1 3 m-n-o
2 2 m-n
```
---
Or, if you need all `c` values, aggregate first:
```
df4 = df2.groupby('c')['type'].agg([('Unique Counts', 'nunique'),
('Type', lambda x: '-'.join(set(x)))])
print (df4)
Unique Counts Type
c
1 3 o-m-n
2 2 m-n
3 1 p
```
And last remove unique rows by [`boolean indexing`](http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing):
```
df5 = df4[df4['Unique Counts'] > 1]
print (df5)
Unique Counts Type
c
1 3 o-m-n
2 2 m-n
``` | Use [`DataFrame.groupby.agg`](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.core.groupby.DataFrameGroupBy.agg.html) and pass `tuple`'s of `(column name, function)`:
```
df2.groupby('c')['type'].agg([('Unique Counts', 'nunique'), ('Type', lambda x: '-'.join(x.unique()))])
```
[out]
```
Unique Counts Type
c
1 3 m-n-o
2 2 m-n
3 1 p
``` |
56,289,389 | I have a pandas dataframe:
```
df2 = pd.DataFrame({'c':[1,1,1,2,2,2,2,3],
'type':['m','n','o','m','m','n','n', 'p']})
```
And I would like to find which values of `c` have more than one unique type and for those return the `c` value, the number of unique types and all the unique types concatenated in one string.
I have used those two questions to get so far:
[pandas add column to groupby dataframe](https://stackoverflow.com/questions/37189878/pandas-add-column-to-groupby-dataframe)
[Python Pandas: concatenate rows with unique values](https://stackoverflow.com/questions/27174009/python-pandas-concatenate-rows-with-unique-values)
```
df2['Unique counts'] = df2.groupby('c')['type'].transform('nunique')
df2[df2['Unique counts'] > 1].groupby(['c', 'Unique counts']).\
agg(lambda x: '-'.join(x))
Out[226]:
type
c Unique counts
1 3 m-n-o
2 2 m-m-n-n
```
This works, but I cannot get the unique values (so, for example, in the second row I would like to have only one `m` and one `n`).
My questions would be the following:
1. Can I skip the in between step for creating the 'Unique counts' and
create something temporary?
2. How can I filter for only unique values
in the second step? | 2019/05/24 | [
"https://Stackoverflow.com/questions/56289389",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4767610/"
] | A solution that removes the groups with only one unique type first and then counts values - create a helper Series `s`, and use a `set` so only unique strings are joined:
```
s= df2.groupby('c')['type'].transform('nunique').rename('Unique counts')
a = df2[s > 1].groupby(['c', s]).agg(lambda x: '-'.join(set(x)))
print (a)
type
c Unique counts
1 3 o-m-n
2 2 m-n
```
Another idea is removing duplicates first by [`DataFrame.duplicated`](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.duplicated.html):
```
df3 = df2[df2.duplicated(['c'],keep=False) & ~df2.duplicated(['c','type'])]
print (df3)
c type
0 1 m
1 1 n
2 1 o
3 2 m
5 2 n
```
And then aggregate counts with join:
```
a = df3.groupby('c')['type'].agg([('Unique Counts', 'size'), ('Type', '-'.join)])
print (a)
Unique Counts Type
c
1 3 m-n-o
2 2 m-n
```
---
Or, if you need all `c` values, aggregate first:
```
df4 = df2.groupby('c')['type'].agg([('Unique Counts', 'nunique'),
('Type', lambda x: '-'.join(set(x)))])
print (df4)
Unique Counts Type
c
1 3 o-m-n
2 2 m-n
3 1 p
```
And last remove unique rows by [`boolean indexing`](http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing):
```
df5 = df4[df4['Unique Counts'] > 1]
print (df5)
Unique Counts Type
c
1 3 o-m-n
2 2 m-n
``` | Use [`groupby.agg`](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.core.groupby.DataFrameGroupBy.agg.html) and filter on `Unique counts` column as you want:
```
df2 = (df2.groupby('c', as_index=False)
.agg({'type': ['nunique', lambda x: '-'.join(np.unique(x))]}))
df2.columns = ['c','Unique counts','type']
print(df2)
c Unique counts type
0 1 3 m-n-o
1 2 2 m-n
2 3 1 p
```
Filtering on `Unique counts`:
```
df2 = df2.loc[df2['Unique counts']>1,:]
print(df2)
c Unique counts type
0 1 3 m-n-o
1 2 2 m-n
``` |
56,289,389 | I have a pandas dataframe:
```
df2 = pd.DataFrame({'c':[1,1,1,2,2,2,2,3],
'type':['m','n','o','m','m','n','n', 'p']})
```
And I would like to find which values of `c` have more than one unique type and for those return the `c` value, the number of unique types and all the unique types concatenated in one string.
I have used those two questions to get so far:
[pandas add column to groupby dataframe](https://stackoverflow.com/questions/37189878/pandas-add-column-to-groupby-dataframe)
[Python Pandas: concatenate rows with unique values](https://stackoverflow.com/questions/27174009/python-pandas-concatenate-rows-with-unique-values)
```
df2['Unique counts'] = df2.groupby('c')['type'].transform('nunique')
df2[df2['Unique counts'] > 1].groupby(['c', 'Unique counts']).\
agg(lambda x: '-'.join(x))
Out[226]:
type
c Unique counts
1 3 m-n-o
2 2 m-m-n-n
```
This works, but I cannot get the unique values (so, for example, in the second row I would like to have only one `m` and one `n`).
My questions would be the following:
1. Can I skip the in between step for creating the 'Unique counts' and
create something temporary?
2. How can I filter for only unique values
in the second step? | 2019/05/24 | [
"https://Stackoverflow.com/questions/56289389",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4767610/"
] | Use [`DataFrame.groupby.agg`](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.core.groupby.DataFrameGroupBy.agg.html) and pass `tuple`'s of `(column name, function)`:
```
df2.groupby('c')['type'].agg([('Unique Counts', 'nunique'), ('Type', lambda x: '-'.join(x.unique()))])
```
[out]
```
Unique Counts Type
c
1 3 m-n-o
2 2 m-n
3 1 p
``` | Use [`groupby.agg`](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.core.groupby.DataFrameGroupBy.agg.html) and filter on `Unique counts` column as you want:
```
df2 = (df2.groupby('c', as_index=False)
.agg({'type': ['nunique', lambda x: '-'.join(np.unique(x))]}))
df2.columns = ['c','Unique counts','type']
print(df2)
c Unique counts type
0 1 3 m-n-o
1 2 2 m-n
2 3 1 p
```
Filtering on `Unique counts`:
```
df2 = df2.loc[df2['Unique counts']>1,:]
print(df2)
c Unique counts type
0 1 3 m-n-o
1 2 2 m-n
``` |
59,506,553 | I installed Firefox and using Ubuntu 18.04.
```py
from splinter import Browser
with Browser() as browser:
    # Visit URL
    url = "http://www.google.com"
    browser.visit(url)
```
Results in:
```py
Traceback (most recent call last):
File "test.py", line 3, in <module>
with Browser() as browser:
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 90, in Browser
return get_driver(driver, *args, **kwargs)
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 68, in get_driver
raise e
UnboundLocalError: local variable 'e' referenced before assignment
Traceback (most recent call last):
File "test.py", line 3, in <module>
with Browser() as browser:
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 90, in Browser
return get_driver(driver, *args, **kwargs)
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 68, in get_driver
raise e
UnboundLocalError: local variable 'e' referenced before assignment
Traceback (most recent call last):
File "test.py", line 3, in <module>
with Browser() as browser:
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 90, in Browser
return get_driver(driver, *args, **kwargs)
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 68, in get_driver
raise e
UnboundLocalError: local variable 'e' referenced before assignment
```
I'm not sure how to solve this problem. I checked the documentation from Splinter but there is no hint for this error.
What am I doing wrong?
After updating the Splinter lib:
```
Traceback (most recent call last):
File "test.py", line 3, in <module>
with Browser() as browser:
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 92, in Browser
return get_driver(driver, *args, **kwargs)
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 70, in get_driver
raise err
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 66, in get_driver
return driver(*args, **kwargs)
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/driver/webdriver/firefox.py", line 88, in __init__
**kwargs
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/selenium/webdriver/firefox/webdriver.py", line 164, in __init__
self.service.start()
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/selenium/webdriver/common/service.py", line 83, in start
os.path.basename(self.path), self.start_error_message)
selenium.common.exceptions.WebDriverException: Message: 'geckodriver' executable needs to be in PATH.
``` | 2019/12/27 | [
"https://Stackoverflow.com/questions/59506553",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12610119/"
] | Finally it works ! I had to update the Splinter lib from Github and put the Geckodriver file into the /usr/bin | I had the same issues. I reverted back to splinter-0.11.0.
To expand on the last issue for Windows:
Download the appropriate geckodriver (<https://github.com/mozilla/geckodriver/releases>) and put the geckodriver.exe file into the location your PATH variable is referring to. |
59,506,553 | I installed Firefox and using Ubuntu 18.04.
```py
from splinter import Browser
with Browser() as browser:
    # Visit URL
    url = "http://www.google.com"
    browser.visit(url)
```
Results in:
```py
Traceback (most recent call last):
File "test.py", line 3, in <module>
with Browser() as browser:
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 90, in Browser
return get_driver(driver, *args, **kwargs)
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 68, in get_driver
raise e
UnboundLocalError: local variable 'e' referenced before assignment
Traceback (most recent call last):
File "test.py", line 3, in <module>
with Browser() as browser:
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 90, in Browser
return get_driver(driver, *args, **kwargs)
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 68, in get_driver
raise e
UnboundLocalError: local variable 'e' referenced before assignment
Traceback (most recent call last):
File "test.py", line 3, in <module>
with Browser() as browser:
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 90, in Browser
return get_driver(driver, *args, **kwargs)
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 68, in get_driver
raise e
UnboundLocalError: local variable 'e' referenced before assignment
```
I'm not sure how to solve this problem. I checked the documentation from Splinter but there is no hint for this error.
What am I doing wrong?
After updating the Splinter lib:
```
Traceback (most recent call last):
File "test.py", line 3, in <module>
with Browser() as browser:
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 92, in Browser
return get_driver(driver, *args, **kwargs)
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 70, in get_driver
raise err
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 66, in get_driver
return driver(*args, **kwargs)
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/driver/webdriver/firefox.py", line 88, in __init__
**kwargs
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/selenium/webdriver/firefox/webdriver.py", line 164, in __init__
self.service.start()
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/selenium/webdriver/common/service.py", line 83, in start
os.path.basename(self.path), self.start_error_message)
selenium.common.exceptions.WebDriverException: Message: 'geckodriver' executable needs to be in PATH.
``` | 2019/12/27 | [
"https://Stackoverflow.com/questions/59506553",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12610119/"
] | Finally it works ! I had to update the Splinter lib from Github and put the Geckodriver file into the /usr/bin | Using Chrome on Windows 10, I made two edits and I'm not sure which one solved the error.
I downgraded my chrome driver version (to one lower than my current version).
I added this line of code.
```
executable_path = {'executable_path': '/usr/bin/chromedriver'}
browser = Browser('chrome', **executable_path, headless=False)
``` |
59,506,553 | I installed Firefox and using Ubuntu 18.04.
```py
from splinter import Browser
with Browser() as browser:
    # Visit URL
    url = "http://www.google.com"
    browser.visit(url)
```
Results in:
```py
Traceback (most recent call last):
File "test.py", line 3, in <module>
with Browser() as browser:
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 90, in Browser
return get_driver(driver, *args, **kwargs)
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 68, in get_driver
raise e
UnboundLocalError: local variable 'e' referenced before assignment
Traceback (most recent call last):
File "test.py", line 3, in <module>
with Browser() as browser:
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 90, in Browser
return get_driver(driver, *args, **kwargs)
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 68, in get_driver
raise e
UnboundLocalError: local variable 'e' referenced before assignment
Traceback (most recent call last):
File "test.py", line 3, in <module>
with Browser() as browser:
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 90, in Browser
return get_driver(driver, *args, **kwargs)
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 68, in get_driver
raise e
UnboundLocalError: local variable 'e' referenced before assignment
```
I'm not sure how to solve this problem. I checked the documentation from Splinter but there is no hint for this error.
What am I doing wrong?
After updating the Splinter lib:
```
Traceback (most recent call last):
File "test.py", line 3, in <module>
with Browser() as browser:
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 92, in Browser
return get_driver(driver, *args, **kwargs)
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 70, in get_driver
raise err
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 66, in get_driver
return driver(*args, **kwargs)
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/driver/webdriver/firefox.py", line 88, in __init__
**kwargs
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/selenium/webdriver/firefox/webdriver.py", line 164, in __init__
self.service.start()
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/selenium/webdriver/common/service.py", line 83, in start
os.path.basename(self.path), self.start_error_message)
selenium.common.exceptions.WebDriverException: Message: 'geckodriver' executable needs to be in PATH.
``` | 2019/12/27 | [
"https://Stackoverflow.com/questions/59506553",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12610119/"
First, check your Chrome version by going to "Help - About Google Chrome".
Second, go to <https://chromedriver.chromium.org/> and download the version that matches your current Google Chrome.
Third, put the chromedriver into your bin directory.
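Once the matching chromedriver is on your PATH, a minimal usage sketch (mirroring the splinter calls used elsewhere in this thread) looks like:
```
from splinter import Browser

# assumes the matching chromedriver is now discoverable on PATH
browser = Browser('chrome', headless=False)
browser.visit('http://www.google.com')
browser.quit()
```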
For Mac-users, open Finder, then do shift+command+G, and then type "/usr/local/bin/" | I had the same issues. I reverted back to splinter-0.11.0.
To expand on the last issue for Windows:
Download the appropriate geckodriver (<https://github.com/mozilla/geckodriver/releases>) and put the geckodriver.exe file into the location your PATH variable is referring to. |
59,506,553 | I installed Firefox and using Ubuntu 18.04.
```py
from splinter import Browser
with Browser() as browser:
    # Visit URL
    url = "http://www.google.com"
    browser.visit(url)
```
Results in:
```py
Traceback (most recent call last):
File "test.py", line 3, in <module>
with Browser() as browser:
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 90, in Browser
return get_driver(driver, *args, **kwargs)
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 68, in get_driver
raise e
UnboundLocalError: local variable 'e' referenced before assignment
Traceback (most recent call last):
File "test.py", line 3, in <module>
with Browser() as browser:
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 90, in Browser
return get_driver(driver, *args, **kwargs)
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 68, in get_driver
raise e
UnboundLocalError: local variable 'e' referenced before assignment
Traceback (most recent call last):
File "test.py", line 3, in <module>
with Browser() as browser:
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 90, in Browser
return get_driver(driver, *args, **kwargs)
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 68, in get_driver
raise e
UnboundLocalError: local variable 'e' referenced before assignment
```
I'm not sure how to solve this problem. I checked the documentation from Splinter but there is no hint for this error.
What am I doing wrong?
After updating the Splinter lib:
```
Traceback (most recent call last):
File "test.py", line 3, in <module>
with Browser() as browser:
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 92, in Browser
return get_driver(driver, *args, **kwargs)
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 70, in get_driver
raise err
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 66, in get_driver
return driver(*args, **kwargs)
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/driver/webdriver/firefox.py", line 88, in __init__
**kwargs
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/selenium/webdriver/firefox/webdriver.py", line 164, in __init__
self.service.start()
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/selenium/webdriver/common/service.py", line 83, in start
os.path.basename(self.path), self.start_error_message)
selenium.common.exceptions.WebDriverException: Message: 'geckodriver' executable needs to be in PATH.
``` | 2019/12/27 | [
"https://Stackoverflow.com/questions/59506553",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12610119/"
] | I had the same issue. I realized that the pip install of splinter had a bug in the `get_driver()` function in the splinter/browser.py file.
The pip-installed version of splinter that gave the error `UnboundLocalError: local variable 'e' referenced before assignment`:
```
def get_driver(driver, retry_count=3, *args, **kwargs):
"""Try to instantiate the driver.
Common selenium errors are caught and a retry attempt occurs.
This can mitigate issues running on Remote WebDriver.
"""
for _ in range(retry_count):
try:
return driver(*args, **kwargs)
except (IOError, HTTPException, WebDriverException, MaxRetryError) as e:
pass
raise e
```
GitHub version:
```
def get_driver(driver, retry_count=3, *args, **kwargs):
"""Try to instantiate the driver.
Common selenium errors are caught and a retry attempt occurs.
This can mitigate issues running on Remote WebDriver.
"""
err = None
for _ in range(retry_count):
try:
return driver(*args, **kwargs)
except (IOError, HTTPException, WebDriverException, MaxRetryError) as e:
err = e
raise err
```
After updating to the GitHub version, I was able to find the real root-cause problem, which was that I was using an outdated way of setting up chromedriver. I found a good solution [here](https://stackoverflow.com/a/52878725/12087852) for that issue.
In this solution, [Navarasu](https://stackoverflow.com/users/10474999/navarasu) suggests you `pip install webdriver-manager`, then you can call your browser as follows:
```
from splinter import Browser
from webdriver_manager.chrome import ChromeDriverManager
executable_path = {'executable_path': ChromeDriverManager().install()}
browser = Browser('chrome', **executable_path)
```
So a similar approach can be used for Firefox, as sketched above. | I had the same issues. I reverted back to splinter-0.11.0.
To expand on the last issue for Windows:
Download the appropriate geckodriver (<https://github.com/mozilla/geckodriver/releases>) and put the geckodriver.exe file into the location your PATH variable is referring to. |
59,506,553 | I installed Firefox and using Ubuntu 18.04.
```py
from splinter import Browser
with Browser() as browser:
    # Visit URL
    url = "http://www.google.com"
    browser.visit(url)
```
Results in:
```py
Traceback (most recent call last):
File "test.py", line 3, in <module>
with Browser() as browser:
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 90, in Browser
return get_driver(driver, *args, **kwargs)
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 68, in get_driver
raise e
UnboundLocalError: local variable 'e' referenced before assignment
Traceback (most recent call last):
File "test.py", line 3, in <module>
with Browser() as browser:
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 90, in Browser
return get_driver(driver, *args, **kwargs)
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 68, in get_driver
raise e
UnboundLocalError: local variable 'e' referenced before assignment
Traceback (most recent call last):
File "test.py", line 3, in <module>
with Browser() as browser:
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 90, in Browser
return get_driver(driver, *args, **kwargs)
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 68, in get_driver
raise e
UnboundLocalError: local variable 'e' referenced before assignment
```
I'm not sure how to solve this problem. I checked the documentation from Splinter but there is no hint for this error.
What am I doing wrong?
After updating the Splinter lib:
```
Traceback (most recent call last):
File "test.py", line 3, in <module>
with Browser() as browser:
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 92, in Browser
return get_driver(driver, *args, **kwargs)
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 70, in get_driver
raise err
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 66, in get_driver
return driver(*args, **kwargs)
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/driver/webdriver/firefox.py", line 88, in __init__
**kwargs
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/selenium/webdriver/firefox/webdriver.py", line 164, in __init__
self.service.start()
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/selenium/webdriver/common/service.py", line 83, in start
os.path.basename(self.path), self.start_error_message)
selenium.common.exceptions.WebDriverException: Message: 'geckodriver' executable needs to be in PATH.
``` | 2019/12/27 | [
"https://Stackoverflow.com/questions/59506553",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12610119/"
First, check your Chrome version by going to "Help - About Google Chrome".
Second, go to <https://chromedriver.chromium.org/> and download the version that matches your current Google Chrome.
Third, put the chromedriver into your bin directory.
For Mac-users, open Finder, then do shift+command+G, and then type "/usr/local/bin/" | Using Chrome on Windows 10, I made two edits and I'm not sure which one solved the error.
I downgraded my chrome driver version (to one lower than my current version).
I added this line of code.
```
executable_path = {'executable_path': '/usr/bin/chromedriver'}
browser = Browser('chrome', **executable_path, headless=False)
``` |
59,506,553 | I installed Firefox and using Ubuntu 18.04.
```py
from splinter import Browser
with Browser() as browser:
    # Visit URL
    url = "http://www.google.com"
    browser.visit(url)
```
Results in:
```py
Traceback (most recent call last):
File "test.py", line 3, in <module>
with Browser() as browser:
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 90, in Browser
return get_driver(driver, *args, **kwargs)
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 68, in get_driver
raise e
UnboundLocalError: local variable 'e' referenced before assignment
Traceback (most recent call last):
File "test.py", line 3, in <module>
with Browser() as browser:
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 90, in Browser
return get_driver(driver, *args, **kwargs)
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 68, in get_driver
raise e
UnboundLocalError: local variable 'e' referenced before assignment
Traceback (most recent call last):
File "test.py", line 3, in <module>
with Browser() as browser:
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 90, in Browser
return get_driver(driver, *args, **kwargs)
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 68, in get_driver
raise e
UnboundLocalError: local variable 'e' referenced before assignment
```
I'm not sure how to solve this problem. I checked the documentation from Splinter but there is no hint for this error.
What am I doing wrong?
After updating the Splinter lib:
```
Traceback (most recent call last):
File "test.py", line 3, in <module>
with Browser() as browser:
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 92, in Browser
return get_driver(driver, *args, **kwargs)
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 70, in get_driver
raise err
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/browser.py", line 66, in get_driver
return driver(*args, **kwargs)
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/splinter/driver/webdriver/firefox.py", line 88, in __init__
**kwargs
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/selenium/webdriver/firefox/webdriver.py", line 164, in __init__
self.service.start()
File "/home/sebastian/PycharmProjects/SupremeBot/venv/lib/python3.6/site-packages/selenium/webdriver/common/service.py", line 83, in start
os.path.basename(self.path), self.start_error_message)
selenium.common.exceptions.WebDriverException: Message: 'geckodriver' executable needs to be in PATH.
``` | 2019/12/27 | [
"https://Stackoverflow.com/questions/59506553",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12610119/"
] | I had the same issue. I realized that the pip install of splinter had a bug in the `get_driver()` function in the splinter/browser.py file.
The pip-installed version of splinter that gave the error `UnboundLocalError: local variable 'e' referenced before assignment`:
```
def get_driver(driver, retry_count=3, *args, **kwargs):
"""Try to instantiate the driver.
Common selenium errors are caught and a retry attempt occurs.
This can mitigate issues running on Remote WebDriver.
"""
for _ in range(retry_count):
try:
return driver(*args, **kwargs)
except (IOError, HTTPException, WebDriverException, MaxRetryError) as e:
pass
raise e
```
GitHub version:
```
def get_driver(driver, retry_count=3, *args, **kwargs):
"""Try to instantiate the driver.
Common selenium errors are caught and a retry attempt occurs.
This can mitigate issues running on Remote WebDriver.
"""
err = None
for _ in range(retry_count):
try:
return driver(*args, **kwargs)
except (IOError, HTTPException, WebDriverException, MaxRetryError) as e:
err = e
raise err
```
After updating to the GitHub version, I was able to find the real root-cause problem, which was that I was using an outdated way of setting up chromedriver. I found a good solution [here](https://stackoverflow.com/a/52878725/12087852) for that issue.
In this solution, [Navarasu](https://stackoverflow.com/users/10474999/navarasu) suggests you `pip install webdriver-manager`, then you can call your browser as follows:
```
from splinter import Browser
from webdriver_manager.chrome import ChromeDriverManager
executable_path = {'executable_path': ChromeDriverManager().install()}
browser = Browser('chrome', **executable_path)
```
A similar approach can be used for Firefox. | Using Chrome on Windows 10, I made two edits and I'm not sure which one solved the error.
I downgraded my chrome driver version (to one lower than my current version).
I added this line of code.
```
executable_path = {'executable_path': '/usr/bin/chromedriver'}
browser = Browser('chrome', **executable_path, headless=False)
``` |
62,323,897 | A multidimensional array is an array containing one or more arrays. This is the definition of a multidimensional array in PHP, and below is an example of one:
```
[employee_experiences] => Array
(
[0] => Array
(
[company_name] => xyz
[designation] => worker
[job_description] => abc
[started] => 2020-06-09T19:00:00.000Z
[ended] => 2020-06-09T19:00:00.000Z
)
[1] => Array
(
[company_name] => zyz
[designation] => worker
[job_description] => def
[started] => 2020-06-09T19:00:00.000Z
[ended] => 2020-06-08T19:00:00.000Z
)
)
```
My question is: how can I get this format in Python and save it to the database? I know Python doesn't have PHP-style arrays; it uses lists (and dicts) instead. | 2020/06/11 | [
"https://Stackoverflow.com/questions/62323897",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13348956/"
] | It looks like you're using the Blaze template engine. You should use React instead.
<https://www.meteor.com/tutorials/react/components> | Material UI is a UI framework for use with React. It doesn't work with Blaze, and I don't think there is any way to use both Blaze and React in the same page.
To add Material UI to a Meteor/React project, install the package from the command line:
```
npm install @material-ui/core
```
And include the Roboto font in the head of your HTML:
```
<link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Roboto:300,400,500,700&display=swap" />
```
For me this just worked, with nothing special needed for Meteor.
More instructions here: <https://material-ui.com/getting-started/installation/> |
45,128,515 | From my understanding, tensorflow's freeze\_graph.py is supposed to support the new checkpoint format, and I should just be able to use something like
```
freeze_graph.py --input_saver ./checkpoints/model-49-295 --output_graph ./graph.pb --output_node_names "predictions:0"
```
Just to be clear,
```
ls ./checkpoints
checkpoint
model-49-295.data-00000-of-00001
model-49-295.index
model-49-295.meta
```
However, when I do this I get the following error:
```
Traceback (most recent call last):
File "~/.local/lib/python3.5/site-packages/tensorflow/python/tools/freeze_graph.py", line 255, in <module>
app.run(main=main, argv=[sys.argv[0]] + unparsed)
File "~/.local/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "~/.local/lib/python3.5/site-packages/tensorflow/python/tools/freeze_graph.py", line 187, in main
FLAGS.variable_names_blacklist)
File "~/.local/lib/python3.5/site-packages/tensorflow/python/tools/freeze_graph.py", line 165, in freeze_graph
input_graph_def = _parse_input_graph_proto(input_graph, input_binary)
File "~/.local/lib/python3.5/site-packages/tensorflow/python/tools/freeze_graph.py", line 134, in _parse_input_graph_proto
text_format.Merge(f.read(), input_graph_def)
File "~/.local/lib/python3.5/site-packages/tensorflow/python/lib/io/file_io.py", line 125, in read
pywrap_tensorflow.ReadFromStream(self._read_buf, length, status))
File "/usr/lib/python3.5/contextlib.py", line 66, in __exit__
next(self.gen)
File "~/.local/lib/python3.5/site-packages/tensorflow/python/framework/errors_impl.py", line 466, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.FailedPreconditionError: .
```
I am really confused by this, because `.` doesn't seem like a very helpful error code, and all of the references to FailedPreconditionError I can find have something like `FailedPreconditionError: Attempting to use uninitialized value ...`
Anyone have any clue as to what's going on here? | 2017/07/16 | [
"https://Stackoverflow.com/questions/45128515",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2640224/"
] | Looking at the code from freeze\_graph.py I'm not really sure it does support the new format, or at least I can't figure out how it would, even though I've seen a number of places that claim it does. Anyway, my workaround for now was to write a simple script that does basically the same thing, but actually loads the checkpoint properly:
```
import tensorflow as tf
from tensorflow.python.framework import graph_util
from google.protobuf import text_format
with tf.Session() as sess:
    saver = tf.train.import_meta_graph('./checkpoints/model-49-295.meta', clear_devices=True)
    saver.restore(sess, './checkpoints/model-49-295')
    graph_def = sess.graph.as_graph_def()
    output_graph_def = graph_util.convert_variables_to_constants(sess, graph_def, ['predictions'])
    with tf.gfile.GFile('./graph.pb', "wb") as f:
        f.write(output_graph_def.SerializeToString())
``` | From the call stack, it looks like the parsing of the GraphDef .pb file is failing. The error message isn't very useful or informative though unfortunately!
My guess is that you need to pass in `--input_binary=true` as an argument, since by default it assumes the input graph is stored as a text protobuf. |
61,188,997 | I want to ask a question about how functions are assigned to variables.
I am a beginner in Python and studying functions.
Consider the following block of code from [w3schools](https://www.w3schools.com/python/trypython.asp?filename=demo_lambda_double):
```
def myfunc(n):
    return lambda a : a * n
mydoubler = myfunc(2)
print(mydoubler(11))
```
I know that a `lambda` is an anonymous function with only *one* expression that is evaluated and returned.
This is shown inside `myfunc(n)`:
```
return lambda a: a * n
```
`myfunc` takes in an argument, which in this case is 2, so any number `a` will be doubled.
However, I'm confused here.
In this line:
```
mydoubler = myfunc(2)
```
I thought we assign the result of `myfunc` to `mydoubler` but in the print statement:
```
print(mydoubler(11))
```
we are passing an argument `11` but we never specified the parameter `a` anywhere in the function declaration.
How does the python code know that `11` in this case inside the print statement is associated with `a`? | 2020/04/13 | [
"https://Stackoverflow.com/questions/61188997",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12909453/"
] | The function created by `lambda a: a * n` is a *closure*. When the function is called as `mydoubler(11)`, it uses `11` as the value for `a`, and `2` (the value of `n` inside `myfunc` at the time `lambda a: a * n` was evaluated) as the value for `n`.
`mydoubler` behaves as if it were defined as
```
def mydoubler(a):
    return a * 2
```
You can see that the value `2` is stored in the function using a bit of digging:
```
>>> mydoubler.__closure__[0].cell_contents
2
```
(In fact, you can change that, though you would only do so if you want to make your code hopelessly complicated:
```
>>> mydoubler.__closure__[0].cell_contents = 3
>>> mydoubler(2)
6
```
) | In this case you actually assigned the doubler-function to `mydoubler`:
```
mydoubler = myfunc(2)
```
There you also specified the parameter `n`, namely 2. So when you pass a number to the `mydoubler()` function, Python knows that `n` has already been specified. |
61,188,997 | I want to ask a question about how functions are assigned to variables.
I am a beginner in Python and studying functions.
Consider the following block of code from [w3schools](https://www.w3schools.com/python/trypython.asp?filename=demo_lambda_double):
```
def myfunc(n):
    return lambda a : a * n
mydoubler = myfunc(2)
print(mydoubler(11))
```
I know that a `lambda` is an anonymous function with only *one* expression that is evaluated and returned.
This is shown inside `myfunc(n)`:
```
return lambda a: a * n
```
`myfunc` takes in an argument, which in this case is 2, so any number `a` will be doubled.
However, I'm confused here.
In this line:
```
mydoubler = myfunc(2)
```
I thought we assign the result of `myfunc` to `mydoubler` but in the print statement:
```
print(mydoubler(11))
```
we are passing an argument `11` but we never specified the parameter `a` anywhere in the function declaration.
How does the python code know that `11` in this case inside the print statement is associated with `a`? | 2020/04/13 | [
"https://Stackoverflow.com/questions/61188997",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12909453/"
] | The function created by `lambda a: a * n` is a *closure*. When the function is called as `mydoubler(11)`, it uses `11` as the value for `a`, and `2` (the value of `n` inside `myfunc` at the time `lambda a: a * n` was evaluated) as the value for `n`.
`mydoubler` behaves as if it were defined as
```
def mydoubler(a):
    return a * 2
```
You can see that the value `2` is stored in the function using a bit of digging:
```
>>> mydoubler.__closure__[0].cell_contents
2
```
(In fact, you can change that, though you would only do so if you want to make your code hopelessly complicated:
```
>>> mydoubler.__closure__[0].cell_contents = 3
>>> mydoubler(2)
6
```
) | The return value from `myfunc(2)` is equivalent to this lambda
```
lambda a: a * 2
```
That lambda takes 1 argument, and doubles it. When you call it, in this case by assigning it to the name `mydoubler`, whatever argument you pass to it is `a`, the one argument that the lambda accepts.
Equivalent code:
```
mydoubler = lambda a: a * 2
output = mydoubler(11) # returns 11 * 2
print(output) # prints 22
``` |
19,647,248 | I am receiving a decimal variable that correlates to 8 relay values of on or off. If off its value is 0 if on it variable is as follows
```
Relay1 = 1
Relay2 = 2
Relay3 = 4
Relay4 = 8
Relay5 = 16
Relay6 = 32
Relay7 = 64
Relay8 = 128
```
So if Relay1 and Relay8 were on I would receive 129.
```
Relay1 = 1
Relay2 = 0
Relay3 = 0
Relay4 = 0
Relay5 = 0
Relay6 = 0
Relay7 = 0
Relay8 = 128
```
I need to create some logic to figure out when I receive a value between 0-255 what the relay values would be. Ultimately I'm just spitting out some XML code that will have something as follows
```
<map key="00">
<update state="Relay1" type="boolean">Off</update>
<update state="Relay2" type="boolean">Off</update>
<update state="Relay3" type="boolean">Off</update>
<update state="Relay4" type="boolean">Off</update>
<update state="Relay5" type="boolean">Off</update>
<update state="Relay6" type="boolean">Off</update>
<update state="Relay7" type="boolean">Off</update>
<update state="Relay8" type="boolean">Off</update>
</map>
<map key="01">
<update state="Relay1" type="boolean">On</update>
<update state="Relay2" type="boolean">Off</update>
<update state="Relay3" type="boolean">Off</update>
<update state="Relay4" type="boolean">Off</update>
<update state="Relay5" type="boolean">Off</update>
<update state="Relay6" type="boolean">Off</update>
<update state="Relay7" type="boolean">Off</update>
<update state="Relay8" type="boolean">Off</update>
</map>
<map key="02">
<update state="Relay1" type="boolean">Off</update>
<update state="Relay2" type="boolean">On</update>
<update state="Relay3" type="boolean">Off</update>
<update state="Relay4" type="boolean">Off</update>
<update state="Relay5" type="boolean">Off</update>)
<update state="Relay6" type="boolean">Off</update>
<update state="Relay7" type="boolean">Off</update>
<update state="Relay8" type="boolean">Off</update>
</map>
<map key="129">
<update state="Relay1" type="boolean">On</update>
<update state="Relay2" type="boolean">Off</update>
<update state="Relay3" type="boolean">Off</update>
<update state="Relay4" type="boolean">Off</update>
<update state="Relay5" type="boolean">Off</update>
<update state="Relay6" type="boolean">Off</update>
<update state="Relay7" type="boolean">Off</update>
<update state="Relay8" type="boolean">On</update>
</map>
```
so the programming language is not as important but help with the logic would be great. I don't want to have to write out all 255 scenarios as this xml is simplified. If someone can point me in the right direction that would be great. What I'm struggling with is the correlation between 129 and say relay5.
Most familiar with Python so going to classify it there. | 2013/10/28 | [
"https://Stackoverflow.com/questions/19647248",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1311259/"
] | I'd use [bit-wise shifting](http://docs.python.org/2/reference/expressions.html#shifting-operations) (or powers of 2) and [bit-wise comparisons](http://docs.python.org/2/reference/expressions.html#binary-bitwise-operations) to get the relay values from a given input. A little cleaner in my opinion vs. converting it to a string with [`bin`](http://docs.python.org/2/library/functions.html#bin)
```
value = 53
for relay in range(8):
print 'Relay{} = {}'.format(relay + 1, 2**relay & value)
```
Prints:
```
Relay1 = 1
Relay2 = 0
Relay3 = 4
Relay4 = 0
Relay5 = 16
Relay6 = 32
Relay7 = 0
Relay8 = 0
```
When you use `&`, Python's *bit-wise* and-operator, it's individually and'ing each bit of the number together. Powers of two, 1, 2, 4, 8, and so forth, only have one bit set in their binary representations, so when you `&` them with the value in question, if the bits align, they return a non-zero number (True), and if not, 0 (False).
```
53 = 00110101
--------------
1 = 00000001 --&-> 00000001 # the 1's place lined up, so you get it back
2 = 00000010 --&-> 00000000 # nothing at the 2's in the key
4 = 00000100 --&-> 00000100 # 4's place lines up
... and so on.
```
For all the things:
```
for key in range(256):
print '<map key="{}">'.format(key)
for relay in range(8):
print ' <update state="Relay{}" type="boolean">{}</update>'.format(
relay + 1, 'On' if key & 2**relay else 'Off')
print '</map>'
```
Regarding shifting, if you're a C programmer, you could also use `1 << relay` for powers of 2. | Convert the input to binary using `bin(n)`, and the resulting bits will correspond to the state of the relays: 0 means off, and 1 means on.
```
>>> bin(129)
'0b10000001'
>>>
```
The right-most (least significant) bit corresponds to relay 1 (showing it is On) and the left-most (most significant) bit corresponds to relay 8 (currently On). |
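A minimal sketch of turning that binary string into per-relay booleans (plain standard-library Python; the variable names are illustrative):
```
value = 129
bits = bin(value)[2:].zfill(8)                    # '10000001'
states = [bit == '1' for bit in reversed(bits)]   # index 0 -> Relay1 ... index 7 -> Relay8
print(states)  # [True, False, False, False, False, False, False, True]
```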
19,647,248 | I am receiving a decimal variable that correlates to 8 relay values of on or off. If off, its value is 0; if on, its value is as follows
```
Relay1 = 1
Relay2 = 2
Relay3 = 4
Relay4 = 8
Relay5 = 16
Relay6 = 32
Relay7 = 64
Relay8 = 128
```
So if Relay1 and Relay8 were on I would receive 129.
```
Relay1 = 1
Relay2 = 0
Relay3 = 0
Relay4 = 0
Relay5 = 0
Relay6 = 0
Relay7 = 0
Relay8 = 128
```
I need to create some logic to figure out when I receive a value between 0-255 what the relay values would be. Ultimately I'm just spitting out some XML code that will have something as follows
```
<map key="00">
<update state="Relay1" type="boolean">Off</update>
<update state="Relay2" type="boolean">Off</update>
<update state="Relay3" type="boolean">Off</update>
<update state="Relay4" type="boolean">Off</update>
<update state="Relay5" type="boolean">Off</update>
<update state="Relay6" type="boolean">Off</update>
<update state="Relay7" type="boolean">Off</update>
<update state="Relay8" type="boolean">Off</update>
</map>
<map key="01">
<update state="Relay1" type="boolean">On</update>
<update state="Relay2" type="boolean">Off</update>
<update state="Relay3" type="boolean">Off</update>
<update state="Relay4" type="boolean">Off</update>
<update state="Relay5" type="boolean">Off</update>
<update state="Relay6" type="boolean">Off</update>
<update state="Relay7" type="boolean">Off</update>
<update state="Relay8" type="boolean">Off</update>
</map>
<map key="02">
<update state="Relay1" type="boolean">Off</update>
<update state="Relay2" type="boolean">On</update>
<update state="Relay3" type="boolean">Off</update>
<update state="Relay4" type="boolean">Off</update>
<update state="Relay5" type="boolean">Off</update>)
<update state="Relay6" type="boolean">Off</update>
<update state="Relay7" type="boolean">Off</update>
<update state="Relay8" type="boolean">Off</update>
</map>
<map key="129">
<update state="Relay1" type="boolean">On</update>
<update state="Relay2" type="boolean">Off</update>
<update state="Relay3" type="boolean">Off</update>
<update state="Relay4" type="boolean">Off</update>
<update state="Relay5" type="boolean">Off</update>
<update state="Relay6" type="boolean">Off</update>
<update state="Relay7" type="boolean">Off</update>
<update state="Relay8" type="boolean">On</update>
</map>
```
so the programming language is not as important but help with the logic would be great. I don't want to have to write out all 255 scenarios as this xml is simplified. If someone can point me in the right direction that would be great. What I'm struggling with is the correlation between 129 and say relay5.
Most familiar with Python so going to classify it there. | 2013/10/28 | [
"https://Stackoverflow.com/questions/19647248",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1311259/"
] | I'd use [bit-wise shifting](http://docs.python.org/2/reference/expressions.html#shifting-operations) (or powers of 2) and [bit-wise comparisons](http://docs.python.org/2/reference/expressions.html#binary-bitwise-operations) to get the relay values from a given input. A little cleaner in my opinion vs. converting it to a string with [`bin`](http://docs.python.org/2/library/functions.html#bin)
```
value = 53
for relay in range(8):
print 'Relay{} = {}'.format(relay + 1, 2**relay & value)
```
Prints:
```
Relay1 = 1
Relay2 = 0
Relay3 = 4
Relay4 = 0
Relay5 = 16
Relay6 = 32
Relay7 = 0
Relay8 = 0
```
When you use `&`, Python's *bit-wise* and-operator, it's individually and'ing each bit of the number together. Powers of two, 1, 2, 4, 8, and so forth, only have one bit set in their binary representations, so when you `&` them with the value in question, if the bits align, they return a non-zero number (True), and if not, 0 (False).
```
53 = 00110101
--------------
1 = 00000001 --&-> 00000001 # the 1's place lined up, so you get it back
2 = 00000010 --&-> 00000000 # nothing at the 2's in the key
4 = 00000100 --&-> 00000100 # 4's place lines up
... and so on.
```
For all the things:
```
for key in range(256):
print '<map key="{}">'.format(key)
for relay in range(8):
print ' <update state="Relay{}" type="boolean">{}</update>'.format(
relay + 1, 'On' if key & 2**relay else 'Off')
print '</map>'
```
Regarding shifting, if you're a C programmer, you could also use `1 << relay` for powers of 2. | The key is converting the integer into binary; the simplest way is using the built-in [`bin()`](http://docs.python.org/2/library/functions.html#bin).
From there, iterate over each bit in the binary number and convert its position to a relay index. Since binary numbers are written most significant bit first, you need to iterate in reverse order (`x[::-1]`)
```
>>> for i,x in enumerate(bin(8)[:1:-1]):
... print "Relay ",i+1," is ",['off','on'][int(x)]
...
Relay 1 is off
Relay 2 is off
Relay 3 is off
Relay 4 is on
```
You can package this as a function like this:
```
# Returns a list with True if the relay is 'on', False otherwise (index 0 is Relay1).
def relays(value):
    return [bit == '1' for bit in bin(value)[:1:-1]]
```
It then becomes a matter of either calling this 255 times to generate your XML (which is a confusing idea) or using this to determine the state of the relays and altering logic based on that. |
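For reference, a small usage sketch of the helper above (the output is shown in the comments):
```
states = relays(129)   # [True, False, False, False, False, False, False, True]
for i, on in enumerate(states, start=1):
    print("Relay{} is {}".format(i, "On" if on else "Off"))
# bin() drops leading zeros, so for small values the list is shorter than 8 entries;
# any relay index beyond len(states) is simply off.
```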
58,830,918 | How do I click an element using Selenium and BeautifulSoup in Python? I got these lines of code and I find it difficult to achieve what I want. I want to click every element in every iteration. There is no pagination or next page. There are only about 10 elements, and after clicking the last element it should stop. Does anyone know what I should do? Here is my code
```
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
import urllib
import urllib.request
from bs4 import BeautifulSoup
chrome_path = r"C:\chromedriver.exe"
driver = webdriver.Chrome(chrome_path)
url = 'https://www.99.co/singapore/condos-apartments/a-treasure-trove'
driver.get(url)
html = driver.page_source
soup = BeautifulSoup(html,'lxml')
details = soup.select('.FloorPlans__container__rwH_w')  # Whole container of the result
for d in details:
    picture = d.find('span',{'class':'Tappable-inactive'}).click()  # the single element
print(d)
driver.close()
```
Here is the site <https://www.99.co/singapore/condos-apartments/a-treasure-trove> . I want to scrape the details and the image in every floor plans section but it is difficult because the image only appears after you click the specific element. I can only get the details except for the image itself. Try it yourself so that you know what I mean.
EDIT:
I tried this method
```
for d in driver.find_elements_by_xpath('//*[@id="floorPlans"]/div/div/div/div/span'):
d.click()
```
The problem is that it clicks so fast that the image can't load. Also, I'm using Selenium here. Is there any method for selecting an element, BeautifulSoup-style, in a format like `picture = d.find('span',{'class':'Tappable-inactive'}).click()` ? | 2019/11/13 | [
"https://Stackoverflow.com/questions/58830918",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11667606/"
] | You cannot interact with website widgets by using BeautifulSoup; you need to work with **Selenium**. There are 2 ways to handle this problem.
* First is to get the main wrapper (class) of the 10 elements and then iterate to each child element of the main class.
* The second is to get the element by XPath and increment the last number in the XPath by one in each iteration to move to the next element. | I printed some results to check your code.
"details" only has one item.
And "picture" is not element. (So it's not clickable.)
```
details = soup.select('.FloorPlans__container__rwH_w')
print(details)
print(len(details))
for d in details:
print(d)
picture = d.find('span',{'class':'Tappable-inactive'})
print(picture)
```
Output:
[![enter image description here](https://i.stack.imgur.com/i49yp.png)](https://i.stack.imgur.com/i49yp.png)
For your edited version, you can wait until the image is visible before you call click().
Use visibility\_of\_element\_located to do that.
Reference: <https://selenium-python.readthedocs.io/waits.html> |
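A hedged sketch of that idea, reusing the XPath from the question (the indexed XPath and the `img` locator are assumptions about the page, not verified selectors):
```
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

wait = WebDriverWait(driver, 10)  # assumes `driver` from the question is already on the page
spans = driver.find_elements_by_xpath('//*[@id="floorPlans"]/div/div/div/div/span')
for index in range(1, len(spans) + 1):
    span = wait.until(EC.element_to_be_clickable(
        (By.XPATH, '(//*[@id="floorPlans"]/div/div/div/div/span)[{}]'.format(index))))
    span.click()
    # wait until the revealed image is actually visible before scraping it
    wait.until(EC.visibility_of_element_located((By.XPATH, '//*[@id="floorPlans"]//img')))
```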
38,992,850 | I am trying to run cloudera/clusterdock in a docker image for a university project. This is my first time using docker and so far I have been using the instructions on the cloudera website which are a little sparse.
I successfully downloaded docker and the cloudera image, and when I run the `docker images` command I get the following:
```
REPOSITORY TAG IMAGE ID CREATED SIZE
cloudera/clusterdock latest 9b4d4f1dda22 7 days ago 467.5 MB
```
When I try to run up the container with this image using the following command:
```
docker run cloudera/clusterdock:latest /bin/bash
```
I get the following message
```
File "/bin/bash", line 1
SyntaxError: Non-ASCII character '\x80' in file /bin/bash on line 2,
but no encoding declared; see http://www.python.org/peps/pep-0263.html for details
```
Having read the mentioned PEP, I know I need to change the encoding in a file, but the PEP concentrates on Python files and I am unaware of having a Python file, so I have no idea where to find it to correct it. Also, having limited knowledge, I am uneasy changing the /bin/bash file as I know it can affect my machine.
Any help will have to assume I have little knowledge of this as I have little experience. | 2016/08/17 | [
"https://Stackoverflow.com/questions/38992850",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5511218/"
] | If you look at [Dockerfile](https://github.com/cloudera/clusterdock/blob/master/Dockerfile#L54) for `cloudera/clusterdock:latest`, you can see:
```
ENTRYPOINT ["python"]
```
So, when you do `docker run cloudera/clusterdock:latest /bin/bash`, you are basically doing `python /bin/bash` inside the container. You will see the same error if you type that in your terminal, normally:
```
$ python /bin/bash
File "/bin/bash", line 1
SyntaxError: Non-ASCII character '\xe0' in file /bin/bash on line 2, but no encoding declared; see http://python.org/dev/peps/pep-0263/ for details
```
You probably wanted to do:
```
docker run -it --entrypoint=/bin/bash cloudera/clusterdock:latest
```
Look at [clusterdock.sh](https://github.com/cloudera/clusterdock/blob/master/clusterdock.sh#L86-L97) to see how the container is actually supposed to be run. | The associated docs (e.g. the description on the image's Docker Hub page or our blog post) describe that clusterdock is intended to be run by sourcing clusterdock.sh. This is required because the framework controls Docker on the host machine. |
65,999,640 | I have a file that contains two columns and need to apply this equation to them, like
```
x y
1.2 6.8
2.5 7.0
3 8
4 9
5 10
```
the equation is
```
de = sqrt((xi-xj)^2-(yi-yj)^2)
```
it means the result will be a column
```
row1 = sqrt((x1-x2)^2-(y1-y2)^2)
row2 = sqrt((x1-x3)^2-(y1-y3)^2)
```
and do this equation for each point x1 to other points and y1 for other points until finished then start to calculate
```
row 6 = sqrt((x2-x3)^2-(y2-y3)^2)
row 7 = sqrt((x2-x4)^2-(y2-y4)^2)
```
and do this equation for each point x2 to other points and y2 for other points until finished and so on until finished all x and y and store the result in a file
I tried to do this by using 2 arrays, storing the numbers in them and then making the calculations, but the data is too huge and an array would be the wrong choice. How can I do this in Python, reading the i and j values for each pair from the file?
Here is my try, and sorry if it's too bad:
```
import math
with open('columnss.txt', 'r', encoding='utf-8') as f:
for line in f:
[x, y] = (int(n) for n in line.split())
d = math.sqrt(((x[0] - y[0])**2) + ((x[1] - y[1])** 2))
with open('result.txt', 'w', encoding='utf-8') as f1:
f1.write( str(d) + '\n')
```
i got
```
ValueError: invalid literal for int() with base 10: '-9.2'
```
I did the calculations in excel but trying to use python for it too
Should I put each column in a separate file to be easier for catching numbers or can I do this with the same file?
\* | 2021/02/01 | [
"https://Stackoverflow.com/questions/65999640",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3103578/"
] | You need to loop through the input file twice. The second loop can skip all the lines that are before the line from the first loop.
If you could load the file contents into a list or array, you could do this more easily by iterating over indexes rather than skipping lines.
Also, you should only open the output file once. You're overwriting it every time through the loop.
```
import cmath
with open('columnss.txt', 'r', encoding='utf-8') as f1, open('columnss.txt', 'r', encoding='utf-8') as f2, open('result.txt', 'w', encoding='utf-8') as outfile:
for i1, line in enumerate(f1):
x1, y1 = (float(n) for n in line.split())
f2.seek(0)
for i2, line in enumerate(f2):
if i1 < i2:
x2, y2 = (float(n) for n in line.split())
print(cmath.sqrt((x1-x2)**2-(y1-y2)**2), file=outfile)
``` | try this :
```
import math
with open('result.txt', 'w', encoding='utf-8') as f1:
    with open('columnss.txt', 'r', encoding='utf-8') as f:
        while True:
            line = f.readline()
            if not line:          # stop at end of file
                break
            x, y = (float(n) for n in line.split())
            if (x - y) ** 2 > 0:  # skip lines where the value would be 0
                d = math.sqrt((x - y) ** 2)
                f1.write(line.strip() + ':' + str(d) + '\n')
        # note: this writes one value per input line rather than the pairwise
        # combinations asked for, and the with-blocks close both files
``` |
65,999,640 | I have a file that contains two columns and need to apply this equation to them, like
```
x y
1.2 6.8
2.5 7.0
3 8
4 9
5 10
```
the equation is
```
de = sqrt((xi-xj)^2-(yi-yj)^2)
```
it means the result will be a column
```
row1 = sqrt((x1-x2)^2-(y1-y2)^2)
row2 = sqrt((x1-x3)^2-(y1-y3)^2)
```
and do this equation for each point x1 to other points and y1 for other points until finished then start to calculate
```
row 6 = sqrt((x2-x3)^2-(y2-y3)^2)
row 7 = sqrt((x2-x4)^2-(y2-y4)^2)
```
and do this equation for each point x2 to other points and y2 for other points until finished and so on until finished all x and y and store the result in a file
I tried to do this by using 2 arrays, storing the numbers in them and then making the calculations, but the data is too huge and an array would be the wrong choice. How can I do this in Python, reading the i and j values for each pair from the file?
Here is my try, and sorry if it's too bad:
```
import math
with open('columnss.txt', 'r', encoding='utf-8') as f:
for line in f:
[x, y] = (int(n) for n in line.split())
d = math.sqrt(((x[0] - y[0])**2) + ((x[1] - y[1])** 2))
with open('result.txt', 'w', encoding='utf-8') as f1:
f1.write( str(d) + '\n')
```
i got
```
ValueError: invalid literal for int() with base 10: '-9.2'
```
I did the calculations in excel but trying to use python for it too
Should I put each column in a separate file to be easier for catching numbers or can I do this with the same file?
\* | 2021/02/01 | [
"https://Stackoverflow.com/questions/65999640",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3103578/"
] | You need to loop through the input file twice. The second loop can skip all the lines that are before the line from the first loop.
If you could load the file contents into a list or array, you could do this more easily by iterating over indexes rather than skipping lines.
Also, you should only open the output file once. You're overwriting it every time through the loop.
```
import cmath
with open('columnss.txt', 'r', encoding='utf-8') as f1, open('columnss.txt', 'r', encoding='utf-8') as f2, open('result.txt', 'w', encoding='utf-8') as outfile:
for i1, line in enumerate(f1):
x1, y1 = (float(n) for n in line.split())
f2.seek(0)
for i2, line in enumerate(f2):
if i1 < i2:
x2, y2 = (float(n) for n in line.split())
print(cmath.sqrt((x1-x2)**2-(y1-y2)**2), file=outfile)
``` | Whenever there is a problem that looks like something which could be done in an Excel sheet, and I want to enable a Python way of doing it, I use pandas.
I am assuming `pandas` is ok for you to use too.
Here is the code for 'columns.txt' file read and output as 'output.csv'
```
import pandas as pd
import cmath
df = pd.read_csv('columns.txt', sep=r"\s+") # read columns.txt into a dataframe, using whitespace as the delimiter
df.dropna(inplace=True,axis=1) # multiple whitespaces create NA columns. Better to use a csv file
df = df.astype(float) # cast the columns to type float
print("-"*20 + "Input" + "-"*20)
print(df) #
print("-"*50)
for index, row in df.iterrows():
origin=row # specify current row as origin
'''
Adding equation column
Here we are using a lambda function (same as de used in the question)
and creating a new column called equation
'''
df["equation from row {}".format(index)]=df.apply(lambda row_lambda: cmath.sqrt((origin.x-row_lambda.x)**2 - (origin.y-row_lambda.y)**2), axis=1)
print("-"*20 + "Output" + "-"*20)
print(df)
print("-"*50)
# Save this output as csv file (even excel is possible)
df.to_csv('Output.csv')```
The output will look like:
--------------------Input--------------------
x y
0 -99.9580 -28.84930
1 -71.5378 -26.77280
2 -91.6913 -40.90390
3 -69.0989 -12.95010
4 -79.6443 -9.20575
5 -92.1975 -20.02760
6 -99.7732 -14.26070
7 -80.3767 -18.16040
--------------------------------------------------
--------------------Output--------------------
x y distance from row 0 distance from row 1 \
0 -99.9580 -28.84930 0j (28.344239552155912+0j)
1 -71.5378 -26.77280 (28.344239552155912+0j) 0j
2 -91.6913 -40.90390 8.773542743384796j (14.369257985017867+0j)
3 -69.0989 -12.95010 (26.448052710360358+0j) 13.605837059144871j
4 -79.6443 -9.20575 (5.174683670283624+0j) 15.584797189970107j
5 -92.1975 -20.02760 4.194881481043308j (19.527556965734348+0j)
6 -99.7732 -14.26070 14.587429482948666j (25.31175945583396+0j)
7 -80.3767 -18.16040 (16.40654523292457+0j) (1.9881447256173002+0j)
distance from row 2 distance from row 3 distance from row 4 \
0 8.773542743384796j (26.448052710360358+0j) (5.174683670283624+0j)
1 (14.369257985017867+0j) 13.605837059144871j -15.584797189970107j
2 0j 16.462028935705348j 29.319660714655278j
3 16.462028935705348j 0j (9.858260710566546-0j)
4 29.319660714655278j (9.858260710566546+0j) 0j
5 20.87016203219323j (21.987594586720945+0j) (6.361634445447185+0j)
6 25.387851398454337j (30.646288651809048+0j) (19.483841913429192+0j)
7 19.72933397482034j (10.002077121778257+0j) 8.924648276682952j
distance from row 5 distance from row 6 distance from row 7
0 4.194881481043308j 14.587429482948666j (16.40654523292457+0j)
1 (19.527556965734348-0j) (25.31175945583396-0j) (1.9881447256173002-0j)
2 -20.87016203219323j -25.387851398454337j 19.72933397482034j
3 (21.987594586720945+0j) (30.646288651809048+0j) (10.002077121778257+0j)
4 (6.361634445447185+0j) (19.483841913429192+0j) 8.924648276682952j
5 0j (4.912646423263124-0j) (11.672398074089152+0j)
6 (4.912646423263124+0j) 0j (19.000435578165046+0j)
7 (11.672398074089152+0j) (19.000435578165046-0j) 0j
--------------------------------------------------
To know more about pandas:
[https://pandas.pydata.org/docs/][1]
Stackoverflow itself is an excellent resource for gathering all way of using pandas.
[1]: https://pandas.pydata.org/docs/
Here column names are defined as 'x' and 'y' in the header.
If the column names are not specified you can add a new header by:
df.columns=['x','y']
after reading the csv file (or text file).
If it already has a header and want to use that name just specify that in the lambdas formula.
Please see:
https://stackoverflow.com/questions/14365542/import-csv-file-as-a-pandas-dataframe
Hope this helps
``` |
65,999,640 | I have a file that contains two columns and need to apply this equation to them, like
```
x y
1.2 6.8
2.5 7.0
3 8
4 9
5 10
```
the equation is
```
de = sqrt((xi-xj)^2-(yi-yj)^2)
```
it means the result will be a column
```
row1 = sqrt((x1-x2)^2-(y1-y2)^2)
row2 = sqrt((x1-x3)^2-(y1-y3)^2)
```
and do this equation for each point x1 to other points and y1 for other points until finished then start to calculate
```
row 6 = sqrt((x2-x3)^2-(y2-y3)^2)
row 7 = sqrt((x2-x4)^2-(y2-y4)^2)
```
and do this equation for each point x2 to other points and y2 for other points until finished and so on until finished all x and y and store the result in a file
I tried to do this by using 2 arrays, storing the numbers in them and then making the calculations, but the data is too huge and an array would be the wrong choice. How can I do this in Python, reading the i and j values for each pair from the file?
Here is my try, and sorry if it's too bad:
```
import math
with open('columnss.txt', 'r', encoding='utf-8') as f:
for line in f:
[x, y] = (int(n) for n in line.split())
d = math.sqrt(((x[0] - y[0])**2) + ((x[1] - y[1])** 2))
with open('result.txt', 'w', encoding='utf-8') as f1:
f1.write( str(d) + '\n')
```
i got
```
ValueError: invalid literal for int() with base 10: '-9.2'
```
I did the calculations in excel but trying to use python for it too
Should I put each column in a separate file to be easier for catching numbers or can I do this with the same file?
\* | 2021/02/01 | [
"https://Stackoverflow.com/questions/65999640",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3103578/"
] | You need to loop through the input file twice. The second loop can skip all the lines that are before the line from the first loop.
If you could load the file contents into a list or array, you could do this more easily by iterating over indexes rather than skipping lines.
Also, you should only open the output file once. You're overwriting it every time through the loop.
```
import cmath
with open('columnss.txt', 'r', encoding='utf-8') as f1, open('columnss.txt', 'r', encoding='utf-8') as f2, open('result.txt', 'w', encoding='utf-8') as outfile:
for i1, line in enumerate(f1):
x1, y1 = (float(n) for n in line.split())
f2.seek(0)
for i2, line in enumerate(f2):
if i1 < i2:
x2, y2 = (float(n) for n in line.split())
print(cmath.sqrt((x1-x2)**2-(y1-y2)**2), file=outfile)
``` | Whenever there is a problem that looks like something which could be done in an Excel sheet, and I want to enable a Python way of doing it, I use pandas.
I am assuming `pandas` is ok for you to use too.
Here is the code for 'columns.txt' file read and output as 'output.csv' which finds distance of each rows from others and adds a new column
```
import pandas as pd
import cmath
df = pd.read_csv('columns.txt', sep=r"\s+") # read columns.txt into a dataframe, using whitespace as the delimiter
df.dropna(inplace=True,axis=1) # multiple whitespaces create NA columns. Better to use a csv file
df = df.astype(float) # cast the columns to type float
print("-"*20 + "Input" + "-"*20)
print(df) #
print("-"*50)
for index, row in df.iterrows():
origin=row # specify first row as origin
'''
Adding distance column
Here we are using a lambda function (same as de used in the question)
and creating a new column called distance
'''
df["distance from row {}".format(index)]=df.apply(lambda row_lambda: cmath.sqrt((origin.x-row_lambda.x)**2 - (origin.y-row_lambda.y)**2), axis=1)
print("-"*20 + "Output" + "-"*20)
print(df)
print("-"*50)
# Save this output as csv file (even excel is possible)
df.to_csv('Output.csv')```
The output will look like:
--------------------Input--------------------
x y
0 -99.9580 -28.84930
1 -71.5378 -26.77280
2 -91.6913 -40.90390
3 -69.0989 -12.95010
4 -79.6443 -9.20575
5 -92.1975 -20.02760
6 -99.7732 -14.26070
7 -80.3767 -18.16040
--------------------------------------------------
--------------------Output--------------------
x y distance from row 0 distance from row 1 \
0 -99.9580 -28.84930 0j (28.344239552155912+0j)
1 -71.5378 -26.77280 (28.344239552155912+0j) 0j
2 -91.6913 -40.90390 8.773542743384796j (14.369257985017867+0j)
3 -69.0989 -12.95010 (26.448052710360358+0j) 13.605837059144871j
4 -79.6443 -9.20575 (5.174683670283624+0j) 15.584797189970107j
5 -92.1975 -20.02760 4.194881481043308j (19.527556965734348+0j)
6 -99.7732 -14.26070 14.587429482948666j (25.31175945583396+0j)
7 -80.3767 -18.16040 (16.40654523292457+0j) (1.9881447256173002+0j)
distance from row 2 distance from row 3 distance from row 4 \
0 8.773542743384796j (26.448052710360358+0j) (5.174683670283624+0j)
1 (14.369257985017867+0j) 13.605837059144871j -15.584797189970107j
2 0j 16.462028935705348j 29.319660714655278j
3 16.462028935705348j 0j (9.858260710566546-0j)
4 29.319660714655278j (9.858260710566546+0j) 0j
5 20.87016203219323j (21.987594586720945+0j) (6.361634445447185+0j)
6 25.387851398454337j (30.646288651809048+0j) (19.483841913429192+0j)
7 19.72933397482034j (10.002077121778257+0j) 8.924648276682952j
distance from row 5 distance from row 6 distance from row 7
0 4.194881481043308j 14.587429482948666j (16.40654523292457+0j)
1 (19.527556965734348-0j) (25.31175945583396-0j) (1.9881447256173002-0j)
2 -20.87016203219323j -25.387851398454337j 19.72933397482034j
3 (21.987594586720945+0j) (30.646288651809048+0j) (10.002077121778257+0j)
4 (6.361634445447185+0j) (19.483841913429192+0j) 8.924648276682952j
5 0j (4.912646423263124-0j) (11.672398074089152+0j)
6 (4.912646423263124+0j) 0j (19.000435578165046+0j)
7 (11.672398074089152+0j) (19.000435578165046-0j) 0j
--------------------------------------------------
To know more about pandas:
[https://pandas.pydata.org/docs/][1]
Stackoverflow itself is an excellent resource for gathering all way of using pandas.
[1]: https://pandas.pydata.org/docs/
``` |
65,999,640 | I have a file that contains two columns and need to apply this equation to them, like
```
x y
1.2 6.8
2.5 7.0
3 8
4 9
5 10
```
the equation is
```
de = sqrt((xi-xj)^2-(yi-yj)^2)
```
it means the result will be a column
```
row1 = sqrt((x1-x2)^2-(y1-y2)^2)
row2 = sqrt((x1-x3)^2-(y1-y3)^2)
```
and do this equation for each point x1 to other points and y1 for other points until finished then start to calculate
```
row 6 = sqrt((x2-x3)^2-(y2-y3)^2)
row 7 = sqrt((x2-x4)^2-(y2-y4)^2)
```
and do this equation for each point x2 to other points and y2 for other points until finished and so on until finished all x and y and store the result in a file
I tried to do this by using 2 arrays, storing the numbers in them and then making the calculations, but the data is too huge and an array would be the wrong choice. How can I do this in Python, reading the i and j values for each pair from the file?
Here is my try, and sorry if it's too bad:
```
import math
with open('columnss.txt', 'r', encoding='utf-8') as f:
for line in f:
[x, y] = (int(n) for n in line.split())
d = math.sqrt(((x[0] - y[0])**2) + ((x[1] - y[1])** 2))
with open('result.txt', 'w', encoding='utf-8') as f1:
f1.write( str(d) + '\n')
```
i got
```
ValueError: invalid literal for int() with base 10: '-9.2'
```
I did the calculations in excel but trying to use python for it too
Should I put each column in a separate file to be easier for catching numbers or can I do this with the same file?
\* | 2021/02/01 | [
"https://Stackoverflow.com/questions/65999640",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3103578/"
] | Whenever there is a problem that looks like something which could be done in an Excel sheet, and I want to enable a Python way of doing it, I use pandas.
I am assuming `pandas` is ok for you to use too.
Here is the code for 'columns.txt' file read and output as 'output.csv'
```
import pandas as pd
import cmath
df = pd.read_csv('columns.txt', sep=r"\s+") # read columns.txt into a dataframe, using whitespace as the delimiter
df.dropna(inplace=True,axis=1) # multiple whitespaces create NA columns. Better to use a csv file
df = df.astype(float) # cast the columns to type float
print("-"*20 + "Input" + "-"*20)
print(df) #
print("-"*50)
for index, row in df.iterrows():
origin=row # specify current row as origin
'''
Adding equation column
Here we are using a lambda function (same as de used in the question)
and creating a new column called equation
'''
df["equation from row {}".format(index)]=df.apply(lambda row_lambda: cmath.sqrt((origin.x-row_lambda.x)**2 - (origin.y-row_lambda.y)**2), axis=1)
print("-"*20 + "Output" + "-"*20)
print(df)
print("-"*50)
# Save this output as csv file (even excel is possible)
df.to_csv('Output.csv')```
The output will look like:
--------------------Input--------------------
x y
0 -99.9580 -28.84930
1 -71.5378 -26.77280
2 -91.6913 -40.90390
3 -69.0989 -12.95010
4 -79.6443 -9.20575
5 -92.1975 -20.02760
6 -99.7732 -14.26070
7 -80.3767 -18.16040
--------------------------------------------------
--------------------Output--------------------
x y distance from row 0 distance from row 1 \
0 -99.9580 -28.84930 0j (28.344239552155912+0j)
1 -71.5378 -26.77280 (28.344239552155912+0j) 0j
2 -91.6913 -40.90390 8.773542743384796j (14.369257985017867+0j)
3 -69.0989 -12.95010 (26.448052710360358+0j) 13.605837059144871j
4 -79.6443 -9.20575 (5.174683670283624+0j) 15.584797189970107j
5 -92.1975 -20.02760 4.194881481043308j (19.527556965734348+0j)
6 -99.7732 -14.26070 14.587429482948666j (25.31175945583396+0j)
7 -80.3767 -18.16040 (16.40654523292457+0j) (1.9881447256173002+0j)
distance from row 2 distance from row 3 distance from row 4 \
0 8.773542743384796j (26.448052710360358+0j) (5.174683670283624+0j)
1 (14.369257985017867+0j) 13.605837059144871j -15.584797189970107j
2 0j 16.462028935705348j 29.319660714655278j
3 16.462028935705348j 0j (9.858260710566546-0j)
4 29.319660714655278j (9.858260710566546+0j) 0j
5 20.87016203219323j (21.987594586720945+0j) (6.361634445447185+0j)
6 25.387851398454337j (30.646288651809048+0j) (19.483841913429192+0j)
7 19.72933397482034j (10.002077121778257+0j) 8.924648276682952j
distance from row 5 distance from row 6 distance from row 7
0 4.194881481043308j 14.587429482948666j (16.40654523292457+0j)
1 (19.527556965734348-0j) (25.31175945583396-0j) (1.9881447256173002-0j)
2 -20.87016203219323j -25.387851398454337j 19.72933397482034j
3 (21.987594586720945+0j) (30.646288651809048+0j) (10.002077121778257+0j)
4 (6.361634445447185+0j) (19.483841913429192+0j) 8.924648276682952j
5 0j (4.912646423263124-0j) (11.672398074089152+0j)
6 (4.912646423263124+0j) 0j (19.000435578165046+0j)
7 (11.672398074089152+0j) (19.000435578165046-0j) 0j
--------------------------------------------------
To know more about pandas:
[https://pandas.pydata.org/docs/][1]
Stackoverflow itself is an excellent resource for gathering all way of using pandas.
[1]: https://pandas.pydata.org/docs/
Here column names are defined as 'x' and 'y' in the header.
If the column names are not specified you can add a new header by:
df.columns=['x','y']
after reading the csv file (or text file).
If it already has a header and want to use that name just specify that in the lambdas formula.
Please see:
https://stackoverflow.com/questions/14365542/import-csv-file-as-a-pandas-dataframe
Hope this helps
``` | try this :
```
import math
with open('result.txt', 'w', encoding='utf-8') as f1:
    with open('columnss.txt', 'r', encoding='utf-8') as f:
        while True:
            line = f.readline()
            if not line:          # stop at end of file
                break
            x, y = (float(n) for n in line.split())
            if (x - y) ** 2 > 0:  # skip lines where the value would be 0
                d = math.sqrt((x - y) ** 2)
                f1.write(line.strip() + ':' + str(d) + '\n')
        # note: this writes one value per input line rather than the pairwise
        # combinations asked for, and the with-blocks close both files
``` |
65,999,640 | I have a file that contains two columns and need to apply this equation to them, like
```
x y
1.2 6.8
2.5 7.0
3 8
4 9
5 10
```
the equation is
```
de = sqrt((xi-xj)^2-(yi-yj)^2)
```
it means the result will be a column
```
row1 = sqrt((x1-x2)^2-(y1-y2)^2)
row2 = sqrt((x1-x3)^2-(y1-y3)^2)
```
and do this equation for each point x1 to other points and y1 for other points until finished then start to calculate
```
row 6 = sqrt((x2-x3)^2-(y2-y3)^2)
row 7 = sqrt((x2-x4)^2-(y2-y4)^2)
```
and do this equation for each point x2 to other points and y2 for other points until finished and so on until finished all x and y and store the result in a file
I tried to do this by using 2 arrays, storing the numbers in them and then making the calculations, but the data is too huge and an array would be the wrong choice. How can I do this in Python, reading the i and j values for each pair from the file?
Here is my try, and sorry if it's too bad:
```
import math
with open('columnss.txt', 'r', encoding='utf-8') as f:
for line in f:
[x, y] = (int(n) for n in line.split())
d = math.sqrt(((x[0] - y[0])**2) + ((x[1] - y[1])** 2))
with open('result.txt', 'w', encoding='utf-8') as f1:
f1.write( str(d) + '\n')
```
i got
```
ValueError: invalid literal for int() with base 10: '-9.2'
```
I did the calculations in excel but trying to use python for it too
Should I put each column in a separate file to be easier for catching numbers or can I do this with the same file?
\* | 2021/02/01 | [
"https://Stackoverflow.com/questions/65999640",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3103578/"
] | Whenever there is a problem that looks like something which could be done in an Excel sheet, and I want to enable a Python way of doing it, I use pandas.
I am assuming `pandas` is ok for you to use too.
Here is the code for 'columns.txt' file read and output as 'output.csv' which finds distance of each rows from others and adds a new column
```
import pandas as pd
import cmath
df = pd.read_csv('columns.txt', sep=r"\s+") # read columns.txt into a dataframe, using whitespace as the delimiter
df.dropna(inplace=True,axis=1) # multiple whitespaces create NA columns. Better to use a csv file
df = df.astype(float) # cast the columns to type float
print("-"*20 + "Input" + "-"*20)
print(df) #
print("-"*50)
for index, row in df.iterrows():
origin=row # specify first row as origin
'''
Adding distance column
Here we are using a lambda function (same as de used in the question)
and creating a new column called distance
'''
df["distance from row {}".format(index)]=df.apply(lambda row_lambda: cmath.sqrt((origin.x-row_lambda.x)**2 - (origin.y-row_lambda.y)**2), axis=1)
print("-"*20 + "Output" + "-"*20)
print(df)
print("-"*50)
# Save this output as csv file (even excel is possible)
df.to_csv('Output.csv')```
The output will look like:
--------------------Input--------------------
x y
0 -99.9580 -28.84930
1 -71.5378 -26.77280
2 -91.6913 -40.90390
3 -69.0989 -12.95010
4 -79.6443 -9.20575
5 -92.1975 -20.02760
6 -99.7732 -14.26070
7 -80.3767 -18.16040
--------------------------------------------------
--------------------Output--------------------
x y distance from row 0 distance from row 1 \
0 -99.9580 -28.84930 0j (28.344239552155912+0j)
1 -71.5378 -26.77280 (28.344239552155912+0j) 0j
2 -91.6913 -40.90390 8.773542743384796j (14.369257985017867+0j)
3 -69.0989 -12.95010 (26.448052710360358+0j) 13.605837059144871j
4 -79.6443 -9.20575 (5.174683670283624+0j) 15.584797189970107j
5 -92.1975 -20.02760 4.194881481043308j (19.527556965734348+0j)
6 -99.7732 -14.26070 14.587429482948666j (25.31175945583396+0j)
7 -80.3767 -18.16040 (16.40654523292457+0j) (1.9881447256173002+0j)
distance from row 2 distance from row 3 distance from row 4 \
0 8.773542743384796j (26.448052710360358+0j) (5.174683670283624+0j)
1 (14.369257985017867+0j) 13.605837059144871j -15.584797189970107j
2 0j 16.462028935705348j 29.319660714655278j
3 16.462028935705348j 0j (9.858260710566546-0j)
4 29.319660714655278j (9.858260710566546+0j) 0j
5 20.87016203219323j (21.987594586720945+0j) (6.361634445447185+0j)
6 25.387851398454337j (30.646288651809048+0j) (19.483841913429192+0j)
7 19.72933397482034j (10.002077121778257+0j) 8.924648276682952j
distance from row 5 distance from row 6 distance from row 7
0 4.194881481043308j 14.587429482948666j (16.40654523292457+0j)
1 (19.527556965734348-0j) (25.31175945583396-0j) (1.9881447256173002-0j)
2 -20.87016203219323j -25.387851398454337j 19.72933397482034j
3 (21.987594586720945+0j) (30.646288651809048+0j) (10.002077121778257+0j)
4 (6.361634445447185+0j) (19.483841913429192+0j) 8.924648276682952j
5 0j (4.912646423263124-0j) (11.672398074089152+0j)
6 (4.912646423263124+0j) 0j (19.000435578165046+0j)
7 (11.672398074089152+0j) (19.000435578165046-0j) 0j
--------------------------------------------------
To know more about pandas:
[https://pandas.pydata.org/docs/][1]
Stackoverflow itself is an excellent resource for gathering all way of using pandas.
[1]: https://pandas.pydata.org/docs/
``` | try this :
```
import math
with open('result.txt', 'w', encoding='utf-8') as f1:
    with open('columnss.txt', 'r', encoding='utf-8') as f:
        while True:
            line = f.readline()
            if not line:          # stop at end of file
                break
            x, y = (float(n) for n in line.split())
            if (x - y) ** 2 > 0:  # skip lines where the value would be 0
                d = math.sqrt((x - y) ** 2)
                f1.write(line.strip() + ':' + str(d) + '\n')
        # note: this writes one value per input line rather than the pairwise
        # combinations asked for, and the with-blocks close both files
``` |
47,932,215 | I am new to Chef and following a tutorial which is providing information on running a default recipe inside a cookbook or a specific recipe. The tree output for my Cookbook is as follows:
```
pwd
/opt/dk-chef/python_code/Chef
[root@LUMOS Chef]# tree Cookbooks/BasicLinux/
Cookbooks/BasicLinux/
├── Berksfile
├── chefignore
├── LICENSE
├── metadata.rb
├── nodes
│ └── LUMOS.RMT.com.json
├── README.md
├── recipes
│ ├── default.rb
│ ├── nodes
│ │ └── LUMOS.RMT.com.json
│ └── setup.rb
├── spec
│ ├── spec_helper.rb
│ └── unit
│ └── recipes
│ └── default_spec.rb
└── test
└── smoke
└── default
└── default_test.rb
```
Running the chef-client command as follows tells me that the Cookbook is always missing. Is there a configuration parameter that I need to set so that the cookbook is found?
```
chef-client -z -r "recipe[BasicLinux::setup]"
[2017-12-21T15:18:20-05:00] WARN: No config file found or specified on command line, using command line options.
[2017-12-21T15:18:20-05:00] WARN: No cookbooks directory found at or above current directory. Assuming /opt/dk-chef/python_code/Chef.
[2017-12-21T15:18:20-05:00] WARN: No cookbooks directory found at or above current directory. Assuming /opt/dk-chef/python_code/Chef.
Starting Chef Client, version 13.6.4
resolving cookbooks for run list: ["BasicLinux::setup"]
================================================================================
Error Resolving Cookbooks for Run List:
================================================================================
Missing Cookbooks:
------------------
No such cookbook: BasicLinux
Expanded Run List:
------------------
* BasicLinux::setup
System Info:
------------
chef_version=13.6.4
platform=redhat
platform_version=6.6
ruby=ruby 2.4.2p198 (2017-09-14 revision 59899) [x86_64-linux]
program_name=chef-client worker: ppid=23170;start=15:18:20;
executable=/opt/chefdk/bin/chef-client
Running handlers:
[2017-12-21T15:18:23-05:00] ERROR: Running exception handlers
[2017-12-21T15:18:23-05:00] ERROR: Running exception handlers
Running handlers complete
[2017-12-21T15:18:23-05:00] ERROR: Exception handlers complete
[2017-12-21T15:18:23-05:00] ERROR: Exception handlers complete
Chef Client failed. 0 resources updated in 03 seconds
[2017-12-21T15:18:23-05:00] FATAL: Stacktrace dumped to /root/.chef/local-mode-cache/cache/chef-stacktrace.out
[2017-12-21T15:18:23-05:00] FATAL: Stacktrace dumped to /root/.chef/local-mode-cache/cache/chef-stacktrace.out
[2017-12-21T15:18:23-05:00] FATAL: Please provide the contents of the stacktrace.out file if you file a bug report
[2017-12-21T15:18:23-05:00] FATAL: Please provide the contents of the stacktrace.out file if you file a bug report
[2017-12-21T15:18:23-05:00] ERROR: 412 "Precondition Failed"
[2017-12-21T15:18:23-05:00] ERROR: 412 "Precondition Failed"
[2017-12-21T15:18:23-05:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
[2017-12-21T15:18:23-05:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
``` | 2017/12/21 | [
"https://Stackoverflow.com/questions/47932215",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5266232/"
] | 1. Create a directory named "**cookbooks**" and put (or create) your cookbook inside that directory.
2. From outside the cookbooks directory, run your own command or the following:
**sudo chef-client -z --runlist "your\_cookbook\_name::your\_recipe\_name"** | That needs to be `cookbooks/BasicLinux/`, and you also need `name 'BasicLinux'` in the `metadata.rb`. Given the `recipes/nodes/` folder also make sure you are in the right folder, you need to be in `/opt/dk-chef/python_code/Chef`, not `/opt/dk-chef/python_code/Chef/cookbooks/BasicLinux/recipes/` or anything else (you can delete the `nodes/` folders inside the cookbook). |
41,317,043 | I want to know how to use the bind method with a ribbon button in wxPython for Python 3.4 (Phoenix version 3.0.3), because I tried all possible ways used with menus and buttons but all the time I get an error that looks like:
File "C:\Anaconda3\lib\site-packages\wx\core.py", line 1200, in \_EvtHandler\_Bind
assert source is None or hasattr(source, 'GetId')
AssertionError
please help with simple example if possible. Thanks in advance. | 2016/12/24 | [
"https://Stackoverflow.com/questions/41317043",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4434914/"
] | You could use an IIFE with destructuring.
```js
const source = { foo: 1, bar: 2, baz: 3 },
target = (({ foo, bar, baz }) => ({ foo, bar, baz }))(source);
console.log(target);
``` | If you've got an object that contains many properties you need, and a small amount you don't, you can use the [object rest syntax](https://github.com/sebmarkbage/ecmascript-rest-spread):
```js
const source = { foo: 1, bar: 2, baz: 3, whatever: 4 };
const { whatever, ...target } = source;
console.log(target);
```
**Note** - Object rest is a Stage 3 proposal for ECMAScript, and a transpiler (babel with the [Object rest spread transform](https://babeljs.io/docs/plugins/transform-object-rest-spread/)) is needed. |
41,317,043 | I want to know how to use the bind method with a ribbon button in wxPython for Python 3.4 (Phoenix version 3.0.3), because I tried all possible ways used with menus and buttons but all the time I get an error that looks like:
File "C:\Anaconda3\lib\site-packages\wx\core.py", line 1200, in \_EvtHandler\_Bind
assert source is None or hasattr(source, 'GetId')
AssertionError
please help with simple example if possible. Thanks in advance. | 2016/12/24 | [
"https://Stackoverflow.com/questions/41317043",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4434914/"
] | If you've got an object that contains many properties you need, and a small amount you don't, you can use the [object rest syntax](https://github.com/sebmarkbage/ecmascript-rest-spread):
```js
const source = { foo: 1, bar: 2, baz: 3, whatever: 4 };
const { whatever, ...target } = source;
console.log(target);
```
**Note** - Object rest is a Stage 3 proposal for ECMAScript, and a transpiler (babel with the [Object rest spread transform](https://babeljs.io/docs/plugins/transform-object-rest-spread/)) is needed. | You can use destructuring assignment
```js
const source = {foo: 1, bar:2, baz:3, abc: 4, def: 5};
const result = {};
({foo:result.foo, bar:result.bar, baz:result.baz} = source);
console.log(result);
```
Alternatively, you can set the property names as elements of an array and use a `for..of` loop with destructuring assignment to set the properties and values of `result`
```js
const source = {foo: 1, bar:2, baz:3, abc:4, def: 5};
const result = {};
for (let prop of ["foo", "bar", "baz"]) ({[prop]:result[prop]} = source);
console.log(result);
``` |
41,317,043 | I want to know how to use the bind method with a ribbon button in wxPython for Python 3.4 (Phoenix version 3.0.3), because I tried all possible ways used with menus and buttons but all the time I get an error that looks like:
File "C:\Anaconda3\lib\site-packages\wx\core.py", line 1200, in \_EvtHandler\_Bind
assert source is None or hasattr(source, 'GetId')
AssertionError
please help with simple example if possible. Thanks in advance. | 2016/12/24 | [
"https://Stackoverflow.com/questions/41317043",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4434914/"
] | You could use an IIFE with destructuring.
```js
const source = { foo: 1, bar: 2, baz: 3 },
target = (({ foo, bar, baz }) => ({ foo, bar, baz }))(source);
console.log(target);
``` | You can use destructuring assignment
```js
const source = {foo: 1, bar:2, baz:3, abc: 4, def: 5};
const result = {};
({foo:result.foo, bar:result.bar, baz:result.baz} = source);
console.log(result);
```
Alternatively, you can set the property names as elements of an array and use a `for..of` loop with destructuring assignment to set the properties and values of `result`
```js
const source = {foo: 1, bar:2, baz:3, abc:4, def: 5};
const result = {};
for (let prop of ["foo", "bar", "baz"]) ({[prop]:result[prop]} = source);
console.log(result);
``` |
69,597,476 | I have an Excel (.xlsx) file with 4 columns:
pmid (int)
gene (string)
disease (string)
label (string)
I attempt to load this directly into python with `pandas.read_excel`
```py
df = pd.read_excel(path, parse_dates=False)
```
capture from excel
[![excel snippet](https://i.stack.imgur.com/zP86v.png)](https://i.stack.imgur.com/zP86v.png)
capture from pandas using my ide debugger
[![pandas](https://i.stack.imgur.com/HY6KS.png)](https://i.stack.imgur.com/HY6KS.png)
As shown above, **pandas** tries to be smart, automatically converting some of the **gene** fields such as ***3.Oct*** and ***4.Oct*** to a datetime type. The issue is that ***3.Oct*** or ***4.Oct*** is an abbreviation of a gene name with a totally different meaning, so I don't want pandas to do that. How can I prevent pandas from converting types automatically? | 2021/10/16 | [
"https://Stackoverflow.com/questions/69597476",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11247129/"
] | **Update**:
In fact, there is no conversion. The value appears as `2020-10-03 00:00:00` in Pandas because it is the real value stored in the cell. Excel shows this value in another format
[![excel date](https://i.stack.imgur.com/NOA93.png)](https://i.stack.imgur.com/NOA93.png)
---
**Update 2**:
To keep the same format as Excel, you can use `pd.to_datetime` and a custom function to reformat the date.
```
# Sample
>>> df
gene
0 PDGFRA
1 2021-10-03 00:00:00 # Want: 3.Oct
2 2021-10-04 00:00:00 # Want: 4.Oct
>>> df['gene'] = (pd.to_datetime(df['gene'], errors='coerce')
.apply(lambda dt: f"{dt.day}.{calendar.month_abbr[dt.month]}"
if dt is not pd.NaT else np.NaN)
.fillna(df['gene']))
>>> df
gene
0 PDGFRA
1 3.Oct
2 4.Oct
```
---
**Old answer**
Force `dtype=str` to prevent Pandas from trying to transform your dataframe:
```
df = pd.read_excel(path, dtype=str)
```
Or use `converters={'colX': str, ...}` to map the dtype for each column. | [`pd.read_excel`](https://pandas.pydata.org/docs/reference/api/pandas.read_excel.html) has a `dtype` argument you can use to specify data types explicitly. |
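For example, a small sketch of both variants (the column names are taken from the question):
```
# force every column to plain strings ...
df = pd.read_excel(path, dtype=str)
# ... or only the columns that must never be converted
df = pd.read_excel(path, converters={'gene': str, 'disease': str, 'label': str})
```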
49,665,324 | I am using a Python FTP server and client program. My need is to run the *Python FTP server* on a *remote machine* that is connected to the same network as my local machine. The *FTP client* will run from the *local machine*, and I need to connect the FTP server with my FTP client running on the local machine.
Please help!
This is my `ftpserver.py`:
```
from pyftpdlib.servers import FTPServer
from pyftpdlib.authorizers import DummyAuthorizer
from pyftpdlib.handlers import FTPHandler
authorizer = DummyAuthorizer()
authorizer.add_user("lokesh", "123", "current_dir", perm="elradfmw")
authorizer.add_anonymous("curent_dir", perm="elradfmw")
handler = FTPHandler
handler.authorizer = authorizer
server=FTPServer(("localhost",8080),handler)
server.serve_forever()
```
This is my `ftpclient.py` that needs to connect with the above server:
```
from ftplib import FTP
ftp = FTP('')
host='localhost'
port=8080
ftp.connect(host,port)
ftp.login()
print(ftp.getwelcome())
print('Current Directory ',ftp.pwd())
ftp.dir()
ftp.quit()
```
When I tested my server and client on the same machine it worked. But when I ran the same server on another machine and tried to connect with my client, it gave me this error:
>
> error: [Errno 10061] No connection could be made because the target
> machine actively refused it
>
>
> | 2018/04/05 | [
"https://Stackoverflow.com/questions/49665324",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9573121/"
] | If you run the client on another machine, you have to connect to the host of the server, not to "localhost":
```
host='<server_host>'
```
Run `ipconfig` on your Windows server machine and look for "IPv4 address". | Replace `port = 1027` with `port = 8080` in your ftpclient file. |
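A minimal sketch of the adjusted client (the IP address is a placeholder for whatever `ipconfig` reports; the credentials and port are taken from the question). Note that `ftpserver.py` binds to `"localhost"`, so the server may also need to listen on `"0.0.0.0"` to accept connections from other machines.
```
from ftplib import FTP

ftp = FTP()
ftp.connect('192.168.1.50', 8080)   # placeholder IPv4 address of the server machine
ftp.login('lokesh', '123')          # user defined in ftpserver.py
print(ftp.getwelcome())
ftp.quit()
```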
59,451,996 | It's my first python project after 10 years and my first experience with python multiprocessing, so there may just be some very basic mistakes I haven't seen.
I'm stuck with python and a multiprocessing web crawler. My crawler checks a main page for changes and then iterates through subcategories in parallel, adding items to a list. These items are then checked in parallel and extracted via selenium (as I couldn't figure out how to do it otherwise, because content is dynamically loaded into the page when clicking the items).
Main loop:
```
import requests
from requests.packages.urllib3.exceptions import InsecureRequestWarning
import time
from bs4 import BeautifulSoup
import pickledb
import random
import multiprocessing
import itertools
import config
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
def getAllSubCategories(pageNumber, items):
# check website and look for subcategories that are "worth" extracting
url = 'https://www.google.com' + str(pageNumber)
response = requests.get(url, verify=False, headers=config.headers, cookies=config.cookies)
pageSoup = BeautifulSoup(response.content, features='html.parser')
elements = soup.find(...)
if not elements: # website not loading properly
return getAllSubCategories(items)
for element in elements:
items.append(element)
def checkAndExtract(item, ignoredItems, itemsToIgnore):
# check if items are already extracted; if not, extract them if they contain a keyword
import checker
import extractor
if item not in ignoredItems:
if checker.check(item):
extractor.extract(item, itemsToIgnore)
else: itemsToIgnore.append(item)
if __name__ == '__main__':
multiprocessing.freeze_support()
itemsToIgnore = multiprocessing.Manager().list()
crawlUrl = 'https://www.google.com/'
db = pickledb.load('myDB.db', False)
while True:
try:
# check main website for changes
response = requests.get(crawlUrl, verify=False, headers=config.headers, cookies=config.cookies)
soup = BeautifulSoup(response.content, features='html.parser')
mainCondition = soup.find(...)
if mainCondition:
numberOfPages = soup.find(...)
ignoredItems = db.get('ignoredItems')
if not ignoredItems:
db.lcreate('ignoredItems')
ignoredItems = db.get('ignoredItems')
items = multiprocessing.Manager().list()
# get all items from subcategories
with multiprocessing.Pool(30) as pool:
pool.starmap(getAllSubCategories, zip(range(numberOfPages, 0, -1), itertools.repeat(items)))
itemsToIgnore[:] = []
# loop through all items
with multiprocessing.Pool(30) as pool:
pool.starmap(checkAndExtract, zip(items, itertools.repeat(ignoredItems), itertools.repeat(itemsToIgnore)))
for item in itemsToIgnore:
if item not in db.get('ignoredItems'): db.ladd('ignoredItems', item)
db.dump()
time.sleep(random.randint(10, 20))
except KeyboardInterrupt:
break
except Exception as e:
print(e)
continue
```
Checker:
```
import config
def check(item):
title = item...
try:
for keyword in config.keywords: # just a string array
if keyword.lower() in title.lower():
return True
except Exception as e:
print(e)
return False
```
Extractor:
```
from selenium import webdriver
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
import time
import config
def extract(item, itemsToIgnore):
driver = webdriver.Chrome('./chromedriver')
driver.implicitly_wait(3)
driver.get('https://www.google.com')
for key in config.cookies:
driver.add_cookie({'name': key, 'value': config.cookies[key], 'domain': '.google.com'})
try:
driver.get('https://www.google.com')
wait = WebDriverWait(driver, 10)
if driver.title == 'Page Not Found':
extract(item, itemsToIgnore)
return
driver.find_element_by_xpath('...').click()
time.sleep(1)
button = wait.until(EC.element_to_be_clickable((By.XPATH, '...')))
button.click()
# and some extraction magic
except:
extract(item, itemsToIgnore) # try again
```
Everything is working fine and some test runs were successful. But sometimes the loop would start again before the pool has finished its work. In the logs I can see how the item checker returns true, but the extractor is not even starting and the main process begins the next iteration:
```
2019-12-23 00:21:16,614 [SpawnPoolWorker-6220] [INFO ] check returns true
2019-12-23 00:21:18,142 [MainProcess ] [DEBUG] starting next iteration
2019-12-23 00:21:39,630 [SpawnPoolWorker-6247] [INFO ] checking subcategory
```
Also I guess that the pool does not clean up somehow as I doubt the `SpawnPoolWorker-XXXX` number should be that high. It also freezes after ~1 hour. This may be connected to this issue. | 2019/12/23 | [
"https://Stackoverflow.com/questions/59451996",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6464358/"
] | I fixed the loop issue with either switching from Win7 to Win10 or switching from starmap to starmap\_async and calling get() on the result afterwards.
The freeze was most probably caused by calling requests.get() without passing a value for timeout. | You may try this for your pool jobs:
```
# starmap() returns a plain list, so close()/join() belong to the pool itself;
# starmap_async() returns an AsyncResult that you can wait on
poolJob1 = pool.starmap_async(getAllSubCategories, zip(range(numberOfPages, 0, -1), itertools.repeat(items)))
poolJob1.wait()
``` |
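A sketch of how the two fixes described in the first answer could look in the main loop (names reused from the question; treat it as illustrative rather than a drop-in patch):
```
# wait for the whole pool job before starting the next iteration
with multiprocessing.Pool(30) as pool:
    job = pool.starmap_async(checkAndExtract,
                             zip(items, itertools.repeat(ignoredItems), itertools.repeat(itemsToIgnore)))
    job.get()  # blocks until every task has finished (and re-raises worker exceptions)

# and give every HTTP call a timeout so a stuck connection cannot freeze a worker
response = requests.get(crawlUrl, verify=False, headers=config.headers,
                        cookies=config.cookies, timeout=30)
```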
54,785,148 | In Javascript, I can use [destructuring](https://medium.com/podiihq/destructuring-objects-in-javascript-4de5a3b0e4cb) to extract properties I want from a javascript objects in one liner. For example:
```js
currentUser = {
"id": 24,
"name": "John Doe",
"website": "http://mywebsite.com",
"description": "I am an actor",
"email": "example@example.com",
"gender": "M",
"phone_number": "+12345678",
"username": "johndoe",
"birth_date": "1991-02-23",
"followers": 46263,
"following": 345,
"like": 204,
"comments": 9
}
let { id, username } = this.currentUser;
console.log(id) // 24
console.log(username) //johndoe
```
Do we have something similar in Python for Python dicts and Python objects? Example of Python way of doing for python objects:
```py
class User:
def __init__(self, id, name, website, description, email, gender, phone_number, username):
self.id = id
self.name = name
self.website = website
self.description = description
self.email = email
self.gender = gender
self.phone_number = phone_number
self.username = username
current_user = User(24, "Jon Doe", "http://mywebsite.com", "I am an actor", "example@example.com", "M", "+12345678", "johndoe")
# This is a pain
id = current_user.id
email = current_user.email
gender = current_user.gender
username = current_user.username
print(id, email, gender, username)
```
Writing those 4 lines (as mentioned in example above) vs writing a single line (as mentioned below) to fetch values I need from an object is a real pain point.
```py
(id, email, gender, username) = current_user
``` | 2019/02/20 | [
"https://Stackoverflow.com/questions/54785148",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7707677/"
] | You can use [`operator`](https://docs.python.org/3/library/operator.html) module from standard library as follows:
```
from operator import attrgetter
id, email, gender, username = attrgetter('id', 'email', 'gender', 'username')(current_user)
print(id, email, gender, username)
```
In case you have a dict like the one from your example
```
currentUser = {
"id": 24,
"name": "John Doe",
"website": "http://mywebsite.com",
"description": "I am an actor",
"email": "example@example.com",
"gender": "M",
"phone_number": "+12345678",
"username": "johndoe",
"birth_date": "1991-02-23",
"followers": 46263,
"following": 345,
"like": 204,
"comments": 9
}
```
just use `itemgetter` instead of `attrgetter`:
```
from operator import itemgetter
id, email, gender, username = itemgetter('id', 'email', 'gender', 'username')(currentUser)
print(id, email, gender, username)
``` | #### (Ab)using the import system
Python already has a compact destructuring syntax in the form of `from x import y`. This can be re-purposed to destructure dicts and objects:
```py
import sys, types
class MyClass:
def __init__(self, a, b):
self.a = a
self.b = b
sys.modules["myobj"] = MyClass(1, 2)
from myobj import a, b
assert a + b == 3
mydict = {"c": 3, "d": 4}
sys.modules["mydict"] = types.SimpleNamespace(**mydict)
from mydict import c, d
assert c + d == 7
```
Cluttering `sys.modules` with our objects isn't very nice though.
#### Context manager
A more serious hack would be a context manager that temporarily adds a module to `sys.modules`, and makes sure the `__getattr__` method of the module points to the `__getattribute__` or `__getitem__` method of the object/dict in question.
That would let us do:
```py
mydict = {"a": 1, "b": 2}
with obj_as_module(mydict, "mydict"):
from mydict import a, b
assert a + b == 3
assert "mydict" not in sys.modules
```
Implementation:
```py
import sys, types
from contextlib import contextmanager
@contextmanager
def obj_as_module(obj, name):
"Temporarily load an object/dict as a module, to import its attributes/values"
module = types.ModuleType(name)
get = obj.__getitem__ if isinstance(obj, dict) else obj.__getattribute__
module.__getattr__ = lambda attr: get(attr) if attr != "__path__" else None
try:
if name in sys.modules:
raise Exception(f"Name '{name}' already in sys.modules")
else:
sys.modules[name] = module
yield module
finally:
if sys.modules[name] == module:
del sys.modules[name]
```
This was my first time playing around with the import system, and I have no idea if this might break something, or what the performance is like. But I think it is a valuable observation that the `import` statement already provides a very convenient destructuring syntax.
#### Replacing `sys.modules` entirely
Using an even more questionable hack, we can arrive at an even more compact syntax:
```py
with from_(mydict): import a, b
```
Implementation:
```py
import sys
@contextmanager
def from_(target):
"Temporarily replace the sys.modules dict with target dict or it's __dict__."
if not isinstance(target, dict):
target = target.__dict__
sysmodules = sys.modules
try:
sys.modules = target
yield
finally:
sys.modules = sysmodules
```
#### Class decorator
For working with classes we could use a decorator:
```py
def self_as_module(cls):
"For those who like to write self-less methods"
cls.as_module = lambda self: obj_as_module(self, "self")
return cls
```
Then we can unpack attributes without cluttering our methods with lines like `a = self.a`:
```py
@self_as_module
class MyClass:
def __init__(self):
self.a = 1
self.b = 2
def check(self):
with self.as_module():
from self import a, b
assert a + b == 3
MyClass().check()
```
For classes with many attributes and math-heavy methods, this is quite nice.
#### Keyword arguments
By using keyword arguments we can save on typing the string quotes, as well as loading multiple modules in one go:
```py
from contextlib import ExitStack
class kwargs_as_modules(ExitStack):
"If you like 'obj_as_module', but want to save even more typing"
def __init__(self, **kwargs):
super().__init__()
for name, obj in kwargs.items():
self.enter_context(obj_as_module(obj, name))
```
Test:
```
myobj = types.SimpleNamespace(x=1, y=2)
mydict = {"a": 1, "b": 2}
with kwargs_as_modules(one=myobj, two=mydict):
from one import a, b
from two import x, y
assert a == x, b == y
``` |
54,785,148 | In Javascript, I can use [destructuring](https://medium.com/podiihq/destructuring-objects-in-javascript-4de5a3b0e4cb) to extract properties I want from a javascript objects in one liner. For example:
```js
currentUser = {
"id": 24,
"name": "John Doe",
"website": "http://mywebsite.com",
"description": "I am an actor",
"email": "example@example.com",
"gender": "M",
"phone_number": "+12345678",
"username": "johndoe",
"birth_date": "1991-02-23",
"followers": 46263,
"following": 345,
"like": 204,
"comments": 9
}
let { id, username } = this.currentUser;
console.log(id) // 24
console.log(username) //johndoe
```
Do we have something similar in Python for Python dicts and Python objects? Example of Python way of doing for python objects:
```py
class User:
def __init__(self, id, name, website, description, email, gender, phone_number, username):
self.id = id
self.name = name
self.website = website
self.description = description
self.email = email
self.gender = gender
self.phone_number = phone_number
self.username = username
current_user = User(24, "Jon Doe", "http://mywebsite.com", "I am an actor", "example@example.com", "M", "+12345678", "johndoe")
# This is a pain
id = current_user.id
email = current_user.email
gender = current_user.gender
username = current_user.username
print(id, email, gender, username)
```
Writing those 4 lines (as mentioned in example above) vs writing a single line (as mentioned below) to fetch values I need from an object is a real pain point.
```py
(id, email, gender, username) = current_user
``` | 2019/02/20 | [
"https://Stackoverflow.com/questions/54785148",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7707677/"
] | Building off of other answers, I would recommend also using Python's `dataclasses` and use `__getitem__` to get specific fields:
```
from dataclasses import astuple, dataclass
@dataclass
class User:
id: int
name: str
website: str
description: str
email: str
gender: str
phone_number: str
username: str
def __iter__(self):
return iter(astuple(self))
def __getitem__(self, keys):
return iter(getattr(self, k) for k in keys)
current_user = User(id=24, name="Jon Doe", website="http://mywebsite.com", description="I am an actor", email="example@example.com", gender="M", phone_number="+12345678", username="johndoe")
# Access fields sequentially:
id, _, email, *_ = current_user
# Access fields out of order:
id, email, gender, username = current_user["id", "email", "gender", "username"]
``` | You can destructure a Python dictionary and extract properties by unpacking with the `.values()` method:
```
currentUser = {
"id": 24,
"name": "John Doe",
"website": "http://mywebsite.com",
"description": "I am an actor",
"email": "example@example.com",
"gender": "M",
"phone_number": "+12345678",
"username": "johndoe",
"birth_date": "1991-02-23",
"followers": 46263,
"following": 345,
"like": 204,
"comments": 9
}
id, _, _, _, _, _, _, username, *other = currentUser.values()
print('destructuring:', { 'id': id, 'username': username })
``` |
54,785,148 | In Javascript, I can use [destructuring](https://medium.com/podiihq/destructuring-objects-in-javascript-4de5a3b0e4cb) to extract properties I want from a javascript objects in one liner. For example:
```js
currentUser = {
"id": 24,
"name": "John Doe",
"website": "http://mywebsite.com",
"description": "I am an actor",
"email": "example@example.com",
"gender": "M",
"phone_number": "+12345678",
"username": "johndoe",
"birth_date": "1991-02-23",
"followers": 46263,
"following": 345,
"like": 204,
"comments": 9
}
let { id, username } = this.currentUser;
console.log(id) // 24
console.log(username) //johndoe
```
Do we have something similar in Python for Python dicts and Python objects? Example of Python way of doing for python objects:
```py
class User:
def __init__(self, id, name, website, description, email, gender, phone_number, username):
self.id = id
self.name = name
self.website = website
self.description = description
self.email = email
self.gender = gender
self.phone_number = phone_number
self.username = username
current_user = User(24, "Jon Doe", "http://mywebsite.com", "I am an actor", "example@example.com", "M", "+12345678", "johndoe")
# This is a pain
id = current_user.id
email = current_user.email
gender = current_user.gender
username = current_user.username
print(id, email, gender, username)
```
Writing those 4 lines (as mentioned in example above) vs writing a single line (as mentioned below) to fetch values I need from an object is a real pain point.
```py
(id, email, gender, username) = current_user
``` | 2019/02/20 | [
"https://Stackoverflow.com/questions/54785148",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7707677/"
] | In Python 3.10 you can do it using `match`:
```py
match current_user:
case User(id=id, username=username):
# In this block, id = current_user.id, username = current_user.username
```
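Since the fragment above is not runnable on its own, here is a minimal complete sketch (Python 3.10+, reusing the `User` class and `current_user` from the question; the `case _:` fallback is just for illustration):
```py
match current_user:
    case User(id=id, username=username):
        print(id, username)  # 24 johndoe
    case _:
        print("not a User")
```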
See <https://docs.python.org/3.10/tutorial/controlflow.html#match-statements> | Don't flatten the arguments in the first place. When you write an 8-ary function like you did with `User`, you're bound to make mistakes like passing arguments in the wrong order.
Which of the following will produce the User you intend?
1. `User(24, "Jon Doe", "http://mywebsite.com", "I am an actor", "example@example.com", "M", "+12345678", "johndoe")`
2. `User(24, "Jon Doe", "http://mywebsite.com", "I am an actor", "example@example.com", "+12345678", "M", "johndoe")`
Impossible to know! If your function takes a descriptor, you do not have this problem -
```
class User:
def __init__ (self, desc = {}):
self.desc = desc # whitelist items, if necessary
def __str__ (self):
# invent our own "destructuring" syntax
[ name, age, gender ] = \
destructure(self.desc, 'name', 'age', 'gender')
return f"{name} ({gender}) is {age} years old"
# create users with a "descriptor"
u = User({ 'age': 2, 'gender': 'M' })
v = User({ 'gender': 'F', 'age': 3 })
x = User({ 'gender': 'F', 'name': 'Alice', 'age': 4 })
print(u) # None (M) is 2 years old
print(v) # None (F) is 3 years old
print(x) # Alice (F) is 4 years old
```
We can define our own `destructure` as -
```
def destructure (d, *keys):
return [ d[k] if k in d else None for k in keys ]
```
This still could result in long chains, but the order is dependent on the caller, therefore it's not fragile like the 8-ary function in the original question -
```
[ name, age, gender ] = \
destructure(self.desc, 'name', 'age', 'gender')
# works the same as
[ gender, name, age ] = \
destructure(self.desc, 'gender', 'name', 'age')
```
---
Another option is to use keyword arguments -
```
class User:
def __init__ (self, **desc):
self.desc = desc # whitelist items, if necessary
def __str__ (self):
[ name, age, gender ] = \
destructure(self.desc, 'name', 'age', 'gender')
return f"{name} ({gender}) is {age} years old"
# create users with keyword arguments
u = User(age = 2, gender = 'M')
v = User(gender = 'F', age = 3)
x = User(gender = 'F', name = 'Alice', age = 4)
print(u) # None (M) is 2 years old
print(v) # None (F) is 3 years old
print(x) # Alice (F) is 4 years old
``` |
54,785,148 | In Javascript, I can use [destructuring](https://medium.com/podiihq/destructuring-objects-in-javascript-4de5a3b0e4cb) to extract properties I want from a javascript objects in one liner. For example:
```js
currentUser = {
"id": 24,
"name": "John Doe",
"website": "http://mywebsite.com",
"description": "I am an actor",
"email": "example@example.com",
"gender": "M",
"phone_number": "+12345678",
"username": "johndoe",
"birth_date": "1991-02-23",
"followers": 46263,
"following": 345,
"like": 204,
"comments": 9
}
let { id, username } = this.currentUser;
console.log(id) // 24
console.log(username) //johndoe
```
Do we have something similar in Python for Python dicts and Python objects? Example of Python way of doing for python objects:
```py
class User:
def __init__(self, id, name, website, description, email, gender, phone_number, username):
self.id = id
self.name = name
self.website = website
self.description = description
self.email = email
self.gender = gender
self.phone_number = phone_number
self.username = username
current_user = User(24, "Jon Doe", "http://mywebsite.com", "I am an actor", "example@example.com", "M", "+12345678", "johndoe")
# This is a pain
id = current_user.id
email = current_user.email
gender = current_user.gender
username = current_user.username
print(id, email, gender, username)
```
Writing those 4 lines (as mentioned in example above) vs writing a single line (as mentioned below) to fetch values I need from an object is a real pain point.
```py
(id, email, gender, username) = current_user
``` | 2019/02/20 | [
"https://Stackoverflow.com/questions/54785148",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7707677/"
] | In this respect JavaScript handles objects more conveniently than Python. You can build a method or function to replicate the functionality, but JavaScript makes it really easy.
Something similar in Python is the "packing/unpacking" functionality applied to dictionaries (JSON objects).
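For example, a rough sketch of the `**` unpacking idea, reusing a few keys from the question's dict (the `pick` helper is purely illustrative):
```py
currentUser = {"id": 24, "username": "johndoe", "email": "example@example.com"}

# ** unpacks the dict into keyword arguments; the helper keeps only what we need
def pick(id=None, username=None, **rest):
    return id, username

id, username = pick(**currentUser)
print(id, username)  # 24 johndoe
```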
You can find related documentation on the internet:
<https://www.geeksforgeeks.org/packing-and-unpacking-arguments-in-python/> | You can destructure a Python dictionary and extract properties by unpacking with the `.values()` method:
```
currentUser = {
"id": 24,
"name": "John Doe",
"website": "http://mywebsite.com",
"description": "I am an actor",
"email": "example@example.com",
"gender": "M",
"phone_number": "+12345678",
"username": "johndoe",
"birth_date": "1991-02-23",
"followers": 46263,
"following": 345,
"like": 204,
"comments": 9
}
id, _, _, _, _, _, _, username, *other = currentUser.values()
print('destructuring:', { 'id': id, 'username': username })
``` |
54,785,148 | In Javascript, I can use [destructuring](https://medium.com/podiihq/destructuring-objects-in-javascript-4de5a3b0e4cb) to extract properties I want from a javascript objects in one liner. For example:
```js
currentUser = {
"id": 24,
"name": "John Doe",
"website": "http://mywebsite.com",
"description": "I am an actor",
"email": "example@example.com",
"gender": "M",
"phone_number": "+12345678",
"username": "johndoe",
"birth_date": "1991-02-23",
"followers": 46263,
"following": 345,
"like": 204,
"comments": 9
}
let { id, username } = this.currentUser;
console.log(id) // 24
console.log(username) //johndoe
```
Do we have something similar in Python for Python dicts and Python objects? Example of Python way of doing for python objects:
```py
class User:
def __init__(self, id, name, website, description, email, gender, phone_number, username):
self.id = id
self.name = name
self.website = website
self.description = description
self.email = email
self.gender = gender
self.phone_number = phone_number
self.username = username
current_user = User(24, "Jon Doe", "http://mywebsite.com", "I am an actor", "example@example.com", "M", "+12345678", "johndoe")
# This is a pain
id = current_user.id
email = current_user.email
gender = current_user.gender
username = current_user.username
print(id, email, gender, username)
```
Writing those 4 lines (as mentioned in example above) vs writing a single line (as mentioned below) to fetch values I need from an object is a real pain point.
```py
(id, email, gender, username) = current_user
``` | 2019/02/20 | [
"https://Stackoverflow.com/questions/54785148",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7707677/"
] | In this respect JavaScript handles objects more conveniently than Python. You can build a method or function to replicate the functionality, but JavaScript makes it really easy.
Something similar in Python is the "packing/unpacking" functionality applied to dictionaries (JSON objects).
You can find related documentation on the internet:
<https://www.geeksforgeeks.org/packing-and-unpacking-arguments-in-python/> | #### (Ab)using the import system
Python already has a compact destructuring syntax in the form of `from x import y`. This can be re-purposed to destructure dicts and objects:
```py
import sys, types
class MyClass:
def __init__(self, a, b):
self.a = a
self.b = b
sys.modules["myobj"] = MyClass(1, 2)
from myobj import a, b
assert a + b == 3
mydict = {"c": 3, "d": 4}
sys.modules["mydict"] = types.SimpleNamespace(**mydict)
from mydict import c, d
assert c + d == 7
```
Cluttering `sys.modules` with our objects isn't very nice though.
#### Context manager
A more serious hack would be a context manager that temporarily adds a module to `sys.modules`, and makes sure the `__getattr__` method of the module points to the `__getattribute__` or `__getitem__` method of the object/dict in question.
That would let us do:
```py
mydict = {"a": 1, "b": 2}
with obj_as_module(mydict, "mydict"):
from mydict import a, b
assert a + b == 3
assert "mydict" not in sys.modules
```
Implementation:
```py
import sys, types
from contextlib import contextmanager
@contextmanager
def obj_as_module(obj, name):
"Temporarily load an object/dict as a module, to import its attributes/values"
module = types.ModuleType(name)
get = obj.__getitem__ if isinstance(obj, dict) else obj.__getattribute__
module.__getattr__ = lambda attr: get(attr) if attr != "__path__" else None
try:
if name in sys.modules:
raise Exception(f"Name '{name}' already in sys.modules")
else:
sys.modules[name] = module
yield module
finally:
if sys.modules[name] == module:
del sys.modules[name]
```
This was my first time playing around with the import system, and I have no idea if this might break something, or what the performance is like. But I think it is a valuable observation that the `import` statement already provides a very convenient destructuring syntax.
#### Replacing `sys.modules` entirely
Using an even more questionable hack, we can arrive at an even more compact syntax:
```py
with from_(mydict): import a, b
```
Implementation:
```py
import sys
@contextmanager
def from_(target):
"Temporarily replace the sys.modules dict with target dict or it's __dict__."
if not isinstance(target, dict):
target = target.__dict__
sysmodules = sys.modules
try:
sys.modules = target
yield
finally:
sys.modules = sysmodules
```
#### Class decorator
For working with classes we could use a decorator:
```py
def self_as_module(cls):
"For those who like to write self-less methods"
cls.as_module = lambda self: obj_as_module(self, "self")
return cls
```
Then we can unpack attributes without cluttering our methods with lines like `a = self.a`:
```py
@self_as_module
class MyClass:
def __init__(self):
self.a = 1
self.b = 2
def check(self):
with self.as_module():
from self import a, b
assert a + b == 3
MyClass().check()
```
For classes with many attributes and math-heavy methods, this is quite nice.
#### Keyword arguments
By using keyword arguments we can save on typing the string quotes, as well as loading multiple modules in one go:
```py
from contextlib import ExitStack
class kwargs_as_modules(ExitStack):
"If you like 'obj_as_module', but want to save even more typing"
def __init__(self, **kwargs):
super().__init__()
for name, obj in kwargs.items():
self.enter_context(obj_as_module(obj, name))
```
Test:
```
myobj = types.SimpleNamespace(x=1, y=2)
mydict = {"a": 1, "b": 2}
with kwargs_as_modules(one=myobj, two=mydict):
from one import a, b
from two import x, y
assert a == x, b == y
``` |
54,785,148 | In Javascript, I can use [destructuring](https://medium.com/podiihq/destructuring-objects-in-javascript-4de5a3b0e4cb) to extract properties I want from a javascript objects in one liner. For example:
```js
currentUser = {
"id": 24,
"name": "John Doe",
"website": "http://mywebsite.com",
"description": "I am an actor",
"email": "example@example.com",
"gender": "M",
"phone_number": "+12345678",
"username": "johndoe",
"birth_date": "1991-02-23",
"followers": 46263,
"following": 345,
"like": 204,
"comments": 9
}
let { id, username } = this.currentUser;
console.log(id) // 24
console.log(username) //johndoe
```
Do we have something similar in Python for Python dicts and Python objects? Example of Python way of doing for python objects:
```py
class User:
def __init__(self, id, name, website, description, email, gender, phone_number, username):
self.id = id
self.name = name
self.website = website
self.description = description
self.email = email
self.gender = gender
self.phone_number = phone_number
self.username = username
current_user = User(24, "Jon Doe", "http://mywebsite.com", "I am an actor", "example@example.com", "M", "+12345678", "johndoe")
# This is a pain
id = current_user.id
email = current_user.email
gender = current_user.gender
username = current_user.username
print(id, email, gender, username)
```
Writing those 4 lines (as mentioned in example above) vs writing a single line (as mentioned below) to fetch values I need from an object is a real pain point.
```py
(id, email, gender, username) = current_user
``` | 2019/02/20 | [
"https://Stackoverflow.com/questions/54785148",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7707677/"
] | Building off of other answers, I would recommend also using Python's `dataclasses` and use `__getitem__` to get specific fields:
```
from dataclasses import astuple, dataclass
@dataclass
class User:
id: int
name: str
website: str
description: str
email: str
gender: str
phone_number: str
username: str
def __iter__(self):
return iter(astuple(self))
def __getitem__(self, keys):
return iter(getattr(self, k) for k in keys)
current_user = User(id=24, name="Jon Doe", website="http://mywebsite.com", description="I am an actor", email="example@example.com", gender="M", phone_number="+12345678", username="johndoe")
# Access fields sequentially:
id, _, email, *_ = current_user
# Access fields out of order:
id, email, gender, username = current_user["id", "email", "gender", "username"]
``` | Don't flatten the arguments in the first place. When you write an 8-ary function like you did with `User`, you're bound to make mistakes like passing arguments in the wrong order.
Which of the following will produce the User you intend?
1. `User(24, "Jon Doe", "http://mywebsite.com", "I am an actor", "example@example.com", "M", "+12345678", "johndoe")`
2. `User(24, "Jon Doe", "http://mywebsite.com", "I am an actor", "example@example.com", "+12345678", "M", "johndoe")`
Impossible to know! If your function takes a descriptor, you do not have this problem -
```
class User:
def __init__ (self, desc = {}):
self.desc = desc # whitelist items, if necessary
def __str__ (self):
# invent our own "destructuring" syntax
[ name, age, gender ] = \
destructure(self.desc, 'name', 'age', 'gender')
return f"{name} ({gender}) is {age} years old"
# create users with a "descriptor"
u = User({ 'age': 2, 'gender': 'M' })
v = User({ 'gender': 'F', 'age': 3 })
x = User({ 'gender': 'F', 'name': 'Alice', 'age': 4 })
print(u) # None (M) is 2 years old
print(v) # None (F) is 3 years old
print(x) # Alice (F) is 4 years old
```
We can define our own `destructure` as -
```
def destructure (d, *keys):
return [ d[k] if k in d else None for k in keys ]
```
This still could result in long chains, but the order is dependent on the caller, therefore it's not fragile like the 8-ary function in the original question -
```
[ name, age, gender ] = \
destructure(self.desc, 'name', 'age', 'gender')
# works the same as
[ gender, name, age ] = \
destructure(self.desc, 'gender', 'name', 'age')
```
---
Another option is to use keyword arguments -
```
class User:
def __init__ (self, **desc):
self.desc = desc # whitelist items, if necessary
def __str__ (self):
[ name, age, gender ] = \
destructure(self.desc, 'name', 'age', 'gender')
return f"{name} ({gender}) is {age} years old"
# create users with keyword arguments
u = User(age = 2, gender = 'M')
v = User(gender = 'F', age = 3)
x = User(gender = 'F', name = 'Alice', age = 4)
print(u) # None (M) is 2 years old
print(v) # None (F) is 3 years old
print(x) # Alice (F) is 4 years old
``` |
54,785,148 | In Javascript, I can use [destructuring](https://medium.com/podiihq/destructuring-objects-in-javascript-4de5a3b0e4cb) to extract properties I want from a javascript objects in one liner. For example:
```js
currentUser = {
"id": 24,
"name": "John Doe",
"website": "http://mywebsite.com",
"description": "I am an actor",
"email": "example@example.com",
"gender": "M",
"phone_number": "+12345678",
"username": "johndoe",
"birth_date": "1991-02-23",
"followers": 46263,
"following": 345,
"like": 204,
"comments": 9
}
let { id, username } = this.currentUser;
console.log(id) // 24
console.log(username) //johndoe
```
Do we have something similar in Python for Python dicts and Python objects? Example of Python way of doing for python objects:
```py
class User:
def __init__(self, id, name, website, description, email, gender, phone_number, username):
self.id = id
self.name = name
self.website = website
self.description = description
self.email = email
self.gender = gender
self.phone_number = phone_number
self.username = username
current_user = User(24, "Jon Doe", "http://mywebsite.com", "I am an actor", "example@example.com", "M", "+12345678", "johndoe")
# This is a pain
id = current_user.id
email = current_user.email
gender = current_user.gender
username = current_user.username
print(id, email, gender, username)
```
Writing those 4 lines (as mentioned in example above) vs writing a single line (as mentioned below) to fetch values I need from an object is a real pain point.
```py
(id, email, gender, username) = current_user
``` | 2019/02/20 | [
"https://Stackoverflow.com/questions/54785148",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7707677/"
] | In Python 3.10 you can do it using `match`:
```py
match current_user:
case User(id=id, username=username):
# In this block, id = current_user.id, username = current_user.username
```
See <https://docs.python.org/3.10/tutorial/controlflow.html#match-statements> | In this respect JavaScript handles objects more conveniently than Python. You can build a method or function to replicate the functionality, but JavaScript makes it really easy.
Something similar in Python is the "packing/unpacking" functionality applied to dictionaries (JSON objects).
You can find related documentation on the internet:
<https://www.geeksforgeeks.org/packing-and-unpacking-arguments-in-python/> |
54,785,148 | In Javascript, I can use [destructuring](https://medium.com/podiihq/destructuring-objects-in-javascript-4de5a3b0e4cb) to extract properties I want from a javascript objects in one liner. For example:
```js
currentUser = {
"id": 24,
"name": "John Doe",
"website": "http://mywebsite.com",
"description": "I am an actor",
"email": "example@example.com",
"gender": "M",
"phone_number": "+12345678",
"username": "johndoe",
"birth_date": "1991-02-23",
"followers": 46263,
"following": 345,
"like": 204,
"comments": 9
}
let { id, username } = this.currentUser;
console.log(id) // 24
console.log(username) //johndoe
```
Do we have something similar in Python for Python dicts and Python objects? Example of Python way of doing for python objects:
```py
class User:
def __init__(self, id, name, website, description, email, gender, phone_number, username):
self.id = id
self.name = name
self.website = website
self.description = description
self.email = email
self.gender = gender
self.phone_number = phone_number
self.username = username
current_user = User(24, "Jon Doe", "http://mywebsite.com", "I am an actor", "example@example.com", "M", "+12345678", "johndoe")
# This is a pain
id = current_user.id
email = current_user.email
gender = current_user.gender
username = current_user.username
print(id, email, gender, username)
```
Writing those 4 lines (as mentioned in example above) vs writing a single line (as mentioned below) to fetch values I need from an object is a real pain point.
```py
(id, email, gender, username) = current_user
``` | 2019/02/20 | [
"https://Stackoverflow.com/questions/54785148",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7707677/"
] | You can use [`operator`](https://docs.python.org/3/library/operator.html) module from standard library as follows:
```
from operator import attrgetter
id, email, gender, username = attrgetter('id', 'email', 'gender', 'username')(current_user)
print(id, email, gender, username)
```
In case you have a dict like the one from your example
```
currentUser = {
"id": 24,
"name": "John Doe",
"website": "http://mywebsite.com",
"description": "I am an actor",
"email": "example@example.com",
"gender": "M",
"phone_number": "+12345678",
"username": "johndoe",
"birth_date": "1991-02-23",
"followers": 46263,
"following": 345,
"like": 204,
"comments": 9
}
```
just use `itemgetter` instead of `attrgetter`:
```
from operator import itemgetter
id, email, gender, username = itemgetter('id', 'email', 'gender', 'username')(currentUser)
print(id, email, gender, username)
``` | Building off of other answers, I would recommend also using Python's `dataclasses` and use `__getitem__` to get specific fields:
```
from dataclasses import astuple, dataclass
@dataclass
class User:
id: int
name: str
website: str
description: str
email: str
gender: str
phone_number: str
username: str
def __iter__(self):
return iter(astuple(self))
def __getitem__(self, keys):
return iter(getattr(self, k) for k in keys)
current_user = User(id=24, name="Jon Doe", website="http://mywebsite.com", description="I am an actor", email="example@example.com", gender="M", phone_number="+12345678", username="johndoe")
# Access fields sequentially:
id, _, email, *_ = current_user
# Access fields out of order:
id, email, gender, username = current_user["id", "email", "gender", "username"]
``` |
54,785,148 | In Javascript, I can use [destructuring](https://medium.com/podiihq/destructuring-objects-in-javascript-4de5a3b0e4cb) to extract properties I want from a javascript objects in one liner. For example:
```js
currentUser = {
"id": 24,
"name": "John Doe",
"website": "http://mywebsite.com",
"description": "I am an actor",
"email": "example@example.com",
"gender": "M",
"phone_number": "+12345678",
"username": "johndoe",
"birth_date": "1991-02-23",
"followers": 46263,
"following": 345,
"like": 204,
"comments": 9
}
let { id, username } = this.currentUser;
console.log(id) // 24
console.log(username) //johndoe
```
Do we have something similar in Python for Python dicts and Python objects? Example of Python way of doing for python objects:
```py
class User:
def __init__(self, id, name, website, description, email, gender, phone_number, username):
self.id = id
self.name = name
self.website = website
self.description = description
self.email = email
self.gender = gender
self.phone_number = phone_number
self.username = username
current_user = User(24, "Jon Doe", "http://mywebsite.com", "I am an actor", "example@example.com", "M", "+12345678", "johndoe")
# This is a pain
id = current_user.id
email = current_user.email
gender = current_user.gender
username = current_user.username
print(id, email, gender, username)
```
Writing those 4 lines (as mentioned in example above) vs writing a single line (as mentioned below) to fetch values I need from an object is a real pain point.
```py
(id, email, gender, username) = current_user
``` | 2019/02/20 | [
"https://Stackoverflow.com/questions/54785148",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7707677/"
] | You can use [`operator`](https://docs.python.org/3/library/operator.html) module from standard library as follows:
```
from operator import attrgetter
id, email, gender, username = attrgetter('id', 'email', 'gender', 'username')(current_user)
print(id, email, gender, username)
```
In case you have a dict like the one from your example
```
currentUser = {
"id": 24,
"name": "John Doe",
"website": "http://mywebsite.com",
"description": "I am an actor",
"email": "example@example.com",
"gender": "M",
"phone_number": "+12345678",
"username": "johndoe",
"birth_date": "1991-02-23",
"followers": 46263,
"following": 345,
"like": 204,
"comments": 9
}
```
just use `itemgetter` instead of `attrgetter`:
```
from operator import itemgetter
id, email, gender, username = itemgetter('id', 'email', 'gender', 'username')(currentUser)
print(id, email, gender, username)
``` | You can implement an `__iter__` method to enable unpacking:
```
class User:
def __init__(self, **data):
self.__dict__ = data
def __iter__(self):
yield from [getattr(self, i) for i in ('id', 'email', 'gender', 'username')]
current_user = User(**currentUser)
id, email, gender, username = current_user
print([id, email, gender, username])
```
Output:
```
[24, 'example@example.com', 'M', 'johndoe']
```
Edit: Python2 solution:
```
class User:
def __init__(self, **data):
self.__dict__ = data
def __iter__(self):
for i in ('id', 'email', 'gender', 'username'):
yield getattr(self, i)
```
Edit 2:
Getting select attributes:
```
class User:
def __init__(self, **data):
self.__dict__ = data
def __getattr__(self, _vals):
yield from [getattr(self, i) for i in _vals.split('_')]
current_user = User(**currentUser)
id, email, gender, username = current_user.id_email_gender_username
id, gender = current_user.id_gender
``` |
54,785,148 | In Javascript, I can use [destructuring](https://medium.com/podiihq/destructuring-objects-in-javascript-4de5a3b0e4cb) to extract properties I want from a javascript objects in one liner. For example:
```js
currentUser = {
"id": 24,
"name": "John Doe",
"website": "http://mywebsite.com",
"description": "I am an actor",
"email": "example@example.com",
"gender": "M",
"phone_number": "+12345678",
"username": "johndoe",
"birth_date": "1991-02-23",
"followers": 46263,
"following": 345,
"like": 204,
"comments": 9
}
let { id, username } = this.currentUser;
console.log(id) // 24
console.log(username) //johndoe
```
Do we have something similar in Python for Python dicts and Python objects? Example of Python way of doing for python objects:
```py
class User:
def __init__(self, id, name, website, description, email, gender, phone_number, username):
self.id = id
self.name = name
self.website = website
self.description = description
self.email = email
self.gender = gender
self.phone_number = phone_number
self.username = username
current_user = User(24, "Jon Doe", "http://mywebsite.com", "I am an actor", "example@example.com", "M", "+12345678", "johndoe")
# This is a pain
id = current_user.id
email = current_user.email
gender = current_user.gender
username = current_user.username
print(id, email, gender, username)
```
Writing those 4 lines (as mentioned in example above) vs writing a single line (as mentioned below) to fetch values I need from an object is a real pain point.
```py
(id, email, gender, username) = current_user
``` | 2019/02/20 | [
"https://Stackoverflow.com/questions/54785148",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7707677/"
] | Building off of other answers, I would recommend also using Python's `dataclasses` and use `__getitem__` to get specific fields:
```
from dataclasses import astuple, dataclass
@dataclass
class User:
id: int
name: str
website: str
description: str
email: str
gender: str
phone_number: str
username: str
def __iter__(self):
return iter(astuple(self))
def __getitem__(self, keys):
return iter(getattr(self, k) for k in keys)
current_user = User(id=24, name="Jon Doe", website="http://mywebsite.com", description="I am an actor", email="example@example.com", gender="M", phone_number="+12345678", username="johndoe")
# Access fields sequentially:
id, _, email, *_ = current_user
# Access fields out of order:
id, email, gender, username = current_user["id", "email", "gender", "username"]
``` | #### (Ab)using the import system
Python already has a compact destructuring syntax in the form of `from x import y`. This can be re-purposed to destructure dicts and objects:
```py
import sys, types
class MyClass:
def __init__(self, a, b):
self.a = a
self.b = b
sys.modules["myobj"] = MyClass(1, 2)
from myobj import a, b
assert a + b == 3
mydict = {"c": 3, "d": 4}
sys.modules["mydict"] = types.SimpleNamespace(**mydict)
from mydict import c, d
assert c + d == 7
```
Cluttering `sys.modules` with our objects isn't very nice though.
#### Context manager
A more serious hack would be a context manager that temporarily adds a module to `sys.modules`, and makes sure the `__getattr__` method of the module points to the `__getattribute__` or `__getitem__` method of the object/dict in question.
That would let us do:
```py
mydict = {"a": 1, "b": 2}
with obj_as_module(mydict, "mydict"):
from mydict import a, b
assert a + b == 3
assert "mydict" not in sys.modules
```
Implementation:
```py
import sys, types
from contextlib import contextmanager
@contextmanager
def obj_as_module(obj, name):
"Temporarily load an object/dict as a module, to import its attributes/values"
module = types.ModuleType(name)
get = obj.__getitem__ if isinstance(obj, dict) else obj.__getattribute__
module.__getattr__ = lambda attr: get(attr) if attr != "__path__" else None
try:
if name in sys.modules:
raise Exception(f"Name '{name}' already in sys.modules")
else:
sys.modules[name] = module
yield module
finally:
if sys.modules[name] == module:
del sys.modules[name]
```
This was my first time playing around with the import system, and I have no idea if this might break something, or what the performance is like. But I think it is a valuable observation that the `import` statement already provides a very convenient destructuring syntax.
#### Replacing `sys.modules` entirely
Using an even more questionable hack, we can arrive at an even more compact syntax:
```py
with from_(mydict): import a, b
```
Implementation:
```py
import sys
@contextmanager
def from_(target):
"Temporarily replace the sys.modules dict with target dict or it's __dict__."
if not isinstance(target, dict):
target = target.__dict__
sysmodules = sys.modules
try:
sys.modules = target
yield
finally:
sys.modules = sysmodules
```
#### Class decorator
For working with classes we could use a decorator:
```py
def self_as_module(cls):
"For those who like to write self-less methods"
cls.as_module = lambda self: obj_as_module(self, "self")
return cls
```
Then we can unpack attributes without cluttering our methods with lines like `a = self.a`:
```py
@self_as_module
class MyClass:
def __init__(self):
self.a = 1
self.b = 2
def check(self):
with self.as_module():
from self import a, b
assert a + b == 3
MyClass().check()
```
For classes with many attributes and math-heavy methods, this is quite nice.
#### Keyword arguments
By using keyword arguments we can save on typing the string quotes, as well as loading multiple modules in one go:
```py
from contextlib import ExitStack
class kwargs_as_modules(ExitStack):
"If you like 'obj_as_module', but want to save even more typing"
def __init__(self, **kwargs):
super().__init__()
for name, obj in kwargs.items():
self.enter_context(obj_as_module(obj, name))
```
Test:
```
myobj = types.SimpleNamespace(x=1, y=2)
mydict = {"a": 1, "b": 2}
with kwargs_as_modules(one=myobj, two=mydict):
from one import a, b
from two import x, y
assert a == x, b == y
``` |
57,322,104 | If I launch the Chrome webdriver from a Python function, why does it automatically close the browser window after execution, and how do I prevent this?
Here's the code:
```
from selenium import webdriver
def open_chrome_driver():
chrome_driver = webdriver.Chrome(executable_path=r'C:/Users/User/Documents/pythonfiles/chromedriver.exe')
return chrome_driver
open_chrome_driver()
``` | 2019/08/02 | [
"https://Stackoverflow.com/questions/57322104",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11841153/"
] | It is possible, but you get mixed values (numeric mixed with empty strings), so numeric operations fail:
```
df = df.replace(0, '')
```
So it is better to replace with missing values; all values stay numeric, because `NaN` is a float value:
```
df = df.replace(0, np.nan)
``` | You can also try [`df.mask()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.mask.html):
```
df=df.mask(df.eq(0),'')
```
Or:
```
df=df.mask(df.eq(0)) #this will replace 0 with NaN
```
Similarly [`df.where()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.where.html)
```
df=df.where(df.ne(0),'')
```
Or:
```
df=df.where(df.ne(0)) #this replaces with NaN
``` |
6,170,960 | Why does
```
Import["!python --version", "Text"]
```
work on the commandline but not in the frontend of Mathematica 8 (running on a Mac)?
Shell:
```
"Python 2.7.1 -- EPD 7.0-2 (64-bit)"
```
Frontend:
```
""
```
Update:
Ok, the path is not (really) the problem, as
```
Import["!which python", "Text"]
```
yields
```
"/usr/bin/python"
```
in the frontend and
```
"/Library/Frameworks/EPD64.framework/Versions/Current/bin/python"
```
in the shell (which is a different python version I have installed on my system). Nevertheless, neither
```
Import["!/usr/bin/python --version", "Text"]
```
nor
```
Import[
"!/Library/Frameworks/EPD64.framework/Versions/Current/bin/python --version",
"Text"]
```
yield the correct output in the frontend. But the usage of different shells in the frontend and the terminal version could be a hint to why Mathematica is misbehaving. | 2011/05/29 | [
"https://Stackoverflow.com/questions/6170960",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/280182/"
] | `python --version` writes its response to the standard error stream, but `Import` only captures the standard output stream. To see the response, redirect *stderr* to *stdout*. In most shells (even Windows), this can be achieved using the magic incantation `2>&1`:
```
Import["!python --version 2>&1", "Text"]
```
**Front-end Different From Command-line?**
The `Import` command *appears* to function differently when run in the command-line version of Mathematica, but appearances can be deceiving. Here is a transcript:
```
$ math
Mathematica 8.0 for Microsoft Windows (64-bit)
Copyright 1988-2011 Wolfram Research, Inc.
In[1]:= Import["!python --version","Text"]
Python 2.6.4
Out[1]=
```
Note that `Out[1]` is blank. The version string appears in the transcript, but this is due to the fact that the standard error stream is being displayed in the terminal window, interspersed with the standard output from Mathematica. This is even more clear if we assign the result to a variable and (attempt to) suppress the output using `;`:
```
In[2]:= v=Import["!python --version","Text"];
Python 2.6.4
In[3]:= v
Out[3]=
```
There shouldn't have been any output, but we still see the standard error stream displayed in the terminal window. `v` is blank, showing that the value of the `Import` expression was blank as well. | [WReach](https://stackoverflow.com/questions/6170960/very-weird-behavior-when-running-external-command-in-mathematica/6171556#6171556) has the answer to your problem. However, my point still remains that the instance of the shell invoked by mathematica does not have the path variable set correctly. Here's some info from mine:
![enter image description here](https://i.stack.imgur.com/NAvdm.png)
The shell is correct, but the path is the default path. So source my modified path and then invoke `python --version`:
![enter image description here](https://i.stack.imgur.com/kOURh.png) |
30,124,861 | I'm trying to read the distance from an ultrasonic sensor (HC-SR04), but the only values I get are 0 and 265.xx.
I am using a Raspberry Pi 2 with Windows 10 IoT Core installed.
I've written the code in C#.
This is the ultrasonic sensor class:
```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using System.Diagnostics;
using Windows.Devices.Gpio;
namespace RaspberryPi
{
class UcSensor
{
GpioController gpio = GpioController.GetDefault();
GpioPin TriggerPin;
GpioPin EchoPin;
//Constructor
public UcSensor(int TriggerPin, int EchoPin)
{
//Setting up GPIO pins
this.TriggerPin = gpio.OpenPin(TriggerPin);
this.EchoPin = gpio.OpenPin(EchoPin);
this.TriggerPin.SetDriveMode(GpioPinDriveMode.Output);
this.EchoPin.SetDriveMode(GpioPinDriveMode.Input);
this.TriggerPin.Write(GpioPinValue.Low);
}
public double GetDistance()
{
ManualResetEvent mre = new ManualResetEvent(false);
mre.WaitOne(500);
//Send pulse
this.TriggerPin.Write(GpioPinValue.High);
mre.WaitOne(TimeSpan.FromMilliseconds(0.01));
this.TriggerPin.Write(GpioPinValue.Low);
//Receive pulse
while (this.EchoPin.Read() == GpioPinValue.Low)
{
}
DateTime start = DateTime.Now;
while (this.EchoPin.Read() == GpioPinValue.High)
{
}
DateTime stop = DateTime.Now;
//Calculating distance
double timeBetween = (stop - start).TotalSeconds;
double distance = timeBetween * 17000;
return distance;
}
}
}
```
I've also written a script in Python to read the values from the ultrasonic sensor, and that works, but in C# I can't get it working.
At the bottom you can find the debug log:
>
> 'BACKGROUNDTASKHOST.EXE' (CoreCLR: DefaultDomain): Loaded 'C:\Program Files\WindowsApps\Microsoft.NET.CoreRuntime.1.0\_1.0.22816.1\_arm\_\_8wekyb3d8bbwe\mscorlib.ni.dll'. Skipped loading symbols. Module is optimized and the debugger option 'Just My Code' is enabled.
> 'BACKGROUNDTASKHOST.EXE' (CoreCLR: CoreCLR\_UAP\_Domain): Loaded 'C:\Users\DefaultAccount\AppData\Local\DevelopmentFiles\RaspiCarVS.Debug\_ARM.chris\RaspiCar.winmd'. Symbols loaded.
> 'BACKGROUNDTASKHOST.EXE' (CoreCLR: CoreCLR\_UAP\_Domain): Loaded 'C:\Users\DefaultAccount\AppData\Local\DevelopmentFiles\RaspiCarVS.Debug\_ARM.chris\System.Runtime.dll'. Skipped loading symbols. Module is optimized and the debugger option 'Just My Code' is enabled.
> 'BACKGROUNDTASKHOST.EXE' (CoreCLR: CoreCLR\_UAP\_Domain): Loaded 'C:\Users\DefaultAccount\AppData\Local\DevelopmentFiles\RaspiCarVS.Debug\_ARM.chris\WinMetadata\Windows.winmd'. Module was built without symbols.
> 'BACKGROUNDTASKHOST.EXE' (CoreCLR: CoreCLR\_UAP\_Domain): Loaded 'C:\Users\DefaultAccount\AppData\Local\DevelopmentFiles\RaspiCarVS.Debug\_ARM.chris\System.Runtime.InteropServices.WindowsRuntime.dll'. Module was built without symbols.
> 'BACKGROUNDTASKHOST.EXE' (CoreCLR: CoreCLR\_UAP\_Domain): Loaded 'C:\Users\DefaultAccount\AppData\Local\DevelopmentFiles\RaspiCarVS.Debug\_ARM.chris\System.Threading.dll'. Module was built without symbols.
> 'BACKGROUNDTASKHOST.EXE' (CoreCLR: CoreCLR\_UAP\_Domain): Loaded 'C:\Users\DefaultAccount\AppData\Local\DevelopmentFiles\RaspiCarVS.Debug\_ARM.chris\System.Diagnostics.Debug.dll'. Skipped loading symbols. Module is optimized and the debugger option 'Just My Code' is enabled.
> 'BACKGROUNDTASKHOST.EXE' (CoreCLR: CoreCLR\_UAP\_Domain): Loaded 'C:\Users\DefaultAccount\AppData\Local\DevelopmentFiles\RaspiCarVS.Debug\_ARM.chris\System.Runtime.WindowsRuntime.dll'. Skipped loading symbols. Module is optimized and the debugger option 'Just My Code' is enabled.
> Distance: 265.7457
> Distance: 0
> Distance: 0
> Distance: 0
> The program '[2508] BACKGROUNDTASKHOST.EXE' has exited with code 0 (0x0).
>
>
> | 2015/05/08 | [
"https://Stackoverflow.com/questions/30124861",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4879053/"
] | Thanks for the reactions. DateTime was the problem; I've now used the Stopwatch class and now it works. Thanks a lot!
The working class:
```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using System.Diagnostics;
using Windows.Devices.Gpio;
namespace RaspberryPi
{
class UcSensor
{
GpioController gpio = GpioController.GetDefault();
GpioPin TriggerPin;
GpioPin EchoPin;
public UcSensor(int TriggerPin, int EchoPin)
{
this.TriggerPin = gpio.OpenPin(TriggerPin);
this.EchoPin = gpio.OpenPin(EchoPin);
this.TriggerPin.SetDriveMode(GpioPinDriveMode.Output);
this.EchoPin.SetDriveMode(GpioPinDriveMode.Input);
this.TriggerPin.Write(GpioPinValue.Low);
}
public double GetDistance()
{
ManualResetEvent mre = new ManualResetEvent(false);
mre.WaitOne(500);
Stopwatch pulseLength = new Stopwatch();
//Send pulse
this.TriggerPin.Write(GpioPinValue.High);
mre.WaitOne(TimeSpan.FromMilliseconds(0.01));
this.TriggerPin.Write(GpioPinValue.Low);
//Receive pulse
while (this.EchoPin.Read() == GpioPinValue.Low)
{
}
pulseLength.Start();
while (this.EchoPin.Read() == GpioPinValue.High)
{
}
pulseLength.Stop();
//Calculating distance
TimeSpan timeBetween = pulseLength.Elapsed;
Debug.WriteLine(timeBetween.ToString());
double distance = timeBetween.TotalSeconds * 17000;
return distance;
}
}
}
``` | There is a better solution, as the currently proposed answer will occasionally lock while getting the distance. Here is an improved version of the code, which times out after 100 milliseconds (hardcoded). It returns 0.0 on timeout; use double? if you want to return null instead.
```
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Windows.Devices.Gpio;
namespace MTP.IoT.Devices.Sensors
{
public class HCSR04
{
private GpioPin triggerPin { get; set; }
private GpioPin echoPin { get; set; }
private Stopwatch timeWatcher;
public HCSR04(int triggerPin, int echoPin)
{
GpioController controller = GpioController.GetDefault();
timeWatcher = new Stopwatch();
//initialize trigger pin.
this.triggerPin = controller.OpenPin(triggerPin);
this.triggerPin.SetDriveMode(GpioPinDriveMode.Output);
this.triggerPin.Write(GpioPinValue.Low);
//initialize echo pin.
this.echoPin = controller.OpenPin(echoPin);
this.echoPin.SetDriveMode(GpioPinDriveMode.Input);
}
public double GetDistance()
{
ManualResetEvent mre = new ManualResetEvent(false);
mre.WaitOne(500);
timeWatcher.Reset();
//Send pulse
this.triggerPin.Write(GpioPinValue.High);
mre.WaitOne(TimeSpan.FromMilliseconds(0.01));
this.triggerPin.Write(GpioPinValue.Low);
return this.PulseIn(echoPin, GpioPinValue.High);
}
private double PulseIn(GpioPin echoPin, GpioPinValue value)
{
var t = Task.Run(() =>
{
//Receive pulse
while (this.echoPin.Read() != value)
{
}
timeWatcher.Start();
while (this.echoPin.Read() == value)
{
}
timeWatcher.Stop();
//Calculating distance
double distance = timeWatcher.Elapsed.TotalSeconds * 17000;
return distance;
});
bool didComplete = t.Wait(TimeSpan.FromMilliseconds(100));
if(didComplete)
{
return t.Result;
}
else
{
return 0.0;
}
}
}
}
``` |
48,019,851 | I am trying to send email from Python using an SMTP server, but it throws the
error below. How can I solve it?
I also have permission from Gmail to use this feature.
Here is the code:
```
import smtplib
content='Hello I am just checking email.'
mail=smtplib.SMTP('smtp.gmail.com',587)
mail.ehlo()
mail.starttls()
mail.login('My email','Mypassword')
mail.send('From email','destiation password',content)
mail.close()
```
This code throws this error
TypeError: send() takes 2 positional arguments but 4 were given
Please fix this error. | 2017/12/29 | [
"https://Stackoverflow.com/questions/48019851",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9140833/"
] | `sendmail` is what you should use:
```
smtplib.SMTP.sendmail(self, from_addr, to_addrs, msg, mail_options=[], rcpt_options=[])
```
>
> This command performs an entire mail transaction.
>
>
> The arguments are:
>
>
>
```
- from_addr : The address sending this mail.
- to_addrs : A list of addresses to send this mail to. A bare
string will be treated as a list with 1 address.
- msg : The message to send.
- mail_options : List of ESMTP options (such as 8bitmime) for the
mail command.
- rcpt_options : List of ESMTP options (such as DSN commands) for
all the rcpt commands.
``` | ```
import smtplib
import email
from email.MIMEMultipart import MIMEMultipart
from email.Utils import COMMASPACE
from email.MIMEBase import MIMEBase
from email.parser import Parser
from email.MIMEImage import MIMEImage
from email.MIMEText import MIMEText
from email.MIMEAudio import MIMEAudio
import mimetypes
def send(user, password, fromaddr, to, subject, body):
smtp_host = 'smtp.gmail.com'
smtp_port = 587
server = smtplib.SMTP()
server.connect(smtp_host,smtp_port)
server.ehlo()
server.starttls()
server.login(user, password)
msg = email.MIMEMultipart.MIMEMultipart()
msg['From'] = fromaddr
msg['To'] = email.Utils.COMMASPACE.join(to)
msg['Subject'] = subject
msg.attach(MIMEText(body))
server.sendmail(user,to,msg.as_string())
``` |
42,306,626 | I'm reading in a web page using beautifulSoup in python.
Many elements are spans, but with different values for their class attribute. e.g.
```
Value1 = property.findChild("span", {"class" : "search-result-Val1"}).text
Value2 = property.findChild("span", {"class" : "search-result-Val2"}).text
```
The issue is that if a user didn't enter a value for Val1 or Val2 when saving their item, those spans won't appear on the search results page, so I need to check whether a span tag exists with an attribute "class" set to a specific value, before I can try to extract its value.
How might I do this? The GetAttr() method doesn't help, as it just tells me whether the attribute exists, but I can't check the value of that attribute, which is the differentiating factor in this case. | 2017/02/17 | [
"https://Stackoverflow.com/questions/42306626",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2015461/"
] | We were eventually able to figure it out.
The `Marshal` module can *serialize* ruby objects to the database. It can also *deserialize* stuff from the database in order to recreate the objects.
When you serialize an object, there appears to be some low level info that goes in there too, like the mysql2 connection adapter.
When the Rails 5 app tries to *deserialize* the data, it throws this error because the constant that exists in the MySql2 connection adapter for Rails 4 does not exist in the Rails 5 version of that adapter.
Our work around was to just not store or retrieve any of the serialized objects from the sessions table for our rails 5 app. That did the trick.
If we had really needed to retrieve serialized objects from the sessions table for our Rails 5 app, then I think we would have had to come up with a custom solution.
Hope this helps others in the future! | It may be due to changes in the Rails 5.0 release.
As in the [Ruby on Rails 5.0 Release Notes](http://guides.rubyonrails.org/5_0_release_notes.html):
Removed support for the legacy mysql database adapter from core. Most users should be able to use mysql2. It will be converted to a separate gem when we find someone to maintain it.
Deprecated passing arguments to #tables - the #tables method of some adapters (mysql2, sqlite3) would return both tables and views while others (postgresql) just return tables. To make their behavior consistent, #tables will return only tables in the future.
The difference in the constant you described in your answer may be because they are going to change the gem for the mysql2 adapter. |
5,473,369 | A few weeks ago I wrote a CSV parser in Python and it was working great with the provided text file. But when we tried to test it with other files, the problems started.
The first was
>
> ValueError: empty string for float()
>
>
>
for a string like "313.44". The problem was that in unicode there was some empty bytes betwee the numbers '\x0'.
Ok I decoded to read it as an unicode with
>
> codecs.open(filename, 'r', 'utf-16')
>
>
>
And then all hell broke loose: missing BOM, problems with the line-ending characters (LF vs CR+LF), etc.
So can you give me a hint for a workaround for parsing Unicode and non-Unicode files when I do not know what the encoding is, whether a BOM is present, what the line endings are, etc.?
P.S. I am using Python 2.7 | 2011/03/29 | [
"https://Stackoverflow.com/questions/5473369",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/423283/"
] | The problem was solved using the csv module, as proposed by Daenyth. | It mainly depends on the Python version you are using, but these 2 links should help you out:
* <http://docs.python.org/howto/unicode.html>
* [Character reading from file in Python](https://stackoverflow.com/questions/147741/character-reading-from-file-in-python) |
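For the CSV question above, here is a minimal sketch of the csv-module approach (assuming Python 2.7 and a UTF-16-encoded input file; `read_utf16_csv` is an illustrative helper, not a standard-library function):
```
import codecs
import csv

def read_utf16_csv(path):
    # codecs.open decodes the stream (consuming the UTF-16 BOM), which removes the
    # stray null bytes; Python 2.7's csv module wants byte strings, so each decoded
    # line is re-encoded as UTF-8 before being handed to csv.reader.
    with codecs.open(path, 'r', encoding='utf-16') as f:
        utf8_lines = (line.encode('utf-8') for line in f)
        for row in csv.reader(utf8_lines):
            yield [cell.decode('utf-8') for cell in row]

# for row in read_utf16_csv('data.csv'):
#     value = float(row[0])   # "313.44" parses fine once the text is properly decoded
```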
13,210,596 | I set up a function that is meant to be called every minute to send an email. I call it every minute using the following:
```
import smtplib
def messages_emailed():
fromaddr = FROMADDRESS
toaddrs = TOADDRESS
msg = "this is a test message."
username = USER
password = PASSWORD
server = smtplib.SMTP('smtp.gmail.com:587')
server.starttls()
server.login(username,password)
server.sendmail(fromaddr, toaddrs, msg)
server.quit()
threading.Timer(60, messages_emailed).start() #runs func every min
messages_emailed()
```
This worked perfectly, but despite stopping the application in Terminal with `Control-C`, I am continuing to receive mail every minute, and refreshing the page my application is running on in my browser, `127.0.0.1:5000`, continues to display my application. I can edit my script to add a cancel statement, but saving did not make any changes, and trying to reload my application in the terminal returned an error
```
> * Running on ``http://127.0.0.1:5000/ ``Traceback (most recent call
> last): File "bit.py", line 79, in <module>
> app.run() File "/Library/Python/2.7/site-packages/Flask-0.9-py2.7.egg/flask/app.py",
> line 739, in run
> run_simple(host, port, self, **options) File "/Library/Python/2.7/site-packages/Werkzeug-0.8.3-py2.7.egg/werkzeug/serving.py",
> line 613, in run_simple
> test_socket.bind((hostname, port)) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py",
> line 224, in meth socket.error: [Errno 48] Address already in use
```
For now, I have stopped the influx of emails by deleting the mail account I used to send messages from. However, I am wondering what a long-term solution would look like: something that I can ideally stop from the terminal, or that stops executing when the program does. Research has suggested using `sys.exit(0)`, though I do not know where in my program to place this or when it will quit the function.
Any help would be greatly appreciated. | 2012/11/03 | [
"https://Stackoverflow.com/questions/13210596",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1438105/"
] | First, you have to check your formatting.
If you want to use a thread, you should write your own threading manager that encapsulates the `start()` and `stop()` methods for your threads.
`thread1 = threading.Timer(60, sender)` and then `thread1.start()`
For stopping, just call `thread1.cancel()`. | It seems that your script has started a new process that reruns the email-sending function periodically. You may check the active processes by running `ps aux | grep python`.
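A minimal sketch of a cancellable repeating timer for the question above (with a placeholder `send_email()` standing in for the questioner's smtplib code):
```
import threading

def send_email():
    # placeholder for the actual smtplib code from the question
    print("email sent")

timer = None

def schedule_email():
    global timer
    send_email()
    timer = threading.Timer(60, schedule_email)  # reschedule the next run in 60 seconds
    timer.daemon = True   # daemon timers die when the main process exits (e.g. on Ctrl-C)
    timer.start()

schedule_email()
# ...later, to stop the loop cleanly:
# timer.cancel()
```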
28,196,103 | I've been learning basic python, but I am new to NLTK. I want to use nltk to extract hyponyms for a given list of words. It works fine when I enter every term manually, but it does not seem to work when I try to iterate through items of a list.
This works:
```
from nltk.corpus import wordnet as wn
syn_sets = wn.synsets("car")
for syn_set in syn_sets:
print(syn_set, syn_set.lemma_names())
print(syn_set.hyponyms())
```
But how do I get Wordnet methods to work with a list of items like
```
token = ["cat", "dog", "car"]
syn_sets = wn.synsets((*get each item from the list*))
```
in a loop?
Thank you! | 2015/01/28 | [
"https://Stackoverflow.com/questions/28196103",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4503550/"
] | List comprehensions to the rescue!
Totally possible, even using very similar syntax to what you had before. Python has a construct known as a list comprehension made exactly for this application. Basically, it's a functional syntax for inline for loops that tends to be cleaner and more robust, with slightly lower overhead.
Example:
```
tokens = ["cat", "dog", "car"]
syn_sets = [wn.synsets(token) for token in tokens]
```
This will even scale to slightly more complex data structures pretty easily, for instance:
```
# syn_sets is a list of lists (one list of synsets per token), so nest the comprehension
split_syn_sets = [[(s.lemma_names(), s.hyponyms()) for s in token_syns] for token_syns in syn_sets]
```
Not sure if that's exactly what you're looking for, but it should generalize to whatever similar thing you are trying to do.
If it's useful I asked a question about grabbing all related synsets [here](https://stackoverflow.com/questions/11005529/general-synonym-and-part-of-speech-processing-using-nltk) a while ago. | I believe you have no choice but to loop through your words. I modified your code to have an outer loop, and it seems to work:
```
from nltk.corpus import wordnet as wn
tokens = ["cat", "dog", "car"]
for token in tokens:
syn_sets = wn.synsets(token)
for syn_set in syn_sets:
print(syn_set, syn_set.lemma_names())
print(syn_set.hyponyms())
```
Here is the output:
```
(Synset('cat.n.01'), [u'cat', u'true_cat'])
[Synset('domestic_cat.n.01'), Synset('wildcat.n.03')]
(Synset('guy.n.01'), [u'guy', u'cat', u'hombre', u'bozo'])
[Synset('sod.n.04')]
...
(Synset('cable_car.n.01'), [u'cable_car', u'car'])
[]
``` |
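Building on the answers above, here is a minimal sketch (the dictionary name is just illustrative) that maps each token to the hyponyms of all of its synsets:
```
from nltk.corpus import wordnet as wn

tokens = ["cat", "dog", "car"]

# For each token, flatten the hyponyms of every synset it belongs to.
hyponyms_by_token = {
    token: [hyp for syn in wn.synsets(token) for hyp in syn.hyponyms()]
    for token in tokens
}

for token, hyponyms in hyponyms_by_token.items():
    print(token, [h.name() for h in hyponyms][:5])  # show a few hyponym names per token
```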
24,380,442 | I'm running **`pip install numpy`** on `Windows 7 64-bit` and I'm getting **`error: Unable to find vcvarsall.bat`**
I've already installed some packages with pip, e.g. `pyzmq`, `pysolr`, `enum`, etc., so I really don't know what went wrong.
The only thing that might be different is that I've installed `.NET framework version 4.5` -> I suspect that could be the reason, because in some posts I saw it might have to do with `Visual Studio` (which I didn't install).
The full error/traceback:
```
Downloading/unpacking numpy
Running setup.py (path:c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\setup.py) egg_info for package numpy
Running from numpy source directory.
warning: no files found matching 'tools\py3tool.py'
warning: no files found matching '*' under directory 'doc\f2py'
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '*.pyo' found anywhere in distribution
warning: no previously-included files matching '*.pyd' found anywhere in distribution
Installing collected packages: numpy
Running setup.py install for numpy
non-existing path in 'numpy\\distutils': 'site.cfg'
F2PY Version 2
blas_opt_info:
blas_mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
openblas_info:
libraries not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_info:
libraries f77blas,cblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_info:
libraries blas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_src_info:
NOT AVAILABLE
NOT AVAILABLE
non-existing path in 'numpy\\lib': 'benchmarks'
lapack_opt_info:
lapack_mkl_info:
mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries ptf77blas,ptcblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_threads_info
NOT AVAILABLE
atlas_info:
libraries f77blas,cblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries f77blas,cblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries f77blas,cblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_info
NOT AVAILABLE
lapack_info:
libraries lapack not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
lapack_src_info:
NOT AVAILABLE
NOT AVAILABLE
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
build_src
building py_modules sources
building library "npymath" sources
No module named msvccompiler in numpy.distutils; trying from distutils
Running from numpy source directory.
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1521: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1530: UserWarning:
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable.
warnings.warn(BlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1533: UserWarning:
Blas (http://www.netlib.org/blas/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable.
warnings.warn(BlasSrcNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1427: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1438: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
warnings.warn(LapackNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1441: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
warnings.warn(LapackSrcNotFoundError.__doc__)
C:\Python27\lib\distutils\dist.py:267: UserWarning: Unknown distribution option: 'define_macros'
warnings.warn(msg)
error: Unable to find vcvarsall.bat
Complete output from command C:\Python27\python.exe -c "import setuptools, tokenize;__file__='c:\\users\\zebra\\appdata\\local\\temp\\pip_build_zebra\\numpy\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\zebra\appdata\local\temp\pip-py_oa_-record\install-record.txt --single-version-externally-managed --compile:
non-existing path in 'numpy\\distutils': 'site.cfg'
F2PY Version 2
blas_opt_info:
blas_mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
openblas_info:
libraries not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_info:
libraries f77blas,cblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_info:
libraries blas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_src_info:
NOT AVAILABLE
NOT AVAILABLE
non-existing path in 'numpy\\lib': 'benchmarks'
lapack_opt_info:
lapack_mkl_info:
mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries ptf77blas,ptcblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_threads_info
NOT AVAILABLE
atlas_info:
libraries f77blas,cblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries f77blas,cblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries f77blas,cblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_info
NOT AVAILABLE
lapack_info:
libraries lapack not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
lapack_src_info:
NOT AVAILABLE
NOT AVAILABLE
running install
running build
running config_cc
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
running config_fc
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
running build_src
build_src
building py_modules sources
creating build
creating build\src.win-amd64-2.7
creating build\src.win-amd64-2.7\numpy
creating build\src.win-amd64-2.7\numpy\distutils
building library "npymath" sources
No module named msvccompiler in numpy.distutils; trying from distutils
Running from numpy source directory.
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1521: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1530: UserWarning:
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable.
warnings.warn(BlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1533: UserWarning:
Blas (http://www.netlib.org/blas/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable.
warnings.warn(BlasSrcNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1427: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1438: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
warnings.warn(LapackNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1441: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
warnings.warn(LapackSrcNotFoundError.__doc__)
C:\Python27\lib\distutils\dist.py:267: UserWarning: Unknown distribution option: 'define_macros'
warnings.warn(msg)
error: Unable to find vcvarsall.bat
----------------------------------------
Cleaning up...
Command C:\Python27\python.exe -c "import setuptools, tokenize;__file__='c:\\users\\zebra\\appdata\\local\\temp\\pip_build_zebra\\numpy\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\zebra\appdata\local\temp\pip-py_oa_-record\install-record.txt --single-version-externally-managed --compile failed with error code 1 in c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy
Storing debug log for failure in C:\Users\zebra\pip\pip.log
``` | 2014/06/24 | [
"https://Stackoverflow.com/questions/24380442",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2580393/"
] | Had the same problem on my 64-Bit Windows. The issue was resolved by installing the [Microsoft Visual C++ Compiler for Python 2.7](http://www.microsoft.com/en-us/download/confirmation.aspx?id=44266), which is described by Microsoft as:
>
> This package contains the compiler and set of system headers necessary for producing binary wheels for Python packages. A binary wheel of a Python package can then be installed on any Windows system without requiring access to a C compiler.
>
>
> The typical error message you will receive if you need this compiler package is **Unable to find vcvarsall.bat**
>
>
> ...
>
>
>
Works like a charm. | You need to download and install the vcsetup.exe (Visual C++ 2008 Express Edition) installer,
and then add the path of the newly created vcvarsall.bat file to the "PATH" environment variable.
Make sure there are no special symbols in your PATH environment variable after adding it.
24,380,442 | I'm running **`pip install numpy`** on `Windows 7 64-bit` and I'm getting **`error: Unable to find vcvarsall.bat`**
I've already installed some packages with pip, e.g. `pyzmq`, `pysolr`, `enum`, etc., so I really don't know what went wrong.
The only thing that might be different is that I've installed `.NET framework version 4.5` -> I suspect that could be the reason, because in some posts I saw it might have to do with `Visual Studio` (which I didn't install).
The full error/traceback:
```
Downloading/unpacking numpy
Running setup.py (path:c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\setup.py) egg_info for package numpy
Running from numpy source directory.
warning: no files found matching 'tools\py3tool.py'
warning: no files found matching '*' under directory 'doc\f2py'
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '*.pyo' found anywhere in distribution
warning: no previously-included files matching '*.pyd' found anywhere in distribution
Installing collected packages: numpy
Running setup.py install for numpy
non-existing path in 'numpy\\distutils': 'site.cfg'
F2PY Version 2
blas_opt_info:
blas_mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
openblas_info:
libraries not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_info:
libraries f77blas,cblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_info:
libraries blas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_src_info:
NOT AVAILABLE
NOT AVAILABLE
non-existing path in 'numpy\\lib': 'benchmarks'
lapack_opt_info:
lapack_mkl_info:
mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries ptf77blas,ptcblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_threads_info
NOT AVAILABLE
atlas_info:
libraries f77blas,cblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries f77blas,cblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries f77blas,cblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_info
NOT AVAILABLE
lapack_info:
libraries lapack not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
lapack_src_info:
NOT AVAILABLE
NOT AVAILABLE
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
build_src
building py_modules sources
building library "npymath" sources
No module named msvccompiler in numpy.distutils; trying from distutils
Running from numpy source directory.
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1521: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1530: UserWarning:
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable.
warnings.warn(BlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1533: UserWarning:
Blas (http://www.netlib.org/blas/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable.
warnings.warn(BlasSrcNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1427: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1438: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
warnings.warn(LapackNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1441: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
warnings.warn(LapackSrcNotFoundError.__doc__)
C:\Python27\lib\distutils\dist.py:267: UserWarning: Unknown distribution option: 'define_macros'
warnings.warn(msg)
error: Unable to find vcvarsall.bat
Complete output from command C:\Python27\python.exe -c "import setuptools, tokenize;__file__='c:\\users\\zebra\\appdata\\local\\temp\\pip_build_zebra\\numpy\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\zebra\appdata\local\temp\pip-py_oa_-record\install-record.txt --single-version-externally-managed --compile:
non-existing path in 'numpy\\distutils': 'site.cfg'
F2PY Version 2
blas_opt_info:
blas_mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
openblas_info:
libraries not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_info:
libraries f77blas,cblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_info:
libraries blas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_src_info:
NOT AVAILABLE
NOT AVAILABLE
non-existing path in 'numpy\\lib': 'benchmarks'
lapack_opt_info:
lapack_mkl_info:
mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries ptf77blas,ptcblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_threads_info
NOT AVAILABLE
atlas_info:
libraries f77blas,cblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries f77blas,cblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries f77blas,cblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_info
NOT AVAILABLE
lapack_info:
libraries lapack not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
lapack_src_info:
NOT AVAILABLE
NOT AVAILABLE
running install
running build
running config_cc
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
running config_fc
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
running build_src
build_src
building py_modules sources
creating build
creating build\src.win-amd64-2.7
creating build\src.win-amd64-2.7\numpy
creating build\src.win-amd64-2.7\numpy\distutils
building library "npymath" sources
No module named msvccompiler in numpy.distutils; trying from distutils
Running from numpy source directory.
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1521: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1530: UserWarning:
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable.
warnings.warn(BlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1533: UserWarning:
Blas (http://www.netlib.org/blas/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable.
warnings.warn(BlasSrcNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1427: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1438: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
warnings.warn(LapackNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1441: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
warnings.warn(LapackSrcNotFoundError.__doc__)
C:\Python27\lib\distutils\dist.py:267: UserWarning: Unknown distribution option: 'define_macros'
warnings.warn(msg)
error: Unable to find vcvarsall.bat
----------------------------------------
Cleaning up...
Command C:\Python27\python.exe -c "import setuptools, tokenize;__file__='c:\\users\\zebra\\appdata\\local\\temp\\pip_build_zebra\\numpy\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\zebra\appdata\local\temp\pip-py_oa_-record\install-record.txt --single-version-externally-managed --compile failed with error code 1 in c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy
Storing debug log for failure in C:\Users\zebra\pip\pip.log
``` | 2014/06/24 | [
"https://Stackoverflow.com/questions/24380442",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2580393/"
] | Maybe you want to use the prebuilt binaries here: <http://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy>? Using pip likely won't yield any good results. The reason is that numpy doesn't compile with Visual Studio at all and needs to be built with gcc.
If you still really want to compile numpy, you need to set up a Linux machine with Vagrant and follow the official build instructions here: <https://github.com/juliantaylor/numpy-vendor> | A procedure which works on my Windows 7, 64-bit, Python 2.7 setup is to download the binaries of [numpy](http://sourceforge.net/projects/numpy/files/NumPy/) directly from SourceForge, e.g. numpy-1.9.2-win32-superpack-python2.7.exe.
Then extract the EXE files, for example with 7z. There will be three EXE files, e.g. numpy-1.9.2-nosse.exe, numpy-1.9.2-sse2.exe, numpy-1.9.2-sse3.exe.
Choose the appropriate one; on more or less modern processors SSE3 will be fine.
SciPy works with the same procedure: [SF-link](http://sourceforge.net/projects/scipy/)
If binaries are OK for you, the only disadvantage is that you cannot update the packages via pip. |
24,380,442 | I'm running **`pip install numpy`** on `Windows 7 64-bit` and I'm getting **`error: Unable to find vcvarsall.bat`**
I've already installed some packages with pip, e.g. `pyzmq`, `pysolr`, `enum`, etc., so I really don't know what went wrong.
The only thing that might be different is that I've installed `.NET framework version 4.5` -> I suspect that could be the reason, because in some posts I saw it might have to do with `Visual Studio` (which I didn't install).
The full error/traceback:
```
Downloading/unpacking numpy
Running setup.py (path:c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\setup.py) egg_info for package numpy
Running from numpy source directory.
warning: no files found matching 'tools\py3tool.py'
warning: no files found matching '*' under directory 'doc\f2py'
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '*.pyo' found anywhere in distribution
warning: no previously-included files matching '*.pyd' found anywhere in distribution
Installing collected packages: numpy
Running setup.py install for numpy
non-existing path in 'numpy\\distutils': 'site.cfg'
F2PY Version 2
blas_opt_info:
blas_mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
openblas_info:
libraries not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_info:
libraries f77blas,cblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_info:
libraries blas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_src_info:
NOT AVAILABLE
NOT AVAILABLE
non-existing path in 'numpy\\lib': 'benchmarks'
lapack_opt_info:
lapack_mkl_info:
mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries ptf77blas,ptcblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_threads_info
NOT AVAILABLE
atlas_info:
libraries f77blas,cblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries f77blas,cblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries f77blas,cblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_info
NOT AVAILABLE
lapack_info:
libraries lapack not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
lapack_src_info:
NOT AVAILABLE
NOT AVAILABLE
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
build_src
building py_modules sources
building library "npymath" sources
No module named msvccompiler in numpy.distutils; trying from distutils
Running from numpy source directory.
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1521: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1530: UserWarning:
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable.
warnings.warn(BlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1533: UserWarning:
Blas (http://www.netlib.org/blas/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable.
warnings.warn(BlasSrcNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1427: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1438: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
warnings.warn(LapackNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1441: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
warnings.warn(LapackSrcNotFoundError.__doc__)
C:\Python27\lib\distutils\dist.py:267: UserWarning: Unknown distribution option: 'define_macros'
warnings.warn(msg)
error: Unable to find vcvarsall.bat
Complete output from command C:\Python27\python.exe -c "import setuptools, tokenize;__file__='c:\\users\\zebra\\appdata\\local\\temp\\pip_build_zebra\\numpy\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\zebra\appdata\local\temp\pip-py_oa_-record\install-record.txt --single-version-externally-managed --compile:
non-existing path in 'numpy\\distutils': 'site.cfg'
F2PY Version 2
blas_opt_info:
blas_mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
openblas_info:
libraries not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_info:
libraries f77blas,cblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_info:
libraries blas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_src_info:
NOT AVAILABLE
NOT AVAILABLE
non-existing path in 'numpy\\lib': 'benchmarks'
lapack_opt_info:
lapack_mkl_info:
mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries ptf77blas,ptcblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_threads_info
NOT AVAILABLE
atlas_info:
libraries f77blas,cblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries f77blas,cblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries f77blas,cblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_info
NOT AVAILABLE
lapack_info:
libraries lapack not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
lapack_src_info:
NOT AVAILABLE
NOT AVAILABLE
running install
running build
running config_cc
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
running config_fc
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
running build_src
build_src
building py_modules sources
creating build
creating build\src.win-amd64-2.7
creating build\src.win-amd64-2.7\numpy
creating build\src.win-amd64-2.7\numpy\distutils
building library "npymath" sources
No module named msvccompiler in numpy.distutils; trying from distutils
Running from numpy source directory.
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1521: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1530: UserWarning:
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable.
warnings.warn(BlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1533: UserWarning:
Blas (http://www.netlib.org/blas/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable.
warnings.warn(BlasSrcNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1427: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1438: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
warnings.warn(LapackNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1441: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
warnings.warn(LapackSrcNotFoundError.__doc__)
C:\Python27\lib\distutils\dist.py:267: UserWarning: Unknown distribution option: 'define_macros'
warnings.warn(msg)
error: Unable to find vcvarsall.bat
----------------------------------------
Cleaning up...
Command C:\Python27\python.exe -c "import setuptools, tokenize;__file__='c:\\users\\zebra\\appdata\\local\\temp\\pip_build_zebra\\numpy\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\zebra\appdata\local\temp\pip-py_oa_-record\install-record.txt --single-version-externally-managed --compile failed with error code 1 in c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy
Storing debug log for failure in C:\Users\zebra\pip\pip.log
``` | 2014/06/24 | [
"https://Stackoverflow.com/questions/24380442",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2580393/"
] | You need to download and install the vcsetup.exe (Visual C++ 2008 Express Edition) installer,
and then add the path of the newly created vcvarsall.bat file to the "PATH" environment variable.
Make sure there are no special symbols in your PATH environment variable after adding it. | For 64-bit systems, this problem can be resolved by the following 5 steps (taken from <http://springflex.blogspot.in/2014/02/how-to-fix-valueerror-when-trying-to.html>):
1. Download vcsetup.exe (the Visual Studio 2008 Express installer) and install it from:
go.microsoft.com/?linkid=7729279
2. Install the Microsoft Windows SDK from:
<http://www.microsoft.com/en-us/download/details.aspx?id=24826>
select web setup link under installation instructions to get an installer.
3. Run the installer file
unselect samples and documentation if they are not required
4. Create a copy of the batch file "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin\vcvars64.bat" and rename it to "vcvarsamd64.bat" in the same folder.
5. Copy the file "vcvarsamd64.bat" and paste it in the folder "C:\Program Files (x86)\Microsoft Visual Studio 9.0/VC/bin/amd64" |
24,380,442 | I'm running **`pip install numpy`** on `Windows 7 64-bit` and I'm getting **`error: Unable to find vcvarsall.bat`**
I've already installed some packages with pip, e.g. `pyzmq`, `pysolr`, `enum`, etc., so I really don't know what went wrong.
The only thing that might be different is that I've installed `.NET framework version 4.5` -> I suspect that could be the reason, because in some posts I saw it might have to do with `Visual Studio` (which I didn't install).
The full error/traceback:
```
Downloading/unpacking numpy
Running setup.py (path:c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\setup.py) egg_info for package numpy
Running from numpy source directory.
warning: no files found matching 'tools\py3tool.py'
warning: no files found matching '*' under directory 'doc\f2py'
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '*.pyo' found anywhere in distribution
warning: no previously-included files matching '*.pyd' found anywhere in distribution
Installing collected packages: numpy
Running setup.py install for numpy
non-existing path in 'numpy\\distutils': 'site.cfg'
F2PY Version 2
blas_opt_info:
blas_mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
openblas_info:
libraries not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_info:
libraries f77blas,cblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_info:
libraries blas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_src_info:
NOT AVAILABLE
NOT AVAILABLE
non-existing path in 'numpy\\lib': 'benchmarks'
lapack_opt_info:
lapack_mkl_info:
mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries ptf77blas,ptcblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_threads_info
NOT AVAILABLE
atlas_info:
libraries f77blas,cblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries f77blas,cblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries f77blas,cblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_info
NOT AVAILABLE
lapack_info:
libraries lapack not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
lapack_src_info:
NOT AVAILABLE
NOT AVAILABLE
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
build_src
building py_modules sources
building library "npymath" sources
No module named msvccompiler in numpy.distutils; trying from distutils
Running from numpy source directory.
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1521: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1530: UserWarning:
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable.
warnings.warn(BlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1533: UserWarning:
Blas (http://www.netlib.org/blas/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable.
warnings.warn(BlasSrcNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1427: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1438: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
warnings.warn(LapackNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1441: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
warnings.warn(LapackSrcNotFoundError.__doc__)
C:\Python27\lib\distutils\dist.py:267: UserWarning: Unknown distribution option: 'define_macros'
warnings.warn(msg)
error: Unable to find vcvarsall.bat
Complete output from command C:\Python27\python.exe -c "import setuptools, tokenize;__file__='c:\\users\\zebra\\appdata\\local\\temp\\pip_build_zebra\\numpy\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\zebra\appdata\local\temp\pip-py_oa_-record\install-record.txt --single-version-externally-managed --compile:
non-existing path in 'numpy\\distutils': 'site.cfg'
F2PY Version 2
blas_opt_info:
blas_mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
openblas_info:
libraries not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_info:
libraries f77blas,cblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_info:
libraries blas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_src_info:
NOT AVAILABLE
NOT AVAILABLE
non-existing path in 'numpy\\lib': 'benchmarks'
lapack_opt_info:
lapack_mkl_info:
mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries ptf77blas,ptcblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_threads_info
NOT AVAILABLE
atlas_info:
libraries f77blas,cblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries f77blas,cblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries f77blas,cblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_info
NOT AVAILABLE
lapack_info:
libraries lapack not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
lapack_src_info:
NOT AVAILABLE
NOT AVAILABLE
running install
running build
running config_cc
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
running config_fc
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
running build_src
build_src
building py_modules sources
creating build
creating build\src.win-amd64-2.7
creating build\src.win-amd64-2.7\numpy
creating build\src.win-amd64-2.7\numpy\distutils
building library "npymath" sources
No module named msvccompiler in numpy.distutils; trying from distutils
Running from numpy source directory.
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1521: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1530: UserWarning:
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable.
warnings.warn(BlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1533: UserWarning:
Blas (http://www.netlib.org/blas/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable.
warnings.warn(BlasSrcNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1427: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1438: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
warnings.warn(LapackNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1441: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
warnings.warn(LapackSrcNotFoundError.__doc__)
C:\Python27\lib\distutils\dist.py:267: UserWarning: Unknown distribution option: 'define_macros'
warnings.warn(msg)
error: Unable to find vcvarsall.bat
----------------------------------------
Cleaning up...
Command C:\Python27\python.exe -c "import setuptools, tokenize;__file__='c:\\users\\zebra\\appdata\\local\\temp\\pip_build_zebra\\numpy\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\zebra\appdata\local\temp\pip-py_oa_-record\install-record.txt --single-version-externally-managed --compile failed with error code 1 in c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy
Storing debug log for failure in C:\Users\zebra\pip\pip.log
``` | 2014/06/24 | [
"https://Stackoverflow.com/questions/24380442",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2580393/"
] | Maybe you want to use the prebuilt binaries here: <http://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy>? Using pip likely won't yield any good results. The reason is that numpy doesn't compile with Visual Studio at all and needs to be built with gcc.
If you still really want to compile numpy, you need to set up a Linux machine with Vagrant and follow the official build instructions here: <https://github.com/juliantaylor/numpy-vendor> | For 64-bit systems, this problem can be resolved by the following 5 steps (taken from <http://springflex.blogspot.in/2014/02/how-to-fix-valueerror-when-trying-to.html>):
1. Download vcsetup.exe (the Visual Studio 2008 Express installer) and install it from:
go.microsoft.com/?linkid=7729279
2. Install the Microsoft Windows SDK from:
<http://www.microsoft.com/en-us/download/details.aspx?id=24826>
select web setup link under installation instructions to get an installer.
3. Run the installer file
unselect samples and documentation if they are not required
4. Create a copy of the batch file "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin\vcvars64.bat" and rename it to "vcvarsamd64.bat" in the same folder.
5. Copy the file "vcvarsamd64.bat" and paste it in the folder "C:\Program Files (x86)\Microsoft Visual Studio 9.0/VC/bin/amd64" |
24,380,442 | I'm running **`pip install numpy`** on `Windows 7 64-bit` and I'm getting **`error: Unable to find vcvarsall.bat`**
I've already installed some packages with pip, e.g. `pyzmq`, `pysolr`, `enum`, etc., so I really don't know what went wrong.
The only thing that might be different is that I've installed `.NET framework version 4.5` -> I suspect that could be the reason, because in some posts I saw it might have to do with `Visual Studio` (which I didn't install).
The full error/traceback:
```
Downloading/unpacking numpy
Running setup.py (path:c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\setup.py) egg_info for package numpy
Running from numpy source directory.
warning: no files found matching 'tools\py3tool.py'
warning: no files found matching '*' under directory 'doc\f2py'
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '*.pyo' found anywhere in distribution
warning: no previously-included files matching '*.pyd' found anywhere in distribution
Installing collected packages: numpy
Running setup.py install for numpy
non-existing path in 'numpy\\distutils': 'site.cfg'
F2PY Version 2
blas_opt_info:
blas_mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
openblas_info:
libraries not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_info:
libraries f77blas,cblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_info:
libraries blas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_src_info:
NOT AVAILABLE
NOT AVAILABLE
non-existing path in 'numpy\\lib': 'benchmarks'
lapack_opt_info:
lapack_mkl_info:
mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries ptf77blas,ptcblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_threads_info
NOT AVAILABLE
atlas_info:
libraries f77blas,cblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries f77blas,cblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries f77blas,cblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_info
NOT AVAILABLE
lapack_info:
libraries lapack not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
lapack_src_info:
NOT AVAILABLE
NOT AVAILABLE
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
build_src
building py_modules sources
building library "npymath" sources
No module named msvccompiler in numpy.distutils; trying from distutils
Running from numpy source directory.
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1521: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1530: UserWarning:
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable.
warnings.warn(BlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1533: UserWarning:
Blas (http://www.netlib.org/blas/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable.
warnings.warn(BlasSrcNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1427: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1438: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
warnings.warn(LapackNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1441: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
warnings.warn(LapackSrcNotFoundError.__doc__)
C:\Python27\lib\distutils\dist.py:267: UserWarning: Unknown distribution option: 'define_macros'
warnings.warn(msg)
error: Unable to find vcvarsall.bat
Complete output from command C:\Python27\python.exe -c "import setuptools, tokenize;__file__='c:\\users\\zebra\\appdata\\local\\temp\\pip_build_zebra\\numpy\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\zebra\appdata\local\temp\pip-py_oa_-record\install-record.txt --single-version-externally-managed --compile:
non-existing path in 'numpy\\distutils': 'site.cfg'
F2PY Version 2
blas_opt_info:
blas_mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
openblas_info:
libraries not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_info:
libraries f77blas,cblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_info:
libraries blas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_src_info:
NOT AVAILABLE
NOT AVAILABLE
non-existing path in 'numpy\\lib': 'benchmarks'
lapack_opt_info:
lapack_mkl_info:
mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries ptf77blas,ptcblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_threads_info
NOT AVAILABLE
atlas_info:
libraries f77blas,cblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries f77blas,cblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries f77blas,cblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_info
NOT AVAILABLE
lapack_info:
libraries lapack not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
lapack_src_info:
NOT AVAILABLE
NOT AVAILABLE
running install
running build
running config_cc
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
running config_fc
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
running build_src
build_src
building py_modules sources
creating build
creating build\src.win-amd64-2.7
creating build\src.win-amd64-2.7\numpy
creating build\src.win-amd64-2.7\numpy\distutils
building library "npymath" sources
No module named msvccompiler in numpy.distutils; trying from distutils
Running from numpy source directory.
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1521: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1530: UserWarning:
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable.
warnings.warn(BlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1533: UserWarning:
Blas (http://www.netlib.org/blas/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable.
warnings.warn(BlasSrcNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1427: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1438: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
warnings.warn(LapackNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1441: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
warnings.warn(LapackSrcNotFoundError.__doc__)
C:\Python27\lib\distutils\dist.py:267: UserWarning: Unknown distribution option: 'define_macros'
warnings.warn(msg)
error: Unable to find vcvarsall.bat
----------------------------------------
Cleaning up...
Command C:\Python27\python.exe -c "import setuptools, tokenize;__file__='c:\\users\\zebra\\appdata\\local\\temp\\pip_build_zebra\\numpy\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\zebra\appdata\local\temp\pip-py_oa_-record\install-record.txt --single-version-externally-managed --compile failed with error code 1 in c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy
Storing debug log for failure in C:\Users\zebra\pip\pip.log
``` | 2014/06/24 | [
"https://Stackoverflow.com/questions/24380442",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2580393/"
] | Maybe you want to use the prebuilt binaries here: <http://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy>? Using pip likely won't yield any good results. The reason is that numpy doesn't compile with Visual Studio at all and needs to be built with gcc.
If you still really want to compile numpy, you need to set up a Linux machine with Vagrant and follow the official build instructions here: <https://github.com/juliantaylor/numpy-vendor> | You need to download and install vcsetup.exe (the Visual C++ 2008 Express edition installer).
Then add the path of the folder containing the newly created vcvarsall.bat to the "PATH" environment variable.
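A minimal sketch for the current command-prompt session (the folder below is an assumption based on a default Visual Studio 2008 Express install; adjust it to wherever vcvarsall.bat actually lives on your machine):

```
REM append the folder that contains vcvarsall.bat to PATH for this session
set "PATH=%PATH%;C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC"

REM verify it can be found before re-running pip
where vcvarsall.bat
```

For a permanent change, add the same folder via Control Panel -> System -> Advanced system settings -> Environment Variables, then open a new command prompt.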
Make sure there are no stray or special characters in your PATH environment variable after adding it. |
24,380,442 | I'm running **`pip install numpy`** on `windows7 64bit` and I'm getting **`error: Unable to find vcvarsall.bat`**
I've already installed some packages with pip, e.g. `pyzmq`, `pysolr`, `enum`, etc., so I really don't know what went wrong.
The only thing that might be different is that I've installed `.NET framework version 4.5` -> I suspect that could be the reason, because in some posts I saw it might have to do with `Visual Studio` (which I didn't install).
The full error/traceback:
```
Downloading/unpacking numpy
Running setup.py (path:c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\setup.py) egg_info for package numpy
Running from numpy source directory.
warning: no files found matching 'tools\py3tool.py'
warning: no files found matching '*' under directory 'doc\f2py'
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '*.pyo' found anywhere in distribution
warning: no previously-included files matching '*.pyd' found anywhere in distribution
Installing collected packages: numpy
Running setup.py install for numpy
non-existing path in 'numpy\\distutils': 'site.cfg'
F2PY Version 2
blas_opt_info:
blas_mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
openblas_info:
libraries not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_info:
libraries f77blas,cblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_info:
libraries blas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_src_info:
NOT AVAILABLE
NOT AVAILABLE
non-existing path in 'numpy\\lib': 'benchmarks'
lapack_opt_info:
lapack_mkl_info:
mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries ptf77blas,ptcblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_threads_info
NOT AVAILABLE
atlas_info:
libraries f77blas,cblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries f77blas,cblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries f77blas,cblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_info
NOT AVAILABLE
lapack_info:
libraries lapack not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
lapack_src_info:
NOT AVAILABLE
NOT AVAILABLE
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
build_src
building py_modules sources
building library "npymath" sources
No module named msvccompiler in numpy.distutils; trying from distutils
Running from numpy source directory.
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1521: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1530: UserWarning:
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable.
warnings.warn(BlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1533: UserWarning:
Blas (http://www.netlib.org/blas/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable.
warnings.warn(BlasSrcNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1427: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1438: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
warnings.warn(LapackNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1441: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
warnings.warn(LapackSrcNotFoundError.__doc__)
C:\Python27\lib\distutils\dist.py:267: UserWarning: Unknown distribution option: 'define_macros'
warnings.warn(msg)
error: Unable to find vcvarsall.bat
Complete output from command C:\Python27\python.exe -c "import setuptools, tokenize;__file__='c:\\users\\zebra\\appdata\\local\\temp\\pip_build_zebra\\numpy\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\zebra\appdata\local\temp\pip-py_oa_-record\install-record.txt --single-version-externally-managed --compile:
non-existing path in 'numpy\\distutils': 'site.cfg'
F2PY Version 2
blas_opt_info:
blas_mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
openblas_info:
libraries not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_info:
libraries f77blas,cblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_info:
libraries blas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_src_info:
NOT AVAILABLE
NOT AVAILABLE
non-existing path in 'numpy\\lib': 'benchmarks'
lapack_opt_info:
lapack_mkl_info:
mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries ptf77blas,ptcblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_threads_info
NOT AVAILABLE
atlas_info:
libraries f77blas,cblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries f77blas,cblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries f77blas,cblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_info
NOT AVAILABLE
lapack_info:
libraries lapack not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
lapack_src_info:
NOT AVAILABLE
NOT AVAILABLE
running install
running build
running config_cc
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
running config_fc
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
running build_src
build_src
building py_modules sources
creating build
creating build\src.win-amd64-2.7
creating build\src.win-amd64-2.7\numpy
creating build\src.win-amd64-2.7\numpy\distutils
building library "npymath" sources
No module named msvccompiler in numpy.distutils; trying from distutils
Running from numpy source directory.
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1521: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1530: UserWarning:
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable.
warnings.warn(BlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1533: UserWarning:
Blas (http://www.netlib.org/blas/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable.
warnings.warn(BlasSrcNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1427: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1438: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
warnings.warn(LapackNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1441: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
warnings.warn(LapackSrcNotFoundError.__doc__)
C:\Python27\lib\distutils\dist.py:267: UserWarning: Unknown distribution option: 'define_macros'
warnings.warn(msg)
error: Unable to find vcvarsall.bat
----------------------------------------
Cleaning up...
Command C:\Python27\python.exe -c "import setuptools, tokenize;__file__='c:\\users\\zebra\\appdata\\local\\temp\\pip_build_zebra\\numpy\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\zebra\appdata\local\temp\pip-py_oa_-record\install-record.txt --single-version-externally-managed --compile failed with error code 1 in c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy
Storing debug log for failure in C:\Users\zebra\pip\pip.log
``` | 2014/06/24 | [
"https://Stackoverflow.com/questions/24380442",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2580393/"
] | Had the same problem on my 64-Bit Windows. The issue was resolved by installing the [Microsoft Visual C++ Compiler for Python 2.7](http://www.microsoft.com/en-us/download/confirmation.aspx?id=44266), which is described by Microsoft as:
>
> This package contains the compiler and set of system headers necessary for producing binary wheels for Python packages. A binary wheel of a Python package can then be installed on any Windows system without requiring access to a C compiler.
>
>
> The typical error message you will receive if you need this compiler package is **Unable to find vcvarsall.bat**
>
>
> ...
>
>
>
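After installing that compiler package, retry the build from a fresh command prompt; a minimal sketch (this assumes a reasonably recent setuptools, since, as far as I know, older versions do not look for this compiler):

```
REM open a new command prompt after installing the compiler package
pip install --upgrade setuptools
pip install numpy
```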
Works like a charm. | Try running the commands below. I faced a similar issue, but with a different module; it was resolved after upgrading these tools:
1. `python -m pip install -U pip`
2. `pip install -U setuptools`
3. `pip install -U virtualenv` |
24,380,442 | I'm running **`pip install numpy`** on `windows7 64bit` and I'm getting **`error: Unable to find vcvarsall.bat`**
I've already installed some packages with pip, e.g. `pyzmq`, `pysolr`, `enum`, etc., so I really don't know what went wrong.
The only thing that might be different is that I've installed `.NET framework version 4.5` -> I suspect that could be the reason, because in some posts I saw it might have to do with `Visual Studio` (which I didn't install).
The full error/traceback:
```
Downloading/unpacking numpy
Running setup.py (path:c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\setup.py) egg_info for package numpy
Running from numpy source directory.
warning: no files found matching 'tools\py3tool.py'
warning: no files found matching '*' under directory 'doc\f2py'
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '*.pyo' found anywhere in distribution
warning: no previously-included files matching '*.pyd' found anywhere in distribution
Installing collected packages: numpy
Running setup.py install for numpy
non-existing path in 'numpy\\distutils': 'site.cfg'
F2PY Version 2
blas_opt_info:
blas_mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
openblas_info:
libraries not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_info:
libraries f77blas,cblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_info:
libraries blas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_src_info:
NOT AVAILABLE
NOT AVAILABLE
non-existing path in 'numpy\\lib': 'benchmarks'
lapack_opt_info:
lapack_mkl_info:
mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries ptf77blas,ptcblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_threads_info
NOT AVAILABLE
atlas_info:
libraries f77blas,cblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries f77blas,cblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries f77blas,cblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_info
NOT AVAILABLE
lapack_info:
libraries lapack not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
lapack_src_info:
NOT AVAILABLE
NOT AVAILABLE
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
build_src
building py_modules sources
building library "npymath" sources
No module named msvccompiler in numpy.distutils; trying from distutils
Running from numpy source directory.
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1521: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1530: UserWarning:
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable.
warnings.warn(BlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1533: UserWarning:
Blas (http://www.netlib.org/blas/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable.
warnings.warn(BlasSrcNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1427: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1438: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
warnings.warn(LapackNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1441: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
warnings.warn(LapackSrcNotFoundError.__doc__)
C:\Python27\lib\distutils\dist.py:267: UserWarning: Unknown distribution option: 'define_macros'
warnings.warn(msg)
error: Unable to find vcvarsall.bat
Complete output from command C:\Python27\python.exe -c "import setuptools, tokenize;__file__='c:\\users\\zebra\\appdata\\local\\temp\\pip_build_zebra\\numpy\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\zebra\appdata\local\temp\pip-py_oa_-record\install-record.txt --single-version-externally-managed --compile:
non-existing path in 'numpy\\distutils': 'site.cfg'
F2PY Version 2
blas_opt_info:
blas_mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
openblas_info:
libraries not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_info:
libraries f77blas,cblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_info:
libraries blas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_src_info:
NOT AVAILABLE
NOT AVAILABLE
non-existing path in 'numpy\\lib': 'benchmarks'
lapack_opt_info:
lapack_mkl_info:
mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries ptf77blas,ptcblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_threads_info
NOT AVAILABLE
atlas_info:
libraries f77blas,cblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries f77blas,cblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries f77blas,cblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_info
NOT AVAILABLE
lapack_info:
libraries lapack not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
lapack_src_info:
NOT AVAILABLE
NOT AVAILABLE
running install
running build
running config_cc
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
running config_fc
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
running build_src
build_src
building py_modules sources
creating build
creating build\src.win-amd64-2.7
creating build\src.win-amd64-2.7\numpy
creating build\src.win-amd64-2.7\numpy\distutils
building library "npymath" sources
No module named msvccompiler in numpy.distutils; trying from distutils
Running from numpy source directory.
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1521: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1530: UserWarning:
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable.
warnings.warn(BlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1533: UserWarning:
Blas (http://www.netlib.org/blas/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable.
warnings.warn(BlasSrcNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1427: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1438: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
warnings.warn(LapackNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1441: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
warnings.warn(LapackSrcNotFoundError.__doc__)
C:\Python27\lib\distutils\dist.py:267: UserWarning: Unknown distribution option: 'define_macros'
warnings.warn(msg)
error: Unable to find vcvarsall.bat
----------------------------------------
Cleaning up...
Command C:\Python27\python.exe -c "import setuptools, tokenize;__file__='c:\\users\\zebra\\appdata\\local\\temp\\pip_build_zebra\\numpy\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\zebra\appdata\local\temp\pip-py_oa_-record\install-record.txt --single-version-externally-managed --compile failed with error code 1 in c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy
Storing debug log for failure in C:\Users\zebra\pip\pip.log
``` | 2014/06/24 | [
"https://Stackoverflow.com/questions/24380442",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2580393/"
] | Had the same problem on my 64-Bit Windows. The issue was resolved by installing the [Microsoft Visual C++ Compiler for Python 2.7](http://www.microsoft.com/en-us/download/confirmation.aspx?id=44266), which is described by Microsoft as:
>
> This package contains the compiler and set of system headers necessary for producing binary wheels for Python packages. A binary wheel of a Python package can then be installed on any Windows system without requiring access to a C compiler.
>
>
> The typical error message you will receive if you need this compiler package is **Unable to find vcvarsall.bat**
>
>
> ...
>
>
>
Works like a charm. | A procedure which works on my Windows 7 (64-bit) with Python 2.7 is to download the binaries of [numpy](http://sourceforge.net/projects/numpy/files/NumPy/) directly from Sourceforge, e.g. the superpack installer numpy-1.9.2-win32-superpack-python2.7.exe.
Then extract the downloaded EXE, for example with 7z; it contains three installers: numpy-1.9.2-nosse.exe, numpy-1.9.2-sse2.exe, numpy-1.9.2-sse3.exe.
Choose the one appropriate for your CPU; on most reasonably modern processors the SSE3 build will be fine.
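A minimal sketch of the extraction step from the command line, assuming 7z.exe is on your PATH (the filenames are the ones from the superpack above):

```
REM unpack the superpack installer into the current folder
7z x numpy-1.9.2-win32-superpack-python2.7.exe

REM then run the variant that matches your CPU, e.g. the SSE3 build
numpy-1.9.2-sse3.exe
```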
SciPy works with the same procedure: [SF-link](http://sourceforge.net/projects/scipy/)
If binaries are OK for you, the only disadvantage is that you cannot update the packages via pip. |
24,380,442 | I'm running **`pip install numpy`** on `windows7 64bit` and I'm getting **`error: Unable to find vcvarsall.bat`**
I've already installed some packages with pip, e.g. `pyzmq`, `pysolr`, `enum`, etc., so I really don't know what went wrong.
The only thing that might be different is that I've installed `.NET framework version 4.5` -> I suspect that could be the reason, because in some posts I saw it might have to do with `Visual Studio` (which I didn't install).
The full error/traceback:
```
Downloading/unpacking numpy
Running setup.py (path:c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\setup.py) egg_info for package numpy
Running from numpy source directory.
warning: no files found matching 'tools\py3tool.py'
warning: no files found matching '*' under directory 'doc\f2py'
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '*.pyo' found anywhere in distribution
warning: no previously-included files matching '*.pyd' found anywhere in distribution
Installing collected packages: numpy
Running setup.py install for numpy
non-existing path in 'numpy\\distutils': 'site.cfg'
F2PY Version 2
blas_opt_info:
blas_mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
openblas_info:
libraries not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_info:
libraries f77blas,cblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_info:
libraries blas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_src_info:
NOT AVAILABLE
NOT AVAILABLE
non-existing path in 'numpy\\lib': 'benchmarks'
lapack_opt_info:
lapack_mkl_info:
mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries ptf77blas,ptcblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_threads_info
NOT AVAILABLE
atlas_info:
libraries f77blas,cblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries f77blas,cblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries f77blas,cblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_info
NOT AVAILABLE
lapack_info:
libraries lapack not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
lapack_src_info:
NOT AVAILABLE
NOT AVAILABLE
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
build_src
building py_modules sources
building library "npymath" sources
No module named msvccompiler in numpy.distutils; trying from distutils
Running from numpy source directory.
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1521: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1530: UserWarning:
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable.
warnings.warn(BlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1533: UserWarning:
Blas (http://www.netlib.org/blas/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable.
warnings.warn(BlasSrcNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1427: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1438: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
warnings.warn(LapackNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1441: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
warnings.warn(LapackSrcNotFoundError.__doc__)
C:\Python27\lib\distutils\dist.py:267: UserWarning: Unknown distribution option: 'define_macros'
warnings.warn(msg)
error: Unable to find vcvarsall.bat
Complete output from command C:\Python27\python.exe -c "import setuptools, tokenize;__file__='c:\\users\\zebra\\appdata\\local\\temp\\pip_build_zebra\\numpy\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\zebra\appdata\local\temp\pip-py_oa_-record\install-record.txt --single-version-externally-managed --compile:
non-existing path in 'numpy\\distutils': 'site.cfg'
F2PY Version 2
blas_opt_info:
blas_mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
openblas_info:
libraries not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_info:
libraries f77blas,cblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_info:
libraries blas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_src_info:
NOT AVAILABLE
NOT AVAILABLE
non-existing path in 'numpy\\lib': 'benchmarks'
lapack_opt_info:
lapack_mkl_info:
mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries ptf77blas,ptcblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_threads_info
NOT AVAILABLE
atlas_info:
libraries f77blas,cblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries f77blas,cblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries f77blas,cblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_info
NOT AVAILABLE
lapack_info:
libraries lapack not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
lapack_src_info:
NOT AVAILABLE
NOT AVAILABLE
running install
running build
running config_cc
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
running config_fc
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
running build_src
build_src
building py_modules sources
creating build
creating build\src.win-amd64-2.7
creating build\src.win-amd64-2.7\numpy
creating build\src.win-amd64-2.7\numpy\distutils
building library "npymath" sources
No module named msvccompiler in numpy.distutils; trying from distutils
Running from numpy source directory.
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1521: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1530: UserWarning:
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable.
warnings.warn(BlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1533: UserWarning:
Blas (http://www.netlib.org/blas/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable.
warnings.warn(BlasSrcNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1427: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1438: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
warnings.warn(LapackNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1441: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
warnings.warn(LapackSrcNotFoundError.__doc__)
C:\Python27\lib\distutils\dist.py:267: UserWarning: Unknown distribution option: 'define_macros'
warnings.warn(msg)
error: Unable to find vcvarsall.bat
----------------------------------------
Cleaning up...
Command C:\Python27\python.exe -c "import setuptools, tokenize;__file__='c:\\users\\zebra\\appdata\\local\\temp\\pip_build_zebra\\numpy\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\zebra\appdata\local\temp\pip-py_oa_-record\install-record.txt --single-version-externally-managed --compile failed with error code 1 in c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy
Storing debug log for failure in C:\Users\zebra\pip\pip.log
``` | 2014/06/24 | [
"https://Stackoverflow.com/questions/24380442",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2580393/"
] | Had the same problem on my 64-Bit Windows. The issue was resolved by installing the [Microsoft Visual C++ Compiler for Python 2.7](http://www.microsoft.com/en-us/download/confirmation.aspx?id=44266), which is described by Microsoft as:
>
> This package contains the compiler and set of system headers necessary for producing binary wheels for Python packages. A binary wheel of a Python package can then be installed on any Windows system without requiring access to a C compiler.
>
>
> The typical error message you will receive if you need this compiler package is **Unable to find vcvarsall.bat**
>
>
> ...
>
>
>
Works like a charm. | For 64-bit systems, this problem can be resolved by the following 5 steps (taken from <http://springflex.blogspot.in/2014/02/how-to-fix-valueerror-when-trying-to.html>):
1. Download and install vcsetup.exe (the Visual Studio 2008 Express installer) from:
go.microsoft.com/?linkid=7729279
2. Install the Microsoft Windows SDK from:
<http://www.microsoft.com/en-us/download/details.aspx?id=24826>
Select the web setup link under the installation instructions to get an installer.
3. Run the installer file; deselect samples and documentation if they are not required.
4. Create a copy of the batch file "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin\vcvars64.bat" and rename it to "vcvarsamd64.bat" in the same folder.
5. Copy the file "vcvarsamd64.bat" and paste it into the folder "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin\amd64" |
24,380,442 | I'm running **`pip install numpy`** on `windows7 64bit` and I'm getting **`error: Unable to find vcvarsall.bat`**
I've already installed some packages with pip, e.g. `pyzmq`, `pysolr`, `enum`, etc., so I really don't know what went wrong.
The only thing that might be different is that I've installed `.NET framework version 4.5` -> I suspect that could be the reason, because in some posts I saw it might have to do with `Visual Studio` (which I didn't install).
The full error/traceback:
```
Downloading/unpacking numpy
Running setup.py (path:c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\setup.py) egg_info for package numpy
Running from numpy source directory.
warning: no files found matching 'tools\py3tool.py'
warning: no files found matching '*' under directory 'doc\f2py'
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '*.pyo' found anywhere in distribution
warning: no previously-included files matching '*.pyd' found anywhere in distribution
Installing collected packages: numpy
Running setup.py install for numpy
non-existing path in 'numpy\\distutils': 'site.cfg'
F2PY Version 2
blas_opt_info:
blas_mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
openblas_info:
libraries not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_info:
libraries f77blas,cblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_info:
libraries blas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_src_info:
NOT AVAILABLE
NOT AVAILABLE
non-existing path in 'numpy\\lib': 'benchmarks'
lapack_opt_info:
lapack_mkl_info:
mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries ptf77blas,ptcblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_threads_info
NOT AVAILABLE
atlas_info:
libraries f77blas,cblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries f77blas,cblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries f77blas,cblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_info
NOT AVAILABLE
lapack_info:
libraries lapack not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
lapack_src_info:
NOT AVAILABLE
NOT AVAILABLE
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
build_src
building py_modules sources
building library "npymath" sources
No module named msvccompiler in numpy.distutils; trying from distutils
Running from numpy source directory.
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1521: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1530: UserWarning:
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable.
warnings.warn(BlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1533: UserWarning:
Blas (http://www.netlib.org/blas/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable.
warnings.warn(BlasSrcNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1427: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1438: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
warnings.warn(LapackNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1441: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
warnings.warn(LapackSrcNotFoundError.__doc__)
C:\Python27\lib\distutils\dist.py:267: UserWarning: Unknown distribution option: 'define_macros'
warnings.warn(msg)
error: Unable to find vcvarsall.bat
Complete output from command C:\Python27\python.exe -c "import setuptools, tokenize;__file__='c:\\users\\zebra\\appdata\\local\\temp\\pip_build_zebra\\numpy\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\zebra\appdata\local\temp\pip-py_oa_-record\install-record.txt --single-version-externally-managed --compile:
non-existing path in 'numpy\\distutils': 'site.cfg'
F2PY Version 2
blas_opt_info:
blas_mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
openblas_info:
libraries not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_info:
libraries f77blas,cblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_info:
libraries blas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_src_info:
NOT AVAILABLE
NOT AVAILABLE
non-existing path in 'numpy\\lib': 'benchmarks'
lapack_opt_info:
lapack_mkl_info:
mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries ptf77blas,ptcblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_threads_info
NOT AVAILABLE
atlas_info:
libraries f77blas,cblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries f77blas,cblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries f77blas,cblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_info
NOT AVAILABLE
lapack_info:
libraries lapack not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
lapack_src_info:
NOT AVAILABLE
NOT AVAILABLE
running install
running build
running config_cc
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
running config_fc
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
running build_src
build_src
building py_modules sources
creating build
creating build\src.win-amd64-2.7
creating build\src.win-amd64-2.7\numpy
creating build\src.win-amd64-2.7\numpy\distutils
building library "npymath" sources
No module named msvccompiler in numpy.distutils; trying from distutils
Running from numpy source directory.
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1521: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1530: UserWarning:
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable.
warnings.warn(BlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1533: UserWarning:
Blas (http://www.netlib.org/blas/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable.
warnings.warn(BlasSrcNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1427: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1438: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
warnings.warn(LapackNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1441: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
warnings.warn(LapackSrcNotFoundError.__doc__)
C:\Python27\lib\distutils\dist.py:267: UserWarning: Unknown distribution option: 'define_macros'
warnings.warn(msg)
error: Unable to find vcvarsall.bat
----------------------------------------
Cleaning up...
Command C:\Python27\python.exe -c "import setuptools, tokenize;__file__='c:\\users\\zebra\\appdata\\local\\temp\\pip_build_zebra\\numpy\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\zebra\appdata\local\temp\pip-py_oa_-record\install-record.txt --single-version-externally-managed --compile failed with error code 1 in c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy
Storing debug log for failure in C:\Users\zebra\pip\pip.log
``` | 2014/06/24 | [
"https://Stackoverflow.com/questions/24380442",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2580393/"
] | You need to download and install vcsetup.exe (Visual C++ 2008 Express Edition).
Then add the path of the newly created vcvarsall.bat file to the "PATH" environment variable.
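As a quick sanity check, a minimal sketch using only the standard library (run it with the same Python 2.7 interpreter that pip uses):
```
# Hedged check: ask distutils where (or whether) it can find vcvarsall.bat
from distutils import msvc9compiler
print(msvc9compiler.find_vcvarsall(9.0))  # prints a path if VC++ 2008 is visible, otherwise None
```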
Make sure there are no special symbols in your PATH environment variable after adding it. | Try running the commands below. I faced a similar issue, but for a different module, and it was resolved after running these commands.
python -m pip install -U pip
============================
pip install -U setuptools
=========================
pip install -U virtualenv
========================= |
24,380,442 | I'm running **`pip install numpy`** on `windows7 64bit` and i'm getting **`error: Unable to find vcvarsall.bat`**
I've already installed some packages with pip, e.g. `pyzmq`,`pysolr`,`enum`,etc., so I really don't know what went wrong.
The only thing that might be different is that I've installed `.NET framework version 4.5` -> I suspect that could be the reason, because in some posts I saw it might have to do with `Visual Studio` (which I didn't install)
The full error/traceback:
```
Downloading/unpacking numpy
Running setup.py (path:c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\setup.py) egg_info for package numpy
Running from numpy source directory.
warning: no files found matching 'tools\py3tool.py'
warning: no files found matching '*' under directory 'doc\f2py'
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '*.pyo' found anywhere in distribution
warning: no previously-included files matching '*.pyd' found anywhere in distribution
Installing collected packages: numpy
Running setup.py install for numpy
non-existing path in 'numpy\\distutils': 'site.cfg'
F2PY Version 2
blas_opt_info:
blas_mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
openblas_info:
libraries not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_info:
libraries f77blas,cblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_info:
libraries blas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_src_info:
NOT AVAILABLE
NOT AVAILABLE
non-existing path in 'numpy\\lib': 'benchmarks'
lapack_opt_info:
lapack_mkl_info:
mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries ptf77blas,ptcblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_threads_info
NOT AVAILABLE
atlas_info:
libraries f77blas,cblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries f77blas,cblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries f77blas,cblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_info
NOT AVAILABLE
lapack_info:
libraries lapack not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
lapack_src_info:
NOT AVAILABLE
NOT AVAILABLE
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
build_src
building py_modules sources
building library "npymath" sources
No module named msvccompiler in numpy.distutils; trying from distutils
Running from numpy source directory.
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1521: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1530: UserWarning:
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable.
warnings.warn(BlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1533: UserWarning:
Blas (http://www.netlib.org/blas/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable.
warnings.warn(BlasSrcNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1427: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1438: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
warnings.warn(LapackNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1441: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
warnings.warn(LapackSrcNotFoundError.__doc__)
C:\Python27\lib\distutils\dist.py:267: UserWarning: Unknown distribution option: 'define_macros'
warnings.warn(msg)
error: Unable to find vcvarsall.bat
Complete output from command C:\Python27\python.exe -c "import setuptools, tokenize;__file__='c:\\users\\zebra\\appdata\\local\\temp\\pip_build_zebra\\numpy\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\zebra\appdata\local\temp\pip-py_oa_-record\install-record.txt --single-version-externally-managed --compile:
non-existing path in 'numpy\\distutils': 'site.cfg'
F2PY Version 2
blas_opt_info:
blas_mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
openblas_info:
libraries not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
atlas_blas_info:
libraries f77blas,cblas,atlas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_info:
libraries blas not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
blas_src_info:
NOT AVAILABLE
NOT AVAILABLE
non-existing path in 'numpy\\lib': 'benchmarks'
lapack_opt_info:
lapack_mkl_info:
mkl_info:
libraries mkl,vml,guide not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries ptf77blas,ptcblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries ptf77blas,ptcblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_threads_info
NOT AVAILABLE
atlas_info:
libraries f77blas,cblas,atlas not found in C:\Python27\lib
libraries lapack_atlas not found in C:\Python27\lib
libraries f77blas,cblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries f77blas,cblas,atlas not found in C:\Python27\libs
libraries lapack_atlas not found in C:\Python27\libs
numpy.distutils.system_info.atlas_info
NOT AVAILABLE
lapack_info:
libraries lapack not found in ['C:\\Python27\\lib', 'C:\\', 'C:\\Python27\\libs']
NOT AVAILABLE
lapack_src_info:
NOT AVAILABLE
NOT AVAILABLE
running install
running build
running config_cc
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
running config_fc
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
running build_src
build_src
building py_modules sources
creating build
creating build\src.win-amd64-2.7
creating build\src.win-amd64-2.7\numpy
creating build\src.win-amd64-2.7\numpy\distutils
building library "npymath" sources
No module named msvccompiler in numpy.distutils; trying from distutils
Running from numpy source directory.
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1521: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1530: UserWarning:
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable.
warnings.warn(BlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1533: UserWarning:
Blas (http://www.netlib.org/blas/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable.
warnings.warn(BlasSrcNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1427: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1438: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
warnings.warn(LapackNotFoundError.__doc__)
c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy\numpy\distutils\system_info.py:1441: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
warnings.warn(LapackSrcNotFoundError.__doc__)
C:\Python27\lib\distutils\dist.py:267: UserWarning: Unknown distribution option: 'define_macros'
warnings.warn(msg)
error: Unable to find vcvarsall.bat
----------------------------------------
Cleaning up...
Command C:\Python27\python.exe -c "import setuptools, tokenize;__file__='c:\\users\\zebra\\appdata\\local\\temp\\pip_build_zebra\\numpy\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\zebra\appdata\local\temp\pip-py_oa_-record\install-record.txt --single-version-externally-managed --compile failed with error code 1 in c:\users\zebra\appdata\local\temp\pip_build_zebra\numpy
Storing debug log for failure in C:\Users\zebra\pip\pip.log
``` | 2014/06/24 | [
"https://Stackoverflow.com/questions/24380442",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2580393/"
] | Maybe you want to use the prebuilt binaries here: <http://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy>? Using pip likely won't yield any good results. The reason is that numpy doesn't compile with Visual Studio at all and needs to be built with gcc.
If you still really want to compile numpy, you need to set up a Linux machine with Vagrant and follow the official build instructions here: <https://github.com/juliantaylor/numpy-vendor> | Try running the commands below. I faced a similar issue, but for a different module, and it was resolved after running these commands.
python -m pip install -U pip
============================
pip install -U setuptools
=========================
pip install -U virtualenv
========================= |
44,349,031 | I have been reading up about the `random.sample()` function in the `random` module and have not seen anything that solves my problem.
I know that using `random.sample(range(1,100),5)` would give me 5 unique samples from the 'population'...
I would like to get a random number in `range(0,999)`. I could use `random.sample(range(0,999),1)` but why then am I thinking about using `random.sample()` ?
I need the random number in that range to not match any number in a separate array (Say, `[443,122,738]`)
Is there a relatively easy way I could go about doing this?
Also, I am pretty new to python and am definitely a beginner -- If you would like me to update the question with any information I may have missed then I will.
EDIT:
Accidentally said `random.range()` once. Whoops. | 2017/06/03 | [
"https://Stackoverflow.com/questions/44349031",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8108610/"
] | One way to accomplish this is to draw a candidate number, check it against the excluded values, and append it to a result list only if it is not excluded.
```
import random
non_match = [443, 122, 738]
match = []
while len(match) < 6:  # 6 is how many numbers you want
    x = random.sample(range(0, 999), 1)[0]  # sample returns a list, so take its single element
    if x not in non_match:
        match.append(x)
``` | There are two main ways:
```
import random
def method1(lower, upper, exclude):
    choices = set(range(lower, upper + 1)) - set(exclude)
    return random.choice(list(choices))

def method2(lower, upper, exclude):
    exclude = set(exclude)
    while True:
        val = random.randint(lower, upper)
        if val not in exclude:
            return val
```
Example usage:
```
for method in method1, method2:
    for i in range(10):
        print(method(1, 5, [2, 4]))
    print('----')
```
Output:
```
1
1
5
3
1
1
3
5
5
1
----
5
3
5
1
5
3
5
3
1
3
----
```
The first is better for a smaller range or a larger list `exclude` (so the `choices` list won't be too big), the second is better for the opposite (so it doesn't loop too many times looking for an appropriate option). |
64,766,593 | TwoSum, needs to return indices of the integers that add up to the target: Input: nums = [2,7,11,15], target = 9 Output: [0,1] Output: Because nums[0] + nums[1] == 9, we return [0, 1].
I'm new to JavaScript and I don't understand why this returns undefined. After a few tests, I noticed it doesn't even enter the 2nd for loop; however, when I wrote this in Python it works perfectly.
```js
var twoSum = function(nums, target) {
for (let i = 0; i < nums.length; i++) {
if (nums[i] >= target) {
continue;
}
for (let j = i; j < nums.legth; j++) {
if (nums[j] >= target) {
continue;
}
if (nums[i] + nums[j] === target) {
const ans = [i, j]
return ans;
}
}
}
};
console.log(twoSum([2,7,11,15],9));
```
any help would be appreciated | 2020/11/10 | [
"https://Stackoverflow.com/questions/64766593",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11415871/"
] | You made a typo. Change `legth` to `length` in the second loop.
```js
var twoSum = function (nums, target) {
for (let i = 0; i < nums.length; i++) {
if (nums[i] >= target) {
continue;
}
for (let j = i; j < nums.length; j++) {
if (nums[j] >= target) {
continue;
}
if (nums[i] + nums[j] === target) {
const ans = [i, j];
return ans;
}
}
}
};
console.log(twoSum([2, 7, 11, 15], 9));
``` | I see this is an old question but still...
You have nested loops, and that can be very confusing. One option worth exploring is to declare your `ans` variable in the function's main scope, because otherwise the function returns nothing when no pair is found;
But I'd much rather advise you to use `Array.reduce()`, which is common practice these days.
Here is my solution in [leetcode](https://leetcode.com/problems/two-sum/discuss/1605794/js-solution-with-arrayreduce):
```js
var twoSum = function(nums, target) {
let acc = nums.reduce((acc, curr, currIndex, array) => {
array.forEach((number, index) => {
if(number + curr === target && index !== currIndex) {
acc = [currIndex, index];
}
})
return acc;
}, []);
return acc.sort();
};
``` |
8,440,446 | I'm trying to send an email in python. Here is my code.
```
import smtplib
if __name__ == '__main__':
SERVER = "localhost"
FROM = "sender@example.com"
TO = ["wmh1993@gmail.com"] # must be a list
SUBJECT = "Hello!"
TEXT = "This message was sent with Python's smtplib."
# Prepare actual message
message = """\
From: %s
To: %s
Subject: %s
%s
""" % (FROM, ", ".join(TO), SUBJECT, TEXT)
# Send the mail
server = smtplib.SMTP(SERVER)
server.sendmail(FROM, TO[0], message)
server.quit()
print "Message sent!"
```
This runs without error, but no email is sent to `wmh1993@gmail.com`.
**Questions**
One thing I don't understand about this code --- what restrictions do I have when setting the `FROM` field?
Do I somehow have to say that it was from my computer?
What is in place to prevent me from spoofing someone else's email?
Or am I at liberty to do that? | 2011/12/09 | [
"https://Stackoverflow.com/questions/8440446",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/895603/"
] | >
> This runs without error, but no email is sent to wmh1993@gmail.com.
>
>
>
This usually means, the message was transferred to your MTA (mailserver) on 'localhost', but this server could not relay it to gmail. it probably tried to send a bounce message to "sender@example.com" and that failed as well. or it sent the message successfully but it landed in gmails spam folder (the message could trigger spam rules since it is missing a date header)
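For example, a minimal sketch of building the message with explicit headers, including Date, so it is less likely to trip spam rules (it reuses the values from the question):
```
from email.mime.text import MIMEText
from email.utils import formatdate

msg = MIMEText("This message was sent with Python's smtplib.")
msg['From'] = "sender@example.com"
msg['To'] = "wmh1993@gmail.com"
msg['Subject'] = "Hello!"
msg['Date'] = formatdate(localtime=True)   # an explicit Date header
# then: server.sendmail(FROM, TO, msg.as_string())
```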
>
> One thing I don't understand about this code --- what restrictions do I have when setting the FROM field?
>
>
>
it must be a syntactically valid email address
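a quick way to inspect how an address parses, using only the standard library:
```
>>> from email.utils import parseaddr
>>> parseaddr("sender@example.com")
('', 'sender@example.com')
```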
>
> Do I somehow have to say that it was from my computer?
>
>
>
no. but that could be the problem why it was not delivered. is your computer on a home/dynamic/dial-up IP? gmail (and many many many other providers) don't accept mail from such IPs. the HELO of your mailserver might be wrong, DNS settings might be incorrect etc. you need to check the server logs. you probably have to configure your local mailserver to relay the message via a smarthost instead of trying to contact the target server directly.
>
> What is in place to prevent me from spoofing someone else's email?
>
>
>
not much, that's why we have so much spam from forged addresses. things like SPF/DKIM can help a bit, but the SMTP protocol itself doesn't offer protection against spoofing.
>
> Or am I at liberty to do that?
>
>
>
technically yes. | Well, since you don't specify exactly what kind of email server you are using and its settings, there are several things that might be wrong here.
First of all, you need to specify the HOST and the PORT of your server and connect to it.
Example:
```
HOST = "smtp.gmail.com"
PORT = "587"
SERVER = smtplib.SMTP()
SERVER.connect(HOST, PORT)
```
Then you need to specify a user and a password for this host.
Example:
```
USER = "myuser@gmail.com"
PASSWD = "123456"
```
Some servers require the TLS protocol.
Example:
```
SERVER.starttls()
```
Then you need to login.
Example:
```
SERVER.login(USER,PASSWD)
```
Only then you are able to send the email with your `sendmail`.
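Putting the fragments above together, a minimal end-to-end sketch (the host, port and credentials are placeholders; FROM, TO and message are the values from the question):
```
import smtplib

HOST = "smtp.gmail.com"    # placeholder host
PORT = 587                 # placeholder port
USER = "myuser@gmail.com"  # placeholder credentials
PASSWD = "123456"

SERVER = smtplib.SMTP()
SERVER.connect(HOST, PORT)
SERVER.starttls()
SERVER.login(USER, PASSWD)
SERVER.sendmail(FROM, TO, message)
SERVER.quit()
```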
This example works pretty well in most common servers.
If you are using, as it seems, your own server, there aren't many changes you need to apply. But you need to know what kind of requirements this server has. |
18,826,864 | I am trying to cut off a few words from the scraped data.
```
3 Bedroom, Residential Apartment in Velachery
```
There are many rows of data like this. I am trying to remove the word 'Bedroom' from the string. I am using beautiful soup and python to scrape the webpage, and here I am using this
```
for eachproperty in properties:
print eachproperty.string[2:]
```
I know what the above code will do. But I cannot figure out how to just remove the "Bedroom" which is between 3 and ,Residen.... | 2013/09/16 | [
"https://Stackoverflow.com/questions/18826864",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1525997/"
] | ```
>>> import re
>>> strs = "3 Bedroom, Residential Apartment in Velachery"
>>> re.sub(r'\s*Bedroom\s*', '', strs)
'3, Residential Apartment in Velachery'
```
or:
```
>>> strs.replace(' Bedroom', '')
'3, Residential Apartment in Velachery'
```
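Applied to the scraping loop from the question, a minimal sketch (assuming `properties` holds the scraped elements):
```
import re

for eachproperty in properties:
    cleaned = re.sub(r'\s*Bedroom\s*', '', eachproperty.string)  # assign the result
    print cleaned
```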
Note that strings are immutable, so you need to assign the result of `re.sub` and `str.replace` to a variable. | What you need is the `replace` method:
```
line = "3 Bedroom, Residential Apartment in Velachery"
line = line.replace("Bedroom", "")
# For multiple lines use a for loop
for line in lines:
line = line.replace("Bedroom", "")
``` |
18,826,864 | I am trying to cut off a few words from the scraped data.
```
3 Bedroom, Residential Apartment in Velachery
```
There are many rows of data like this. I am trying to remove the word 'Bedroom' from the string. I am using beautiful soup and python to scrape the webpage, and here I am using this
```
for eachproperty in properties:
print eachproperty.string[2:]
```
I know what the above code will do. But I cannot figure out how to just remove the "Bedroom" which is between 3 and ,Residen.... | 2013/09/16 | [
"https://Stackoverflow.com/questions/18826864",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1525997/"
] | ```
>>> import re
>>> strs = "3 Bedroom, Residential Apartment in Velachery"
>>> re.sub(r'\s*Bedroom\s*', '', strs)
'3, Residential Apartment in Velachery'
```
or:
```
>>> strs.replace(' Bedroom', '')
'3, Residential Apartment in Velachery'
```
Note that strings are immutable, so you need to assign the result of `re.sub` and `str.replace` to a variable. | A quick answer is
```
k = input_string.split()
if "Bedroom" in k:
k.remove("Bedroom")
answer = ' '.join(k)
```
This won't handle punctuation like in your question. To do that you need
```
rem = "Bedroom"
answer = ""
for i in range(len(input_string) - len(rem) + 1):
    if input_string[i:i+len(rem)] == rem:
        answer = input_string[:i] + input_string[i+len(rem):]  # slice off the matched word
        break
``` |
56,757,737 | When I try to run the code, I am getting an error.
The error is
>
>
> ```
> Traceback (most recent call last):
> File "A2.py", line 2, in <module>
> from easysnmp import Session
> ImportError: No module named 'easysnmp'
>
> ```
>
>
Note: I am getting the above error even though I have installed the easysnmp module.
The code is
```
#!/usr/bin/python
from easysnmp import Session
import argparse
import time
parser = argparse.ArgumentParser(description='probe')
parser.add_argument('cred',help='credentials')
parser.add_argument('freq',type=float,help='enter frequency')
parser.add_argument('samples',type=int,help='enter samples')
parser.add_argument('oid',nargs='+',help='enter oid')
args=parser.parse_args()
t=1/args.freq
s=args.samples
cred1=args.cred
ip,port,comm=cred1.split(":")
count=0
session=Session(hostname=ip,remote_port=port,community=comm, version=2,timeout=2,retries=1)
args.oid.insert(0, '1.3.6.1.2.1.1.3.0')
old=[]
out1=[]
t4=0
while (count!=s):
t1=time.time()
new = session.get(args.oid)
t2=time.time()
if len(new)==len(old):
newtime=float(new[0].value)/100
oldtime=float(old[0].value)/100
if args.freq > 1:
tdiff = newtime-oldtime
if args.freq <= 1:
tdiff1 = t1-t4
if tdiff!=0:
tdiff = int(tdiff1)
else:
tdiff = int(t)
for i in range(1,len(args.oid)):
if new[i].value!="NOSUCHINSTANCE" and old[i].value!="NOSUCHINSTANCE":
a=int(new[i].value)
b=int(old[i].value)
if a>=b:
out=(a-b)/tdiff
out1.append(out)
if a<b and new[i].snmp_type=="COUNTER64":
out=((2**64+a)-b)/tdiff
out1.append(out)
if a<b and new[i].snmp_type=="COUNTER32":
out=((2**32+a)-b)/tdiff
out1.append(out)
else:
print t1, "|"
count=count+1
if len(out1)!=0:
sar = [str(get) for get in out1]
print int(t1) ,'|', ("|" . join(sar))
old = new[:]
t4=t1
del out1[:]
t3=time.time()
if t-t3+t1>0:
time.sleep(t-t3+t1)
else:
time.sleep(0.0)
``` | 2019/06/25 | [
"https://Stackoverflow.com/questions/56757737",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11685289/"
] | Try putting `import easysnmp` at the top of your code; it solved the problem for me in a similar situation! | My wild guess is that you installed the module for Python 3 and used Python 2, or the other way around.
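A quick diagnostic sketch to see which interpreter and search path your script actually uses (the module must be installed for that same interpreter):
```
import sys
print(sys.executable)  # the Python binary running the script
print(sys.version)
print(sys.path)        # the directories searched for easysnmp
```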
Try
```
pip install easysnmp
```
or
```
pip3 install easysnmp
``` |
71,981,033 | So, I was messing around with operations in Pandas, and I reached conditional operations. For reference, I have two dataframes like this:
df\_1:
| Time | Coupons\_Sold |
| --- | --- |
| First\_Quarter-2021 | 1041 |
| Second\_Quarter-2021 | 2145 |
| Third\_Quarter-2021 | 1809 |
| Fourth\_Quarter-2021 | 1104 |
df\_2:
| Time | Coupons\_Sold |
| --- | --- |
| First\_Quarter-2022 | 861 |
| Second\_Quarter-2022 | 1024 |
| Third\_Quarter-2021 | 902 |
| Fourth\_Quarter-2021 | 1011 |
I wanted to do a conditional subtraction on these two datasets, such that the new column contains the absolute values from subtraction of the individual elements of the two columns, if and only if the time periods match.
I want something like:
| Time | Coupons\_Sold |
| --- | --- |
| Third\_Quarter-2021 | 907 |
| Fourth\_Quarter-2021 | 93 |
because there are mappings for third and fourth quarters in both dataframes.
I tried this piece of code:
```
new_column = df_1['Coupons_Sold'] - df_2['Coupons_Sold']
```
But, this just gave me:
| center |
| --- |
| 180 |
| 1121 |
| 907 |
| 93 |
Then I tried a few conditional statements like we do in python:
`if df_1['Time'] == df_2['Time']:`
```
df_1['Coupons_Sold'] - df_2['Coupons_Sold']
```
I tried the above code with `in` keyword, but got error.
but these conditional statements just gave me errors. Is there any way to do these kind of operations(py 2.7 or py3.7, both are okay)?
Thanks in advance.
If you need any more info, please ask and I will add the same. | 2022/04/23 | [
"https://Stackoverflow.com/questions/71981033",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7190421/"
] | You could use `merge` + `diff` for the specific columns:
```
cols = ['Time','Coupons_Sold']
out = df1[cols].merge(df2[cols], on='Time', suffixes=('_','')).set_index('Time').diff(axis=1).abs().dropna(axis=1).reset_index()
```
Output:
```
Time Coupons_Sold
2 Third_Quarter-2021 907
3 Fourth_Quarter-2021 93
``` | You may try:
```
tset = set(df1['Time']).intersection(set(df2['Time']))
df3 = df1.loc[df1['Time'].isin(tset)].merge(df2.loc[df2['Time'].isin(tset)], on='Time')
df3['Coupons_Sold']=df3['Coupons_Sold_x']-df3['Coupons_Sold_y']
df3.drop(['Coupons_Sold_x','Coupons_Sold_y'], axis=1,inplace=True)
```
Output (df3):
```
Time Coupons_Sold
0 Third_Quarter-2021 907
1 Fourth_Quarter-2021 93
``` |
71,981,033 | So, I was messing around with operations in Pandas, and I reached conditional operations. For reference, I have two dataframes like this:
df\_1:
| Time | Coupons\_Sold |
| --- | --- |
| First\_Quarter-2021 | 1041 |
| Second\_Quarter-2021 | 2145 |
| Third\_Quarter-2021 | 1809 |
| Fourth\_Quarter-2021 | 1104 |
df\_2:
| Time | Coupons\_Sold |
| --- | --- |
| First\_Quarter-2022 | 861 |
| Second\_Quarter-2022 | 1024 |
| Third\_Quarter-2021 | 902 |
| Fourth\_Quarter-2021 | 1011 |
I wanted to do a conditional subtraction on these two datasets, such that the new column contains the absolute values from subtraction of the individual elements of the two columns, if and only if the time periods match.
I want something like:
| Time | Coupons\_Sold |
| --- | --- |
| Third\_Quarter-2021 | 907 |
| Fourth\_Quarter-2021 | 93 |
because there are mappings for third and fourth quarters in both dataframes.
I tried this piece of code:
```
new_column = df_1['Coupons_Sold'] - df_2['Coupons_Sold']
```
But, this just gave me:
| center |
| --- |
| 180 |
| 1121 |
| 907 |
| 93 |
Then I tried a few conditional statements like we do in python:
`if df_1['Time'] == df_2['Time']:`
```
df_1['Coupons_Sold'] - df_2['Coupons_Sold']
```
I tried the above code with `in` keyword, but got error.
but these conditional statements just gave me errors. Is there any way to do these kind of operations(py 2.7 or py3.7, both are okay)?
Thanks in advance.
If you need any more info, please ask and I will add the same. | 2022/04/23 | [
"https://Stackoverflow.com/questions/71981033",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7190421/"
] | The subtraction is done on the row index. By default it's just 0, 1, 2, etc... You could make `Time` the index of both dataframes and then the subtraction will work. You'll get a `Series` with values and `NaN`. If you want the new column to match the shape of the original dataframes, you can just use it. Otherwise, apply `.dropna()` to collapse it.
```
>>> df_1.set_index("Time", inplace=True)
>>> df_2.set_index("Time", inplace=True)
>>> df_1["Coupons_Sold"] - df_2["Coupons_Sold"]
Time
First_Quarter-2021 NaN
First_Quarter-2022 NaN
Fourth_Quarter-2021 93.0
Second_Quarter-2021 NaN
Second_Quarter-2022 NaN
Third_Quarter-2021 907.0
Name: Coupons_Sold, dtype: float64
>>> (df_1["Coupons_Sold"] - df_2["Coupons_Sold"]).dropna()
Time
Fourth_Quarter-2021 93.0
Third_Quarter-2021 907.0
Name: Coupons_Sold, dtype: float64
``` | You may try:
```
tset = set(df1['Time']).intersection(set(df2['Time']))
df3 = df1.loc[df1['Time'].isin(tset)].merge(df2.loc[df2['Time'].isin(tset)], on='Time')
df3['Coupons_Sold']=df3['Coupons_Sold_x']-df3['Coupons_Sold_y']
df3.drop(['Coupons_Sold_x','Coupons_Sold_y'], axis=1,inplace=True)
```
Output (df3):
```
Time Coupons_Sold
0 Third_Quarter-2021 907
1 Fourth_Quarter-2021 93
``` |
31,388,247 | I'm running Python 2.7.
I have an array called "altitude" with the following points
```
[0,1,2,3,4,5,6,7,8,9]
```
I also have an array called "arming\_pin"
```
[0,0,0,0,0,0,1,1,1,1]
```
In my program when arming\_pin is greater than zero I would like to use the "altitude" array data points and ignore the previous points when `"arming_pin"` was = to 0. I would like to call this new array `"altitude_new"`. The "altitude\_new" array would look like:
```
[6,7,8,9]
```
How can I do create this new array in python? Using a conditional statement of some sort? | 2015/07/13 | [
"https://Stackoverflow.com/questions/31388247",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5041365/"
] | You can use [`zip`](https://docs.python.org/2/library/functions.html#zip) function within a list comprehension to filter your array :
```
>>> f=[0,1,2,3,4,5,6,7,8,9]
>>> sec=[0,0,0,0,0,0,1,1,1,1]
>>>
>>> [i for i,j in zip(f,sec) if j]
[6, 7, 8, 9]
```
You can also use `itertools.compress`, which is more efficient when you are dealing with a larger list:
```
>>> from itertools import compress
>>> list(compress(f,sec))
[6, 7, 8, 9]
```
Or use `numpy.compress`:
```
>>> import numpy as np
>>> np.compress(sec,f)
array([6, 7, 8, 9])
``` | ```
altitude_new = []
for i in range(len(arming_pin)):
    if arming_pin[i] == 1:
        altitude_new.append(altitude[i])
```
one line list comprehension:
```
altitude_new = [altitude[i] for i in range(len(arming_pin)) if arming_pin[i]]
```
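A quick demo of the `enumerate` variant mentioned in the note below, using the same two lists:
```
>>> altitude = [0,1,2,3,4,5,6,7,8,9]
>>> arming_pin = [0,0,0,0,0,0,1,1,1,1]
>>> [j for i, j in enumerate(altitude) if arming_pin[i]]
[6, 7, 8, 9]
```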
night shade's comment is now more succinct [j for i,j in enumerate(altitude) if arming\_pin[i]] |
31,388,247 | I'm running Python 2.7.
I have an array called "altitude" with the following points
```
[0,1,2,3,4,5,6,7,8,9]
```
I also have an array called "arming\_pin"
```
[0,0,0,0,0,0,1,1,1,1]
```
In my program when arming\_pin is greater than zero I would like to use the "altitude" array data points and ignore the previous points when `"arming_pin"` was = to 0. I would like to call this new array `"altitude_new"`. The "altitude\_new" array would look like:
```
[6,7,8,9]
```
How can I do create this new array in python? Using a conditional statement of some sort? | 2015/07/13 | [
"https://Stackoverflow.com/questions/31388247",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5041365/"
] | You can use [`zip`](https://docs.python.org/2/library/functions.html#zip) function within a list comprehension to filter your array :
```
>>> f=[0,1,2,3,4,5,6,7,8,9]
>>> sec=[0,0,0,0,0,0,1,1,1,1]
>>>
>>> [i for i,j in zip(f,sec) if j]
[6, 7, 8, 9]
```
You can also use `itertools.compress`, which is more efficient when you are dealing with a larger list:
```
>>> from itertools import compress
>>> list(compress(f,sec))
[6, 7, 8, 9]
```
Or use `numpy.compress`:
```
>>> import numpy as np
>>> np.compress(sec,f)
array([6, 7, 8, 9])
``` | You can also use the [compress method from itertools](https://docs.python.org/2/library/itertools.html?highlight=itertools#itertools.compress) module, this way:
```
>>> import itertools as it
>>> l1 = [0,1,2,3,4,5,6,7,8,9]
>>> l2 = [0,0,0,0,0,0,1,1,1,1]
>>> list(it.compress(l1,l2))
[6, 7, 8, 9]
``` |
31,388,247 | I'm running Python 2.7.
I have an array called "altitude" with the following points
```
[0,1,2,3,4,5,6,7,8,9]
```
I also have an array called "arming\_pin"
```
[0,0,0,0,0,0,1,1,1,1]
```
In my program when arming\_pin is greater than zero I would like to use the "altitude" array data points and ignore the previous points when `"arming_pin"` was = to 0. I would like to call this new array `"altitude_new"`. The "altitude\_new" array would look like:
```
[6,7,8,9]
```
How can I do create this new array in python? Using a conditional statement of some sort? | 2015/07/13 | [
"https://Stackoverflow.com/questions/31388247",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5041365/"
] | You can use [`zip`](https://docs.python.org/2/library/functions.html#zip) function within a list comprehension to filter your array :
```
>>> f=[0,1,2,3,4,5,6,7,8,9]
>>> sec=[0,0,0,0,0,0,1,1,1,1]
>>>
>>> [i for i,j in zip(f,sec) if j]
[6, 7, 8, 9]
```
You can also use `itertools.compress`, which is more efficient when you are dealing with a larger list:
```
>>> from itertools import compress
>>> list(compress(f,sec))
[6, 7, 8, 9]
```
Or use `numpy.compress`:
```
>>> import numpy as np
>>> np.compress(sec,f)
array([6, 7, 8, 9])
``` | This is my solution, is meant to be easy to understand for people not accustomed to list comprehension.
```
altitude = [0,1,2,3,4,5,6,7,8,9]
arming_pin = [0,0,0,0,0,0,1,1,1,1]
altitude_new = []
idx = 0 # track the indices
for item in arming_pin:
    if item > 0:
        altitude_new.append(altitude[idx])
    idx += 1
print altitude_new
>>> [6, 7, 8, 9]
``` |
31,388,247 | I'm running Python 2.7.
I have an array called "altitude" with the following points
```
[0,1,2,3,4,5,6,7,8,9]
```
I also have an array called "arming\_pin"
```
[0,0,0,0,0,0,1,1,1,1]
```
In my program when arming\_pin is greater than zero I would like to use the "altitude" array data points and ignore the previous points when `"arming_pin"` was = to 0. I would like to call this new array `"altitude_new"`. The "altitude\_new" array would look like:
```
[6,7,8,9]
```
How can I do create this new array in python? Using a conditional statement of some sort? | 2015/07/13 | [
"https://Stackoverflow.com/questions/31388247",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5041365/"
] | You can also use the [compress method from itertools](https://docs.python.org/2/library/itertools.html?highlight=itertools#itertools.compress) module, this way:
```
>>> import itertools as it
>>> l1 = [0,1,2,3,4,5,6,7,8,9]
>>> l2 = [0,0,0,0,0,0,1,1,1,1]
>>> list(it.compress(l1,l2))
[6, 7, 8, 9]
``` | ```
altitude_new = []
for i in range(len(arming_pin)):
    if arming_pin[i] == 1:
        altitude_new.append(altitude[i])
```
one line list comprehension:
```
altitude_new = [altitude[i] for i in range(len(arming_pin)) if arming_pin[i]]
```
night shade's comment is now more succinct [j for i,j in enumerate(altitude) if arming\_pin[i]] |
31,388,247 | I'm running Python 2.7.
I have an array called "altitude" with the following points
```
[0,1,2,3,4,5,6,7,8,9]
```
I also have an array called "arming\_pin"
```
[0,0,0,0,0,0,1,1,1,1]
```
In my program when arming\_pin is greater than zero I would like to use the "altitude" array data points and ignore the previous points when `"arming_pin"` was = to 0. I would like to call this new array `"altitude_new"`. The "altitude\_new" array would look like:
```
[6,7,8,9]
```
How can I do create this new array in python? Using a conditional statement of some sort? | 2015/07/13 | [
"https://Stackoverflow.com/questions/31388247",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5041365/"
] | You can also use the [compress method from itertools](https://docs.python.org/2/library/itertools.html?highlight=itertools#itertools.compress) module, this way:
```
>>> import itertools as it
>>> l1 = [0,1,2,3,4,5,6,7,8,9]
>>> l2 = [0,0,0,0,0,0,1,1,1,1]
>>> list(it.compress(l1,l2))
[6, 7, 8, 9]
``` | This is my solution, is meant to be easy to understand for people not accustomed to list comprehension.
```
altitude = [0,1,2,3,4,5,6,7,8,9]
arming_pin = [0,0,0,0,0,0,1,1,1,1]
altitude_new = []
idx = 0 # track the indices
for item in arming_pin:
    if item > 0:
        altitude_new.append(altitude[idx])
    idx += 1
print altitude_new
>>> [6, 7, 8, 9]
``` |
24,835,100 | I'm trying to get a queried-excel file from a site. When I enter the direct link, it will lead to a login page and once I've entered my username and password, it will proceed to download the excel file automatically. I am trying to avoid installing additional module that's not part of the standard python (This script will be running on a "standardize machine" and it won't work if the module is not installed)
I've tried the following but I see a "page login" information in the excel file itself :-|
```
import urllib
url = "myLink_queriedResult/result.xls"
urllib.urlretrieve(url,"C:\\test.xls")
```
SO.. then I looked into using urllib2 with password authentication but then I'm stuck.
I have the following code:
```
import urllib2
import urllib
theurl = 'myLink_queriedResult/result.xls'
username = 'myName'
password = 'myPassword'
passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
passman.add_password(None, theurl, username, password)
authhandler = urllib2.HTTPBasicAuthHandler(passman)
opener = urllib2.build_opener(authhandler)
urllib2.install_opener(opener)
pagehandle = urllib2.urlopen(theurl)
pagehandle.read() ##but seems like it still only contain a 'login page'
```
Appreciate any advice in advance. :) | 2014/07/18 | [
"https://Stackoverflow.com/questions/24835100",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3854706/"
] | Urllib is generally eschewed these days for [Requests](http://docs.python-requests.org/en/latest/user/quickstart/#response-content).
This would do what you want:
```
import requests
from requests.auth import HTTPBasicAuth
theurl= 'myLink_queriedResult/result.xls'
username = 'myUsername'
password = 'myPassword'
r=requests.get(theurl, auth=HTTPBasicAuth(username, password))
```
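To actually save the downloaded spreadsheet to disk, a small follow-up sketch reusing `r` from above:
```
with open("test.xls", "wb") as f:
    f.write(r.content)  # raw bytes of the response
```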
Here you can find more [information on authentication using request.](http://docs.python-requests.org/en/latest/user/authentication/) | You will need to use cookies to allow authentication.
```
# Python 3 modules; on Python 2 use urllib2, urllib and cookielib instead
import urllib.request
import urllib.parse
import http.cookiejar as cookielib

# check the input name for login information by inspecting source
values = {'username': username, 'password': password}
data = urllib.parse.urlencode(values).encode("utf-8")
cookies = cookielib.CookieJar()
# create "opener" (OpenerDirector instance)
opener = urllib.request.build_opener(
    urllib.request.HTTPRedirectHandler(),
    urllib.request.HTTPHandler(debuglevel=0),
    urllib.request.HTTPSHandler(debuglevel=0),
    urllib.request.HTTPCookieProcessor(cookies))
# use the opener to fetch a URL
response = opener.open(the_url, data)
# Install the opener.
# Now all calls to urllib.request.urlopen use our opener.
urllib.request.install_opener(opener)
``` |
24,835,100 | I'm trying to get a queried-excel file from a site. When I enter the direct link, it will lead to a login page and once I've entered my username and password, it will proceed to download the excel file automatically. I am trying to avoid installing additional module that's not part of the standard python (This script will be running on a "standardize machine" and it won't work if the module is not installed)
I've tried the following but I see a "page login" information in the excel file itself :-|
```
import urllib
url = "myLink_queriedResult/result.xls"
urllib.urlretrieve(url,"C:\\test.xls")
```
SO.. then I looked into using urllib2 with password authentication but then I'm stuck.
I have the following code:
```
import urllib2
import urllib
theurl = 'myLink_queriedResult/result.xls'
username = 'myName'
password = 'myPassword'
passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
passman.add_password(None, theurl, username, password)
authhandler = urllib2.HTTPBasicAuthHandler(passman)
opener = urllib2.build_opener(authhandler)
urllib2.install_opener(opener)
pagehandle = urllib2.urlopen(theurl)
pagehandle.read() ##but seems like it still only contain a 'login page'
```
Appreciate any advice in advance. :) | 2014/07/18 | [
"https://Stackoverflow.com/questions/24835100",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3854706/"
] | Urllib is generally eschewed these days for [Requests](http://docs.python-requests.org/en/latest/user/quickstart/#response-content).
This would do what you want:
```
import requests
from requests.auth import HTTPBasicAuth
theurl= 'myLink_queriedResult/result.xls'
username = 'myUsername'
password = 'myPassword'
r=requests.get(theurl, auth=HTTPBasicAuth(username, password))
```
Here you can find more [information on authentication using request.](http://docs.python-requests.org/en/latest/user/authentication/) | You may try through this way with Python 3,
```
import requests
#import necessary Authentication Method
from requests_ntlm import HttpNtlmAuth
from xlrd import open_workbook
import pandas as pd
from io import BytesIO
r = requests.get("http://example.website",auth=HttpNtlmAuth('acc','password'))
xd = pd.read_excel(BytesIO(r.content))
```
Ref:
1. <https://medium.com/ibm-data-science-experience/excel-files-loading-from-object-storage-python-a54a2cbf4609>
2. <http://www.python-requests.org/en/latest/user/authentication/#basic-authentication>
3. [Pandas read\_csv from url](https://stackoverflow.com/questions/32400867/pandas-read-csv-from-url) |
24,835,100 | I'm trying to get a queried-excel file from a site. When I enter the direct link, it will lead to a login page and once I've entered my username and password, it will proceed to download the excel file automatically. I am trying to avoid installing additional module that's not part of the standard python (This script will be running on a "standardize machine" and it won't work if the module is not installed)
I've tried the following but I see a "page login" information in the excel file itself :-|
```
import urllib
url = "myLink_queriedResult/result.xls"
urllib.urlretrieve(url,"C:\\test.xls")
```
SO.. then I looked into using urllib2 with password authentication but then I'm stuck.
I have the following code:
```
import urllib2
import urllib
theurl = 'myLink_queriedResult/result.xls'
username = 'myName'
password = 'myPassword'
passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
passman.add_password(None, theurl, username, password)
authhandler = urllib2.HTTPBasicAuthHandler(passman)
opener = urllib2.build_opener(authhandler)
urllib2.install_opener(opener)
pagehandle = urllib2.urlopen(theurl)
pagehandle.read() ##but seems like it still only contain a 'login page'
```
Appreciate any advice in advance. :) | 2014/07/18 | [
"https://Stackoverflow.com/questions/24835100",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3854706/"
] | Urllib is generally eschewed these days for [Requests](http://docs.python-requests.org/en/latest/user/quickstart/#response-content).
This would do what you want:
```
import requests
from requests.auth import HTTPBasicAuth
theurl= 'myLink_queriedResult/result.xls'
username = 'myUsername'
password = 'myPassword'
r=requests.get(theurl, auth=HTTPBasicAuth(username, password))
```
Here you can find more [information on authentication using request.](http://docs.python-requests.org/en/latest/user/authentication/) | You can use requests.get to download file. Try the sample code:
```
import requests
from requests.auth import HTTPBasicAuth
def download_file(user_name, user_pwd, url, file_path):
    file_name = url.rsplit('/', 1)[-1]
    with requests.get(url, stream = True, auth = HTTPBasicAuth(user_name, user_pwd)) as response:
        with open(file_path + "/" + file_name, 'wb') as f:
            for chunk in response.iter_content(chunk_size = 8192):
                f.write(chunk)
# You will download the login.html file to /home/dan/
download_file("dan", "password", "http://www.example.com/login.html", "/home/dan/")
```
Enjoy it!! |
24,835,100 | I'm trying to get a queried-excel file from a site. When I enter the direct link, it will lead to a login page and once I've entered my username and password, it will proceed to download the excel file automatically. I am trying to avoid installing additional module that's not part of the standard python (This script will be running on a "standardize machine" and it won't work if the module is not installed)
I've tried the following but I see a "page login" information in the excel file itself :-|
```
import urllib
url = "myLink_queriedResult/result.xls"
urllib.urlretrieve(url,"C:\\test.xls")
```
SO.. then I looked into using urllib2 with password authentication but then I'm stuck.
I have the following code:
```
import urllib2
import urllib
theurl = 'myLink_queriedResult/result.xls'
username = 'myName'
password = 'myPassword'
passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
passman.add_password(None, theurl, username, password)
authhandler = urllib2.HTTPBasicAuthHandler(passman)
opener = urllib2.build_opener(authhandler)
urllib2.install_opener(opener)
pagehandle = urllib2.urlopen(theurl)
pagehandle.read() ##but seems like it still only contain a 'login page'
```
Appreciate any advice in advance. :) | 2014/07/18 | [
"https://Stackoverflow.com/questions/24835100",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3854706/"
] | You may try through this way with Python 3,
```
import requests
#import necessary Authentication Method
from requests_ntlm import HttpNtlmAuth
from xlrd import open_workbook
import pandas as pd
from io import BytesIO
r = requests.get("http://example.website",auth=HttpNtlmAuth('acc','password'))
xd = pd.read_excel(BytesIO(r.content))
```
Ref:
1. <https://medium.com/ibm-data-science-experience/excel-files-loading-from-object-storage-python-a54a2cbf4609>
2. <http://www.python-requests.org/en/latest/user/authentication/#basic-authentication>
3. [Pandas read\_csv from url](https://stackoverflow.com/questions/32400867/pandas-read-csv-from-url) | You will need to use cookies to allow authentication.
```
# Python 3 modules; on Python 2 use urllib2, urllib and cookielib instead
import urllib.request
import urllib.parse
import http.cookiejar as cookielib

# check the input name for login information by inspecting source
values = {'username': username, 'password': password}
data = urllib.parse.urlencode(values).encode("utf-8")
cookies = cookielib.CookieJar()
# create "opener" (OpenerDirector instance)
opener = urllib.request.build_opener(
    urllib.request.HTTPRedirectHandler(),
    urllib.request.HTTPHandler(debuglevel=0),
    urllib.request.HTTPSHandler(debuglevel=0),
    urllib.request.HTTPCookieProcessor(cookies))
# use the opener to fetch a URL
response = opener.open(the_url, data)
# Install the opener.
# Now all calls to urllib.request.urlopen use our opener.
urllib.request.install_opener(opener)
``` |
24,835,100 | I'm trying to get a queried-excel file from a site. When I enter the direct link, it will lead to a login page and once I've entered my username and password, it will proceed to download the excel file automatically. I am trying to avoid installing additional module that's not part of the standard python (This script will be running on a "standardize machine" and it won't work if the module is not installed)
I've tried the following but I see a "page login" information in the excel file itself :-|
```
import urllib
url = "myLink_queriedResult/result.xls"
urllib.urlretrieve(url,"C:\\test.xls")
```
SO.. then I looked into using urllib2 with password authentication but then I'm stuck.
I have the following code:
```
import urllib2
import urllib
theurl = 'myLink_queriedResult/result.xls'
username = 'myName'
password = 'myPassword'
passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
passman.add_password(None, theurl, username, password)
authhandler = urllib2.HTTPBasicAuthHandler(passman)
opener = urllib2.build_opener(authhandler)
urllib2.install_opener(opener)
pagehandle = urllib2.urlopen(theurl)
pagehandle.read() ##but seems like it still only contain a 'login page'
```
Appreciate any advice in advance. :) | 2014/07/18 | [
"https://Stackoverflow.com/questions/24835100",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3854706/"
] | You may try through this way with Python 3,
```
import requests
#import necessary Authentication Method
from requests_ntlm import HttpNtlmAuth
from xlrd import open_workbook
import pandas as pd
from io import BytesIO
r = requests.get("http://example.website",auth=HttpNtlmAuth('acc','password'))
xd = pd.read_excel(BytesIO(r.content))
```
Ref:
1. <https://medium.com/ibm-data-science-experience/excel-files-loading-from-object-storage-python-a54a2cbf4609>
2. <http://www.python-requests.org/en/latest/user/authentication/#basic-authentication>
3. [Pandas read\_csv from url](https://stackoverflow.com/questions/32400867/pandas-read-csv-from-url) | You can use requests.get to download file. Try the sample code:
```
import requests
from requests.auth import HTTPBasicAuth
def download_file(user_name, user_pwd, url, file_path):
    file_name = url.rsplit('/', 1)[-1]
    with requests.get(url, stream = True, auth = HTTPBasicAuth(user_name, user_pwd)) as response:
        with open(file_path + "/" + file_name, 'wb') as f:
            for chunk in response.iter_content(chunk_size = 8192):
                f.write(chunk)
# You will download the login.html file to /home/dan/
download_file("dan", "password", "http://www.example.com/login.html", "/home/dan/")
```
Enjoy it!! |
34,944,879 | How to pass data from PHP to Python?
This is my code.
PHP:
```
$data = 'hello';
$result = shell_exec('/home/pi/Python.py' .$data);
```
Python:
```
result = sys.argv[1]
print(result)
```
But when I run the Python code, it shows this error:
`"IndexError: list index out of range"`.
I don't know why.
Is there another way to pass data from PHP to Python? | 2016/01/22 | [
"https://Stackoverflow.com/questions/34944879",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5825766/"
] | Provide space between command and argument:
try the following snippet
**php** : test.php
```
<?php
$data = 'hello';
$output=shell_exec("python test.py " .$data);
echo $output;
?>
```
**python** : test.py
```
import sys
result = sys.argv[1]
print(result+" by python!")
``` | You should add space after script name
```
$data = 'hello';
$result = shell_exec('/home/pi/Python.py ' .$data);
``` |
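As a side note, the Python script itself can also guard against being called with no argument, which is what produces the original `IndexError`; a small sketch (independent of the PHP-side fix above):

```
import sys

# If PHP forgets the space (or passes nothing), sys.argv has no second element.
if len(sys.argv) > 1:
    result = sys.argv[1]
    print(result)
else:
    print("no argument received from PHP")
```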
48,114,601 | Is there a way to mount a volume under the `volumes:` directive in the Ansible docker_service module?
I want to write one docker_compose definition and use a variable to choose the correct volume to mount.
```yaml
- name: Add docker_service for the Ansible Container
docker_service:
project_name: jro
definition:
version: '3'
services:
ansible:
image: python:3.7.0a3-alpine3.7
volumes:
- {CONDITION xxx} then "xxx:ccc"
- {CONDITION yyy} then "yyy:ccc"
``` | 2018/01/05 | [
"https://Stackoverflow.com/questions/48114601",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6693072/"
] | Literally for the problem defined in the question:
```
volumes:
- "{{ 'xxx:ccc' if xxx else omit }}"
- "{{ 'yyy:ccc' if yyy else omit }}"
```
I cannot verify right now whether `omit` works in this particular pattern.
Or (a guess at what you might in fact be looking for):
```
volumes:
- "{{ 'xxx:ccc' if my_condition else 'yyy:ccc' }}"
``` | That's not how you would typically handle conditionals in an Ansible playbook. You haven't told us what your conditions are, so it's hard to directly answer your question, but you could do something like:
```
- set_fact:
volume_to_mount: foo
when: condition1
- set_fact:
volume_to_mount: bar
when: condition2
- name: Add docker_service for the Ansible Container
docker_service:
project_name: jro
definition:
version: '3'
services:
ansible:
image: python:3.7.0a3-alpine3.7
volumes:
- "{{volume_to_mount}}:/mountpoint"
```
If your conditions are hostnames or groups, you would generally handle that through the use of group or host vars files. E.g., if you wanted to mount volume "foo" for all hosts in the "foo\_group" hostgroup, and volume "bar" for all hosts in the "bar\_group" hostgroup, you would create `group_vars/foo_group.yml` with:
```
volume_to_mount: foo
```
And `groups_vars/bar_group.yml` with:
```
volume_to_mount: bar
```
If you need to mount *multiple* volumes, you would do something similar to the above but using lists instead of single values, e.g.:
```
volumes_to_mount:
- "bar:/mount_for_bar"
- "foo:/mount_for_foo"
```
And then in your `docker_service` task:
```
- name: Add docker_service for the Ansible Container
docker_service:
project_name: jro
definition:
version: '3'
services:
ansible:
image: python:3.7.0a3-alpine3.7
volumes: "{{ volumes_to_mount }}"
```
Hopefully something here points you in the right direction. Feel free to provide more details about what you're trying to do if you'd like a more targeted answer. |
46,751,152 | I'm trying to find multiple matches across multiple lines of text, with a delimiter to stop the search, using regex in Python... my query works well for what I'm trying to accomplish when everything I need is on the same line:
`re.findall(r'([a-zA-Z]{3}\d-[aAeE][rRsS]\d.*)', output)`
the problem is, sometimes the additional data I'm trying to capture doesn't fit on the same line and goes to the next... is there a way to set the pattern match to stop if it either finds the next match or hits a delimiter (= in this case)? Simplified example with two matches below, and I need the ability to capture both...
Example
```
Port Id Description
3/2/4 Part of aggregate interface lag-4. Next device in path sea1-as2
lag-4, sea1-as2 3/1/2.
``` | 2017/10/15 | [
"https://Stackoverflow.com/questions/46751152",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8682720/"
] | It seems that all you have to do is add `[\s\S]*?` to capture whatever comes on the next line, and include the expected stop characters (`,` or `.`) to end the match. Note that it is important to make `[\s\S]*?` lazy; otherwise, it will match the whole thing.
```
print(re.findall(r'([a-zA-Z]{3}\d-[aAeE][rRsS]\d[\s\S]*?\d)(?:,|\.)', output))
```
output
```
['sea1-as2 lag-4', 'sea1-as2 3/1/2']
``` | You mentioned `[a-zA-Z]` and `[aAeE][rRsS]`. There are several ways to set
[re.IGNORECASE](https://docs.python.org/3/library/re.html) so that `[ae][rs]` would suffice.
You didn't make it clear if you're using `re.MULTILINE` or if you're deleting newlines before evaluating the regex. You end with `.*` which could trivially become
```
[^=]*
```
if you want everything up to the `=` delimiter.
Alternatively, before evaluating the regex you could split on `\n` newlines and `=` signs, so you hand in appropriately sized chunks for evaluation.
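A rough sketch combining both suggestions — `re.IGNORECASE` plus a negated class that stops at the delimiter; the sample string below is invented, since the data shown in the question contains no `=`:

```
import re

# Invented sample line ending in an '=' delimiter.
text = "Part of aggregate interface lag-4. Next device sea1-as2 3/1/2 = config"
matches = re.findall(r'[a-z]{3}\d-[ae][rs]\d[^=]*', text, re.IGNORECASE)
print(matches)  # ['sea1-as2 3/1/2 '] -- the device name and everything up to '='
```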
15,456,689 | I installed [Python 2.7.3](http://www.python.org/download/) on my Windows 7 computer using the binary from the first link. After installing it, IDLE works but nothing else recognizes Python. For example, typing python at the command prompt returns the message "'python' is not recognized as an internal or external command, operable program or batch file."
Following [this post](https://stackoverflow.com/questions/3701646/how-to-add-to-the-pythonpath-in-windows-7), I made sure that python 2.7 was in the PYTHONPATH environment variable. However, that didn't help.
What should I do? | 2013/03/17 | [
"https://Stackoverflow.com/questions/15456689",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1743986/"
] | The `PYTHONPATH` system variable is used by Python itself to find directories with installed packages.
The `PATH` system variable is used by the OS (particularly Windows) to find executables that can open certain files, like `*.py` scripts.
So, you need to add the directory containing python.exe (for example `C:\Python27`) to the `PATH` system (or user) variable, not to `PYTHONPATH`. It can be done the same way as described in the link you found, in the same tool window.
For example, on my machine the `PATH` system variable is set to `C:\Python27;C:\MinGW\bin;...` | Like Vladimir commented, for [setting up python in windows](http://www.anthonydebarros.com/2011/10/15/setting-up-python-in-windows-7/), you need to add the directory where your python.exe is located (for example `C:\Python27`) to **PATH**.
You can **confirm** whether Python is in your environment variables by looking at the output of `echo %path%`.
Keep in mind that after editing the PATH variable using the control panel, you have to **open a new terminal**, as the setting will NOT be updated in existing terminals.
Another possibility is that you added the wrong path to the PATH variable. Verify it.
The bottom line is, if the directory of your python.exe is really in PATH, then running python will really work. |
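For what it's worth, you can also check this from IDLE (which works even when PATH is broken); a small sketch that lists every PATH entry actually containing python.exe:

```
import os

# If nothing is printed, C:\Python27 still needs to be added to PATH.
for entry in os.environ.get("PATH", "").split(os.pathsep):
    if entry and os.path.isfile(os.path.join(entry, "python.exe")):
        print(entry)
```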
15,456,689 | I installed [Python 2.7.3](http://www.python.org/download/) on my Windows 7 computer using the binary from the first link. After installing it, IDLE works but nothing else recognizes Python. For example, typing python at the command prompt returns the message "'python' is not recognized as an internal or external command, operable program or batch file."
Following [this post](https://stackoverflow.com/questions/3701646/how-to-add-to-the-pythonpath-in-windows-7), I made sure that python 2.7 was in the PYTHONPATH environment variable. However, that didn't help.
What should I do? | 2013/03/17 | [
"https://Stackoverflow.com/questions/15456689",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1743986/"
] | The `PYTHONPATH` system variable is used by Python itself to find directories with installed packages.
The `PATH` system variable is used by the OS (particularly Windows) to find executables that can open certain files, like `*.py` scripts.
So, you need to add the directory containing python.exe (for example `C:\Python27`) to the `PATH` system (or user) variable, not to `PYTHONPATH`. It can be done the same way as described in the link you found, in the same tool window.
For example, on my machine the `PATH` system variable is set to `C:\Python27;C:\MinGW\bin;...` | Here are your steps:
Right-click **Computer** and select **Properties**.
In the dialog box, select **Advanced System Settings**.
In the next dialog, select **Environment Variables**. In the **User Variables** section, edit the `PATH` statement to include this:
```
C:\Python27;C:\Python27\Lib\site-packages\;C:\Python27\Scripts\;
```
Now, you can open a command prompt (`Start Menu|Accessories` or `Start Menu|Run|cmd`) and type:
```
C:\> python
```
That will load the Python interpreter! |
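If you are unsure which folder to add, a quick check from IDLE (which already works) prints it, since `sys.executable` is the path of the running interpreter:

```
import os
import sys

# Prints the folder to put on PATH, e.g. C:\Python27
print(os.path.dirname(sys.executable))
```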
15,456,689 | I installed [Python 2.7.3](http://www.python.org/download/) on my Windows 7 computer using the binary from the first link. After installing it, IDLE works but nothing else recognizes Python. For example, typing python at the command prompt returns the message "'python' is not recognized as an internal or external command, operable program or batch file."
Following [this post](https://stackoverflow.com/questions/3701646/how-to-add-to-the-pythonpath-in-windows-7), I made sure that python 2.7 was in the PYTHONPATH environment variable. However, that didn't help.
What should I do? | 2013/03/17 | [
"https://Stackoverflow.com/questions/15456689",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1743986/"
] | The `PYTHONPATH` system variable is used by Python itself to find directories with installed packages.
The `PATH` system variable is used by the OS (particularly Windows) to find executables that can open certain files, like `*.py` scripts.
So, you need to add the directory containing python.exe (for example `C:\Python27`) to the `PATH` system (or user) variable, not to `PYTHONPATH`. It can be done the same way as described in the link you found, in the same tool window.
For example, on my machine the `PATH` system variable is set to `C:\Python27;C:\MinGW\bin;...` | You can install for a single user rather than choosing the "Install for all users" option. I was facing the same issue, but when I tried installing just for myself, I was able to install successfully. |
37,771,237 | I am building a web crawler using Python, but `urlopen(url)` downloads the files on the page. I just want to read the HTML and skip the URL if it points to a downloadable file.
I have tried using timeouts
```
urlopen(url, timeout = 5).read()
```
so that large files can be avoided, but this doesn't seem to work.
I also thought of making a list of common file extensions and skipping the URL whenever it ends with one of those extensions.
```
flag = False
extensions = ['.zip', '.mp3',....]
for extension in extensions:
if url.endswith(extension):
flag = True
continue
if not flag:
x = urlopen(url).read()
```
But this method will not be very efficient I suppose.
Any ideas ? | 2016/06/12 | [
"https://Stackoverflow.com/questions/37771237",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | You can use the `Content-Type` HTTP header to find out if it's HTML or something else:
```
x = urlopen(url)
if 'text/html' in x.headers.get('Content-Type', ''):
    x = x.read()
``` | To narrow the amount of file content to check, check the return code (`getcode()`) before examining the content:
```
import urllib2

doc = urllib2.urlopen(url, timeout=5)  # urllib2, since urllib.urlopen() has no timeout parameter
if doc and doc.getcode() == 200 and doc.headers.get('Content-Type', '').startswith("text/html"):
    x = doc.read()
``` |
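As an extra guard against the large downloads mentioned in the question, the `Content-Length` header can be checked as well; a sketch building on the snippet above (it assumes `url` is already defined, and the 2 MB cap is arbitrary):

```
import urllib2

MAX_BYTES = 2 * 1024 * 1024  # arbitrary cap; tune to taste

doc = urllib2.urlopen(url, timeout=5)
content_type = doc.headers.get('Content-Type', '')
content_length = doc.headers.get('Content-Length')
if doc.getcode() == 200 and content_type.startswith('text/html'):
    if content_length is None or int(content_length) <= MAX_BYTES:
        x = doc.read()
```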
37,771,237 | I am building a web crawler using Python, but `urlopen(url)` downloads the files on the page. I just want to read the HTML and skip the URL if it points to a downloadable file.
I have tried using timeouts
```
urlopen(url, timeout = 5).read()
```
so that large files can be avoided, but this doesn't seem to work.
I also thought of making a list of common file extensions and skipping the URL whenever it ends with one of those extensions.
```
flag = False
extensions = ['.zip', '.mp3',....]
for extension in extensions:
if url.endswith(extension):
flag = True
continue
if not flag:
x = urlopen(url).read()
```
But this method will not be very efficient I suppose.
Any ideas ? | 2016/06/12 | [
"https://Stackoverflow.com/questions/37771237",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | You can use the `Content-Type` HTTP header to find out if it's HTML or something else:
```
x = urlopen(url)
if 'text/html' in x.headers.get('Content-Type', ''):
    x = x.read()
``` | You can achieve this with [python requests](http://docs.python-requests.org/en/master/):
```
In [8]: import requests
In [9]: h = requests.head("http://stackoverflow.com/questions/37771237/avoid-downloadable-files-in-python-urlopen")
In [10]: if "text/html" in h.headers["content-type"]:
....: content = requests.get("http://stackoverflow.com/questions/37771237/avoid-downloadable-files-in-python-urlopen").text
....:
``` |
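A variation on the same idea that avoids the separate HEAD request: with `stream=True`, requests defers downloading the body until `.text` or `.content` is accessed, so the headers can be inspected first (the URL is just the example from above):

```
import requests

url = "http://stackoverflow.com/questions/37771237/avoid-downloadable-files-in-python-urlopen"
r = requests.get(url, stream=True, timeout=5)
if 'text/html' in r.headers.get('Content-Type', ''):
    content = r.text  # the body is only downloaded here
r.close()  # release the connection if the body was skipped
```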
37,771,237 | I am building a web crawler using Python, but `urlopen(url)` downloads the files on the page. I just want to read the HTML and skip the URL if it points to a downloadable file.
I have tried using timeouts
```
urlopen(url, timeout = 5).read()
```
so that large files can be avoided, but this doesn't seem to work.
I also thought of making a list of common file extensions and skipping the URL whenever it ends with one of those extensions.
```
flag = False
extensions = ['.zip', '.mp3',....]
for extension in extensions:
if url.endswith(extension):
flag = True
continue
if not flag:
x = urlopen(url).read()
```
But this method will not be very efficient I suppose.
Any ideas ? | 2016/06/12 | [
"https://Stackoverflow.com/questions/37771237",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | You can achieve this with [python requests](http://docs.python-requests.org/en/master/):
```
In [8]: import requests
In [9]: h = requests.head("http://stackoverflow.com/questions/37771237/avoid-downloadable-files-in-python-urlopen")
In [10]: if "text/html" in h.headers["content-type"]:
....: content = requests.get("http://stackoverflow.com/questions/37771237/avoid-downloadable-files-in-python-urlopen").text
....:
``` | To narrow the amount of file content to check, check the return code (`getcode()`) before examining the content:
```
import urllib2

doc = urllib2.urlopen(url, timeout=5)  # urllib2, since urllib.urlopen() has no timeout parameter
if doc and doc.getcode() == 200 and doc.headers.get('Content-Type', '').startswith("text/html"):
    x = doc.read()
``` |