qid (int64, 46k–74.7M) | question (string, 54–37.8k chars) | date (string, 10 chars) | metadata (sequence, length 3) | response_j (string, 17–26k chars) | response_k (string, 26–26k chars)
---|---|---|---|---|---|
1,312,524 | I'm attempting to build a webpage that loads depending on the input provided. I'm having some trouble wrapping my head around event handling in javascript, basically. Coming from python, if I wanted to wait for a specific keyboard input before moving on to the next object to display, I would create a *while* loop and put a key listener inside it.
Python:
```
def getInput():
    while 1:
        for event in pygame.event.get(): #returns a list of events from the keyboard/mouse
            if event.type == KEYDOWN:
                if event.key == "enter": # for example
                    do function()
                    return
                elif event.key == "up":
                    do function2()
                    continue
                else: continue # for clarity
```
In trying to find a way to implement this in DOM/JavaScript, I seem to just crash the page (I assume due to the while loop), but I presume this is because my event handling is poorly written. Also, registering event handlers with "element.onkeydown = function;" is difficult for me to wrap my head around, and setInterval(foo(), interval) hasn't brought me much success.
Basically, I want a "listening" loop to do a certain behavior for key X, but to break when key Y is hit. | 2009/08/21 | [
"https://Stackoverflow.com/questions/1312524",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/160854/"
] | In JavaScript, you give up control of the main loop. The browser runs the main loop and calls back down into your code when an event or timeout/interval occurs. You have to handle the event and then return so that the browser can get on with doing other things, firing events, and so on.
So you cannot have a ‘listening’ loop. The browser does that for you, giving you the event and letting you deal with it, but once you've finished handling the event you must return. You can't fall back into a different loop. This means you can't write step-by-step procedural code; if you have state that persists between event calls you must store it, eg. in a variable.
This approach cannot work:
```
<input type="text" readonly="readonly" value="" id="status" />
var s= document.getElementById('status');
s.value= 'Press A now';
while (true) {
var e= eventLoop.nextKeyEvent(); // THERE IS NO SUCH THING AS THIS
if (e.which=='a')
break
}
s.value= 'Press Y or N';
while (true) {
var e= eventLoop.nextKeyEvent();
if (e.which=='y') ...
```
Step-by-step code has to be turned inside out so that the browser calls down to you, instead of you calling up to the browser:
```
var state= 0;
function keypressed(event) {
var key= String.fromCharCode(event? event.which : window.event.keyCode); // IE compatibility
switch (state) {
case 0:
if (key=='a') {
s.value= 'Press Y or N';
state++;
}
break;
case 1:
if (key=='y') ...
break;
}
}
s.value= 'Press A now';
document.onkeypress= keypressed;
```
You can also make code look a little more linear and clean up some of the state stuff by using nested anonymous functions:
```
s.value= 'Press A now';
document.onkeypress= function(event) {
var key= String.fromCharCode(event? event.which : window.event.keyCode);
if (key=='a') {
s.value= 'Press Y or N';
document.onkeypress= function(event) {
var key= String.fromCharCode(event? event.which : window.event.keyCode);
if (key=='y') ...
};
}
};
``` | ```
document.onkeydown = function(e) {
//do what you need to do
}
```
That's all it takes in JavaScript. You don't need to loop to wait for the event to happen; whenever the event occurs, that function will be called, which in turn can call other functions and do whatever needs to be done. Think of it this way: instead of you having to wait for the event you're looking for to happen, the event you're looking for will let you know when it happens. |
1,312,524 | I'm attempting to build a webpage that loads depending on the input provided. I'm having some trouble wrapping my head around event handling in javascript, basically. Coming from python, if I wanted to wait for a specific keyboard input before moving on to the next object to display, I would create a *while* loop and put a key listener inside it.
Python:
```
def getInput():
    while 1:
        for event in pygame.event.get(): #returns a list of events from the keyboard/mouse
            if event.type == KEYDOWN:
                if event.key == "enter": # for example
                    do function()
                    return
                elif event.key == "up":
                    do function2()
                    continue
                else: continue # for clarity
```
In trying to find a way to implement this in DOM/JavaScript, I seem to just crash the page (I assume due to the while loop), but I presume this is because my event handling is poorly written. Also, registering event handlers with "element.onkeydown = function;" is difficult for me to wrap my head around, and setInterval(foo(), interval) hasn't brought me much success.
Basically, I want a "listening" loop to do a certain behavior for key X, but to break when key Y is hit. | 2009/08/21 | [
"https://Stackoverflow.com/questions/1312524",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/160854/"
] | You should not use such loops in JavaScript; basically, you do not want to block the browser from doing its job. Thus you work with events (onkeyup/onkeydown).
Also, instead of a loop, you should use setTimeout if you want to wait a little and continue once something has happened.
You can do something like this:
```
<html>
<script>
var dataToLoad = new Array('data1', 'data2', 'data3' );
var pos = 0;
function continueData(ev) {
// do whatever checks you need about key
var ele = document.getElementById("mydata");
if (pos < dataToLoad.length)
{
ele.appendChild(document.createTextNode(dataToLoad[pos]));
pos++;
}
}
</script>
<body onkeyup="continueData()"><div id="mydata"></div></body></html>
```
Every time a key is released, the next data field is appended. | ```
document.onkeydown = function(e) {
//do what you need to do
}
```
That's all it takes in JavaScript. You don't need to loop to wait for the event to happen; whenever the event occurs, that function will be called, which in turn can call other functions and do whatever needs to be done. Think of it this way: instead of you having to wait for the event you're looking for to happen, the event you're looking for will let you know when it happens. |
1,312,524 | I'm attempting to build a webpage that loads depending on the input provided. I'm having some trouble wrapping my head around event handling in javascript, basically. Coming from python, if I wanted to wait for a specific keyboard input before moving on to the next object to display, I would create a *while* loop and put a key listener inside it.
Python:
```
def getInput():
    while 1:
        for event in pygame.event.get(): #returns a list of events from the keyboard/mouse
            if event.type == KEYDOWN:
                if event.key == "enter": # for example
                    do function()
                    return
                elif event.key == "up":
                    do function2()
                    continue
                else: continue # for clarity
```
In trying to find a way to implement this in DOM/JavaScript, I seem to just crash the page (I assume due to the while loop), but I presume this is because my event handling is poorly written. Also, registering event handlers with "element.onkeydown = function;" is difficult for me to wrap my head around, and setInterval(foo(), interval) hasn't brought me much success.
Basically, I want a "listening" loop to do a certain behavior for key X, but to break when key Y is hit. | 2009/08/21 | [
"https://Stackoverflow.com/questions/1312524",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/160854/"
] | In JavaScript, you give up control of the main loop. The browser runs the main loop and calls back down into your code when an event or timeout/interval occurs. You have to handle the event and then return so that the browser can get on with doing other things, firing events, and so on.
So you cannot have a ‘listening’ loop. The browser does that for you, giving you the event and letting you deal with it, but once you've finished handling the event you must return. You can't fall back into a different loop. This means you can't write step-by-step procedural code; if you have state that persists between event calls you must store it, eg. in a variable.
This approach cannot work:
```
<input type="text" readonly="readonly" value="" id="status" />
var s= document.getElementById('status');
s.value= 'Press A now';
while (true) {
var e= eventLoop.nextKeyEvent(); // THERE IS NO SUCH THING AS THIS
if (e.which=='a')
break
}
s.value= 'Press Y or N';
while (true) {
var e= eventLoop.nextKeyEvent();
if (e.which=='y') ...
```
Step-by-step code has to be turned inside out so that the browser calls down to you, instead of you calling up to the browser:
```
var state= 0;
function keypressed(event) {
var key= String.fromCharCode(event? event.which : window.event.keyCode); // IE compatibility
switch (state) {
case 0:
if (key=='a') {
s.value= 'Press Y or N';
state++;
}
break;
case 1:
if (key=='y') ...
break;
}
}
s.value= 'Press A now';
document.onkeypress= keypressed;
```
You can also make code look a little more linear and clean up some of the state stuff by using nested anonymous functions:
```
s.value= 'Press A now';
document.onkeypress= function(event) {
var key= String.fromCharCode(event? event.which : window.event.keyCode);
if (key=='a') {
s.value= 'Press Y or N';
document.onkeypress= function(event) {
var key= String.fromCharCode(event? event.which : window.event.keyCode);
if (key=='y') ...
};
}
};
``` | Check out the YUI key listener
<http://developer.yahoo.com/yui/docs/YAHOO.util.KeyListener.html>
Using the key listener, YUI takes care of capturing any events. In JavaScript, there will almost never be an instance where you must wait in a while loop for something to happen.
If you need examples of how event handling works, check out these pages.
<http://developer.yahoo.com/yui/examples/event/eventsimple.html> |
1,312,524 | I'm attempting to build a webpage that loads depending on the input provided. I'm having some trouble wrapping my head around event handling in javascript, basically. Coming from python, if I wanted to wait for a specific keyboard input before moving on to the next object to display, I would create a *while* loop and put a key listener inside it.
Python:
```
def getInput():
    while 1:
        for event in pygame.event.get(): #returns a list of events from the keyboard/mouse
            if event.type == KEYDOWN:
                if event.key == "enter": # for example
                    do function()
                    return
                elif event.key == "up":
                    do function2()
                    continue
                else: continue # for clarity
```
In trying to find a way to implement this in DOM/JavaScript, I seem to just crash the page (I assume due to the while loop), but I presume this is because my event handling is poorly written. Also, registering event handlers with "element.onkeydown = function;" is difficult for me to wrap my head around, and setInterval(foo(), interval) hasn't brought me much success.
Basically, I want a "listening" loop to do a certain behavior for key X, but to break when key Y is hit. | 2009/08/21 | [
"https://Stackoverflow.com/questions/1312524",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/160854/"
] | For easier implementation of event handling, I recommend using a library such as [Prototype](http://www.prototypejs.org/api/event) or [jQuery](http://docs.jquery.com/Events) (note that both links take you to their respective event handling documentation).
In order to use them, you have to keep in mind 3 things:
* What DOM element you want to observe
* What event you want to capture
* What action the event will trigger
These three points are mutually inclusive, meaning you need to take care of all three when writing the code.
So having this in mind, using Prototype, you could do this:
```
Event.observe($('id_of_the_element_to_observe'), 'keypress', function(ev) {
// the argument ev is the event object that has some useful information such
// as which keycode was pressed.
code_to_run;
});
```
Here is the code of a more useful example, a CharacterCounter (such as the one found in Twitter, but surely a lot less reliable ;) ):
```
var CharacterCounter = Class.create({
initialize: function(input, counter, max_chars) {
this.input = input;
this.counter = counter;
this.max_chars = max_chars;
Event.observe(this.input, 'keypress', this.keyPressHandler.bind(this));
Event.observe(this.input, 'keyup', this.keyUpHandler.bind(this));
},
keyUpHandler: function() {
words_left = this.max_chars - $F(this.input).length;
this.counter.innerHTML = words_left;
},
keyPressHandler: function(e) {
words_left = this.max_chars - $F(this.input).length;
if (words_left <= 0 && this.allowedChars(e.keyCode)) {
e.stop();
}
},
allowedChars: function(keycode) {
// 8: backspace, 37-40: arrow keys, 46: delete
allowed_keycodes = [ 8, 37, 38, 39, 40, 46 ];
if (allowed_keycodes.include(keycode)) {
return false;
}
return true
}
});
``` | Check out the YUI key listener
<http://developer.yahoo.com/yui/docs/YAHOO.util.KeyListener.html>
Using the key listener, YUI takes care of capturing any events. In JavaScript, there will almost never be an instance where you must wait in a while loop for something to happen.
If you need examples of how event handling works, check out these pages.
<http://developer.yahoo.com/yui/examples/event/eventsimple.html> |
1,312,524 | I'm attempting to build a webpage that loads depending on the input provided. I'm having some trouble wrapping my head around event handling in javascript, basically. Coming from python, if I wanted to wait for a specific keyboard input before moving on to the next object to display, I would create a *while* loop and put a key listener inside it.
Python:
```
def getInput():
    while 1:
        for event in pygame.event.get(): #returns a list of events from the keyboard/mouse
            if event.type == KEYDOWN:
                if event.key == "enter": # for example
                    do function()
                    return
                elif event.key == "up":
                    do function2()
                    continue
                else: continue # for clarity
```
In trying to find a way to implement this in DOM/JavaScript, I seem to just crash the page (I assume due to the while loop), but I presume this is because my event handling is poorly written. Also, registering event handlers with "element.onkeydown = function;" is difficult for me to wrap my head around, and setInterval(foo(), interval) hasn't brought me much success.
Basically, I want a "listening" loop to do a certain behavior for key X, but to break when key Y is hit. | 2009/08/21 | [
"https://Stackoverflow.com/questions/1312524",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/160854/"
] | In JavaScript, you give up control of the main loop. The browser runs the main loop and calls back down into your code when an event or timeout/interval occurs. You have to handle the event and then return so that the browser can get on with doing other things, firing events, and so on.
So you cannot have a ‘listening’ loop. The browser does that for you, giving you the event and letting you deal with it, but once you've finished handling the event you must return. You can't fall back into a different loop. This means you can't write step-by-step procedural code; if you have state that persists between event calls you must store it, eg. in a variable.
This approach cannot work:
```
<input type="text" readonly="readonly" value="" id="status" />
var s= document.getElementById('status');
s.value= 'Press A now';
while (true) {
var e= eventLoop.nextKeyEvent(); // THERE IS NO SUCH THING AS THIS
if (e.which=='a')
break
}
s.value= 'Press Y or N';
while (true) {
var e= eventLoop.nextKeyEvent();
if (e.which=='y') ...
```
Step-by-step code has to be turned inside out so that the browser calls down to you, instead of you calling up to the browser:
```
var state= 0;
function keypressed(event) {
var key= String.fromCharCode(event? event.which : window.event.keyCode); // IE compatibility
switch (state) {
case 0:
if (key=='a') {
s.value= 'Press Y or N';
state++;
}
break;
case 1:
if (key=='y') ...
break;
}
}
s.value= 'Press A now';
document.onkeypress= keypressed;
```
You can also make code look a little more linear and clean up some of the state stuff by using nested anonymous functions:
```
s.value= 'Press A now';
document.onkeypress= function(event) {
var key= String.fromCharCode(event? event.which : window.event.keyCode);
if (key=='a') {
s.value= 'Press Y or N';
document.onkeypress= function(event) {
var key= String.fromCharCode(event? event.which : window.event.keyCode);
if (key=='y') ...
};
}
};
``` | For easier implementation of event handling, I recommend using a library such as [Prototype](http://www.prototypejs.org/api/event) or [jQuery](http://docs.jquery.com/Events) (note that both links take you to their respective event handling documentation).
In order to use them, you have to keep in mind 3 things:
* What DOM element you want to observe
* What event you want to capture
* What action the event will trigger
These three points are mutually inclusive, meaning you need to take care of all three when writing the code.
So having this in mind, using Prototype, you could do this:
```
Event.observe($('id_of_the_element_to_observe'), 'keypress', function(ev) {
// the argument ev is the event object that has some useful information such
// as which keycode was pressed.
code_to_run;
});
```
Here is the code of a more useful example, a CharacterCounter (such as the one found in Twitter, but surely a lot less reliable ;) ):
```
var CharacterCounter = Class.create({
initialize: function(input, counter, max_chars) {
this.input = input;
this.counter = counter;
this.max_chars = max_chars;
Event.observe(this.input, 'keypress', this.keyPressHandler.bind(this));
Event.observe(this.input, 'keyup', this.keyUpHandler.bind(this));
},
keyUpHandler: function() {
words_left = this.max_chars - $F(this.input).length;
this.counter.innerHTML = words_left;
},
keyPressHandler: function(e) {
words_left = this.max_chars - $F(this.input).length;
if (words_left <= 0 && this.allowedChars(e.keyCode)) {
e.stop();
}
},
allowedChars: function(keycode) {
// 8: backspace, 37-40: arrow keys, 46: delete
allowed_keycodes = [ 8, 37, 38, 39, 40, 46 ];
if (allowed_keycodes.include(keycode)) {
return false;
}
return true
}
});
``` |
1,312,524 | I'm attempting to build a webpage that loads depending on the input provided. I'm having some trouble wrapping my head around event handling in javascript, basically. Coming from python, if I wanted to wait for a specific keyboard input before moving on to the next object to display, I would create a *while* loop and put a key listener inside it.
Python:
```
def getInput():
    while 1:
        for event in pygame.event.get(): #returns a list of events from the keyboard/mouse
            if event.type == KEYDOWN:
                if event.key == "enter": # for example
                    do function()
                    return
                elif event.key == "up":
                    do function2()
                    continue
                else: continue # for clarity
```
In trying to find a way to implement this in DOM/JavaScript, I seem to just crash the page (I assume due to the while loop), but I presume this is because my event handling is poorly written. Also, registering event handlers with "element.onkeydown = function;" is difficult for me to wrap my head around, and setInterval(foo(), interval) hasn't brought me much success.
Basically, I want a "listening" loop to do a certain behavior for key X, but to break when key Y is hit. | 2009/08/21 | [
"https://Stackoverflow.com/questions/1312524",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/160854/"
] | You should not use such loops in JavaScript; basically, you do not want to block the browser from doing its job. Thus you work with events (onkeyup/onkeydown).
Also, instead of a loop, you should use setTimeout if you want to wait a little and continue once something has happened.
You can do something like this:
```
<html>
<script>
var dataToLoad = new Array('data1', 'data2', 'data3' );
var pos = 0;
function continueData(ev) {
// do whatever checks you need about key
var ele = document.getElementById("mydata");
if (pos < dataToLoad.length)
{
ele.appendChild(document.createTextNode(dataToLoad[pos]));
pos++;
}
}
</script>
<body onkeyup="continueData()"><div id="mydata"></div></body></html>
```
Every time a key is released, the next data field is appended. | You could attach an event listener to the window object like this:
```
window.captureEvents(Event.KEYPRESS);
window.onkeypress = output;
function output(event) {
alert("you pressed" + event.which);
}
``` |
41,596,143 | I am trying to find an elegant way to calculate a bivariate normal CDF with python where one upper bound of the CDF is a function of two variables, of which one is a variable of the bivariate normal density (integral variable).
Example:
```
from scipy import integrate
import numpy as np
# First define f(x, y) as the bivariate normal distribution with fixed correlation p
p = 0.4
def f(x, y):
    Q = x**2 + y**2 - 2*p*x*y
    return 1/(2*np.pi*np.sqrt(1-p**2))*np.exp(-1/(2*(1-p**2))*Q)
# Define N2(a, b) as the cumulative bivariate normal distribution f where a is constant
# and b is a function of two variables
def N2(a, b):
    prob, error = integrate.dblquad(f, np.NINF, a, lambda x: np.NINF, b)
    return prob
# Upper bound function of 2 variables example where x is an integral variable
def upper_bound(x, v):
    return 0.5*v*x
# My approach which doesn't work
# Calculate bivariate normal CDF for different values of v in the upper bound
results = [N2(1, upper_bound(x, v)) for v in np.linspace(0.01, 4, 200)]
```
Any ideas on how I could change my approach so the call to `upper_bound(x, v)` in
`results = [N2(1, upper_bound(x, v)) for v in np.linspace(0.01, 4, 200)]` would work? Other approaches to tackle the problem are also welcome.
Edit: This is the integral I want to compute, where f(x,y) is the bivariate normal density function. Note that the actual upper bound f(x,v) = 0.5\*v\*x I want to compute is far more complicated; this is just an example, so I do not want to compute it symbolically, for instance with sympy. Also, my goal is to compute the integral for a few hundred different values of v.
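For reference, a reconstruction of the integral in the linked image below, inferred from the example code above (the bound 0.5·v·x is just the illustrative placeholder from that code, not the real bound):

$$N_2(1, v) = \int_{-\infty}^{1} \int_{-\infty}^{0.5\,v\,x} f(x, y)\, dy\, dx$$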
The integral: [](https://i.stack.imgur.com/CV5Tr.png) | 2017/01/11 | [
"https://Stackoverflow.com/questions/41596143",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3023486/"
] | Although it's slow this approach seems to work.
The first few lines, up to 'this should produce 1', are a sanity check. I wanted to verify that my approach would correctly calculate the volume under the density. It does.
I use a variance-covariance matrix to get the desired correlation of 0.4 and avoid writing my own pdf.
I curry functions in two places so that functions have only single parameters. This makes it possible to calculate the inner integral as a function of *x*. It also makes it possible to take the *v* parameter 'outside' the other calculations.
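As a tiny, hypothetical illustration of the currying idea used below (the names `add` and `add_two` are made up; it assumes `toolz` is installed):
```
from toolz import curry

@curry
def add(a, b):
    return a + b

add_two = add(2)   # add_two is now a function of the remaining argument
print(add_two(3))  # prints 5
```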
```
from toolz import curry
from scipy.stats import multivariate_normal
from scipy.integrate import quad
import numpy as np
@curry
def bivariate(x,y):
    return multivariate_normal.pdf([x,y],cov=[[1,.4],[.4,1]])

def out_y(x):
    marginal = bivariate(x)
    return quad(marginal, np.NINF, np.PINF)[0]

# this should produce 1
print (quad(out_y, np.NINF, np.PINF)[0])

# now to what the OP wants
@curry
def inner_integral(v,x):
    marginal = bivariate(x)
    return quad(marginal, np.NINF, 0.5*v*x)[0]
inner_integral_for_one_v = inner_integral(0.8)
print (quad(inner_integral_for_one_v,np.NINF, 1)[0])
```
To use this code you would write something equivalent to:
```
for v in np.arange(0, 1, 0.1):  # range() does not accept float steps, so use np.arange
    inner_integral_for_one_v = inner_integral(v)
    print(quad(inner_integral_for_one_v, np.NINF, 1)[0])
``` | I had to write an option model that was using a bivariate distribution in Python. However, I did not find a prebuilt function that was fast - some seem to be using the random scipy generator to emulate it with the multivariate function. BUT... if you really dig deep and see what the other financial packages are using, it's pretty much ALL code written by one guy, Alan Genz from Washington State University. He pretty much writes everything in Fortran or MATLAB, and that's it. So if you look into packages that have the bivariate CDF you'll find his name and his code there (I found an old version in MATLAB actually). <http://www.math.wsu.edu/faculty/genz/software/software.html>
So why this isn't built into SciPy or NumPy already, I have no idea. But I rewrote it in several hours, using both MATLAB and Python to check the resulting code. He's solving with Gauss-Legendre quadrature, so this will always be much faster than a solution that uses a random number generator: <https://en.wikipedia.org/wiki/Gauss%E2%80%93Legendre_quadrature>
```
# Alan Genz - original MATLAB code converted by Matt Slezak to Python
# http://www.math.wsu.edu/faculty/genz/software/matlab/bvnl.m
# Department of Mathematics
# Washington State University
# This is the bivariate CDF distribution
#
# dh 1st upper integration limit
# dk 2nd upper integration limit
# r correlation coefficient
import numpy as np
from scipy.stats import norm
def bivariate_cdf(dh, dk, r):
    # dh and dk get signs flipped right away
    dh = float(-dh)
    dk = float(-dk)
    if (dh == np.inf) or (dk == np.inf):
        return 0
    else:
        if dh == - np.inf:
            if dk == - np.inf:
                return 1
            else:
                return norm.cdf(- dk)
        if dk == - np.inf:
            return norm.cdf(- dh)
        else:
            if r == 0:
                return norm.cdf(- dh)*norm.cdf(- dk)
            else:
                tp=2*np.pi
                h=dh
                k=dk
                hk=h*k
                bvn=0
                if abs(r) < 0.3:
                    w=np.array([0.1713244923791705,0.3607615730481384,0.4679139345726904])
                    x=np.array([0.9324695142031522,0.6612093864662647,0.238619186083197])
                else:
                    if abs(r) < 0.75:
                        w=np.array([0.04717533638651177,0.1069393259953183,0.1600783285433464,0.2031674267230659,0.2334925365383547,0.2491470458134029])
                        x=np.array([0.9815606342467191,0.904117256370475,0.769902674194305,0.5873179542866171,0.3678314989981802,0.1252334085114692])
                    else:
                        w=np.array([0.01761400713915212,0.04060142980038694,0.06267204833410905,0.08327674157670475,0.1019301198172404,0.1181945319615184,0.1316886384491766,0.1420961093183821,0.1491729864726037,0.1527533871307259])
                        x=np.array([0.9931285991850949, 0.9639719272779138, 0.9122344282513259,0.8391169718222188, 0.7463319064601508, 0.6360536807265150,0.5108670019508271,0.3737060887154196,0.2277858511416451,0.07652652113349733])
                w = np.concatenate((w, w), axis=0)
                x = np.concatenate((1 - x, 1 + x), axis=0)
                if abs(r) < 0.925:
                    hs=(h*h+k*k) / 2
                    asr=np.arcsin(r) / 2
                    sn=np.sin(asr*x)
                    bvn=np.dot(np.exp((sn*hk-hs)/(1-sn**2)),w.T)
                    bvn=bvn*asr/tp + norm.cdf(-h)*norm.cdf(-k)
                else:
                    if r < 0:
                        k=- k
                        hk=- hk
                    if abs(r) < 1:
                        as1=1 - r ** 2
                        a=np.sqrt(as1)
                        bs=(h - k) ** 2
                        asr=- (bs / as1 + hk) / 2
                        c=(4 - hk) / 8
                        d=(12 - hk) / 80
                        if asr > - 100:
                            bvn= a*np.exp(asr)*(1-c*(bs-as1)*(1-d*bs)/3+c*d*as1**2)
                        if hk > - 100:
                            b=np.sqrt(bs)
                            sp=np.sqrt(tp)*norm.cdf(-b/a)
                            bvn=bvn - np.exp(-hk/2)*sp*b*( 1 - c*bs*(1-d*bs)/3 )
                        a=a / 2
                        xs=(a*x) ** 2
                        asr=- (bs / xs + hk) / 2
                        ix=asr > - 100
                        xs=xs[ix]
                        sp=( 1 + c*xs*(1+5*d*xs) )
                        rs=np.sqrt(1 - xs)
                        ep=np.exp(- (hk / 2)*xs / (1 + rs) ** 2) / rs
                        bvn=( a*( np.dot((np.exp(asr[ix])*(sp-ep)),w[ix].T) ) - bvn )/tp
                    if r > 0:
                        bvn=bvn + norm.cdf(- max(h,k))
                    else:
                        if h >= k:
                            bvn=- bvn
                        else:
                            if h < 0:
                                L=norm.cdf(k) - norm.cdf(h)
                            else:
                                L=norm.cdf(- h) - norm.cdf(- k)
                            bvn=L - bvn
                return max(0,min(1,bvn))
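# --- Added usage sketch (not part of the original answer) ---
# Assuming the conversion above is faithful, this evaluates
# P(X <= 1.0, Y <= 0.5) for standard normals with correlation r = 0.4.
if __name__ == "__main__":
    print(bivariate_cdf(1.0, 0.5, 0.4))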
``` |
47,784,693 | I am not able to handle passing optional parameters in Python with `**kwargs`.
```
def ExecuteyourQuery(self, queryStatement, *args, **kwargs):
    if self.cursorOBJ is not None:
        resultOBJ = self.cursorOBJ.execute(queryStatement, *args, **kwargs)
        self.resultsVal = resultOBJ.fetchall()
```
---
The below statement works fine when I am calling the function
```
ExecuteyourQuery('select * from item_table where x = :x and y:y', x, y)
```
But when I give an extra argument which is not mentioned in the query, like
```
ExecuteyourQuery('select * from item_table where x = :x and y:y', x, y, z)
```
It returns 0 as output | 2017/12/13 | [
"https://Stackoverflow.com/questions/47784693",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4949381/"
] | When using `*args` as the last parameter of your function, you can pass any number of **arguments** after the formal arguments, if there are any. `args` is a [tuple](https://www.tutorialspoint.com/python/python_tuples.htm).
```
def my_method(farg, *args):
    print('FARG: ', farg)
    print('ARGS: ', args)

my_method('Formal Argument', 1, 2, 3)
```
When using `**kwargs` as the last parameter of your function, you can pass any number of **named arguments** after the formal arguments, if there are any. `kwargs` is a [dictionary](https://www.tutorialspoint.com/python/python_dictionary.htm).
```
def my_method(farg, **kwargs):
    print('FARG: ', farg)
    print('KWARGS: ', kwargs)

my_method('Formal Argument', kwarg_1 = 1, kwarg_2 = 2, kwarg_3 = 3)
```
When using both `*args` and `**kwargs` as the last parameters of your function, you can pass any number of **arguments** (which will be mapped to `args`) and **named arguments** (which will be mapped to `kwargs`) after the formal arguments, if there are any.
```
def my_method(farg, *args, **kwargs):
    print('FARG: ', farg)
    print('ARGS: ', args)
    print('KWARGS: ', kwargs)

my_method('Formal Argument', 1, 2, 3, kwarg_1 = 1, kwarg_2 = 2, kwarg_3 = 3)
```
Try this out and see the results in your console; hopefully it will help you solve your original problem :) | Another way to do this is by setting a default value for a parameter:
```
def method(one, two, three=3):
    print(one)
    print(two)
    if three != 3:  # don't have to use it like this, but it is a default value
        print(three)
```
This sets a default value that is used if the parameter is not supplied.
If it is supplied, it will override the default value and use that instead:
```
method(11, 22) # prints first two parameters
method(23, 54, 89) # prints all three
```
Not sure if this is what you're looking for, but it is another way. |
28,750,643 | Okay, I'm a new guy at all this; I just randomly picked it up with my neighbor and we are both stuck at this. We have been following this tutorial ([here](http://www.swaroopch.com/notes/python/#intro)) and have made it to 6.6 in the tutorial. I have searched the forums looking for a way to get past my problem, but all of the questions people have are too complex for me as of right now. I am running Windows 8.1 on my laptop, and I have Python 2.7. So here we go, I put in
```
>>> cd c:\\py
```
and I get
```
File "<stdin>", line 1
cd c:\\py
^
SyntaxError: invalid syntax
```
Then I searched around and found a thread saying to use os.chdir, so I gave that a shot and got:
```
>>> os.chdir("c:\\py")
>>> os.getcwd()
'C:\\py'
>>>
```
So my guess is that it worked? So then I go ahead and try to run my program like it says to do, so I put in
```
python hello.py
```
and I get this in return
```
>>> python hello.py
File "<stdin>", line 1
python hello.py
^
SyntaxError: invalid syntax
```
I'm literally stuck; I have no clue what to do now. If someone can help me through this, I will love you long time.
Thank you | 2015/02/26 | [
"https://Stackoverflow.com/questions/28750643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4611612/"
] | First of all, the Python shell differs from the system shell (cmd.exe). You are trying to run `python script.py` in the Python interpreter instead of in `cmd.exe`.
Open `cmd.exe` and type in `python script.py` to solve this. It'll run fine if it doesn't contain any errors. `cd c:\\` doesn't work for the same reason.
First `quit()` or `exit()` the Python interpreter (type one of them right in it), then type the commands you want to execute (such as `cd`) into the terminal.
If you want to run `code.py` in the Python interpreter, you can `os.chdir("...")` to the directory where your script resides and type `import code`. That may not work if your script contains
```
if __name__=="__main__":
```
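For illustration, here is a minimal sketch (with a hypothetical `hello.py`) of why that guard matters:
```
# hello.py -- hypothetical example file
def main():
    print("hello world")

if __name__ == "__main__":
    # This runs when you execute `python hello.py` from cmd.exe,
    # but NOT when you `import hello` inside the Python interpreter.
    main()
```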
All in all, the Python interpreter is for running Python code right in it, and the command prompt (terminal, cmd.exe) is for running other non-GUI programs and much more. | You are in the Python interpreter, which is an interactive shell. You can consider it "scratch paper" to test out or try different things.
To run your script:
* `quit()` the interpreter
* in the command prompt, run `python.exe hello.py` (on Windows; on \*nix just `python`)
74,158,560 | I am going through JavaScript course on freecodecamp and I came across this ['Steamroller' challenge](https://www.freecodecamp.org/learn/javascript-algorithms-and-data-structures/intermediate-algorithm-scripting/steamroller). Coming from python I really like one-liner solutions so I managed to write one for this challenge:
```
function steamrollArray(arr) {
return Array.isArray(arr) ? [].concat(...arr.map(steamrollArray)) : arr;
}
steamrollArray([1, [2], [3, [[4]]]]) // returns [1, 2, 3, 4]
```
The goal is basically to flatten an array of arbitrary depth. What puzzles me though (still new to JavaScript) is why seemingly similar code behaves very differently. Something I wrote at the beginning doesn't work, but code I arrived at through trial and (mostly) error works:
```
[...arr.map(steamrollArray)] // this doesn't work, returns an unchanged array
[].concat(...arr.map(steamrollArray)) // this works, returns a flat array
```
This seems strange to me because 'unfolding' the recursion would suggest it should be the other way around
```
[...arr.map(steamrollArray)]
[1, ...[2], ...[3, ...[...[4]]]] // this should work
[].concat(...arr.map(steamrollArray))
[].concat(...[1, [].concat(...[2]), [].concat(...[3, [].concat(...[].concat(...[4]))])]) // what 'is going on'
```
Can anyone please explain this behaviour? | 2022/10/21 | [
"https://Stackoverflow.com/questions/74158560",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14625103/"
] | Quote from [MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/concat#description):
>
> Then, for each argument, its value will be concatenated into the array — for normal objects or primitives, the argument itself will become an element of the final array; **for arrays ..., each element of the argument will be independently added to the final array.**
>
>
>
You can observe this behavior below; both give us `[1, 2, 3, 4, 5, 6]`, even though the second call uses an array.
```js
console.log([1, 2, 3].concat(4, 5, 6));
console.log([1, 2, 3].concat([4, 5, 6]));
```
If you want to concat the whole array, then you must wrap it in another array:
```js
console.log([1, 2, 3].concat([[4, 5, 6]])); // [1, 2, 3, [4, 5, 6]]
```
So how does this relate to your problem? Well... If we have an array like `[1, 2, [3, 4], 5, [6]]`, and we spread that into `concat`, we would get something like this:
```js
console.log([].concat(1, 2, [3, 4], 5, [6]));
```
... essentially "flattening" it by one level. Your function is recursive, so it flattens the array like this until it is completely flat. | Others have already answered your question, but here is simplified code:
```js
const steamrollArray1 = arr => Array.isArray(arr) ? [].concat(...arr.map(steamrollArray1)) : arr;
const steamrollArray2 = arr => Array.isArray(arr) ? arr.flat(Infinity) : arr;
console.log(
steamrollArray1([1, [2], [3, [[4]]]])
); // returns [1, 2, 3, 4]
console.log(
steamrollArray2([1, [2], [3, [[4]]]])
); // returns [1, 2, 3, 4]
``` |
74,158,560 | I am going through JavaScript course on freecodecamp and I came across this ['Steamroller' challenge](https://www.freecodecamp.org/learn/javascript-algorithms-and-data-structures/intermediate-algorithm-scripting/steamroller). Coming from python I really like one-liner solutions so I managed to write one for this challenge:
```
function steamrollArray(arr) {
return Array.isArray(arr) ? [].concat(...arr.map(steamrollArray)) : arr;
}
steamrollArray([1, [2], [3, [[4]]]]) // returns [1, 2, 3, 4]
```
The goal is basically to flatten an array of arbitrary depth. What puzzles me though (still new to JavaScript) is why seemingly similar code behaves very differently. Something I wrote at the beginning doesn't work, but code I arrived at through trial and (mostly) error works:
```
[...arr.map(steamrollArray)] // this doesn't work, returns an unchanged array
[].concat(...arr.map(steamrollArray)) // this works, returns a flat array
```
This seems strange to me because 'unfolding' the recursion would suggest it should be the other way around
```
[...arr.map(steamrollArray)]
[1, ...[2], ...[3, ...[...[4]]]] // this should work
[].concat(...arr.map(steamrollArray))
[].concat(...[1, [].concat(...[2]), [].concat(...[3, [].concat(...[].concat(...[4]))])]) // what 'is going on'
```
Can anyone please explain this behaviour? | 2022/10/21 | [
"https://Stackoverflow.com/questions/74158560",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14625103/"
] | `Array.prototype.concat`'s arguments, if they are arrays, are essentially flattened into the resulting array. So
```
[2].concat(3, [4, 5])
```
results in `[2, 3, 4, 5]`. Similarly
```
[].concat(3, [4, 5])
```
results in `[3, 4, 5]`.
But spreading an array into another array does not perform such flattening on *each element of the spread array*.
```
const arr = [3, [4, 5]];
const newArr = [...arr];
```
takes each element of `arr` and puts it into a new array - but that's it, giving you `[3, [4, 5]]` - the same values of the original array, but inside a different array.
For the code in your question:
```
[...arr.map(steamrollArray)]
```
fails because it's `steamrollArray` that does the flattening - but you'll also need to flatten each returned array. For example, if you pass in `[[1, 2], [3, [4, 5]]]`:
```
[...[[1, 2], [3, [4, 5]]].map(steamrollArray)]
[...[[1, 2], [3, 4, 5]]]
```
which doesn't flatten the top level. But `[].concat(...arr.map(steamrollArray))` works because `concat` (shallowly) flattens *each of its array arguments* into the result. | Others have already answered your question, but here is simplified code:
```js
const steamrollArray1 = arr => Array.isArray(arr) ? [].concat(...arr.map(steamrollArray1)) : arr;
const steamrollArray2 = arr => Array.isArray(arr) ? arr.flat(Infinity) : arr;
console.log(
steamrollArray1([1, [2], [3, [[4]]]])
); // returns [1, 2, 3, 4]
console.log(
steamrollArray2([1, [2], [3, [[4]]]])
); // returns [1, 2, 3, 4]
``` |
48,946,036 | I'm getting started with docker compose and have been working through the simple demo flask application. The thing is, I'm running this from inside of an organization that intercepts all communication in such a way that SSL errors are thrown right and left. They provide us with three root certificates we need to install, and I've generally got these working on my own machine, but I'm lost when it comes to getting these to work inside docker-compose deployments.
When I run docker-compose up, I get the following:
```
$ sudo docker-compose up
Creating network "project_default" with the default driver
Building web
Step 1/5 : FROM python:3.4-alpine
3.4-alpine: Pulling from library/python
81033e7c1d6a: Pull complete
9b61101706a6: Pull complete
415e2a07c89b: Pull complete
f22df7a3f000: Pull complete
8c16bf19c1f9: Pull complete
Digest: sha256:fe436cb066394d81cf49448a04dec7c765082445a500bc44f1ae5e8a455793bd
Status: Downloaded newer image for python:3.4-alpine
---> 5c72717ec319
Step 2/5 : ADD . /code
---> a5790c0e3e94
Removing intermediate container 052c614e41d0
Step 3/5 : WORKDIR /code
---> a2ea9acb3005
Removing intermediate container 77f2375ca0a6
Step 4/5 : RUN pip install -r requirements.txt
---> Running in 5f4fe856776d
Collecting flask (from -r requirements.txt (line 1))
Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after connection broken by 'NewConnectionError('<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fb0061f1d30>: Failed to establish a new connection: [Errno -3] Try again',)': /simple/flask/
Retrying (Retry(total=3, connect=None, read=None, redirect=None)) after connection broken by 'NewConnectionError('<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fb0061f19b0>: Failed to establish a new connection: [Errno -3] Try again',)': /simple/flask/
Retrying (Retry(total=2, connect=None, read=None, redirect=None)) after connection broken by 'NewConnectionError('<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fb0061f1828>: Failed to establish a new connection: [Errno -3] Try again',)': /simple/flask/
Retrying (Retry(total=1, connect=None, read=None, redirect=None)) after connection broken by 'NewConnectionError('<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fb0061f1588>: Failed to establish a new connection: [Errno -3] Try again',)': /simple/flask/
Retrying (Retry(total=0, connect=None, read=None, redirect=None)) after connection broken by 'NewConnectionError('<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fb0061f1390>: Failed to establish a new connection: [Errno -3] Try again',)': /simple/flask/
Could not find a version that satisfies the requirement flask (from -r requirements.txt (line 1)) (from versions: )
No matching distribution found for flask (from -r requirements.txt (line 1))
```
Pip fails to install anything.
The docker-compose.yml file looks like this:
```
version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: "redis:alpine"
```
And the main Dockerfile looks like this:
```
FROM python:3.4-alpine
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
```
Is there any way to make this work in this particular case? Is there a general solution to this sort of problem that would allow me to pass the SSL certificates to any deployed container and have them be used? | 2018/02/23 | [
"https://Stackoverflow.com/questions/48946036",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3929175/"
] | This isn't really a docker-specific question: you are asking, in effect, "how do I install certificate authorities under Linux"? The answer is going to be the same regardless of whether you are running your ssl client inside or outside of a container.
Your Python image is based on alpine, and alpine uses the "ca-certificates" package to manage CA certificates. To install your local CA certificates, you would need to (a) copy them into the `/usr/share/ca-certificates` directory and (b) run `update-ca-certificates`.
For example, adding something like this to your Dockerfile (before your `pip install`):
```
COPY company-ca.crt /usr/share/ca-certificates
RUN update-ca-certificates
``` | In my case, the host machine's MTU is 1450 and Docker's MTU is 1500.
This causes Docker to set the MSS to 1460, and then the TLS "server hello" packet gets bigger than 1450 bytes, so the host machine discards it.
To see if this is your case too, run ifconfig on both your Docker container and your host machine. If the host's MTU is less than 1500, it's easy to run into this packet-discarding situation, especially with HTTPS, since the "server hello" needs to send the certificate, which is a big packet. |
48,946,036 | I'm getting started with docker compose and have been working through the simple demo flask application. The thing is, I'm running this from inside of an organization that intercepts all communication in such a way that SSL errors are thrown right and left. They provide us with three root certificates we need to install, and I've generally got these working on my own machine, but I'm lost when it comes to getting these to work inside docker-compose deployments.
When I run docker-compose up, I get the following:
```
$ sudo docker-compose up
Creating network "project_default" with the default driver
Building web
Step 1/5 : FROM python:3.4-alpine
3.4-alpine: Pulling from library/python
81033e7c1d6a: Pull complete
9b61101706a6: Pull complete
415e2a07c89b: Pull complete
f22df7a3f000: Pull complete
8c16bf19c1f9: Pull complete
Digest: sha256:fe436cb066394d81cf49448a04dec7c765082445a500bc44f1ae5e8a455793bd
Status: Downloaded newer image for python:3.4-alpine
---> 5c72717ec319
Step 2/5 : ADD . /code
---> a5790c0e3e94
Removing intermediate container 052c614e41d0
Step 3/5 : WORKDIR /code
---> a2ea9acb3005
Removing intermediate container 77f2375ca0a6
Step 4/5 : RUN pip install -r requirements.txt
---> Running in 5f4fe856776d
Collecting flask (from -r requirements.txt (line 1))
Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after connection broken by 'NewConnectionError('<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fb0061f1d30>: Failed to establish a new connection: [Errno -3] Try again',)': /simple/flask/
Retrying (Retry(total=3, connect=None, read=None, redirect=None)) after connection broken by 'NewConnectionError('<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fb0061f19b0>: Failed to establish a new connection: [Errno -3] Try again',)': /simple/flask/
Retrying (Retry(total=2, connect=None, read=None, redirect=None)) after connection broken by 'NewConnectionError('<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fb0061f1828>: Failed to establish a new connection: [Errno -3] Try again',)': /simple/flask/
Retrying (Retry(total=1, connect=None, read=None, redirect=None)) after connection broken by 'NewConnectionError('<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fb0061f1588>: Failed to establish a new connection: [Errno -3] Try again',)': /simple/flask/
Retrying (Retry(total=0, connect=None, read=None, redirect=None)) after connection broken by 'NewConnectionError('<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fb0061f1390>: Failed to establish a new connection: [Errno -3] Try again',)': /simple/flask/
Could not find a version that satisfies the requirement flask (from -r requirements.txt (line 1)) (from versions: )
No matching distribution found for flask (from -r requirements.txt (line 1))
```
Pip fails to install anything.
The docker-compose.yml file looks like this:
```
version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: "redis:alpine"
```
And the main Dockerfile looks like this:
```
FROM python:3.4-alpine
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
```
Is there any way to make this work in this particular case? Is there a general solution to this sort of problem that would allow me to pass the SSL certificates to any deployed container and have them be used? | 2018/02/23 | [
"https://Stackoverflow.com/questions/48946036",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3929175/"
] | This isn't really a docker-specific question: you are asking, in effect, "how do I install certificate authorities under Linux"? The answer is going to be the same regardless of whether you are running your ssl client inside or outside of a container.
Your Python image is based on alpine, and alpine uses the "ca-certificates" package to manage CA certificates. To install your local CA certificates, you would need to (a) copy them into the `/usr/share/ca-certificates` directory and (b) run `update-ca-certificates`.
For example, adding something like this to your Dockerfile (before your `pip install`):
```
COPY company-ca.crt /usr/share/ca-certificates
RUN update-ca-certificates
``` | I was trying to read data from an API in my Go code and I was facing a similar SSL error:
```
x509: certificate signed by unknown authority
```
My container was based on `debian:stretch` which is really really small ~100MB.
This happens when `ca-certificates` are not installed.
I installed `ca-certificates` (which also installs `openssl`) as shown below:
```
FROM debian:stretch
RUN apt-get update && apt-get install -y ca-certificates --no-install-recommends && rm -rf /var/lib/apt/lists/*
# ...
```
This will also work for any other base images which might not have `ca-certificates` installed. |
48,946,036 | I'm getting started with docker compose and have been working through the simple demo flask application. The thing is, I'm running this from inside of an organization that intercepts all communication in such a way that SSL errors are thrown right and left. They provide us with three root certificates we need to install, and I've generally got these working on my own machine, but I'm lost when it comes to getting these to work inside docker-compose deployments.
When I run docker-compose up, I get the following:
```
$ sudo docker-compose up
Creating network "project_default" with the default driver
Building web
Step 1/5 : FROM python:3.4-alpine
3.4-alpine: Pulling from library/python
81033e7c1d6a: Pull complete
9b61101706a6: Pull complete
415e2a07c89b: Pull complete
f22df7a3f000: Pull complete
8c16bf19c1f9: Pull complete
Digest: sha256:fe436cb066394d81cf49448a04dec7c765082445a500bc44f1ae5e8a455793bd
Status: Downloaded newer image for python:3.4-alpine
---> 5c72717ec319
Step 2/5 : ADD . /code
---> a5790c0e3e94
Removing intermediate container 052c614e41d0
Step 3/5 : WORKDIR /code
---> a2ea9acb3005
Removing intermediate container 77f2375ca0a6
Step 4/5 : RUN pip install -r requirements.txt
---> Running in 5f4fe856776d
Collecting flask (from -r requirements.txt (line 1))
Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after connection broken by 'NewConnectionError('<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fb0061f1d30>: Failed to establish a new connection: [Errno -3] Try again',)': /simple/flask/
Retrying (Retry(total=3, connect=None, read=None, redirect=None)) after connection broken by 'NewConnectionError('<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fb0061f19b0>: Failed to establish a new connection: [Errno -3] Try again',)': /simple/flask/
Retrying (Retry(total=2, connect=None, read=None, redirect=None)) after connection broken by 'NewConnectionError('<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fb0061f1828>: Failed to establish a new connection: [Errno -3] Try again',)': /simple/flask/
Retrying (Retry(total=1, connect=None, read=None, redirect=None)) after connection broken by 'NewConnectionError('<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fb0061f1588>: Failed to establish a new connection: [Errno -3] Try again',)': /simple/flask/
Retrying (Retry(total=0, connect=None, read=None, redirect=None)) after connection broken by 'NewConnectionError('<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fb0061f1390>: Failed to establish a new connection: [Errno -3] Try again',)': /simple/flask/
Could not find a version that satisfies the requirement flask (from -r requirements.txt (line 1)) (from versions: )
No matching distribution found for flask (from -r requirements.txt (line 1))
```
Pip fails to install anything.
The docker-compose.yml file looks like this:
```
version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: "redis:alpine"
```
And the main Dockerfile looks like this:
```
FROM python:3.4-alpine
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
```
Is there any way to make this work in this particular case? Is there a general solution to this sort of problem that would allow me to pass the SSL certificates to any deployed container and have them be used? | 2018/02/23 | [
"https://Stackoverflow.com/questions/48946036",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3929175/"
] | In my case, I had to add these lines to my Dockerfile:
```
COPY company.crt /usr/local/share/ca-certificates/company.crt
RUN update-ca-certificates
...
RUN pip install --cert /etc/ssl/certs/company.pem -r requirements.txt
```
You need your company's certificate in .crt format. When Docker executes update-ca-certificates, Linux will create a .pem file with the same name under /etc/ssl/certs/, which pip can then use (via the --cert option above) for its SSL connections. | In my case, the host machine's MTU is 1450 and Docker's MTU is 1500.
This causes Docker to set the MSS to 1460, and then the TLS "server hello" packet gets bigger than 1450 bytes, so the host machine discards it.
To see if this is your case too, run ifconfig on both your Docker container and your host machine. If the host's MTU is less than 1500, it's easy to run into this packet-discarding situation, especially with HTTPS, since the "server hello" needs to send the certificate, which is a big packet. |
48,946,036 | I'm getting started with docker compose and have been working through the simple demo flask application. The thing is, I'm running this from inside of an organization that intercepts all communication in such a way that SSL errors are thrown right and left. They provide us with three root certificates we need to install, and I've generally got these working on my own machine, but I'm lost when it comes to getting these to work inside docker-compose deployments.
When I run docker-compose up, I get the following:
```
$ sudo docker-compose up
Creating network "project_default" with the default driver
Building web
Step 1/5 : FROM python:3.4-alpine
3.4-alpine: Pulling from library/python
81033e7c1d6a: Pull complete
9b61101706a6: Pull complete
415e2a07c89b: Pull complete
f22df7a3f000: Pull complete
8c16bf19c1f9: Pull complete
Digest: sha256:fe436cb066394d81cf49448a04dec7c765082445a500bc44f1ae5e8a455793bd
Status: Downloaded newer image for python:3.4-alpine
---> 5c72717ec319
Step 2/5 : ADD . /code
---> a5790c0e3e94
Removing intermediate container 052c614e41d0
Step 3/5 : WORKDIR /code
---> a2ea9acb3005
Removing intermediate container 77f2375ca0a6
Step 4/5 : RUN pip install -r requirements.txt
---> Running in 5f4fe856776d
Collecting flask (from -r requirements.txt (line 1))
Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after connection broken by 'NewConnectionError('<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fb0061f1d30>: Failed to establish a new connection: [Errno -3] Try again',)': /simple/flask/
Retrying (Retry(total=3, connect=None, read=None, redirect=None)) after connection broken by 'NewConnectionError('<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fb0061f19b0>: Failed to establish a new connection: [Errno -3] Try again',)': /simple/flask/
Retrying (Retry(total=2, connect=None, read=None, redirect=None)) after connection broken by 'NewConnectionError('<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fb0061f1828>: Failed to establish a new connection: [Errno -3] Try again',)': /simple/flask/
Retrying (Retry(total=1, connect=None, read=None, redirect=None)) after connection broken by 'NewConnectionError('<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fb0061f1588>: Failed to establish a new connection: [Errno -3] Try again',)': /simple/flask/
Retrying (Retry(total=0, connect=None, read=None, redirect=None)) after connection broken by 'NewConnectionError('<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fb0061f1390>: Failed to establish a new connection: [Errno -3] Try again',)': /simple/flask/
Could not find a version that satisfies the requirement flask (from -r requirements.txt (line 1)) (from versions: )
No matching distribution found for flask (from -r requirements.txt (line 1))
```
Pip fails to install anything.
The docker-compose.yml file looks like this:
```
version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: "redis:alpine"
```
And the main Dockerfile looks like this:
```
FROM python:3.4-alpine
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
```
Is there any way to make this work in this particular case? Is there a general solution to this sort of problem that would allow me to pass the SSL certificates to any deployed container and have them be used? | 2018/02/23 | [
"https://Stackoverflow.com/questions/48946036",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3929175/"
] | In my case, I had to add these lines to my Dockerfile:
```
COPY company.crt /usr/local/share/ca-certificates/company.crt
RUN update-ca-certificates
...
RUN pip install --cert /etc/ssl/certs/company.pem -r requirements.txt
```
You need your company's certificate in .crt format. When Docker executes update-ca-certificates, Linux will create a .pem file with the same name under /etc/ssl/certs/, which pip can then use (via the --cert option above) for its SSL connections. | I was trying to read data from an API in my Go code and I was facing a similar SSL error:
```
x509: certificate signed by unknown authority
```
My container was based on `debian:stretch` which is really really small ~100MB.
This happens when `ca-certificates` are not installed.
I installed `ca-certificates` (which also installs `openssl`) as shown below:
```
FROM debian:stretch
RUN apt-get update && apt-get install -y ca-certificates --no-install-recommends && rm -rf /var/lib/apt/lists/*
# ...
```
This will also work for any other base images which might not have `ca-certificates` installed. |
48,946,036 | I'm getting started with docker compose and have been working through the simple demo flask application. The thing is, I'm running this from inside of an organization that intercepts all communication in such a way that SSL errors are thrown right and left. They provide us with three root certificates we need to install, and I've generally got these working on my own machine, but I'm lost when it comes to getting these to work inside docker-compose deployments.
When I run docker-compose up, I get the following:
```
$ sudo docker-compose up
Creating network "project_default" with the default driver
Building web
Step 1/5 : FROM python:3.4-alpine
3.4-alpine: Pulling from library/python
81033e7c1d6a: Pull complete
9b61101706a6: Pull complete
415e2a07c89b: Pull complete
f22df7a3f000: Pull complete
8c16bf19c1f9: Pull complete
Digest: sha256:fe436cb066394d81cf49448a04dec7c765082445a500bc44f1ae5e8a455793bd
Status: Downloaded newer image for python:3.4-alpine
---> 5c72717ec319
Step 2/5 : ADD . /code
---> a5790c0e3e94
Removing intermediate container 052c614e41d0
Step 3/5 : WORKDIR /code
---> a2ea9acb3005
Removing intermediate container 77f2375ca0a6
Step 4/5 : RUN pip install -r requirements.txt
---> Running in 5f4fe856776d
Collecting flask (from -r requirements.txt (line 1))
Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after connection broken by 'NewConnectionError('<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fb0061f1d30>: Failed to establish a new connection: [Errno -3] Try again',)': /simple/flask/
Retrying (Retry(total=3, connect=None, read=None, redirect=None)) after connection broken by 'NewConnectionError('<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fb0061f19b0>: Failed to establish a new connection: [Errno -3] Try again',)': /simple/flask/
Retrying (Retry(total=2, connect=None, read=None, redirect=None)) after connection broken by 'NewConnectionError('<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fb0061f1828>: Failed to establish a new connection: [Errno -3] Try again',)': /simple/flask/
Retrying (Retry(total=1, connect=None, read=None, redirect=None)) after connection broken by 'NewConnectionError('<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fb0061f1588>: Failed to establish a new connection: [Errno -3] Try again',)': /simple/flask/
Retrying (Retry(total=0, connect=None, read=None, redirect=None)) after connection broken by 'NewConnectionError('<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fb0061f1390>: Failed to establish a new connection: [Errno -3] Try again',)': /simple/flask/
Could not find a version that satisfies the requirement flask (from -r requirements.txt (line 1)) (from versions: )
No matching distribution found for flask (from -r requirements.txt (line 1))
```
Pip fails to install anything.
The docker-compose.yml file looks like this:
```
version: '3'
services:
web:
build: .
ports:
- "5000:5000"
redis:
image: "redis:alpine"
```
And the main Dockerfile looks like this:
```
FROM python:3.4-alpine
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
```
Is there any way to be able to make this work in this particular case? Is there a general solution to this sort of problem that would allow me to pass to any container deployed the SSL certificates and have them be used? | 2018/02/23 | [
"https://Stackoverflow.com/questions/48946036",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3929175/"
] | In my case, the host machine's MTU is 1450, and Docker's MTU is 1500.
This causes Docker to set the MSS to 1460, so the TLS "server hello" packet ends up bigger than 1450 bytes and the host machine discards it.
To see if this is your case too, run ifconfig on both your Docker container and your host machine. If the host's MTU is less than 1500, it's easy to run into this packet-discarding situation, especially with HTTPS, because the "server hello" has to carry the certificate, which makes it a big packet (see the MTU-checking sketch after the second answer below). | I was trying to read data from an API in my Go code and I was facing a similar SSL error:
```
x509: certificate signed by unknown authority
```
My container was based on `debian:stretch` which is really really small ~100MB.
This happens when `ca-certificates` are not installed.
I installed `ca-certificates` (which also installs `openssl`) as shown below:
```
FROM debian:stretch
RUN apt-get update && apt-get install -y ca-certificates --no-install-recommends && rm -rf /var/lib/apt/lists/*
# ...
```
This will also work for any other base images which might not have `ca-certificates` installed. |
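Building on the MTU answer above: if the host's MTU really is below 1500, a common remedy is to lower Docker's MTU to match it (the Docker daemon has an `mtu` setting, and Compose bridge networks accept a `com.docker.network.driver.mtu` driver option). Below is a minimal sketch, assuming a Linux host, for listing interface MTUs without `ifconfig`; run it on the host and inside the container and compare the numbers (interface names will differ per machine).
```python
# Minimal sketch (Linux only): list interface MTUs by reading sysfs.
import os

def interface_mtus(sys_net="/sys/class/net"):
    mtus = {}
    for iface in sorted(os.listdir(sys_net)):
        try:
            with open(os.path.join(sys_net, iface, "mtu")) as f:
                mtus[iface] = int(f.read().strip())
        except OSError:
            continue  # interface vanished or is unreadable; skip it
    return mtus

if __name__ == "__main__":
    for iface, mtu in interface_mtus().items():
        print("{}: {}".format(iface, mtu))
```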
32,866,578 | I am trying to bastardise Django and Django REST Framework into a single module so see if it can work. So far, I have the following code:
```
###############################################################################
# SETTINGS
###############################################################################
import os
from django.apps import apps
from django.conf import settings
BASE_DIR = os.path.dirname(os.path.dirname(os.path.dirname(__file__)))
if not settings.configured:
settings.configure(
DEBUG=True,
SECRET_KEY='thisisthesecretkey',
ROOT_URLCONF=__name__,
STATIC_URL='/static/',
STATICFILES_DIRS=(
os.path.join(BASE_DIR, "static"),
),
MIGRATION_MODULES = {'__main__': 'migrations'},
MIDDLEWARE_CLASSES=(
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
),
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': 'tinydb',
}
},
INSTALLED_APPS = (
'__main__',
'rest_framework',
'django.contrib.staticfiles',
),
)
apps.populate(settings.INSTALLED_APPS)
###############################################################################
# MODELS
###############################################################################
from django.db import models
class Book(models.Model):
ISBN = models.AutoField(primary_key=True)
author = models.CharField(max_length=100)
title = models.CharField(max_length=200)
description = models.CharField(max_length=500)
###############################################################################
# SERIALIZERS
###############################################################################
from rest_framework import serializers
class BookSerializer(serializers.ModelSerializer):
class Meta:
model = Book
###############################################################################
# VIEWS
###############################################################################
class BooksView():
queryset = Book.objects.all()
serializer_class = BookSerializer
###############################################################################
# URLCONF
###############################################################################
from django.conf.urls import url, include
from rest_framework.routers import DefaultRouter
router = DefaultRouter()
router.register(r'books', BooksView)
urlpatterns = (
url(r'^$', include(router.urls)),
)
###############################################################################
# MANAGE
###############################################################################
import sys
if __name__ == "__main__":
from django.core.management import execute_from_command_line
execute_from_command_line(sys.argv)
```
Right now, the server runs and I see the API browser. However, when I try to create an object, I get the following trace:
```
>>> from __main__ import Book
>>> Book.objects.create(author='a1', title='t1', description='d1')
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/home/lwm/.virtualenvs/tiny-api/lib/python3.4/site-packages/django/db/models/manager.py", line 127, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/home/lwm/.virtualenvs/tiny-api/lib/python3.4/site-packages/django/db/models/query.py", line 348, in create
obj.save(force_insert=True, using=self.db)
File "/home/lwm/.virtualenvs/tiny-api/lib/python3.4/site-packages/django/db/models/base.py", line 734, in save
force_update=force_update, update_fields=update_fields)
File "/home/lwm/.virtualenvs/tiny-api/lib/python3.4/site-packages/django/db/models/base.py", line 762, in save_base
updated = self._save_table(raw, cls, force_insert, force_update, using, update_fields)
File "/home/lwm/.virtualenvs/tiny-api/lib/python3.4/site-packages/django/db/models/base.py", line 846, in _save_table
result = self._do_insert(cls._base_manager, using, fields, update_pk, raw)
File "/home/lwm/.virtualenvs/tiny-api/lib/python3.4/site-packages/django/db/models/base.py", line 885, in _do_insert
using=using, raw=raw)
File "/home/lwm/.virtualenvs/tiny-api/lib/python3.4/site-packages/django/db/models/manager.py", line 127, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/home/lwm/.virtualenvs/tiny-api/lib/python3.4/site-packages/django/db/models/query.py", line 920, in _insert
return query.get_compiler(using=using).execute_sql(return_id)
File "/home/lwm/.virtualenvs/tiny-api/lib/python3.4/site-packages/django/db/models/sql/compiler.py", line 974, in execute_sql
cursor.execute(sql, params)
File "/home/lwm/.virtualenvs/tiny-api/lib/python3.4/site-packages/django/db/backends/utils.py", line 79, in execute
return super(CursorDebugWrapper, self).execute(sql, params)
File "/home/lwm/.virtualenvs/tiny-api/lib/python3.4/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "/home/lwm/.virtualenvs/tiny-api/lib/python3.4/site-packages/django/db/utils.py", line 97, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/home/lwm/.virtualenvs/tiny-api/lib/python3.4/site-packages/django/utils/six.py", line 658, in reraise
raise value.with_traceback(tb)
File "/home/lwm/.virtualenvs/tiny-api/lib/python3.4/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "/home/lwm/.virtualenvs/tiny-api/lib/python3.4/site-packages/django/db/backends/sqlite3/base.py", line 318, in execute
return Database.Cursor.execute(self, query, params)
django.db.utils.OperationalError: no such table: __main___book
```
`makemigrations` returns:
```
lwm$ python api.py makemigrations
No changes detected
```
I can run a migration:
```
lwm$ python api.py migrate
Operations to perform:
Synchronize unmigrated apps: __main__, staticfiles, rest_framework
Apply all migrations: (none)
Synchronizing apps without migrations:
Creating tables...
Running deferred SQL...
Installing custom SQL...
Running migrations:
No migrations to apply.
```
So, I think, since I don't have my `Book` model in a separate app, there is no database table being created for it. Other than manually creating the tables, for example using the `db_table` Meta field, I still wanted to get all the goodness of the ORM doing things for me.
Any ideas? | 2015/09/30 | [
"https://Stackoverflow.com/questions/32866578",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | If you are using `docker run` to start your container, you have the `--add-host=""` argument which takes a hostname and an IP that get written to the container's `/etc/hosts`.
Your startup command would look like this:
```
docker run -d --add-host="domain-a.dev:192.168.0.10" [...]
```
Replace `192.168.0.10` with your computer's local IP address.
Don't use `127.0.0.1`, as Docker will resolve that to the container, not your computer. | I basically worked around this now by using <http://xip.io/>
By using urls like `sub.127.0.0.1.xip.io` I can connect to my local machine. My app only has to know that `127.0.0.1.xip.io` is treated as the "top level domain", and `sub` is the domain name without tld. (In a Ruby on Rails app this can be done by adjusting `config.action_dispatch.tld_length = 6` for example). |
69,920,403 | There is an HTML page on which I would like to find two input elements and press one button to log in, with the help of Selenium along with Python 3.
The problem is that I can't seem to find a way of doing this correctly.
The two text fields and the button are in a form without an id or any other identifying tag, and I'm new to this.
Below there is the HTML code with the two text fields (Email & Password) I need to find with the
selenium WebDriver along with the submit button.
HTML CODE:
```html
<form action="https://www.example.com/login" autocomplete="on" method="post" role="form">
<input type="hidden" name="_token" value="asdc">
<div class="form-group">
<input type="email" name="email" class="form-control " placeholder="Email"
value="" autofocus>
</div>
<div class="form-group">
<input type="password" name="password" class="form-control " placeholder="Password">
</div>
<div class="form-group">
<button type="submit" class="btn btn-primary btn-block" style1="height: 41px; font-size: 18px">Sign In</button></div>
</form>
```
Full HTML (URLS are changed) :
```html
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<title>Login</title>
<meta content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no" name="viewport">
<link rel="shortcut icon" type="image/png" href="https://example.com/favicon.png?"/>
<link href="https://fonts.googleapis.com/css?family=Roboto:300,300i,400,400i,500,500i,700,700i,900"
rel="stylesheet">
<!-- Icon fonts -->
<link href="/fonts/fontawesome-pro/css/all.min.css" rel="stylesheet" type="text/css"/>
<!-- theme -->
<link id="theme-link" href="https://example.com/theme.css?&ver=a_230wednesday_10_nov_2021_135539_utc" rel="stylesheet" type="text/css"/>
<link href="https://example.com/login.css?&ver=a_230wednesday_10_nov_2021_135539_utc" rel="stylesheet" type="text/css"/>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag( 'js', new Date() );
gtag( 'config', "UA-101862321-1" );
</script>
</head>
<body>
<div class="limiter d-none">
<div class="container-login100">
<div class="wrap-login100 align-content-center align-items-center">
<div class="login100-pic js-tilt" data-tilt>
<img src="https://example.com/login-illustration.png?"/>
</div>
<div class="login100-form">
<div class="text-center mb-3">
<img src="https://example.com/logo-login.png?"/>
</div>
<div class="card-body p-4">
<form action="https://www.example.com/login" autocomplete="on" method="post" role="form">
<input type="hidden" name="_token" value="Somevalue">
<div class="form-group">
<input type="email" name="email"
class="form-control " placeholder="Email"
value="" autofocus>
</div>
<div class="form-group">
<input type="password" name="password" class="form-control " placeholder="Password">
</div>
<div class="form-group">
<label class="custom-control custom-checkbox">
<input id="remember" name="remember" type="checkbox" class="custom-control-input" value="1">
<span class="custom-control-label">Remember me</span>
</label>
</div>
<div class="form-group">
<button type="submit" class="btn btn-primary btn-block" style1="height: 41px; font-size: 18px">Sign In</button>
</div>
</form>
<div class="d-flex align-items-center form-group">
<hr class="flex-grow-1"/>
<div class="text-muted px-3">
OR CONNECT WITH
</div>
<hr class="flex-grow-1"/>
</div>
<a href="https://example.com/login/office" class="btn btn-default text-center btn-block">
<img src="https://example/office-365.png?&ver=a_230wednesday_10_nov_2021_135539_utc" class="img-fluid"/>
</a>
<div class="text-right mt-3">
<a href="https://www.example.com/password/reset">
Forgot Your Password?
</a>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="login-cover-bg">
<div class="login-box d-flex justify-content-between align-items-center">
<div class="login100-pic">
<img src="https://example.com/logo-login.png?"/>
</div>
<div class="login100-form">
<div class="text-center mb-3">
<img src="https://example.com/logo-login.png?" class="img-fluid"/>
</div>
<div class="card-body p-4">
<form action="https://www.example.com/login" autocomplete="on" method="post" role="form">
<input type="hidden" name="_token" value="somevalue">
<div class="form-group">
<input type="email" name="email"
class="form-control " placeholder="Email"
value="" autofocus>
</div>
<div class="form-group">
<input type="password" name="password" class="form-control " placeholder="Password">
</div>
<div class="form-group">
<label class="custom-control custom-checkbox">
<input id="remember" name="remember" type="checkbox" class="custom-control-input" value="1">
<span class="custom-control-label">Remember me</span>
</label>
</div>
<div class="form-group">
<button type="submit" class="btn btn-primary btn-block" style1="height: 41px; font-size: 18px">Sign In</button>
</div>
</form>
<div class="d-flex align-items-center form-group">
<hr class="flex-grow-1"/>
<div class="text-muted px-3">
OR CONNECT WITH
</div>
<hr class="flex-grow-1"/>
</div>
<a href="https://www.example.com/login/office" class="btn btn-default text-center btn-block">
<img src="office-365.png?&ver=a_230wednesday_10_nov_2021_135539_utc" class="img-fluid"/>
</a>
<div class="text-right mt-3">
<a href="https://www.example.com">
Forgot Your Password?
</a>
</div>
</div>
</div>
</div>
</div>
<!-- Core scripts From Bootstrap 4.4-->
<script src="https://code.jquery.com/jquery-3.4.1.min.js" ></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.16.1/umd/popper.min.js" ></script>
<script src="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js" "></script>
</body>
</html>
```
My code so far:
```python
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time
PATH = "C:\Program Files (x86)\chromedriver.exe"
driver = webdriver.Chrome(PATH)
driver.get("http://example.com/")
try:
element = WebDriverWait(driver, 10).until(
EC.presence_of_element_located((By.LINK_TEXT, "Login to start working"))
)
element.click()
element = WebDriverWait(driver, 10).until(
EC.presence_of_element_located((By.XPATH, "//input[@placeholder='Password']"))
)
element.click()
element.send_keys('random@random.com')
element.send_keys(Keys.ENTER)
element = WebDriverWait(driver, 10).until(
EC.presence_of_element_located((By.ID, "Submit"))
)
element.click()
except Exception as exc:
driver.quit()
print(exc)
```
Thank you in advance! | 2021/11/10 | [
"https://Stackoverflow.com/questions/69920403",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1089615/"
] | I just changed your code from the `presence_of_element_located` expected condition to `visibility_of_element_located`, corrected the locators and made some more things clearer. I hope now this should work.
```py
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time
PATH = "C:\Program Files (x86)\chromedriver.exe"
driver = webdriver.Chrome(PATH)
driver.get("http://example.com/")
email_input = WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.XPATH, "//input[@name='email']")))
email_input.click()
email_input.send_keys("your_email_value")
password_input = WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.XPATH, "//input[@name='password']")))
password_input.click()
password_input.send_keys("your_password_value")
WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.XPATH, "//button[@type='submit']"))).click()
``` | To click on the link *`Login to start working`*, next key in the credentials and finally to click on the *Submit* button you can use the following [Locator Strategies](https://stackoverflow.com/questions/48369043/official-locator-strategies-for-the-webdriver/48376890#48376890):
```
driver.get("http://example.com/")
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.LINK_TEXT, "Login to start working"))).click()
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "div.form-group > input.form-control[name='email']"))).send_keys("random@random.com")
driver.find_element(By.CSS_SELECTOR, "div.form-group > input.form-control[name='password']").send_keys("random_password")
driver.find_element(By.XPATH, "//div[@class='form-group']/button[@class='btn btn-primary btn-block' and text()='Sign In']").click()
``` |
69,920,403 | There is an HTML page on which I would like to find two input elements and press one button to log in, with the help of Selenium along with Python 3.
The problem is that I can't seem to find a way of doing this correctly.
The two text fields and the button are in a form without an id or any other identifying tag, and I'm new to this.
Below there is the HTML code with the two text fields (Email & Password) I need to find with the
selenium WebDriver along with the submit button.
HTML CODE:
```html
<form action="https://www.example.com/login" autocomplete="on" method="post" role="form">
<input type="hidden" name="_token" value="asdc">
<div class="form-group">
<input type="email" name="email" class="form-control " placeholder="Email"
value="" autofocus>
</div>
<div class="form-group">
<input type="password" name="password" class="form-control " placeholder="Password">
</div>
<div class="form-group">
<button type="submit" class="btn btn-primary btn-block" style1="height: 41px; font-size: 18px">Sign In</button></div>
</form>
```
Full HTML (URLS are changed) :
```html
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<title>Login</title>
<meta content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no" name="viewport">
<link rel="shortcut icon" type="image/png" href="https://example.com/favicon.png?"/>
<link href="https://fonts.googleapis.com/css?family=Roboto:300,300i,400,400i,500,500i,700,700i,900"
rel="stylesheet">
<!-- Icon fonts -->
<link href="/fonts/fontawesome-pro/css/all.min.css" rel="stylesheet" type="text/css"/>
<!-- theme -->
<link id="theme-link" href="https://example.com/theme.css?&ver=a_230wednesday_10_nov_2021_135539_utc" rel="stylesheet" type="text/css"/>
<link href="https://example.com/login.css?&ver=a_230wednesday_10_nov_2021_135539_utc" rel="stylesheet" type="text/css"/>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag( 'js', new Date() );
gtag( 'config', "UA-101862321-1" );
</script>
</head>
<body>
<div class="limiter d-none">
<div class="container-login100">
<div class="wrap-login100 align-content-center align-items-center">
<div class="login100-pic js-tilt" data-tilt>
<img src="https://example.com/login-illustration.png?"/>
</div>
<div class="login100-form">
<div class="text-center mb-3">
<img src="https://example.com/logo-login.png?"/>
</div>
<div class="card-body p-4">
<form action="https://www.example.com/login" autocomplete="on" method="post" role="form">
<input type="hidden" name="_token" value="Somevalue">
<div class="form-group">
<input type="email" name="email"
class="form-control " placeholder="Email"
value="" autofocus>
</div>
<div class="form-group">
<input type="password" name="password" class="form-control " placeholder="Password">
</div>
<div class="form-group">
<label class="custom-control custom-checkbox">
<input id="remember" name="remember" type="checkbox" class="custom-control-input" value="1">
<span class="custom-control-label">Remember me</span>
</label>
</div>
<div class="form-group">
<button type="submit" class="btn btn-primary btn-block" style1="height: 41px; font-size: 18px">Sign In</button>
</div>
</form>
<div class="d-flex align-items-center form-group">
<hr class="flex-grow-1"/>
<div class="text-muted px-3">
OR CONNECT WITH
</div>
<hr class="flex-grow-1"/>
</div>
<a href="https://example.com/login/office" class="btn btn-default text-center btn-block">
<img src="https://example/office-365.png?&ver=a_230wednesday_10_nov_2021_135539_utc" class="img-fluid"/>
</a>
<div class="text-right mt-3">
<a href="https://www.example.com/password/reset">
Forgot Your Password?
</a>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="login-cover-bg">
<div class="login-box d-flex justify-content-between align-items-center">
<div class="login100-pic">
<img src="https://example.com/logo-login.png?"/>
</div>
<div class="login100-form">
<div class="text-center mb-3">
<img src="https://example.com/logo-login.png?" class="img-fluid"/>
</div>
<div class="card-body p-4">
<form action="https://www.example.com/login" autocomplete="on" method="post" role="form">
<input type="hidden" name="_token" value="somevalue">
<div class="form-group">
<input type="email" name="email"
class="form-control " placeholder="Email"
value="" autofocus>
</div>
<div class="form-group">
<input type="password" name="password" class="form-control " placeholder="Password">
</div>
<div class="form-group">
<label class="custom-control custom-checkbox">
<input id="remember" name="remember" type="checkbox" class="custom-control-input" value="1">
<span class="custom-control-label">Remember me</span>
</label>
</div>
<div class="form-group">
<button type="submit" class="btn btn-primary btn-block" style1="height: 41px; font-size: 18px">Sign In</button>
</div>
</form>
<div class="d-flex align-items-center form-group">
<hr class="flex-grow-1"/>
<div class="text-muted px-3">
OR CONNECT WITH
</div>
<hr class="flex-grow-1"/>
</div>
<a href="https://www.example.com/login/office" class="btn btn-default text-center btn-block">
<img src="office-365.png?&ver=a_230wednesday_10_nov_2021_135539_utc" class="img-fluid"/>
</a>
<div class="text-right mt-3">
<a href="https://www.example.com">
Forgot Your Password?
</a>
</div>
</div>
</div>
</div>
</div>
<!-- Core scripts From Bootstrap 4.4-->
<script src="https://code.jquery.com/jquery-3.4.1.min.js" ></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.16.1/umd/popper.min.js" ></script>
<script src="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js" "></script>
</body>
</html>
```
My code so far:
```python
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time
PATH = "C:\Program Files (x86)\chromedriver.exe"
driver = webdriver.Chrome(PATH)
driver.get("http://example.com/")
try:
element = WebDriverWait(driver, 10).until(
EC.presence_of_element_located((By.LINK_TEXT, "Login to start working"))
)
element.click()
element = WebDriverWait(driver, 10).until(
EC.presence_of_element_located((By.XPATH, "//input[@placeholder='Password']"))
)
element.click()
element.send_keys('random@random.com')
element.send_keys(Keys.ENTER)
element = WebDriverWait(driver, 10).until(
EC.presence_of_element_located((By.ID, "Submit"))
)
element.click()
except Exception as exc:
driver.quit()
print(exc)
```
Thank you in advance! | 2021/11/10 | [
"https://Stackoverflow.com/questions/69920403",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1089615/"
] | I found a solution by doing the above, and it worked:
```
self.driver.find_element(By.XPATH, "//div[@class='login-box d-flex justify-content-between "
"align-items-center']//input[@placeholder='Email']").send_keys(
'randommail@mail.com')
```
The above can be done as well for the password field.
It might help someone in the future :)
Thank you again for your time ! | To click on the link *`Login to start working`*, next key in the credentials and finally to click on the *Submit* button you can use the following [Locator Strategies](https://stackoverflow.com/questions/48369043/official-locator-strategies-for-the-webdriver/48376890#48376890):
```
driver.get("http://example.com/")
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.LINK_TEXT, "Login to start working"))).click()
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "div.form-group > input.form-control[name='email']"))).send_keys("random@random.com")
driver.find_element(By.CSS_SELECTOR, "div.form-group > input.form-control[name='password']").send_keys("random_password")
driver.find_element(By.XPATH, "//div[@class='form-group']/button[@class='btn btn-primary btn-block' and text()='Sign In']").click()
``` |
34,126,957 | I'm trying to install Pygame for python 3.5 32bit. I have learned that I can open the `.whl` files provided on the site by using the `pip` command. The problem is I've tried multiple ways of doing this but keep getting the same error.
```
python -m pip install pygame-1.9.2a0-cp35-none-win32.whl
'python' is not recognized as an internal or external command,
operable program or batch file.
```
The file I'm trying to install: `pygame-1.9.2a0-cp35-none-win32.whl`
My python program is located in my programs folder and everything else works fine. | 2015/12/07 | [
"https://Stackoverflow.com/questions/34126957",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5481774/"
] | It would be helpful if you filtered the result set from the database by the customer's name, for example...
```
DefaultListModel dlm = new DefaultListModel();
try (PreparedStatement st = con.prepareStatement("select songpick from customer where customername=?")) {
String customerName = (String)jComboBox1.getSelectedItem();
st.setString(1, customerName);
try (ResultSet rs = st.executeQuery()) {
if (rs.next()) {
String songpick = rs.getString("songpick");
dlm.addElement(songpick);
}
jList1.setModel(dlm);
}
} catch (SQLException exp) {
exp.printStackTrace();
}
```
*ps: I don't know your database structure, so I'm only guessing at the relationships*
You should also take a look at [The try-with-resources Statement](https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html) for more details about how to manage your resources and the [SQL Where Clause](http://www.w3schools.com/sql/sql_where.asp) | You need to have a `WHERE` clause and bind the value that you want to filter on.
E.g. `SELECT songpick FROM customer WHERE <columnName> = ?`, and set the value that you need before the `executeQuery` statement with `st.setString(1, "Foo");` |
72,207,311 | ```
#!/bin/bash
data_dir=./all
for file_name in "$data_dir"/*
do
echo "$file_name"
python process.py "$file_name"
done
```
For example, this script processes the files sequentially in a directory in a 'for' loop. Is it possible to start multiple process.py instances to process files concurrently? I want to do this in a shell script. | 2022/05/11 | [
"https://Stackoverflow.com/questions/72207311",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3943868/"
] | It's better to use [os.listdir](https://docs.python.org/3/library/os.html#os.listdir) and [subprocess.Popen](https://stackoverflow.com/a/7224186/5707560) to start new processes; a minimal sketch is given after the second answer below. | With **GNU Parallel**, like this:
```
parallel python process.py {} ::: all/*
```
It will run N jobs in parallel, where N is the number of CPU cores you have, or you can specify `-j4` to run on just 4, for example.
Many, many options for:
* logging,
* splitting/chunking inputs,
* tagging/separating output,
* staggering job starts,
* massaging input parameters,
* fail and retry handling,
* distributing jobs and data to other machines
* and so on...
Try putting `[gnu-parallel]` in the StackOverflow search box. |
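Picking up the `os.listdir`/`subprocess.Popen` suggestion from the first answer above, here is a minimal sketch; `process.py` and the `all/` directory come from the question, and the batch size of 4 is an arbitrary assumption to be tuned to your machine.
```python
# Minimal sketch: run process.py over every file in ./all, a few at a time.
import os
import subprocess

data_dir = "./all"
max_parallel = 4  # arbitrary choice; tune to your CPU/IO budget

files = [os.path.join(data_dir, name) for name in sorted(os.listdir(data_dir))]

for i in range(0, len(files), max_parallel):
    batch = files[i:i + max_parallel]
    procs = [subprocess.Popen(["python", "process.py", path]) for path in batch]
    for proc in procs:
        proc.wait()  # finish the whole batch before starting the next one
```
Each batch is waited on before the next one starts, so at most four copies of `process.py` run at any time.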
72,207,311 | ```
#!/bin/bash
data_dir=./all
for file_name in "$data_dir"/*
do
echo "$file_name"
python process.py "$file_name"
done
```
For example, this script processes the files sequentially in a directory in a 'for' loop. Is it possible to start multiple process.py instances to process files concurrently? I want to do this in a shell script. | 2022/05/11 | [
"https://Stackoverflow.com/questions/72207311",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3943868/"
] | I have another possibility for you, if still needed. It uses the `screen` command to create a new detached process with the supplied command.
Here is an example:
```sh
#!/bin/bash
data_dir=./all
for file_name in "$data_dir"/*
do
echo "$file_name"
screen -dm python process.py "$file_name"
done
``` | With **GNU Parallel**, like this:
```
parallel python process.py {} ::: all/*
```
It will run N jobs in parallel, where N is the number of CPU cores you have, or you can specify `-j4` to run on just 4, for example.
Many, many options for:
* logging,
* splitting/chunking inputs,
* tagging/separating output,
* staggering job starts,
* massaging input parameters,
* fail and retry handling,
* distributing jobs and data to other machines
* and so on...
Try putting `[gnu-parallel]` in the StackOverflow search box. |
17,694,780 | I'm a front-end dev struggling along with Django. I have the basics pretty much down but I've hit a wall at the following point.
I have a site running locally and also on a dev machine. Locally I've added an extra class model to an already existing app, registered it in the relevant admin.py and checked it in the settings. Locally the new class and relevant fields appear in admin but when I move this all to dev they're not appearing. The app is called 'publish'.
My method was as follows:
1. Created the new class in the publish > models.py file:
```
class Whitepaper(models.Model):
title = models.CharField(max_length=200)
slug = models.SlugField(max_length=100, blank=True)
pub_date = models.DateField('date published')
section = models.ForeignKey('Section', related_name='whitepapers', blank=True, null=True)
description = models.CharField(max_length=1000)
docfile = models.FileField(upload_to="whitepapers/%Y/%m/%d", null=True, blank=True)
```
2. Updated and migrated the model with South using:
```
python manage.py schemamigration publish --auto
```
and
```
python manage.py migrate publish
```
3. Registered the class in the admin.py file:
```
from models import Section, Tag, Post, Whitepaper
from django.contrib import admin
from django import forms
admin.site.register(Whitepaper)
```
The app is listed in the settings.py file:
```
INSTALLED_APPS = (
...,
...,
'publish',
...,
)
```
As this is running on a dev server that's hosting a few other testing areas, restarting the whole thing is out of the question so I've been 'touching' the .wsgi file.
On my local version this got the model and fields showing up in the admin but on the dev server they are nowhere to be seen.
What am I missing?
Thanks ye brainy ones. | 2013/07/17 | [
"https://Stackoverflow.com/questions/17694780",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2321623/"
] | I figured out the problem. Turns out the login I was using to get into the admin didn't have superuser privileges. So I made a new one with:
```
python manage.py createsuperuser
```
After logging in with the new username and password I could see all my new shiny tables! | Are you sure touching the `.wsgi` file actually restarts your app?
It looks like it doesn't.
Make sure the app is actually restarted; find some evidence that touching the `.wsgi` file really restarts it.
Since you don't provide any insight about how the dev server runs the apps, we won't be able to help you any further. |
54,172,462 | I'm new to ML and Colab. Trying to play around with the project at <https://github.com/tkarras/progressive_growing_of_gans> but having a hard time getting it running in Colab.
When I run the import\_example.py script from the project, I get immediate errors relating to Tensorflow not loading. So I tried stepping back and simply doing `import tensorflow as tf`. When I run this, I get the below errors. I've got the runtime set to GPU. I was getting the same issue yesterday with the Colab demo GPU code, but that's working today.
Can anyone give me any pointers? It seems like basic stuff that should just work in the environment.
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow.py in <module>()
57
---> 58 from tensorflow.python.pywrap_tensorflow_internal import *
59 from tensorflow.python.pywrap_tensorflow_internal import __version__
/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py in <module>()
27 return _mod
---> 28 _pywrap_tensorflow_internal = swig_import_helper()
29 del swig_import_helper
/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py in swig_import_helper()
23 try:
---> 24 _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
25 finally:
/usr/lib/python3.6/imp.py in load_module(name, file, filename, details)
242 else:
--> 243 return load_dynamic(name, filename, file)
244 elif type_ == PKG_DIRECTORY:
/usr/lib/python3.6/imp.py in load_dynamic(name, path, file)
342 name=name, loader=loader, origin=path)
--> 343 return _load(spec)
344
ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory
During handling of the above exception, another exception occurred:
ImportError Traceback (most recent call last)
<ipython-input-8-64156d691fe5> in <module>()
----> 1 import tensorflow as tf
/usr/local/lib/python3.6/dist-packages/tensorflow/__init__.py in <module>()
22
23 # pylint: disable=g-bad-import-order
---> 24 from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
25
26 try:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/__init__.py in <module>()
47 import numpy as np
48
---> 49 from tensorflow.python import pywrap_tensorflow
50
51 from tensorflow.python.tools import component_api_helper
/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow.py in <module>()
72 for some common reasons and solutions. Include the entire stack trace
73 above this error message when asking for help.""" % traceback.format_exc()
---> 74 raise ImportError(msg)
75
76 # pylint: enable=wildcard-import,g-import-not-at-top,unused-import,line-too-long
ImportError: Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "/usr/lib/python3.6/imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "/usr/lib/python3.6/imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.
To view examples of installing some common dependencies, click the
"Open Examples" button below.
---------------------------------------------------------------------------
``` | 2019/01/13 | [
"https://Stackoverflow.com/questions/54172462",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1958417/"
] | Even I was facing the same issue. Later I realized I forgot to enable **GPU** in notebook settings.
I enabled it and installed **TensorFlow-GPU** (GPU version).
You can find notebook settings in **Edit** > **Notebook Settings** (a quick verification sketch is given after the second answer below).
[Here's the screenshot](https://i.stack.imgur.com/Sr7Hr.jpg) | You just have to read the error carefully:
>
> NOTE: If your import is failing due to a missing package, you can
> manually install dependencies using either !pip or !apt.
>
>
>
Try running:
```
!pip install tensorflow
```
inside notebook, and then rerun the cell with the import. |
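Following the first answer above (enabling the GPU in the notebook settings), here is a minimal notebook-style check that the runtime actually sees a GPU; it assumes TensorFlow is already installed in the runtime.
```python
# Minimal check that the runtime has a working TensorFlow and a visible GPU.
import tensorflow as tf

print("TensorFlow version:", tf.__version__)
gpu_name = tf.test.gpu_device_name()  # returns an empty string if no GPU is visible
print("GPU device:", gpu_name if gpu_name else "none found - check the runtime settings")
```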
54,172,462 | I'm new to ML and Colab. Trying to play around with the project at <https://github.com/tkarras/progressive_growing_of_gans> but having a hard time getting it running in Colab.
When I run the import\_example.py script from the project, I get immediate errors relating to Tensorflow not loading. So I tried stepping back and simply doing `import tensorflow as tf`. When I run this, I get the below errors. I've got the runtime set to GPU. I was getting the same issue yesterday with the Colab demo GPU code, but that's working today.
Can anyone give me any pointers? It seems like basic stuff that should just work in the environment.
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow.py in <module>()
57
---> 58 from tensorflow.python.pywrap_tensorflow_internal import *
59 from tensorflow.python.pywrap_tensorflow_internal import __version__
/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py in <module>()
27 return _mod
---> 28 _pywrap_tensorflow_internal = swig_import_helper()
29 del swig_import_helper
/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py in swig_import_helper()
23 try:
---> 24 _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
25 finally:
/usr/lib/python3.6/imp.py in load_module(name, file, filename, details)
242 else:
--> 243 return load_dynamic(name, filename, file)
244 elif type_ == PKG_DIRECTORY:
/usr/lib/python3.6/imp.py in load_dynamic(name, path, file)
342 name=name, loader=loader, origin=path)
--> 343 return _load(spec)
344
ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory
During handling of the above exception, another exception occurred:
ImportError Traceback (most recent call last)
<ipython-input-8-64156d691fe5> in <module>()
----> 1 import tensorflow as tf
/usr/local/lib/python3.6/dist-packages/tensorflow/__init__.py in <module>()
22
23 # pylint: disable=g-bad-import-order
---> 24 from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
25
26 try:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/__init__.py in <module>()
47 import numpy as np
48
---> 49 from tensorflow.python import pywrap_tensorflow
50
51 from tensorflow.python.tools import component_api_helper
/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow.py in <module>()
72 for some common reasons and solutions. Include the entire stack trace
73 above this error message when asking for help.""" % traceback.format_exc()
---> 74 raise ImportError(msg)
75
76 # pylint: enable=wildcard-import,g-import-not-at-top,unused-import,line-too-long
ImportError: Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "/usr/lib/python3.6/imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "/usr/lib/python3.6/imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.
To view examples of installing some common dependencies, click the
"Open Examples" button below.
---------------------------------------------------------------------------
``` | 2019/01/13 | [
"https://Stackoverflow.com/questions/54172462",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1958417/"
] | Check if you have installed the version of tensorflow that you need in Colab or not?! (Maybe you have installed a different version of tensorflow-gpu and you have not installed the corresponding tensorflow version. I got the same error in this case)
```
!pip list
```
If you do not have tensorflow in the list, you should install both tensorflow and tensorflow-gpu as follows:
```
!pip install tensorflow=='enter version here'
!pip install tensorflow-gpu=='enter same version as the previous one here'
``` | You just have to read the error carefully:
>
> NOTE: If your import is failing due to a missing package, you can
> manually install dependencies using either !pip or !apt.
>
>
>
Try running:
```
!pip install tensorflow
```
inside notebook, and then rerun the cell with the import. |
54,172,462 | I'm new to ML and Colab. Trying to play around with the project at <https://github.com/tkarras/progressive_growing_of_gans> but having a hard time getting it running in Colab.
When I run the import\_example.py script from the project, I get immediate errors relating to Tensorflow not loading. So I tried stepping back and simply doing `import tensorflow as tf`. When I run this, I get the below errors. I've got the runtime set to GPU. I was getting the same issue yesterday with the Colab demo GPU code, but that's working today.
Can anyone give me any pointers? It seems like basic stuff that should just work in the environment.
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow.py in <module>()
57
---> 58 from tensorflow.python.pywrap_tensorflow_internal import *
59 from tensorflow.python.pywrap_tensorflow_internal import __version__
/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py in <module>()
27 return _mod
---> 28 _pywrap_tensorflow_internal = swig_import_helper()
29 del swig_import_helper
/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py in swig_import_helper()
23 try:
---> 24 _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
25 finally:
/usr/lib/python3.6/imp.py in load_module(name, file, filename, details)
242 else:
--> 243 return load_dynamic(name, filename, file)
244 elif type_ == PKG_DIRECTORY:
/usr/lib/python3.6/imp.py in load_dynamic(name, path, file)
342 name=name, loader=loader, origin=path)
--> 343 return _load(spec)
344
ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory
During handling of the above exception, another exception occurred:
ImportError Traceback (most recent call last)
<ipython-input-8-64156d691fe5> in <module>()
----> 1 import tensorflow as tf
/usr/local/lib/python3.6/dist-packages/tensorflow/__init__.py in <module>()
22
23 # pylint: disable=g-bad-import-order
---> 24 from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
25
26 try:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/__init__.py in <module>()
47 import numpy as np
48
---> 49 from tensorflow.python import pywrap_tensorflow
50
51 from tensorflow.python.tools import component_api_helper
/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow.py in <module>()
72 for some common reasons and solutions. Include the entire stack trace
73 above this error message when asking for help.""" % traceback.format_exc()
---> 74 raise ImportError(msg)
75
76 # pylint: enable=wildcard-import,g-import-not-at-top,unused-import,line-too-long
ImportError: Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "/usr/lib/python3.6/imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "/usr/lib/python3.6/imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.
To view examples of installing some common dependencies, click the
"Open Examples" button below.
---------------------------------------------------------------------------
``` | 2019/01/13 | [
"https://Stackoverflow.com/questions/54172462",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1958417/"
] | Even I was facing the same issue. Later I realized I forgot to enable **GPU** in notebook settings.
I enabled it and installed **TensorFlow-GPU** (GPU version).
You can find notebook settings in **Edit** > **Notebook Settings**.
[Here's the screenshot](https://i.stack.imgur.com/Sr7Hr.jpg) | TensorFlow is included by default in Colab. So, I suspect you've somehow broken the existing install.
You can request a new backend by selecting the Runtime -> Reset all runtimes... menu. Then, try `import tensorflow as tf` again. |
54,172,462 | I'm new to ML and Colab. Trying to play around with the project at <https://github.com/tkarras/progressive_growing_of_gans> but having a hard time getting it running in Colab.
When I run the import\_example.py script from the project, I get immediate errors relating to Tensorflow not loading. So I tried stepping back and simply doing `import tensorflow as tf`. When I run this, I get the below errors. I've got the runtime set to GPU. I was getting the same issue yesterday with the Colab demo GPU code, but that's working today.
Can anyone give me any pointers? It seems like basic stuff that should just work in the environment.
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow.py in <module>()
57
---> 58 from tensorflow.python.pywrap_tensorflow_internal import *
59 from tensorflow.python.pywrap_tensorflow_internal import __version__
/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py in <module>()
27 return _mod
---> 28 _pywrap_tensorflow_internal = swig_import_helper()
29 del swig_import_helper
/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py in swig_import_helper()
23 try:
---> 24 _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
25 finally:
/usr/lib/python3.6/imp.py in load_module(name, file, filename, details)
242 else:
--> 243 return load_dynamic(name, filename, file)
244 elif type_ == PKG_DIRECTORY:
/usr/lib/python3.6/imp.py in load_dynamic(name, path, file)
342 name=name, loader=loader, origin=path)
--> 343 return _load(spec)
344
ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory
During handling of the above exception, another exception occurred:
ImportError Traceback (most recent call last)
<ipython-input-8-64156d691fe5> in <module>()
----> 1 import tensorflow as tf
/usr/local/lib/python3.6/dist-packages/tensorflow/__init__.py in <module>()
22
23 # pylint: disable=g-bad-import-order
---> 24 from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
25
26 try:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/__init__.py in <module>()
47 import numpy as np
48
---> 49 from tensorflow.python import pywrap_tensorflow
50
51 from tensorflow.python.tools import component_api_helper
/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow.py in <module>()
72 for some common reasons and solutions. Include the entire stack trace
73 above this error message when asking for help.""" % traceback.format_exc()
---> 74 raise ImportError(msg)
75
76 # pylint: enable=wildcard-import,g-import-not-at-top,unused-import,line-too-long
ImportError: Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "/usr/lib/python3.6/imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "/usr/lib/python3.6/imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.
To view examples of installing some common dependencies, click the
"Open Examples" button below.
---------------------------------------------------------------------------
``` | 2019/01/13 | [
"https://Stackoverflow.com/questions/54172462",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1958417/"
] | Check if you have installed the version of tensorflow that you need in Colab or not?! (Maybe you have installed a different version of tensorflow-gpu and you have not installed the corresponding tensorflow version. I got the same error in this case)
```
!pip list
```
If you do not have tensorflow in the list, you should install both tensorflow and tensorflow-gpu as follows:
```
!pip install tensorflow=='enter version here'
!pip install tensorflow-gpu=='enter same version as the previous one here'
``` | TensorFlow is included by default in Colab. So, I suspect you've somehow broken the existing install.
You can request a new backend by selecting the Runtime -> Reset all runtimes... menu. Then, try `import tensorflow as tf` again. |
54,172,462 | I'm new to ML and Colab. Trying to play around with the project at <https://github.com/tkarras/progressive_growing_of_gans> but having a hard time getting it running in Colab.
When I run the import\_example.py script from the project, I get immediate errors relating to Tensorflow not loading. So I tried stepping back and simply doing `import tensorflow as tf`. When I run this, I get the below errors. I've got the runtime set to GPU. I was getting the same issue yesterday with the Colab demo GPU code, but that's working today.
Can anyone give me any pointers? It seems like basic stuff that should just work in the environment.
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow.py in <module>()
57
---> 58 from tensorflow.python.pywrap_tensorflow_internal import *
59 from tensorflow.python.pywrap_tensorflow_internal import __version__
/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py in <module>()
27 return _mod
---> 28 _pywrap_tensorflow_internal = swig_import_helper()
29 del swig_import_helper
/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py in swig_import_helper()
23 try:
---> 24 _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
25 finally:
/usr/lib/python3.6/imp.py in load_module(name, file, filename, details)
242 else:
--> 243 return load_dynamic(name, filename, file)
244 elif type_ == PKG_DIRECTORY:
/usr/lib/python3.6/imp.py in load_dynamic(name, path, file)
342 name=name, loader=loader, origin=path)
--> 343 return _load(spec)
344
ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory
During handling of the above exception, another exception occurred:
ImportError Traceback (most recent call last)
<ipython-input-8-64156d691fe5> in <module>()
----> 1 import tensorflow as tf
/usr/local/lib/python3.6/dist-packages/tensorflow/__init__.py in <module>()
22
23 # pylint: disable=g-bad-import-order
---> 24 from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
25
26 try:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/__init__.py in <module>()
47 import numpy as np
48
---> 49 from tensorflow.python import pywrap_tensorflow
50
51 from tensorflow.python.tools import component_api_helper
/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow.py in <module>()
72 for some common reasons and solutions. Include the entire stack trace
73 above this error message when asking for help.""" % traceback.format_exc()
---> 74 raise ImportError(msg)
75
76 # pylint: enable=wildcard-import,g-import-not-at-top,unused-import,line-too-long
ImportError: Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "/usr/lib/python3.6/imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "/usr/lib/python3.6/imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.
To view examples of installing some common dependencies, click the
"Open Examples" button below.
---------------------------------------------------------------------------
``` | 2019/01/13 | [
"https://Stackoverflow.com/questions/54172462",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1958417/"
] | I was facing the same issue as well. Later I realized I had forgotten to enable the **GPU** in the notebook settings.
I enabled it and installed **TensorFlow-GPU** (GPU version).
You can find notebook settings in **Edit** > **Notebook Settings**.
[Here's the screenshot](https://i.stack.imgur.com/Sr7Hr.jpg) | Check whether the version of tensorflow that you need is actually installed in Colab. (Maybe you have installed a different version of tensorflow-gpu without the corresponding tensorflow version; I got the same error in that case.)
```
!pip list
```
If you do not have tensorflow in the list, you should install both tensorflow and tensorflow-gpu as follows:
```
!pip install tensorflow=='enter version here'
!pip install tensorflow-gpu=='enter same version as the previous one here'
``` |
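A small sketch for confirming that the GPU runtime mentioned above is actually visible to TensorFlow (the check itself is my addition, not from the answers):
```
import tensorflow as tf

device = tf.test.gpu_device_name()
if device:
    print('GPU backend found at', device)  # e.g. /device:GPU:0
else:
    print('No GPU visible; re-check Edit > Notebook Settings and the installed tensorflow-gpu version')
```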
24,253,977 | I am trying to remove non repeating characters from a list in python. e.g list = [1,1,2,3,3,3,5,6] should return [1,1,3,3].
My initial attempt was:
```
def tester(data):
for x in data:
if data.count(x) == 1:
data.remove(x)
return data
```
This will work for some inputs, but for [1,2,3,4,5], for example, it returns [2,4]. Could someone please explain why this occurs? | 2014/06/17 | [
"https://Stackoverflow.com/questions/24253977",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3636636/"
] | ```
l=[1,1,2,3,3,3,5,6]
[x for x in l if l.count(x) > 1]
[1, 1, 3, 3, 3]
```
Adds elements that appear at least twice in your list.
In your own code you need to change the line `for x in data` to `for x in data[:]:`
Using `data[:]` you are iterating over a `copy` of original list. | Another linear solution.
```
>>> data = [1, 1, 2, 3, 3, 3, 5, 6]
>>> D = dict.fromkeys(data, 0)
>>> for item in data:
... D[item] += 1
...
>>> [item for item in data if D[item] > 1]
[1, 1, 3, 3, 3]
``` |
24,253,977 | I am trying to remove non repeating characters from a list in python. e.g list = [1,1,2,3,3,3,5,6] should return [1,1,3,3].
My initial attempt was:
```
def tester(data):
for x in data:
if data.count(x) == 1:
data.remove(x)
return data
```
This will work for some inputs, but for [1,2,3,4,5], for example, it returns [2,4]. Could someone please explain why this occurs? | 2014/06/17 | [
"https://Stackoverflow.com/questions/24253977",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3636636/"
] | There is a linear time solution for that:
```
def tester(data):
cnt = {}
for e in data:
cnt[e] = cnt.get(e, 0) + 1
return [x for x in data if cnt[x] > 1]
``` | You shouldn't remove items from a mutable list while iterating over that same list. The interpreter doesn't have any way to keep track of where it is in the list while you're doing this.
See [this question](//stackoverflow.com/questions/1207406/remove-items-from-a-list-while-iterating-in-python) for another example of the same problem, with many suggested alternative approaches. |
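A short sketch of the two usual workarounds described in the answers above (the function names are mine, purely illustrative):
```
def drop_unique_copy(data):
    # Iterate over a copy so removals do not disturb the loop position.
    for x in data[:]:
        if data.count(x) == 1:
            data.remove(x)
    return data

def drop_unique_newlist(data):
    # Build a new list instead of mutating the one being iterated.
    return [x for x in data if data.count(x) > 1]

print(drop_unique_copy([1, 1, 2, 3, 3, 3, 5, 6]))  # [1, 1, 3, 3, 3]
print(drop_unique_newlist([1, 2, 3, 4, 5]))        # []
```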
24,253,977 | I am trying to remove non repeating characters from a list in python. e.g list = [1,1,2,3,3,3,5,6] should return [1,1,3,3].
My initial attempt was:
```
def tester(data):
for x in data:
if data.count(x) == 1:
data.remove(x)
return data
```
This will work for some inputs, but for [1,2,3,4,5], for example, it returns [2,4]. Could someone please explain why this occurs? | 2014/06/17 | [
"https://Stackoverflow.com/questions/24253977",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3636636/"
] | Another linear solution.
```
>>> data = [1, 1, 2, 3, 3, 3, 5, 6]
>>> D = dict.fromkeys(data, 0)
>>> for item in data:
... D[item] += 1
...
>>> [item for item in data if D[item] > 1]
[1, 1, 3, 3, 3]
``` | You shouldn't remove items from a mutable list while iterating over that same list. The interpreter doesn't have any way to keep track of where it is in the list while you're doing this.
See [this question](//stackoverflow.com/questions/1207406/remove-items-from-a-list-while-iterating-in-python) for another example of the same problem, with many suggested alternative approaches. |
24,253,977 | I am trying to remove non repeating characters from a list in python. e.g list = [1,1,2,3,3,3,5,6] should return [1,1,3,3].
My initial attempt was:
```
def tester(data):
for x in data:
if data.count(x) == 1:
data.remove(x)
return data
```
This will work for some inputs, but for [1,2,3,4,5], for example, it returns [2,4]. Could someone please explain why this occurs? | 2014/06/17 | [
"https://Stackoverflow.com/questions/24253977",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3636636/"
] | ```
l=[1,1,2,3,3,3,5,6]
[x for x in l if l.count(x) > 1]
[1, 1, 3, 3, 3]
```
Adds elements that appear at least twice in your list.
In your own code you need to change the line `for x in data` to `for x in data[:]:`
Using `data[:]` you are iterating over a `copy` of original list. | You can use a list comprehension, just like this:
```
def tester(data):
return [x for x in data if data.count(x) != 1]
```
It is not recommended to remove items from a list while iterating over it. |
24,253,977 | I am trying to remove non repeating characters from a list in python. e.g list = [1,1,2,3,3,3,5,6] should return [1,1,3,3].
My initial attempt was:
```
def tester(data):
for x in data:
if data.count(x) == 1:
data.remove(x)
return data
```
This will work for some inputs, but for [1,2,3,4,5], for example, it returns [2,4]. Could someone please explain why this occurs? | 2014/06/17 | [
"https://Stackoverflow.com/questions/24253977",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3636636/"
] | ```
l=[1,1,2,3,3,3,5,6]
[x for x in l if l.count(x) > 1]
[1, 1, 3, 3, 3]
```
Adds elements that appear at least twice in your list.
In your own code you need to change the line `for x in data` to `for x in data[:]:`
Using `data[:]` you are iterating over a `copy` of original list. | There is a linear time solution for that:
```
def tester(data):
cnt = {}
for e in data:
cnt[e] = cnt.get(e, 0) + 1
return [x for x in data if cnt[x] > 1]
``` |
24,253,977 | I am trying to remove non repeating characters from a list in python. e.g list = [1,1,2,3,3,3,5,6] should return [1,1,3,3].
My initial attempt was:
```
def tester(data):
for x in data:
if data.count(x) == 1:
data.remove(x)
return data
```
This will work for some inputs, but for [1,2,3,4,5], for example, it returns [2,4]. Could someone please explain why this occurs? | 2014/06/17 | [
"https://Stackoverflow.com/questions/24253977",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3636636/"
] | This is occurring because you are removing from a list as you're iterating through it. Instead, consider appending to a new list.
You could also use collections.Counter, if you're using 2.7 or greater:
```
[a for a, b in collections.Counter(your_list).items() if b > 1]
``` | Another linear solution.
```
>>> data = [1, 1, 2, 3, 3, 3, 5, 6]
>>> D = dict.fromkeys(data, 0)
>>> for item in data:
... D[item] += 1
...
>>> [item for item in data if D[item] > 1]
[1, 1, 3, 3, 3]
``` |
24,253,977 | I am trying to remove non repeating characters from a list in python. e.g list = [1,1,2,3,3,3,5,6] should return [1,1,3,3].
My initial attempt was:
```
def tester(data):
for x in data:
if data.count(x) == 1:
data.remove(x)
return data
```
This will work for some inputs, but for [1,2,3,4,5], for example, it returns [2,4]. Could someone please explain why this occurs? | 2014/06/17 | [
"https://Stackoverflow.com/questions/24253977",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3636636/"
] | Another linear solution.
```
>>> data = [1, 1, 2, 3, 3, 3, 5, 6]
>>> D = dict.fromkeys(data, 0)
>>> for item in data:
... D[item] += 1
...
>>> [item for item in data if D[item] > 1]
[1, 1, 3, 3, 3]
``` | You can use a list comprehension, just like this:
```
def tester(data):
return [x for x in data if data.count(x) != 1]
```
It is not recommended to remove items from a list while iterating over it. |
24,253,977 | I am trying to remove non repeating characters from a list in python. e.g list = [1,1,2,3,3,3,5,6] should return [1,1,3,3].
My initial attempt was:
```
def tester(data):
for x in data:
if data.count(x) == 1:
data.remove(x)
return data
```
This will work for some inputs, but for [1,2,3,4,5], for example, it returns [2,4]. Could someone please explain why this occurs? | 2014/06/17 | [
"https://Stackoverflow.com/questions/24253977",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3636636/"
] | This is occurring because you are removing from a list as you're iterating through it. Instead, consider appending to a new list.
You could also use collections.Counter, if you're using 2.7 or greater:
```
[a for a, b in collections.Counter(your_list).items() if b > 1]
``` | You shouldn't remove items from a mutable list while iterating over that same list. The interpreter doesn't have any way to keep track of where it is in the list while you're doing this.
See [this question](//stackoverflow.com/questions/1207406/remove-items-from-a-list-while-iterating-in-python) for another example of the same problem, with many suggested alternative approaches. |
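For completeness, a runnable sketch of the Counter idea from the first answer above, with its import added and with a second comprehension that keeps repeated values with their multiplicity (that adjustment is mine, not in the original answer):
```
import collections

data = [1, 1, 2, 3, 3, 3, 5, 6]
counts = collections.Counter(data)
print([a for a, b in counts.items() if b > 1])  # [1, 3]  distinct repeated values
print([x for x in data if counts[x] > 1])       # [1, 1, 3, 3, 3]  keeps multiplicity
```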
24,253,977 | I am trying to remove non repeating characters from a list in python. e.g list = [1,1,2,3,3,3,5,6] should return [1,1,3,3].
My initial attempt was:
```
def tester(data):
for x in data:
if data.count(x) == 1:
data.remove(x)
return data
```
This will work for some inputs, but for [1,2,3,4,5], for example, it returns [2,4]. Could someone please explain why this occurs? | 2014/06/17 | [
"https://Stackoverflow.com/questions/24253977",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3636636/"
] | ```
l=[1,1,2,3,3,3,5,6]
[x for x in l if l.count(x) > 1]
[1, 1, 3, 3, 3]
```
Adds elements that appear at least twice in your list.
In your own code you need to change the line `for x in data` to `for x in data[:]:`
Using `data[:]` you are iterating over a `copy` of original list. | This is occurring because you are removing from a list as you're iterating through it. Instead, consider appending to a new list.
You could also use collections.Counter, if you're using 2.7 or greater:
```
[a for a, b in collections.Counter(your_list).items() if b > 1]
``` |
24,253,977 | I am trying to remove non repeating characters from a list in python. e.g list = [1,1,2,3,3,3,5,6] should return [1,1,3,3].
My initial attempt was:
```
def tester(data):
for x in data:
if data.count(x) == 1:
data.remove(x)
return data
```
This will work for some inputs, but for [1,2,3,4,5], for example, it returns [2,4]. Could someone please explain why this occurs? | 2014/06/17 | [
"https://Stackoverflow.com/questions/24253977",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3636636/"
] | There is a linear time solution for that:
```
def tester(data):
cnt = {}
for e in data:
cnt[e] = cnt.get(e, 0) + 1
return [x for x in data if cnt[x] > 1]
``` | Another linear solution.
```
>>> data = [1, 1, 2, 3, 3, 3, 5, 6]
>>> D = dict.fromkeys(data, 0)
>>> for item in data:
... D[item] += 1
...
>>> [item for item in data if D[item] > 1]
[1, 1, 3, 3, 3]
``` |
45,062,219 | Is there a simpler way possible to just add numbers which are regarded as strings but also integer by python? It doesn't let me add it, maybe because of the way I converted the integer to string and then a list?
I have done this so far:
```
function_menu()
print()
numbers = str(number)
lists = []
lists.extend(numbers)
addition = sum(lists)
average = addition / len(lists)
print(average)
``` | 2017/07/12 | [
"https://Stackoverflow.com/questions/45062219",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8295906/"
] | You could remove `0`s from the list. If the list becomes empty, return `0`, the product otherwise:
```
>>> no_zeroes = [value for value in values if value > 0]
>>> no_zeroes
[1.0, 3.4]
>>> reduce(lambda x, y : y*x, no_zeroes) if no_zeroes else 0
3.4
```
Note that from a mathematical point of view, the product of an empty list should probably be `1`. In that case, you could write:
```
reduce(lambda x, y : y*x, no_zeroes, 1)
``` | if you use numpy arrays you can filter out the zero values:
```
import numpy as np
vals = np.array([0.0, 0.0, 0.0, 0.0])
no_zeros = vals[vals>0]
if no_zeros.size > 0:  # .size avoids the ambiguous truth value of a multi-element array
print( np.prod(no_zeros))
else:
print(0.0)
``` |
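A self-contained version of the first answer's approach (a sketch only; `vals`, the import, and the variable names are my additions):
```
from functools import reduce

vals = [1.0, 0.0, 3.4, 0.0]
no_zeroes = [v for v in vals if v > 0]
product = reduce(lambda x, y: y * x, no_zeroes) if no_zeroes else 0
print(product)  # 3.4
```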
45,062,219 | Is there a simpler way possible to just add numbers which are regarded as strings but also integer by python? It doesn't let me add it, maybe because of the way I converted the integer to string and then a list?
I have done this so far:
```
function_menu()
print()
numbers = str(number)
lists = []
lists.extend(numbers)
addition = sum(lists)
average = addition / len(lists)
print(average)
``` | 2017/07/12 | [
"https://Stackoverflow.com/questions/45062219",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8295906/"
] | You could remove `0`s from the list. If the list becomes empty, return `0`, the product otherwise:
```
>>> no_zeroes = [value for value in values if value > 0]
>>> no_zeroes
[1.0, 3.4]
>>> reduce(lambda x, y : y*x, no_zeroes) if no_zeroes else 0
3.4
```
Note that from a mathematical point of view, the product of an empty list should probably be `1`. In that case, you could write:
```
reduce(lambda x, y : y*x, no_zeroes, 1)
``` | Just filter out 0.0 values with [`filter`](https://docs.python.org/3/library/functions.html#filter)
```
from functools import reduce
vals_without_zero = filter(lambda x: x, vals)
reduce(lambda x, y : x * y, vals_without_zero)
``` |
45,062,219 | Is there a simpler way possible to just add numbers which are regarded as strings but also integer by python? It doesn't let me add it, maybe because of the way I converted the integer to string and then a list?
I have done this so far:
```
function_menu()
print()
numbers = str(number)
lists = []
lists.extend(numbers)
addition = sum(lists)
average = addition / len(lists)
print(average)
``` | 2017/07/12 | [
"https://Stackoverflow.com/questions/45062219",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8295906/"
] | You could remove `0`s from the list. If the list becomes empty, return `0`, the product otherwise:
```
>>> no_zeroes = [value for value in values if value > 0]
>>> no_zeroes
[1.0, 3.4]
>>> reduce(lambda x, y : y*x, no_zeroes) if no_zeroes else 0
3.4
```
Note that from a mathematical point of view, the product of an empty list should probably be `1`. In that case, you could write:
```
reduce(lambda x, y : y*x, no_zeroes, 1)
``` | Another option:
```
from functools import reduce
vals = [1.0, 0.0, 3.4, 0.0]
reduce(lambda x,y: y if x == 0 else (x if y == 0 else x*y), vals, 0)
# 3.4
vals = [0.0, 0.0, 0.0, 0.0]
reduce(lambda x,y: y if x == 0 else (x if y == 0 else x*y), vals, 0)
# 0.0
``` |
45,062,219 | Is there a simpler way possible to just add numbers which are regarded as strings but also integer by python? It doesn't let me add it, maybe because of the way I converted the integer to string and then a list?
I have done this so far:
```
function_menu()
print()
numbers = str(number)
lists = []
lists.extend(numbers)
addition = sum(lists)
average = addition / len(lists)
print(average)
``` | 2017/07/12 | [
"https://Stackoverflow.com/questions/45062219",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8295906/"
] | You could remove `0`s from the list. If the list becomes empty, return `0`, the product otherwise:
```
>>> no_zeroes = [value for value in values if value > 0]
>>> no_zeroes
[1.0, 3.4]
>>> reduce(lambda x, y : y*x, no_zeroes) if no_zeroes else 0
3.4
```
Note that from a mathematical point of view, the product of an empty list should probably be `1`. In that case, you could write:
```
reduce(lambda x, y : y*x, no_zeroes, 1)
``` | Combining `functools.reduce` (with *initiator* value `1`) and `filter` functions:
```
vals = [1.0, 0.0, 3.4, 0.0]
result = functools.reduce(operator.mul, list(filter(lambda x: x>0, vals)) or [0], 1)
print(result) # 3.4
```
---
```
vals = [0.0, 0.0, 0.0, 0.0]
result = functools.reduce(operator.mul, list(filter(lambda x: x>0, vals)) or [0], 1)
print(result) # 0
``` |
45,062,219 | Is there a simpler way possible to just add numbers which are regarded as strings but also integer by python? It doesn't let me add it, maybe because of the way I converted the integer to string and then a list?
I have done this so far:
```
function_menu()
print()
numbers = str(number)
lists = []
lists.extend(numbers)
addition = sum(lists)
average = addition / len(lists)
print(average)
``` | 2017/07/12 | [
"https://Stackoverflow.com/questions/45062219",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8295906/"
] | if you use numpy arrays you can filter out the zero values:
```
import numpy as np
vals = np.array([0.0, 0.0, 0.0, 0.0])
no_zeros = vals[vals>0]
if no_zeros.size > 0:  # .size avoids the ambiguous truth value of a multi-element array
print( np.prod(no_zeros))
else:
print(0.0)
``` | Just filter out 0.0 values with [`filter`](https://docs.python.org/3/library/functions.html#filter)
```
from functools import reduce
vals_without_zero = filter(lambda x: x, vals)
reduce(lambda x, y : x * y, vals_without_zero)
``` |
45,062,219 | Is there a simpler way possible to just add numbers which are regarded as strings but also integer by python? It doesn't let me add it, maybe because of the way I converted the integer to string and then a list?
I have done this so far:
```
function_menu()
print()
numbers = str(number)
lists = []
lists.extend(numbers)
addition = sum(lists)
average = addition / len(lists)
print(average)
``` | 2017/07/12 | [
"https://Stackoverflow.com/questions/45062219",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8295906/"
] | if you use numpy arrays you can filter out the zero values:
```
import numpy as np
vals = np.array([0.0, 0.0, 0.0, 0.0])
no_zeros = vals[vals>0]
if no_zeros.size > 0:  # .size avoids the ambiguous truth value of a multi-element array
print( np.prod(no_zeros))
else:
print(0.0)
``` | Combining `functools.reduce` (with *initiator* value `1`) and `filter` functions:
```
vals = [1.0, 0.0, 3.4, 0.0]
result = functools.reduce(operator.mul, list(filter(lambda x: x>0, vals)) or [0], 1)
print(result) # 3.4
```
---
```
vals = [0.0, 0.0, 0.0, 0.0]
result = functools.reduce(operator.mul, list(filter(lambda x: x>0, vals)) or [0], 1)
print(result) # 0
``` |
45,062,219 | Is there a simpler way possible to just add numbers which are regarded as strings but also integer by python? It doesn't let me add it, maybe because of the way I converted the integer to string and then a list?
I have done this so far:
```
function_menu()
print()
numbers = str(number)
lists = []
lists.extend(numbers)
addition = sum(lists)
average = addition / len(lists)
print(average)
``` | 2017/07/12 | [
"https://Stackoverflow.com/questions/45062219",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8295906/"
] | Another option:
```
from functools import reduce
vals = [1.0, 0.0, 3.4, 0.0]
reduce(lambda x,y: y if x == 0 else (x if y == 0 else x*y), vals, 0)
# 3.4
vals = [0.0, 0.0, 0.0, 0.0]
reduce(lambda x,y: y if x == 0 else (x if y == 0 else x*y), vals, 0)
# 0.0
``` | Just filter out 0.0 values with [`filter`](https://docs.python.org/3/library/functions.html#filter)
```
from functools import reduce
vals_without_zero = filter(lambda x: x, vals)
reduce(lambda x, y : x * y, vals_without_zero)
``` |
45,062,219 | Is there a simpler way possible to just add numbers which are regarded as strings but also integer by python? It doesn't let me add it, maybe because of the way I converted the integer to string and then a list?
I have done this so far:
```
function_menu()
print()
numbers = str(number)
lists = []
lists.extend(numbers)
addition = sum(lists)
average = addition / len(lists)
print(average)
``` | 2017/07/12 | [
"https://Stackoverflow.com/questions/45062219",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8295906/"
] | Another option:
```
from functools import reduce
vals = [1.0, 0.0, 3.4, 0.0]
reduce(lambda x,y: y if x == 0 else (x if y == 0 else x*y), vals, 0)
# 3.4
vals = [0.0, 0.0, 0.0, 0.0]
reduce(lambda x,y: y if x == 0 else (x if y == 0 else x*y), vals, 0)
# 0.0
``` | Combining `functools.reduce` (with *initiator* value `1`) and `filter` functions:
```
vals = [1.0, 0.0, 3.4, 0.0]
result = functools.reduce(operator.mul, list(filter(lambda x: x>0, vals)) or [0], 1)
print(result) # 3.4
```
---
```
vals = [0.0, 0.0, 0.0, 0.0]
result = functools.reduce(operator.mul, list(filter(lambda x: x>0, vals)) or [0], 1)
print(result) # 0
``` |
37,622,153 | I would like to compute all (different) intersections of a collection of finite sets of integers (here implemented as a list of lists) in python (to avoid confusion, a formal definition is at the end of the question):
```
> A = [[0,1,2,3],[0,1,4],[1,2,4],[2,3,4],[0,3,4]]
> all_intersections(A) # desired output
[[], [0], [1], [2], [3], [4], [0, 1], [0, 3], [0, 4], [1, 2], [1, 4], [2, 3], [2, 4], [3, 4], [0, 1, 4], [0, 3, 4], [1, 2, 4], [2, 3, 4], [0, 1, 2, 3]]
```
I have an algorithm that does it iteratively, but it is rather slow (should I post it?), a test case would be
```
[[0, 1, 2, 3, 4, 9], [0, 1, 4, 5, 6, 10], [0, 2, 4, 5, 7, 11], [1, 3, 4, 6, 8, 12], [2, 3, 4, 7, 8, 13], [4, 5, 6, 7, 8, 14], [0, 1, 9, 10, 15, 16], [0, 2, 9, 11, 15, 17], [1, 3, 9, 12, 16, 18], [2, 3, 9, 13, 17, 18], [9, 15, 16, 17, 18, 19], [0, 5, 10, 11, 15, 20], [1, 6, 10, 12, 16, 21], [10, 15, 16, 19, 20, 21], [5, 6, 10, 14, 20, 21], [11, 15, 17, 19, 20, 22], [5, 7, 11, 14, 20, 22], [2, 7, 11, 13, 17, 22], [7, 8, 13, 14, 22, 23], [3, 8, 12, 13, 18, 23], [13, 17, 18, 19, 22, 23], [14, 19, 20, 21, 22, 23], [6, 8, 12, 14, 21, 23], [12, 16, 18, 19, 21, 23]]
```
which takes me about 2.5 secs to compute.
>
> Any ideas how to do it fast?
>
>
>
Formal definition (actually hard without latex mode): let A = {A1,...,An} be a finite set of finite sets Ai of non-negative integers. The output should then be the set { intersection of the sets in B : B subset of A }.
So the formal algorithm would be to take the union of all intersections of all subsets of A. But that's clearly taking forever.
Many thanks! | 2016/06/03 | [
"https://Stackoverflow.com/questions/37622153",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1442181/"
] | Here is a recursive solution. It is almost instantaneous on your test example:
```
def allIntersections(frozenSets):
if len(frozenSets) == 0:
return []
else:
head = frozenSets[0]
tail = frozenSets[1:]
tailIntersections = allIntersections(tail)
newIntersections = [head]
newIntersections.extend(tailIntersections)
newIntersections.extend(head & s for s in tailIntersections)
return list(set(newIntersections))
def all_intersections(lists):
sets = allIntersections([frozenset(s) for s in lists])
return [list(s) for s in sets]
```
**On Edit** Here is a cleaner, nonrecursive implementation of the same ideas.
The problem is easiest if you define the intersection of an empty collection of sets to be the universal set, and an adequate universal set can be obtained by taking the union of all elements. This is a standard move in lattice-theory, and is dual to taking the union of an empty collection of sets to be the empty set. You could always throw away this universal set if you don't want it:
```
def allIntersections(frozenSets):
universalSet = frozenset.union(*frozenSets)
intersections = set([universalSet])
for s in frozenSets:
moreIntersections = set(s & t for t in intersections)
intersections.update(moreIntersections)
return intersections
def all_intersections(lists):
sets = allIntersections([frozenset(s) for s in lists])
return [list(s) for s in sets]
```
The reason that this is so fast with your test example is that, even though your collection has 24 sets, hence having 2\*\*24 (16.8 million) potential intersections, there are in fact only 242 (or 241 if you don't count the empty intersection) distinct intersections. Thus the number of intersections in each pass through the loop is in the low hundreds at most.
It is possible to pick 24 sets so that all of the 2\*\*24 possible intersections are in fact different, so it is easy to see that the worst-case behavior is exponential. But if, as in your test example, the number of intersections is small, this approach will allow you to rapidly compute them.
A potential optimization might be to sort the sets in increasing size before you loop over them. Processing the smaller sets up front might result in more empty intersections appearing earlier, thus keeping the total number of distinct intersections smaller until towards the end of the loop. | Iterative solution that takes about 3.5 ms on my machine for your large test input:
```
from itertools import starmap, product
from operator import and_
def all_intersections(sets):
# Convert to set of frozensets for uniquification/type correctness
last = new = sets = set(map(frozenset, sets))
# Keep going until further intersections add nothing to results
while new:
# Compute intersection of old values with newly found values
new = set(starmap(and_, product(last, new)))
last = sets.copy() # Save off prior state
new -= last # Determine truly newly added values
sets |= new # Accumulate newly added values in complete set
# No more intersections being generated, convert results to canonical
# form, list of lists, where each sublist is displayed in order, and
# the top level list is ordered first by size of sublist, then by contents
return sorted(map(sorted, sets), key=lambda x: (len(x), x))
```
Basically, it just keeps doing two way intersections among the old result set and the newly found intersections until a round of intersections doesn't change anything, then it's done.
**Note:** This is not actually the best solution (recursion is sufficiently better algorithmically to win on the test data, where John Coleman's solution, after adding sorting to the outer wrapper so it matches format, takes about 0.94 ms, vs. 3.5 ms for mine). I'm mostly providing it as an example of solving the problem in other ways. |
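A brief sketch that combines the two answers above: the iterative accumulation of pairwise intersections plus the sort-by-size preprocessing suggested at the end of the first answer (illustrative only, not benchmarked; like the nonrecursive answer, it keeps the universal set in the result):
```
def all_intersections_sorted(lists):
    frozen = sorted((frozenset(s) for s in lists), key=len)  # smaller sets first
    universal = frozenset().union(*frozen)                   # intersection of the empty subfamily
    intersections = {universal}
    for s in frozen:
        intersections |= {s & t for t in intersections}
    return sorted(map(sorted, intersections), key=lambda x: (len(x), x))
```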
61,657,685 | First project from work and got stuck with this tedious error on Ubuntu.
Currently using node -v 13.8.0, installed python 2.7.17, GCC 7.5.0
also checked node-gyp npm page and installed all python and gcc dependencies.
here is my package.json file
```
"dependencies": {
"apn": "^2.1.5",
"async": "^1.5.2",
"bcrypt-nodejs": "^0.0.3",
"body-parser": "^1.15.1",
"compression": "^1.6.2",
"consolidate": "^0.14.5",
"cookie-parser": "^1.4.1",
"cors": "^2.8.4",
"dotenv": "^2.0.0",
"express": "^4.13.4",
"express-validator": "^2.20.4",
"fcm-node": "^1.2.0",
"geocoder": "^0.2.3",
"jimp": "^0.2.28",
"jsonwebtoken": "^5.7.0",
"moment": "^2.12.0",
"moment-timezone": "^0.5.14",
"mongoose": "^4.4.8",
"mongoose-unique-validator": "^1.0.6",
"morgan": "^1.7.0",
"multer": "^1.3.0",
"mustache": "^2.3.0",
"node-cron": "^1.2.1",
"node-excel-export": "^1.4.4",
"nodemailer": "^2.3.0",
"nodemailer-mailgun-transport": "^1.3.5",
"nodemailer-smtp-transport": "^2.7.4",
"qr-image": "^3.2.0",
"randomstring": "^1.1.5",
"request": "^2.69.0",
"soap": "^0.23.0",
"socket.io": "^2.0.3",
"strong-soap": "^1.5.0",
"swagger-jsdoc": "^1.9.3",
"swagger-ui-express": "^1.0.7",
"time": "^0.12.0",
"twilio": "^3.9.1",
"underscore": "^1.8.3",
"uuid": "^3.2.1",
"ws.js": "^2.0.23"
},
"devDependencies": {},
"engines": {
"node": "6.9.5"
}
```
```
npm install
> ursa@0.9.4 install /home/shubham/Desktop/hotel/demo_hotellife_api/node_modules/ursa
> node-gyp rebuild
make: Entering directory '/home/shubham/Desktop/hotel/demo_hotellife_api/node_modules/ursa/build'
CXX(target) Release/obj.target/ursaNative/src/ursaNative.o
In file included from ../../nan/nan.h:192:0,
from ../src/ursaNative.h:10,
from ../src/ursaNative.cc:3:
../../nan/nan_maybe_43_inl.h: In function ‘Nan::Maybe<bool> Nan::ForceSet(v8::Local<v8::Object>, v8::Local<v8::Value>, v8::Local<v8::Value>, v8::PropertyAttribute)’:
../../nan/nan_maybe_43_inl.h:112:15: error: ‘class v8::Object’ has no member named ‘ForceSet’
return obj->ForceSet(isolate->GetCurrentContext(), key, value, attribs);
^~~~~~~~
In file included from ../../nan/nan_converters.h:67:0,
from ../../nan/nan.h:197,
from ../src/ursaNative.h:10,
from ../src/ursaNative.cc:3:
../../nan/nan_converters_43_inl.h: In static member function ‘static Nan::imp::ToFactoryBase<v8::Boolean>::return_t Nan::imp::ToFactory<v8::Boolean>::convert(v8::Local<v8::Value>)’:
../../nan/nan_converters_43_inl.h:18:69: error: no matching function for call to ‘v8::Value::ToBoolean(v8::Local<v8::Context>)’
val->To ## TYPE(v8::Isolate::GetCurrent()->GetCurrentContext()) \
^
../../nan/nan_converters_43_inl.h:22:1: note: in expansion of macro ‘X’
X(Boolean)
^
In file included from /home/shubham/.cache/node-gyp/13.8.0/include/node/node.h:63:0,
from ../../nan/nan.h:49,
from ../src/time.cc:5:
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:2750:44: note: candidate: v8::MaybeLocal<v8::String> v8::Value::ToString(v8::Local<v8::Context>) const
V8_WARN_UNUSED_RESULT MaybeLocal<String> ToString(
^~~~~~~~
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:2750:44: note: candidate expects 1 argument, 0 provided
In file included from ../src/time.cc:5:0:
../../nan/nan.h:921:74: error: no matching function for call to ‘v8::String::WriteUtf8(char*&, int, int, const int&)’
length_ = string->WriteUtf8(str_, static_cast<int>(len), 0, flags);
^
In file included from /home/shubham/.cache/node-gyp/13.8.0/include/node/node.h:63:0,
from ../../nan/nan.h:49,
from ../src/time.cc:5:
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:2947:7: note: candidate: int v8::String::WriteUtf8(v8::Isolate*, char*, int, int*, int) const
int WriteUtf8(Isolate* isolate, char* buffer, int length = -1,
^~~~~~~~~
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:2947:7: note: no known conversion for argument 1 from ‘char*’ to ‘v8::Isolate*’
In file included from ../src/time.cc:5:0:
../../nan/nan.h: In member function ‘v8::Local<v8::Value> Nan::Callback::Call_(v8::Isolate*, v8::Local<v8::Object>, int, v8::Local<v8::Value>*) const’:
../../nan/nan.h:1479:5: warning: ‘v8::Local<v8::Value> node::MakeCallback(v8::Isolate*, v8::Local<v8::Object>, v8::Local<v8::Function>, int, v8::Local<v8::Value>*)’ is deprecated: Use MakeCallback(..., async_context) [-Wdeprecated-declarations]
));
^
In file included from ../../nan/nan.h:49:0,
from ../src/time.cc:5:
/home/shubham/.cache/node-gyp/13.8.0/include/node/node.h:180:50: note: declared here
NODE_EXTERN v8::Local<v8::Value> MakeCallback(
^
/home/shubham/.cache/node-gyp/13.8.0/include/node/node.h:99:42: note: in definition of macro ‘NODE_DEPRECATED’
__attribute__((deprecated(message))) declarator
^~~~~~~~~~
In file included from ../src/time.cc:5:0:
../../nan/nan.h: In member function ‘void Nan::AsyncWorker::SaveToPersistent(const char*, const v8::Local<v8::Value>&)’:
../../nan/nan.h:1533:64: error: no matching function for call to ‘v8::Object::Set(v8::Local<v8::String>, const v8::Local<v8::Value>&)’
New(persistentHandle)->Set(New(key).ToLocalChecked(), value);
^
In file included from /home/shubham/.cache/node-gyp/13.8.0/include/node/node.h:63:0,
from ../../nan/nan.h:49,
from ../src/time.cc:5:
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:3547:37: note: candidate: v8::Maybe<bool> v8::Object::Set(v8::Local<v8::Context>, v8::Local<v8::Value>, v8::Local<v8::Value>)
V8_WARN_UNUSED_RESULT Maybe<bool> Set(Local<Context> context,
^~~
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:3547:37: note: candidate expects 3 arguments, 2 provided
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:3550:37: note: candidate: v8::Maybe<bool> v8::Object::Set(v8::Local<v8::Context>, uint32_t, v8::Local<v8::Value>)
V8_WARN_UNUSED_RESULT Maybe<bool> Set(Local<Context> context, uint32_t index,
^~~
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:3550:37: note: candidate expects 3 arguments, 2 provided
In file included from ../src/time.cc:5:0:
../../nan/nan.h: In member function ‘void Nan::AsyncWorker::SaveToPersistent(const v8::Local<v8::String>&, const v8::Local<v8::Value>&)’:
../../nan/nan.h:1539:42: error: no matching function for call to ‘v8::Object::Set(const v8::Local<v8::String>&, const v8::Local<v8::Value>&)’
New(persistentHandle)->Set(key, value);
^
In file included from /home/shubham/.cache/node-gyp/13.8.0/include/node/node.h:63:0,
from ../../nan/nan.h:49,
from ../src/time.cc:5:
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:3547:37: note: candidate: v8::Maybe<bool> v8::Object::Set(v8::Local<v8::Context>, v8::Local<v8::Value>, v8::Local<v8::Value>)
V8_WARN_UNUSED_RESULT Maybe<bool> Set(Local<Context> context,
^~~
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:3547:37: note: candidate expects 3 arguments, 2 provided
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:3550:37: note: candidate: v8::Maybe<bool> v8::Object::Set(v8::Local<v8::Context>, uint32_t, v8::Local<v8::Value>)
V8_WARN_UNUSED_RESULT Maybe<bool> Set(Local<Context> context, uint32_t index,
^~~
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:3550:37: note: candidate expects 3 arguments, 2 provided
In file included from ../src/time.cc:5:0:
../../nan/nan.h: In member function ‘void Nan::AsyncWorker::SaveToPersistent(uint32_t, const v8::Local<v8::Value>&)’:
../../nan/nan.h:1545:44: error: no matching function for call to ‘v8::Object::Set(uint32_t&, const v8::Local<v8::Value>&)’
New(persistentHandle)->Set(index, value);
^
In file included from /home/shubham/.cache/node-gyp/13.8.0/include/node/node.h:63:0,
from ../../nan/nan.h:49,
from ../src/time.cc:5:
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:3547:37: note: candidate: v8::Maybe<bool> v8::Object::Set(v8::Local<v8::Context>, v8::Local<v8::Value>, v8::Local<v8::Value>)
V8_WARN_UNUSED_RESULT Maybe<bool> Set(Local<Context> context,
^~~
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:3547:37: note: candidate expects 3 arguments, 2 provided
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:3550:37: note: candidate: v8::Maybe<bool> v8::Object::Set(v8::Local<v8::Context>, uint32_t, v8::Local<v8::Value>)
V8_WARN_UNUSED_RESULT Maybe<bool> Set(Local<Context> context, uint32_t index,
^~~
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:3550:37: note: candidate expects 3 arguments, 2 provided
In file included from ../src/time.cc:5:0:
../../nan/nan.h: In member function ‘v8::Local<v8::Value> Nan::AsyncWorker::GetFromPersistent(const char*) const’:
../../nan/nan.h:1551:61: error: no matching function for call to ‘v8::Object::Get(v8::Local<v8::String>)’
New(persistentHandle)->Get(New(key).ToLocalChecked()));
^
In file included from /home/shubham/.cache/node-gyp/13.8.0/include/node/node.h:63:0,
from ../../nan/nan.h:49,
from ../src/time.cc:5:
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:3594:43: note: candidate: v8::MaybeLocal<v8::Value> v8::Object::Get(v8::Local<v8::Context>, v8::Local<v8::Value>)
V8_WARN_UNUSED_RESULT MaybeLocal<Value> Get(Local<Context> context,
^~~
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:3594:43: note: candidate expects 2 arguments, 1 provided
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:3597:43: note: candidate: v8::MaybeLocal<v8::Value> v8::Object::Get(v8::Local<v8::Context>, uint32_t)
V8_WARN_UNUSED_RESULT MaybeLocal<Value> Get(Local<Context> context,
^~~
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:3597:43: note: candidate expects 2 arguments, 1 provided
In file included from ../src/time.cc:5:0:
../../nan/nan.h: In member function ‘v8::Local<v8::Value> Nan::AsyncWorker::GetFromPersistent(const v8::Local<v8::String>&) const’:
../../nan/nan.h:1557:55: error: no matching function for call to ‘v8::Object::Get(const v8::Local<v8::String>&)’
return scope.Escape(New(persistentHandle)->Get(key));
^
In file included from /home/shubham/.cache/node-gyp/13.8.0/include/node/node.h:63:0,
from ../../nan/nan.h:49,
from ../src/time.cc:5:
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:3594:43: note: candidate: v8::MaybeLocal<v8::Value> v8::Object::Get(v8::Local<v8::Context>, v8::Local<v8::Value>)
V8_WARN_UNUSED_RESULT MaybeLocal<Value> Get(Local<Context> context,
^~~
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:3594:43: note: candidate expects 2 arguments, 1 provided
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:3597:43: note: candidate: v8::MaybeLocal<v8::Value> v8::Object::Get(v8::Local<v8::Context>, uint32_t)
V8_WARN_UNUSED_RESULT MaybeLocal<Value> Get(Local<Context> context,
^~~
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:3597:43: note: candidate expects 2 arguments, 1 provided
In file included from ../src/time.cc:5:0:
../../nan/nan.h: In member function ‘v8::Local<v8::Value> Nan::AsyncWorker::GetFromPersistent(uint32_t) const’:
../../nan/nan.h:1562:57: error: no matching function for call to ‘v8::Object::Get(uint32_t&)’
return scope.Escape(New(persistentHandle)->Get(index));
^
In file included from /home/shubham/.cache/node-gyp/13.8.0/include/node/node.h:63:0,
from ../../nan/nan.h:49,
from ../src/time.cc:5:
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:3594:43: note: candidate: v8::MaybeLocal<v8::Value> v8::Object::Get(v8::Local<v8::Context>, v8::Local<v8::Value>)
V8_WARN_UNUSED_RESULT MaybeLocal<Value> Get(Local<Context> context,
^~~
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:3594:43: note: candidate expects 2 arguments, 1 provided
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:3597:43: note: candidate: v8::MaybeLocal<v8::Value> v8::Object::Get(v8::Local<v8::Context>, uint32_t)
V8_WARN_UNUSED_RESULT MaybeLocal<Value> Get(Local<Context> context,
^~~
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:3597:43: note: candidate expects 2 arguments, 1 provided
In file included from /usr/include/c++/7/cassert:44:0,
from /home/shubham/.cache/node-gyp/13.8.0/include/node/node_object_wrap.h:26,
from ../../nan/nan.h:51,
from ../src/time.cc:5:
../../nan/nan_object_wrap.h: In destructor ‘virtual Nan::ObjectWrap::~ObjectWrap()’:
../../nan/nan_object_wrap.h:24:25: error: ‘class Nan::Persistent<v8::Object>’ has no member named ‘IsNearDeath’
assert(persistent().IsNearDeath());
^
In file included from ../../nan/nan.h:2222:0,
from ../src/time.cc:5:
../../nan/nan_object_wrap.h: In member function ‘void Nan::ObjectWrap::MakeWeak()’:
../../nan/nan_object_wrap.h:67:18: error: ‘class Nan::Persistent<v8::Object>’ has no member named ‘MarkIndependent’
persistent().MarkIndependent();
^~~~~~~~~~~~~~~
In file included from /usr/include/c++/7/cassert:44:0,
from /home/shubham/.cache/node-gyp/13.8.0/include/node/node_object_wrap.h:26,
from ../../nan/nan.h:51,
from ../src/time.cc:5:
../../nan/nan_object_wrap.h: In static member function ‘static void Nan::ObjectWrap::WeakCallback(const v8::WeakCallbackInfo<Nan::ObjectWrap>&)’:
../../nan/nan_object_wrap.h:124:26: error: ‘class Nan::Persistent<v8::Object>’ has no member named ‘IsNearDeath’
assert(wrap->handle_.IsNearDeath());
^
../src/time.cc: At global scope:
../src/time.cc:12:20: error: ‘Handle’ has not been declared
static void Init(Handle<Object> target) {
^~~~~~
../src/time.cc:12:26: error: expected ‘,’ or ‘...’ before ‘<’ token
static void Init(Handle<Object> target) {
^
../src/time.cc: In static member function ‘static void Time::Init(int)’:
../src/time.cc:16:20: error: ‘target’ was not declared in this scope
Nan::SetMethod(target, "time", Time_);
^~~~~~
../src/time.cc:16:20: note: suggested alternative: ‘tzset’
Nan::SetMethod(target, "time", Time_);
^~~~~~
tzset
../src/time.cc: In static member function ‘static Nan::NAN_METHOD_RETURN_TYPE Time::Localtime(Nan::NAN_METHOD_ARGS_TYPE)’:
../src/time.cc:87:64: error: no matching function for call to ‘v8::Value::IntegerValue()’
time_t rawtime = static_cast<time_t>(info[0]->IntegerValue());
^
In file included from /home/shubham/.cache/node-gyp/13.8.0/include/node/node.h:63:0,
from ../../nan/nan.h:49,
from ../src/time.cc:5:
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:2774:40: note: candidate: v8::Maybe<long int> v8::Value::IntegerValue(v8::Local<v8::Context>) const
V8_WARN_UNUSED_RESULT Maybe<int64_t> IntegerValue(
^~~~~~~~~~~~
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:2774:40: note: candidate expects 1 argument, 0 provided
../src/time.cc: In static member function ‘static Nan::NAN_METHOD_RETURN_TYPE Time::Mktime(Nan::NAN_METHOD_ARGS_TYPE)’:
../src/time.cc:141:103: error: no matching function for call to ‘v8::Value::Int32Value()’
tmstr.tm_sec = Nan::Get(arg, Nan::New("seconds").ToLocalChecked()).ToLocalChecked()->Int32Value();
^
In file included from /home/shubham/.cache/node-gyp/13.8.0/include/node/node.h:63:0,
from ../../nan/nan.h:49,
from ../src/time.cc:5:
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:2778:40: note: candidate: v8::Maybe<int> v8::Value::Int32Value(v8::Local<v8::Context>) const
V8_WARN_UNUSED_RESULT Maybe<int32_t> Int32Value(Local<Context> context) const;
^~~~~~~~~~
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:2778:40: note: candidate expects 1 argument, 0 provided
../src/time.cc:142:103: error: no matching function for call to ‘v8::Value::Int32Value()’
tmstr.tm_min = Nan::Get(arg, Nan::New("minutes").ToLocalChecked()).ToLocalChecked()->Int32Value();
^
In file included from /home/shubham/.cache/node-gyp/13.8.0/include/node/node.h:63:0,
from ../../nan/nan.h:49,
from ../src/time.cc:5:
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:2778:40: note: candidate: v8::Maybe<int> v8::Value::Int32Value(v8::Local<v8::Context>) const
V8_WARN_UNUSED_RESULT Maybe<int32_t> Int32Value(Local<Context> context) const;
^~~~~~~~~~
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:2778:40: note: candidate expects 1 argument, 0 provided
../src/time.cc:143:101: error: no matching function for call to ‘v8::Value::Int32Value()’
tmstr.tm_hour = Nan::Get(arg, Nan::New("hours").ToLocalChecked()).ToLocalChecked()->Int32Value();
^
In file included from /home/shubham/.cache/node-gyp/13.8.0/include/node/node.h:63:0,
from ../../nan/nan.h:49,
from ../src/time.cc:5:
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:2778:40: note: candidate: v8::Maybe<int> v8::Value::Int32Value(v8::Local<v8::Context>) const
V8_WARN_UNUSED_RESULT Maybe<int32_t> Int32Value(Local<Context> context) const;
^~~~~~~~~~
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:2778:40: note: candidate expects 1 argument, 0 provided
../src/time.cc:144:106: error: no matching function for call to ‘v8::Value::Int32Value()’
tmstr.tm_mday = Nan::Get(arg, Nan::New("dayOfMonth").ToLocalChecked()).ToLocalChecked()->Int32Value();
^
In file included from /home/shubham/.cache/node-gyp/13.8.0/include/node/node.h:63:0,
from ../../nan/nan.h:49,
from ../src/time.cc:5:
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:2778:40: note: candidate: v8::Maybe<int> v8::Value::Int32Value(v8::Local<v8::Context>) const
V8_WARN_UNUSED_RESULT Maybe<int32_t> Int32Value(Local<Context> context) const;
^~~~~~~~~~
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:2778:40: note: candidate expects 1 argument, 0 provided
../src/time.cc:145:101: error: no matching function for call to ‘v8::Value::Int32Value()’
tmstr.tm_mon = Nan::Get(arg, Nan::New("month").ToLocalChecked()).ToLocalChecked()->Int32Value();
^
In file included from /home/shubham/.cache/node-gyp/13.8.0/include/node/node.h:63:0,
from ../../nan/nan.h:49,
from ../src/time.cc:5:
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:2778:40: note: candidate: v8::Maybe<int> v8::Value::Int32Value(v8::Local<v8::Context>) const
V8_WARN_UNUSED_RESULT Maybe<int32_t> Int32Value(Local<Context> context) const;
^~~~~~~~~~
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:2778:40: note: candidate expects 1 argument, 0 provided
../src/time.cc:146:100: error: no matching function for call to ‘v8::Value::Int32Value()’
tmstr.tm_year = Nan::Get(arg, Nan::New("year").ToLocalChecked()).ToLocalChecked()->Int32Value();
^
In file included from /home/shubham/.cache/node-gyp/13.8.0/include/node/node.h:63:0,
from ../../nan/nan.h:49,
from ../src/time.cc:5:
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:2778:40: note: candidate: v8::Maybe<int> v8::Value::Int32Value(v8::Local<v8::Context>) const
V8_WARN_UNUSED_RESULT Maybe<int32_t> Int32Value(Local<Context> context) const;
^~~~~~~~~~
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:2778:40: note: candidate expects 1 argument, 0 provided
../src/time.cc:147:113: error: no matching function for call to ‘v8::Value::Int32Value()’
tmstr.tm_isdst = Nan::Get(arg, Nan::New("isDaylightSavings").ToLocalChecked()).ToLocalChecked()->Int32Value();
^
In file included from /home/shubham/.cache/node-gyp/13.8.0/include/node/node.h:63:0,
from ../../nan/nan.h:49,
from ../src/time.cc:5:
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:2778:40: note: candidate: v8::Maybe<int> v8::Value::Int32Value(v8::Local<v8::Context>) const
V8_WARN_UNUSED_RESULT Maybe<int32_t> Int32Value(Local<Context> context) const;
^~~~~~~~~~
/home/shubham/.cache/node-gyp/13.8.0/include/node/v8.h:2778:40: note: candidate expects 1 argument, 0 provided
../src/time.cc: At global scope:
../src/time.cc:156:21: error: variable or field ‘init’ declared void
static void init (Handle<Object> target) {
^~~~~~
../src/time.cc:156:21: error: ‘Handle’ was not declared in this scope
../src/time.cc:156:21: note: suggested alternative: ‘rand_r’
static void init (Handle<Object> target) {
^~~~~~
rand_r
../src/time.cc:156:34: error: expected primary-expression before ‘>’ token
static void init (Handle<Object> target) {
^
../src/time.cc:156:36: error: ‘target’ was not declared in this scope
static void init (Handle<Object> target) {
^~~~~~
../src/time.cc:156:36: note: suggested alternative: ‘tzset’
static void init (Handle<Object> target) {
^~~~~~
tzset
In file included from ../../nan/nan.h:49:0,
from ../src/time.cc:5:
../src/time.cc:159:21: error: ‘init’ was not declared in this scope
NODE_MODULE(time, init)
^
/home/shubham/.cache/node-gyp/13.8.0/include/node/node.h:610:36: note: in definition of macro ‘NODE_MODULE_X’
(node::addon_register_func) (regfunc), \
^~~~~~~
../src/time.cc:159:3: note: in expansion of macro ‘NODE_MODULE’
NODE_MODULE(time, init)
^~~~~~~~~~~
../src/time.cc:159:21: note: suggested alternative: ‘int’
NODE_MODULE(time, init)
^
/home/shubham/.cache/node-gyp/13.8.0/include/node/node.h:610:36: note: in definition of macro ‘NODE_MODULE_X’
(node::addon_register_func) (regfunc), \
^~~~~~~
../src/time.cc:159:3: note: in expansion of macro ‘NODE_MODULE’
NODE_MODULE(time, init)
^~~~~~~~~~~
time.target.mk:113: recipe for target 'Release/obj.target/time/src/time.o' failed
make: *** [Release/obj.target/time/src/time.o] Error 1
make: Leaving directory '/home/shubham/Desktop/hotel/demo_hotellife_api/node_modules/time/build'
gyp ERR! build error
gyp ERR! stack Error: `make` failed with exit code: 2
gyp ERR! stack at ChildProcess.onExit (/usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:194:23)
gyp ERR! stack at ChildProcess.emit (events.js:321:20)
gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:275:12)
gyp ERR! System Linux 5.0.0-37-generic
gyp ERR! command "/usr/local/bin/node" "/usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /home/shubham/Desktop/hotel/demo_hotellife_api/node_modules/time
gyp ERR! node -v v13.8.0
gyp ERR! node-gyp -v v5.0.5
gyp ERR! not ok
npm WARN Trump@1.0.0 No repository field.
npm WARN Trump@1.0.0 No license field.
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@1.1.2 (node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.1.2: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: ursa@0.9.4 (node_modules/ursa):
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: ursa@0.9.4 install: `node-gyp rebuild`
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: Exit status 1
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! time@0.12.0 install: `node-gyp rebuild`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the time@0.12.0 install script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /home/shubham/.npm/_logs/2020-05-07T11_31_00_132Z-debug.log
```
**Things I tried**
downgraded to various node versions like to v8.11.0- same problem
downgraded node to v6.9.5 as mentioned in package.json file. but there started getting various
JS versioning errors like '...Object' not supported. not supporting es6.
installed node-gyp globally, still nothing resolved.
please help me on my maiden command of my maiden project | 2020/05/07 | [
"https://Stackoverflow.com/questions/61657685",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11847762/"
] | The problem you have is the **time** package which is outdated <https://www.npmjs.com/package/time>.
There are different solutions depending on why you are using that package: you could use a different library/package to handle dates and times, but you will probably need to refactor some code.
Try removing this dependency from the **package.json**, `"time": "^0.12.0"` and running `npm install` again to see if it runs correctly.
Then change the dependency to use an updated fork of `time` like this one
<https://github.com/riyadshauk/node-time>. Your `time` dependency on **package.json** should look like this
```
"dependencies": {
"time": "github:riyadshauk/node-time"
}
```
Your full dependencies should be like this (assuming you haven't installed/removed anything since your question), the only change is on `time`, the rest is the same.
I used node v13.7.0
```
"dependencies": {
"apn": "^2.1.5",
"async": "^1.5.2",
"bcrypt-nodejs": "^0.0.3",
"body-parser": "^1.15.1",
"compression": "^1.6.2",
"consolidate": "^0.14.5",
"cookie-parser": "^1.4.1",
"cors": "^2.8.4",
"dotenv": "^2.0.0",
"express": "^4.13.4",
"express-validator": "^2.20.4",
"fcm-node": "^1.2.0",
"geocoder": "^0.2.3",
"jimp": "^0.2.28",
"jsonwebtoken": "^5.7.0",
"moment": "^2.12.0",
"moment-timezone": "^0.5.14",
"mongoose": "^4.4.8",
"mongoose-unique-validator": "^1.0.6",
"morgan": "^1.7.0",
"multer": "^1.3.0",
"mustache": "^2.3.0",
"node-cron": "^1.2.1",
"node-excel-export": "^1.4.4",
"nodemailer": "^2.3.0",
"nodemailer-mailgun-transport": "^1.3.5",
"nodemailer-smtp-transport": "^2.7.4",
"qr-image": "^3.2.0",
"randomstring": "^1.1.5",
"request": "^2.69.0",
"soap": "^0.23.0",
"socket.io": "^2.0.3",
"strong-soap": "^1.5.0",
"swagger-jsdoc": "^1.9.3",
"swagger-ui-express": "^1.0.7",
"time": "github:riyadshauk/node-time",
"twilio": "^3.9.1",
"underscore": "^1.8.3",
"uuid": "^3.2.1",
"ws.js": "^2.0.23"
}
``` | This works for me
```
npm install -g npm-check-updates
npm-check-updates -u
npm install
``` |
11,067,697 | I'm building a calendar-based web app, and am in great need of a javascript Date library-- something similar to python's [dateutil](http://labix.org/python-dateutil). I came across [DateJs](http://www.datejs.com/). The functionality looks great. My only hesitance is that the repo hasn't been touched since early 2008. Usually this is a (very) bad sign for an open source project. So I wanted to check whether people currently use, trust and recommend this library, or whether its just an abandoned set of good intentions with good SEO. If not, is there another comprehensive JS date library I should be looking at? | 2012/06/16 | [
"https://Stackoverflow.com/questions/11067697",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/652693/"
] | I'd like to recommend [momentjs](http://momentjs.com/) - a very lightweight, yet surprisingly capable Date JS library. ) | DateJS works wonders for us. I am not really concerned that development seems to have stalled as it is pretty complete as-is. |
11,067,697 | I'm building a calendar-based web app, and am in great need of a javascript Date library-- something similar to python's [dateutil](http://labix.org/python-dateutil). I came across [DateJs](http://www.datejs.com/). The functionality looks great. My only hesitance is that the repo hasn't been touched since early 2008. Usually this is a (very) bad sign for an open source project. So I wanted to check whether people currently use, trust and recommend this library, or whether its just an abandoned set of good intentions with good SEO. If not, is there another comprehensive JS date library I should be looking at? | 2012/06/16 | [
"https://Stackoverflow.com/questions/11067697",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/652693/"
] | I'd like to recommend [momentjs](http://momentjs.com/) - a very lightweight, yet surprisingly capable Date JS library. ) | Try <https://github.com/abritinthebay/datejs> ,
It is a forked version of <http://www.datejs.com/>
It is actively maintained by a new author! |
11,067,697 | I'm building a calendar-based web app, and am in great need of a javascript Date library-- something similar to python's [dateutil](http://labix.org/python-dateutil). I came across [DateJs](http://www.datejs.com/). The functionality looks great. My only hesitance is that the repo hasn't been touched since early 2008. Usually this is a (very) bad sign for an open source project. So I wanted to check whether people currently use, trust and recommend this library, or whether its just an abandoned set of good intentions with good SEO. If not, is there another comprehensive JS date library I should be looking at? | 2012/06/16 | [
"https://Stackoverflow.com/questions/11067697",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/652693/"
] | Try <https://github.com/abritinthebay/datejs> ,
It is a forked version of <http://www.datejs.com/>
It is actively maintained by a new author! | DateJS works wonders for us. I am not really concerned that development seems to have stalled as it is pretty complete as-is. |
1,314,717 | In python, I can construct my [optparse](http://docs.python.org/library/optparse.html) instance such that it will automatically filter out the options and non-option/flags into two different buckets:
```
(options, args) = parser.parse_args()
```
With boost::program\_options, how do I retrieve a list of tokens which are the remaining non-option and non-flag tokens?
e.g. If my program has flags
```
--foo
--bar BAR
```
and I then pass in the command line:
```
--foo hey --bar BAR you
```
how can I get a list comprised solely of tokens "hey" and "you" | 2009/08/22 | [
"https://Stackoverflow.com/questions/1314717",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20712/"
] | IIRC, you have to use a combination of [`positional_options_description`](http://www.boost.org/doc/libs/1_39_0/doc/html/program_options/overview.html#id2892937) and [*hidden options*](http://www.boost.org/doc/libs/1_39_0/doc/html/program_options/howto.html#id2893967). The idea is to (1) add a normal option and give it a name, maybe something like `--positional=ARG`, (2) don't include that option in the help description, (3) configure `command_line_parser` to treat all positional arguments as if `--positional` was specified, and (4) retrieve the positional arguments using `vm["positional"].as< std::vector<std::string> >()`.
There is probably an example somewhere in the source tree but I don't have it on this machine right now. | Here is an example:
```
namespace po = boost::program_options;
po::positional_options_description m_positional;
po::options_description m_cmdLine;
po::variables_map m_variables;
m_cmdLine.add_options()
(/*stuff*/)
("input", po::value<vector<string> >()->composing(), "")
;
m_positional.add("input", -1);
po::parsed_options parsed = po::command_line_parser(argc, argv)
.options(m_cmdLine)
.positional(m_positional)
.allow_unregistered()
.run();
// store, notify, etc
```
Then just get "input" named options as vector of strings and you are all set. |
59,801,340 | I have been attempting to make a small Python program to monitor and return ping results from different servers. I have reached a point where pinging each device in sequence has become inefficient and slow. I want to continuously ping each of my targets at the same time in my Python program.
What would the best approach to this be? Thanks for your time.
```
def get_latency(ip_address, port):
from tcp_latency import measure_latency
from datetime import datetime
now = datetime.now()
current_time = now.strftime("%Y-%m-%d %H:%M:%S")
latency = str(measure_latency(host=ip_address, port=port, runs=1, timeout=1))[1:-1]
#add to table and upload to database function()
ip_address_list = [('google.com', '80'), ('bing.com', '80')]
#Problem
#run function simultaneously but with different arguments
get_latency(ip_address_list[0][0], ip_address_list[0][1])
get_latency(ip_address_list[1][0], ip_address_list[1][1])
``` | 2020/01/18 | [
"https://Stackoverflow.com/questions/59801340",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12737497/"
] | First of all, you can't have many ids with the same value in one HTML page, as it will cause errors later on as your code grows. Please change your btnedt to a class, not an id, then change your script like this.
```
<script>
$(document).ready(function () {
$(document).on('click', '.btnedt', function() {
// alert($(this).closest('tr').find('.contact_name').text());
$("#newModal").modal("show");
});
});
</script>
```
If the modal itself is the issue, then here are some reasons why it's not appearing:
1. **Remove the fade class in your modal.**
2. **The versions of your Javascript and Bootstrap does not match.**
3. **You forgot to include bootstrap.js library in the document**
Please check out [this related question](https://stackoverflow.com/questions/36672304/bootstrap-mymodal-modalshow-is-not-working) | Use class instead of multiple IDs
=================================
>
> Share your modal code
>
>
>
```
var mem_butn = "<td><input type=\"button\" class=\"btnedt\" value=\"Edit\" /></td>";
```
```
<script>
$(document).ready(function () {
$('body').on('click', '.btnedt', function() {
$("#newModal").modal("show");
});
});
</script>
``` |
61,005,152 | Here I am using `fft` function of `numpy` to plot the fft of PCM wave generated from a 10000Hz sine wave. But the amplitude of the plot I am getting is wrong.
The frequency is coming correct using `fftfreq` function which I am printing in the console itself. My python code is here.
```
import numpy as np
import matplotlib.pyplot as plt
frate = 44100
filename = 'Sine_10000Hz.bin' #signed16 bit PCM of a 10000Hz sine wave
f = open('Sine_10000Hz.bin','rb')
y = np.fromfile(f,dtype='int16') #Extract the signed 16 bit PCM value of 10000Hz Sine wave
f.close()
####### Spectral Analysis #########
fft_value = np.fft.fft(y)
freqs = np.fft.fftfreq(len(fft_value)) # frequencies associated with the coefficients:
print("freqs.min(), freqs.max()")
idx = np.argmax(np.abs(fft_value)) # Find the peak in the coefficients
freq = freqs[idx]
freq_in_hertz = abs(freq * frate)
print("\n\n\n\n\n\nfreq_in_hertz")
print(freq_in_hertz)
for i in range(2):
print("Value at index {}:\t{}".format(i, fft_value[i + 1]), "\nValue at index {}:\t{}".format(fft_value.size -1 - i, fft_value[-1 - i]))
#####
n_sa = 8 * int(freq_in_hertz)
t_fft = np.linspace(0, 1, n_sa)
T = t_fft[1] - t_fft[0] # sampling interval
N = n_sa #Here it is n_sample
print("\nN value=")
print(N)
# 1/T = frequency
f = np.linspace(0, 1 / T, N)
plt.ylabel("Amplitude")
plt.xlabel("Frequency [Hz]")
plt.xlim(0,15000)
# 2 / N is a normalization factor Here second half of the sequence gives us no new information that the half of the FFT sequence is the output we need.
plt.bar(f[:N // 2], np.abs(fft_value)[:N // 2] * 2 / N, width=15,color="red")
```
Output comes in the console (Only minimal prints I am pasting here)
```
freqs.min(), freqs.max()
-0.5 0.49997732426303854
freq_in_hertz
10000.0
Value at index 0: (19.949569768991054-17.456031216294324j)
Value at index 44099: (19.949569768991157+17.45603121629439j)
Value at index 1: (9.216783424692835-13.477631008179145j)
Value at index 44098: (9.216783424692792+13.477631008179262j)
N value=
80000
```
The frequency extraction comes out correctly, but I am doing something incorrect in the plot that I can't figure out.
Updating the work:
1. When I change the multiplication factor **10** in the line `n_sa = 10 * int(freq_in_hertz)` to **5**, it gives me the correct plot. Whether that is correct or not, I am not able to understand.
2. In the line `plt.xlim(0,15000)`, if I increase the max value to **20000** it again does not plot. Up to **15000** it plots correctly.
3. I generated this `Sine_10000Hz.bin` using the Audacity tool, where I generated a sine wave of freq 10000Hz, 1 sec duration, and a sampling rate of 44100. Then I exported this audio to signed 16-bit headerless (i.e. raw PCM). I was able to regenerate this sine wave using this script. I also want to calculate the FFT of this, so I expect a peak at 10000Hz with amplitude 32767. You can see I changed the multiplication factor to `8` instead of `10` in the line `n_sa = 8 * int(freq_in_hertz)`. Hence it worked, but the amplitude shown is incorrect. I will attach my new figure here[](https://i.stack.imgur.com/dQSyX.png) | 2020/04/03 | [
"https://Stackoverflow.com/questions/61005152",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12673488/"
] | After a lot of homework I was able to find my issue. As I mentioned in **Updating the work**, the reason was that the number of samples I took was wrong.
I changed the two lines in the code
```
n_sa = 8 * int(freq_in_hertz)
t_fft = np.linspace(0, 1, n_sa)
```
to
```
n_sa = y.size  # number of samples, taken directly from the raw 16-bit data
t_fft = np.arange(n_sa)/frate  # here we need to divide each sample index by the sampling rate
```
This solved my issue.
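For reference, a minimal consolidated sketch of the corrected flow (assuming the same `Sine_10000Hz.bin` raw 16-bit PCM file as above) would be something like:
```
import numpy as np
import matplotlib.pyplot as plt
frate = 44100                                        # sample rate used when exporting the PCM
y = np.fromfile('Sine_10000Hz.bin', dtype='int16')   # raw signed 16-bit samples
n_sa = y.size                                        # number of samples actually read
fft_value = np.fft.fft(y)
f = np.fft.fftfreq(n_sa, d=1.0 / frate)              # frequency axis in Hz
# 2/N scaling turns FFT magnitudes into single-sided amplitudes,
# so a full-scale 16-bit sine peaks near 32767
plt.bar(f[:n_sa // 2], np.abs(fft_value)[:n_sa // 2] * 2.0 / n_sa, width=15, color="red")
plt.xlabel("Frequency [Hz]")
plt.ylabel("Amplitude")
plt.show()
```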
My spectral output is [](https://i.stack.imgur.com/Xd7bF.png)
Special thanks to @meta4 and @YoniChechik for giving me some suggestions. | I'm not sure exactly what you are trying to do, but my suspicion is that the Sine\_10000Hz.bin file isn't what you think it is.
Is it possible it contains more than one channel (left & right)?
Is it really signed 16-bit integers?
It's not hard to create a 10kHz sine wave in 16 bit integers in numpy.
```py
import numpy as np
import matplotlib.pyplot as plt
n_samples = 2000
f_signal = 10000 # (Hz) Signal Frequency
f_sample = 44100 # (Hz) Sample Rate
amplitude = 2**3 # Arbitrary. Must be > 1. Should be > 2. Larger makes FFT results better
time = np.arange(n_samples) / f_sample # sample times
# The signal
y = (np.sin(time * f_signal * 2 * np.pi) * amplitude).astype('int16')
```
If you plot 30 points of the signal you can see there are about 5 points per cycle.
```py
plt.plot(time[:30], y[:30], marker='o')
plt.xlabel('Time (s)')
plt.yticks([]); # Amplitude value is artificial. hide it
```
[](https://i.stack.imgur.com/hQXt2.png)
If you plot 30 samples of the data from Sine\_10000Hz.bin does it have about 5 points per cycle?
This is my attempt to recreate the FFT work as I understand it.
```
fft_value = np.fft.fft(y) # compute the FFT
freqs = np.fft.fftfreq(len(fft_value)) * f_sample # frequencies for each FFT bin
N = len(y)
plt.plot(freqs[:N//2], np.abs(fft_value[:N//2]))
plt.yscale('log')
plt.ylabel("Amplitude")
plt.xlabel("Frequency [Hz]")
```
I get the following plot
[](https://i.stack.imgur.com/ACMl2.png)
The y-axis of this plot is on a log scale. Notice that the amplitude of the peak is in the thousands. The amplitude of most of the rest of the data points is around 100.
```py
idx_max = np.argmax(np.abs(fft_value)) # Find the peak in the coefficients
idx_min = np.argmin(np.abs(fft_value)) # Find the peak in the coefficients
print(f'idx_max = {idx_max}, idx_min = {idx_min}')
print(f'f_max = {freqs[idx_max]}, f_min = {freqs[idx_min]}')
print(f'fft_value[idx_max] {fft_value[idx_max]}')
print(f'fft_value[idx_min] {fft_value[idx_min]}')
```
produces:
```
idx_max = 1546, idx_min = 1738
f_max = -10010.7, f_min = -5777.1
fft_value[idx_max] (-4733.232076236707+219.11718299533203j)
fft_value[idx_min] (-0.17017443966211232+0.9557200531465061j)
``` |
61,005,152 | Here I am using `fft` function of `numpy` to plot the fft of PCM wave generated from a 10000Hz sine wave. But the amplitude of the plot I am getting is wrong.
The frequency is coming correct using `fftfreq` function which I am printing in the console itself. My python code is here.
```
import numpy as np
import matplotlib.pyplot as plt
frate = 44100
filename = 'Sine_10000Hz.bin' #signed16 bit PCM of a 10000Hz sine wave
f = open('Sine_10000Hz.bin','rb')
y = np.fromfile(f,dtype='int16') #Extract the signed 16 bit PCM value of 10000Hz Sine wave
f.close()
####### Spectral Analysis #########
fft_value = np.fft.fft(y)
freqs = np.fft.fftfreq(len(fft_value)) # frequencies associated with the coefficients:
print("freqs.min(), freqs.max()")
idx = np.argmax(np.abs(fft_value)) # Find the peak in the coefficients
freq = freqs[idx]
freq_in_hertz = abs(freq * frate)
print("\n\n\n\n\n\nfreq_in_hertz")
print(freq_in_hertz)
for i in range(2):
print("Value at index {}:\t{}".format(i, fft_value[i + 1]), "\nValue at index {}:\t{}".format(fft_value.size -1 - i, fft_value[-1 - i]))
#####
n_sa = 8 * int(freq_in_hertz)
t_fft = np.linspace(0, 1, n_sa)
T = t_fft[1] - t_fft[0] # sampling interval
N = n_sa #Here it is n_sample
print("\nN value=")
print(N)
# 1/T = frequency
f = np.linspace(0, 1 / T, N)
plt.ylabel("Amplitude")
plt.xlabel("Frequency [Hz]")
plt.xlim(0,15000)
# 2 / N is a normalization factor Here second half of the sequence gives us no new information that the half of the FFT sequence is the output we need.
plt.bar(f[:N // 2], np.abs(fft_value)[:N // 2] * 2 / N, width=15,color="red")
```
Output comes in the console (Only minimal prints I am pasting here)
```
freqs.min(), freqs.max()
-0.5 0.49997732426303854
freq_in_hertz
10000.0
Value at index 0: (19.949569768991054-17.456031216294324j)
Value at index 44099: (19.949569768991157+17.45603121629439j)
Value at index 1: (9.216783424692835-13.477631008179145j)
Value at index 44098: (9.216783424692792+13.477631008179262j)
N value=
80000
```
The frequency extraction comes out correctly, but I am doing something incorrect in the plot that I can't figure out.
Updating the work:
1. When I change the multiplication factor **10** in the line `n_sa = 10 * int(freq_in_hertz)` to **5**, it gives me the correct plot. Whether that is correct or not, I am not able to understand.
2. In the line `plt.xlim(0,15000)`, if I increase the max value to **20000** it again does not plot. Up to **15000** it plots correctly.
3. I generated this `Sine_10000Hz.bin` using the Audacity tool, where I generated a sine wave of freq 10000Hz, 1 sec duration, and a sampling rate of 44100. Then I exported this audio to signed 16-bit headerless (i.e. raw PCM). I was able to regenerate this sine wave using this script. I also want to calculate the FFT of this, so I expect a peak at 10000Hz with amplitude 32767. You can see I changed the multiplication factor to `8` instead of `10` in the line `n_sa = 8 * int(freq_in_hertz)`. Hence it worked, but the amplitude shown is incorrect. I will attach my new figure here[](https://i.stack.imgur.com/dQSyX.png) | 2020/04/03 | [
"https://Stackoverflow.com/questions/61005152",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12673488/"
] | After a lot of homework I was able to find my issue. As I mentioned in **Updating the work**, the reason was that the number of samples I took was wrong.
I changed the two lines in the code
```
n_sa = 8 * int(freq_in_hertz)
t_fft = np.linspace(0, 1, n_sa)
```
to
```
n_sa = y.size  # number of samples, taken directly from the raw 16-bit data
t_fft = np.arange(n_sa)/frate  # here we need to divide each sample index by the sampling rate
```
This solved my issue.
My spectral output is [](https://i.stack.imgur.com/Xd7bF.png)
Special thanks to @meta4 and @YoniChechik for giving me some suggestions. | I'm adding a link to a script I've built that outputs the FFT with ACTUAL amplitude (for real signals - e.g. your signal). Have a go and see if it works:
`dt=1/frate` in your constellation....
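A rough sketch of the same idea (not the exact linked script; it assumes `y` is the int16 signal array and `frate` is the sample rate):
```
import numpy as np
def single_sided_amplitude(y, frate):
    """Return (frequencies in Hz, amplitude) for a real-valued signal y."""
    n = y.size
    spectrum = np.fft.rfft(y)                  # FFT of a real signal: positive frequencies only
    freqs = np.fft.rfftfreq(n, d=1.0 / frate)  # dt = 1/frate, as noted above
    amplitude = np.abs(spectrum) * 2.0 / n     # a sine of amplitude A then peaks at roughly A
    amplitude[0] /= 2.0                        # the DC bin must not be doubled (same caveat for the Nyquist bin if n is even)
    return freqs, amplitude
```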
<https://stackoverflow.com/a/53925342/4879610> |
64,924,830 | Is it possible to write an API with Python so you can connect a physical ON and OFF switch via USB to a PC, and when the user presses the switch to ON or OFF, the Python program detects it and sends a signal to a web app that shows an ON or OFF message on the website?
I am sorry if what I am asking is not clear enough! | 2020/11/20 | [
"https://Stackoverflow.com/questions/64924830",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12487489/"
] | You can simply use `std::optional`:
```
String(const std::optional<int> &min_len, const std::optional<int> &max_len,
const std::optional<std::string> &pattern);
Type *type = new String(5, {}, std::nullopt); // last 2 parameters are omitted.
```
For C++14 you can use similar constructs that exist in other open source libraries, (e.g. `boost::optional`, `folly::Optional`). | Have you tried to use an [std::optional](https://en.cppreference.com/w/cpp/utility/optional) (since C++17)?
I know you mentioned the need to use C++14 compatible code, but there is a [boost::optional](https://www.boost.org/doc/libs/1_65_1/libs/optional/doc/html/index.html) available. |
64,924,830 | Is it possible to write an API with Python so you can connect a physical ON and OFF switch via USB to a PC, and when the user presses the switch to ON or OFF, the Python program detects it and sends a signal to a web app that shows an ON or OFF message on the website?
I am sorry if what I am asking is not clear enough! | 2020/11/20 | [
"https://Stackoverflow.com/questions/64924830",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12487489/"
] | You can simply use `std::optional`:
```
String(const std::optional<int> &min_len, const std::optional<int> &max_len,
const std::optional<std::string> &pattern);
Type *type = new String(5, {}, std::nullopt); // last 2 parameters are omitted.
```
For C++14 you can use similar constructs that exist in other open source libraries, (e.g. `boost::optional`, `folly::Optional`). | You can write your own class that can contain a value or not if you can't use std::optional. It is not a lot of code. You can make its interface like std::optional's, or make something different; what matters is the data:
```
class OptInt {
bool set_;
int value_;
public:
OptInt() : set_(false) , value_(0) {}
OptInt(int v) : set_(true), value_(v) {}
// can add other ways how to create it
bool isSet() const {return set_;}
int get() const {if (!set_) throw 666; return value_;}
// can add other operations you want to it.
};
```
Then you can use such a default-constructed OptInt as your default argument, so it will not be set; but if the caller provides an int argument then it will be set. |
64,924,830 | Is it possible to write an API with Python so you can connect a physical ON and OFF switch via USB to a PC, and when the user presses the switch to ON or OFF, the Python program detects it and sends a signal to a web app that shows an ON or OFF message on the website?
I am sorry if what I am asking is not clear enough! | 2020/11/20 | [
"https://Stackoverflow.com/questions/64924830",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12487489/"
] | You can write your own class that can contain a value or not if you can't use std::optional. It is not a lot of code. You can make its interface like std::optional's, or make something different; what matters is the data:
```
class OptInt {
bool set_;
int value_;
public:
OptInt() : set_(false) , value_(0) {}
OptInt(int v) : set_(true), value_(v) {}
// can add other ways how to create it
bool isSet() const {return set_;}
int get() const {if (!set_) throw 666; return value_;}
// can add other operations you want to it.
};
```
Then you can use such a default-constructed OptInt as your default argument, so it will not be set; but if the caller provides an int argument then it will be set. | Have you tried to use an [std::optional](https://en.cppreference.com/w/cpp/utility/optional) (since C++17)?
I know you mentioned the need to use C++14 compatible code, but there is a [boost::optional](https://www.boost.org/doc/libs/1_65_1/libs/optional/doc/html/index.html) available. |
64,924,830 | Is it possible to write an API with Python so you can connect a physical ON and OFF switch via USB to a PC, and when the user presses the switch to ON or OFF, the Python program detects it and sends a signal to a web app that shows an ON or OFF message on the website?
I am sorry if what I am asking is not clear enough! | 2020/11/20 | [
"https://Stackoverflow.com/questions/64924830",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12487489/"
] | I lack reputation points to comment on @Kostas
To be compatible with C++14 you can try experimental namespace which has optional (if it is available/works on your compiler)
```
#include <experimental/optional>
```
then you can use
```
String(std::experimental::optional<int> &min_len,.....)
min_len.value_or(-1)
``` | Have you tried to use an [std::optional](https://en.cppreference.com/w/cpp/utility/optional) (since C++17)?
I know you mentioned the need to use C++14 compatible code, but there is a [boost::optional](https://www.boost.org/doc/libs/1_65_1/libs/optional/doc/html/index.html) available. |
64,924,830 | Is it possible to write an API with Python so you can connect a physical ON and OFF switch via USB to a PC, and when the user presses the switch to ON or OFF, the Python program detects it and sends a signal to a web app that shows an ON or OFF message on the website?
I am sorry if what I am asking is not clear enough! | 2020/11/20 | [
"https://Stackoverflow.com/questions/64924830",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12487489/"
] | I lack reputation points to comment on @Kostas
To be compatible with C++14 you can try experimental namespace which has optional (if it is available/works on your compiler)
```
#include <experimental/optional>
```
then you can use
```
String(std::experimental::optional<int> &min_len,.....)
min_len.value_or(-1)
``` | You can write your own class that can contain a value or not if you can't use std::optional. It is not a lot of code. You can make its interface like std::optional's, or make something different; what matters is the data:
```
class OptInt {
bool set_;
int value_;
public:
OptInt() : set_(false) , value_(0) {}
OptInt(int v) : set_(true), value_(v) {}
// can add other ways how to create it
bool isSet() const {return set_;}
int get() const {if (!set_) throw 666; return value_;}
// can add other operations you want to it.
};
```
Then you can use such a default-constructed OptInt as your default argument, so it will not be set; but if the caller provides an int argument then it will be set. |
8,210,344 | I get the following error:
`ImportError: No module named numeric` if I have the following import
```
from numeric import *
```
in my python source code. How do I get this running on my Windows box against a python 2.7.x compiler? | 2011/11/21 | [
"https://Stackoverflow.com/questions/8210344",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1004443/"
] | You will probably need to install this module: <http://numpy.scipy.org/>
There are binaries for windows too, so installation should be easy.
Josh | There is no common module called `numeric`. Are you sure you don't mean `import numpy`? |
8,210,344 | I get the following error:
`ImportError: No module named numeric` if I have the following import
```
from numeric import *
```
in my python source code. How do I get this running on my Windows box against a python 2.7.x compiler? | 2011/11/21 | [
"https://Stackoverflow.com/questions/8210344",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1004443/"
] | There *is* a module called numeric, but it's been deprecated for years in favour of numpy. You probably want to update your code to use numpy instead.
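For example, a rough sketch of what the switch might look like (assuming the old code only used basic array constructors; the exact replacements depend on what your script imports):
```
# Old Numeric-era style:
#   from numeric import *
#   a = zeros(10)
# Rough numpy equivalent:
import numpy as np
a = np.zeros(10)          # numpy provides the same basic array constructors
b = np.array([1, 2, 3])   # most old Numeric functions have a direct numpy counterpart
```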
If you really need numeric, you can get it [here](http://sourceforge.net/projects/numpy/files/Old%20Numeric/24.2/), but you'll have to compile it from source for Python 2.7, because the latest binaries are for 2.4. | There is no common module called `numeric`. Are you sure you don't mean `import numpy`? |
4,227,503 | I would like to establish a good naming scheme for physical/mathematical quantities used in my simulation code. Consider the following example:
```
from math import *
class GaussianBeamIntensity(object):
"""
Optical intensity profile of a Gaussian laser beam.
"""
def __init__(self, intensity_at_waist_center, waist_radius, wavelength):
"""
Arguments:
*intensity_at_waist_center*: The optical intensity of the beam at the
center of its waist in W/m^2 units.
*waist_radius*: The radius of the beam waist in meters.
*wavelength*: The wavelength of the laser beam in meters.
"""
self.intensity_at_waist_center = intensity_at_waist_center
self.waist_radius = waist_radius
self.wavelength = wavelength
self._calculate_auxiliary_quantities()
def _calculate_auxiliary_quantities(self):
# Shorthand notation
w_0, lambda_ = self.waist_radius, self.wavelength
self.rayleigh_range = pi * w_0**2 / lambda_
# Generally some more quantities could follow
def __call__(self, rho, z):
"""
Arguments:
*rho*, *z*: Cylindrical coordinates of a spatial point.
"""
# Shorthand notation
I_0, w_0 = self.intensity_at_waist_center, self.waist_radius
z_R = self.rayleigh_range
w_z = w_0 * sqrt(1.0 + (z / z_R)**2)
I = I_0 * (w_0 / w_z)**2 * exp(-2.0 * rho**2 / w_z**2)
return I
```
What consistent naming scheme would you propose for the physical properties (properties, function arguments etc.) in order to balance between **readability** and **concise notation** (that formulae remain relatively short)? Could you please refine the example above? Or perhaps propose a better scheme?
It would be nice to follow the guidelines of [PEP8](http://www.python.org/dev/peps/pep-0008/ "PEP8"), remembering that "A Foolish Consistency is the Hobgoblin of Little Minds". It seems difficult to stick to descriptive names while obeying the traditional 80-character limit for line lengths.
Thank you in advance! | 2010/11/19 | [
"https://Stackoverflow.com/questions/4227503",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/335609/"
] | I think you've already found a good balance. Expressive names are important, so I totally agree with the use of *wavelength* instead of lambda as a class attribute. This way the interface remains clear and expressive.
In a long formula, though, lambda\_ is a good choice as shorthand notation, because this is a commonly accepted and widely used notation for the wavelength in optics. I think when you implement a formula, what you want to do is to stay as close as possible to the form of the equations you'd write on a piece of paper (or as they appear in an article etc.).
In short: keep the interfaces expressive, the formulae short. | Use Python3 and you can use the actual symbol λ for a variable name.
I look forward to writing code like:
```
from math import pi as π
sphere_volume = lambda r : 4/3 * π * r**3
``` |
53,545,656 | I know there are many post related to dictionary operations but I could not find the solution for my special case.
I have a list of dictionaries (repeated dictionary keys with similar or different values) and I have to create a new dictionary from this list.
Eg:
```
a = [{u'a': 1}, {u'a': 2}, {u'a': 1}, {u'b': 2}, {u'b': 1}, {u'c': 1}, {u'c': 1}]
```
Output I am looking for:
```
{'a': 2, 'b':2, 'c': 1}
```
So as you can see, I just want one entry per key from the list, and the value for that key would be the max of all values. I hope it's not too confusing.
I have come up with a working solution, but I just wanted to check if there is a more Pythonic answer to this (with fewer lines or a better way).
This is my working solution:
```
d = {}
for i in a:
if not d.get(i.keys()[0]):
d.update(i)
elif d.get(i.keys()[0], 0) < i.values()[0]:
d.update(i)
print d
```
Thanks for your time. | 2018/11/29 | [
"https://Stackoverflow.com/questions/53545656",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2565385/"
] | You can sort the list `a` so that like keys are grouped and the largest values come last. Then build the dict from the sorted pairs so that the last (largest) value for each key is the one left in the dict:
```
>>> a = [{u'a': 1}, {u'a': 2}, {u'a': 1}, {u'b': 2}, {u'b': 1}, {u'c': 1}, {u'c': 1}]
>>> {k:v for k,v in (x.items()[0] for x in sorted(a))}
{u'a': 2, u'c': 1, u'b': 2}
```
Or, alternate syntax:
```
>>> dict(x.items()[0] for x in sorted(a))
```
For Python 2 and 3 syntax:
```
>>> {k:v for k,v in (sorted(list(x.items())[0] for x in a))}
{'a': 2, 'b': 2, 'c': 1}
>>> dict(sorted(list(x.items())[0] for x in a))
{'a': 2, 'b': 2, 'c': 1}
```
---
From comments: *what's happening here?*
First, let's come up with a more instructive example:
```
>>> a = [{u'a': -1}, {u'a': -11}, {u'a': -3}, {u'b': 0}, {u'b': 100}, {u'c': 3}, {u'c': 1}]
```
So the desired result here (for Python 3 dicts that maintain order, or with an OrderedDict) is that the entries come out i) with keys grouped in sorted order and then ii) with values interpreted as numbers in increasing order.
So try this first:
```
>>> sorted(list(x.items())[0] for x in a)
[('a', -11), ('a', -3), ('a', -1), ('b', 0), ('b', 100), ('c', 1), ('c', 3)]
```
Break it apart:
```
sorted(list(x.items())[0] for x in a)
^ ^ comprehension of
^ a list of one element dicts
^ ^ ^ convert to a two element tuple
^ sort the tuple first by key, then by value
```
So that works by sorting the tuples first by the keys, then by the values.
Which leads to an alternate solution using `groupby`:
```
>>> from itertools import groupby
>>> for k,v in groupby(sorted(list(x.items())[0] for x in a), key=lambda t: t[0]):
... print(k, max(v))
...
a ('a', -1)
b ('b', 100)
c ('c', 3)
```
The `groupby` solution would be substantially more memory friendly since it does not create an extra list. The first solution, likely, will be faster with smaller list of dicts since the sorting is easier (but you would need to test that.)
It is not *required* in the solution that I gave that the keys be grouped (it is required for `groupby` to work). This works too:
```
>>> sorted((list(x.items())[0] for x in a), key=lambda t: t[1])
[('a', -11), ('a', -3), ('a', -1), ('b', 0), ('c', 1), ('c', 3), ('b', 100)]
```
Then turn it into a `dict` with the dict construction function. Recall that takes a list of tuples of `(key, value)`:
```
>>> dict(sorted((list(x.items())[0] for x in a), key=lambda t: t[1]))
{'a': -1, 'b': 100, 'c': 3}
``` | You could do:
```
a = [{u'a': 1}, {u'a': 2}, {u'a': 1}, {u'b': 2}, {u'b': 1}, {u'c': 1}, {u'c': 1}]
result = {}
for di in a:
for key, value in di.items():
result[key] = max(value, result.get(key, value))
print(result)
```
**Output**
```
{'a': 2, 'c': 1, 'b': 2}
``` |
53,545,656 | I know there are many post related to dictionary operations but I could not find the solution for my special case.
I have a list of dictionaries (repeated dictionary keys with similar or different values) and I have to create a new dictionary from this list.
Eg:
```
a = [{u'a': 1}, {u'a': 2}, {u'a': 1}, {u'b': 2}, {u'b': 1}, {u'c': 1}, {u'c': 1}]
```
Output I am looking for:
```
{'a': 2, 'b':2, 'c': 1}
```
So as you can see, I just want one entry per key from the list, and the value for that key would be the max of all values. I hope it's not too confusing.
I have come up with a working solution, but I just wanted to check if there is a more Pythonic answer to this (with fewer lines or a better way).
This is my working solution:
```
d = {}
for i in a:
if not d.get(i.keys()[0]):
d.update(i)
elif d.get(i.keys()[0], 0) < i.values()[0]:
d.update(i)
print d
```
Thanks for your time. | 2018/11/29 | [
"https://Stackoverflow.com/questions/53545656",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2565385/"
] | You can sort the list `a` so that like keys are grouped and the largest values come last. Then build the dict from the sorted pairs so that the last (largest) value for each key is the one left in the dict:
```
>>> a = [{u'a': 1}, {u'a': 2}, {u'a': 1}, {u'b': 2}, {u'b': 1}, {u'c': 1}, {u'c': 1}]
>>> {k:v for k,v in (x.items()[0] for x in sorted(a))}
{u'a': 2, u'c': 1, u'b': 2}
```
Or, alternate syntax:
```
>>> dict(x.items()[0] for x in sorted(a))
```
For Python 2 and 3 syntax:
```
>>> {k:v for k,v in (sorted(list(x.items())[0] for x in a))}
{'a': 2, 'b': 2, 'c': 1}
>>> dict(sorted(list(x.items())[0] for x in a))
{'a': 2, 'b': 2, 'c': 1}
```
---
From comments: *what's happening here?*
First, let's come up with a more instructive example:
```
>>> a = [{u'a': -1}, {u'a': -11}, {u'a': -3}, {u'b': 0}, {u'b': 100}, {u'c': 3}, {u'c': 1}]
```
So the desired result here (for Python 3 dicts that maintain order, or with an OrderedDict) is that the entries come out i) with keys grouped in sorted order and then ii) with values interpreted as numbers in increasing order.
So try this first:
```
>>> sorted(list(x.items())[0] for x in a)
[('a', -11), ('a', -3), ('a', -1), ('b', 0), ('b', 100), ('c', 1), ('c', 3)]
```
Break it apart:
```
sorted(list(x.items())[0] for x in a)
^ ^ comprehension of
^ a list of one element dicts
^ ^ ^ convert to a two element tuple
^ sort the tuple first by key, then by value
```
So that works by sorting the tuples first by the keys, then by the values.
Which leads to an alternate solution using `groupby`:
```
>>> from itertools import groupby
>>> for k,v in groupby(sorted(list(x.items())[0] for x in a), key=lambda t: t[0]):
... print(k, max(v))
...
a ('a', -1)
b ('b', 100)
c ('c', 3)
```
The `groupby` solution would be substantially more memory friendly since it does not create an extra list. The first solution, likely, will be faster with smaller list of dicts since the sorting is easier (but you would need to test that.)
It is not *required* in the solution that I gave that the keys be grouped (it is required for `groupby` to work). This works too:
```
>>> sorted((list(x.items())[0] for x in a), key=lambda t: t[1])
[('a', -11), ('a', -3), ('a', -1), ('b', 0), ('c', 1), ('c', 3), ('b', 100)]
```
Then turn it into a `dict` with the dict construction function. Recall that takes a list of tuples of `(key, value)`:
```
>>> dict(sorted((list(x.items())[0] for x in a), key=lambda t: t[1]))
{'a': -1, 'b': 100, 'c': 3}
``` | You could use a `defaultdict`:
```
from collections import defaultdict
d = defaultdict(lambda: 0)
for val in a:
if d[val.keys()[0]] < val.values()[0]:
d[val.keys()[0]] = val.values()[0]
```
**Output**
```
{u'a': 2, u'b': 2, u'c': 1}
``` |
53,545,656 | I know there are many post related to dictionary operations but I could not find the solution for my special case.
I have a list of dictionaries (repeated dictionary keys with similar or different values) and I have to create a new dictionary from this list.
Eg:
```
a = [{u'a': 1}, {u'a': 2}, {u'a': 1}, {u'b': 2}, {u'b': 1}, {u'c': 1}, {u'c': 1}]
```
Output I am looking for:
```
{'a': 2, 'b':2, 'c': 1}
```
So as you can see, I just want one entry per key from the list, and the value for that key would be the max of all values. I hope it's not too confusing.
I have come up with a working solution, but I just wanted to check if there is a more Pythonic answer to this (with fewer lines or a better way).
This is my working solution:
```
d = {}
for i in a:
if not d.get(i.keys()[0]):
d.update(i)
elif d.get(i.keys()[0], 0) < i.values()[0]:
d.update(i)
print d
```
Thanks for your time. | 2018/11/29 | [
"https://Stackoverflow.com/questions/53545656",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2565385/"
] | You can sort the list `a` so that like keys are grouped and the largest values come last. Then build the dict from the sorted pairs so that the last (largest) value for each key is the one left in the dict:
```
>>> a = [{u'a': 1}, {u'a': 2}, {u'a': 1}, {u'b': 2}, {u'b': 1}, {u'c': 1}, {u'c': 1}]
>>> {k:v for k,v in (x.items()[0] for x in sorted(a))}
{u'a': 2, u'c': 1, u'b': 2}
```
Or, alternate syntax:
```
>>> dict(x.items()[0] for x in sorted(a))
```
For Python 2 and 3 syntax:
```
>>> {k:v for k,v in (sorted(list(x.items())[0] for x in a))}
{'a': 2, 'b': 2, 'c': 1}
>>> dict(sorted(list(x.items())[0] for x in a))
{'a': 2, 'b': 2, 'c': 1}
```
---
From comments: *what's happening here?*
First, let's come up with a more instructive example:
```
>>> a = [{u'a': -1}, {u'a': -11}, {u'a': -3}, {u'b': 0}, {u'b': 100}, {u'c': 3}, {u'c': 1}]
```
So the desired result here (for Python 3 dicts that maintain order, or with an OrderedDict) is that the entries come out i) with keys grouped in sorted order and then ii) with values interpreted as numbers in increasing order.
So try this first:
```
>>> sorted(list(x.items())[0] for x in a)
[('a', -11), ('a', -3), ('a', -1), ('b', 0), ('b', 100), ('c', 1), ('c', 3)]
```
Break it apart:
```
sorted(list(x.items())[0] for x in a)
^ ^ comprehension of
^ a list of one element dicts
^ ^ ^ convert to a two element tuple
^ sort the tuple first by key, then by value
```
So that works by sorting the tuples first by the keys, then by the values.
Which leads to an alternate solution using `groupby`:
```
>>> from itertools import groupby
>>> for k,v in groupby(sorted(list(x.items())[0] for x in a), key=lambda t: t[0]):
... print(k, max(v))
...
a ('a', -1)
b ('b', 100)
c ('c', 3)
```
The `groupby` solution would be substantially more memory friendly since it does not create an extra list. The first solution, likely, will be faster with smaller list of dicts since the sorting is easier (but you would need to test that.)
It is not *required* in the solution that I gave that the keys be grouped (it is required for `groupby` to work). This works too:
```
>>> sorted((list(x.items())[0] for x in a), key=lambda t: t[1])
[('a', -11), ('a', -3), ('a', -1), ('b', 0), ('c', 1), ('c', 3), ('b', 100)]
```
Then turn it into a `dict` with the dict construction function. Recall that takes a list of tuples of `(key, value)`:
```
>>> dict(sorted((list(x.items())[0] for x in a), key=lambda t: t[1]))
{'a': -1, 'b': 100, 'c': 3}
``` | You could do this by iterating over all of your dicts and updating the final dict `new_a` with each entry if its key isn't in the new dict yet or the stored value is lower than the new one.
```
a = [{u'a': 1}, {u'a': 2}, {u'a': 1}, {u'b': 2}, {u'b': 1}, {u'c': 1}, {u'c': 1}]
new_a = {}
for dict_ in a:
key, value = list(dict_.items())[0]
if key not in new_a or new_a[key] < value:
new_a[key] = value
print(new_a) # -> {'c': 1, 'b': 2, 'a': 2}
``` |
67,268,013 | I have been trying to draw a networkx multidigraph with multiple self-loops on nodes using matplotlib for quite a few days now but nothing works.
After multiple tests, I narrowed the problem to Networkx with Matplotlib.
I executed the following tutorial <https://networkx.org/documentation/latest/auto_examples/drawing/plot_selfloops.html>
on my laptop and using an online Python compiler, but no self-loops get printed. I think there might be a bug somewhere since it is supposed to work according to the official documentation.
Please do not propose to use Graphviz... I already tried it and it does not correspond to what I want.
Current Behavior
From the tutorial, here is the image that I obtain :

Expected Behavior
The image that I'm supposed to obtain :

My environment :
Python version: 3
NetworkX version: 2.5 (I upgraded it)
Thanks! | 2021/04/26 | [
"https://Stackoverflow.com/questions/67268013",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6099112/"
] | Any time GoogleFinance() returns a historical array, you need to INDEX() it to get just the single answer.
It's almost always the second row and second column of the array that you want.
So:
```
=INDEX(GOOGLEFINANCE(.... ), 2, 2)
``` | I've tested your function and the error ***Function MULTIPLY parameter 2 expects number values. But 'Date' is a text and cannot be coerced to a number*** is due to this part `E2*GOOGLEFINANCE("Currency:"&F2&$G$1,"price", H2)` in your IFS function.
The return value of the `GOOGLEFINANCE("Currency:"&F2&$G$1,"price", H2)` is an array (it contains strings and numbers) and multiplying it to E2 value is not possible. Also, if running `GOOGLEFINANCE("Currency:"&F2&$G$1,"price", H2)` alone works just fine, then the H2 (the date cell) isn't the main cause of the error.
**RECOMMENDED SOLUTION:**
I've checked on how to only return price instead of an array on GOOGLEFINANCE function and stumbled upon an answer from [How can I get GOOGLEFINANCE to return only the historical price of a stock and not an array?](https://webapps.stackexchange.com/questions/14725/how-can-i-get-googlefinance-to-return-only-the-historical-price-of-a-stock-and-n/23964#23964).
Instead, you can try this function below:
```
=IFS(F2<>"EUR", min(GoogleFinance("Currency:"&F2&$G$1, "PRICE", H2))*E2, F2 = "EUR",E2)
```
Here's a sample test on my end:
[](https://i.stack.imgur.com/XMOvx.png) |
51,263,370 | I am trying to implement the k-nearest neighbor algorithm with the dataset which I have preprocessed. I imported the data as a pandas dataframe and then converted it into a numpy array, but the following error is occurring-
```
File "/home/user/Documents/Mooc_implementation.py", line 8, in <module>
x = num_data[:,:10]
File "/usr/lib/python2.7/dist-packages/numpy/core/records.py", line 499, in __getitem__
obj = super(recarray, self).__getitem__(indx)
IndexError: too many indices for array
```
here is my code-
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
dataset = pd.read_csv('/home/user/Documents/MOOC dataset cleaned/student_reg_vle_info_assessment.csv')
num_data = dataset.to_records(index=False)
x = num_data[:,:10]
y = num_data[:,10:11]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.20)
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier(n_neighbors=4)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
from sklearn.metrics import classification_report, confusion_matrix
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
```
what should I do?
output of dataset.head()-
```
date_submitted date_registration date_unregistration sum_click \
0 18 -159 445 16
1 22 -53 445 4
2 30 -92 12 3
3 17 -52 445 1
4 26 -176 445 5
num_of_prev_attempts age_band region highest_education studied_credits \
0 0 0 0 0 240
1 0 1 1 0 60
2 0 1 2 1 60
3 0 1 3 1 60
4 0 2 4 2 60
score final_result
0 78 0
1 70 0
2 87 2
3 72 0
4 69 0
[Finished in 0.274s]
``` | 2018/07/10 | [
"https://Stackoverflow.com/questions/51263370",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9355642/"
] | you use `.` instead of `->` because of this declaration of parameters:
`int ball_room(ball *b, int i, int n)`
`b` is expected to be pointer to data with type `ball`, so you can access it in various ways:
1. array way: e.g. `b[5].somefield = 15` - you use a dot here because, if `b` is of type `ball *`, it means that `b` is a pointer OR it is an array of objects of type `ball`; if it's an array of objects of type `ball` (which is your case) you use `.` to access the fields of an object
2. pointer way: e.g. `(b+5)->somefield = 15` - it will do exactly the same thing as the code above, but you will access the data in a `pointer` way | In C/C++ an array devolves into the address of its first member. So when you pass the array to `ball_room` what actually gets passed is `&ball[0]`.
Now inside `ball_room` the reverse happens. `b` is a pointer to ball. But here you use it as an array `b[j]`. So it un-devolves back into an array of structs. So what `b[j]` gives you is the struct and not a pointer to a struct. Consequently you access it using `.` instead of `->`.
You can also use `(b + j)->somefield`. Or for even more fun how about writing `j[b].somefield`. The latter being a really confusing left-over from the early compiler days when `a[b]` truly got turned into `*(a + b)` internally. |
51,263,370 | I am trying to implement the k-nearest neighbor algorithm with the dataset which I have preprocessed. I imported the data as a pandas dataframe and then converted it into a numpy array, but the following error is occurring-
```
File "/home/user/Documents/Mooc_implementation.py", line 8, in <module>
x = num_data[:,:10]
File "/usr/lib/python2.7/dist-packages/numpy/core/records.py", line 499, in __getitem__
obj = super(recarray, self).__getitem__(indx)
IndexError: too many indices for array
```
here is my code-
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
dataset = pd.read_csv('/home/user/Documents/MOOC dataset cleaned/student_reg_vle_info_assessment.csv')
num_data = dataset.to_records(index=False)
x = num_data[:,:10]
y = num_data[:,10:11]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.20)
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier(n_neighbors=4)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
from sklearn.metrics import classification_report, confusion_matrix
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
```
what should I do?
output of dataset.head()-
```
date_submitted date_registration date_unregistration sum_click \
0 18 -159 445 16
1 22 -53 445 4
2 30 -92 12 3
3 17 -52 445 1
4 26 -176 445 5
num_of_prev_attempts age_band region highest_education studied_credits \
0 0 0 0 0 240
1 0 1 1 0 60
2 0 1 2 1 60
3 0 1 3 1 60
4 0 2 4 2 60
score final_result
0 78 0
1 70 0
2 87 2
3 72 0
4 69 0
[Finished in 0.274s]
``` | 2018/07/10 | [
"https://Stackoverflow.com/questions/51263370",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9355642/"
] | Given `ball *b`, `b[j]` is an element from the elements that `b` points to. Thus `b[j]` is not a pointer; it is a struct. Since it is a struct, you use `.` to refer to members in it.
The definition of `b[j]` in the C standard is that it is `*((b)+(j))`. So it takes the pointer `b`, moves `j` elements beyond it, and then applies `*`.
Since `*` is already applied in `b[j]`, you do not need `->`, just `.`. | you use `.` instead of `->` because of this declaration of parameters:
`int ball_room(ball *b, int i, int n)`
`b` is expected to be pointer to data with type `ball`, so you can access it in various ways:
1. array way: e.g. `b[5].somefield = 15` - you use a dot here because, if `b` is of type `ball *`, it means that `b` is a pointer OR it is an array of objects of type `ball`; if it's an array of objects of type `ball` (which is your case) you use `.` to access the fields of an object
2. pointer way: e.g. `(b+5)->somefield = 15` - it will do exactly the same thing as the code above, but you will access the data in a `pointer` way |
51,263,370 | I am trying to implement the k-nearest neighbor algorithm with the dataset which I have preprocessed. I imported the data as a pandas dataframe and then converted it into a numpy array, but the following error is occurring-
```
File "/home/user/Documents/Mooc_implementation.py", line 8, in <module>
x = num_data[:,:10]
File "/usr/lib/python2.7/dist-packages/numpy/core/records.py", line 499, in __getitem__
obj = super(recarray, self).__getitem__(indx)
IndexError: too many indices for array
```
here is my code-
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
dataset = pd.read_csv('/home/user/Documents/MOOC dataset cleaned/student_reg_vle_info_assessment.csv')
num_data = dataset.to_records(index=False)
x = num_data[:,:10]
y = num_data[:,10:11]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.20)
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier(n_neighbors=4)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
from sklearn.metrics import classification_report, confusion_matrix
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
```
what should I do?
output of dataset.head()-
```
date_submitted date_registration date_unregistration sum_click \
0 18 -159 445 16
1 22 -53 445 4
2 30 -92 12 3
3 17 -52 445 1
4 26 -176 445 5
num_of_prev_attempts age_band region highest_education studied_credits \
0 0 0 0 0 240
1 0 1 1 0 60
2 0 1 2 1 60
3 0 1 3 1 60
4 0 2 4 2 60
score final_result
0 78 0
1 70 0
2 87 2
3 72 0
4 69 0
[Finished in 0.274s]
``` | 2018/07/10 | [
"https://Stackoverflow.com/questions/51263370",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9355642/"
] | you use `.` instead of `->` because of this declaration of parameters:
`int ball_room(ball *b, int i, int n)`
`b` is expected to be pointer to data with type `ball`, so you can access it in various ways:
1. array way: e.g. `b[5].somefield = 15` - you use a dot here because, if `b` is of type `ball *`, it means that `b` is a pointer OR it is an array of objects of type `ball`; if it's an array of objects of type `ball` (which is your case) you use `.` to access the fields of an object
2. pointer way: e.g. `(b+5)->somefield = 15` - it will do exactly the same thing as the code above, but you will access the data in a `pointer` way | In C, the array name is a pointer to the array's first element, hence your function declaration has the parameter `ball *b` and works when you pass a `ball[]` instance.
Try dynamically allocating the memory by using `malloc()` and passing that pointer to your function. |
51,263,370 | I am trying to implement the k-nearest neighbor algorithm with the dataset which I have preprocessed. I imported the data as a pandas dataframe and then converted it into a numpy array, but the following error is occurring-
```
File "/home/user/Documents/Mooc_implementation.py", line 8, in <module>
x = num_data[:,:10]
File "/usr/lib/python2.7/dist-packages/numpy/core/records.py", line 499, in __getitem__
obj = super(recarray, self).__getitem__(indx)
IndexError: too many indices for array
```
here is my code-
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
dataset = pd.read_csv('/home/user/Documents/MOOC dataset cleaned/student_reg_vle_info_assessment.csv')
num_data = dataset.to_records(index=False)
x = num_data[:,:10]
y = num_data[:,10:11]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.20)
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier(n_neighbors=4)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
from sklearn.metrics import classification_report, confusion_matrix
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
```
what should I do?
output of dataset.head()-
```
date_submitted date_registration date_unregistration sum_click \
0 18 -159 445 16
1 22 -53 445 4
2 30 -92 12 3
3 17 -52 445 1
4 26 -176 445 5
num_of_prev_attempts age_band region highest_education studied_credits \
0 0 0 0 0 240
1 0 1 1 0 60
2 0 1 2 1 60
3 0 1 3 1 60
4 0 2 4 2 60
score final_result
0 78 0
1 70 0
2 87 2
3 72 0
4 69 0
[Finished in 0.274s]
``` | 2018/07/10 | [
"https://Stackoverflow.com/questions/51263370",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9355642/"
] | you use `.` instead of `->` because of this declaration of parameters:
`int ball_room(ball *b, int i, int n)`
`b` is expected to be pointer to data with type `ball`, so you can access it in various ways:
1. array way: e.g. `b[5].somefield = 15` - you use a dot here because, if `b` is of type `ball *`, it means that `b` is a pointer OR it is an array of objects of type `ball`; if it's an array of objects of type `ball` (which is your case) you use `.` to access the fields of an object
2. pointer way: e.g. `(b+5)->somefield = 15` - it will do exactly the same thing as the code above, but you will access the data in a `pointer` way | For an explanation of the current issue, see [Eric](https://stackoverflow.com/users/298225/eric-postpischil)'s [answer](https://stackoverflow.com/a/51264543/1312382); some of the answers given so far use dangerous wording, so just to make it clear: when do we have an array and when a pointer?
Consider the following:
```
int a[7];
```
As long as we can refer to `a` directly, we still have an array and can use any operations that are valid on it, e.g. getting its size:
```
size_t n = sizeof(a); // size in bytes, not ints, i. e. 7*sizeof(int)
```
You can pass arrays to functions or even do pointer arithmetics on:
```
f(a);
a + 1;
```
In both cases, the array "decays" to a pointer, though, and the result is a pointer as well. Be aware that you can assign new values to a pointer, but not to an array itself (you can assign new values to the array's elements, directly or via pointer), so you cannot do things like `++a` either.
When an array decays to a pointer, it gets a pointer to its first element:
```
int* ptr = a;
int* ptr = &*a; // only pointers can be dereferenced -> a decays!
int* ptr = &a[0]; // short for &*(a + 0)...
```
All result in exactly the same; however, the following is invalid:
```
int* ptr = &a;
```
Taking the address of an entire array actually *is* possible, but the resulting pointer is not of type "pointer to element" nor of type "pointer to pointer to element" (`int**` in the example), but of type "pointer to array of specific size". The syntax for it is ugly, but the following would be legal again:
```
int(*aptr)[7] = &a;
```
You need to read: if I dereference `ptr`, I get `int[7]`...
Once decayed, there is only a pointer to the array left (more precisely: to one of the array elements, directly after decaying, to the first; array and first element always share the same address, so, although of different type, both pointers `ptr` and `aptr` from above hold exactly the same value). Pointers can be moved around within the array, but they do not hold as much information as the array itself, especially, the array size gets lost. This is why one needs to pass the array's length together with the pointer to functions (if needed; another variant is a sentinel value denoting the array end such as the terminating null character in strings or the null pointer following the string arguments in `main`'s arguments list):
```
int a[7];
f(a, sizeof(a)/sizeof(*a)); // division: sizeof is in bytes, dividing by size
// of first element gives number of elements
```
Possibly with f as:
```
void f(int b[], size_t n)
// ^^^^^^^ in function parameter lists, just alternative syntax for int* b !!!
// however, we can show more explicitly that we expect a pointer
// to an array this way...
{
size_t m = sizeof(b); // as b is a POINTER, gives the constant (but hardware specific!)
// size of a pointer (on typical modern 64-bit hardware 8 bytes),
// no matter what size of the array being pointed to is!!!
while(n)
{
*b++ = n--;
// ^^ advances pointer, NOT array!
}
}
```
Hope this helps to avoid confusion. |
51,263,370 | I am trying to implement the k-nearest neighbor algorithm with the dataset which I have preprocessed. I imported the data as a pandas dataframe and then converted it into a numpy array, but the following error is occurring-
```
File "/home/user/Documents/Mooc_implementation.py", line 8, in <module>
x = num_data[:,:10]
File "/usr/lib/python2.7/dist-packages/numpy/core/records.py", line 499, in __getitem__
obj = super(recarray, self).__getitem__(indx)
IndexError: too many indices for array
```
here is my code-
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
dataset = pd.read_csv('/home/user/Documents/MOOC dataset cleaned/student_reg_vle_info_assessment.csv')
num_data = dataset.to_records(index=False)
x = num_data[:,:10]
y = num_data[:,10:11]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.20)
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier(n_neighbors=4)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
from sklearn.metrics import classification_report, confusion_matrix
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
```
what should I do?
output of dataset.head()-
```
date_submitted date_registration date_unregistration sum_click \
0 18 -159 445 16
1 22 -53 445 4
2 30 -92 12 3
3 17 -52 445 1
4 26 -176 445 5
num_of_prev_attempts age_band region highest_education studied_credits \
0 0 0 0 0 240
1 0 1 1 0 60
2 0 1 2 1 60
3 0 1 3 1 60
4 0 2 4 2 60
score final_result
0 78 0
1 70 0
2 87 2
3 72 0
4 69 0
[Finished in 0.274s]
``` | 2018/07/10 | [
"https://Stackoverflow.com/questions/51263370",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9355642/"
] | Given `ball *b`, `b[j]` is an element from the elements that `b` points to. Thus `b[j]` is not a pointer; it is a struct. Since it is a struct, you use `.` to refer to members in it.
The definition of `b[j]` in the C standard is that it is `*((b)+(j))`. So it takes the pointer `b`, moves `j` elements beyond it, and then applies `*`.
Since `*` is already applied in `b[j]`, you do not need `->`, just `.`. | In C/C++ an array devolves into the address of its first member. So when you pass the array to `ball_room` what actually gets passed is `&ball[0]`.
Now inside `ball_room` the reverse happens. `b` is a pointer to ball. But here you use it as an array `b[j]`. So it un-devolves back into an array of structs. So what `b[j]` gives you is the struct and not a pointer to a struct. Consequently you access it using `.` instead of `->`.
You can also use `(b + j)->somefield`. Or for even more fun how about writing `j[b].somefield`. The latter being a really confusing left-over from the early compiler days when `a[b]` truly got turned into `*(a + b)` internally. |
51,263,370 | I am trying to implement the k-nearest neighbor algorithm with the dataset which I have preprocessed. I imported the data as a pandas dataframe and then converted it into a numpy array, but the following error is occurring-
```
File "/home/user/Documents/Mooc_implementation.py", line 8, in <module>
x = num_data[:,:10]
File "/usr/lib/python2.7/dist-packages/numpy/core/records.py", line 499, in __getitem__
obj = super(recarray, self).__getitem__(indx)
IndexError: too many indices for array
```
here is my code-
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
dataset = pd.read_csv('/home/user/Documents/MOOC dataset cleaned/student_reg_vle_info_assessment.csv')
num_data = dataset.to_records(index=False)
x = num_data[:,:10]
y = num_data[:,10:11]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.20)
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier(n_neighbors=4)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
from sklearn.metrics import classification_report, confusion_matrix
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
```
what should I do?
output of dataset.head()-
```
date_submitted date_registration date_unregistration sum_click \
0 18 -159 445 16
1 22 -53 445 4
2 30 -92 12 3
3 17 -52 445 1
4 26 -176 445 5
num_of_prev_attempts age_band region highest_education studied_credits \
0 0 0 0 0 240
1 0 1 1 0 60
2 0 1 2 1 60
3 0 1 3 1 60
4 0 2 4 2 60
score final_result
0 78 0
1 70 0
2 87 2
3 72 0
4 69 0
[Finished in 0.274s]
``` | 2018/07/10 | [
"https://Stackoverflow.com/questions/51263370",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9355642/"
] | In C/C++ an array decays into the address of its first member. So when you pass the array to `ball_room`, what actually gets passed is `&ball[0]`.
Now inside `ball_room` the reverse happens. `b` is a pointer to `ball`, but here you use it as an array, `b[j]`, so it behaves like an array of structs again. What `b[j]` gives you is the struct itself, not a pointer to a struct. Consequently you access it using `.` instead of `->`.
You can also use `(b + j)->somefield`, or for even more fun write `j[b].somefield`. The latter is a really confusing left-over from the early compiler days, when `a[b]` truly got turned into `*(a + b)` internally. | In C, the array name decays to a pointer to the array's first element, hence your function declaration takes a `ball *b` and works when you pass a `ball[]` instance.
Try dynamically allocating the memory by using `malloc()` and passing that pointer to your function.
51,263,370 | I am trying to implement the k-nearest neighbor algorithm with a dataset which I have preprocessed. I imported the data as a pandas DataFrame and then converted it into a NumPy array, but the following error is occurring-
```
File "/home/user/Documents/Mooc_implementation.py", line 8, in <module>
x = num_data[:,:10]
File "/usr/lib/python2.7/dist-packages/numpy/core/records.py", line 499, in __getitem__
obj = super(recarray, self).__getitem__(indx)
IndexError: too many indices for array
```
here is my code-
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
dataset = pd.read_csv('/home/user/Documents/MOOC dataset cleaned/student_reg_vle_info_assessment.csv')
num_data = dataset.to_records(index=False)
x = num_data[:,:10]
y = num_data[:,10:11]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.20)
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier(n_neighbors=4)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
from sklearn.metrics import classification_report, confusion_matrix
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
```
what should I do?
output of dataset.head()-
```
date_submitted date_registration date_unregistration sum_click \
0 18 -159 445 16
1 22 -53 445 4
2 30 -92 12 3
3 17 -52 445 1
4 26 -176 445 5
num_of_prev_attempts age_band region highest_education studied_credits \
0 0 0 0 0 240
1 0 1 1 0 60
2 0 1 2 1 60
3 0 1 3 1 60
4 0 2 4 2 60
score final_result
0 78 0
1 70 0
2 87 2
3 72 0
4 69 0
[Finished in 0.274s]
``` | 2018/07/10 | [
"https://Stackoverflow.com/questions/51263370",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9355642/"
] | Given `ball *b`, `b[j]` is one of the elements that `b` points to. Thus `b[j]` is not a pointer; it is a struct. Since it is a struct, you use `.` to refer to members in it.
The definition of `b[j]` in the C standard is that it is `*((b)+(j))`. So it takes the pointer `b`, moves `j` elements beyond it, and then applies `*`.
Since `*` is already applied in `b[j]`, you do not need `->`, just `.`. | In C, the array name decays to a pointer to the array's first element, hence your function declaration takes a `ball *b` and works when you pass a `ball[]` instance.
Try dynamically allocating the memory by using `malloc()` and passing that pointer to your function.
51,263,370 | I am trying to implement the k-nearest neighbor algorithm with a dataset which I have preprocessed. I imported the data as a pandas DataFrame and then converted it into a NumPy array, but the following error is occurring-
```
File "/home/user/Documents/Mooc_implementation.py", line 8, in <module>
x = num_data[:,:10]
File "/usr/lib/python2.7/dist-packages/numpy/core/records.py", line 499, in __getitem__
obj = super(recarray, self).__getitem__(indx)
IndexError: too many indices for array
```
here is my code-
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
dataset = pd.read_csv('/home/user/Documents/MOOC dataset cleaned/student_reg_vle_info_assessment.csv')
num_data = dataset.to_records(index=False)
x = num_data[:,:10]
y = num_data[:,10:11]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.20)
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier(n_neighbors=4)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
from sklearn.metrics import classification_report, confusion_matrix
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
```
what should I do?
output of dataset.head()-
```
date_submitted date_registration date_unregistration sum_click \
0 18 -159 445 16
1 22 -53 445 4
2 30 -92 12 3
3 17 -52 445 1
4 26 -176 445 5
num_of_prev_attempts age_band region highest_education studied_credits \
0 0 0 0 0 240
1 0 1 1 0 60
2 0 1 2 1 60
3 0 1 3 1 60
4 0 2 4 2 60
score final_result
0 78 0
1 70 0
2 87 2
3 72 0
4 69 0
[Finished in 0.274s]
``` | 2018/07/10 | [
"https://Stackoverflow.com/questions/51263370",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9355642/"
] | Given `ball *b`, `b[j]` is one of the elements that `b` points to. Thus `b[j]` is not a pointer; it is a struct. Since it is a struct, you use `.` to refer to members in it.
The definition of `b[j]` in the C standard is that it is `*((b)+(j))`. So it takes the pointer `b`, moves `j` elements beyond it, and then applies `*`.
Since `*` is already applied in `b[j]`, you do not need `->`, just `.`. | For an explanation of the current issue, see [Eric](https://stackoverflow.com/users/298225/eric-postpischil)'s [answer](https://stackoverflow.com/a/51264543/1312382); some of the answers given so far use dangerously loose wording, so just to make clear: when do we have an array, and when do we have a pointer?
Consider the following:
```
int a[7];
```
As long as we can refer to `a` directly, we still have an array and can use any operations that are valid on an array, e.g. getting its size:
```
size_t n = sizeof(a); // size in bytes, not ints, i. e. 7*sizeof(int)
```
You can pass arrays to functions or even do pointer arithmetic on them:
```
f(a);
a + 1;
```
In both cases, the array "decays" to a pointer, though, and the result is a pointer as well. Be aware that you can assign new values to a pointer, but not to an array itself (you can assign new values to the array's elements, directly or via pointer), so you cannot do things like `++a` either.
When an array decays to a pointer, it gets a pointer to its first element:
```
int* ptr = a;
int* ptr = &*a; // only pointers can be dereferenced -> a decays!
int* ptr = &a[0]; // short for &*(a + 0)...
```
All of these result in exactly the same thing; however, the following is invalid:
```
int* ptr = &a;
```
Taking the address of an entire array actually *is* possible, but the resulting pointer is not of type "pointer to element" nor of type "pointer to pointer to element" (`int**` in the example), but of type "pointer to array of specific size". The syntax for it is ugly, though, but the following would be legal again:
```
int(*aptr)[7] = &a;
```
You need to read this as: if I dereference `aptr`, I get `int[7]`...
Once decayed, there is only a pointer to the array left (more precisely: to one of the array elements, directly after decaying, to the first; array and first element always share the same address, so, although of different type, both pointers `ptr` and `aptr` from above hold exactly the same value). Pointers can be moved around within the array, but they do not hold as much information as the array itself, especially, the array size gets lost. This is why one needs to pass the array's length together with the pointer to functions (if needed; another variant is a sentinel value denoting the array end such as the terminating null character in strings or the null pointer following the string arguments in `main`'s arguments list):
```
int a[7];
f(a, sizeof(a)/sizeof(*a)); // division: sizeof is in bytes, dividing by size
// of first element gives number of elements
```
Possibly with f as:
```
void f(int b[], size_t n)
// ^^^^^^^ in function parameter lists, just alternative syntax for int* b !!!
// however, we can show more explicitly that we expect a pointer
// to an array this way...
{
size_t m = sizeof(b); // as b is a POINTER, gives the constant (but hardware specific!)
// size of a pointer (on typical modern 64-bit hardware 8 bytes),
// no matter what size of the array being pointed to is!!!
while(n)
{
*b++ = n--;
// ^^ advances pointer, NOT array!
}
}
```
Hope this helps to avoid confusion. |
51,263,370 | I am trying to implement the k-nearest neighbor algorithm with a dataset which I have preprocessed. I imported the data as a pandas DataFrame and then converted it into a NumPy array, but the following error is occurring-
```
File "/home/user/Documents/Mooc_implementation.py", line 8, in <module>
x = num_data[:,:10]
File "/usr/lib/python2.7/dist-packages/numpy/core/records.py", line 499, in __getitem__
obj = super(recarray, self).__getitem__(indx)
IndexError: too many indices for array
```
here is my code-
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
dataset = pd.read_csv('/home/user/Documents/MOOC dataset cleaned/student_reg_vle_info_assessment.csv')
num_data = dataset.to_records(index=False)
x = num_data[:,:10]
y = num_data[:,10:11]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.20)
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier(n_neighbors=4)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
from sklearn.metrics import classification_report, confusion_matrix
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
```
what should I do?
output of dataset.head()-
```
date_submitted date_registration date_unregistration sum_click \
0 18 -159 445 16
1 22 -53 445 4
2 30 -92 12 3
3 17 -52 445 1
4 26 -176 445 5
num_of_prev_attempts age_band region highest_education studied_credits \
0 0 0 0 0 240
1 0 1 1 0 60
2 0 1 2 1 60
3 0 1 3 1 60
4 0 2 4 2 60
score final_result
0 78 0
1 70 0
2 87 2
3 72 0
4 69 0
[Finished in 0.274s]
``` | 2018/07/10 | [
"https://Stackoverflow.com/questions/51263370",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9355642/"
] | For an explanation of the current issue, see [Eric](https://stackoverflow.com/users/298225/eric-postpischil)'s [answer](https://stackoverflow.com/a/51264543/1312382); some of the answers given so far use dangerously loose wording, so just to make clear: when do we have an array, and when do we have a pointer?
Consider the following:
```
int a[7];
```
As long as we can refer to `a` directly, we still have an array and can use any operations that are valid on an array, e.g. getting its size:
```
size_t n = sizeof(a); // size in bytes, not ints, i. e. 7*sizeof(int)
```
You can pass arrays to functions or even do pointer arithmetic on them:
```
f(a);
a + 1;
```
In both cases, the array "decays" to a pointer, though, and the result is a pointer as well. Be aware that you can assign new values to a pointer, but not to an array itself (you can assign new values to the array's elements, directly or via pointer), so you cannot do things like `++a` either.
When an array decays to a pointer, it gets a pointer to its first element:
```
int* ptr = a;
int* ptr = &*a; // only pointers can be dereferenced -> a decays!
int* ptr = &a[0]; // short for &*(a + 0)...
```
All of these result in exactly the same thing; however, the following is invalid:
```
int* ptr = &a;
```
Taking the address of an entire array actually *is* possible, but the resulting pointer is not of type "pointer to element" nor of type "pointer to pointer to element" (`int**` in the example), but of type "pointer to array of specific size". The syntax for it is ugly, though, but the following would be legal again:
```
int(*aptr)[7] = &a;
```
You need to read this as: if I dereference `aptr`, I get `int[7]`...
Once decayed, there is only a pointer to the array left (more precisely: to one of the array elements, directly after decaying, to the first; array and first element always share the same address, so, although of different type, both pointers `ptr` and `aptr` from above hold exactly the same value). Pointers can be moved around within the array, but they do not hold as much information as the array itself, especially, the array size gets lost. This is why one needs to pass the array's length together with the pointer to functions (if needed; another variant is a sentinel value denoting the array end such as the terminating null character in strings or the null pointer following the string arguments in `main`'s arguments list):
```
int a[7];
f(a, sizeof(a)/sizeof(*a)); // division: sizeof is in bytes, dividing by size
// of first element gives number of elements
```
Possibly with f as:
```
void f(int b[], size_t n)
// ^^^^^^^ in function parameter lists, just alternative syntax for int* b !!!
// however, we can show more explicitly that we expect a pointer
// to an array this way...
{
size_t m = sizeof(b); // as b is a POINTER, gives the constant (but hardware specific!)
// size of a pointer (on typical modern 64-bit hardware 8 bytes),
// no matter what size of the array being pointed to is!!!
while(n)
{
*b++ = n--;
// ^^ advances pointer, NOT array!
}
}
```
Hope this helps to avoid confusion. | In C, the array name decays to a pointer to the array's first element, hence your function declaration takes a `ball *b` and works when you pass a `ball[]` instance.
Try dynamically allocating the memory by using `malloc()` and passing that pointer to your function. |
58,411,930 | After a lot of web searching and coming up with no answer to this, I thought of asking this question on this platform. I have an application container which I am trying to connect to my database container, but for reasons unknown my application is not able to connect.
I am providing all the relevant information required for it.
Here is my docker-compose file
```
version: '2'
services:
  application: &application
    image: gmasmatrix_application:latest
    command: /start.sh
    volumes:
      - .:/app
    depends_on:
      - db
    ports:
      - 8000:8000
    # cpu_shares: 874
    # mem_limit: 1610612736
    # mem_reservation: 1610612736
    build:
      context: ./
      dockerfile: ./compose/local/application/Dockerfile
      args:
        - GMAS_ENV_TYPE=local
  db:
    image: mysql:5.7.10
    environment:
      MYSQL_DATABASE: gmas_mkt
      MYSQL_ROOT_PASSWORD: pulkit1607
    ports:
      - '3306:3306'
    volumes:
      - my-db:/var/lib/mysql
volumes:
  my-db:
```
I also changed the host in the settings.py file
```
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'gmas_mkt',
        'USER': 'root',
        'PASSWORD': 'pulkit1607',
        'HOST': 'db',
        'PORT': '3306',
    }
}
```
Here is the error I get upon running `docker-compose up`
```
application_1 | Traceback (most recent call last):
application_1 | File "manage.py", line 22, in <module>
application_1 | execute_from_command_line(sys.argv)
application_1 | File "/usr/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 354, in execute_from_command_line
application_1 | utility.execute()
application_1 | File "/usr/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 346, in execute
application_1 | self.fetch_command(subcommand).run_from_argv(self.argv)
application_1 | File "/usr/local/lib/python2.7/site-packages/django/core/management/base.py", line 394, in run_from_argv
application_1 | self.execute(*args, **cmd_options)
application_1 | File "/usr/local/lib/python2.7/site-packages/django/core/management/base.py", line 444, in execute
application_1 | self.check()
application_1 | File "/usr/local/lib/python2.7/site-packages/django/core/management/base.py", line 482, in check
application_1 | include_deployment_checks=include_deployment_checks,
application_1 | File "/usr/local/lib/python2.7/site-packages/django/core/checks/registry.py", line 72, in run_checks
application_1 | new_errors = check(app_configs=app_configs)
application_1 | File "/usr/local/lib/python2.7/site-packages/django/core/checks/model_checks.py", line 28, in check_all_models
application_1 | errors.extend(model.check(**kwargs))
application_1 | File "/usr/local/lib/python2.7/site-packages/django/db/models/base.py", line 1205, in check
application_1 | errors.extend(cls._check_fields(**kwargs))
application_1 | File "/usr/local/lib/python2.7/site-packages/django/db/models/base.py", line 1282, in _check_fields
application_1 | errors.extend(field.check(**kwargs))
application_1 | File "/usr/local/lib/python2.7/site-packages/django/db/models/fields/__init__.py", line 934, in check
application_1 | errors = super(AutoField, self).check(**kwargs)
application_1 | File "/usr/local/lib/python2.7/site-packages/django/db/models/fields/__init__.py", line 207, in check
application_1 | errors.extend(self._check_backend_specific_checks(**kwargs))
application_1 | File "/usr/local/lib/python2.7/site-packages/django/db/models/fields/__init__.py", line 306, in _check_backend_specific_checks
application_1 | return connection.validation.check_field(self, **kwargs)
application_1 | File "/usr/local/lib/python2.7/site-packages/django/db/backends/mysql/validation.py", line 18, in check_field
application_1 | field_type = field.db_type(connection)
application_1 | File "/usr/local/lib/python2.7/site-packages/django/db/models/fields/__init__.py", line 614, in db_type
application_1 | return connection.data_types[self.get_internal_type()] % data
application_1 | File "/usr/local/lib/python2.7/site-packages/django/db/__init__.py", line 36, in __getattr__
application_1 | return getattr(connections[DEFAULT_DB_ALIAS], item)
application_1 | File "/usr/local/lib/python2.7/site-packages/django/utils/functional.py", line 59, in __get__
application_1 | res = instance.__dict__[self.name] = self.func(instance)
application_1 | File "/usr/local/lib/python2.7/site-packages/django/db/backends/mysql/base.py", line 196, in data_types
application_1 | if self.features.supports_microsecond_precision:
application_1 | File "/usr/local/lib/python2.7/site-packages/django/utils/functional.py", line 59, in __get__
application_1 | res = instance.__dict__[self.name] = self.func(instance)
application_1 | File "/usr/local/lib/python2.7/site-packages/django/db/backends/mysql/features.py", line 52, in supports_microsecond_precision
application_1 | return self.connection.mysql_version >= (5, 6, 4) and Database.version_info >= (1, 2, 5)
application_1 | File "/usr/local/lib/python2.7/site-packages/django/utils/functional.py", line 59, in __get__
application_1 | res = instance.__dict__[self.name] = self.func(instance)
application_1 | File "/usr/local/lib/python2.7/site-packages/django/db/backends/mysql/base.py", line 371, in mysql_version
application_1 | with self.temporary_connection():
application_1 | File "/usr/local/lib/python2.7/contextlib.py", line 17, in __enter__
application_1 | return self.gen.next()
application_1 | File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 462, in temporary_connection
application_1 | cursor = self.cursor()
application_1 | File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 162, in cursor
application_1 | cursor = self.make_debug_cursor(self._cursor())
application_1 | File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 135, in _cursor
application_1 | self.ensure_connection()
application_1 | File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 130, in ensure_connection
application_1 | self.connect()
application_1 | File "/usr/local/lib/python2.7/site-packages/django/db/utils.py", line 98, in __exit__
application_1 | six.reraise(dj_exc_type, dj_exc_value, traceback)
application_1 | File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 130, in ensure_connection
application_1 | self.connect()
application_1 | File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 119, in connect
application_1 | self.connection = self.get_new_connection(conn_params)
application_1 | File "/usr/local/lib/python2.7/site-packages/django/db/backends/mysql/base.py", line 276, in get_new_connection
application_1 | conn = Database.connect(**conn_params)
application_1 | File "/usr/local/lib/python2.7/site-packages/MySQLdb/__init__.py", line 84, in Connect
application_1 | return Connection(*args, **kwargs)
application_1 | File "/usr/local/lib/python2.7/site-packages/MySQLdb/connections.py", line 166, in __init__
application_1 | super(Connection, self).__init__(*args, **kwargs2)
application_1 | django.db.utils.OperationalError: (2002, "Can't connect to MySQL server on 'db' (115)")
```
I am not able to figure out what I am doing wrong. | 2019/10/16 | [
"https://Stackoverflow.com/questions/58411930",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5182512/"
] | In my case,
I changed
```
mysql://root:root@localhost:3307/db
```
to
```
mysql://root:root@<gateway ip>:3307/test_db
```
You can find the gateway IP with
```
docker network ls
docker network inspect <network name>
``` | Change 'HOST': 'db' to 'HOST': '**localhost**' in the settings.py file, because you map the default MySQL port from the container to the MySQL default port on your main host.
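Neither answer shows it, but a common way to keep the same settings.py usable both inside and outside Docker is to read the database host from an environment variable; a minimal sketch (the variable name `DB_HOST` is purely illustrative, not something from the question):
```
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'gmas_mkt',
        'USER': 'root',
        'PASSWORD': 'pulkit1607',
        # fall back to the compose service name when the variable is unset
        'HOST': os.environ.get('DB_HOST', 'db'),
        'PORT': '3306',
    }
}
```
With that in place, the compose file can set `DB_HOST=db` while a run outside Docker can point it at `127.0.0.1`.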
29,678,324 | I'm just learning Python, so be gentle. I want to read a file in one function and then have another function work on what the previous function "read". I am having trouble passing the result of one function to another. Here is what I have so far:
I want to call read\_file more than once and to be able to pass its result to more than one function; therefore, I do not want frame to be a global variable. How do I get read\_file to pass 'frame' to cost\_channelID directly? Or perhaps have cost\_channelID call read\_file?
```
def read_file():
user_input = raw_input("please put date needed in x.xx form: ")
path = r'C:\\Users\\CP\\documents\\' + user_input
allFiles = glob.glob(path + '/*.csv')
frame = pd.DataFrame()
list = []
for file in allFiles:
df = pd.read_csv(file,index_col=None,header=0)
list.append(df)
frame =pd.concat(list,ignore_index=True)
def cost_channelID():
numbers =r'[0,1,2,3,4,5,6,7,8,9]'
Ads = frame['Ad']
ID = []
for ad in Ads:
num = ''.join(re.findall(numbers,ad)[1:7])
ID.append(num)
ID = pd.Series(ID)
pieces = [frame,ID]
frame2 = pd.concat(pieces,ignore_index=True,axis=1)
frame2 = frame2.rename(columns={0:'Ad',1:'Ad Impressions',2:'Total Ad Spend',3:'eCPM (Total Ad Spend/Ad)',4:'Ad Attempts',5:'ID'})
```
Any and all help is greatly appreciated! | 2015/04/16 | [
"https://Stackoverflow.com/questions/29678324",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4797120/"
] | Here is your modified code (comments in uppercase for easier finding, not rudeness):
```
def read_file():
    user_input = raw_input("please put date needed in x.xx form: ")
    path = r'C:\\Users\\CP\\documents\\' + user_input
    allFiles = glob.glob(path + '/*.csv')
    frame = pd.DataFrame()
    list = []
    for file in allFiles:
        df = pd.read_csv(file,index_col=None,header=0)
        list.append(df)
    frame = pd.concat(list,ignore_index=True)
    return(frame) #YOUR FUNCTION WILL BE RETURNING THE READ FRAME

def cost_channelID(read_frame): #YOU WILL BE RECEIVING A DIFFERENT FRAME EVERY TIME
    numbers = r'[0,1,2,3,4,5,6,7,8,9]'
    Ads = read_frame['Ad']
    ID = []
    for ad in Ads:
        num = ''.join(re.findall(numbers,ad)[1:7])
        ID.append(num)
    ID = pd.Series(ID)
    pieces = [read_frame,ID]
    frame2 = pd.concat(pieces,ignore_index=True,axis=1)
    frame2 = frame2.rename(columns={0:'Ad',1:'Ad Impressions',2:'Total Ad Spend',3:'eCPM (Total Ad Spend/Ad)',4:'Ad Attempts',5:'ID'})
    #HERE, DID YOU MEAN TO USE FRAME2 WITHIN THE FUNCTION? THAT IS WHAT IS HAPPENING BECAUSE OF THE INDENTATION

frame1 = read_file() #YOU CAN READ AS MANY FRAMES AS YOU WANT AND THEY'LL BE KEPT IN SEPARATE FRAMES
frame2 = read_file()
#...framexx = read_file()
#AND YOU CAN JUST CALL cost_channelID ON ANY OF THEM (OR ANY OTHER FUNCTION)
cost_channelID(frame1)
cost_channelID(frame2)
#... AND SO ON
``` | If you are trying to pass frame from one function to the other, you need to declare it outside the scope of the function. Otherwise we need more information about what you are trying to accomplish.
```
frame = None

def read_file():
    global frame  # without this, the assignment below creates a new local variable instead of updating the module-level frame
    user_input = raw_input("please put date needed in x.xx form: ")
    path = r'C:\\Users\\CP\\documents\\' + user_input
    allFiles = glob.glob(path + '/*.csv')
    frame = pd.DataFrame()
    list = []
    for file in allFiles:
        df = pd.read_csv(file,index_col=None,header=0)
        list.append(df)
    frame = pd.concat(list,ignore_index=True)

def cost_channelID():
    numbers = r'[0,1,2,3,4,5,6,7,8,9]'
    Ads = frame['Ad']
    ID = []
    for ad in Ads:
        num = ''.join(re.findall(numbers,ad)[1:7])
        ID.append(num)
    ID = pd.Series(ID)
    pieces = [frame,ID]
    frame2 = pd.concat(pieces,ignore_index=True,axis=1)
    frame2 = frame2.rename(columns={0:'Ad',1:'Ad Impressions',2:'Total Ad Spend',3:'eCPM (Total Ad Spend/Ad)',4:'Ad Attempts',5:'ID'})
```
Here is a good reference for scope rules in python
[Short Description of the Scoping Rules?](https://stackoverflow.com/questions/291978/short-description-of-python-scoping-rules) |
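To make the trade-off between the two approaches concrete, here is a minimal sketch (plain values instead of DataFrames, names invented for illustration) of the return-and-pass style next to the module-level-variable style:
```
# Style 1: return the value and pass it on (no globals needed)
def read_value():
    return 42

def use_value(value):
    print(value * 2)

use_value(read_value())

# Style 2: module-level variable, rebound with `global`
value = None

def read_value_global():
    global value  # required, otherwise the assignment below creates a local
    value = 42

def use_value_global():
    print(value * 2)

read_value_global()
use_value_global()
```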
51,171,741 | I am using Python for the first time to create a simple JSON parser. However, when printing the JSON data to the console, it includes many extra brackets and other symbols that are unwanted. I am also running Python 2.7.10.
```
import json
from urllib2 import urlopen
response = urlopen("https://finance.yahoo.com/webservice/v1/symbols/allcurrencies/quote?format=json")
source = response.read()
# print(source)
data = json.loads(source)
# print(json.dumps(data, indent=2))
usd_rates = dict()
for item in data['list']['resources']:
    name = item['resource']['fields']['name']
    price = item['resource']['fields']['price']
    usd_rates[name] = price
    print(name, price)
```
And the output is as follows:
[](https://i.stack.imgur.com/pW15b.png)
When I try to change the python version to 3.7.10:
[](https://i.stack.imgur.com/9R47A.png) | 2018/07/04 | [
"https://Stackoverflow.com/questions/51171741",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6131554/"
] | I think you are actually printing a tuple with python 2 `print` syntax and the `u` character is a unicode flag ([What exactly do "u" and "r" string flags do, and what are raw string literals?](https://stackoverflow.com/questions/2081640/what-exactly-do-u-and-r-string-flags-do-and-what-are-raw-string-literals)).
Also, in Python 3 you can't use `urllib2`; you would have to use `urllib.request` instead.
This code works for me (python 3.6.5):
```
import json
from urllib.request import urlopen
response = urlopen("https://finance.yahoo.com/webservice/v1/symbols/allcurrencies/quote?format=json")
source = response.read()
data = json.loads(source)
usd_rates = dict()
for item in data['list']['resources']:
    name = item['resource']['fields']['name']
    price = item['resource']['fields']['price']
    usd_rates[name] = price
    print(name, price)
```
EDIT ---------
From the image you posted it looks like you have python 3 installed but your `usr/bin/python` is a symbolic link to `usr/bin/python2`.
If you want to run python 3 by default you could create an alias.
Check this link for more info <https://askubuntu.com/questions/320996/how-to-make-python-program-command-execute-python-3>
(should be valid info for macs too) | Convert name and price to string
`print(str(name), str(price))`
or
use
`name = str(item['resource']['fields']['name'])` |
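As a side note on why the parentheses and `u` prefixes show up at all: under Python 2, `print(name, price)` is the `print` statement applied to a tuple, while under Python 3 (or with the `__future__` import) it is a function call. A minimal sketch, with made-up values:
```
# -*- coding: utf-8 -*-
name, price = u'SomeCurrency', u'1.2345'

# Python 2: `print` is a statement, so this line prints the tuple repr:
#   (u'SomeCurrency', u'1.2345')
# Python 3: `print` is a function, so it prints:  SomeCurrency 1.2345
print(name, price)

# Opting in to the Python 3 behaviour under Python 2:
# from __future__ import print_function   # must be at the very top of the file
```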
70,423,743 | I'm new to Python and programming in general. I have this project to create a simple library to add new authors and upload books. I must also display the uploaded book details (total words, most common words, different words, etc.) when I open the book. Now, to do this I know that I first must open the uploaded book in read mode, but I don't know how to access the file that I uploaded.
This is my models:
```
from django.db import models
from authors.models import Author
class Book(models.Model):
    author = models.ForeignKey(Author, on_delete=models.CASCADE)
    book_name = models.CharField(max_length=50)
    genre = models.CharField(max_length=50)
    files = models.FileField(upload_to='books/files/')

    def __str__(self):
        return self.book_name
```
This is my forms:
```
from django import forms
from .models import Book
class BookForm(forms.ModelForm):
    class Meta:
        model = Book
        fields = ('book_name', 'author', 'genre', 'files')
```
My views:
```
def create(request):
if request.method == 'POST':
form = BookForm(request.POST, request.FILES)
if form.is_valid():
form.save()
return redirect('books:home')
elif request.method == 'GET':
form = BookForm()
return render(request, 'books/create.html', {
'form': form
})
def detail(request, pk):
book = get_object_or_404(Book, pk=pk)
return render(request, 'books/detail.html', context={'book': book})
```
My details.html:
```
{% extends 'base.html' %}
{% block title %} {{ book.book_name }} {% endblock %}
{% block content %}
<p>Book title: {{ book.book_name }}</p>
<p>Genre: {{ book.genre }}</p>
<p>Author:
<a href="{% url 'authors:detail' book.author.pk %}">{{ book.author.full_name }}</a>
</p>
<p><button type="button"><a href="{{ book.files.url }}" target="_blank">Shkarko</a></button></p>
{% endblock %}
```
I want it to be shown in details.html.
Thanks in advance! | 2021/12/20 | [
"https://Stackoverflow.com/questions/70423743",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17725025/"
] | ```
<v-btn
class="mr-4"
:loading="saveLoading == index"
@click="submit(item, index)"
>
data: () => ({
valid: true,
saveLoading: -1,
})
submit (formItem, index) {
this.saveLoading = index
console.log(formItem)
// POST the formItem and make the saveLoading false in async then(), catch()
},
```
It is not a very elegant method, but this way you can solve the problem. | If you are generating the forms from the API response, it means that in some way you are attaching the response to the Vue data property. If so, you could easily enrich the objects of the array (the forms object) with an isLoading property. So the result will be something like:
```
API RESPONSE
[
{
form_name: 'Form One',
name: 'Peter',
email: 'peter@email.com'
},
{
form_name: 'Form Two',
name: 'John',
email: 'john@email.com'
}
]
```
and store is as:
```
Stored object
[
{
form_name: 'Form One',
name: 'Peter',
email: 'peter@email.com',
isLoading: false
},
{
form_name: 'Form Two',
name: 'John',
email: 'john@email.com',
isLoading: false
}
]
```
Set the loading status in the submit(item) method:
```
submit(item){
item.isLoading = true;
}
```
then you can use it by checking the value of isLoading on the form object, something on the lines of:
```
<v-form
v-for="(item, index) in forms" :key="index"
>
<v-btn
:loading="item.isLoading"
@click="submit(item)"
>
submit
</v-btn>
</v-form>
``` |
70,423,743 | I'm new to Python and programming in general. I have this project to create a simple library to add new authors and upload books. I must also display the uploaded book details (total words, most common words, different words, etc.) when I open the book. Now, to do this I know that I first must open the uploaded book in read mode, but I don't know how to access the file that I uploaded.
This is my models:
```
from django.db import models
from authors.models import Author
class Book(models.Model):
    author = models.ForeignKey(Author, on_delete=models.CASCADE)
    book_name = models.CharField(max_length=50)
    genre = models.CharField(max_length=50)
    files = models.FileField(upload_to='books/files/')

    def __str__(self):
        return self.book_name
```
This is my forms:
```
from django import forms
from .models import Book
class BookForm(forms.ModelForm):
    class Meta:
        model = Book
        fields = ('book_name', 'author', 'genre', 'files')
```
My views:
```
def create(request):
if request.method == 'POST':
form = BookForm(request.POST, request.FILES)
if form.is_valid():
form.save()
return redirect('books:home')
elif request.method == 'GET':
form = BookForm()
return render(request, 'books/create.html', {
'form': form
})
def detail(request, pk):
book = get_object_or_404(Book, pk=pk)
return render(request, 'books/detail.html', context={'book': book})
```
My details.html:
```
{% extends 'base.html' %}
{% block title %} {{ book.book_name }} {% endblock %}
{% block content %}
<p>Book title: {{ book.book_name }}</p>
<p>Genre: {{ book.genre }}</p>
<p>Author:
<a href="{% url 'authors:detail' book.author.pk %}">{{ book.author.full_name }}</a>
</p>
<p><button type="button"><a href="{{ book.files.url }}" target="_blank">Shkarko</a></button></p>
{% endblock %}
```
I want it to be shown in details.html.
Thanks in advance! | 2021/12/20 | [
"https://Stackoverflow.com/questions/70423743",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17725025/"
] | The problem is that you have only one variable. You could do `saveLoading1` to `saveLoadingN` but that's not very useful.
```js
var forms = [
{
form_name: 'Form One',
name: 'Peter',
email: 'peter@email.com'
},
{
form_name: 'Form Two',
name: 'John',
email: 'john@email.com'
}
]
function generate_computed_properties() {
var states = {};
for (let [ index, item ] of forms.entries()) {
states['saveLoading_' + index] = false
}
return { states }
}
new Vue({
el: '#app',
data() {
return {
forms: window.forms,
...generate_computed_properties()
}
},
methods: {
submit(index) {
this.setLoadingState(index, true)
setTimeout((function() {
this.setLoadingState(index, false)
}).bind(this), 1000)
},
setLoadingState(index, state) {
this.states['saveLoading_' + index] = state
},
},
computed: {
},
template: `<div>
<form
v-for="(item, index) in forms" :key="index"
>
<h2>{{ item.form_name }}</h2>
<button
:disabled="states['saveLoading_' + index]"
@click.prevent="submit(index)"
>
<span v-if="states['saveLoading_' + index]">Loading …</span>
<span v-else>submit</span>
</button>
</form>
</div>`,
})
```
```html
<script src="https://cdnjs.cloudflare.com/ajax/libs/vue/2.5.17/vue.js"></script>
<div id="app"></div>
```
It'll be better to split your forms into subcomponents:
```js
var forms = [
{
form_name: 'Form One',
name: 'Peter',
email: 'peter@email.com'
},
{
form_name: 'Form Two',
name: 'John',
email: 'john@email.com'
}
];
var Button = Vue.component('v-btn', {
props: [ 'loading' ],
template: `<button :disabled="loading" @click="handleClick">
<slot v-if="!loading"></slot>
<span v-else>Saving …</span>
</button>`,
methods: {
handleClick() {
this.$emit('click');
}
}
})
var Form = Vue.component('v-form', {
name: "Form",
props: [ 'form' ],
data() {
return {
saveLoading: false
}
},
template: `<div>
<h2>{{ form.form_name }}</h2>
<v-btn
:loading="saveLoading"
@click="submit(form)"
>
submit
</v-btn>
</div>`,
methods: {
submit() {
console.log('save')
this.setLoading(true);
setTimeout(this.setLoading.bind(this, false), 1000)
},
setLoading(isLoading) {
this.saveLoading = isLoading
}
}
});
new Vue({
el: '#app',
components: { Form },
data() {
return {
forms: window.forms
}
},
template: `<div>
<v-form
v-for="(item, index) in forms" :key="index" :form="item"
>
</v-form>
</div>`
})
```
```html
<script src="https://cdnjs.cloudflare.com/ajax/libs/vue/2.5.17/vue.js"></script>
<div id="app"></div>
``` | If you are generating the forms from the API response, it means that in some way you are attaching the response to the Vue data property. If so, you could easily enrich the objects of the array (the forms object) with an isLoading property. So the result will be something like:
```
API RESPONSE
[
{
form_name: 'Form One',
name: 'Peter',
email: 'peter@email.com'
},
{
form_name: 'Form Two',
name: 'John',
email: 'john@email.com'
}
]
```
and store is as:
```
Stored object
[
{
form_name: 'Form One',
name: 'Peter',
email: 'peter@email.com',
isLoading: false
},
{
form_name: 'Form Two',
name: 'John',
email: 'john@email.com',
isLoading: false
}
]
```
Set the loading status in the submit(item) method:
```
submit(item){
item.isLoading = true;
}
```
then you can use it by checking the value of isLoading on the form object, something on the lines of:
```
<v-form
v-for="(item, index) in forms" :key="index"
>
<v-btn
:loading="item.isLoading"
@click="submit(item)"
>
submit
</v-btn>
</v-form>
``` |
37,143,664 | Just trying to pull some lat/lon info from EXIF data on a bunch of photos, but my code is throwing a `KeyError` even though that key is used (successfully) later on to print specific coordinates.
Dictionary in question is "`tags`" - `'GPS GPSLatitude'` and `'GPS GPSLongitude'` are both keys in `tags.keys()`; I've triple checked.
So any intuition on why `tags['GPS GPSLatitude']` & `tags['GPS GPSLongitude']` are throwing key errors?
```
import os
import exifread
output = dict()
output['name'] = []
output['lon'] = []
output['lat'] = []
for file in os.listdir(path):
    if file.endswith(".JPG"):
        full_path = path + file
        print (file) #check to ensure all files were found
        output['name'].append(file) #append photo name to dictionary
        f = open(full_path, 'rb') #open photo
        tags = exifread.process_file(f) #read exifdata
        # lon = tags['GPS GPSLongitude'] #this + next line = one method
        # output['lon'].append(lon)
        # output['lat'].append(tags['GPS GPSLatitude']) # = cleaner second method
        for tag in tags.keys():
            if tag in ('GPS GPSLongitude','GPS GPSLatitude'):
                print ("Key: %s, value %s" % (tag, tags[tag])) #successfully prints lat/lon coords with 'GPS GPSLongitude' and 'GPS GPSLatitude' as keys
```
UPDATE:
Here's the output of `print (tags.keys())` -- you'll see `GPS GPSLatitude` and `GPS GPSLongitude` in there. Also, have manually checked all the photos in the subset I'm using have GPS data.
`dict_keys(['GPS GPSImgDirection', 'EXIF SceneType', 'MakerNote Tag 0x0006', 'GPS GPSDestBearing', 'Thumbnail XResolution', 'EXIF BrightnessValue', 'GPS GPSAltitude', 'GPS GPSLongitude', 'EXIF LensSpecification', 'GPS GPSAltitudeRef', 'GPS GPSSpeedRef', 'GPS GPSDestBearingRef', 'EXIF WhiteBalance', 'Thumbnail ResolutionUnit', 'EXIF FocalLengthIn35mmFilm', 'EXIF SceneCaptureType', 'Image Model', 'MakerNote Tag 0x0008', 'Image Make', 'EXIF ShutterSpeedValue', 'MakerNote Tag 0x0007', 'EXIF ExifImageWidth', 'EXIF LensModel', 'Image YResolution', 'EXIF ComponentsConfiguration', 'Image GPSInfo', 'EXIF ISOSpeedRatings', 'EXIF ExposureMode', 'EXIF Flash', 'EXIF FlashPixVersion', 'GPS GPSLatitudeRef', 'EXIF ExposureBiasValue', 'Thumbnail JPEGInterchangeFormatLength', 'Thumbnail Compression', 'Image YCbCrPositioning', 'EXIF MakerNote', 'EXIF FNumber', 'JPEGThumbnail', 'MakerNote Tag 0x0001', 'EXIF ColorSpace', 'EXIF SubSecTimeDigitized', 'Thumbnail JPEGInterchangeFormat', 'MakerNote Tag 0x0004', 'EXIF SubjectArea', 'Image ResolutionUnit', 'EXIF SensingMethod', 'Image DateTime', 'Image Orientation', 'EXIF ExifVersion', 'Image ExifOffset', 'GPS GPSImgDirectionRef', 'MakerNote Tag 0x0014', 'Thumbnail YResolution', 'EXIF DateTimeOriginal', 'MakerNote Tag 0x0005', 'EXIF LensMake', 'EXIF DateTimeDigitized', 'MakerNote Tag 0x0003', 'GPS GPSTimeStamp', 'EXIF ExposureTime', 'GPS Tag 0x001F', 'EXIF SubSecTimeOriginal', 'GPS GPSLatitude', 'Image Software', 'EXIF ApertureValue', 'GPS GPSDate', 'EXIF ExposureProgram', 'GPS GPSSpeed', 'EXIF ExifImageLength', 'EXIF MeteringMode', 'GPS GPSLongitudeRef', 'EXIF FocalLength', 'Image XResolution'])`
Traceback
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-14-949ba89a1248> in <module>()
16 # lon = tags["GPS GPSLongitude"]
17 # output['lon'].append(lon)
---> 18 output['lat'].append(tags['GPS GPSLatitude'])
19 for tag in tags.keys():
20 if tag in ('GPS GPSLongitude','GPS GPSLatitude'):
KeyError: 'GPS GPSLatitude'
```
Link to photo: [https://drive.google.com/a/cornell.edu/file/d/0B1DwcbbAH1yuTEs0cUhhODdlNnc/view](https://drive.google.com/a/cornell.edu/file/d/0B1DwcbbAH1yuTEs0cUhhODdlNnc/view)
Output of the print statement for this photo
```
IMG_6680.JPG
Key: GPS GPSLongitude, value [76, 29, 353/20]
Key: GPS GPSLatitude, value [42, 26, 5069/100]
``` | 2016/05/10 | [
"https://Stackoverflow.com/questions/37143664",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5957741/"
] | `GPS GPSLatitude` and `GPS GPSLongitude` may not be present in all tag dicts.
Instead of accessing the keys as `tags['GPS GPSLatitude']` & `tags['GPS GPSLongitude']`, you can access them as `tags.get('GPS GPSLatitude')` & `tags.get('GPS GPSLongitude')`. This will return `None` instead of throwing an error, and you can then apply an if-else condition to handle the case where these keys are not present. | I think @BryanOakley has the right idea. If the key isn't in the dict, it isn't there. (Those fields are optional, and some files might not have the data.) So you can use the `dict.get(key, default=None)` approach, and replace the KeyError with a default value.
```
jpegs = [file for file in os.listdir(path) if file.endswith('.JPG')]
locns = []
for jpeg in jpegs:
    with open(os.path.join(path, jpeg), 'rb') as jpf:  # join with the directory, as the question's full_path does
        exif = exifread.process_file(jpf)
        lat = exif.get('GPS GPSLatitude', 'Not available')
        lon = exif.get('GPS GPSLongitude', 'Not available')
        locns.append((lat, lon))
``` |
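For completeness, two other ways to guard against the missing key, sketched with the same `tags` and `output` names used in the question:
```
# Explicit membership test
if 'GPS GPSLatitude' in tags and 'GPS GPSLongitude' in tags:
    output['lat'].append(tags['GPS GPSLatitude'])
    output['lon'].append(tags['GPS GPSLongitude'])

# Or catch the KeyError for photos without GPS data
try:
    output['lat'].append(tags['GPS GPSLatitude'])
    output['lon'].append(tags['GPS GPSLongitude'])
except KeyError:
    pass  # this photo has no GPS tags
```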
58,673,628 | I have been learning Python and programming for not so long, so you may find my question silly.
I am reviewing generators and trying to generate 'yes', 'no' infinitely just to understand the concept.
I have tried this code, but I am getting "yes" each time
```
def yes_or_no():
    answer = ["yes","no"]
    i=0
    while True:
        if i >=2:
            i=0
        yield answer[i]
        i+=1
c=next(yes_or_no())
print(c)
print(c)
print(c)
print(c)
``` | 2019/11/02 | [
"https://Stackoverflow.com/questions/58673628",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12308317/"
] | `yes_or_no()` produces a new generator each time it is called; you want to call `next` on the same generator object each time, rather than printing the same first element over and over.
```
c = yes_or_no()
print(next(c))
print(next(c))
# etc.
```
That said, there's no need for a separate counter; just yield `yes`, then yield `no`, then repeat.
```
def yes_or_no():
    while True:
        yield "yes"
        yield "no"
``` | You need to initialize the generator and then call `next` on the initialized generator object:
```
c = yes_or_no()
```
Now you need to call `next` on `c`:
```
print(next(c))
print(next(c))
```
---
In your current code `c=next(yes_or_no())`:
* `yes_or_no()` will initialize the generator, and calling `next` on it will get the first `yes`; you're saving that `yes` under the name `c`
* In the next lines, you're just printing the same `yes` referred to by `c` each time you do `print(c)` (see the short demo below)
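A minimal runnable comparison of the two situations described above:
```
def yes_or_no():
    while True:
        yield "yes"
        yield "no"

# A fresh generator per call: every call starts over at "yes".
print(next(yes_or_no()))   # yes
print(next(yes_or_no()))   # yes

# One generator object, advanced each time: alternates as intended.
gen = yes_or_no()
print(next(gen))           # yes
print(next(gen))           # no
print(next(gen))           # yes
```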
58,673,628 | I have been learning Python and programming for not so long, so you may find my question silly.
I am reviewing generators and trying to generate 'yes', 'no' infinitely just to understand the concept.
I have tried this code, but I am getting "yes" each time
```
def yes_or_no():
    answer = ["yes","no"]
    i=0
    while True:
        if i >=2:
            i=0
        yield answer[i]
        i+=1
c=next(yes_or_no())
print(c)
print(c)
print(c)
print(c)
``` | 2019/11/02 | [
"https://Stackoverflow.com/questions/58673628",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12308317/"
] | `yes_or_no()` produces a new generator each time it is called; you want to call `next` on the same generator object each time, rather than printing the same first element over and over.
```
c = yes_or_no()
print(next(c))
print(next(c))
# etc.
```
That said, there's no need for a separate counter; just yield `yes`, then yield `no`, then repeat.
```
def yes_or_no():
    while True:
        yield "yes"
        yield "no"
``` | Your function does return a generator, and, as others have said, all you need to do is iterate over it with a loop or by calling `next` in succession. Python also provides a great library called `itertools` to do exactly this; it's called [`itertools.cycle`](https://docs.python.org/3/library/itertools.html#itertools.cycle). This is all the code you need to replicate your function's ability:
```
import itertools

def yes_no():
    return itertools.cycle(['yes', 'no'])
```
And just like others have said, a generator can be iterated over using `next` or a loop.
```
>>> c = yes_no()
>>> next(c)
'yes'
>>> next(c)
'no'
...
``` |
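If you ever need only a finite number of values from the infinite cycle, `itertools.islice` pairs naturally with it; a small sketch:
```
import itertools

answers = itertools.cycle(['yes', 'no'])

# take the first five values without looping forever
for value in itertools.islice(answers, 5):
    print(value)   # yes, no, yes, no, yes
```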
19,351,065 | I'm trying to get a deeper understanding of how Python works, and I've been looking at the grammar shown at <http://docs.python.org/3.3/reference/grammar.html>.
I notice it says you would have to change parsermodule.c also, but truthfully I'm just not following what's going on here.
I understand that a grammar is a specification for how to read the language, but...I can't even tell what this is written in. It looks almost like Python but then it isn't.
I'm looking to get a better understanding of this specification and how it is used internally by Python to....do things. What depends on it (the answer is everything, but I mean specifically which aspect of the "engine" is processing it), what uses it, how does it tie in to compiling/running a script?
It's hard to believe that the whole language comes down to a two page specification... | 2013/10/13 | [
"https://Stackoverflow.com/questions/19351065",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1029146/"
] | That is basically an [EBNF](http://en.wikipedia.org/wiki/EBNF) (Extended Backus–Naur Form) specification. | When you write a program in a language, the very first thing your interpreter/compiler must do in order to go from a sequence of characters to actual action is to translate that sequence of characters into a higher-level structure. To do so, it first chunks up your program into a sequence of tokens expressing what each "word" represents. For example, the construct
```
if foo == 3: print 'hello'
```
will be converted into
```
1,0-1,2: NAME 'if'
1,3-1,6: NAME 'foo'
1,7-1,9: OP '=='
1,10-1,11: NUMBER '3'
1,11-1,12: OP ':'
1,13-1,18: NAME 'print'
1,19-1,26: STRING "'hello'"
2,0-2,0: ENDMARKER ''
```
But note that even something like "if if if if" is correctly made into tokens
```
1,0-1,2: NAME 'if'
1,3-1,5: NAME 'if'
1,6-1,8: NAME 'if'
1,9-1,11: NAME 'if'
2,0-2,0: ENDMARKER ''
```
What follows the tokenization is the parsing into a higher-level structure that analyzes whether the tokens actually make sense taken together, something that the latter example does not, but the first does. To do so, the parser must recognize the actual meaning of the tokens (e.g. the `if` is a keyword, and `foo` is a variable), then build a tree out of the tokens, organizing them in a hierarchy, and see if this hierarchy actually makes sense. Here is where the grammar you are seeing comes in. That grammar is in BNF, which is a notation to express the constructs the language can recognize. That grammar is digested by a program (for example, bison) which has the magic property of taking that grammar and generating actual C code that does the heavy work for you, normally by recognizing the tokens, organizing them, returning you a parse tree, or telling you where there's a mistake.
Short version: developing a language is about defining tokens and how these tokens are put together to give something meaningful. This is done through the grammar, which you use to generate the actual "parser" code with automated tools. |
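The token stream shown above can be reproduced with the standard library; a minimal sketch using the `tokenize` module (the output format differs slightly from the listing above):
```
import io
import tokenize

source = "if foo == 3: print 'hello'\n"

# tokenize is purely lexical, so even Python 2 style source tokenizes fine here
for tok in tokenize.generate_tokens(io.StringIO(source).readline):
    print(tok.start, tok.end, tokenize.tok_name[tok.type], repr(tok.string))
```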
19,351,065 | I'm trying to get a deeper understanding of how Python works, and I've been looking at the grammar shown at <http://docs.python.org/3.3/reference/grammar.html>.
I notice it says you would have to change parsermodule.c also, but truthfully I'm just not following what's going on here.
I understand that a grammar is a specification for how to read the language, but...I can't even tell what this is written in. It looks almost like Python but then it isn't.
I'm looking to get a better understanding of this specification and how it is used internally by Python to....do things. What depends on it (the answer is everything, but I mean specifically which aspect of the "engine" is processing it), what uses it, how does it tie in to compiling/running a script?
It's hard to believe that the whole language comes down to a two page specification... | 2013/10/13 | [
"https://Stackoverflow.com/questions/19351065",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1029146/"
] | A grammar is used to describe all possible strings in a language. It is also useful in specifying how a parser should parse the language.
In this grammar it seems like they are using their own version of [EBNF](http://en.wikipedia.org/wiki/Extended_Backus%E2%80%93Naur_Form), where a non-terminal is any lowercase word and a terminal is all uppercase or surrounded by quotes. For example, NEWLINE is a terminal, arith\_expr is a non-terminal and 'if' is also a terminal. Any non-terminal can be replaced by anything to the right of the colon of its respective production rule. For example, if you look at the first rule:
>
> single\_input: NEWLINE | simple\_stmt | compound\_stmt NEWLINE
>
>
>
We can replace single\_input with one of either a NEWLINE, a simple\_stmt or a compound\_stmt followed by a NEWLINE. Suppose we replaced it with "compound\_stmt NEWLINE", then we would look for the production rule for compound\_stmt:
>
> compound\_stmt: if\_stmt | while\_stmt | for\_stmt | try\_stmt | with\_stmt | funcdef | classdef | decorated
>
>
>
and choose which of these we want to use, and substitute it for "compound\_stmt" (keeping NEWLINE in its place)
Suppose we wanted to generate the valid python program:
```
if 5 < 2 + 3 or not 1 == 5:
    raise
```
We could use the following derivation:
1. single\_input
2. compound\_stmt NEWLINE
3. if\_stmt NEWLINE
4. 'if' test ':' suite NEWLINE
5. 'if' or\_test ':' NEWLINE INDENT stmt stmt DEDENT NEWLINE
6. 'if' and\_test 'or' and\_test ':' NEWLINE INDENT simple\_stmt DEDENT NEWLINE
7. 'if' not\_test 'or' not\_test ':' NEWLINE INDENT small\_stmt DEDENT NEWLINE
8. 'if' comparison 'or' 'not' not\_test ':' NEWLINE INDENT flow\_stmt DEDENT NEWLINE
9. 'if' expr comp\_op expr 'or' 'not' comparison ':' NEWLINE INDENT raise\_stmt DEDENT NEWLINE
10. 'if' arith\_expr '<' arith\_expr 'or' 'not' arith\_expr comp\_op arith\_expr ':' NEWLINE INDENT 'raise' DEDENT NEWLINE
11. 'if' term '<' term '+' term 'or' 'not' arith\_expr == arith\_expr ':' NEWLINE INDENT 'raise' DEDENT NEWLINE
12. 'if' NUMBER '<' NUMBER '+' NUMBER 'or' 'not' NUMBER == NUMBER ':' NEWLINE INDENT 'raise' DEDENT NEWLINE
A couple of notes here. Firstly, we must start with one of the non-terminals which is listed as a starting non-terminal. In that page, they list them as single\_input, file\_input, or eval\_input. Secondly, a derivation is finished once all the symbols are terminal (hence the name). Thirdly, it is more common to do one substitution per line; for the sake of brevity I did all possible substitutions at once and started skipping steps near the end.
Given a string in the language, how do we find its derivation? This is the job of a parser. A parser reverse-engineers a production sequence to first check that it is indeed a valid string, and furthermore how it can be derived from the grammar. It's worth noting that many grammars can describe a single language. However, for a given string, its derivation will of course be different for each grammar. So technically we write a parser for a grammar, not a language. Some grammars are easier to parse, some grammars are easier to read/understand. This one belongs in the former group.
Also this doesn't specify the entire language, just what it looks like. A grammar says nothing about semantics.
If you're interested in more about parsing and grammar I recommend [Grune, Jacobs - Parsing Techniques](http://dickgrune.com/Books/PTAPG_1st_Edition/). It's free and good for self-study. | That is basically an [EBNF](http://en.wikipedia.org/wiki/EBNF) (Extended Backus–Naur Form) specification. |
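You can also watch CPython build the corresponding tree for the derivation example with the standard `ast` module; a small sketch:
```
import ast

# the same statement used in the derivation above
source = "if 5 < 2 + 3 or not 1 == 5:\n    raise\n"

tree = ast.parse(source)
print(ast.dump(tree))
```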
19,351,065 | I'm trying to get a deeper understanding of how Python works, and I've been looking at the grammar shown at <http://docs.python.org/3.3/reference/grammar.html>.
I notice it says you would have to change parsermodule.c also, but truthfully I'm just not following what's going on here.
I understand that a grammar is a specification for how to read the language, but...I can't even tell what this is written in. It looks almost like Python but then it isn't.
I'm looking to get a better understanding of this specification and how it is used internally by Python to....do things. What depends on it (the answer is everything, but I mean specifically which aspect of the "engine" is processing it), what uses it, how does it tie in to compiling/running a script?
It's hard to believe that the whole language comes down to a two page specification... | 2013/10/13 | [
"https://Stackoverflow.com/questions/19351065",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1029146/"
] | The Python grammar - as most others are - is given in [BNF](http://en.wikipedia.org/wiki/Backus%E2%80%93Naur_Form), or **Backus–Naur Form**. Try reading up on how to read it, but the basic structure is:
```
<something> ::= (<something defined elsewhere> | [some fixed things]) [...]
```
This is read as: a `<something>` **is defined as** `something else` **or** any of the fixed things, repeated a multitude of times.
BNF is based on a nearly 2000-year-old format for describing the permitted structure of a language; it is incredibly terse and will describe all the allowed structures in a given language, *not necessarily all those that would make sense*.
Example
-------
Basic arithmetic can be described as:
```
<simple arithmetic expression> ::= <numeric expr>[ ]...(<operator>[ ]...<numeric expr>|<simple arithmetic expression>)
<numeric expr> ::= [<sign>]<digit>[...][.<digit>[...]]
<sign> ::= +|-
<operator> ::= [+-*/]
<digit> ::= [0123456789]
```
This says that a simple arithmetic operation is an optionally signed number, consisting of one or more digits, possibly with a decimal point and one or more further digits, optionally followed by spaces, followed by exactly one of `+-*/`, optionally followed by spaces, followed by either a number or another simple arithmetic expression, i.e. a number followed by an operator, and so on.
This describes, *just about,* all of the basic arithmetic operations and can be extended to include functions, etc. Notice that it does allow operations with valid syntax that are still invalid, e.g.: `22.34 / -0.0` is valid syntactically even though the result is not valid.
It can sometimes make you aware that operations are possible that you might not have thought of, e.g.: `56+-50` is a valid operation as is `2*-10` but `2*/3` is not.
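A throwaway sketch of how such a grammar maps onto code: the regular expression below recognises just the non-recursive `<numeric expr> <operator> <numeric expr>` shape described above (the recursive branch of the rule is deliberately left out):
```
import re

# <numeric expr> ::= [<sign>]<digit>[...][.<digit>[...]]
numeric = r'[+-]?\d+(?:\.\d+)?'
# <numeric expr>, optional spaces, one of + - * /, optional spaces, <numeric expr>
simple_expr = re.compile(r'^{n} *[+\-*/] *{n}$'.format(n=numeric))

for text in ['22.34 / -0.0', '56+-50', '2*-10', '2*/3']:
    print(text, '->', bool(simple_expr.match(text)))   # True, True, True, False
```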
Note that [SGML](https://en.wikipedia.org/wiki/Standard_Generalized_Markup_Language) and [XML](https://en.wikipedia.org/wiki/XML)/[Schema](https://en.wikipedia.org/wiki/Schema.org) are both related but different methodologies for describing the structure of **any** language. YAML is another method for describing the allowed structures in **computer**-specific languages.
*Disclaimer: My BNF is a little rusty, so if I have made any major mistakes in the above, my apologies; please correct me.* | When you write a program in a language, the very first thing your interpreter/compiler must do in order to go from a sequence of characters to actual action is to translate that sequence of characters into a higher-level structure. To do so, it first chunks up your program into a sequence of tokens expressing what each "word" represents. For example, the construct
```
if foo == 3: print 'hello'
```
will be converted into
```
1,0-1,2: NAME 'if'
1,3-1,6: NAME 'foo'
1,7-1,9: OP '=='
1,10-1,11: NUMBER '3'
1,11-1,12: OP ':'
1,13-1,18: NAME 'print'
1,19-1,26: STRING "'hello'"
2,0-2,0: ENDMARKER ''
```
But note that even something like "if if if if" is correctly made into tokens
```
1,0-1,2: NAME 'if'
1,3-1,5: NAME 'if'
1,6-1,8: NAME 'if'
1,9-1,11: NAME 'if'
2,0-2,0: ENDMARKER ''
```
What follows the tokenization is the parsing into a higher-level structure that analyzes whether the tokens actually make sense taken together, something that the latter example does not, but the first does. To do so, the parser must recognize the actual meaning of the tokens (e.g. the `if` is a keyword, and `foo` is a variable), then build a tree out of the tokens, organizing them in a hierarchy, and see if this hierarchy actually makes sense. Here is where the grammar you are seeing comes in. That grammar is in BNF, which is a notation to express the constructs the language can recognize. That grammar is digested by a program (for example, bison) which has the magic property of taking that grammar and generating actual C code that does the heavy work for you, normally by recognizing the tokens, organizing them, returning you a parse tree, or telling you where there's a mistake.
Short version: developing a language is about defining tokens and how these tokens are put together to give something meaningful. This is done through the grammar, which you use to generate the actual "parser" code with automated tools. |
19,351,065 | I'm trying to get a deeper understanding of how Python works, and I've been looking at the grammar shown at <http://docs.python.org/3.3/reference/grammar.html>.
I notice it says you would have to change parsermodule.c also, but truthfully I'm just not following what's going on here.
I understand that a grammar is a specification for how to read the language, but...I can't even tell what this is written in. It looks almost like Python but then it isn't.
I'm looking to get a better understanding of this specification and how it is used internally by Python to....do things. What depends on it (the answer is everything, but I mean specifically which aspect of the "engine" is processing it), what uses it, how does it tie in to compiling/running a script?
It's hard to believe that the whole language comes down to a two page specification... | 2013/10/13 | [
"https://Stackoverflow.com/questions/19351065",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1029146/"
] | A grammar is used to describe all possible strings in a language. It is also useful in specifying how a parser should parse the language.
In this grammar it seems like they are using their own version of [EBNF](http://en.wikipedia.org/wiki/Extended_Backus%E2%80%93Naur_Form), where a non-terminal is any lowercase word and a terminal is all uppercase or surrounded by quotes. For example, NEWLINE is a terminal, arith\_expr is a non-terminal and 'if' is also a terminal. Any non-terminal can be replaced by anything to the right of the colon of its respective production rule. For example, if you look at the first rule:
>
> single\_input: NEWLINE | simple\_stmt | compound\_stmt NEWLINE
>
>
>
We can replace single\_input with one of either a NEWLINE, a simple\_stmt or a compound\_stmt followed by a NEWLINE. Suppose we replaced it with "compound\_stmt NEWLINE", then we would look for the production rule for compound\_stmt:
>
> compound\_stmt: if\_stmt | while\_stmt | for\_stmt | try\_stmt | with\_stmt | funcdef | classdef | decorated
>
>
>
and choose which of these we want to use, and substitute it for "compound\_stmt" (keeping NEWLINE in its place)
Suppose we wanted to generate the valid python program:
```
if 5 < 2 + 3 or not 1 == 5:
raise
```
We could use the following derivation:
1. single\_input
2. compound\_stmt NEWLINE
3. if\_stmt NEWLINE
4. 'if' test ':' suite NEWLINE
5. 'if' or\_test ':' NEWLINE INDENT stmt stmt DEDENT NEWLINE
6. 'if' and\_test 'or' and\_test ':' NEWLINE INDENT simple\_stmt DEDENT NEWLINE
7. 'if' not\_test 'or' not\_test ':' NEWLINE INDENT small\_stmt DEDENT NEWLINE
8. 'if' comparison 'or' 'not' not\_test ':' NEWLINE INDENT flow\_stmt DEDENT NEWLINE
9. 'if' expr comp\_op expr 'or' 'not' comparison ':' NEWLINE INDENT raise\_stmt DEDENT NEWLINE
10. 'if' arith\_expr '<' arith\_expr 'or' 'not' arith\_expr comp\_op arith\_expr ':' NEWLINE INDENT 'raise' DEDENT NEWLINE
11. 'if' term '<' term '+' term 'or' 'not' arith\_expr == arith\_expr ':' NEWLINE INDENT 'raise' DEDENT NEWLINE
12. 'if' NUMBER '<' NUMBER '+' NUMBER 'or' 'not' NUMBER == NUMBER ':' NEWLINE INDENT 'raise' DEDENT NEWLINE
A couple of notes here, firstly, we must start with one of the non-terminals which is listed as a starting non-terminal. In that page, they list them as single\_input, file\_input, or eval\_input. Secondly, a derivation is finished once all the symbols are terminal (hence the name). Thirdly, it is more common to do one substitution per line, for the sake of brevity I did all possible substitutions at once and started skipping steps near the end.
Given a string in the language, how do we find its derivation? This is the job of a parser. A parser reverse-engineers a production sequence, first checking that the string is indeed valid and then working out how it can be derived from the grammar. It's worth noting that many grammars can describe a single language; however, for a given string, the derivation will of course be different for each grammar. So technically we write a parser for a grammar, not a language. Some grammars are easier to parse, some grammars are easier to read/understand. This one belongs to the former category.
Also this doesn't specify the entire language, just what it looks like. A grammar says nothing about semantics.
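As a side note of mine (not part of the original answer): you can ask CPython itself to parse the example and print the tree it builds with the `ast` module, which is a handy way to see the grammar in action. The `indent` argument needs Python 3.9+.

```
import ast

source = "if 5 < 2 + 3 or not 1 == 5:\n    raise\n"
tree = ast.parse(source)           # parsed according to the grammar rules above
print(ast.dump(tree, indent=4))    # Module(body=[If(test=BoolOp(op=Or(), ...), body=[Raise(...)], ...)])
```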
If you're interested in more about parsing and grammar I recommend [Grune, Jacobs - Parsing Techniques](http://dickgrune.com/Books/PTAPG_1st_Edition/). It's free and good for self-study. | The python grammar - as most others - is given in [BNF](http://en.wikipedia.org/wiki/Backus%E2%80%93Naur_Form) or **Backus–Naur Form**. Try reading up on how to read it but the basic structure is:
```
<something> ::= (<something defined elsewhere> | [some fixed things]) [...]
```
This is read as a `<something>` **is defined as** `something else` **or** any of the fixed things repeated a multitude of times.
BNF is based on a nearly 2000 year old format for describing the permitted structure of a language, is incredibly terse and will describe all the allowed structures in a given language, *not necessarily all those that would make sense*.
Example
-------
Basic arithmetic can be described as:
```
<simple arithmetic expression> ::= <numeric expr>[ ]...(<operator>[ ]...<numeric expr>|<simple arithmetic expression>)
<numeric expr> ::= [<sign>]<digit>[...][.<digit>[...]]
<sign> ::= +|-
<operator> ::= [+-*/]
<digit> ::= [0123456789]
```
Which says that a simple arithmetic operation is an, optionally signed, number consisting of one or more digits, possibly with a decimal point and one, or more, subsequent digits, optionally followed by spaces, followed by exactly one of `+-*/`, optionally followed by spaces, followed by either a number or another simple arithmetic operation, i.e. a number followed by, etc.
This describes, *just about,* all of the basic arithmetic operations and can be extended to include functions, etc. Notice that it does allow operations that are syntactically valid even though they make no sense, e.g.: `22.34 / -0.0` is valid syntactically even though the result is not valid.
It can sometimes make you aware that operations are possible that you might not have thought of, e.g.: `56+-50` is a valid operation as is `2*-10` but `2*/3` is not.
Note that [SGML](https://en.wikipedia.org/wiki/Standard_Generalized_Markup_Language) and [XML](https://en.wikipedia.org/wiki/XML)/[Schema](https://en.wikipedia.org/wiki/Schema.org) are both related but different methodologies for describing the structure of **any** language. YAML is another method for describing the allowed structures in **computer**-specific languages.
*Disclaimer: My BNF is a little rusty so if I have made any major mistakes in the above my apologies and please correct me.* |
19,351,065 | I'm trying to get a deeper understanding of how Python works, and I've been looking at the grammar shown at <http://docs.python.org/3.3/reference/grammar.html>.
I notice it says you would have to change parsermodule.c also, but truthfully I'm just not following what's going on here.
I understand that a grammar is a specification for how to read the language, but...I can't even tell what this is written in. It looks almost like Python but then it isn't.
I'm looking to get a better understanding of this specification and how it is used internally by Python to....do things. What depends on it (the answer is everything, but I mean specifically which aspect of the "engine" is processing it), what uses it, how does it tie in to compiling/running a script?
It's hard to believe that the whole language comes down to a two page specification... | 2013/10/13 | [
"https://Stackoverflow.com/questions/19351065",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1029146/"
] | A grammar is used to describe all possible strings in a language. It is also useful in specifying how a parser should parse the language.
In this grammar it seems like they are using their own version of [EBNF](http://en.wikipedia.org/wiki/Extended_Backus%E2%80%93Naur_Form), where a non-terminal is any lowercase word and a terminal is all uppercase or surrounded by quotes. For example, NEWLINE is a terminal, arith\_expr is a non-terminal and 'if' is also a terminal. Any non-terminal can be replaced by anything to the right of the colon of its respective production rule. For example, if you look at the first rule:
>
> single\_input: NEWLINE | simple\_stmt | compound\_stmt NEWLINE
>
>
>
We can replace single\_input with one of either a NEWLINE, a simple\_stmt or a compound\_stmt followed by a NEWLINE. Suppose we replaced it with "compound\_stmt NEWLINE", then we would look for the production rule for compound\_stmt:
>
> compound\_stmt: if\_stmt | while\_stmt | for\_stmt | try\_stmt | with\_stmt | funcdef | classdef | decorated
>
>
>
and choose which of these we want to use, and substitute it for "compound\_stmt" (keeping NEWLINE in its place)
Suppose we wanted to generate the valid python program:
```
if 5 < 2 + 3 or not 1 == 5:
raise
```
We could use the following derivation:
1. single\_input
2. compound\_stmt NEWLINE
3. if\_stmt NEWLINE
4. 'if' test ':' suite NEWLINE
5. 'if' or\_test ':' NEWLINE INDENT stmt stmt DEDENT NEWLINE
6. 'if' and\_test 'or' and\_test ':' NEWLINE INDENT simple\_stmt DEDENT NEWLINE
7. 'if' not\_test 'or' not\_test ':' NEWLINE INDENT small\_stmt DEDENT NEWLINE
8. 'if' comparison 'or' 'not' not\_test ':' NEWLINE INDENT flow\_stmt DEDENT NEWLINE
9. 'if' expr comp\_op expr 'or' 'not' comparison ':' NEWLINE INDENT raise\_stmt DEDENT NEWLINE
10. 'if' arith\_expr '<' arith\_expr 'or' 'not' arith\_expr comp\_op arith\_expr ':' NEWLINE INDENT 'raise' DEDENT NEWLINE
11. 'if' term '<' term '+' term 'or' 'not' arith\_expr == arith\_expr ':' NEWLINE INDENT 'raise' DEDENT NEWLINE
12. 'if' NUMBER '<' NUMBER '+' NUMBER 'or' 'not' NUMBER == NUMBER ':' NEWLINE INDENT 'raise' DEDENT NEWLINE
A couple of notes here, firstly, we must start with one of the non-terminals which is listed as a starting non-terminal. In that page, they list them as single\_input, file\_input, or eval\_input. Secondly, a derivation is finished once all the symbols are terminal (hence the name). Thirdly, it is more common to do one substitution per line, for the sake of brevity I did all possible substitutions at once and started skipping steps near the end.
Given a string in the language, how do we find its derivation? This is the job of a parser. A parser reverse-engineers a production sequence, first checking that the string is indeed valid and then working out how it can be derived from the grammar. It's worth noting that many grammars can describe a single language; however, for a given string, the derivation will of course be different for each grammar. So technically we write a parser for a grammar, not a language. Some grammars are easier to parse, some grammars are easier to read/understand. This one belongs to the former category.
Also this doesn't specify the entire language, just what it looks like. A grammar says nothing about semantics.
If you're interested in more about parsing and grammar I recommend [Grune, Jacobs - Parsing Techniques](http://dickgrune.com/Books/PTAPG_1st_Edition/). It's free and good for self-study. | When you write a program in a language, the very first thing your interpreter/compiler must do in order to go from a sequence of characters to actual action is to translate that sequence of characters into a higher-level structure. To do so, it first chunks up your program into a sequence of tokens expressing what each "word" represents. For example, the construct
```
if foo == 3: print 'hello'
```
will be converted into
```
1,0-1,2: NAME 'if'
1,3-1,6: NAME 'foo'
1,7-1,9: OP '=='
1,10-1,11: NUMBER '3'
1,11-1,12: OP ':'
1,13-1,18: NAME 'print'
1,19-1,26: STRING "'hello'"
2,0-2,0: ENDMARKER ''
```
But note that even something like "if if if if" is correctly made into tokens
```
1,0-1,2: NAME 'if'
1,3-1,5: NAME 'if'
1,6-1,8: NAME 'if'
1,9-1,11: NAME 'if'
2,0-2,0: ENDMARKER ''
```
What follows the tokenization is the parsing into a higher-level structure that checks whether the tokens actually make sense taken together, something that the latter example does not, but the first does. To do so, the parser must recognize the actual meaning of the tokens (e.g. that `if` is a keyword and `foo` is a variable), then build a tree out of the tokens, organizing them in a hierarchy, and see if this hierarchy actually makes sense. Here is where the grammar you are seeing comes in. That grammar is written in BNF, which is a notation for expressing the constructs the language can recognize. That grammar is digested by a program (for example, bison) which has the magic property of taking that grammar and generating actual C code that does the heavy work for you, normally by recognizing the tokens, organizing them, returning you a parse tree, or telling you where there's a mistake.
Short version: developing a language is about defining tokens and how these tokens are put together to give something meaningful. This is done through the grammar, which you use to generate the actual "parser" code with automated tools. |
53,515,926 | I have data stored in parquet files and a hive table partitioned by year, month, day. Thus, each parquet file is stored in a `/table_name/year/month/day/` folder.
I want to read in data for only some of the partitions. I have list of paths to individual partitions as follows:
```py
paths_to_files = ['hdfs://data/table_name/2018/10/29',
'hdfs://data/table_name/2018/10/30']
```
And then try to do something like:
```py
df = sqlContext.read.format("parquet").load(paths_to_files)
```
However, then my data does not include the information about `year, month and day`, as this is not part of the data per se, rather the information is stored in the path to the file.
I could use the sql context and send a hive query with some select statement with a where clause on the year, month and day columns to select only data from the partitions I am interested in. However, I'd rather avoid constructing SQL queries in Python as I am very lazy and don't like reading SQL.
**I have two questions:**
1. what is the optimal way (performance-wise) to read in the data stored as parquet, where information about year, month, day is not present in the parquet file, but is only included in the path to the file? (either send hive query using `sqlContext.sql('...')`, or use `read.parquet`,... anything really.
2. Can i somehow extract the partitioning columns when using the
approach i outlined above? | 2018/11/28 | [
"https://Stackoverflow.com/questions/53515926",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7064628/"
] | Reading the direct file paths to the parent directory of the year partitions should be enough for a dataframe to determine there's partitions under it. However, it wouldn't know what to name the partitions without the directory structure `/year=2018/month=10`, for example.
Therefore, if you have Hive, then going via the metastore would be better because the partitions are named there, Hive stores extra useful information about your table, and then you're not reliant on knowing the direct path to the files on disk from the Spark code.
Not sure why you think you need to read/write SQL, though.
Use the DataFrame API instead, e.g.:
```py
df = spark.table("table_name")
df_2018 = df.filter(df['year'] == 2018)
df_2018.show()
``` | Your data isn't stored in a way optimal for parquet so you'd have to load files one by one and add the dates
Alternatively, you can move the files to a directory structure fit for parquet
( e.g. .../table/year=2018/month=10/day=29/file.parquet)
then you can read the parent directory (table) and filter on year, month, and day (Spark will only read the relevant directories); you'd also get these as columns in your dataframe.
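A minimal PySpark sketch of that layout-based approach (assuming the files have already been moved into `year=/month=/day=` folders; the path is a placeholder):

```
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Spark discovers year/month/day as partition columns from the folder names
df = spark.read.parquet("hdfs://data/table_name")

selected = df.filter(
    (df["year"] == 2018) & (df["month"] == 10) & (df["day"].isin(29, 30))
)
selected.show()   # only the matching partition directories are actually read
```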
12,568,689 | I am new to python (2nd day) and working on a problem that asks me to write a program that reads ASCII files (asks for file name as input), checks if it has more than
two words and prints out the two first words of the file on screen.
It's a little vague but I am going to assume the file is all str, delimited by spaces.
ex.
```
text1 text2 text text text
```
So far I have:
```
name = (raw_input("Please enter the name of the file: "))
f=open(name)
with codecs.open(name, encoding='utf-8') as f:
for line in f:
line = line.lstrip(BOM)
words=line.split()
print words
if len(words) > 2:
print 'There are more than two words'
firsttow = words[:2]
print firstrow
```
I am having problems writing the else statement, I want to have,
```
if len(words) > 2:
print 'There are more than two words'
firsttow = words[:2]
print firstrow
else:
if len(words) <2:
print 'There are under 2 words, no words will be shown'
```
How should this be added and is there any other ways to improve my code for this questions?
I really appreciate the help
Thanks in advance
\*Edit: Thanks for all the help, the last problem I had was: when I run the .py file, I want to be able to see the results before the cmd window closes.
Adding: `raw_input("Press return to close this window...")` does not work and it closes right away. Any ideas?
Edit2\*: This is my current code; I am still trying to keep the cmd window open after the script finishes
```
import codecs
BOM = codecs.BOM_UTF8.decode('utf8')
name = (raw_input("Please enter the name of the file: "))
with codecs.open(name, encoding='utf-8') as f:
words=[] #define words here
for line in f:
line = line.lstrip(BOM)
words.extend(line.split()) #append words from each line to words
if len(words) > 2:
print 'There are more than two words'
firstrow = words[:2]
print firstrow #indentation problem here
elif len(words) <2: #use if
print 'There are under 2 words, no words will be shown'
raw_input("Press return to close this window...")
``` | 2012/09/24 | [
"https://Stackoverflow.com/questions/12568689",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1690243/"
] | That code should be written as:
```
if len(words) > 2:
print 'There are more than two words'
    firstrow = words[:2]
print firstrow
elif len(words) <2:
print 'There are under 2 words, no words will be shown'
```
Note the indentation, and the use of `elif` (which means "else if"). | ```
with codecs.open(name, encoding='utf-8') as f:
words=[] #define words here
for line in f:
line = line.lstrip(BOM)
words.extend(line.split()) #append words from each line to words
if len(words) > 2:
print 'There are more than two words'
    firstrow = words[:2]
print firstrow #indentation problem here
if len(words) <2: #use if
print 'There are under 2 words, no words will be shown'
``` |
5,093,153 | I'm wondering how to go about implementing a macro recorder for a python gui (probably PyQt, but ideally agnostic). Something much like in Excel but instead of getting VB macros, it would create python code. Previously I made something for Tkinter where all callbacks pass through a single class that logged actions. Unfortunately my class doing the logging was a bit ugly and I'm looking for a nicer one. While this did make a nice separation of the gui from the rest of the code, it seems to be unusual in terms of the usual signals/slots wiring. Is there a better way?
The intention is that a user can work their way through a data analysis procedure in a graphical interface, seeing the effect of their decisions. Later the recorded procedure could be applied to other data with minor modification and without needing to start up the gui. | 2011/02/23 | [
"https://Stackoverflow.com/questions/5093153",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/98967/"
] | Thinking in high level, this is what I'd do:
Develop a decorator function with which I'd decorate every event-handling function.
This decorator function would take note of the function called, and of its parameters (and possibly return values), in a unified data structure - taking care, on this data structure, to mark Widget and Control instances as a special type of object. That is because in other runs these widgets won't be the same instances - ah, you can't even serialize a toolkit widget instance, be it Qt or otherwise.
When the time comes to play a macro, you fill in the gaps, replacing the widget-representing objects with the instances of the actually running objects, and simply call the original functions with the remaining parameters.
In toolkits that have a specialized "event" parameter that is passed down to event-handling functions, you will have to take care of serializing and de-serializing this event as well.
I hope this can help. I could come up with some proof of concept code for that (although I am in a mood to use tkinter today - would have to read a lot to come up with a Qt4 example). | An example of what you're looking for is in [mayavi2](http://code.enthought.com/projects/mayavi/docs/development/html/mayavi/application.html#automatic-script-generation). For your purposes, mayavi2's "script record" functionality will generate a Python script that can then be trivially modified for other cases. I hear that it works pretty well. |
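A very small proof of concept of the decorator idea sketched above might look like this (a toolkit-agnostic sketch of mine, not tested against any real GUI; widget and event arguments would still need the special handling discussed in that answer):

```
import functools

RECORDED = []   # each entry: (function name, args, kwargs)

def recordable(func):
    """Log every call so the session can be replayed later without the GUI."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # Real widgets/events are not picklable; a full recorder would
        # substitute placeholder objects here instead of storing them raw.
        RECORDED.append((func.__name__, args, kwargs))
        return func(*args, **kwargs)
    return wrapper

@recordable
def smooth_data(window=5):
    print("smoothing with window", window)

smooth_data(window=3)
print(RECORDED)   # [('smooth_data', (), {'window': 3})]
```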
5,093,153 | I'm wondering how to go about implementing a macro recorder for a python gui (probably PyQt, but ideally agnostic). Something much like in Excel but instead of getting VB macros, it would create python code. Previously I made something for Tkinter where all callbacks pass through a single class that logged actions. Unfortunately my class doing the logging was a bit ugly and I'm looking for a nicer one. While this did make a nice separation of the gui from the rest of the code, it seems to be unusual in terms of the usual signals/slots wiring. Is there a better way?
The intention is that a user can work their way through a data analysis procedure in a graphical interface, seeing the effect of their decisions. Later the recorded procedure could be applied to other data with minor modification and without needing to start up the gui. | 2011/02/23 | [
"https://Stackoverflow.com/questions/5093153",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/98967/"
] | You could apply the command design pattern: when your user executes an action, generate a command that represents the changes required. You then implement some sort of command pipeline that executes the commands themselves, most likely just calling the methods you already have. Once the commands are executed, you can serialize them or take note of them the way you want and load the series of commands when you need to re-execute the procedure. | An example of what you're looking for is in [mayavi2](http://code.enthought.com/projects/mayavi/docs/development/html/mayavi/application.html#automatic-script-generation). For your purposes, mayavi2's "script record" functionality will generate a Python script that can then be trivially modified for other cases. I hear that it works pretty well. |
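A bare-bones sketch of the command-pattern idea described above (class and method names are made up for illustration; serialization here is just JSON of the parameters):

```
import json

class Command:
    """One recorded user action, decoupled from the GUI that triggered it."""
    def __init__(self, name, **params):
        self.name, self.params = name, params

    def execute(self, target):
        # Dispatch to an existing method on the analysis object
        getattr(target, self.name)(**self.params)

    def to_json(self):
        return json.dumps({"name": self.name, "params": self.params})

class Analysis:
    def normalize(self, factor):
        print("normalizing by", factor)

pipeline = [Command("normalize", factor=2.5)]
analysis = Analysis()
for cmd in pipeline:
    cmd.execute(analysis)                    # runs Analysis.normalize(factor=2.5)
print([cmd.to_json() for cmd in pipeline])   # re-loadable record of the session
```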
2,032,706 | i have been trying to run some pinax code inside pydev eclipse
i keep on having this error
Error: Can't import Pinax. Make sure you are in a virtual environment that has Pinax installed or create one with pinax-boot.py.
my question is how do i run pinax inside eclipse using django built in server
i am python newbie | 2010/01/09 | [
"https://Stackoverflow.com/questions/2032706",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/81850/"
] | How about
```
map.connect ':user/:repo/commit/:sha', :action => :index
```
Or use `map.resource` instead of `map.connect` if you need a RESTful route.
In the controller, the URL information can be retrieved from params, for example `params[:user]` returns the username. | You can name your routes as you like, and specify which controllers and actions you'd like to use them with.
For example, you might have:
```
map.connect ':user/:repo/commit/:sha', :controller => 'transactions', :action => 'commit'
```
This would send the request to the 'commit' method in 'transactions' controller.
The other variables can then be accessed in your controller using params:
```
@user = User.find(params[:user])
@repo = Repo.find(params[:repo])
``` |
2,032,706 | i have been trying to run some pinax code inside pydev eclipse
i keep on having this error
Error: Can't import Pinax. Make sure you are in a virtual environment that has Pinax installed or create one with pinax-boot.py.
my question is how do i run pinax inside eclipse using django built in server
i am python newbie | 2010/01/09 | [
"https://Stackoverflow.com/questions/2032706",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/81850/"
] | If commit is a RESTful controller that uses :sha instead of an id to find records, you could do this instead:
```
map.resource :commits, :path_prefix => ':user/:repo', :as => 'commit'
```
It will create the standard RESTful routes that look like `http://yoursite.tld/:user/:repo/commit/:id`
Again, if you'll never be translating the id portion in the url to a commit id, then there's no reason you can't use it as a :sha value.
example:
```
class CommitController < ApplicationController
def show
@commit = Commit.find(:first, :conditions => {:sha => params[:id]})
end
...
end
```
You may also want to override to\_param in the commit model to return the sha value.
```
class Commit < ActiveRecord::Base
...
def to_param
sha
end
end
```
So that now `link_to commit_url(@commit, :user => current_user, :repo => @repo)` will provide a url that matches your scheme. | How about
```
map.connect ':user/:repo/commit/:sha', :action => :index
```
Or use `map.resource` instead of `map.connect` if you need a RESTful route.
In the controller, the URL information can be retrieved from params, for example `params[:user]` returns the username. |
2,032,706 | i have been trying to run some pinax code inside pydev eclipse
i keep on having this error
Error: Can't import Pinax. Make sure you are in a virtual environment that has Pinax installed or create one with pinax-boot.py.
my question is how do i run pinax inside eclipse using django built in server
i am python newbie | 2010/01/09 | [
"https://Stackoverflow.com/questions/2032706",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/81850/"
] | If commit is a RESTful controller that uses :sha instead of an id to find records, you could do this instead:
```
map.resource :commits, :path_prefix => ':user/:repo', :as => 'commit'
```
It will create the standard RESTful routes that look like `http://yoursite.tld/:user/:repo/commit/:id`
Again, if you'll never be translating the id portion in the url to a commit id, then there's no reason you can't use it as a :sha value.
example:
```
class CommitController < ApplicationController
def show
@commit = Commit.find(:first, :conditions => {:sha => params[:id]})
end
...
end
```
You may also want to override to\_param in the commit model to return the sha value.
```
class Commit < ActiveRecord::Base
...
def to_param
sha
end
end
```
So that now `link_to commit_url(@commit, :user => current_user, :repo => @repo)` will provide a url that matches your scheme. | You can name your routes as you like, and specify which controllers and actions you'd like to use them with.
For example, you might have:
```
map.connect ':user/:repo/commit/:sha', :controller => 'transactions', :action => 'commit'
```
This would send the request to the 'commit' method in 'transactions' controller.
The other variables can then be accessed in your controller using params:
```
@user = User.find(params[:user])
@repo = Repo.find(params[:repo])
``` |
7,321,113 | I'm using a Cairo/RSVG based solution for rasterizing SVG to PNG. It's already been described on StackOverflow in [Convert SVG to PNG in Python](https://stackoverflow.com/questions/6589358/convert-svg-to-png-in-python).
However, this solution doesn't seem to work with custom fonts.
I've found [this page describing embedding SVG fonts](http://frabru.de/c.php/article/SVGFonts-usage).
I've tried to include them from external SVG via XLink, as described in example. I've tried embedding the font directly in the same SVG file. Having failed with that, I've tried CSS Web Fonts syntax. None of the 3 methods works when rendering with Cairo (nor does it work in the Ubuntu's default viewer Eye of GNOME).
I've tried ImageMagick, with exactly same results as Cairo.
On the other hand with all 3 font embedding methods, fonts are rendered fine in WebKit, either using Google Chrome or [webkit2png.py](https://github.com/AdamN/python-webkit2png). However, if possible I'd like to avoid using QT WebKit on the server, as it's a non-trivial setup (including xvfb etc.), and I'm afraid it might not be very efficient or stable.
Is there any alternative method of rendering SVG to PNG from Python? | 2011/09/06 | [
"https://Stackoverflow.com/questions/7321113",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/60711/"
] | I have spent a week researching this very issue and concluded that the best way to handle server-side rendering/rasterizing of SVG with custom fonts is to install those fonts on the server. The tools I tried (rsvg, imagemagick, phantomjs, qtwebkit...) could not handle web fonts and svg fonts.
Google has [several hundred fonts](http://code.google.com/p/googlefontdirectory/) (and growing) that one can download and use on a server.
* Download OTF or TTF fonts
* Install them on the server and refresh the font cache
* Replace the CSS definitions of the external fonts with font family names in your SVG docs (a rough sketch of these steps follows below).
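A rough sketch of those steps driven from Python (the font directory, fontconfig's `fc-cache` and `rsvg-convert` are assumptions about a typical Linux server, not something this answer prescribes):

```
import shutil
import subprocess

# 1. copy the downloaded TTF/OTF into a directory fontconfig scans
shutil.copy("OpenSans-Regular.ttf", "/usr/local/share/fonts/")

# 2. refresh the font cache so librsvg/Cairo can see the new family
subprocess.run(["fc-cache", "-f"], check=True)

# 3. rasterize an SVG that refers to the font by plain family name
subprocess.run(["rsvg-convert", "-o", "out.png", "label.svg"], check=True)
```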
And if you also need to use those same fonts in a web page, you could link to Google CDN directly for the WOFF files to conserve your own server time and network bandwidth. | You can try to use [inkscape](http://www.inkscape.org), perhaps this gives you better results:
```
inkscape inputfile.svg --export-png=exportfile.png
```
Running this from python is described here: [Calling an external command in Python](https://stackoverflow.com/questions/89228/how-to-call-external-command-in-python) |
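Calling that from Python is essentially one `subprocess` call, sketched below (the `--export-png` flag matches the 0.x command line shown above; Inkscape 1.x renamed it to `--export-filename`):

```
import subprocess

subprocess.run(
    ["inkscape", "inputfile.svg", "--export-png=exportfile.png"],
    check=True,   # raise CalledProcessError if inkscape fails
)
```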
7,321,113 | I'm using a Cairo/RSVG based solution for rasterizing SVG to PNG. It's already been described on StackOverflow in [Convert SVG to PNG in Python](https://stackoverflow.com/questions/6589358/convert-svg-to-png-in-python).
However, this solution doesn't seem to work with custom fonts.
I've found [this page describing embedding SVG fonts](http://frabru.de/c.php/article/SVGFonts-usage).
I've tried to include them from external SVG via XLink, as described in example. I've tried embedding the font directly in the same SVG file. Having failed with that, I've tried CSS Web Fonts syntax. None of the 3 methods works when rendering with Cairo (nor does it work in the Ubuntu's default viewer Eye of GNOME).
I've tried ImageMagick, with exactly same results as Cairo.
On the other hand with all 3 font embedding methods, fonts are rendered fine in WebKit, either using Google Chrome or [webkit2png.py](https://github.com/AdamN/python-webkit2png). However, if possible I'd like to avoid using QT WebKit on the server, as it's a non-trivial setup (including xvfb etc.), and I'm afraid it might not be very efficient or stable.
Is there any alternative method of rendering SVG to PNG from Python? | 2011/09/06 | [
"https://Stackoverflow.com/questions/7321113",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/60711/"
] | I have spent a week researching this very issue and concluded that the best way to handle server-side rendering/rasterizing of SVG with custom fonts is to install those fonts on the server. The tools I tried (rsvg, imagemagick, phantomjs, qtwebkit...) could not handle web fonts and svg fonts.
Google has [several hundred fonts](http://code.google.com/p/googlefontdirectory/) (and growing) that one can download and use on a server.
* Download OTF or TTF fonts
* Install them on the server and refresh the font cache
* Replace the CSS definitions of the external fonts with font family names in your SVG docs.
And if you also need to use those same fonts in a web page, you could link to Google CDN directly for the WOFF files to conserve your own server time and network bandwidth. | With Imagemagick I still struggle with svg rasterizing with fonts that are installed on the server and can be used in certain operations, but fail when using -convert from .svg to .png.... It seems to turn every type of text into arial. I think it may be a bug with ImageMagick or a certain format needed in the .svg |
7,321,113 | I'm using a Cairo/RSVG based solution for rasterizing SVG to PNG. It's already been described on StackOverflow in [Convert SVG to PNG in Python](https://stackoverflow.com/questions/6589358/convert-svg-to-png-in-python).
However, this solution doesn't seem to work with custom fonts.
I've found [this page describing embedding SVG fonts](http://frabru.de/c.php/article/SVGFonts-usage).
I've tried to include them from external SVG via XLink, as described in example. I've tried embedding the font directly in the same SVG file. Having failed with that, I've tried CSS Web Fonts syntax. None of the 3 methods works when rendering with Cairo (nor does it work in the Ubuntu's default viewer Eye of GNOME).
I've tried ImageMagick, with exactly same results as Cairo.
On the other hand with all 3 font embedding methods, fonts are rendered fine in WebKit, either using Google Chrome or [webkit2png.py](https://github.com/AdamN/python-webkit2png). However, if possible I'd like to avoid using QT WebKit on the server, as it's a non-trivial setup (including xvfb etc.), and I'm afraid it might not be very efficient or stable.
Is there any alternative method of rendering SVG to PNG from Python? | 2011/09/06 | [
"https://Stackoverflow.com/questions/7321113",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/60711/"
] | I have spent a week researching this very issue and concluded that the best way to handle server-side rendering/rasterizing of SVG with custom fonts is to install those fonts on the server. The tools I tried (rsvg, imagemagick, phantomjs, qtwebkit...) could not handle web fonts and svg fonts.
Google has [several hundred fonts](http://code.google.com/p/googlefontdirectory/) (and growing) that one can download and use on a server.
* Download OTF or TTF fonts
* Install them on the server and refresh the font cache
* Replace the CSS definitions of the external fonts with font family names in your SVG docs.
And if you also need to use those same fonts in a web page, you could link to Google CDN directly for the WOFF files to conserve your own server time and network bandwidth. | A couple of things to check with RSVG:
* That the font source files are in your system or user font path
* That font names in the SVG are not quoted, even if they contain spaces. librsvg will assume the quotes are part of the font name and it won't find the source files. |
66,977,521 | I want to run arbitrary "code" in an argument like an anonymous function in Python.
How to do this in one single line?
Lambdas do not seem to work, since they only take one expression.
```
def call_func(callback):
callback()
def f():
pkg_set_status(package_name, status)
print('ok')
call_func(f)
```
Update:
Adding some context since the question is not entirely clear in my opinion.
I would like to run something like a multiline lambda or anonymous function.
Something like:
```
call_func(lambda:
# my multiline code here
pkg_set_status(package_name, status)
print('ok')
)
```
so that the argument itself is the anonymous function body...
Update2: found the answer here
<https://softwareengineering.stackexchange.com/questions/99243/why-doesnt-python-allow-multi-line-lambdas>
Looks like python does not support multiline lambdas by design. | 2021/04/06 | [
"https://Stackoverflow.com/questions/66977521",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3274630/"
] | The structure defined by myList with the `<ol>` elements is never actually added to the document. If you concatenate the `<ol>`, then the `<li>` entries, then `</ol>` all to wrapper.innerHTML then it should work.
For example something like...
```
var myList = "<ol>";
for (var i = 0; i < properties.length; i++) {
myList += "<li>" + properties[i].unitNo + ": " + properties[i].owner + "</li>";
}
myList += "</ol>";
wrapper.innerHTML = myList;
``` | If you would check your structure in dev tools you would see there was no `ol` element in finale result.
So you can create it:
```
var myList = document.createElement("ol");
```
Then fill it with `li`:
```
myList.innerHTML
```
And then insert it:
```
idk.insertAdjacentElement("afterbegin", myList);
```
```js
function onloadFunction() {
var properties = [{
unitNo: "C-8-1",
owner: "Foo Yoke Wai"
},
{
unitNo: "C-3A-3A",
owner: "Chia Kim Hooi"
},
{
unitNo: "B-18-8",
owner: "Heng Tee See"
},
{
unitNo: "A-10-10",
owner: "Tang So Ny"
},
{
unitNo: "B-19-10",
owner: "Tang Xiao Mi"
},
];
var idk = document.getElementById("wrapper");
var myList = document.createElement("ol");
for (var i = 0; i < properties.length; i++) {
myList.innerHTML += "<li>" + properties[i].unitNo + ": " + properties[i].owner + "</li>";
}
idk.insertAdjacentElement("afterbegin", myList);
}
```
```html
<html>
<body onload="onloadFunction()">
<head></head>
<div id="wrapper"></div>
</body>
</html>
``` |