qid | question | date | metadata | response_j | response_k
---|---|---|---|---|---|
6,367,014 | In my `settings.py`, I have the following:
```
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
# Host for sending e-mail.
EMAIL_HOST = 'localhost'
# Port for sending e-mail.
EMAIL_PORT = 1025
# Optional SMTP authentication information for EMAIL_HOST.
EMAIL_HOST_USER = ''
EMAIL_HOST_PASSWORD = ''
EMAIL_USE_TLS = False
```
My email code:
```
from django.core.mail import EmailMessage
email = EmailMessage('Hello', 'World', to=['user@gmail.com'])
email.send()
```
Of course, if I set up a debugging server via `python -m smtpd -n -c DebuggingServer localhost:1025`, I can see the email in my terminal.
However, how do I actually send the email not to the debugging server but to user@gmail.com?
After reading your answers, let me get something straight:
1. Can't you use localhost (a simple Ubuntu PC) to send e-mails?
2. I thought in Django 1.3 `send_mail()` is somewhat deprecated and `EmailMessage.send()` is used instead? | 2011/06/16 | [
"https://Stackoverflow.com/questions/6367014",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/749477/"
] | I use Gmail as my SMTP server for Django. Much easier than dealing with postfix or whatever other server. I'm not in the business of managing email servers.
In settings.py:
```
EMAIL_USE_TLS = True
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_PORT = 587
EMAIL_HOST_USER = 'me@gmail.com'
EMAIL_HOST_PASSWORD = 'password'
```
**NOTE**: As of 2016, Gmail no longer allows this by default. You can either use an external service like SendGrid, or you can follow this tutorial from Google to lower the account's security settings and allow this option: <https://support.google.com/accounts/answer/6010255> | For **SendGrid - Django** Specifically:
[SendGrid Django Docs here](https://sendgrid.com/docs/Integrate/Frameworks/django.html)
Set these variables in
**settings.py**
```
EMAIL_HOST = 'smtp.sendgrid.net'
EMAIL_HOST_USER = 'sendgrid_username'
EMAIL_HOST_PASSWORD = 'sendgrid_password'
EMAIL_PORT = 587
EMAIL_USE_TLS = True
```
**in views.py**
```
from django.core.mail import send_mail
send_mail('Subject here', 'Here is the message.', 'from@example.com', ['to@example.com'], fail_silently=False)
``` |
6,367,014 | In my `settings.py`, I have the following:
```
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
# Host for sending e-mail.
EMAIL_HOST = 'localhost'
# Port for sending e-mail.
EMAIL_PORT = 1025
# Optional SMTP authentication information for EMAIL_HOST.
EMAIL_HOST_USER = ''
EMAIL_HOST_PASSWORD = ''
EMAIL_USE_TLS = False
```
My email code:
```
from django.core.mail import EmailMessage
email = EmailMessage('Hello', 'World', to=['user@gmail.com'])
email.send()
```
Of course, if I set up a debugging server via `python -m smtpd -n -c DebuggingServer localhost:1025`, I can see the email in my terminal.
However, how do I actually send the email not to the debugging server but to user@gmail.com?
After reading your answers, let me get something straight:
1. Can't you use localhost (a simple Ubuntu PC) to send e-mails?
2. I thought in Django 1.3 `send_mail()` is somewhat deprecated and `EmailMessage.send()` is used instead? | 2011/06/16 | [
"https://Stackoverflow.com/questions/6367014",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/749477/"
] | I had actually done this from Django a while back. Open up a legitimate GMail account & enter the credentials here. Here's my code -
```
import os
import smtplib
from django.conf import settings
from email import Encoders
from email.MIMEBase import MIMEBase
from email.MIMEText import MIMEText
from email.MIMEMultipart import MIMEMultipart

def sendmail(to, subject, text, attach=[], mtype='html'):
    ok = True
    gmail_user = settings.EMAIL_HOST_USER
    gmail_pwd = settings.EMAIL_HOST_PASSWORD
    msg = MIMEMultipart('alternative')
    msg['From'] = gmail_user
    msg['To'] = to
    msg['Cc'] = 'you@gmail.com'
    msg['Subject'] = subject
    msg.attach(MIMEText(text, mtype))
    for a in attach:
        part = MIMEBase('application', 'octet-stream')
        part.set_payload(open(a, 'rb').read())  # open each attachment path, not the list
        Encoders.encode_base64(part)
        part.add_header('Content-Disposition', 'attachment; filename="%s"' % os.path.basename(a))
        msg.attach(part)
    try:
        mailServer = smtplib.SMTP("smtp.gmail.com", 587)  # Gmail's STARTTLS port is 587
        mailServer.ehlo()
        mailServer.starttls()
        mailServer.ehlo()
        mailServer.login(gmail_user, gmail_pwd)
        mailServer.sendmail(gmail_user, [to, msg['Cc']], msg.as_string())
        mailServer.close()
    except Exception:
        ok = False
    return ok
``` | For **SendGrid - Django** Specifically:
[SendGrid Django Docs here](https://sendgrid.com/docs/Integrate/Frameworks/django.html)
Set these variables in
**settings.py**
```
EMAIL_HOST = 'smtp.sendgrid.net'
EMAIL_HOST_USER = 'sendgrid_username'
EMAIL_HOST_PASSWORD = 'sendgrid_password'
EMAIL_PORT = 587
EMAIL_USE_TLS = True
```
**in views.py**
```
from django.core.mail import send_mail
send_mail('Subject here', 'Here is the message.', 'from@example.com', ['to@example.com'], fail_silently=False)
``` |
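A side note on the imports above: the `email.MIMEBase` / `Encoders` style only exists in Python 2. On Python 3, the equivalent multipart message can be built with the `email.message.EmailMessage` API. A minimal sketch with placeholder addresses and dummy attachment bytes:

```python
from email.message import EmailMessage

# Build a message with a text body and a binary attachment.
msg = EmailMessage()
msg['Subject'] = 'Hello'
msg['From'] = 'me@example.com'
msg['To'] = 'user@example.com'
msg.set_content('World')  # plain-text body

# Attaching anything promotes the message to multipart/mixed automatically.
msg.add_attachment(b'\x00\x01\x02', maintype='application',
                   subtype='octet-stream', filename='data.bin')

print(msg.get_content_type())  # multipart/mixed
```

The resulting `msg` can be handed to `smtplib.SMTP.send_message()` without manual base64 encoding.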
6,367,014 | In my `settings.py`, I have the following:
```
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
# Host for sending e-mail.
EMAIL_HOST = 'localhost'
# Port for sending e-mail.
EMAIL_PORT = 1025
# Optional SMTP authentication information for EMAIL_HOST.
EMAIL_HOST_USER = ''
EMAIL_HOST_PASSWORD = ''
EMAIL_USE_TLS = False
```
My email code:
```
from django.core.mail import EmailMessage
email = EmailMessage('Hello', 'World', to=['user@gmail.com'])
email.send()
```
Of course, if I set up a debugging server via `python -m smtpd -n -c DebuggingServer localhost:1025`, I can see the email in my terminal.
However, how do I actually send the email not to the debugging server but to user@gmail.com?
After reading your answers, let me get something straight:
1. Can't you use localhost (a simple Ubuntu PC) to send e-mails?
2. I thought in Django 1.3 `send_mail()` is somewhat deprecated and `EmailMessage.send()` is used instead? | 2011/06/16 | [
"https://Stackoverflow.com/questions/6367014",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/749477/"
] | For Django version 1.7, if the above solutions don't work, try the following.
In **settings.py** add:
```
#For email
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_USE_TLS = True
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_HOST_USER = 'sender@gmail.com'
#Must generate a specific password for your app in your Gmail settings
EMAIL_HOST_PASSWORD = 'app_specific_password'
EMAIL_PORT = 587
#This did the trick
DEFAULT_FROM_EMAIL = EMAIL_HOST_USER
```
The last line did the trick for Django 1.7 | I had actually done this from Django a while back. Open up a legitimate Gmail account & enter the credentials here. Here's my code -
```
import os
import smtplib
from django.conf import settings
from email import Encoders
from email.MIMEBase import MIMEBase
from email.MIMEText import MIMEText
from email.MIMEMultipart import MIMEMultipart

def sendmail(to, subject, text, attach=[], mtype='html'):
    ok = True
    gmail_user = settings.EMAIL_HOST_USER
    gmail_pwd = settings.EMAIL_HOST_PASSWORD
    msg = MIMEMultipart('alternative')
    msg['From'] = gmail_user
    msg['To'] = to
    msg['Cc'] = 'you@gmail.com'
    msg['Subject'] = subject
    msg.attach(MIMEText(text, mtype))
    for a in attach:
        part = MIMEBase('application', 'octet-stream')
        part.set_payload(open(a, 'rb').read())  # open each attachment path, not the list
        Encoders.encode_base64(part)
        part.add_header('Content-Disposition', 'attachment; filename="%s"' % os.path.basename(a))
        msg.attach(part)
    try:
        mailServer = smtplib.SMTP("smtp.gmail.com", 587)  # Gmail's STARTTLS port is 587
        mailServer.ehlo()
        mailServer.starttls()
        mailServer.ehlo()
        mailServer.login(gmail_user, gmail_pwd)
        mailServer.sendmail(gmail_user, [to, msg['Cc']], msg.as_string())
        mailServer.close()
    except Exception:
        ok = False
    return ok
``` |
6,367,014 | In my `settings.py`, I have the following:
```
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
# Host for sending e-mail.
EMAIL_HOST = 'localhost'
# Port for sending e-mail.
EMAIL_PORT = 1025
# Optional SMTP authentication information for EMAIL_HOST.
EMAIL_HOST_USER = ''
EMAIL_HOST_PASSWORD = ''
EMAIL_USE_TLS = False
```
My email code:
```
from django.core.mail import EmailMessage
email = EmailMessage('Hello', 'World', to=['user@gmail.com'])
email.send()
```
Of course, if I set up a debugging server via `python -m smtpd -n -c DebuggingServer localhost:1025`, I can see the email in my terminal.
However, how do I actually send the email not to the debugging server but to user@gmail.com?
After reading your answers, let me get something straight:
1. Can't you use localhost (a simple Ubuntu PC) to send e-mails?
2. I thought in Django 1.3 `send_mail()` is somewhat deprecated and `EmailMessage.send()` is used instead? | 2011/06/16 | [
"https://Stackoverflow.com/questions/6367014",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/749477/"
] | You need to use **smtp as backend** in settings.py
```
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
```
If you use the console backend instead, the output is printed to the console:
```
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
```
Also add the settings below:
```
EMAIL_USE_TLS = True
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_PORT = 587
EMAIL_HOST_USER = 'yourusername@gmail.com'
EMAIL_HOST_PASSWORD = 'password'
```
If you are using **Gmail** for this, set up **2-step verification** and an **application-specific password**, then copy and paste that password into the EMAIL\_HOST\_PASSWORD value above. | I found using SendGrid to be the easiest way to set up sending email with Django. Here's how it works:
1. [Create a SendGrid account](https://app.sendgrid.com/signup) (and verify your email)
2. Add the following to your `settings.py`:
```
EMAIL_HOST = 'smtp.sendgrid.net'
EMAIL_HOST_USER = '<your sendgrid username>'
EMAIL_HOST_PASSWORD = '<your sendgrid password>'
EMAIL_PORT = 587
EMAIL_USE_TLS = True
```
And you're all set!
To send email:
```
from django.core.mail import send_mail
send_mail('<Your subject>', '<Your message>', 'from@example.com', ['to@example.com'])
```
If you want Django to email you whenever there's a 500 internal server error, add the following to your `settings.py`:
```
DEFAULT_FROM_EMAIL = 'your.email@example.com'
ADMINS = [('<Your name>', 'your.email@example.com')]
```
Sending email with SendGrid is free up to 12k emails per month. |
58,414,393 | Say I have a dataframe as shown below:
```
Stock open high low close Avg
0 SBIN 255.85 256.00 255.80 255.90 NaN
1 HDFC 1222.25 1222.45 1220.45 1220.45 NaN
2 SBIN 255.95 255.95 255.85 255.85 NaN
3 HDFC 1222.00 1222.50 1221.70 1221.95 NaN
4 SBIN 255.80 256.00 255.80 255.95 255.9
5 HDFC 1221.90 1222.00 1221.55 1221.70 1221.366667
6 SBIN 255.85 255.95 255.70 255.80 255.8666667
7 HDFC 1222.00 1222.00 1221.45 1221.95 1221.866667
8 SBIN 255.80 255.95 255.70 255.80 255.85
9 HDFC 1222.00 1222.00 1221.25 1221.90 1221.85
10 SBIN 255.75 255.80 255.60 255.65 255.75
11 HDFC 1222.00 1222.00 1221.40 1221.80 1221.88333333
```
Here is the link to the df I took the screenshot of: <https://photos.app.goo.gl/kTZERCerFzggDfHEA>
Now the Avg column is the average of the previous three closes for that particular stock. How can I achieve this in Python pandas? | 2019/10/16 | [
"https://Stackoverflow.com/questions/58414393",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10559306/"
] | You can use `groupby` and `rolling`
```
df['Avg'] = df.groupby('Stock', as_index=False)['close'].rolling(3).mean().reset_index(0,drop=True)
df
Out[1]:
Stock open high low close Avg
0 SBIN 255.85 256.00 255.80 255.90 NaN
1 HDFC 1222.25 1222.45 1220.45 1220.45 NaN
2 SBIN 255.95 255.95 255.85 255.85 NaN
3 HDFC 1222.00 1222.50 1221.70 1221.95 NaN
4 SBIN 255.80 256.00 255.80 255.95 255.900000
5 HDFC 1221.90 1222.00 1221.55 1221.70 1221.366667
6 SBIN 255.85 255.95 255.70 255.80 255.866667
7 HDFC 1222.00 1222.00 1221.45 1221.95 1221.866667
8 SBIN 255.80 255.95 255.70 255.80 255.850000
9 HDFC 1222.00 1222.00 1221.25 1221.90 1221.850000
10 SBIN 255.75 255.80 255.60 255.65 255.750000
11 HDFC 1222.00 1222.00 1221.40 1221.80 1221.883333
``` | As I understood from your df, you are trying to calculate something like a moving-average metric.
To do this you can simply use a for loop:
```
import numpy as np

for i in range(0, df.shape[0] - 2):
    df.loc[df.index[i + 2], 'AVG'] = np.round(((df.iloc[i, 1] + df.iloc[i + 1, 1] + df.iloc[i + 2, 1]) / 3), 1)
```
In the `iloc` clauses you specify the column on which you want to calculate the metric; here I used column index 1 (`open`).
Also (if you want a more concise approach) you can calculate the average using the rolling method:
```
df['AVG'] = df.iloc[:, 1].rolling(window = 3).mean()
```
Instead of 1, use the index of the column on which you want to calculate the metric. The `window` argument specifies the number of previous rows over which to calculate it.
Hope this will help! |
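The accepted `groupby` + `rolling` pattern can be sanity-checked on a tiny made-up frame (a sketch; the stock names and closes below are invented):

```python
import pandas as pd

df = pd.DataFrame({
    'Stock': ['A', 'B', 'A', 'B', 'A', 'B'],
    'close': [1.0, 10.0, 2.0, 20.0, 3.0, 30.0],
})

# Rolling mean of the last 3 closes, computed separately per stock.
# reset_index(0, drop=True) drops the Stock level so the result aligns
# back onto the original row index.
df['Avg'] = (df.groupby('Stock')['close']
               .rolling(3).mean()
               .reset_index(0, drop=True))

print(df)
# Stock A's first complete window is rows 0, 2, 4 -> (1 + 2 + 3) / 3 = 2.0
```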
14,568,370 | I have the following code
```
# logging
from twisted.python import log
import sys
# MIME Multipart handling
import email
import email.mime.application
import uuid
# IMAP Connection
from twisted.mail import imap4
from twisted.internet import protocol
#SMTP Sending
import os.path
from OpenSSL.SSL import SSLv3_METHOD
from twisted.internet import ssl
from twisted.mail.smtp import ESMTPSenderFactory
from twisted.internet.ssl import ClientContextFactory
from twisted.internet.defer import Deferred
from twisted.internet import reactor
#class AccountsManager(object):
def connectToIMAPServer(imap_server, username, password):
    factory = IMAP4ClientFactory(username, password, login_insecure = True)
    host, port = imap_server.split(":")
    # connect to reactor
    if port == '993':
        reactor.connectSSL(host, int(port), factory, ssl.ClientContextFactory())
    else:
        if not port:
            port = 143
        reactor.connectTCP(host, int(port), factory)
    d = factory.deferred
    d.addCallback(lambda r: factory.proto)
    return d

class IMAP4Client(imap4.IMAP4Client):
    """
    A client with callbacks for greeting messages from an IMAP server.
    """
    greetDeferred = None

    def serverGreeting(self, caps):
        self.serverCapabilities = caps
        if self.greetDeferred is not None:
            d, self.greetDeferred = self.greetDeferred, None
            d.callback(self)

class IMAP4ClientFactory(protocol.ClientFactory):
    usedUp = False
    protocol = IMAP4Client

    def __init__(self, username, password, mailbox = "INBOX", login_insecure = False):
        self.ctx = ssl.ClientContextFactory()
        self.username = username
        self.password = password
        self.mailbox = mailbox
        self.login_insecure = login_insecure
        self.deferred = Deferred()

    def buildProtocol(self, addr):
        """
        Initiate the protocol instance. Since we are building a simple IMAP
        client, we don't bother checking what capabilities the server has. We
        just add all the authenticators twisted.mail has. Note: Gmail no
        longer uses any of the methods below, it's been using XOAUTH since
        2010.
        """
        assert not self.usedUp
        self.usedUp = True
        p = self.protocol(self.ctx)
        p.factory = self
        p.greetDeferred = self.deferred
        p.registerAuthenticator(imap4.PLAINAuthenticator(self.username))
        p.registerAuthenticator(imap4.LOGINAuthenticator(self.username))
        p.registerAuthenticator(imap4.CramMD5ClientAuthenticator(self.username))
        self.deferred.addCallback(self.GreetingCallback)
        self.deferred.addErrback(self.GreetingErrback)
        self.proto = p
        return p

    def GreetingCallback(self, result):
        print "Secure Login"
        auth_d = self.proto.authenticate(self.password)
        auth_d.addCallback(self.AuthenticationCallback)
        auth_d.addErrback(self.AuthenticationErrback)
        return auth_d  # attach it to the main deferred

    def GreetingErrback(self, error):
        log.err(error)
        self.CloseConnection()
        return error

    def AuthenticationCallback(self, result):
        print "Selecting Mailbox"
        d = self.proto.examine(self.mailbox)
        return d

    def AuthenticationErrback(self, failure):
        if self.login_insecure:
            failure.trap(imap4.NoSupportedAuthentication)
            return self.InsecureLogin()
        else:
            return error

    def InsecureLogin(self):
        print "Insecure Login"
        d = self.proto.login(self.username, self.password)
        d.addCallback(self.AuthenticationCallback)
        return d

    def CloseConnection(self):
        self.proto.transport.loseConnection()

    def clientConnectionFailed(self, connector, reason):
        d, self.deferred = self.deferred, None
        d.errback(reason)

class MailServer(object):
    "Manages a server"
    size = 0
    used_space = 0

    def __init__(self, smtp_server, imap_server, username, password):
        self.smtp_server, self.smtp_port = smtp_server.split(":")
        self.imap_server, self.imap_port = imap_server.split(":")
        self.username = username
        self.password = password
        self.imap_connection = IMAP4ClientFactory(username, password)

    def upload_data(self, data):
        """
        Uploads data to email server returns deferred that will return with the imap uid
        """
        # Create a text/plain message
        id = str(uuid.uuid4()).upper()
        msg = email.mime.Multipart.MIMEMultipart()
        msg['Subject'] = 'GMA ID: %s' % id
        msg['From'] = self.email_address
        msg['To'] = self.email_address
        # The main body is just another attachment
        body = email.mime.Text.MIMEText("GMA ID: %s" % (self.uuid_id))
        msg.attach(body)
        att = email.mime.application.MIMEApplication(data, _subtype="raw")
        att.add_header('Content-Disposition', 'attachment', filename = os.path.basename(self.filename))
        msg.attach(att)
        # Create a context factory which only allows SSLv3 and does not verify
        # the peer's certificate.
        contextFactory = ClientContextFactory()
        contextFactory.method = SSLv3_METHOD
        d = Deferred()
        mime_obj = StringIO(str(msg))
        senderFactory = ESMTPSenderFactory(
            self.username,
            self.password,
            self.email_address,
            self.email_address,
            mime_obj,
            d,
            contextFactory=contextFactory)
        d.addCallback(lambda r: self.email_sent(id, int(self.parts)))
        d.addErrback(self.email_error)
        reactor.connectTCP(self.smtp_server, self.smtp_port, senderFactory)
        d.addCallback(self.upload_success, *args, **kw)
        d.addErrback(self.upload_error, 1)
        return d

    def upload_success(self, result):
        print "upload was succesful!"

    def upload_error(self, result):
        print "upload error"

    def download_data(self, uid):
        """
        Downloads data from the email server returns a deferred that will return with the data
        """
        print "uid"

if __name__ == "__main__":
    log.startLogging(sys.stdout)
    d = connectToIMAPServer("imap.gmail.com:993", "username", "password")

    def f(s):
        print s

    d.addCallback(lambda r: f("These are fired before the auth and examine callbacks, why?"))
    d.addCallback(lambda r: f("These are fired before the auth and examine callbacks, why?"))
    reactor.run()
```
The class is supposed to handle logging in and selecting a mailbox, then cleanly return an IMAP proto ready to use. However, the two callbacks at the bottom are fired before the other ones. I get why: the callbacks are added before the others because `buildProtocol` hasn't been called yet. So what is the best way to handle this? Just have a dummy callback added in `__init__` that "holds" the first spot? | 2013/01/28 | [
"https://Stackoverflow.com/questions/14568370",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/410368/"
] | ```
from twisted.internet.endpoints import TCP4ClientEndpoint
d = TCP4ClientEndpoint(reactor, host, int(port)).connect(factory)
```
and
```
d.addCallback(lambda r: factory.deferred)
```
instead of
```
d = factory.deferred
```
in `connectToIMAPServer` should do it - your `factory.deferred` will be returned only after the protocol is ready. ([Twisted documentation on writing clients](http://twistedmatrix.com/documents/11.1.0/core/howto/clients.html)) | I eventually restructured the code and managed the deferred's callback and errback internally.
Updated code:
```
# logging
from twisted.python import log
import sys
# MIME Multipart handling
import email
import email.mime.application
import uuid
# IMAP Connection
from twisted.mail import imap4
from twisted.internet import protocol
#SMTP Sending
import os.path
from OpenSSL.SSL import SSLv3_METHOD
from twisted.internet import ssl
from twisted.mail.smtp import ESMTPSenderFactory
from twisted.internet.ssl import ClientContextFactory
from twisted.internet.defer import Deferred
from twisted.internet import reactor
#class AccountsManager(object):
def connectToIMAPServer(imap_server, username, password):
    factory = IMAP4ClientFactory(username, password, login_insecure = True)
    host, port = imap_server.split(":")
    # connect to reactor
    if port == '993':
        reactor.connectSSL(host, int(port), factory, ssl.ClientContextFactory())
    else:
        if not port:
            port = 143
        reactor.connectTCP(host, int(port), factory)
    return factory.deferred

class IMAP4Client(imap4.IMAP4Client):
    """
    A client with callbacks for greeting messages from an IMAP server.
    """
    greetDeferred = None

    def serverGreeting(self, caps):
        self.serverCapabilities = caps
        if self.greetDeferred is not None:
            d, self.greetDeferred = self.greetDeferred, None
            d.callback(self)

class IMAP4ClientFactory(protocol.ClientFactory):
    usedUp = False
    protocol = IMAP4Client

    def __init__(self, username, password, mailbox = "INBOX", login_insecure = False):
        self.ctx = ssl.ClientContextFactory()
        self.username = username
        self.password = password
        self.mailbox = mailbox
        self.login_insecure = login_insecure
        # called when the protocol is all set up or there is an error setting it up
        self.deferred = Deferred()

    def buildProtocol(self, addr):
        """
        Initiate the protocol instance. Since we are building a simple IMAP
        client, we don't bother checking what capabilities the server has. We
        just add all the authenticators twisted.mail has. Note: Gmail no
        longer uses any of the methods below, it's been using XOAUTH since
        2010.
        """
        assert not self.usedUp
        self.usedUp = True
        p = self.protocol(self.ctx)
        p.factory = self
        # deferred for when the IMAP Greeting is done
        p.greetDeferred = Deferred()
        p.registerAuthenticator(imap4.PLAINAuthenticator(self.username))
        p.registerAuthenticator(imap4.LOGINAuthenticator(self.username))
        p.registerAuthenticator(imap4.CramMD5ClientAuthenticator(self.username))
        p.greetDeferred.addCallback(self.GreetingCallback)
        p.greetDeferred.addErrback(self.GreetingErrback)
        self.proto = p
        return p

    def GreetingCallback(self, result):
        log.msg("Succesfully sent IMAP Greeting.")
        auth_d = self.proto.authenticate(self.password)
        auth_d.addCallback(self.AuthenticationCallback)
        auth_d.addErrback(self.AuthenticationErrback)
        return auth_d

    def GreetingErrback(self, error):
        log.msg("Problem sending greeting")
        log.err(error)
        self.CloseConnection()
        self.deferred.errback(error)

    def AuthenticationCallback(self, result):
        log.msg("Authenticated")
        log.msg("Selecting Mailbox")
        d = self.proto.examine(self.mailbox)
        d.addCallback(self.MailboxSelectCallback)
        d.addErrback(self.MailboxSelectErrback)
        return d

    def AuthenticationErrback(self, failure):
        log.msg("Unable to authenticate securly")
        if self.login_insecure:
            log.msg("Trying to login insecurly")
            failure.trap(imap4.NoSupportedAuthentication)
            return self.InsecureLogin()
        else:
            log.err(failure)
            self.deferred.errback(failure)

    def InsecureLogin(self):
        log.msg("Logging in insecurly")
        d = self.proto.login(self.username, self.password)
        d.addCallback(self.AuthenticationCallback)
        return d

    def MailboxSelectCallback(self, result):
        # connected and protocol set up
        log.msg("IMAP4 protocol setup")
        self.deferred.callback(self.proto)

    def MailboxSelectErrback(self, error):
        log.msg("Cannot select mailbox %s" % self.mailbox)
        log.err(error)
        self.deferred.errback(error)

    def CloseConnection(self):
        self.proto.transport.loseConnection()

    def clientConnectionFailed(self, connector, reason):
        log.msg("Connecting was lost")
        log.err(reason)
        d, self.deferred = self.deferred, None
        d.errback(reason)

class MailServer(object):
    "Manages a server"
    size = 0
    used_space = 0

    def __init__(self, smtp_server, imap_server, username, password):
        self.smtp_server, self.smtp_port = smtp_server.split(":")
        self.imap_server, self.imap_port = imap_server.split(":")
        self.username = username
        self.password = password
        self.imap_connection = IMAP4ClientFactory(username, password)

    def upload_data(self, data):
        """
        Uploads data to email server returns deferred that will return with the imap uid
        """
        # Create a text/plain message
        id = str(uuid.uuid4()).upper()
        msg = email.mime.Multipart.MIMEMultipart()
        msg['Subject'] = 'GMA ID: %s' % id
        msg['From'] = self.email_address
        msg['To'] = self.email_address
        # The main body is just another attachment
        body = email.mime.Text.MIMEText("GMA ID: %s" % (self.uuid_id))
        msg.attach(body)
        att = email.mime.application.MIMEApplication(data, _subtype="raw")
        att.add_header('Content-Disposition', 'attachment', filename = os.path.basename(self.filename))
        msg.attach(att)
        # Create a context factory which only allows SSLv3 and does not verify
        # the peer's certificate.
        contextFactory = ClientContextFactory()
        contextFactory.method = SSLv3_METHOD
        d = Deferred()
        mime_obj = StringIO(str(msg))
        senderFactory = ESMTPSenderFactory(
            self.username,
            self.password,
            self.email_address,
            self.email_address,
            mime_obj,
            d,
            contextFactory=contextFactory)
        d.addCallback(lambda r: self.email_sent(id, int(self.parts)))
        d.addErrback(self.email_error)
        reactor.connectTCP(self.smtp_server, self.smtp_port, senderFactory)
        d.addCallback(self.upload_success, *args, **kw)
        d.addErrback(self.upload_error, 1)
        return d

    def upload_success(self, result):
        print "upload was succesful!"

    def upload_error(self, result):
        print "upload error"

    def download_data(self, uid):
        """
        Downloads data from the email server returns a deferred that will return with the data
        """
        print "uid"

if __name__ == "__main__":
    log.startLogging(sys.stdout)
    d = connectToIMAPServer("imap.gmail.com:993", "email", "password")

    def f(s):
        print s

    d.addCallback(lambda r: f("These are fired before the auth and examine callbacks, why?"))
    d.addCallback(lambda r: f("These are fired before the auth and examine callbacks, why?"))
    reactor.run()
``` |
35,266,464 | I was trying to build this example:
<https://www.linuxvoice.com/build-a-web-browser-with-20-lines-of-python/>
I'll just repost it here for completeness:
```
from PyQt5.QtCore import QUrl
from PyQt5.QtWidgets import QApplication
from PyQt5.QtWebKitWidgets import QWebView
import sys
app = QApplication(sys.argv)
view = QWebView()
view.show()
view.setUrl(QUrl(“http://linuxvoice.com”))
app.exec()
```
I also used the indications here to install pyqt5
<https://askubuntu.com/questions/612314/how-to-install-pyqt-for-python3-in-ubunt-14-10>
and also installed Qt5. So I should have everything they say in the linuxvoice tutorial.
When I try to run this with Python 2.7, it says:
```
File "brows.py", line 9 SyntaxError: Non-ASCII character '\xe2' in file brows.py on line 9, but no encoding declared; see http://www.python.org/peps/pep-0263.html for details
```
and with Python 3:
```
File "brows.py", line 9
view.setUrl(QUrl(“http://linuxvoice.com”))
SyntaxError: invalid character in identifier
```
Did anyone manage to make this work? | 2016/02/08 | [
"https://Stackoverflow.com/questions/35266464",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5016028/"
So here's the actual answer. I had the same issue and discovered the cause very quickly.
`view.setUrl(QUrl(“http://linuxvoice.com”))`
Notice the quotes their code uses, compared to normal quotes.
Normal: ""
Theirs: “”
Basically, they're using typographic “curly” quotes (non-ASCII characters) that Python can't parse. A really sneaky way to deter copy-pasters.
Either way this code doesn't work anymore, because in the most recent version of PyQt5, `QtWebKitWidgets` was removed.
Either way this code doesn't work anymore because in the most recent version of PyQt5, `QtWebKitWidgets` was removed. | You've got a stray byte somewhere in your code. It's popped up on StackOverflow previously and there's a good method for finding it: [Python "SyntaxError: Non-ASCII character '\xe2' in file"](https://stackoverflow.com/questions/21639275/python-syntaxerror-non-ascii-character-xe2-in-file). |
43,322,201 | I have a Flask application using Python 3. Sometimes it creates a daemon process to run a script, and I want to kill the daemon on timeout (using `signal.SIGINT`).
However, some processes created by `os.system` (for example, `os.system('git clone xxx')`) are still running after the daemon is killed.
So what should I do? Thanks all! | 2017/04/10 | [
"https://Stackoverflow.com/questions/43322201",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7844505/"
] | In order to be able to kill a process you need its process id (usually referred to as a pid). `os.system` doesn't give you that, simply returning the value of the subprocess's return code.
The newer `subprocess` module gives you much more control, at the expense of somewhat more complexity. In particular it allows you to wait for the process to finish, with a timeout if required, and gives you access to the subprocess's pid. While I am not an expert in its use, this seems to
work. Note that this code needs Python 3.3 or better to use the `timeout` argument to the `Popen.wait` call.
```
import subprocess
process = subprocess.Popen(['git', 'clone', 'https://github.com/username/reponame'])
try:
    print('Running in process', process.pid)
    process.wait(timeout=10)
except subprocess.TimeoutExpired:
    print('Timed out - killing', process.pid)
    process.kill()
print("Done")
``` | The following command on the command line will show you all the running instances of python.
```
$ ps aux | grep -i python
username 6488 0.0 0.0 2434840 712 s003 R+ 1:41PM 0:00.00 python
```
The first number, `6488`, is the PID, process identifier. Look through the output of the command on your machine to find the PID of the process you want to kill.
You can run another command to kill the correct process.
```
$ kill 6488
```
You might need to use `sudo` with this command. Be careful though, you don't want to kill the wrong thing or bad stuff could happen! |
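As a side note, the `Popen`/`wait` pattern in the first answer can be written more compactly with `subprocess.run` (Python 3.5+), which waits with a timeout and kills the child itself before raising. A sketch using `sleep` as a stand-in for a long-running `git clone`:

```python
import subprocess

timed_out = False
try:
    # Stand-in for a long-running command; give up after 1 second.
    subprocess.run(['sleep', '5'], timeout=1)
except subprocess.TimeoutExpired:
    # subprocess.run kills the child before re-raising the exception,
    # so there is no orphaned process to clean up manually.
    timed_out = True
print('timed out:', timed_out)
```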
40,821,604 | I want to write to an element in a nested list named `foo`, but the nesting depth and indexes is only known at runtime, in a (non-nested!) list variable named `indexes`.
Examples:
If `indexes` is `[4]`, I want `foo[4]`.
If `indexes` is `[4,7]`, I want `foo[4][7]`.
If `indexes` is `[4,7,3]`, I want `foo[4][7][3]`.
What I could think of is to put together the command string (`"foo[4][7][3]"` in the last example), then call `eval`.
That would be `eval("foo["+']['.join([str(n) for n in indexes])+']')`.
That works, and is short enough, but I was hoping for a simpler, more pythonic way.
Does anyone know any alternatives?
Is there a way to *not* treat the empty list separately? As in:
If `indexes` is `[]`, I want the whole `foo`.
This needs an `if` in the `eval` solution.
EDIT: I need to **write to**, not read from, the element. I don't think either of the two existing answers, nor the answers to the indicated duplicate, can be used for writing. Apologies for misleading. | 2016/11/26 | [
"https://Stackoverflow.com/questions/40821604",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/174365/"
] | You can use the [`reduce()` function](https://docs.python.org/2/library/functions.html#reduce):
```
from functools import reduce # Python 3 forward compatibility
import operator
def access(lst, indexes):
    return reduce(operator.getitem, indexes, lst)
```
You *could* use `list.__getitem__` instead of [`operator.getitem`](https://docs.python.org/2/library/operator.html#operator.getitem), but then you limit the application to list objects only (it wouldn't work for tuples or dictionaries, the latter with keys rather than integer indices).
This repeatedly applies the indices, in order, to the previous result (starting the original list). This meets all your criteria, including the empty-list case:
```
>>> foo = ['a', 'b', 'c', 'd', ['foo', 'bar', 'baz', 'spam', 'eggs', 'ham', 'monty', ['alpha', 'beta', 'delta', 'gamma']]]
>>> access(foo, [])
['a', 'b', 'c', 'd', ['foo', 'bar', 'baz', 'spam', 'eggs', 'ham', 'monty', ['alpha', 'beta', 'delta', 'gamma']]]
>>> access(foo, [4])
['foo', 'bar', 'baz', 'spam', 'eggs', 'ham', 'monty', ['alpha', 'beta', 'delta', 'gamma']]
>>> access(foo, [4, 7])
['alpha', 'beta', 'delta', 'gamma']
>>> access(foo, [4, 7, 3])
'gamma'
```
If you needed to *assign* to the last element, keep the *last* index aside to then assign to that:
```
if indexes:
    target = access(foo, indexes[:-1])
    target[indexes[-1]] = new_value
else:
    foo = new_value
``` | You can set `item` to `foo`, then proceed through the indexes list to access deeper nested elements:
```
def access(foo, indexes):
item = foo
for index in indexes:
item = item[index]
return item
``` |
22,864,305 | I'm a very new Python user (2.7) who has been working through the Learn Python The Hard Way course and is up to chapter 37. I decided to read through some other learning materials, go over the basics again, and do exercises there. I have been reading through this:
<http://anh.cs.luc.edu/python/hands-on/3.1/handsonHtml/ifstatements.html>
and I just did this:
>
> 3.1.4.1. Graduate Exercise
>
> Write a program, graduate.py, that prompts students for how many credits they have. Print whether or not they have enough credits for graduation. (At Loyola University Chicago 120 credits are needed for graduation.)
>
>
>
and this is my code for that:
```
print "How many credits do you currently have: "
credits = raw_input("> ")
if credits >= 120:
print "You have graduated!"
else:
print "Sorry not enough credits"
```
but no matter what number I input, it just gives "Sorry not enough credits" as the answer every time. Why? I have tried moving some things around and making it > instead of >=, but nothing worked. I'm sure this is something stupidly simple I'm missing, but I can't figure it out.
I've done several other if statement exercises in the LPTHW course similar to this and never had a problem. | 2014/04/04 | [
"https://Stackoverflow.com/questions/22864305",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3286810/"
] | [`raw_input()`](https://docs.python.org/2.7/library/functions.html#raw_input) returns a string:
```
>>> credits = raw_input("> ")
> 150
>>> type(credits)
<type 'str'>
```
You need to cast it to `int`:
```
credits = int(raw_input("> "))
``` | In your code, at the `if` statement you are comparing a `str` type with an `int` type, so it is not working as you expected. Cast `credits` to `int`:
```
print "How many credits do you currently have: "
credits = raw_input("> ")
credits = int(credits)
if credits >= 120:
print "You have graduated!"
else:
print "Sorry not enough credits"
``` |
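Either version of the fix can be checked quickly by factoring the comparison into a function (the function name below is mine; under Python 3 the original `str`-vs-`int` comparison would raise a `TypeError` outright):

```python
def check_graduation(raw_credits):
    # raw_input()/input() returns a str; cast it before comparing
    credits = int(raw_credits)
    if credits >= 120:
        return "You have graduated!"
    return "Sorry not enough credits"

print(check_graduation("150"))  # You have graduated!
print(check_graduation("100"))  # Sorry not enough credits
```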
22,864,305 | I'm a very new Python user (2.7) who has been working through the Learn Python The Hard Way course and is up to chapter 37. I decided to read through some other learning materials, go over the basics again, and do exercises there. I have been reading through this:
<http://anh.cs.luc.edu/python/hands-on/3.1/handsonHtml/ifstatements.html>
and I just did this:
>
> 3.1.4.1. Graduate Exercise
>
> Write a program, graduate.py, that prompts students for how many credits they have. Print whether or not they have enough credits for graduation. (At Loyola University Chicago 120 credits are needed for graduation.)
>
>
>
and this is my code for that:
```
print "How many credits do you currently have: "
credits = raw_input("> ")
if credits >= 120:
print "You have graduated!"
else:
print "Sorry not enough credits"
```
but no matter what number I input, it just gives "Sorry not enough credits" as the answer every time. Why? I have tried moving some things around and making it > instead of >=, but nothing worked. I'm sure this is something stupidly simple I'm missing, but I can't figure it out.
I've done several other if statement exercises in the LPTHW course similar to this and never had a problem. | 2014/04/04 | [
"https://Stackoverflow.com/questions/22864305",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3286810/"
] | [`raw_input()`](https://docs.python.org/2.7/library/functions.html#raw_input) returns a string:
```
>>> credits = raw_input("> ")
> 150
>>> type(credits)
<type 'str'>
```
You need to cast it to `int`:
```
credits = int(raw_input("> "))
``` | I am referring to the same material by Dr. Andrew Harrington and doing the same program. My program may look pretty amateurish, so I would highly appreciate it if someone could kindly refine it:
```
def graduateEligibility(credits):
if credits >= 120:
print("Congratulations on successfully completing the course.....see you on graduation day!")
else:
print("Sorry! Your credits are below 120, please kindly retake the evaluation tests")
def main():
E = float(input("Enter your English marks:"))
M = float(input("Enter your Mathematics marks:"))
P = float(input("Enter your Physics marks:"))
C = float(input("Enter your Chem marks:"))
Cf = float(input("Enter your Comp.Fundamentals marks:"))
Fin_Edu = float(input("Enter your finance marks:"))
Const = float(input("Enter your Constitutional.Stds marks:"))
R = float(input("Enter your Reasoning marks:"))
TotalCredits = (E+M+P+C+Cf+Fin_Edu+Const+R)
graduateEligibility(TotalCredits)  # the function prints its message and returns None
main()
```
For simplicity's sake I have taken 8 subjects, each worth 20 credits. |
22,864,305 | I'm a very new Python user (2.7) who has been working through the Learn Python The Hard Way course and is up to chapter 37. I decided to read through some other learning materials, go over the basics again, and do exercises there. I have been reading through this:
<http://anh.cs.luc.edu/python/hands-on/3.1/handsonHtml/ifstatements.html>
and I just did this:
>
> 3.1.4.1. Graduate Exercise
>
> Write a program, graduate.py, that prompts students for how many credits they have. Print whether or not they have enough credits for graduation. (At Loyola University Chicago 120 credits are needed for graduation.)
>
>
>
and this is my code for that:
```
print "How many credits do you currently have: "
credits = raw_input("> ")
if credits >= 120:
print "You have graduated!"
else:
print "Sorry not enough credits"
```
but no matter what number I input, it just gives "Sorry not enough credits" as the answer every time. Why? I have tried moving some things around and making it > instead of >=, but nothing worked. I'm sure this is something stupidly simple I'm missing, but I can't figure it out.
I've done several other if statement exercises in the LPTHW course similar to this and never had a problem. | 2014/04/04 | [
"https://Stackoverflow.com/questions/22864305",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3286810/"
] | [`raw_input()`](https://docs.python.org/2.7/library/functions.html#raw_input) returns a string:
```
>>> credits = raw_input("> ")
> 150
>>> type(credits)
<type 'str'>
```
You need to cast it to `int`:
```
credits = int(raw_input("> "))
``` | You need to accept integer input for that, and also handle non-integer inputs.
```
print "How many credits do you currently have: "
try:
credits = int(raw_input("> "))
if credits >= 120:
print "You have graduated!"
else:
print "Sorry not enough credits"
except ValueError:
print "Invalid input"
```
Output:
```
> 100
Sorry not enough credits
> 121
You have graduated!
> aaaa
Invalid input
``` |
11,958,728 | python 2.6.8
```
s= '''
foo
bar
baz
'''
>>>re.findall(r'^\S*',s,re.MULTILINE)
['', 'foo', 'bar', 'baz', '']
>>>ptrn = re.compile(r'^\S*',re.MULTILINE)
>>>ptrn.findall(s)
['', 'foo', 'bar', 'baz', '']
>>>ptrn.findall(s,re.MULTILINE)
['baz', '']
```
Why is there a difference when using the MULTILINE flag in `findall`? | 2012/08/14 | [
"https://Stackoverflow.com/questions/11958728",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1293804/"
] | When calling the `findall()` method on a regex object, the second parameter is not the `flags` argument (because that has already been used when compiling the regex) but the `pos` argument, telling the regex engine at which point in the string to start matching.
`re.MULTILINE` is just an integer (that happens to be `8`).
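A small Python sketch of mine that reproduces the question's output: since the flag's value is 8, `findall(s, re.MULTILINE)` is interpreted as `findall(s, pos=8)`, i.e. start matching at index 8 (the newline after `bar`):

```python
import re

s = '\nfoo\nbar\nbaz\n'
ptrn = re.compile(r'^\S*', re.MULTILINE)

print(int(re.MULTILINE))              # 8
print(ptrn.findall(s))                # ['', 'foo', 'bar', 'baz', '']
print(ptrn.findall(s, re.MULTILINE))  # ['baz', ''] -- flag misread as pos=8
```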
See [the docs](http://docs.python.org/library/re.html#re.RegexObject.findall). | Because the `findall` method of the compiled object `ptrn` doesn't take a MULTILINE parameter; it takes a `pos` argument.
See here: <http://docs.python.org/library/re.html#re.RegexObject.findall>
The MULTILINE specifier is only used when you call `re.compile()`. The resulting `ptrn` object already 'knows' that it is `MULTILINE`. |
45,110,802 | I am working on using an ElasticSearch database to store data I am pulling from online. However, when I try to index the data in the database I receive an error.
Here is my code for creating and indexing the data:
```
es = Elasticsearch()
es.index(index='weather', doc_type='data', body=doc)
```
However when I run this program, the second of those lines causes an error, here is the complete traceback:
```
Traceback (most recent call last):
File "weatherScraper.py", line 79, in <module>
main()
File "weatherScraper.py", line 73, in main
es.index(index='weather', doc_type='data', body=doc)
File "/home/alec/.local/lib/python2.7/site-packages/elasticsearch/client/utils.py", line 73, in _wrapped
return func(*args, params=params, **kwargs)
File "/home/alec/.local/lib/python2.7/site-packages/elasticsearch/client/__init__.py", line 298, in index
_make_path(index, doc_type, id), params=params, body=body)
File "/home/alec/.local/lib/python2.7/site-packages/elasticsearch/transport.py", line 312, in perform_request
status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore, timeout=timeout)
File "/home/alec/.local/lib/python2.7/site-packages/elasticsearch/connection/http_urllib3.py", line 128, in perform_request
self._raise_error(response.status, raw_data)
File "/home/alec/.local/lib/python2.7/site-packages/elasticsearch/connection/base.py", line 125, in _raise_error
raise HTTP_EXCEPTIONS.get(status_code, TransportError)(status_code, error_message, additional_info)
elasticsearch.exceptions.AuthenticationException: TransportError(401, u'security_exception', u'missing authentication token for REST request [/weather/data]')
``` | 2017/07/14 | [
"https://Stackoverflow.com/questions/45110802",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6848466/"
] | 'missing authentication token' means you need to authenticate before you can talk to this Elasticsearch instance. To index documents, the user must have write access. You can include a username and password in a URL like this: <http://user:password@hostname:port>
For example, in a shell:
```
export ES_ENDPOINT="http://usernameWithWriteAccess:password@localhost:9200"
```
Then in python:
```
es = Elasticsearch(os.environ['ES_ENDPOINT'])
``` | Also, if you do it from the Postman tool, for example:
Go to the Authorization tab, choose Basic Auth, and enter the username and password you received by running elasticsearch-setup-passwords.bat.
[](https://i.stack.imgur.com/pUSlW.png) |
45,110,802 | I am working on using an ElasticSearch database to store data I am pulling from online. However, when I try to index the data in the database I receive an error.
Here is my code for creating and indexing the data:
```
es = Elasticsearch()
es.index(index='weather', doc_type='data', body=doc)
```
However when I run this program, the second of those lines causes an error, here is the complete traceback:
```
Traceback (most recent call last):
File "weatherScraper.py", line 79, in <module>
main()
File "weatherScraper.py", line 73, in main
es.index(index='weather', doc_type='data', body=doc)
File "/home/alec/.local/lib/python2.7/site-packages/elasticsearch/client/utils.py", line 73, in _wrapped
return func(*args, params=params, **kwargs)
File "/home/alec/.local/lib/python2.7/site-packages/elasticsearch/client/__init__.py", line 298, in index
_make_path(index, doc_type, id), params=params, body=body)
File "/home/alec/.local/lib/python2.7/site-packages/elasticsearch/transport.py", line 312, in perform_request
status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore, timeout=timeout)
File "/home/alec/.local/lib/python2.7/site-packages/elasticsearch/connection/http_urllib3.py", line 128, in perform_request
self._raise_error(response.status, raw_data)
File "/home/alec/.local/lib/python2.7/site-packages/elasticsearch/connection/base.py", line 125, in _raise_error
raise HTTP_EXCEPTIONS.get(status_code, TransportError)(status_code, error_message, additional_info)
elasticsearch.exceptions.AuthenticationException: TransportError(401, u'security_exception', u'missing authentication token for REST request [/weather/data]')
``` | 2017/07/14 | [
"https://Stackoverflow.com/questions/45110802",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6848466/"
] | 'missing authentication token' means you need to authenticate before you can talk to this Elasticsearch instance. To index documents, the user must have write access. You can include a username and password in a URL like this: <http://user:password@hostname:port>
For example, in a shell:
```
export ES_ENDPOINT="http://usernameWithWriteAccess:password@localhost:9200"
```
Then in python:
```
es = Elasticsearch(os.environ['ES_ENDPOINT'])
``` | The HTTP basic auth can be passed to an `http_auth` parameter when creating the Elasticsearch client:
```
client = Elasticsearch(
hosts=['localhost:5000'],
http_auth=('username', 'password'),
)
s = Search(using=client, index='something')
```
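For context, the `('username', 'password')` tuple is transmitted as a standard HTTP Basic `Authorization` header; the sketch below is my own illustration of what goes on the wire, not library code:

```python
import base64

def basic_auth_header(user, password):
    # http_auth=('user', 'pass') amounts to: Authorization: Basic base64("user:pass")
    token = base64.b64encode('{}:{}'.format(user, password).encode()).decode()
    return {'Authorization': 'Basic ' + token}

print(basic_auth_header('elastic', 'changeme'))
```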
This assumes you are using the underlying [`Urllib3HttpConnection`](https://elasticsearch-py.readthedocs.io/en/7.10.0/transports.html#urllib3httpconnection) transport class which has the `http_auth` parameter.
>
>
> ```
> class elasticsearch.connection.Urllib3HttpConnection(host='localhost',
> http_auth=None,
> ...,
> **kwargs)
>
> ```
>
> Default connection class using the urllib3 library and the http
> protocol.
>
>
> **Parameters**:
>
>
> * **http\_auth** – optional http auth information as either ‘:’ separated string or a tuple
>
>
>
For SSL and other params to authentication, see the [SSL and Authentication](https://elasticsearch-py.readthedocs.io/en/7.10.0/#ssl-and-authentication) section of the docs:
```
from ssl import create_default_context
context = create_default_context(cafile="path/to/cert.pem")
es = Elasticsearch(
['localhost', 'otherhost'],
http_auth=('user', 'secret'),
scheme="https",
port=443,
ssl_context=context,
)
``` |
45,110,802 | I am working on using an ElasticSearch database to store data I am pulling from online. However, when I try to index the data in the database I receive an error.
Here is my code for creating and indexing the data:
```
es = Elasticsearch()
es.index(index='weather', doc_type='data', body=doc)
```
However when I run this program, the second of those lines causes an error, here is the complete traceback:
```
Traceback (most recent call last):
File "weatherScraper.py", line 79, in <module>
main()
File "weatherScraper.py", line 73, in main
es.index(index='weather', doc_type='data', body=doc)
File "/home/alec/.local/lib/python2.7/site-packages/elasticsearch/client/utils.py", line 73, in _wrapped
return func(*args, params=params, **kwargs)
File "/home/alec/.local/lib/python2.7/site-packages/elasticsearch/client/__init__.py", line 298, in index
_make_path(index, doc_type, id), params=params, body=body)
File "/home/alec/.local/lib/python2.7/site-packages/elasticsearch/transport.py", line 312, in perform_request
status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore, timeout=timeout)
File "/home/alec/.local/lib/python2.7/site-packages/elasticsearch/connection/http_urllib3.py", line 128, in perform_request
self._raise_error(response.status, raw_data)
File "/home/alec/.local/lib/python2.7/site-packages/elasticsearch/connection/base.py", line 125, in _raise_error
raise HTTP_EXCEPTIONS.get(status_code, TransportError)(status_code, error_message, additional_info)
elasticsearch.exceptions.AuthenticationException: TransportError(401, u'security_exception', u'missing authentication token for REST request [/weather/data]')
``` | 2017/07/14 | [
"https://Stackoverflow.com/questions/45110802",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6848466/"
] | 'missing authentication token' means you need to authenticate before you can talk to this Elasticsearch instance. To index documents, the user must have write access. You can include a username and password in a URL like this: <http://user:password@hostname:port>
For example, in a shell:
```
export ES_ENDPOINT="http://usernameWithWriteAccess:password@localhost:9200"
```
Then in python:
```
es = Elasticsearch(os.environ['ES_ENDPOINT'])
``` | On Ubuntu, edit `/usr/local/lib/python2.7/dist-packages/elasticsearch/connection/http_urllib3.py` and change `http_auth=None,` to `http_auth=('username', 'password'),`.
Reference: <https://rootkey.tistory.com/113> |
45,110,802 | I am working on using an ElasticSearch database to store data I am pulling from online. However, when I try to index the data in the database I receive an error.
Here is my code for creating and indexing the data:
```
es = Elasticsearch()
es.index(index='weather', doc_type='data', body=doc)
```
However when I run this program, the second of those lines causes an error, here is the complete traceback:
```
Traceback (most recent call last):
File "weatherScraper.py", line 79, in <module>
main()
File "weatherScraper.py", line 73, in main
es.index(index='weather', doc_type='data', body=doc)
File "/home/alec/.local/lib/python2.7/site-packages/elasticsearch/client/utils.py", line 73, in _wrapped
return func(*args, params=params, **kwargs)
File "/home/alec/.local/lib/python2.7/site-packages/elasticsearch/client/__init__.py", line 298, in index
_make_path(index, doc_type, id), params=params, body=body)
File "/home/alec/.local/lib/python2.7/site-packages/elasticsearch/transport.py", line 312, in perform_request
status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore, timeout=timeout)
File "/home/alec/.local/lib/python2.7/site-packages/elasticsearch/connection/http_urllib3.py", line 128, in perform_request
self._raise_error(response.status, raw_data)
File "/home/alec/.local/lib/python2.7/site-packages/elasticsearch/connection/base.py", line 125, in _raise_error
raise HTTP_EXCEPTIONS.get(status_code, TransportError)(status_code, error_message, additional_info)
elasticsearch.exceptions.AuthenticationException: TransportError(401, u'security_exception', u'missing authentication token for REST request [/weather/data]')
``` | 2017/07/14 | [
"https://Stackoverflow.com/questions/45110802",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6848466/"
] | The HTTP basic auth can be passed to an `http_auth` parameter when creating the Elasticsearch client:
```
client = Elasticsearch(
hosts=['localhost:5000'],
http_auth=('username', 'password'),
)
s = Search(using=client, index='something')
```
This assumes you are using the underlying [`Urllib3HttpConnection`](https://elasticsearch-py.readthedocs.io/en/7.10.0/transports.html#urllib3httpconnection) transport class which has the `http_auth` parameter.
>
>
> ```
> class elasticsearch.connection.Urllib3HttpConnection(host='localhost',
> http_auth=None,
> ...,
> **kwargs)
>
> ```
>
> Default connection class using the urllib3 library and the http
> protocol.
>
>
> **Parameters**:
>
>
> * **http\_auth** – optional http auth information as either ‘:’ separated string or a tuple
>
>
>
For SSL and other params to authentication, see the [SSL and Authentication](https://elasticsearch-py.readthedocs.io/en/7.10.0/#ssl-and-authentication) section of the docs:
```
from ssl import create_default_context
context = create_default_context(cafile="path/to/cert.pem")
es = Elasticsearch(
['localhost', 'otherhost'],
http_auth=('user', 'secret'),
scheme="https",
port=443,
ssl_context=context,
)
``` | Also, if you do it from the Postman tool, for example:
Go to the Authorization tab, choose Basic Auth, and enter the username and password you received by running elasticsearch-setup-passwords.bat.
[](https://i.stack.imgur.com/pUSlW.png) |
45,110,802 | I am working on using an ElasticSearch database to store data I am pulling from online. However, when I try to index the data in the database I receive an error.
Here is my code for creating and indexing the data:
```
es = Elasticsearch()
es.index(index='weather', doc_type='data', body=doc)
```
However when I run this program, the second of those lines causes an error, here is the complete traceback:
```
Traceback (most recent call last):
File "weatherScraper.py", line 79, in <module>
main()
File "weatherScraper.py", line 73, in main
es.index(index='weather', doc_type='data', body=doc)
File "/home/alec/.local/lib/python2.7/site-packages/elasticsearch/client/utils.py", line 73, in _wrapped
return func(*args, params=params, **kwargs)
File "/home/alec/.local/lib/python2.7/site-packages/elasticsearch/client/__init__.py", line 298, in index
_make_path(index, doc_type, id), params=params, body=body)
File "/home/alec/.local/lib/python2.7/site-packages/elasticsearch/transport.py", line 312, in perform_request
status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore, timeout=timeout)
File "/home/alec/.local/lib/python2.7/site-packages/elasticsearch/connection/http_urllib3.py", line 128, in perform_request
self._raise_error(response.status, raw_data)
File "/home/alec/.local/lib/python2.7/site-packages/elasticsearch/connection/base.py", line 125, in _raise_error
raise HTTP_EXCEPTIONS.get(status_code, TransportError)(status_code, error_message, additional_info)
elasticsearch.exceptions.AuthenticationException: TransportError(401, u'security_exception', u'missing authentication token for REST request [/weather/data]')
``` | 2017/07/14 | [
"https://Stackoverflow.com/questions/45110802",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6848466/"
] | Also, if you do it from the Postman tool, for example:
Go to the Authorization tab, choose Basic Auth, and enter the username and password you received by running elasticsearch-setup-passwords.bat.
[](https://i.stack.imgur.com/pUSlW.png) | On Ubuntu, edit `/usr/local/lib/python2.7/dist-packages/elasticsearch/connection/http_urllib3.py` and change `http_auth=None,` to `http_auth=('username', 'password'),`.
Reference: <https://rootkey.tistory.com/113> |
45,110,802 | I am working on using an ElasticSearch database to store data I am pulling from online. However, when I try to index the data in the database I receive an error.
Here is my code for creating and indexing the data:
```
es = Elasticsearch()
es.index(index='weather', doc_type='data', body=doc)
```
However when I run this program, the second of those lines causes an error, here is the complete traceback:
```
Traceback (most recent call last):
File "weatherScraper.py", line 79, in <module>
main()
File "weatherScraper.py", line 73, in main
es.index(index='weather', doc_type='data', body=doc)
File "/home/alec/.local/lib/python2.7/site-packages/elasticsearch/client/utils.py", line 73, in _wrapped
return func(*args, params=params, **kwargs)
File "/home/alec/.local/lib/python2.7/site-packages/elasticsearch/client/__init__.py", line 298, in index
_make_path(index, doc_type, id), params=params, body=body)
File "/home/alec/.local/lib/python2.7/site-packages/elasticsearch/transport.py", line 312, in perform_request
status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore, timeout=timeout)
File "/home/alec/.local/lib/python2.7/site-packages/elasticsearch/connection/http_urllib3.py", line 128, in perform_request
self._raise_error(response.status, raw_data)
File "/home/alec/.local/lib/python2.7/site-packages/elasticsearch/connection/base.py", line 125, in _raise_error
raise HTTP_EXCEPTIONS.get(status_code, TransportError)(status_code, error_message, additional_info)
elasticsearch.exceptions.AuthenticationException: TransportError(401, u'security_exception', u'missing authentication token for REST request [/weather/data]')
``` | 2017/07/14 | [
"https://Stackoverflow.com/questions/45110802",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6848466/"
] | The HTTP basic auth can be passed to an `http_auth` parameter when creating the Elasticsearch client:
```
client = Elasticsearch(
hosts=['localhost:5000'],
http_auth=('username', 'password'),
)
s = Search(using=client, index='something')
```
This assumes you are using the underlying [`Urllib3HttpConnection`](https://elasticsearch-py.readthedocs.io/en/7.10.0/transports.html#urllib3httpconnection) transport class which has the `http_auth` parameter.
>
>
> ```
> class elasticsearch.connection.Urllib3HttpConnection(host='localhost',
> http_auth=None,
> ...,
> **kwargs)
>
> ```
>
> Default connection class using the urllib3 library and the http
> protocol.
>
>
> **Parameters**:
>
>
> * **http\_auth** – optional http auth information as either ‘:’ separated string or a tuple
>
>
>
For SSL and other params to authentication, see the [SSL and Authentication](https://elasticsearch-py.readthedocs.io/en/7.10.0/#ssl-and-authentication) section of the docs:
```
from ssl import create_default_context
context = create_default_context(cafile="path/to/cert.pem")
es = Elasticsearch(
['localhost', 'otherhost'],
http_auth=('user', 'secret'),
scheme="https",
port=443,
ssl_context=context,
)
``` | On Ubuntu, edit `/usr/local/lib/python2.7/dist-packages/elasticsearch/connection/http_urllib3.py` and change `http_auth=None,` to `http_auth=('username', 'password'),`.
Reference: <https://rootkey.tistory.com/113> |
59,777,244 | I am a very new programmer and wanted to try out the AIY voice kit that uses the Google Assistant API. I have a step-by-step guide that pretty much tells me how to set it up, but now that it's up and running, the guide tells me to run "assistant\_library\_demo.py", which is to make sure that the Google Assistant understands you. I have already done a mic check and a sound check, which both worked perfectly.
However, when I run it I only get
```
pi@raspberrypi:~/AIY-projects-python $ src/examples/voice/assistant_library_demo.py
Segmentation fault
```
I've tried Google, but only found out that a segmentation fault means that the program crashed. I am not sure what to do from here since I am not very good at programming. Please let me know if I need to give any more information.
This is the code that I'm trying to run
```
import logging
import platform
import sys
from google.assistant.library.event import EventType
from aiy.assistant import auth_helpers
from aiy.assistant.library import Assistant
from aiy.board import Board, Led
def process_event(led, event):
logging.info(event)
if event.type == EventType.ON_START_FINISHED:
led.state = Led.BEACON_DARK # Ready.
logging.info('Say "OK, Google" then speak, or press Ctrl+C to quit...')
elif event.type == EventType.ON_CONVERSATION_TURN_STARTED:
led.state = Led.ON # Listening.
elif event.type == EventType.ON_END_OF_UTTERANCE:
led.state = Led.PULSE_QUICK # Thinking.
elif (event.type == EventType.ON_CONVERSATION_TURN_FINISHED
or event.type == EventType.ON_CONVERSATION_TURN_TIMEOUT
or event.type == EventType.ON_NO_RESPONSE):
led.state = Led.BEACON_DARK
elif event.type == EventType.ON_ASSISTANT_ERROR and event.args and event.args['is_fatal']:
sys.exit(1)
def main():
logging.basicConfig(level=logging.INFO)
credentials = auth_helpers.get_assistant_credentials()
with Board() as board, Assistant(credentials) as assistant:
for event in assistant.start():
process_event(board.led, event)
if __name__ == '__main__':
main()
``` | 2020/01/16 | [
"https://Stackoverflow.com/questions/59777244",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12727174/"
] | You have to return the value of `Generate()` to the Button.
```
<asp:Button OnClientClick="return Generate()"
<script>
var test = 2;
function Generate() {
if (test === 1)
return true;
else
return false;
}
</script>
``` | Your problem lies in `OnClientClick="Generate();" OnClick="Button2_Click"`.
You're assigning two inline click events here, so they'll both trigger independently.
You have to handle the `Button2_Click` function from inside `Generate`.
One way you might do this is to call `Button2_Click` in the else condition:
```
if (count > 0) {
alert('You must complete all of the Actions fields first!');
return false;
} else {
//All Action fields complete, continue to code behind...
Button2_Click();
}
``` |
40,245,703 | I'm using python to hit a foreman API to gather some facts about all the hosts that foreman knows about. Unfortunately, there is no *get-all-hosts-facts* (or similar) call in the v1 foreman API, so I'm having to loop through all the hosts and get the information. Doing so has led me to an annoying problem. Each call to a given host returns a JSON object like so:
```
{
"host1.com": {
"apt_update_last_success": "1452187711",
"architecture": "amd64",
"augeasversion": "1.2.0",
"bios_release_date": "06/03/2015",
"bios_vendor": "Dell Inc."
}
}
```
This is totally fine; the issue arises when I append the next host's information. I then get a JSON file that looks something like this:
```
{
"host1.com": {
"apt_update_last_success": "1452187711",
"architecture": "amd64",
"augeasversion": "1.2.0",
"bios_release_date": "06/03/2015",
"bios_vendor": "Dell Inc."
}
}{
"host2.com": {
"apt_update_last_success": "1452703454",
"architecture": "amd64",
"augeasversion": "1.2.0",
"bios_release_date": "06/03/2015",
"bios_vendor": "Dell Inc."
}
}
```
Here's the code that's doing this:
```
for i in hosts_data:
log.info("Gathering host facts for host: {}".format(i['host']['name']))
try:
facts = requests.get(foreman_host+api+"hosts/{}/facts".format(i['host']['id']), auth=(username, password))
if hosts.status_code != 200:
log.error("Unable to connect to Foreman! Got retcode '{}' and error message '{}'"
.format(hosts.status_code, hosts.text))
sys.exit(1)
except requests.exceptions.RequestException as e:
log.error(e)
facts_data = json.loads(facts.text)
log.debug(facts_data)
with open(results_file, 'a') as f:
f.write(json.dumps(facts_data, sort_keys=True, indent=4))
```
Here's what I need the file to look like:
```
{
"host1.com": {
"apt_update_last_success": "1452187711",
"architecture": "amd64",
"augeasversion": "1.2.0",
"bios_release_date": "06/03/2015",
"bios_vendor": "Dell Inc."
},
"host2.com": {
"apt_update_last_success": "1452703454",
"architecture": "amd64",
"augeasversion": "1.2.0",
"bios_release_date": "06/03/2015",
"bios_vendor": "Dell Inc."
}
}
``` | 2016/10/25 | [
"https://Stackoverflow.com/questions/40245703",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2872525/"
] | It would be better to assemble all of your data into one dict and then write it all out one time, instead of each time in the loop.
```
d = {}
for i in hosts_data:
log.info("Gathering host facts for host: {}".format(i['host']['name']))
try:
facts = requests.get(foreman_host+api+"hosts/{}/facts".format(i['host']['id']), auth=(username, password))
if hosts.status_code != 200:
log.error("Unable to connect to Foreman! Got retcode '{}' and error message '{}'"
.format(hosts.status_code, hosts.text))
sys.exit(1)
except requests.exceptions.RequestException as e:
log.error(e)
facts_data = json.loads(facts.text)
log.debug(facts_data)
d.update(facts_data) #add to dict
# write everything at the end
with open(results_file, 'w') as f:  # 'w' so re-runs don't append a second JSON object
f.write(json.dumps(d, sort_keys=True, indent=4))
``` | Instead of writing json inside the loop, insert the data into a `dict` with the correct structure. Then write that dict to json when the loop is finished.
This assumes your dataset fits into memory. |
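A minimal, testable sketch of that dict-accumulation approach (the sample payloads and output file name are mine):

```python
import json

all_facts = {}  # one dict collecting every host's facts
for payload in ('{"host1.com": {"architecture": "amd64"}}',
                '{"host2.com": {"architecture": "amd64"}}'):
    all_facts.update(json.loads(payload))  # merge this host's facts in

with open('results.json', 'w') as f:  # single write -> one valid JSON object
    json.dump(all_facts, f, sort_keys=True, indent=4)
```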
40,245,703 | I'm using python to hit a foreman API to gather some facts about all the hosts that foreman knows about. Unfortunately, there is no *get-all-hosts-facts* (or similar) call in the v1 foreman API, so I'm having to loop through all the hosts and get the information. Doing so has led me to an annoying problem. Each call to a given host returns a JSON object like so:
```
{
"host1.com": {
"apt_update_last_success": "1452187711",
"architecture": "amd64",
"augeasversion": "1.2.0",
"bios_release_date": "06/03/2015",
"bios_vendor": "Dell Inc."
}
}
```
This is totally fine; the issue arises when I append the next host's information. I then get a JSON file that looks something like this:
```
{
"host1.com": {
"apt_update_last_success": "1452187711",
"architecture": "amd64",
"augeasversion": "1.2.0",
"bios_release_date": "06/03/2015",
"bios_vendor": "Dell Inc."
}
}{
"host2.com": {
"apt_update_last_success": "1452703454",
"architecture": "amd64",
"augeasversion": "1.2.0",
"bios_release_date": "06/03/2015",
"bios_vendor": "Dell Inc."
}
}
```
Here's the code that's doing this:
```
for i in hosts_data:
log.info("Gathering host facts for host: {}".format(i['host']['name']))
try:
facts = requests.get(foreman_host+api+"hosts/{}/facts".format(i['host']['id']), auth=(username, password))
if hosts.status_code != 200:
log.error("Unable to connect to Foreman! Got retcode '{}' and error message '{}'"
.format(hosts.status_code, hosts.text))
sys.exit(1)
except requests.exceptions.RequestException as e:
log.error(e)
facts_data = json.loads(facts.text)
log.debug(facts_data)
with open(results_file, 'a') as f:
f.write(json.dumps(facts_data, sort_keys=True, indent=4))
```
Here's what I need the file to look like:
```
{
"host1.com": {
"apt_update_last_success": "1452187711",
"architecture": "amd64",
"augeasversion": "1.2.0",
"bios_release_date": "06/03/2015",
"bios_vendor": "Dell Inc."
},
"host2.com": {
"apt_update_last_success": "1452703454",
"architecture": "amd64",
"augeasversion": "1.2.0",
"bios_release_date": "06/03/2015",
"bios_vendor": "Dell Inc."
}
}
``` | 2016/10/25 | [
"https://Stackoverflow.com/questions/40245703",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2872525/"
] | Instead of writing json inside the loop, insert the data into a `dict` with the correct structure. Then write that dict to json when the loop is finished.
This assumes your dataset fits into memory. | For safety/consistency, you need to load in the old data, mutate it, then write it back out.
Change the current `with` and `write` to:
```
# If file guaranteed to exist, can use r+ and avoid initial seek
with open(results_file, 'a+') as f:
f.seek(0)
combined_facts = json.load(f)
combined_facts.update(facts_data)
f.seek(0)
json.dump(combined_facts, f, sort_keys=True, indent=4)
f.truncate() # In case new JSON encoding smaller, e.g. due to replaced key
```
Note: If possible, you want to use [pault's answer](https://stackoverflow.com/a/40245798/364696) to minimize unnecessary I/O, this is just how you'd do it if the data retrieval should be done piecemeal, with immediate updates for each item as it becomes available.
FYI, the unsafe way is to basically find the trailing curly brace, delete it, then write out a comma followed by the new JSON (removing the leading curly brace from it's JSON representation). It's much less I/O intensive, but it's also less safe, doesn't clean out duplicates, doesn't sort the hosts, doesn't validate the input file at all, etc. So don't do it. |
40,245,703 | I'm using python to hit a foreman API to gather some facts about all the hosts that foreman knows about. Unfortunately, there is no *get-all-hosts-facts* (or something similar) in the v1 foreman API, so I'm having to loop through all the hosts and get the information. Doing so has led me to an annoying problem. Each call to a given host returns a JSON object like so:
```
{
"host1.com": {
"apt_update_last_success": "1452187711",
"architecture": "amd64",
"augeasversion": "1.2.0",
"bios_release_date": "06/03/2015",
"bios_vendor": "Dell Inc."
}
}
```
This is totally fine, the issue arises when I append the next host's information. I then get a json file that looks something like this:
```
{
"host1.com": {
"apt_update_last_success": "1452187711",
"architecture": "amd64",
"augeasversion": "1.2.0",
"bios_release_date": "06/03/2015",
"bios_vendor": "Dell Inc."
}
}{
"host2.com": {
"apt_update_last_success": "1452703454",
"architecture": "amd64",
"augeasversion": "1.2.0",
"bios_release_date": "06/03/2015",
"bios_vendor": "Dell Inc."
}
}
```
Here's the code that's doing this:
```
for i in hosts_data:
log.info("Gathering host facts for host: {}".format(i['host']['name']))
try:
facts = requests.get(foreman_host+api+"hosts/{}/facts".format(i['host']['id']), auth=(username, password))
if hosts.status_code != 200:
log.error("Unable to connect to Foreman! Got retcode '{}' and error message '{}'"
.format(hosts.status_code, hosts.text))
sys.exit(1)
except requests.exceptions.RequestException as e:
log.error(e)
facts_data = json.loads(facts.text)
log.debug(facts_data)
with open(results_file, 'a') as f:
f.write(json.dumps(facts_data, sort_keys=True, indent=4))
```
Here's what I need the file to look like:
```
{
"host1.com": {
"apt_update_last_success": "1452187711",
"architecture": "amd64",
"augeasversion": "1.2.0",
"bios_release_date": "06/03/2015",
"bios_vendor": "Dell Inc."
},
"host2.com": {
"apt_update_last_success": "1452703454",
"architecture": "amd64",
"augeasversion": "1.2.0",
"bios_release_date": "06/03/2015",
"bios_vendor": "Dell Inc."
}
}
``` | 2016/10/25 | [
"https://Stackoverflow.com/questions/40245703",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2872525/"
] | It would be better to assemble all of your data into one dict and then write it all out one time, instead of each time in the loop.
```
d = {}
for i in hosts_data:
log.info("Gathering host facts for host: {}".format(i['host']['name']))
try:
facts = requests.get(foreman_host+api+"hosts/{}/facts".format(i['host']['id']), auth=(username, password))
if facts.status_code != 200:
log.error("Unable to connect to Foreman! Got retcode '{}' and error message '{}'"
.format(facts.status_code, facts.text))
sys.exit(1)
except requests.exceptions.RequestException as e:
log.error(e)
facts_data = json.loads(facts.text)
log.debug(facts_data)
d.update(facts_data) #add to dict
# write everything at the end
with open(results_file, 'w') as f:
f.write(json.dumps(d, sort_keys=True, indent=4))
``` | For safety/consistency, you need to load in the old data, mutate it, then write it back out.
Change the current `with` and `write` to:
```
# If file guaranteed to exist, can use r+ and avoid initial seek
with open(results_file, 'a+') as f:
f.seek(0)
combined_facts = json.load(f)
combined_facts.update(facts_data)
f.seek(0)
json.dump(combined_facts, f, sort_keys=True, indent=4)
f.truncate() # In case new JSON encoding smaller, e.g. due to replaced key
```
Note: If possible, you want to use [pault's answer](https://stackoverflow.com/a/40245798/364696) to minimize unnecessary I/O, this is just how you'd do it if the data retrieval should be done piecemeal, with immediate updates for each item as it becomes available.
FYI, the unsafe way is to basically find the trailing curly brace, delete it, then write out a comma followed by the new JSON (removing the leading curly brace from it's JSON representation). It's much less I/O intensive, but it's also less safe, doesn't clean out duplicates, doesn't sort the hosts, doesn't validate the input file at all, etc. So don't do it. |
66,528,149 | I'm trying to deploy a Django application on Google App Engine. I followed the instructions given [here](https://cloud.google.com/python/django/appengine#macos-64-bit). The only problem is that when I execute the command `gcloud app deploy` I then get the error:
>
> ERROR: (gcloud.app.deploy) NOT\_FOUND: Unable to retrieve P4SA: [service-290226755412@gcp-gae-service.iam.gserviceaccount.com] from GAIA. Could be GAIA propagation delay or request from deleted apps.
>
>
>
I didn't find anything online and can't get through.
Thanks in advance! | 2021/03/08 | [
"https://Stackoverflow.com/questions/66528149",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13922881/"
] | Same problem with Node: **just waited and tried the command again** later and that worked for me (in the same shell, no steps between). | I had the same issue when deploying a Java application to an App Engine.
Enabling the 'Cloud Build API' under the APIs & Services section in the Google console resolved the issue for me. |
66,528,149 | I'm trying to deploy a Django application on Google App Engine. I followed the instructions given [here](https://cloud.google.com/python/django/appengine#macos-64-bit). The only problem is that when I execute the command `gcloud app deploy` I then get the error:
>
> ERROR: (gcloud.app.deploy) NOT\_FOUND: Unable to retrieve P4SA: [service-290226755412@gcp-gae-service.iam.gserviceaccount.com] from GAIA. Could be GAIA propagation delay or request from deleted apps.
>
>
>
I didn't find anything online and can't get through.
Thanks in advance! | 2021/03/08 | [
"https://Stackoverflow.com/questions/66528149",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13922881/"
] | I had the same problem but with flask.
I followed this tutorial from Google [here](https://codelabs.developers.google.com/codelabs/cloud-run-hello-python3?hl=en&continue=https%3A%2F%2Fcodelabs.developers.google.com%2Feuropython%3Fhl%3Den) and when I ran the command `gcloud app deploy` I got the same error.
My solution was to `Enable` the **Cloud Build API** service in my project (in the Navigation Menu -> CI/CD). I did that and everything worked fine and I was able to complete the tutorial. | I had the same issue when deploying a Java application to an App Engine.
Enabling the 'Cloud Build API' under the APIs & Services section in the Google console resolved the issue for me. |
66,528,149 | I'm trying to deploy a Django application on Google App Engine. I followed the instructions given [here](https://cloud.google.com/python/django/appengine#macos-64-bit). The only problem is that when I execute the command `gcloud app deploy` I then get the error:
>
> ERROR: (gcloud.app.deploy) NOT\_FOUND: Unable to retrieve P4SA: [service-290226755412@gcp-gae-service.iam.gserviceaccount.com] from GAIA. Could be GAIA propagation delay or request from deleted apps.
>
>
>
I didn't find anything online and can't get through.
Thanks in advance! | 2021/03/08 | [
"https://Stackoverflow.com/questions/66528149",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13922881/"
] | Same problem with Node: **just waited and tried the command again** later and that worked for me (in the same shell, no steps between). | >
> Got the same error, I just activated billing account on the current project on gcp and retry deploy. All works for me
> ---------------------------------------------------------------------------------------------------------------------
>
>
> |
66,528,149 | I'm trying to deploy a Django application on Google App Engine. I followed the instructions given [here](https://cloud.google.com/python/django/appengine#macos-64-bit). The only problem is that when I execute the command `gcloud app deploy` I then get the error:
>
> ERROR: (gcloud.app.deploy) NOT\_FOUND: Unable to retrieve P4SA: [service-290226755412@gcp-gae-service.iam.gserviceaccount.com] from GAIA. Could be GAIA propagation delay or request from deleted apps.
>
>
>
I didn't find anything online and can't get through.
Thanks in advance! | 2021/03/08 | [
"https://Stackoverflow.com/questions/66528149",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13922881/"
] | I had the same issue when deploying a Java application to an App Engine.
Enabling the 'Cloud Build API' under the APIs & Services section in the Google console resolved the issue for me. | >
> Got the same error, I just activated billing account on the current project on gcp and retry deploy. All works for me
> ---------------------------------------------------------------------------------------------------------------------
>
>
> |
66,528,149 | I'm trying to deploy a Django application on Google App Engine. I followed the instructions given [here](https://cloud.google.com/python/django/appengine#macos-64-bit). The only problem is that when I execute the command `gcloud app deploy` I then get the error:
>
> ERROR: (gcloud.app.deploy) NOT\_FOUND: Unable to retrieve P4SA: [service-290226755412@gcp-gae-service.iam.gserviceaccount.com] from GAIA. Could be GAIA propagation delay or request from deleted apps.
>
>
>
I didn't find anything online and can't get through.
Thanks in advance! | 2021/03/08 | [
"https://Stackoverflow.com/questions/66528149",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13922881/"
] | I had the same error with a different service ID; I deleted the specified service with that ID, then it started working for me. | Trivial, but I had success by making sure all settings are configured outside
```
if __name__ == "__main__":
...
``` |
66,528,149 | I'm trying to deploy a Django application on Google App Engine. I followed the instructions given [here](https://cloud.google.com/python/django/appengine#macos-64-bit). The only problem is that when I execute the command `gcloud app deploy` I then get the error:
>
> ERROR: (gcloud.app.deploy) NOT\_FOUND: Unable to retrieve P4SA: [service-290226755412@gcp-gae-service.iam.gserviceaccount.com] from GAIA. Could be GAIA propagation delay or request from deleted apps.
>
>
>
I didn't find anything online and can't get through.
Thanks in advance! | 2021/03/08 | [
"https://Stackoverflow.com/questions/66528149",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13922881/"
] | I had the same issue when deploying a Java application to an App Engine.
Enabling the 'Cloud Build API' under the APIs & Services section in the Google console resolved the issue for me. | I had the same error with a different service ID; I deleted the specified service with that ID, then it started working for me. |
66,528,149 | I'm trying to deploy a Django application on Google App Engine. I followed the instructions given [here](https://cloud.google.com/python/django/appengine#macos-64-bit). The only problem is that when I execute the command `gcloud app deploy` I then get the error:
>
> ERROR: (gcloud.app.deploy) NOT\_FOUND: Unable to retrieve P4SA: [service-290226755412@gcp-gae-service.iam.gserviceaccount.com] from GAIA. Could be GAIA propagation delay or request from deleted apps.
>
>
>
I didn't find anything online and can't get through.
Thanks in advance! | 2021/03/08 | [
"https://Stackoverflow.com/questions/66528149",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13922881/"
] | I had the same problem but with flask.
I followed this tutorial from Google [here](https://codelabs.developers.google.com/codelabs/cloud-run-hello-python3?hl=en&continue=https%3A%2F%2Fcodelabs.developers.google.com%2Feuropython%3Fhl%3Den) and when I ran the command `gcloud app deploy` I got the same error.
My solution was to `Enable` the **Cloud Build API** service in my project (in the Navigation Menu -> CI/CD). I did that and everything worked fine and I was able to complete the tutorial. | >
> Got the same error, I just activated billing account on the current project on gcp and retry deploy. All works for me
> ---------------------------------------------------------------------------------------------------------------------
>
>
> |
66,528,149 | I'm trying to deploy a Django application on Google App Engine. I followed the instructions given [here](https://cloud.google.com/python/django/appengine#macos-64-bit). The only problem is that when I execute the command `gcloud app deploy` I then get the error:
>
> ERROR: (gcloud.app.deploy) NOT\_FOUND: Unable to retrieve P4SA: [service-290226755412@gcp-gae-service.iam.gserviceaccount.com] from GAIA. Could be GAIA propagation delay or request from deleted apps.
>
>
>
I didn't find anything online and can't get through.
Thanks in advance! | 2021/03/08 | [
"https://Stackoverflow.com/questions/66528149",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13922881/"
] | >
> Got the same error, I just activated billing account on the current project on gcp and retry deploy. All works for me
> ---------------------------------------------------------------------------------------------------------------------
>
>
> | Trivial, but I had success by making sure all settings are configured outside
```
if __name__ == "__main__":
...
``` |
66,528,149 | I'm trying to deploy a Django application on Google App Engine. I followed the instructions given [here](https://cloud.google.com/python/django/appengine#macos-64-bit). The only problem is that when I execute the command `gcloud app deploy` I then get the error:
>
> ERROR: (gcloud.app.deploy) NOT\_FOUND: Unable to retrieve P4SA: [service-290226755412@gcp-gae-service.iam.gserviceaccount.com] from GAIA. Could be GAIA propagation delay or request from deleted apps.
>
>
>
I didn't find anything online and can't get through.
Thanks in advance! | 2021/03/08 | [
"https://Stackoverflow.com/questions/66528149",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13922881/"
] | I had the same problem but with flask.
I followed this tutorial from Google [here](https://codelabs.developers.google.com/codelabs/cloud-run-hello-python3?hl=en&continue=https%3A%2F%2Fcodelabs.developers.google.com%2Feuropython%3Fhl%3Den) and when I ran the command `gcloud app deploy` I got the same error.
My solution was to `Enable` the **Cloud Build API** service in my project (in the Navigation Menu -> CI/CD). I did that and everything worked fine and I was able to complete the tutorial. | Trivial, but I had success by making sure all settings are configured outside
```
if __name__ == "__main__":
...
``` |
66,528,149 | I'm trying to deploy a Django application on Google App Engine. I followed the instructions given [here](https://cloud.google.com/python/django/appengine#macos-64-bit). The only problem is that when I execute the command `gcloud app deploy` I then get the error:
>
> ERROR: (gcloud.app.deploy) NOT\_FOUND: Unable to retrieve P4SA: [service-290226755412@gcp-gae-service.iam.gserviceaccount.com] from GAIA. Could be GAIA propagation delay or request from deleted apps.
>
>
>
I didn't find anything online and can't get through.
Thanks in advance! | 2021/03/08 | [
"https://Stackoverflow.com/questions/66528149",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13922881/"
] | I had the same issue when deploying a Java application to an App Engine.
Enabling the 'Cloud Build API' under the APIs & Services section in the Google console resolved the issue for me. | Trivial, but I had success by making sure all settings are configured outside
```
if __name__ == "__main__":
...
``` |
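A sketch of why that can matter (module and names here are illustrative, not taken from the question): App Engine's runtime serves the app through a WSGI server that *imports* the module, so `__name__` is the module name rather than `"__main__"`, and anything configured only inside that guard never runs in the deployed app.

```python
def create_app(settings):
    # Runs on import, so a deployed server sees this configuration.
    return dict(settings)

app = create_app({"debug": False})  # module-level: always executed

if __name__ == "__main__":
    # Only reached by `python main.py`; an importing WSGI server
    # (e.g. gunicorn on App Engine) never executes this branch.
    app["debug"] = True
```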
4,042,995 | >
> **Possible Duplicate:**
>
> [What is the equivalent of the C# “using” block in IronPython?](https://stackoverflow.com/questions/1757296/what-is-the-equivalent-of-the-c-using-block-in-ironpython)
>
>
>
I'm writing some IronPython using some disposable .NET objects, and wondering whether there is a nice "pythonic" way of doing this. Currently I have a bunch of finally statements (and I suppose there should be checks for None in each of them too - or will the variable not even exist if the constructor fails?)
```
def Save(self):
filename = "record.txt"
data = "{0}:{1}".format(self.Level,self.Name)
isf = IsolatedStorageFile.GetUserStoreForApplication()
try:
isfs = IsolatedStorageFileStream(filename, FileMode.Create, isf)
try:
sw = StreamWriter(isfs)
try:
sw.Write(data)
finally:
sw.Dispose()
finally:
isfs.Dispose()
finally:
isf.Dispose()
``` | 2010/10/28 | [
"https://Stackoverflow.com/questions/4042995",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7532/"
] | Python 2.6 introduced the `with` statement, which provides for automatic clean up of objects when they leave the `with` statement. I don't know if the IronPython libraries support it, but it would be a natural fit.
Dup question with authoritative answer: [What is the equivalent of the C# "using" block in IronPython?](https://stackoverflow.com/questions/1757296/what-is-the-equivalent-of-the-c-using-block-in-ironpython) | If I understand correctly, it looks like the equivalent is the [`with`](http://docs.python.org/reference/compound_stmts.html#with) statement. If your classes define context managers, they will be called automatically after the with block. |
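A pure-Python sketch of the protocol behind that answer (the `Dispose`-style class is a toy stand-in for a .NET object; whether a given IronPython build lets `IDisposable` objects participate in `with` directly is an assumption worth verifying):

```python
class Disposable:
    """Toy stand-in for a .NET object exposing Dispose()."""
    def __init__(self):
        self.disposed = False

    def Dispose(self):
        self.disposed = True

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.Dispose()   # runs even if the with-body raised
        return False     # do not swallow exceptions

with Disposable() as d:
    pass  # use the object; cleanup happens automatically on exit
```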
4,042,995 | >
> **Possible Duplicate:**
>
> [What is the equivalent of the C# “using” block in IronPython?](https://stackoverflow.com/questions/1757296/what-is-the-equivalent-of-the-c-using-block-in-ironpython)
>
>
>
I'm writing some IronPython using some disposable .NET objects, and wondering whether there is a nice "pythonic" way of doing this. Currently I have a bunch of finally statements (and I suppose there should be checks for None in each of them too - or will the variable not even exist if the constructor fails?)
```
def Save(self):
filename = "record.txt"
data = "{0}:{1}".format(self.Level,self.Name)
isf = IsolatedStorageFile.GetUserStoreForApplication()
try:
isfs = IsolatedStorageFileStream(filename, FileMode.Create, isf)
try:
sw = StreamWriter(isfs)
try:
sw.Write(data)
finally:
sw.Dispose()
finally:
isfs.Dispose()
finally:
isf.Dispose()
``` | 2010/10/28 | [
"https://Stackoverflow.com/questions/4042995",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7532/"
] | Python 2.6 introduced the `with` statement, which provides for automatic clean up of objects when they leave the `with` statement. I don't know if the IronPython libraries support it, but it would be a natural fit.
Dup question with authoritative answer: [What is the equivalent of the C# "using" block in IronPython?](https://stackoverflow.com/questions/1757296/what-is-the-equivalent-of-the-c-using-block-in-ironpython) | I think you are looking for the [with statement](http://docs.python.org/reference/compound_stmts.html#with). More info [here](http://www.python.org/dev/peps/pep-0343/). |
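For the question's three nested `try/finally` blocks specifically, later Python versions can flatten them with `contextlib.ExitStack` (an illustrative sketch with toy resources; `ExitStack` is a standard-library feature from Python 3.3, so its availability in a given IronPython is an assumption):

```python
from contextlib import ExitStack, closing

class Resource:
    """Toy resource whose cleanup method is close()."""
    def __init__(self, name, log):
        self.name, self.log = name, log

    def close(self):
        self.log.append(self.name)

log = []
with ExitStack() as stack:
    isf = stack.enter_context(closing(Resource("isf", log)))
    isfs = stack.enter_context(closing(Resource("isfs", log)))
    sw = stack.enter_context(closing(Resource("sw", log)))
    # ... write data with sw here ...

# resources are released LIFO, mirroring the nested finally blocks
```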
4,042,995 | >
> **Possible Duplicate:**
>
> [What is the equivalent of the C# “using” block in IronPython?](https://stackoverflow.com/questions/1757296/what-is-the-equivalent-of-the-c-using-block-in-ironpython)
>
>
>
I'm writing some IronPython using some disposable .NET objects, and wondering whether there is a nice "pythonic" way of doing this. Currently I have a bunch of finally statements (and I suppose there should be checks for None in each of them too - or will the variable not even exist if the constructor fails?)
```
def Save(self):
filename = "record.txt"
data = "{0}:{1}".format(self.Level,self.Name)
isf = IsolatedStorageFile.GetUserStoreForApplication()
try:
isfs = IsolatedStorageFileStream(filename, FileMode.Create, isf)
try:
sw = StreamWriter(isfs)
try:
sw.Write(data)
finally:
sw.Dispose()
finally:
isfs.Dispose()
finally:
isf.Dispose()
``` | 2010/10/28 | [
"https://Stackoverflow.com/questions/4042995",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7532/"
] | Python 2.6 introduced the `with` statement, which provides for automatic clean up of objects when they leave the `with` statement. I don't know if the IronPython libraries support it, but it would be a natural fit.
Dup question with authoritative answer: [What is the equivalent of the C# "using" block in IronPython?](https://stackoverflow.com/questions/1757296/what-is-the-equivalent-of-the-c-using-block-in-ironpython) | Your code with some comments :
```
def Save(self):
filename = "record.txt"
data = "{0}:{1}".format(self.Level,self.Name)
isf = IsolatedStorageFile.GetUserStoreForApplication()
try:
isfs = IsolatedStorageFileStream(filename, FileMode.Create, isf)
try: # This try is useless...
sw = StreamWriter(isfs)
try:
sw.Write(data)
finally:
sw.Dispose()
finally: # because the next finally statement (isfs.Dispose) will always be executed
isfs.Dispose()
finally:
isf.Dispose()
```
For StreamWriter, you can use a with statement (if your object has `__enter__` and `__exit__` methods); then your code will look like:
```
def Save(self):
filename = "record.txt"
data = "{0}:{1}".format(self.Level,self.Name)
isf = IsolatedStorageFile.GetUserStoreForApplication()
try:
isfs = IsolatedStorageFileStream(filename, FileMode.Create, isf)
with StreamWriter(isfs) as sw:
sw.Write(data)
finally:
isf.Dispose()
```
and StreamWriter in its `__exit__` method has
```
sw.Dispose()
``` |
4,042,995 | >
> **Possible Duplicate:**
>
> [What is the equivalent of the C# “using” block in IronPython?](https://stackoverflow.com/questions/1757296/what-is-the-equivalent-of-the-c-using-block-in-ironpython)
>
>
>
I'm writing some IronPython using some disposable .NET objects, and wondering whether there is a nice "pythonic" way of doing this. Currently I have a bunch of finally statements (and I suppose there should be checks for None in each of them too - or will the variable not even exist if the constructor fails?)
```
def Save(self):
filename = "record.txt"
data = "{0}:{1}".format(self.Level,self.Name)
isf = IsolatedStorageFile.GetUserStoreForApplication()
try:
isfs = IsolatedStorageFileStream(filename, FileMode.Create, isf)
try:
sw = StreamWriter(isfs)
try:
sw.Write(data)
finally:
sw.Dispose()
finally:
isfs.Dispose()
finally:
isf.Dispose()
``` | 2010/10/28 | [
"https://Stackoverflow.com/questions/4042995",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7532/"
] | I think you are looking for the [with statement](http://docs.python.org/reference/compound_stmts.html#with). More info [here](http://www.python.org/dev/peps/pep-0343/). | If I understand correctly, it looks like the equivalent is the [`with`](http://docs.python.org/reference/compound_stmts.html#with) statement. If your classes define context managers, they will be called automatically after the with block. |
4,042,995 | >
> **Possible Duplicate:**
>
> [What is the equivalent of the C# “using” block in IronPython?](https://stackoverflow.com/questions/1757296/what-is-the-equivalent-of-the-c-using-block-in-ironpython)
>
>
>
I'm writing some IronPython using some disposable .NET objects, and wondering whether there is a nice "pythonic" way of doing this. Currently I have a bunch of finally statements (and I suppose there should be checks for None in each of them too - or will the variable not even exist if the constructor fails?)
```
def Save(self):
filename = "record.txt"
data = "{0}:{1}".format(self.Level,self.Name)
isf = IsolatedStorageFile.GetUserStoreForApplication()
try:
isfs = IsolatedStorageFileStream(filename, FileMode.Create, isf)
try:
sw = StreamWriter(isfs)
try:
sw.Write(data)
finally:
sw.Dispose()
finally:
isfs.Dispose()
finally:
isf.Dispose()
``` | 2010/10/28 | [
"https://Stackoverflow.com/questions/4042995",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7532/"
] | I think you are looking for the [with statement](http://docs.python.org/reference/compound_stmts.html#with). More info [here](http://www.python.org/dev/peps/pep-0343/). | Your code with some comments :
```
def Save(self):
filename = "record.txt"
data = "{0}:{1}".format(self.Level,self.Name)
isf = IsolatedStorageFile.GetUserStoreForApplication()
try:
isfs = IsolatedStorageFileStream(filename, FileMode.Create, isf)
try: # This try is useless...
sw = StreamWriter(isfs)
try:
sw.Write(data)
finally:
sw.Dispose()
finally: # because the next finally statement (isfs.Dispose) will always be executed
isfs.Dispose()
finally:
isf.Dispose()
```
For StreamWriter, you can use a with statement (if your object has `__enter__` and `__exit__` methods); then your code will look like:
```
def Save(self):
filename = "record.txt"
data = "{0}:{1}".format(self.Level,self.Name)
isf = IsolatedStorageFile.GetUserStoreForApplication()
try:
isfs = IsolatedStorageFileStream(filename, FileMode.Create, isf)
with StreamWriter(isfs) as sw:
sw.Write(data)
finally:
isf.Dispose()
```
and StreamWriter in its `__exit__` method has
```
sw.Dispose()
``` |
54,560,326 | So I'm fairly new to python and coding in general, and I decided to make a text based trivia game as a sort of test. I've coded everything for the first question, code which I will repeat for each question. My problem is specifically on lines 10-11. The intended function is to add one to the current score, then print the scoregained variable, which uses format to tell you the score. But it doesn't work: the variable still prints fine, but the score variable isn't added to, remaining at zero.
```
TRIVIA = input('TRIVIA: press enter to start')
strike = int('3')
strikesleft = ('strikes left: {} ').format(strike)
score = int('0')
scoregained = ('Your score is {}' ).format(score)
Q1 = input('What is the diameter of the earth? ')
if Q1 == ('7917.5'):
print('correct!')
input()
score = score+1
print(scoregained)
input()
``` | 2019/02/06 | [
"https://Stackoverflow.com/questions/54560326",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11024647/"
] | `scoregained` isn't a function, it is a variable you assign but do not update. This would be a great place for a function, which you can reuse whenever you want to print the score. For example:
```
def print_score(score):
print('Your score is {}'.format(score))
```
You can reuse this function anytime you wish to print the score. | I'd probably use something like:
```
def score_stats(score):
print('Your score is {}'.format(score))
input('TRIVIA: press enter to start')
score, strike = 0, 3
strikesleft = 'strikes left: {}'.format(strike)
score_stats(score)
Q1 = input('What is the diameter of the earth?')
if Q1 == '7917.5':
print('correct!')
score += 1
score_stats(score)
else:
print('incorrect!')
score_stats(score)
Q2...
```
---
Output:
```
TRIVIA: press enter to start
Your score is 0
What is the diameter of the earth? 7917.5
correct!
Your score is 1
``` |
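The gotcha behind the original bug can be seen in isolation: `format` builds a fixed string at the moment it is called, so later changes to `score` never reach a string that was formatted earlier.

```python
score = 0
scoregained = 'Your score is {}'.format(score)  # snapshot of score == 0

score = score + 1
print(scoregained)                        # still prints "Your score is 0"
print('Your score is {}'.format(score))   # re-format: "Your score is 1"
```

This is why both answers wrap the formatting in a function that is called after each update.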
63,442,333 | When running `npm start`, my code shows a blank page, without favicon either, and the browse console shows
```
Loading failed for the <script> with source “http://localhost:3000/short_text_understanding/static/js/bundle.js”. bundle.js:23:1
Loading failed for the <script> with source “http://localhost:3000/short_text_understanding/static/js/0.chunk.js”. bundle.js:23:1
Loading failed for the <script> with source “http://localhost:3000/short_text_understanding/static/js/main.chunk.js”. bundle.js:23:1
```
If it helps debugging, my code worked previously, but after `npm audit`, my package.json changed
```
- "react-scripts": "3.4.1"
+ "react-scripts": "^3.4.3"
```
My package.json
```
{
"name": "short_text_understand",
"version": "0.1.0",
"private": true,
"dependencies": {
"@testing-library/jest-dom": "^4.2.4",
"@testing-library/react": "^9.5.0",
"@testing-library/user-event": "^7.2.1",
"react": "^16.13.1",
"react-dom": "^16.13.1",
"react-scripts": "^3.4.3"
},
"scripts": {
"start": "react-scripts start",
"build": "react-scripts build",
"test": "react-scripts test",
"test:debug": "react-scripts --inspect-brk test --runInBand --no-cache",
"eject": "react-scripts eject",
"lint": "eslint .",
"predeploy": "npm run build",
"deploy": "gh-pages -d build"
},
"eslintConfig": {
"extends": "react-app"
},
"proxy": "http://localhost:5000",
"browserslist": {
"production": [
">0.2%",
"not dead",
"not op_mini all"
],
"development": [
"last 1 chrome version",
"last 1 firefox version",
"last 1 safari version"
]
},
"devDependencies": {
"eslint-config-airbnb": "^18.2.0",
"eslint-config-prettier": "^6.11.0",
"eslint-plugin-jsx-a11y": "^6.3.1",
"eslint-plugin-prettier": "^3.1.4",
"gh-pages": "^3.1.0",
"prettier": "^2.0.5"
}
}
```
Structure of source is
```
.
├── Dockerfile
├── LICENSE
├── README.md
├── docker-compose.yml
├── nginx
│ ├── Dockerfile
│ └── nginx.conf
├── package-lock.json
├── package.json
├── public
│ ├── android-chrome-192x192.png
│ ├── android-chrome-512x512.png
│ ├── apple-touch-icon.png
│ ├── favicon-16x16.png
│ ├── favicon-32x32.png
│ ├── favicon.ico
│ ├── index.html
│ ├── manifest.json
│ ├── robots.txt
│ └── site.webmanifest
├── src
│ ├── index.js
│ ├── normalize.css
│ ├── skeleton.css
│ └── style.css
└── src_python
├── Dockerfile
├── __pycache__
```
What I do not understand is why the `bundle.js` disappears, and why my favicon is not loaded, even though I did not move the `public` folder, and things worked before.
I am totally new in this, please tell if you need any other information | 2020/08/16 | [
"https://Stackoverflow.com/questions/63442333",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1780570/"
] | ```
/// <summary>
/// Passengers array
/// </summary>
public Passenger[] Passengers = new Passenger[10];
public class Passenger
{
public int Age { get; set; }
public Passenger(int age)
{
Age = age;
}
}
public void AddPassenger()
{
// Get the number of passengers
int passengerCount = Passengers.Count(p => p != null);
if (passengerCount == Passengers.Length)
Console.WriteLine("Maximum number of passengers");
else
{
Console.WriteLine("How old are the passengers?");
int age = int.Parse(Console.ReadLine());
// Add passenger
Passengers[passengerCount] = new Passenger(age);
}
}
``` | You can try:
```
public void AddPassengers(Passenger[] passengers)
{
int i = Array.IndexOf(passengers, null);
if (i < 0)
{
Console.WriteLine("The array is full.");
return;
}
Console.WriteLine("How old is the passenger?");
int age = Int32.Parse(Console.ReadLine());
passengers[i] = new Passenger(age);
}
```
Hope this helps. |
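The first-empty-slot idea is language-agnostic; here is a quick Python sketch of the same logic (the names are illustrative, not taken from the answer above):

```python
def add_passenger(passengers, age):
    """Store age in the first empty (None) slot; return False when the array is full."""
    try:
        i = passengers.index(None)  # like Array.IndexOf(passengers, null)
    except ValueError:
        return False  # no free slot left
    passengers[i] = age
    return True
```

With a 10-slot list initialised to `[None] * 10`, the eleventh call simply returns `False` instead of overflowing.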
63,442,333 | When running `npm start`, my code shows a blank page, without a favicon either, and the browser console shows
```
Loading failed for the <script> with source “http://localhost:3000/short_text_understanding/static/js/bundle.js”. bundle.js:23:1
Loading failed for the <script> with source “http://localhost:3000/short_text_understanding/static/js/0.chunk.js”. bundle.js:23:1
Loading failed for the <script> with source “http://localhost:3000/short_text_understanding/static/js/main.chunk.js”. bundle.js:23:1
```
If it helps with debugging, my code worked previously, but after `npm audit` my package.json changed
```
- "react-scripts": "3.4.1"
+ "react-scripts": "^3.4.3"
```
My package.json
```
{
"name": "short_text_understand",
"version": "0.1.0",
"private": true,
"dependencies": {
"@testing-library/jest-dom": "^4.2.4",
"@testing-library/react": "^9.5.0",
"@testing-library/user-event": "^7.2.1",
"react": "^16.13.1",
"react-dom": "^16.13.1",
"react-scripts": "^3.4.3"
},
"scripts": {
"start": "react-scripts start",
"build": "react-scripts build",
"test": "react-scripts test",
"test:debug": "react-scripts --inspect-brk test --runInBand --no-cache",
"eject": "react-scripts eject",
"lint": "eslint .",
"predeploy": "npm run build",
"deploy": "gh-pages -d build"
},
"eslintConfig": {
"extends": "react-app"
},
"proxy": "http://localhost:5000",
"browserslist": {
"production": [
">0.2%",
"not dead",
"not op_mini all"
],
"development": [
"last 1 chrome version",
"last 1 firefox version",
"last 1 safari version"
]
},
"devDependencies": {
"eslint-config-airbnb": "^18.2.0",
"eslint-config-prettier": "^6.11.0",
"eslint-plugin-jsx-a11y": "^6.3.1",
"eslint-plugin-prettier": "^3.1.4",
"gh-pages": "^3.1.0",
"prettier": "^2.0.5"
}
}
```
Structure of source is
```
.
├── Dockerfile
├── LICENSE
├── README.md
├── docker-compose.yml
├── nginx
│ ├── Dockerfile
│ └── nginx.conf
├── package-lock.json
├── package.json
├── public
│ ├── android-chrome-192x192.png
│ ├── android-chrome-512x512.png
│ ├── apple-touch-icon.png
│ ├── favicon-16x16.png
│ ├── favicon-32x32.png
│ ├── favicon.ico
│ ├── index.html
│ ├── manifest.json
│ ├── robots.txt
│ └── site.webmanifest
├── src
│ ├── index.js
│ ├── normalize.css
│ ├── skeleton.css
│ └── style.css
└── src_python
├── Dockerfile
├── __pycache__
```
What I do not understand is why the `bundle.js` disappears, and why my favicon is not loaded, even though I did not move the `public` folder, and things worked before.
I am totally new to this, so please tell me if you need any other information.
"https://Stackoverflow.com/questions/63442333",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1780570/"
] | ```
/// <summary>
/// Passengers array
/// </summary>
public Passenger[] Passengers = new Passenger[10];
public class Passenger
{
public int Age { get; set; }
public Passenger(int age)
{
Age = age;
}
}
public void AddPassenger()
{
// Get the number of passengers
int passengerCount = Passengers.Count(p => p != null);
if (passengerCount == Passengers.Length)
Console.WriteLine("Maximum number of passengers");
else
{
Console.WriteLine("How old are the passengers?");
int age = int.Parse(Console.ReadLine());
// Add passenger
Passengers[passengerCount] = new Passenger(age);
}
}
``` | ```
public void Add_Passengers()
{
PassengerClass PassengerClass = new PassengerClass();
Console.WriteLine("How old are the passengers?");
PassengerClass.AddPassanger(Int32.Parse(Console.ReadLine()));
}
public class PassengerClass
{
public List<int> passangerList;
public PassengerClass()
{
if (passangerList == null)
passangerList = new List<int>();
}
public void AddPassanger(int age)
{
passangerList.Add(age);
}
}
``` |
63,442,333 | When running `npm start`, my code shows a blank page, without a favicon either, and the browser console shows
```
Loading failed for the <script> with source “http://localhost:3000/short_text_understanding/static/js/bundle.js”. bundle.js:23:1
Loading failed for the <script> with source “http://localhost:3000/short_text_understanding/static/js/0.chunk.js”. bundle.js:23:1
Loading failed for the <script> with source “http://localhost:3000/short_text_understanding/static/js/main.chunk.js”. bundle.js:23:1
```
If it helps with debugging, my code worked previously, but after `npm audit` my package.json changed
```
- "react-scripts": "3.4.1"
+ "react-scripts": "^3.4.3"
```
My package.json
```
{
"name": "short_text_understand",
"version": "0.1.0",
"private": true,
"dependencies": {
"@testing-library/jest-dom": "^4.2.4",
"@testing-library/react": "^9.5.0",
"@testing-library/user-event": "^7.2.1",
"react": "^16.13.1",
"react-dom": "^16.13.1",
"react-scripts": "^3.4.3"
},
"scripts": {
"start": "react-scripts start",
"build": "react-scripts build",
"test": "react-scripts test",
"test:debug": "react-scripts --inspect-brk test --runInBand --no-cache",
"eject": "react-scripts eject",
"lint": "eslint .",
"predeploy": "npm run build",
"deploy": "gh-pages -d build"
},
"eslintConfig": {
"extends": "react-app"
},
"proxy": "http://localhost:5000",
"browserslist": {
"production": [
">0.2%",
"not dead",
"not op_mini all"
],
"development": [
"last 1 chrome version",
"last 1 firefox version",
"last 1 safari version"
]
},
"devDependencies": {
"eslint-config-airbnb": "^18.2.0",
"eslint-config-prettier": "^6.11.0",
"eslint-plugin-jsx-a11y": "^6.3.1",
"eslint-plugin-prettier": "^3.1.4",
"gh-pages": "^3.1.0",
"prettier": "^2.0.5"
}
}
```
Structure of source is
```
.
├── Dockerfile
├── LICENSE
├── README.md
├── docker-compose.yml
├── nginx
│ ├── Dockerfile
│ └── nginx.conf
├── package-lock.json
├── package.json
├── public
│ ├── android-chrome-192x192.png
│ ├── android-chrome-512x512.png
│ ├── apple-touch-icon.png
│ ├── favicon-16x16.png
│ ├── favicon-32x32.png
│ ├── favicon.ico
│ ├── index.html
│ ├── manifest.json
│ ├── robots.txt
│ └── site.webmanifest
├── src
│ ├── index.js
│ ├── normalize.css
│ ├── skeleton.css
│ └── style.css
└── src_python
├── Dockerfile
├── __pycache__
```
What I do not understand is why the `bundle.js` disappears, and why my favicon is not loaded, even though I did not move the `public` folder, and things worked before.
I am totally new to this, so please tell me if you need any other information.
"https://Stackoverflow.com/questions/63442333",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1780570/"
] | ```
/// <summary>
/// Passengers array
/// </summary>
public Passenger[] Passengers = new Passenger[10];
public class Passenger
{
public int Age { get; set; }
public Passenger(int age)
{
Age = age;
}
}
public void AddPassenger()
{
// Get the number of passengers
int passengerCount = Passengers.Count(p => p != null);
if (passengerCount == Passengers.Length)
Console.WriteLine("Maximum number of passengers");
else
{
Console.WriteLine("How old are the passengers?");
int age = int.Parse(Console.ReadLine());
// Add passenger
Passengers[passengerCount] = new Passenger(age);
}
}
``` | I would go with the class solution:
```cs
public class Passanger {
public int age;
public Passanger(int age)
{
this.age = age;
}
}
public class Passangers
{
private List<Passanger> passangers;
private int limit;
public bool ready { get; private set; }
public Passangers(int capacity)
{
passangers = new List<Passanger>(capacity);
limit = capacity;
ready = false;
}
public void addPassanger(Passanger passanger)
{
passangers.Add(passanger);
if(passangers.Count == limit)
{
ready = true;
}
}
public List<Passanger> getPassangers()
{
passangers.Sort((p1, p2) => p1.age.CompareTo(p2.age));
return passangers;
}
}
``` |
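The same capacity-flag design can be prototyped quickly in Python before committing to the C# version; this is a sketch under assumed names, not the original code:

```python
class Passengers:
    """Fixed-capacity collection that flags when it is full and returns ages sorted."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.ages = []

    @property
    def ready(self):
        # True once the capacity has been reached
        return len(self.ages) >= self.capacity

    def add(self, age):
        if self.ready:
            raise ValueError("maximum number of passengers reached")
        self.ages.append(age)

    def sorted_ages(self):
        # equivalent of Sort((p1, p2) => p1.age.CompareTo(p2.age))
        return sorted(self.ages)
```

The `ready` property plays the same role as the boolean flag set inside `addPassanger` above.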
63,442,333 | When running `npm start`, my code shows a blank page, without a favicon either, and the browser console shows
```
Loading failed for the <script> with source “http://localhost:3000/short_text_understanding/static/js/bundle.js”. bundle.js:23:1
Loading failed for the <script> with source “http://localhost:3000/short_text_understanding/static/js/0.chunk.js”. bundle.js:23:1
Loading failed for the <script> with source “http://localhost:3000/short_text_understanding/static/js/main.chunk.js”. bundle.js:23:1
```
If it helps with debugging, my code worked previously, but after `npm audit` my package.json changed
```
- "react-scripts": "3.4.1"
+ "react-scripts": "^3.4.3"
```
My package.json
```
{
"name": "short_text_understand",
"version": "0.1.0",
"private": true,
"dependencies": {
"@testing-library/jest-dom": "^4.2.4",
"@testing-library/react": "^9.5.0",
"@testing-library/user-event": "^7.2.1",
"react": "^16.13.1",
"react-dom": "^16.13.1",
"react-scripts": "^3.4.3"
},
"scripts": {
"start": "react-scripts start",
"build": "react-scripts build",
"test": "react-scripts test",
"test:debug": "react-scripts --inspect-brk test --runInBand --no-cache",
"eject": "react-scripts eject",
"lint": "eslint .",
"predeploy": "npm run build",
"deploy": "gh-pages -d build"
},
"eslintConfig": {
"extends": "react-app"
},
"proxy": "http://localhost:5000",
"browserslist": {
"production": [
">0.2%",
"not dead",
"not op_mini all"
],
"development": [
"last 1 chrome version",
"last 1 firefox version",
"last 1 safari version"
]
},
"devDependencies": {
"eslint-config-airbnb": "^18.2.0",
"eslint-config-prettier": "^6.11.0",
"eslint-plugin-jsx-a11y": "^6.3.1",
"eslint-plugin-prettier": "^3.1.4",
"gh-pages": "^3.1.0",
"prettier": "^2.0.5"
}
}
```
Structure of source is
```
.
├── Dockerfile
├── LICENSE
├── README.md
├── docker-compose.yml
├── nginx
│ ├── Dockerfile
│ └── nginx.conf
├── package-lock.json
├── package.json
├── public
│ ├── android-chrome-192x192.png
│ ├── android-chrome-512x512.png
│ ├── apple-touch-icon.png
│ ├── favicon-16x16.png
│ ├── favicon-32x32.png
│ ├── favicon.ico
│ ├── index.html
│ ├── manifest.json
│ ├── robots.txt
│ └── site.webmanifest
├── src
│ ├── index.js
│ ├── normalize.css
│ ├── skeleton.css
│ └── style.css
└── src_python
├── Dockerfile
├── __pycache__
```
What I do not understand is why the `bundle.js` disappears, and why my favicon is not loaded, even though I did not move the `public` folder, and things worked before.
I am totally new to this, so please tell me if you need any other information.
"https://Stackoverflow.com/questions/63442333",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1780570/"
] | ```
/// <summary>
/// Passengers array
/// </summary>
public Passenger[] Passengers = new Passenger[10];
public class Passenger
{
public int Age { get; set; }
public Passenger(int age)
{
Age = age;
}
}
public void AddPassenger()
{
// Get the number of passengers
int passengerCount = Passengers.Count(p => p != null);
if (passengerCount == Passengers.Length)
Console.WriteLine("Maximum number of passengers");
else
{
Console.WriteLine("How old are the passengers?");
int age = int.Parse(Console.ReadLine());
// Add passenger
Passengers[passengerCount] = new Passenger(age);
}
}
``` | Rather than iterating the array each time, you can keep an additional field tracking the index and increment it within the add operation.
```
private Passenger[] passengers = new Passenger[10];
private int _count = 0;
public void Add_Passengers()
{
// validate overflow. >= not strictly necessary but a better safeguard than ==
if (_count >= passengers.Length)
throw new InvalidOperationException("Array is filled");
Console.WriteLine("How old are the passengers?");
// validate input
if (!int.TryParse(Console.ReadLine(), out int age))
throw new InvalidOperationException("Not an integer");
// Postfix increment returns original value
passengers[_count++] = new Passenger(age);
}
```
Obviously you should handle the error cases as you see fit. For demonstration I'm simply throwing an `InvalidOperationException`.
Edit: I just noticed you asked explicitly to display an Error message. That could look like this:
```
private Passenger[] passengers = new Passenger[10];
private int _count = 0;
public void Add_Passengers()
{
// validate overflow. >= not strictly necessary but a better safeguard than ==
if (_count >= passengers.Length)
{
Console.WriteLine("Error: No more passengers are allowed. Size limit reached.");
return;
}
Console.WriteLine("How old are the passengers?");
// validate input
if (!int.TryParse(Console.ReadLine(), out int age))
{
Console.WriteLine("Error: Value is not an integer");
return;
}
// Postfix increment returns original value
passengers[_count++] = new Passenger(age);
}
```
A `List<Passenger>` is probably a better option. Then you don't need to track the index or worry about the size overflowing. Note that neither solution is thread-safe, but that's a different question. |
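The counter-based bookkeeping translates almost one-to-one into other languages, which makes it easy to unit-test; here is a rough Python sketch of the same idea (the names are hypothetical):

```python
class PassengerArray:
    """Fixed-size slot array plus an index counter, mirroring the _count field above."""
    def __init__(self, size=10):
        self.slots = [None] * size
        self._count = 0

    def add(self, age):
        if self._count >= len(self.slots):  # same overflow guard as _count >= Length
            return "Error: size limit reached"
        self.slots[self._count] = age
        self._count += 1  # the postfix-increment step
        return "ok"
```

As in the C# version, the counter means no scan of the array is needed on each add.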
73,225,062 | I am trying to use `multiprocessing.Queue` to manage some tasks that are sent by the main process and picked up by "worker" processes (`multiprocessing.Process`). The workers then run the task and put the results into a result queue.
Here is my main script:
```py
import time
from multiprocessing import Process, Queue, freeze_support
import auxiliaries as aux
import functions
if __name__ == '__main__':
freeze_support()
start = time.perf_counter()
# number of processess
nprocs = 3
# define the tasks
tasks = [(functions.get_stats_from_uniform_dist, (2**23, i)) for i in range(600)]
# start the queues
task_queue = Queue()
result_queue = Queue()
# populate task queue
for task in tasks:
task_queue.put(task)
# after all tasks are in the queue, send a message to stop picking...
for _ in range(nprocs):
task_queue.put('STOP')
# start workers
procs = []
for _ in range(nprocs):
p = Process(target=aux.worker, args=(task_queue, result_queue))
p.start()
procs.append(p)
for p in procs:
p.join()
# print what's in the result queue
while not result_queue.empty():
print(result_queue.get())
```
The imported modules are
**auxiliaries.py**
```py
from multiprocessing import current_process
def calculate(func, args):
"""
Calculates a certain function for a list of arguments. Returns a string with the result.
Arguments:
- func (string): function name
- args (list): list of arguments
"""
result = func(*args)
string = current_process().name
string = string + " says " + func.__name__ + str(args)
string = string + " = " + str(result)
return string
def worker(inputQueue, outputQueue):
"""
Picks up work from the inputQueue and outputs result to outputQueue.
Inputs:
- inputQueue (multiprocessing.Queue)
- outputQueue (multiprocessing.Queue)
"""
for func, args in iter(inputQueue.get, 'STOP'):
result = calculate(func, args)
outputQueue.put(result)
```
and
**functions.py**
```py
import numpy as np
def get_stats_from_uniform_dist(nDraws, seed):
"""
Calculates average and standard deviation of nDraws from NumPy's random.rand().
Arguments:
- nDraws (int): number of elements to draw
- seed (int): random number generator's seed
Returns:
- results (list): [average, std]
"""
np.random.seed(seed)
x = np.random.rand(nDraws)
return [x.mean(), x.std()]
```
This is entirely based on <https://docs.python.org/3/library/multiprocessing.html#multiprocessing-examples>
Everything runs okay with up to ~500 tasks. After that, the code hangs. It looks like one of the processes never finishes, so the code gets stuck when I join them.
It does not look like the queues are getting full. I suspect that one of the processes is not finding the "STOP" entry in the `task_queue`, so it keeps trying to `.get()` forever, but I can't understand how and why that would happen. Any ideas on what could be causing the lock? Thanks! | 2022/08/03 | [
"https://Stackoverflow.com/questions/73225062",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19102984/"
] | ```
import pandas as pd
import numpy as np
rng = np.random.default_rng(92)
df = pd.DataFrame({'a':rng.integers(0,5, 10),
'b':rng.integers(0,5, 10),
'c':rng.integers(0,5, 10)})
df
###
a b c
0 2 3 1
1 3 4 0
2 4 1 1
3 0 0 1
4 2 3 3
5 1 0 2
6 2 2 2
7 1 3 2
8 3 0 3
9 0 0 2
```
```
df['rollMeanColumn_a'] = df[df['a'] != 0]['a'].rolling(window=3).mean()
df['rollMeanColumn_b'] = df['b'].replace(0,np.nan).dropna().rolling(window=3).mean()
df['rollMeanColumn_c'] = df.query('c != 0')['c'].rolling(3).mean()
df
###
a b c rollMeanColumn_a rollMeanColumn_b rollMeanColumn_c
0 2 3 1 NaN NaN NaN
1 3 4 0 NaN NaN NaN
2 4 1 1 3.000000 2.666667 NaN
3 0 0 1 NaN NaN 1.000000
4 2 3 3 3.000000 2.666667 1.666667
5 1 0 2 2.333333 NaN 2.000000
6 2 2 2 1.666667 2.000000 2.333333
7 1 3 2 1.333333 2.666667 2.000000
8 3 0 3 2.000000 NaN 2.333333
9 0 0 2 NaN NaN 2.333333
``` | Here is one way to do it. If you had posted data to reproduce the problem, I would have posted the result set as well.
```
window=5
df[df['Column']!=0]['Column'].rolling(window).mean()
``` |
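The filter-then-roll behaviour is easy to verify without pandas at all; the sketch below reimplements the idea in plain Python purely as a sanity check (this is not how pandas computes it internally):

```python
def rolling_mean_nonzero(values, window):
    """Rolling mean over the non-zero entries only; None until a full window exists."""
    nonzero = [v for v in values if v != 0]  # drop zeros first, like df[df['Column'] != 0]
    out = []
    for i in range(len(nonzero)):
        if i + 1 < window:
            out.append(None)  # pandas would emit NaN here
        else:
            out.append(sum(nonzero[i + 1 - window : i + 1]) / window)
    return out
```

Comparing its output against the pandas result on a small series is a quick way to confirm the zeros really are excluded from the window.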
73,225,062 | I am trying to use `multiprocessing.Queue` to manage some tasks that are sent by the main process and picked up by "worker" processes (`multiprocessing.Process`). The workers then run the task and put the results into a result queue.
Here is my main script:
```py
import time
from multiprocessing import Process, Queue, freeze_support
import auxiliaries as aux
import functions
if __name__ == '__main__':
freeze_support()
start = time.perf_counter()
# number of processess
nprocs = 3
# define the tasks
tasks = [(functions.get_stats_from_uniform_dist, (2**23, i)) for i in range(600)]
# start the queues
task_queue = Queue()
result_queue = Queue()
# populate task queue
for task in tasks:
task_queue.put(task)
# after all tasks are in the queue, send a message to stop picking...
for _ in range(nprocs):
task_queue.put('STOP')
# start workers
procs = []
for _ in range(nprocs):
p = Process(target=aux.worker, args=(task_queue, result_queue))
p.start()
procs.append(p)
for p in procs:
p.join()
# print what's in the result queue
while not result_queue.empty():
print(result_queue.get())
```
The imported modules are
**auxiliaries.py**
```py
from multiprocessing import current_process
def calculate(func, args):
"""
Calculates a certain function for a list of arguments. Returns a string with the result.
Arguments:
- func (string): function name
- args (list): list of arguments
"""
result = func(*args)
string = current_process().name
string = string + " says " + func.__name__ + str(args)
string = string + " = " + str(result)
return string
def worker(inputQueue, outputQueue):
"""
Picks up work from the inputQueue and outputs result to outputQueue.
Inputs:
- inputQueue (multiprocessing.Queue)
- outputQueue (multiprocessing.Queue)
"""
for func, args in iter(inputQueue.get, 'STOP'):
result = calculate(func, args)
outputQueue.put(result)
```
and
**functions.py**
```py
import numpy as np
def get_stats_from_uniform_dist(nDraws, seed):
"""
Calculates average and standard deviation of nDraws from NumPy's random.rand().
Arguments:
- nDraws (int): number of elements to draw
- seed (int): random number generator's seed
Returns:
- results (list): [average, std]
"""
np.random.seed(seed)
x = np.random.rand(nDraws)
return [x.mean(), x.std()]
```
This is entirely based on <https://docs.python.org/3/library/multiprocessing.html#multiprocessing-examples>
Everything runs okay with up to ~500 tasks. After that, the code hangs. It looks like one of the processes never finishes, so the code gets stuck when I join them.
It does not look like the queues are getting full. I suspect that one of the processes is not finding the "STOP" entry in the `task_queue`, so it keeps trying to `.get()` forever, but I can't understand how and why that would happen. Any ideas on what could be causing the lock? Thanks! | 2022/08/03 | [
"https://Stackoverflow.com/questions/73225062",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19102984/"
] | ```
import pandas as pd
import numpy as np
rng = np.random.default_rng(92)
df = pd.DataFrame({'a':rng.integers(0,5, 10),
'b':rng.integers(0,5, 10),
'c':rng.integers(0,5, 10)})
df
###
a b c
0 2 3 1
1 3 4 0
2 4 1 1
3 0 0 1
4 2 3 3
5 1 0 2
6 2 2 2
7 1 3 2
8 3 0 3
9 0 0 2
```
```
df['rollMeanColumn_a'] = df[df['a'] != 0]['a'].rolling(window=3).mean()
df['rollMeanColumn_b'] = df['b'].replace(0,np.nan).dropna().rolling(window=3).mean()
df['rollMeanColumn_c'] = df.query('c != 0')['c'].rolling(3).mean()
df
###
a b c rollMeanColumn_a rollMeanColumn_b rollMeanColumn_c
0 2 3 1 NaN NaN NaN
1 3 4 0 NaN NaN NaN
2 4 1 1 3.000000 2.666667 NaN
3 0 0 1 NaN NaN 1.000000
4 2 3 3 3.000000 2.666667 1.666667
5 1 0 2 2.333333 NaN 2.000000
6 2 2 2 1.666667 2.000000 2.333333
7 1 3 2 1.333333 2.666667 2.000000
8 3 0 3 2.000000 NaN 2.333333
9 0 0 2 NaN NaN 2.333333
``` | As commented by @JK Chai
```
window=5
df['rollMeanColumn'] = df.loc[:, 'Column'].replace(0,np.nan).rolling(window).mean()
``` |
60,155,158 | I'm using selenium in python and trying to click an element that is not a button class. I'm using Google Chrome as my browser/web driver
Here is my code:
```
from selenium import webdriver
from bs4 import BeautifulSoup
driver = webdriver.Chrome(executable_path="/Users/ep9k/Desktop/SeleniumTest/drivers/chromedriver")
driver.get('http://tax.watgov.org/WataugaNC/search/commonsearch.aspx?mode=address')
driver.find_element_by_name('btAgree').click() #clicks 'Agree' button to agree to site's terms
driver.find_element_by_name('inpNumber').send_keys('190')
driver.find_element_by_name('inpStreet').send_keys('ELI HARTLEY')
driver.find_element_by_name('btSearch').click()
```
This takes me to this page:
[](https://i.stack.imgur.com/9r5Gs.jpg)
I can parse the results HTML (with Beautiful Soup for example), but I want to Click on them. If I inspect the first row of elements, I see this is kept in a div element, with a style of "margin-left:3px;".
But this is not a button element, so the normal click() function does not work. Is there a way to click on this?
For example, If I click on the first row of results, I am taken to this page with more information (which is what I really want):
[](https://i.stack.imgur.com/1SiKK.jpg) | 2020/02/10 | [
"https://Stackoverflow.com/questions/60155158",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9305645/"
] | The element doesn't need to be a button to be clickable.
After running your code, I added:
```py
results = driver.find_elements_by_class_name('SearchResults')
first_result = results[0]
first_result.click()
```
And it worked perfectly fine for me.
Most probably you tried to click on a different element, and that's why it didn't work.
EDIT:
Just to be more precise: most probably you tried to click on a div element inside the `<tr>` tag,
while the `<tr>` tag contains `javascript:selectSearchRow('../Datalets/Datalet.aspx?sIndex=1&idx=1')`, so your script should click this tag, not the `<div>`. | Clicking the first row with xpath - see below.
Assuming you want to parse each of the results (parcels) after that, make use of the navigation buttons; this is a structure you could use:
```
import time

table = driver.find_elements_by_xpath("//table[@id='searchResults']")
table[0].click()
# Extract the total number of parcels from string e.g. "1 of 24"
string=driver.find_element_by_xpath("//input[@name='DTLNavigator$txtFromTo']").get_attribute('value')
# split string in separate words; last word i.e. [-1] is the total number of parcels e.g. "24"
total_parcels=string.split(' ')[-1]
for record in range(int(total_parcels)):
# >>> parse record here <<<
driver.find_element_by_xpath("//input[@name='DTLNavigator$imageNext']").click()
time.sleep(0.5) # be considerate to your source and don't load their server with numerous quick requests
``` |
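The only part of that loop that does not need a browser is the label parsing, so it is worth pulling into a tiny helper that can be tested on its own (the function name is made up for illustration):

```python
def total_from_label(label):
    """Extract the trailing total from a navigator label such as '1 of 24'."""
    return int(label.split(' ')[-1])  # last word is the total number of parcels
```

You would then call `total_from_label(string)` on the value read from the `DTLNavigator$txtFromTo` input.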
60,155,158 | I'm using selenium in python and trying to click an element that is not a button class. I'm using Google Chrome as my browser/web driver
Here is my code:
```
from selenium import webdriver
from bs4 import BeautifulSoup
driver = webdriver.Chrome(executable_path="/Users/ep9k/Desktop/SeleniumTest/drivers/chromedriver")
driver.get('http://tax.watgov.org/WataugaNC/search/commonsearch.aspx?mode=address')
driver.find_element_by_name('btAgree').click() #clicks 'Agree' button to agree to site's terms
driver.find_element_by_name('inpNumber').send_keys('190')
driver.find_element_by_name('inpStreet').send_keys('ELI HARTLEY')
driver.find_element_by_name('btSearch').click()
```
This takes me to this page:
[](https://i.stack.imgur.com/9r5Gs.jpg)
I can parse the results HTML (with Beautiful Soup for example), but I want to Click on them. If I inspect the first row of elements, I see this is kept in a div element, with a style of "margin-left:3px;".
But this is not a button element, so the normal click() function does not work. Is there a way to click on this?
For example, If I click on the first row of results, I am taken to this page with more information (which is what I really want):
[](https://i.stack.imgur.com/1SiKK.jpg) | 2020/02/10 | [
"https://Stackoverflow.com/questions/60155158",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9305645/"
] | The element doesn't need to be a button to be clickable.
After running your code, I added:
```py
results = driver.find_elements_by_class_name('SearchResults')
first_result = results[0]
first_result.click()
```
And it worked perfectly fine for me.
Most probably you tried to click on a different element, and that's why it didn't work.
EDIT:
Just to be more precise: most probably you tried to click on a div element inside the `<tr>` tag,
while the `<tr>` tag contains `javascript:selectSearchRow('../Datalets/Datalet.aspx?sIndex=1&idx=1')`, so your script should click this tag, not the `<div>`. | Induce `WebDriverWait` and use the following `css` selector to click the table item.
```
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from bs4 import BeautifulSoup
driver = webdriver.Chrome(executable_path="/Users/ep9k/Desktop/SeleniumTest/drivers/chromedriver")
driver.get('http://tax.watgov.org/WataugaNC/search/commonsearch.aspx?mode=address')
WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.NAME,"btAgree"))).click()
WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.NAME,"inpNumber"))).send_keys('190')
WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.NAME,"inpStreet"))).send_keys('ELI HARTLEY')
WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.NAME,"btSearch"))).click()
WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.CSS_SELECTOR,"tr.SearchResults"))).click()
```
**Browser snapshot:**
[](https://i.stack.imgur.com/6tArN.jpg) |
34,645,978 | I am new to python and would like to have a script that looks at a feature class and compares the values in two text fields and then populates a third field with a `Y` or `N` depending on if the values are the same or not. I think I need to use an UpdateCursor with an if statement. I have tried the following but I get a syntax error when I try to run it. I am using ArcGIS 10.1 and know that the daCursor is better but I am just trying to wrap my head around cursors and thought I would try and keep it simple for now.
```
#import system modules
import arcpy
from arcpy import env
import os
import sys
#set environment settings
working_fc = sys.argv[1]
working_gdb = os.path.split(working_fc)[0]
#use an update cursor to populate the field BEC_UPDATED based on the result of a query
#query = ("SELECT * FROM working_fc" "WHERE [BEC_LABEL] = [BEC_V9]")
#if the query is true, then BEC_UPDATED should be popluated with "N"
#if the query is false, then BEC_UPDATED should be populated with "Y"
rows = arcpy.UpdateCursor (working_fc)
for row in rows:
if row.getValue("BEC_LABEL") == row.getValue("BEC_V9")
row.BEC_UPDATED = "N"
else
row.BEC_UPDATED = "Y"
rows.updateRow(row)
print "BEC_UPDATED field populated"
``` | 2016/01/07 | [
"https://Stackoverflow.com/questions/34645978",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5755063/"
] | Try splitting up the for loop that goes through each item and the actual get\_multi call itself. So something like:
```
all_values = ndb.get_multi(all_keys)
for counter in all_values:
# Insert amazeballs codes here
```
I have a feeling it's one of these:
1. The generator pattern (yield from for loop) is causing something funky with get\_multi execution paths
2. Perhaps the number of items you are expecting doesn't match actual result counts, which could reveal a problem with GeneralCounterShardConfig.all\_keys(name)
3. The number of shards is set too high. I've realized that anything over 10 shards causes performance issues. | When I've dug into similar issues, one thing I've learned is that `get_multi` can cause multiple RPCs to be sent from your application. It looks like the default in the SDK is set to 1000 keys per get, but the batch size I've observed in production apps is much smaller: something more like 10 (going from memory).
I suspect the reason it does this is that at some batch size, it actually is better to use multiple RPCs: there is more RPC overhead for your app, but there is more Datastore parallelism. In other words: this is still probably the best way to read a lot of datastore objects.
However, if you don't need to read the absolute most current value, you can try setting the `db.EVENTUAL_CONSISTENCY` option, but that seems to only be available in the older `db` library and not in `ndb`. (Although it also appears to be available via the [Cloud Datastore API](https://cloud.google.com/datastore/docs/reference/rpc/google.datastore.v1#google.datastore.v1.LookupRequest)).
**Details**
If you look at the Python code in the App Engine SDK, specifically the file `google/appengine/datastore/datastore_rpc.py`, you will see the following lines:
```
max_count = (Configuration.max_get_keys(config, self.__config) or
self.MAX_GET_KEYS)
...
if is_read_current and txn is None:
max_egs_per_rpc = self.__get_max_entity_groups_per_rpc(config)
else:
max_egs_per_rpc = None
...
pbsgen = self._generate_pb_lists(indexed_keys_by_entity_group,
base_req.ByteSize(), max_count,
max_egs_per_rpc, config)
rpcs = []
for pbs, indexes in pbsgen:
rpcs.append(make_get_call(base_req, pbs,
self.__create_result_index_pairs(indexes)))
```
My understanding of this:
* Set `max_count` from the configuration object, or `1000` as a default
* If the request must read the current value, set `max_gcs_per_rpc` from the configuration, or `10` as a default
* Split the input keys into individual RPCs, using both `max_count` and `max_gcs_per_rpc` as limits.
So, this is being done by the Python Datastore library. |
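The splitting behaviour described above can be approximated in a few lines of plain Python; this is a sketch of the batching idea only, not the SDK's actual implementation (which also honours per-entity-group limits and request byte sizes):

```python
def batch_keys(keys, max_count):
    """Split a flat key list into RPC-sized batches of at most max_count keys."""
    return [keys[i:i + max_count] for i in range(0, len(keys), max_count)]
```

With `max_count=10`, a 25-key lookup would fan out into three RPCs of 10, 10, and 5 keys.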
61,135,030 | I'm running a few tasks on the same terminal in bash. Is there a way I can stream all the logs I'm seeing on the bash terminal to a log file? I know can technically pipe the logs of individual tasks, but wondering if there's a more elegant way. So far this is what I'm doing:
```
$> python background1.py > logs/bg1.log & \
python background2.py > logs/bg2.log & \
python foreground.py | tee logs/fg.log
```
Is there a way I can somehow capture everything together? (somewhat similar to how CI/CD tools show all of the terminal output in the browser). | 2020/04/10 | [
"https://Stackoverflow.com/questions/61135030",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/499363/"
] | Sorry I didn't understand your question :D
For this case you can use an input to specify what you need:
```
choice = int(input("What type of function do you want to use? "))
```
And then you can use an `if` chain for your selection:
```
if choice == 1:
    command.a(args)
elif choice == 2:
    command.b(args)
elif choice == 3:
    command.c(args)
else:
    print("Invalid command. Use: [1, 2, 3]")
```
Hope it works for you this time :D | You can use `input`:
```
name = input("What's your name? ")
print("Hello,", name)
```
If you're writing a command line tool, it's very doable with the [click package](https://click.palletsprojects.com/en/7.x/). See their Hello World example:
```
import click
@click.command()
@click.option('--count', default=1, help='Number of greetings.')
@click.option('--name', prompt='Your name',
help='The person to greet.')
def hello(count, name):
"""Simple program that greets NAME for a total of COUNT times."""
for x in range(count):
click.echo('Hello %s!' % name)
if __name__ == '__main__':
hello()
```
Running `python hello.py --count=3` will give you the output below:
```
Your name: John
Hello John!
Hello John!
Hello John!
``` |
69,555,581 | This might be closely related to similar questions such as [Python 3.3: Split string and create all combinations](https://stackoverflow.com/questions/22911367/python-3-3-split-string-and-create-all-combinations/22911505), but I can't infer a pythonic solution from them.
Question is:
Let there be a str such as `'hi|guys|whats|app'`, and I need all permutations of splitting that str by a separator. Example:
```
#splitting only once
['hi','guys|whats|app']
['hi|guys','whats|app']
['hi|guys|whats','app']
#splitting only twice
['hi','guys','whats|app']
['hi','guys|whats','app']
#splitting only three times
...
etc
```
I could write a backtracking algorithm, but does python (itertools, e.g.) offer a library that simplifies this algorithm?
Thanks in advance!! | 2021/10/13 | [
"https://Stackoverflow.com/questions/69555581",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6042172/"
] | An approach, once you have split the string, is to use `itertools.combinations` to define the split points in the list; the other positions should be fused again.
```
def lst_merge(lst, positions, sep='|'):
'''merges a list on points other than positions'''
'''A, B, C, D and 0, 1 -> A, B, C|D'''
a = -1
out = []
for b in list(positions)+[len(lst)-1]:
out.append('|'.join(lst[a+1:b+1]))
a = b
return out
def split_comb(s, split=1, sep='|'):
from itertools import combinations
l = s.split(sep)
return [lst_merge(l, pos, sep=sep)
for pos in combinations(range(len(l)-1), split)]
```
#### examples
```
>>> split_comb('hi|guys|whats|app', 0)
[['hi|guys|whats|app']]
>>> split_comb('hi|guys|whats|app', 1)
[['hi', 'guys|whats|app'],
['hi|guys', 'whats|app'],
['hi|guys|whats', 'app']]
>>> split_comb('hi|guys|whats|app', 2)
[['hi', 'guys', 'whats|app'],
['hi', 'guys|whats', 'app'],
['hi|guys', 'whats', 'app']]
>>> split_comb('hi|guys|whats|app', 3)
[['hi', 'guys', 'whats', 'app']]
>>> split_comb('hi|guys|whats|app', 4)
[] ## impossible
```
#### rationale
```
ABCD -> A B C D
0 1 2
combinations of split points: 0/1 or 0/2 or 1/2
0/1 -> merge on 2 -> A B CD
0/2 -> merge on 1 -> A BC D
1/2 -> merge on 0 -> AB C D
```
#### generic function
Here is a generic version, working like above but also taking `-1` as parameter for `split`, in which case it will output all combinations
```
def lst_merge(lst, positions, sep='|'):
a = -1
out = []
for b in list(positions)+[len(lst)-1]:
out.append('|'.join(lst[a+1:b+1]))
a = b
return out
def split_comb(s, split=1, sep='|'):
from itertools import combinations, chain
l = s.split(sep)
if split == -1:
pos = chain.from_iterable(combinations(range(len(l)-1), r)
for r in range(len(l)+1))
else:
pos = combinations(range(len(l)-1), split)
return [lst_merge(l, pos, sep=sep)
for pos in pos]
```
example:
```
>>> split_comb('hi|guys|whats|app', -1)
[['hi|guys|whats|app'],
['hi', 'guys|whats|app'],
['hi|guys', 'whats|app'],
['hi|guys|whats', 'app'],
['hi', 'guys', 'whats|app'],
['hi', 'guys|whats', 'app'],
['hi|guys', 'whats', 'app'],
['hi', 'guys', 'whats', 'app']]
``` | One approach using [`combinations`](https://docs.python.org/3/library/itertools.html#itertools.combinations) and [`chain`](https://docs.python.org/3/library/itertools.html#itertools.chain)
```
from itertools import combinations, chain
def partition(alist, indices):
# https://stackoverflow.com/a/1198876/4001592
pairs = zip(chain([0], indices), chain(indices, [None]))
return (alist[i:j] for i, j in pairs)
s = 'hi|guys|whats|app'
delimiter_count = s.count("|")
splits = s.split("|")
for i in range(1, delimiter_count + 1):
print("split", i)
for combination in combinations(range(1, delimiter_count + 1), i):
res = ["|".join(part) for part in partition(splits, combination)]
print(res)
```
**Output**
```
split 1
['hi', 'guys|whats|app']
['hi|guys', 'whats|app']
['hi|guys|whats', 'app']
split 2
['hi', 'guys', 'whats|app']
['hi', 'guys|whats', 'app']
['hi|guys', 'whats', 'app']
split 3
['hi', 'guys', 'whats', 'app']
```
The idea is to generate all the ways to pick (or remove) a delimiter 1, 2, 3 times and generate the partitions from there. |
69,555,581 | This might be closely related to similar questions such as [Python 3.3: Split string and create all combinations](https://stackoverflow.com/questions/22911367/python-3-3-split-string-and-create-all-combinations/22911505), but I can't infer a pythonic solution from them.
Question is:
Let there be a str such as `'hi|guys|whats|app'`, and I need all permutations of splitting that str by a separator. Example:
```
#splitting only once
['hi','guys|whats|app']
['hi|guys','whats|app']
['hi|guys|whats','app']
#splitting only twice
['hi','guys','whats|app']
['hi','guys|whats','app']
#splitting only three times
...
etc
```
I could write a backtracking algorithm, but does python (itertools, e.g.) offer a library that simplifies this algorithm?
Thanks in advance!! | 2021/10/13 | [
"https://Stackoverflow.com/questions/69555581",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6042172/"
] | An approach, once you have split the string, is to use `itertools.combinations` to define the split points in the list; the other positions should be fused again.
```
def lst_merge(lst, positions, sep='|'):
'''merges a list on points other than positions'''
'''A, B, C, D and 0, 1 -> A, B, C|D'''
a = -1
out = []
for b in list(positions)+[len(lst)-1]:
out.append('|'.join(lst[a+1:b+1]))
a = b
return out
def split_comb(s, split=1, sep='|'):
from itertools import combinations
l = s.split(sep)
return [lst_merge(l, pos, sep=sep)
for pos in combinations(range(len(l)-1), split)]
```
#### examples
```
>>> split_comb('hi|guys|whats|app', 0)
[['hi|guys|whats|app']]
>>> split_comb('hi|guys|whats|app', 1)
[['hi', 'guys|whats|app'],
['hi|guys', 'whats|app'],
['hi|guys|whats', 'app']]
>>> split_comb('hi|guys|whats|app', 2)
[['hi', 'guys', 'whats|app'],
['hi', 'guys|whats', 'app'],
['hi|guys', 'whats', 'app']]
>>> split_comb('hi|guys|whats|app', 3)
[['hi', 'guys', 'whats', 'app']]
>>> split_comb('hi|guys|whats|app', 4)
[] ## impossible
```
#### rationale
```
ABCD -> A B C D
0 1 2
combinations of split points: 0/1 or 0/2 or 1/2
0/1 -> merge on 2 -> A B CD
0/2 -> merge on 1 -> A BC D
1/2 -> merge on 0 -> AB C D
```
#### generic function
Here is a generic version, working like above but also taking `-1` as parameter for `split`, in which case it will output all combinations
```
def lst_merge(lst, positions, sep='|'):
a = -1
out = []
for b in list(positions)+[len(lst)-1]:
out.append('|'.join(lst[a+1:b+1]))
a = b
return out
def split_comb(s, split=1, sep='|'):
from itertools import combinations, chain
l = s.split(sep)
if split == -1:
pos = chain.from_iterable(combinations(range(len(l)-1), r)
for r in range(len(l)+1))
else:
pos = combinations(range(len(l)-1), split)
return [lst_merge(l, pos, sep=sep)
for pos in pos]
```
example:
```
>>> split_comb('hi|guys|whats|app', -1)
[['hi|guys|whats|app'],
['hi', 'guys|whats|app'],
['hi|guys', 'whats|app'],
['hi|guys|whats', 'app'],
['hi', 'guys', 'whats|app'],
['hi', 'guys|whats', 'app'],
['hi|guys', 'whats', 'app'],
['hi', 'guys', 'whats', 'app']]
``` | Here is a recursive function I came up with:
```
def splitperms(string, i=0):
if len(string) == i:
return [[string]]
elif string[i] == "|":
return [*[[string[:i]] + split for split in splitperms(string[i + 1:])], *splitperms(string, i + 1)]
else:
return splitperms(string, i + 1)
```
Output:
```
>>> splitperms('hi|guys|whats|app')
[['hi', 'guys', 'whats', 'app'], ['hi', 'guys', 'whats|app'], ['hi', 'guys|whats', 'app'], ['hi', 'guys|whats|app'], ['hi|guys', 'whats', 'app'], ['hi|guys', 'whats|app'], ['hi|guys|whats', 'app'], ['hi|guys|whats|app']]
>>>
``` |
69,555,581 | This might be closely related to similar questions such as [Python 3.3: Split string and create all combinations](https://stackoverflow.com/questions/22911367/python-3-3-split-string-and-create-all-combinations/22911505), but I can't infer a pythonic solution from them.
Question is:
Let there be a str such as `'hi|guys|whats|app'`, and I need all permutations of splitting that str by a separator. Example:
```
#splitting only once
['hi','guys|whats|app']
['hi|guys','whats|app']
['hi|guys|whats','app']
#splitting only twice
['hi','guys','whats|app']
['hi','guys|whats','app']
#splitting only three times
...
etc
```
I could write a backtracking algorithm, but does python (itertools, e.g.) offer a library that simplifies this algorithm?
Thanks in advance!! | 2021/10/13 | [
"https://Stackoverflow.com/questions/69555581",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6042172/"
] | An approach, once you have split the string, is to use `itertools.combinations` to define the split points in the list; the other positions should be fused again.
```
def lst_merge(lst, positions, sep='|'):
'''merges a list on points other than positions'''
'''A, B, C, D and 0, 1 -> A, B, C|D'''
a = -1
out = []
for b in list(positions)+[len(lst)-1]:
out.append('|'.join(lst[a+1:b+1]))
a = b
return out
def split_comb(s, split=1, sep='|'):
from itertools import combinations
l = s.split(sep)
return [lst_merge(l, pos, sep=sep)
for pos in combinations(range(len(l)-1), split)]
```
#### examples
```
>>> split_comb('hi|guys|whats|app', 0)
[['hi|guys|whats|app']]
>>> split_comb('hi|guys|whats|app', 1)
[['hi', 'guys|whats|app'],
['hi|guys', 'whats|app'],
['hi|guys|whats', 'app']]
>>> split_comb('hi|guys|whats|app', 2)
[['hi', 'guys', 'whats|app'],
['hi', 'guys|whats', 'app'],
['hi|guys', 'whats', 'app']]
>>> split_comb('hi|guys|whats|app', 3)
[['hi', 'guys', 'whats', 'app']]
>>> split_comb('hi|guys|whats|app', 4)
[] ## impossible
```
#### rationale
```
ABCD -> A B C D
0 1 2
combinations of split points: 0/1 or 0/2 or 1/2
0/1 -> merge on 2 -> A B CD
0/2 -> merge on 1 -> A BC D
1/2 -> merge on 0 -> AB C D
```
#### generic function
Here is a generic version, working like above but also taking `-1` as parameter for `split`, in which case it will output all combinations
```
def lst_merge(lst, positions, sep='|'):
a = -1
out = []
for b in list(positions)+[len(lst)-1]:
out.append('|'.join(lst[a+1:b+1]))
a = b
return out
def split_comb(s, split=1, sep='|'):
from itertools import combinations, chain
l = s.split(sep)
if split == -1:
pos = chain.from_iterable(combinations(range(len(l)-1), r)
for r in range(len(l)+1))
else:
pos = combinations(range(len(l)-1), split)
return [lst_merge(l, pos, sep=sep)
for pos in pos]
```
example:
```
>>> split_comb('hi|guys|whats|app', -1)
[['hi|guys|whats|app'],
['hi', 'guys|whats|app'],
['hi|guys', 'whats|app'],
['hi|guys|whats', 'app'],
['hi', 'guys', 'whats|app'],
['hi', 'guys|whats', 'app'],
['hi|guys', 'whats', 'app'],
['hi', 'guys', 'whats', 'app']]
``` | If you want *all* partitions, try `partitions` from [more-itertools](https://more-itertools.readthedocs.io/en/stable/api.html#more_itertools.partitions):
```
from more_itertools import partitions
s = 'hi|guys|whats|app'
for p in partitions(s.split('|')):
print(list(map('|'.join, p)))
```
Output:
```
['hi|guys|whats|app']
['hi', 'guys|whats|app']
['hi|guys', 'whats|app']
['hi|guys|whats', 'app']
['hi', 'guys', 'whats|app']
['hi', 'guys|whats', 'app']
['hi|guys', 'whats', 'app']
['hi', 'guys', 'whats', 'app']
```
If you only want a certain number of splits, then instead of splitting at *all* separators and then re-joining the parts, you could just get combinations of the separator indexes and take substrings accordingly:
```
from itertools import combinations
s = 'hi|guys|whats|app'
splits = 2
indexes = [i for i, c in enumerate(s) if c == '|']
for I in combinations(indexes, splits):
print([s[i+1:j] for i, j in zip([-1, *I], [*I, None])])
```
Output:
```
['hi', 'guys', 'whats|app']
['hi', 'guys|whats', 'app']
['hi|guys', 'whats', 'app']
``` |
69,555,581 | This might be closely related to similar questions such as [Python 3.3: Split string and create all combinations](https://stackoverflow.com/questions/22911367/python-3-3-split-string-and-create-all-combinations/22911505), but I can't infer a pythonic solution from them.
Question is:
Let there be a str such as `'hi|guys|whats|app'`, and I need all permutations of splitting that str by a separator. Example:
```
#splitting only once
['hi','guys|whats|app']
['hi|guys','whats|app']
['hi|guys|whats','app']
#splitting only twice
['hi','guys','whats|app']
['hi','guys|whats','app']
#splitting only three times
...
etc
```
I could write a backtracking algorithm, but does python (itertools, e.g.) offer a library that simplifies this algorithm?
Thanks in advance!! | 2021/10/13 | [
"https://Stackoverflow.com/questions/69555581",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6042172/"
] | An approach, once you have split the string, is to use `itertools.combinations` to define the split points in the list; the other positions should be fused again.
```
def lst_merge(lst, positions, sep='|'):
'''merges a list on points other than positions'''
'''A, B, C, D and 0, 1 -> A, B, C|D'''
a = -1
out = []
for b in list(positions)+[len(lst)-1]:
out.append('|'.join(lst[a+1:b+1]))
a = b
return out
def split_comb(s, split=1, sep='|'):
from itertools import combinations
l = s.split(sep)
return [lst_merge(l, pos, sep=sep)
for pos in combinations(range(len(l)-1), split)]
```
#### examples
```
>>> split_comb('hi|guys|whats|app', 0)
[['hi|guys|whats|app']]
>>> split_comb('hi|guys|whats|app', 1)
[['hi', 'guys|whats|app'],
['hi|guys', 'whats|app'],
['hi|guys|whats', 'app']]
>>> split_comb('hi|guys|whats|app', 2)
[['hi', 'guys', 'whats|app'],
['hi', 'guys|whats', 'app'],
['hi|guys', 'whats', 'app']]
>>> split_comb('hi|guys|whats|app', 3)
[['hi', 'guys', 'whats', 'app']]
>>> split_comb('hi|guys|whats|app', 4)
[] ## impossible
```
#### rationale
```
ABCD -> A B C D
0 1 2
combinations of split points: 0/1 or 0/2 or 1/2
0/1 -> merge on 2 -> A B CD
0/2 -> merge on 1 -> A BC D
1/2 -> merge on 0 -> AB C D
```
#### generic function
Here is a generic version, working like above but also taking `-1` as parameter for `split`, in which case it will output all combinations
```
def lst_merge(lst, positions, sep='|'):
a = -1
out = []
for b in list(positions)+[len(lst)-1]:
out.append('|'.join(lst[a+1:b+1]))
a = b
return out
def split_comb(s, split=1, sep='|'):
from itertools import combinations, chain
l = s.split(sep)
if split == -1:
pos = chain.from_iterable(combinations(range(len(l)-1), r)
for r in range(len(l)+1))
else:
pos = combinations(range(len(l)-1), split)
return [lst_merge(l, pos, sep=sep)
for pos in pos]
```
example:
```
>>> split_comb('hi|guys|whats|app', -1)
[['hi|guys|whats|app'],
['hi', 'guys|whats|app'],
['hi|guys', 'whats|app'],
['hi|guys|whats', 'app'],
['hi', 'guys', 'whats|app'],
['hi', 'guys|whats', 'app'],
['hi|guys', 'whats', 'app'],
['hi', 'guys', 'whats', 'app']]
``` | I am surprised that most answers are using `combinations`, when this is clearly a *binary power* sequence (that is, multiple *binary cartesian products concatenated*).
Let me elaborate: if we have `n` separators, we have `2**n` possible strings, where each separator is either `on` or `off`. So if we map each bit of an integer sequence from `0` to `2**n` to each separator (`0` means we don't split, `1` means we split) we can generate the whole thing quite efficiently (without running into stack depth limits, and being able to *pause* and *resume* the generator -or even run it in parallel!- using just a simple integer to keep track of progress).
```
def partition(index, tokens, separator):
def helper():
n = index
for token in tokens:
yield token
if n % 2:
yield separator
n //= 2
return ''.join(helper())
def all_partitions(txt, separator):
tokens = txt.split(separator)
for i in range(2**(len(tokens)-1)):
yield partition(i, tokens, separator)
for x in all_partitions('hi|guys|whats|app', '|'):
print(x)
```
Explanation:
```
hi|guys|whats|app
^ ^ ^
bit 0 1 2 (big endian representation)
```
```
hi guys whats app
^ ^ ^
0 = 0 0 0
hi|guys whats app
^ ^ ^
1 = 1 0 0
hi guys|whats app
^ ^ ^
2 = 0 1 0
hi|guys|whats app
^ ^ ^
3 = 1 1 0
hi guys whats|app
^ ^ ^
4 = 0 0 1
hi|guys whats|app
^ ^ ^
5 = 1 0 1
hi guys|whats|app
^ ^ ^
6 = 0 1 1
hi|guys|whats|app
^ ^ ^
7 = 1 1 1
``` |
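As an aside, the `2**n` count claimed above can be cross-checked with a short self-contained snippet, re-deriving the partitions with `itertools.combinations` rather than the bit trick:

```python
from itertools import combinations

def all_partitions(s, sep='|'):
    """All ways to keep or drop each separator in s (2**n results for n separators)."""
    tokens = s.split(sep)
    n = len(tokens) - 1  # number of separators
    result = []
    for k in range(n + 1):
        for cut in combinations(range(1, n + 1), k):
            bounds = (0,) + cut + (n + 1,)
            result.append([sep.join(tokens[i:j]) for i, j in zip(bounds, bounds[1:])])
    return result

print(len(all_partitions('hi|guys|whats|app')))  # 8 == 2 ** 3
```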
64,163,749 | I have an asyncio crawler that visits URLs and collects new URLs from HTML responses. I was inspired by this great tool: <https://github.com/aio-libs/aiohttp/blob/master/examples/legacy/crawl.py>
Here is a very simplified piece of the workflow, showing how it works:
```
import asyncio
import aiohttp
class Requester:
def __init__(self):
self.sem = asyncio.BoundedSemaphore(1)
async def fetch(self, url, client):
async with client.get(url) as response:
data = (await response.read()).decode('utf-8', 'replace')
print("URL:", url, " have code:", response.status)
return response, data
async def run(self, urls):
async with aiohttp.ClientSession() as client:
for url in urls:
await self.sem.acquire()
task = asyncio.create_task(self.fetch(url, client))
task.add_done_callback(lambda t: self.sem.release())
def http_crawl(self, _urls_list):
loop = asyncio.get_event_loop()
crawl_loop = asyncio.ensure_future(self.run(_urls_list))
loop.run_until_complete(crawl_loop)
r = Requester()
_url_list = ['https://www.google.com','https://images.google.com','https://maps.google.com','https://mail.google.com','https://news.google.com','https://video.google.com','https://books.google.com']
r.http_crawl(_url_list)
```
What I need now is to add a very slow beautifulsoup-based function. I need that function to not block the main loop and to work as a background process. For instance, it will handle HTTP responses.
I read python docs about it and found that: <https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.run_in_executor>
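For reference, here is the pattern from those docs in a minimal standalone form (a thread pool is used here purely so the sketch runs anywhere; the actual question uses a process pool):

```python
import asyncio
import concurrent.futures

def cpu_bound():
    return sum(i * i for i in range(10 ** 5))

async def main():
    loop = asyncio.get_running_loop()
    # The blocking call runs in the pool; the event loop stays free meanwhile.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        result = await loop.run_in_executor(pool, cpu_bound)
    return result

print(asyncio.run(main()))
```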
I tried to add it to my code, but it does not work as it should (I use `cpu_bound` only for the demo):
```
import asyncio
import aiohttp
import concurrent.futures
def cpu_bound():
return sum(i * i for i in range(10 ** 7))
class Requester:
def __init__(self):
self.sem = asyncio.BoundedSemaphore(1)
async def fetch(self, url, client):
async with client.get(url) as response:
data = (await response.read()).decode('utf-8', 'replace')
print("URL:", url, " have code:", response.status)
####### Blocking operation #######
loop = asyncio.get_running_loop()
with concurrent.futures.ProcessPoolExecutor() as pool:
result = await loop.run_in_executor(pool, cpu_bound)
print('custom process pool', result)
#################################
return response, data
async def run(self, urls):
async with aiohttp.ClientSession() as client:
for url in urls:
await self.sem.acquire()
task = asyncio.create_task(self.fetch(url, client))
task.add_done_callback(lambda t: self.sem.release())
def http_crawl(self, _urls_list):
loop = asyncio.get_event_loop()
crawl_loop = asyncio.ensure_future(self.run(_urls_list))
loop.run_until_complete(crawl_loop)
r = Requester()
_url_list = ['https://www.google.com','https://images.google.com','https://maps.google.com','https://mail.google.com','https://news.google.com','https://video.google.com','https://books.google.com']
r.http_crawl(_url_list)
```
For now, it doesn't work as expected; it blocks the HTTP requests every time:
```
URL: https://www.google.com have code: 200
custom process pool 333333283333335000000
URL: https://images.google.com have code: 200
custom process pool 333333283333335000000
URL: https://maps.google.com have code: 200
custom process pool 333333283333335000000
URL: https://mail.google.com have code: 200
custom process pool 333333283333335000000
URL: https://news.google.com have code: 200
custom process pool 333333283333335000000
URL: https://video.google.com have code: 200
custom process pool 333333283333335000000
```
How to correctly put the task in the background inside the main asyncio process?
Are there best practices on how to do that in a simple way, or I should use Redis for task planning? | 2020/10/01 | [
"https://Stackoverflow.com/questions/64163749",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14376515/"
] | I have created an example in Node.js that is based on the steps from my previous answer to this question.
The first action expects a valid apikey in `params.apikey` as input parameter and returns a bearer token:
```js
/**
*
* main() will be run when you invoke this action
*
* @param Cloud Functions actions accept a single parameter, which must be a JSON object.
*
* @return The output of this action, which must be a JSON object.
*
*/
function main(params) {
const axios = require('axios');
const querystring = require('querystring');
return axios.post('https://iam.cloud.ibm.com/cloudfoundry/login/us-south/oauth/token',
querystring.stringify({
grant_type: "password",
username: "apikey",
password: params.apikey
}), {
auth: {
username: 'cf'
}
})
.then(res => {
console.log(`statusCode: ${res.status}`);
console.log(res.data);
return {
token: res.data.access_token
};
})
.catch(error => {
console.error(error);
return {
      error: error.message
};
})
}
```
The second action expects a valid bearer token in `params.token` and then executes an API call against the IBM Cloud CF Public API. In this example, a GET request against `/v2/organizations`:
```js
/**
*
* main() will be run when you invoke this action
*
* @param Cloud Functions actions accept a single parameter, which must be a JSON object.
*
* @return The output of this action, which must be a JSON object.
*
*/
function main(params) {
const axios = require('axios');
axios.defaults.headers.common['Authorization'] = "bearer " + params.token;
const querystring = require('querystring');
return axios.get('https://api.us-south.cf.cloud.ibm.com/v2/organizations')
.then(res => {
console.log(`statusCode: ${res.status}`);
console.log(res.data);
return {
organizations: res.data.resources
};
})
.catch(error => {
console.error(error);
return {
error: error.message
};
})
}
```
Now you can put both actions into a sequence, so that the output from the first action (the bearer token) is used as the token within the second action. | I can't guide you the full way right now, but I hope the information I can provide will guide you in the right direction.
First you'll need to identify the authorization endpoint:
`curl http://api.us-south.cf.cloud.ibm.com/info`
With that and a valid IAM API token for your account you can get the bearer token that will work against the IBM Cloud CF Public API:
`curl -v -X POST "https://iam.cloud.ibm.com/cloudfoundry/login/us-south/oauth/token" -d "grant_type=password&scope=&username=apikey&password=<yourApiKey>" --user "cf:"`
Note that you need to append `/oauth/token` to the authorization endpoint that you received in step 1.
The response contains the access token that you need. For this example, just put it into an environment variable:
`export TOKEN=<yourAccessToken>`
Next try a command against the IBM Cloud CF Public API:
`curl "https://api.us-south.cf.cloud.ibm.com/v2/organizations" -X GET -H "Authorization: bearer $TOKEN"`
I hope once you have followed these steps in your command line, you will be able to do the same steps in your IBM Cloud Function and you'll reach your goal. |
53,098,413 | I am storing discount codes with different prefixes and unique digits at the end (`10OFF<abc>`, `25OFF<abc>`, `50OFF<abc>`, etc.) in a file, and then loading that file into a list.
I am trying to make a function so that when they are redeemed, they are removed from the list, and the file is overwritten. Right now what I am doing looks like this:
```
for x in range(0, 5):
total += codes[0] + '\n'
codes.remove(codes[0])
with open('codes.txt', 'w') as f:
for code in codes:
f.write(code+'\n')
```
For one thing, I don't think this is a very pythonic way of doing things, and it feels dirty. And for another, there's not really a way for me to specify which discount code to select and remove - doing it this way I would have to make separate files for the `10OFF`, `25OFF`, and `50OFF` codes.
Does anyone have any suggestions? | 2018/11/01 | [
"https://Stackoverflow.com/questions/53098413",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10142229/"
] | This should do what you want:
```
list_of_codes=open('codes.txt','rt').read().split('\n')
while True:
code=input('enter code to remove:')
if code in list_of_codes:
break
else:
print('code you entered is not in the list')
continue
list_of_codes.pop(list_of_codes.index(code))
with open('codes.txt','wt') as f:
[f.write(item+'\n') for item in list_of_codes]
``` | This code prompts the user to input all the codes they wish to remove, then it reads the current file and overwrites the file with only the codes that were NOT input by the user. The file treats the whole content of a line as a code (it must contain the prefix + unique digits).
The code also leaves the old file as a backup, so you can inspect the changes afterwards.
```
import datetime
import shutil
def remove(base_file_name, codes_to_remove):
now_ = datetime.datetime.now()
current_file_name = '{}.txt'.format(base_file_name)
backup_file_name = '{}-{}.txt'.format(base_file_name, now_.strftime('%Y%m%d-%H%M%S'))
# copy current file to the new name, which will also be kept as backup
shutil.copy2(current_file_name, backup_file_name)
with open(backup_file_name, 'r') as fr, open(current_file_name, 'w') as fw:
for line in fr:
line = line.strip()
if len(line) > 0:
# 'line' will be an individual code
if line not in codes_to_remove:
fw.write('{}\n'.format(line))
if __name__ == '__main__':
code_list = input('Enter codes to remove (separated by spaces): ').split()
remove('my-codes', code_list)
```
Suppose the file **my-codes.txt** contains the following lines:
```
25OFF123456
25OFF123457
25OFF123458
50OFF111112
50OFF111113
```
When you run this code, and when prompted you input a few codes:
```
Enter codes to remove (separated by spaces): 50OFF111112 50OFF111114
Process finished with exit code 0
```
Then afterwards the file **my-codes.txt** will contain one less code (the second code from the prompt `50OFF111114` does not exist in the file and will have no effect):
```
25OFF123456
25OFF123457
25OFF123458
50OFF111113
```
You also will be left with a new file called **my-codes-20181101-120102.txt** which will contain the original 5 codes from before running the script.
---
Notes:
* if you have thousands of codes and are looking for more speed, then you should look into using databases. But for only a few hundred codes this should suffice.
* Your promo codes can be of any format, as long as the file contains a single code per line, and you have to input the code into the prompt exactly like it is in the file. Of course, you can modify this script to fit your specific use case. |
53,098,413 | I am storing discount codes with different prefixes and unique digits at the end (`10OFF<abc>`, `25OFF<abc>`, `50OFF<abc>`, etc.) in a file, and then loading that file into a list.
I am trying to make a function so that when they are redeemed, they are removed from the list, and the file is overwritten. Right now what I am doing looks like this:
```
for x in range(0, 5):
total += codes[0] + '\n'
codes.remove(codes[0])
with open('codes.txt', 'w') as f:
for code in codes:
f.write(code+'\n')
```
For one thing, I don't think this is a very pythonic way of doing things, and it feels dirty. And for another, there's not really a way for me to specify which discount code to select and remove - doing it this way I would have to make separate files for the `10OFF`, `25OFF`, and `50OFF` codes.
Does anyone have any suggestions? | 2018/11/01 | [
"https://Stackoverflow.com/questions/53098413",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10142229/"
] | This should do what you want:
```
list_of_codes=open('codes.txt','rt').read().split('\n')
while True:
code=input('enter code to remove:')
if code in list_of_codes:
break
else:
print('code you entered is not in the list')
continue
list_of_codes.pop(list_of_codes.index(code))
with open('codes.txt','wt') as f:
[f.write(item+'\n') for item in list_of_codes]
``` | I'd suggest using a database structure for this such as maybe [TinyDB](https://pypi.org/project/tinydb/) *(uses json in the background)*. Or at the very least making the file a .json file and using [dataIO](https://pypi.org/project/dataIO/). This way saving and loading are faster and you keep a `list` in python that's easy to use.
Also, Python has a built-in unique-ID generator called [uuid](https://docs.python.org/2/library/uuid.html) that I use for unique strings.
```
from uuid import uuid4 as gen
import dataIO

filepath = '~/codes.json'
codes = ('10OFF<{}>', '25OFF<{}>', '50OFF<{}>')  # etc.

# initial setup
if not dataIO.is_valid_json(filepath):
    dataIO.save_json(filepath, {'codes': []})

# load the database
db = dataIO.load_json(filepath)
all_codes = db['codes']

# just for ease of typing out the following
def save_codes():
    dataIO.save_json(filepath, db)

def get_codes():
    # uuid4() must be called; its .hex gives a unique hex string to slice
    generated_codes = [c.format(gen().hex[:8]) for c in codes]
    # Display the codes generated
    print('\n'.join(generated_codes))
    # Save them to file
    all_codes.extend(generated_codes)
    save_codes()

def use_code(code):
    if code in all_codes:
        all_codes.remove(code)
        save_codes()
        print("Enjoy {}% Off!".format(code[:2]))
    else:
        print("Invalid code!")
```
This has the basics on saving/loading and using just a .json file as your mini database. You could even just use python's `csv` library to save and load as a .csv and load it back as a list if this is the only detail you're saving to file *(idk why)*. I'd do some research and find what works best for you!
**Edit**
Just to inform you, `db` isn't an actual database; it's just a .json file, which is basically a dict saved to disk. `codes` is just a key in `db` which holds a list of all the codes. With this approach, you can also have a users list of dicts in `db` and more. You may want to associate codes with certain users and such; this and more are much easier if you use a database format. This is just a quick and dirty, but imo efficient, way of accomplishing this, as your question is only asking about storing codes. Good luck!
40,703,228 | I am trying to run a Flask REST service on CentOS Apache2 using WSGI. The REST service requires only a very small amount of storage, so I decided to use SQLite with the `sqlite3` Python package. The whole application worked perfectly well on my local system, and on the CentOS server when run using `app.run()`. But when I used WSGI to host the application on Apache, I am getting
```
OperationalError: attempt to write a readonly database
```
I have checked the permissions of the file. The user and group of the file are set to apache (under which the server is running) using `chown` and `chgrp`. Also, the file has `rwx` permissions. Still, I am getting the read-only database error. Following is what I get by running `ls -al` on the db file:
```
-rwxrwxrwx. 1 apache apache 8192 Nov 19 01:39 dbfile.db
```
My Apache Configuration:
```
<VirtualHost *>
    ServerName wlc.host.com
    WSGIDaemonProcess wlcd
    WSGIScriptAlias / /var/www/html/wlcd.wsgi

    RewriteEngine on
    RewriteCond %{HTTP:Authorization} ^(.*)
    RewriteRule .* - [e=HTTP_AUTHORIZATION:%1]

    <Directory /var/www/html/>
        WSGIProcessGroup wlcd
        WSGIApplicationGroup %{GLOBAL}
        Order deny,allow
        Allow from all
        Require all granted
    </Directory>
``` | 2016/11/20 | [
"https://Stackoverflow.com/questions/40703228",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6935236/"
] | In addition to changing the database file permissions, you also need to change the permissions of the directory that hosts the database file. Note that a directory needs the execute bit set to be traversable, so something like the following is appropriate:
```
chmod 775 /path/to/your/directory/
```
You can also change the directory's owner as follows:
```
chown apache:apache /path/to/your/directory/
``` | What worked for me (I don't have sudo) was removing the database file and all migrations and starting again, as described here: [How do I delete DB (sqlite3) in Django 1.9 to start from scratch?](https://stackoverflow.com/questions/42150499/how-do-i-delete-db-sqlite3-in-django-1-9-to-start-from-scratch/42150639) |
40,703,228 | I am trying to run a Flask REST service on CentOS Apache2 using WSGI. The REST service requires very little storage, so I decided to use SQLite via the `sqlite3` Python package. The whole application worked perfectly well on my local system and on the CentOS server when run using `app.run()`. But when I used WSGI to host the application on Apache, I am getting
```
OperationalError: attempt to write a readonly database
```
I have checked the permissions of the file. The user and group of the file are set to apache (under which the server is running) using `chown` and `chgrp`. Also, the file has `rwx` permission. Still, I am getting the read-only database error. The following is what I get by running `ls -al` on the db file:
```
-rwxrwxrwx. 1 apache apache 8192 Nov 19 01:39 dbfile.db
```
My Apache Configuration:
```
<VirtualHost *>
    ServerName wlc.host.com
    WSGIDaemonProcess wlcd
    WSGIScriptAlias / /var/www/html/wlcd.wsgi

    RewriteEngine on
    RewriteCond %{HTTP:Authorization} ^(.*)
    RewriteRule .* - [e=HTTP_AUTHORIZATION:%1]

    <Directory /var/www/html/>
        WSGIProcessGroup wlcd
        WSGIApplicationGroup %{GLOBAL}
        Order deny,allow
        Allow from all
        Require all granted
    </Directory>
``` | 2016/11/20 | [
"https://Stackoverflow.com/questions/40703228",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6935236/"
] | In addition to changing the database file permissions, you also need to change the permissions of the directory that hosts the database file. Note that a directory needs the execute bit set to be traversable, so something like the following is appropriate:
```
chmod 775 /path/to/your/directory/
```
You can also change the directory's owner as follows:
```
chown apache:apache /path/to/your/directory/
``` | run
```
sudo chmod 774 path-to-your-djangoproject-directory
```
The directory should be the one which contains manage.py and db.sqlite3 files. (the parent container).
Worked in my case. |
40,703,228 | I am trying to run a Flask REST service on CentOS Apache2 using WSGI. The REST service requires very little storage, so I decided to use SQLite via the `sqlite3` Python package. The whole application worked perfectly well on my local system and on the CentOS server when run using `app.run()`. But when I used WSGI to host the application on Apache, I am getting
```
OperationalError: attempt to write a readonly database
```
I have checked the permissions of the file. The user and group of the file are set to apache (under which the server is running) using `chown` and `chgrp`. Also, the file has `rwx` permission. Still, I am getting the read-only database error. The following is what I get by running `ls -al` on the db file:
```
-rwxrwxrwx. 1 apache apache 8192 Nov 19 01:39 dbfile.db
```
My Apache Configuration:
```
<VirtualHost *>
    ServerName wlc.host.com
    WSGIDaemonProcess wlcd
    WSGIScriptAlias / /var/www/html/wlcd.wsgi

    RewriteEngine on
    RewriteCond %{HTTP:Authorization} ^(.*)
    RewriteRule .* - [e=HTTP_AUTHORIZATION:%1]

    <Directory /var/www/html/>
        WSGIProcessGroup wlcd
        WSGIApplicationGroup %{GLOBAL}
        Order deny,allow
        Allow from all
        Require all granted
    </Directory>
``` | 2016/11/20 | [
"https://Stackoverflow.com/questions/40703228",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6935236/"
] | What worked for me (I don't have sudo) was removing the database file and all migrations and starting again, as described here: [How do I delete DB (sqlite3) in Django 1.9 to start from scratch?](https://stackoverflow.com/questions/42150499/how-do-i-delete-db-sqlite3-in-django-1-9-to-start-from-scratch/42150639) | run
```
sudo chmod 774 path-to-your-djangoproject-directory
```
The directory should be the one which contains manage.py and db.sqlite3 files. (the parent container).
Worked in my case. |
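One detail worth adding to the permission answers above: SQLite also needs write access to the database's *directory*, because it creates journal files next to the database. A small self-contained check (the paths here are a throwaway temp database, purely illustrative; run the real check as the apache user):

```python
import os
import sqlite3
import tempfile

def diagnose(db_path):
    """Report whether the DB file and its directory are writable."""
    directory = os.path.dirname(os.path.abspath(db_path))
    return {
        "file_writable": os.access(db_path, os.W_OK),
        "dir_writable": os.access(directory, os.W_OK),  # often the missing piece
    }

# Demo against a throwaway database
db_path = os.path.join(tempfile.mkdtemp(), "dbfile.db")
sqlite3.connect(db_path).close()
print(diagnose(db_path))
```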
13,984,423 | I am very new to Python; this is the first program that I am trying.
This function reads the password from the standard input.
```
def getPassword() :
passwordArray =[]
while 1:
char = sys.stdin.read(1)
if char == '\\n':
break
passwordArray.append(char)
return passwordArray
print (username)
print (URL)
```
getting this error:
```
Problem invoking WLST - Traceback (innermost last):
(no code object) at line 0
File "/scratch/aime/work/stmp/wlstCommand.py", line 10
while 1:
^
SyntaxError: invalid syntax
``` | 2012/12/21 | [
"https://Stackoverflow.com/questions/13984423",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1731553/"
] | Your indentation is not correct. Your `while` should be indented the same as the line above it. | Python uses indentation to "separate" blocks, and the thing with that is that you need to use the same kind of indentation consistently across the file. Sticking to a fixed kind of indentation in the code you write is good practice. You might want to consider a tab or four spaces (the latter being the suggestion in the PEP 8 style guide) |
13,984,423 | I am very new to Python; this is the first program that I am trying.
This function reads the password from the standard input.
```
def getPassword() :
passwordArray =[]
while 1:
char = sys.stdin.read(1)
if char == '\\n':
break
passwordArray.append(char)
return passwordArray
print (username)
print (URL)
```
getting this error:
```
Problem invoking WLST - Traceback (innermost last):
(no code object) at line 0
File "/scratch/aime/work/stmp/wlstCommand.py", line 10
while 1:
^
SyntaxError: invalid syntax
``` | 2012/12/21 | [
"https://Stackoverflow.com/questions/13984423",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1731553/"
] | Your indentation is not correct. Your `while` should be indented the same as the line above it. | Python is sensitive to, and dependent on, indentation. If it complains about "invalid format", that is better than "invalid syntax". |
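For reference, a correctly indented version of the function from the question might look like this (rewritten here to accept any text stream so it can be exercised without a real terminal; for real password input, `getpass.getpass` is the usual tool):

```python
import io
import sys

def get_password(stream=None):
    # Same logic as the question's function, with consistent indentation
    stream = stream or sys.stdin
    password_chars = []
    while True:
        char = stream.read(1)
        if char == '\n' or char == '':   # stop at newline or end of input
            break
        password_chars.append(char)
    return ''.join(password_chars)

print(get_password(io.StringIO("secret\n")))  # secret
```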
43,380,783 | I wrote a MoviePy script that takes an input video, does some processing, and outputs a video file. I want to run this through an entire folder of videos. Any help or direction is appreciated.
Here's what I tried...
```
for f in *; do python resize.py $f; done
```
and resize.py source code here:
```
from moviepy.editor import *
clip = VideoFileClip(input)
clip1 = clip.rotate(270)
clip2 = clip1.crop(x_center=540,y_center=960,width=1080,height=608)
clip3 = clip2.resize(width=1920)
clip3.write_videofile(output,codec='libx264')
```
Really wasn't sure what to put for "input" and "output" in my .py file.
Thanks,
Evan | 2017/04/12 | [
"https://Stackoverflow.com/questions/43380783",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7859068/"
] | I know you have an answer [on Github](https://github.com/Zulko/moviepy/issues/542#issuecomment-293735347), but I'll add my own solution.
First, you'll want to put your code inside a function:
```
def process_video(input, output):
    """Parameters input/output should be strings with the full paths for the source and destination videos"""
    clip = VideoFileClip(input)
    clip1 = clip.rotate(270)
    clip2 = clip1.crop(x_center=540, y_center=960, width=1080, height=608)
    clip3 = clip2.resize(width=1920)
    clip3.write_videofile(output, codec='libx264')
```
Then, you can have a function that returns a list of file paths, and a list of final file names to use with the above function (note that the final file names will be the same as the original file names but with "output" in front):
```
import os
def get_video_paths(folder_path):
    """
    Parameter folder_path should look like "Users/documents/folder1/"
    Returns a list of complete paths
    """
    file_name_list = os.listdir(folder_path)
    path_name_list = []
    final_name_list = []
    for name in file_name_list:
        # Put any sanity checks here, e.g.:
        if name == ".DS_Store":
            pass
        else:
            path_name_list.append(folder_path + name)
            # Change the format of the output file names below
            final_name_list.append(folder_path + "output" + name)
    return path_name_list, final_name_list
```
Finally, at the bottom, we get the input folder, and utilise the above two functions:
```
if __name__ == "__main__":
    video_folder = input("What folder would you like to process? ")
    path_list, final_name_list = get_video_paths(video_folder)
    for path, name in zip(path_list, final_name_list):
        process_video(path, name)
    print("Finished")
```
Just watch out, because this will crash if there are any files in the folder that can't be read as a movie. For instance, on mac, the OS puts a ".DS\_Store" file in each folder, which will crash the program. I've put an area for a sanity check to ignore certain filenames.
Complete code:
```
import os
from moviepy.editor import *

def process_video(input, output):
    """Parameter input should be a string with the full path for a video"""
    clip = VideoFileClip(input)
    clip1 = clip.rotate(270)
    clip2 = clip1.crop(x_center=540, y_center=960, width=1080, height=608)
    clip3 = clip2.resize(width=1920)
    clip3.write_videofile(output, codec='libx264')

def get_video_paths(folder_path):
    """
    Parameter folder_path should look like "Users/documents/folder1/"
    Returns a list of complete paths
    """
    file_name_list = os.listdir(folder_path)
    path_name_list = []
    final_name_list = []
    for name in file_name_list:
        # Put any sanity checks here, e.g.:
        if name == ".DS_Store":
            pass
        else:
            path_name_list.append(folder_path + name)
            final_name_list.append(folder_path + "output" + name)
    return path_name_list, final_name_list

if __name__ == "__main__":
    video_folder = input("What folder would you like to process? ")
    path_list, final_name_list = get_video_paths(video_folder)
    for path, name in zip(path_list, final_name_list):
        process_video(path, name)
    print("Finished")
``` | I responded on your [Github issue #542](https://github.com/Zulko/moviepy/issues/542#issuecomment-293843765), but I copied it here for future reference!
First off, the below example isn't ironclad, but it should do what you need.
You can achieve this via something like this:
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Convert all media assets located in a specified directory."""
import glob
import os
from optparse import OptionParser
from moviepy.editor import VideoFileClip

def get_dir_files(dir_path, patterns=None):
    """Get all absolute paths for pattern matched files in a directory.

    Args:
        dir_path (str): The path of the directory containing media assets.
        patterns (list of str): The list of patterns/file extensions to match.

    Returns:
        (list of str): A list of all pattern-matched files in a directory.
    """
    if not patterns or type(patterns) != list:
        print('No patterns list passed to get_dir_files, defaulting to patterns.')
        patterns = ['*.mp4', '*.avi', '*.mov', '*.flv']
    files = []
    for pattern in patterns:
        # Use a separate variable so dir_path isn't clobbered between patterns
        pattern_path = os.path.abspath(dir_path) + '/' + pattern
        files.extend(glob.glob(pattern_path))
    return files

def modify_clip(path, output):
    """Handle conversion of a video file.

    Args:
        path (str): The path to the video file to be converted.
        output (str): The filename to associate with the converted file.
    """
    clip = VideoFileClip(path)
    clip = clip.rotate(270)
    clip = clip.crop(x_center=540, y_center=960, width=1080, height=608)
    clip = clip.resize(width=1920)
    clip.write_videofile(output, codec='libx264')
    print('File: {} should have been created.'.format(output))

if __name__ == '__main__':
    status = 'Failed!'
    parser = OptionParser(version='%prog 1.0.0')
    parser.add_option('-p', '--path', action='store', dest='dir_path',
                      default='.', type='string',
                      help='the path of the directory of assets, defaults to .')
    options, args = parser.parse_args()
    print('Running against directory path: {}'.format(options.dir_path))
    path_correct = raw_input('Is that correct?').lower()
    if path_correct.startswith('y'):
        dir_paths = get_dir_files(options.dir_path)
        for dir_path in dir_paths:
            output_filename = 'converted_' + os.path.basename(dir_path)
            modify_clip(path=dir_path, output=output_filename)
        status = 'Successful!'
    print('Conversion {}'.format(status))
```
With the above example, you can simply drop that into the directory of assets you wish to convert and run: `python this_file.py` and it should convert the files for you in the same directory with the name prepended with: `converted_`
Likewise, you can drop that file anywhere and run it against an absolute path:
`python this_file.py -p /Users/thisguy/media` and it will convert all files with the extensions: `['*.mp4', '*.avi', '*.mov', '*.flv']`
Either way, let me know if you have any questions (or if this resolves your issue) and I'll do my best to help you out!
Thanks for using moviepy! |
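On the original question of what to put for "input" and "output" in `resize.py`: the simplest stdlib-only approach is to read both paths from the command-line arguments, which is exactly what the shell loop passes in. A sketch (the MoviePy processing itself is elided so this stays self-contained):

```python
import sys

def main(argv):
    if len(argv) != 3:
        raise SystemExit("usage: python resize.py INPUT OUTPUT")
    input_path, output_path = argv[1], argv[2]
    # from moviepy.editor import VideoFileClip   # real processing would go here,
    # e.g. VideoFileClip(input_path) ... .write_videofile(output_path)
    return input_path, output_path

print(main(["resize.py", "in.mp4", "out.mp4"]))  # ('in.mp4', 'out.mp4')
```

With that in place, the shell loop becomes `for f in *.mp4; do python resize.py "$f" "out_$f"; done`.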
37,124,342 | I have been "using" `Statsmodels` for less than 2 days and am not at all familiar with the import commands etc. I want to run a simple `variance_inflation_factor` from [here](http://statsmodels.sourceforge.net/devel/generated/statsmodels.stats.outliers_influence.variance_inflation_factor.html) but am having some issues. My code follows:
```
from numpy import *
import numpy as np
import pandas as pd
from pandas import DataFrame, Series
import statsmodels.formula.api as sm
from sklearn.linear_model import LinearRegression
import scipy, scipy.stats
import matplotlib.pyplot as plt
import matplotlib
matplotlib.style.use('ggplot')
from statsmodels.api import add_constant
from numpy import linalg as LA
import statsmodels as sm
## I have been adding libraries and modules/packages with the intention of erring on the side of caution
a = df1.years_exp
b = df1.leg_totalbills
c = df1.log_diff_rgdp
d = df1.unemployment
e = df1.expendituresfor
f = df1.direct_expenditures
g = df1.indirect_expenditures
sm.variance_inflation_factor((['a', 'b', 'c', 'd', 'e', 'f']), g)
then I get the following error:
AttributeError Traceback (most recent call last)
<ipython-input-61-bb126535eadd> in <module>()
----> 1 sm.variance_inflation_factor((['a', 'b', 'c', 'd', 'e', 'f']), g)
AttributeError: module 'statsmodels' has no attribute 'variance_inflation_factor'
```
Can someone direct me to the proper syntax for loading and executing this module? If it would be more convenient for me to post a link to some source code, please ask. However, I have a feeling that this is just a simple syntax issue. | 2016/05/09 | [
"https://Stackoverflow.com/questions/37124342",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5211377/"
] | The function `variance_inflation_factor` is found in `statsmodels.stats.outliers_influence`, as seen [in the docs](http://statsmodels.sourceforge.net/devel/_modules/statsmodels/stats/outliers_influence.html), so to use it you must import it correctly; one option would be:
```
from statsmodels.stats import outliers_influence
# code here
outliers_influence.variance_inflation_factor((['a', 'b', 'c', 'd', 'e', 'f']), g)
``` | ```
a = df1.years_exp
b = df1.leg_totalbills
c = df1.log_diff_rgdp
d = df1.unemployment
e = df1.expendituresfor
f = df1.direct_expenditures
g = df1.indirect_expenditures
ck=np.array([a,b,c,d,e,f,g])
outliers_influence.variance_inflation_factor(ck, 6)
``` |
37,124,342 | I have been "using" `Statsmodels` for less than 2 days and am not at all familiar with the import commands etc. I want to run a simple `variance_inflation_factor` from [here](http://statsmodels.sourceforge.net/devel/generated/statsmodels.stats.outliers_influence.variance_inflation_factor.html) but am having some issues. My code follows:
```
from numpy import *
import numpy as np
import pandas as pd
from pandas import DataFrame, Series
import statsmodels.formula.api as sm
from sklearn.linear_model import LinearRegression
import scipy, scipy.stats
import matplotlib.pyplot as plt
import matplotlib
matplotlib.style.use('ggplot')
from statsmodels.api import add_constant
from numpy import linalg as LA
import statsmodels as sm
## I have been adding libraries and modules/packages with the intention of erring on the side of caution
a = df1.years_exp
b = df1.leg_totalbills
c = df1.log_diff_rgdp
d = df1.unemployment
e = df1.expendituresfor
f = df1.direct_expenditures
g = df1.indirect_expenditures
sm.variance_inflation_factor((['a', 'b', 'c', 'd', 'e', 'f']), g)
then I get the following error:
AttributeError Traceback (most recent call last)
<ipython-input-61-bb126535eadd> in <module>()
----> 1 sm.variance_inflation_factor((['a', 'b', 'c', 'd', 'e', 'f']), g)
AttributeError: module 'statsmodels' has no attribute 'variance_inflation_factor'
```
Can someone direct me to the proper syntax for loading and executing this module? If it would be more convenient for me to post a link to some source code, please ask. However, I have a feeling that this is just a simple syntax issue. | 2016/05/09 | [
"https://Stackoverflow.com/questions/37124342",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5211377/"
] | Thanks for asking this question! I had the same question today, except I wanted to calculate the variance inflation factor for each of the features. Here is a programmatic way to do this:
```
from patsy import dmatrices
from statsmodels.stats.outliers_influence import variance_inflation_factor
# 'feature_1 + feature_2 ... feature_p'
features_formula = "+".join(df1.columns.difference(["indirect_expenditures"]))
# get y and X dataframes based on this formula:
# indirect_expenditures ~ feature_1 + feature_2 ... feature_p
y, X = dmatrices('indirect_expenditures ~' + features_formula, df1, return_type='dataframe')
# For each Xi, calculate VIF and save in dataframe
vif = pd.DataFrame()
vif["vif"] = [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]
vif["features"] = X.columns
vif
```
Please note the above code works only if you have imported `pandas` and df1 is a `pandas DataFrame` | ```
a = df1.years_exp
b = df1.leg_totalbills
c = df1.log_diff_rgdp
d = df1.unemployment
e = df1.expendituresfor
f = df1.direct_expenditures
g = df1.indirect_expenditures
ck=np.array([a,b,c,d,e,f,g])
outliers_influence.variance_inflation_factor(ck, 6)
``` |
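As a quick sanity check on what these VIF numbers mean: in the special case of exactly two predictors, the VIF of each reduces to 1 / (1 − r²), where r is the Pearson correlation between them. That can be verified with the standard library alone (the data below is hypothetical):

```python
from math import sqrt

x1 = [1, 2, 3, 4, 5]
x2 = [2, 4, 6, 8, 11]   # nearly collinear with x1

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    return cov / sqrt(sum((a - mx) ** 2 for a in xs) * sum((b - my) ** 2 for b in ys))

r = pearson(x1, x2)
vif = 1 / (1 - r * r)   # VIF of either predictor in the two-predictor case
print(vif)              # large, because x1 and x2 are nearly collinear
```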
11,809,643 | I have some python code with many lines like this:
```
print "some text" + variables + "more text and special characters .. etc"
```
I want to modify this to put everything after print within brackets, like this:
```
print ("some text" + variables + "more text and special characters .. etc")
```
How to do this in vim using regex? | 2012/08/04 | [
"https://Stackoverflow.com/questions/11809643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1338814/"
] | Use this substitute:
```
%s/print \(.*$\)/print (\1)
```
`\(.*$\)` matches everything up to the end of the line and captures it in a group using the escaped parentheses. The replacement includes this group using `\1`, surrounded by literal parentheses. | ```
:%s/print \(.*\)/print(\1)/c
```
OR if you visually select multiple lines
```
:'<,'>s/print \(.*\)/print(\1)/c
```
`%` - every line
`'<,'>` - selected lines
`s` - substitute
`c` - confirm - show you what matched before you convert
`print \(.*\)` - exactly match print followed by a space then group everything between the `\(` and `\)`
`print(\1)` - replace with print(<first match>)
Vim has some rules of its own for regex syntax; you can do `:help substitute` or `:help regex` to see what they are. |
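For comparison, the same capture-group idea expressed with Python's `re` module (useful if you'd rather script the change over the file than do it in the editor):

```python
import re

line = 'print "some text" + variables + "more text"'
fixed = re.sub(r'^print (.*)$', r'print(\1)', line)
print(fixed)  # print("some text" + variables + "more text")
```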
20,553,695 | I'm fairly green in Python and trying to get django working to build a simple website. I've installed Django 1.6 under Python 2.7.6 but can't get django-admin to run. According to the tutorial I should create a project as follows, but I get a syntax error:
```
Python 2.7.6 (default, Nov 10 2013, 19:24:18) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import django
>>> import sys
>>> print(django.VERSION)
(1, 6, 0, 'final', 0)
>>> django-admin.py startproject Nutana
File "<stdin>", line 1
django-admin.py startproject Nutana
^
SyntaxError: invalid syntax
>>>
```
I've created a .pth file in the site-packages directory with this:
```
c:\python27\lib\site-packages\django
c:\python27\lib\site-packages\django\bin
```
but that doesn't help. I've tried it with relative paths as well, and with the slashes going the other way.
I've also tried straight from the command line:
```
Z:\Nutana GeophysicsXXX\Web_Django>python django-admin.py startproject Nutana
python: can't open file 'django-admin.py': [Errno 2] No such file or directory
```
Where have I gone wrong?? | 2013/12/12 | [
"https://Stackoverflow.com/questions/20553695",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1324833/"
] | ```
django-admin.py startproject Nutana
```
should be run in the command line, and not in the django shell.
If running it from the command line is still not working:
1. If you are using a virtual-env, did you forget to activate it ?
2. Make sure you add `C:\Python27\Scripts` to the path, and you would not face this issue. | Try this `$ django-admin.py startproject mysite`
You don't need the python statement in front. |
26,721,113 | I have an equation 'a\*x + log(x) - b = 0' (a and b are constants), and I want to solve for x. The problem is that I have numerous constants a (and, accordingly, numerous b). How do I solve this equation using Python? | 2014/11/03 | [
"https://Stackoverflow.com/questions/26721113",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4211557/"
] | You could check out something like
<http://docs.scipy.org/doc/scipy-0.13.0/reference/optimize.nonlin.html>
which has tools specifically designed for these kinds of equations. | Cool - today I learned about Python's numerical solver.
```
from math import log
from scipy.optimize import brentq
def f(x, a, b):
    return a * x + log(x) - b

for a in range(1, 5):
    for b in range(1, 5):
        result = brentq(lambda x: f(x, a, b), 1e-10, 20)
        print a, b, result
```
`brentq` provides an estimate of where the function crosses the x-axis. You need to give it two points: one where the function is definitely negative and one where it is definitely positive. For the negative point, choose a number smaller than exp(-B), where B is the maximum value of `b`. For the positive point, choose a number bigger than B.
If you cannot predict the range of `b` values, you can use a solver instead. This will probably produce a solution, but this is not guaranteed.
```
from scipy.optimize import fsolve
for a in range(1, 5):
    for b in range(1, 5):
        result = fsolve(f, 1, (a, b))
        print a, b, result
``` |
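If scipy is not available at all, the same root can be found with a plain bisection using only the standard library. This sketch assumes a > 0 and b > 0, in which case f(x) = a·x + log(x) − b is strictly increasing, negative near 0, and positive at max((b + 1)/a, 1):

```python
from math import log

def solve(a, b, tol=1e-12):
    """Bisection for a*x + log(x) - b = 0, assuming a > 0 and b > 0."""
    lo, hi = 1e-12, max((b + 1) / a, 1.0)   # f(lo) < 0 and f(hi) > 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if a * mid + log(mid) - b < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x = solve(2, 3)
print(x, 2 * x + log(x) - 3)   # the residual should be ~0
```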
26,906,586 | **Background:**
I have an OpenShift Python 2.7 gear containing my Django 1.6 application. I used django-openshift-quickstart.git as a starting point for my own project and it works well.
However, if I have a syntax error in my code or some other exception I have no way of finding it. I can do a tail of the logs via:
```
rhc tail -a appname
```
However, this only shows me that a 500 error occurred. I never see any exceptions or details other than:
```
10.137.24.60, x.x.x.x - - [13/Nov/2014:17:12:27 -0500] "GET /snapper/snapshots HTTP/1.1" 500 27 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.104 Safari/537.36"
```
The client web browser reports:
```
Server Error (500)
```
I turned on the DEBUG setting (DEBUG = True) in my settings.py but that made no difference. I still see no exceptions in the logs or in the browser.
I believe the container (gear) is using haproxy + apache + mod\_wsgi + python2.7.
I'd dearly love to start getting Django exceptions reported to my browser.
**Question:**
Why do I not see Django exceptions in my browser (or log files) under OpenShift when DEBUG is set to True?
*I realise this appears similar to the existing question [How to debug Django exceptions in OpenShift applications](https://stackoverflow.com/questions/20586363/how-to-debug-django-exceptions-in-openshift-applications) but "rhc tail -a" simply displays the 500 error lines - I still see no Django exceptions.*
AtDhVaAnNkCsE
Doug | 2014/11/13 | [
"https://Stackoverflow.com/questions/26906586",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3538533/"
] | I don't know zilch about OpenShift, but
1. you may have to configure your loggers (<https://docs.djangoproject.com/en/1.6/topics/logging/#configuring-logging>) and
2. you have to restart the wsgi processes when you make some changes to your settings.
Now I strongly advise you to NOT set `DEBUG=True` on a production server - with a properly configured logger and SMTP server you should get all unhandled exceptions by mail.
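For reference, a minimal sketch of what that settings.py configuration could look like (the address and handler choices below are illustrative assumptions, not OpenShift-specific):

```python
# settings.py (sketch)
ADMINS = [("Ops", "ops@example.com")]   # unhandled-exception emails go here

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler"},
        "mail_admins": {
            "level": "ERROR",
            "class": "django.utils.log.AdminEmailHandler",
        },
    },
    "loggers": {
        # Unhandled exceptions in views are logged under django.request
        "django.request": {
            "handlers": ["console", "mail_admins"],
            "level": "ERROR",
            "propagate": False,
        },
    },
}
```

Combined with a working SMTP setup (EMAIL_HOST etc.), this mails tracebacks to ADMINS without ever enabling DEBUG.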
As a last point: if you have something like a syntax error or such, you may not even get to the point where Django can do any logging. In this case what info you can get is up to the server process itself. BUT there's no reason you should get a SyntaxError on a production server anyway, cause you shouldn't be editing code on production... | To my horror... it turns out I simply hadn't set DEBUG=True! I could have sworn I had set it in settings.py at some point but my commit history strongly suggests I'm wrong.
With DEBUG=True in my wsgi/settings.py I can now debug my application on OpenShift.
Apologies for the noise.
Doug |
63,802,423 | I have trouble checking the user token inside the middleware. I'm getting the token from cookies, and then I need to query the database to check that this token exists and belongs to the user that made the request.
**routing.py**
```
from channels.routing import ProtocolTypeRouter, URLRouter
import game.routing
from authentication.utils import TokenAuthMiddlewareStack
application = ProtocolTypeRouter({
    # (http->django views is added by default)
    'websocket': TokenAuthMiddlewareStack(
        URLRouter(
            game.routing.websocket_urlpatterns
        )
    ),
})
```
**middleware.py**
```
from rest_framework.authentication import TokenAuthentication
from rest_framework.exceptions import AuthenticationFailed
from rest_auth.models import TokenModel
from channels.auth import AuthMiddlewareStack
from django.contrib.auth.models import AnonymousUser
from django.db import close_old_connections
...
class TokenAuthMiddleware:
    """
    Token authorization middleware for Django Channels 2
    """

    def __init__(self, inner):
        self.inner = inner

    def __call__(self, scope):
        close_old_connections()
        headers = dict(scope['headers'])
        if b'Authorization' in headers[b'cookie']:
            try:
                cookie_str = headers[b'cookie'].decode('utf-8')
                try:  # no cookie Authorization=Token in the request
                    token_str = [x for x in cookie_str.split(';') if re.search(' Authorization=Token', x)][0].strip()
                except IndexError:
                    scope['user'] = AnonymousUser()
                    return self.inner(scope)
                token_name, token_key = token_str.replace('Authorization=', '').split()
                if token_name == 'Token':
                    token = TokenModel.objects.get(key=token_key)
                    scope['user'] = token.user
            except TokenModel.DoesNotExist:
                scope['user'] = AnonymousUser()
        return self.inner(scope)

TokenAuthMiddlewareStack = lambda inner: TokenAuthMiddleware(AuthMiddlewareStack(inner))
```
And this gives me
```
django.core.exceptions.SynchronousOnlyOperation: You cannot call this from an async context - use a thread or sync_to_async.
```
I also tried the following approaches
```
async def __call__(self, scope):
    ...
    if token_name == 'Token':
        token = await self.get_token(token_key)
        scope['user'] = token.user
    ...

# approach 1
@sync_to_async
def get_token(self, token_key):
    return TokenModel.objects.get(key=token_key)

# approach 2
@database_sync_to_async
def get_token(self, token_key):
    return TokenModel.objects.get(key=token_key)
```
Those approaches give the following error
```
[Failure instance: Traceback: <class 'TypeError'>: 'coroutine' object is not callable
/Users/nikitatonkoshkur/Documents/work/svoya_igra/venv/lib/python3.8/site-packages/autobahn/websocket/protocol.py:2847:processHandshake
/Users/nikitatonkoshkur/Documents/work/svoya_igra/venv/lib/python3.8/site-packages/txaio/tx.py:366:as_future
/Users/nikitatonkoshkur/Documents/work/svoya_igra/venv/lib/python3.8/site-packages/twisted/internet/defer.py:151:maybeDeferred
/Users/nikitatonkoshkur/Documents/work/svoya_igra/venv/lib/python3.8/site-packages/daphne/ws_protocol.py:72:onConnect
--- <exception caught here> ---
/Users/nikitatonkoshkur/Documents/work/svoya_igra/venv/lib/python3.8/site-packages/twisted/internet/defer.py:151:maybeDeferred
/Users/nikitatonkoshkur/Documents/work/svoya_igra/venv/lib/python3.8/site-packages/daphne/server.py:206:create_application
]
```
"https://Stackoverflow.com/questions/63802423",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9631956/"
] | Works on my machine.
Hard to say without knowing what the data looks like, so I took a stab below:
```js
const value = {
expenses: [{amount: 1}, {amount: 2}]
}
const totalExpense = value.expenses.length > 0 ? (
value.expenses.reduce((acc, curr) => {
acc += curr.amount
return acc
}, 0)) : 0;
console.log(totalExpense);
console.log(value.expenses);
```
You can also simplify it further with the following changes:
```js
const value = {
expenses: [{amount: 1}, {amount: 2}]
}
// no need to check length
const totalExpense = value.expenses.reduce((acc, curr) => acc + curr.amount, 0);
console.log(totalExpense);
console.log(value.expenses);
``` | ```
{value => {
const totalExpense = value.expenses.length > 0 ? (
value.expenses.reduce((acc, curr) => {
acc += parseInt(curr.amount)
return acc
}, 0)) : 0;
console.log(totalExpense);
console.log(value.expenses);
```
curr.amount wasn't coming across as an integer, so it needed to be parseInt'd |
48,688,693 | I am new to the Django framework and mostly reading through documentation.
But this one I am unable to crack.
I am trying to add a URL to a headline, so that it forwards to the 'headlines' post.
The Error:
>
> NoReverseMatch at /
> Reverse for 'assignment\_detail' with arguments '('',)' not found. 1 pattern(s) tried: ['assignment\_detail/']
> Request Method: GET
> Request URL: <http://127.0.0.1:8000/>
> Django Version: 2.0.2
> Exception Type: NoReverseMatch
> Exception Value: Reverse for 'assignment\_detail' with arguments '('',)' not found. 1 pattern(s) tried: ['assignment\_detail/']
> Exception Location: C:\Users\internit\Dropbox\Python\codepython\env\lib\site-packages\django\urls\resolvers.py in \_reverse\_with\_prefix, line 632
> Python Executable: C:\Users\internit\Dropbox\Python\codepython\env\Scripts\python.exe
> Python Version: 3.6.2
> Python Path: ['C:\Users\internit\Dropbox\Python\codepython\codepython', 'C:\Users\internit\Dropbox\Python\codepython\env\Scripts\python36.zip', 'C:\Users\internit\Dropbox\Python\codepython\env\DLLs', 'C:\Users\internit\Dropbox\Python\codepython\env\lib', 'C:\Users\internit\Dropbox\Python\codepython\env\Scripts', 'c:\program files (x86)\python36-32\Lib', 'c:\program files (x86)\python36-32\DLLs', 'C:\Users\internit\Dropbox\Python\codepython\env', 'C:\Users\internit\Dropbox\Python\codepython\env\lib\site-packages']
> Server time: Thu, 8 Feb 2018 14:53:07 +0000
>
> Error during template rendering: in template C:\Users\internit\Dropbox\Python\codepython\codepython\codepython\templates\base.html, error at line 0: Reverse for 'assignment\_detail' with arguments '('',)' not found. 1 pattern(s) tried: ['assignment\_detail/']
>
> Traceback:
> django\core\handlers\exception.py in inner: response = get\_response(request)
> django\core\handlers\base.py in \_get\_response: response = self.process\_exception\_by\_middleware(e, request)
> django\core\handlers\base.py in \_get\_response: response = wrapped\_callback(request, \*callback\_args, \*\*callback\_kwargs)
> home\views.py in home: return render(request, 'home.html', {'post':post})
>
home/urls.py
```
from django.conf.urls import url
from django.conf import settings
from django.conf.urls.static import static
from codepython.posts import views
from posts import views as ps
app_name ='home'
urlpatterns = [
url(r'^$/', views.create, name='create'),
url(r'(?P<pk>\d+)/$', views.home, name='home'),
url(r'(?P<pk>\d+)/$', views.userposts, name='userposts'),
url(r'^posts/(?P<post_id>[0-9]+)/$', ps.assignment_detail, name='assignment_detail'),
]+ static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
```
home/views.py
```
from django.shortcuts import render, get_object_or_404
from django.apps import apps
# Create your views here.
def home(request):
posts = apps.get_model("posts", "Post")
post = posts.objects.all().order_by('-pub_date')[0:6]
return render(request, 'home.html', {'post':post})
def assignment_detail(request, post_id):
posts = apps.get_model('posts', 'Post')
post = get_object_or_404(posts, pk=post_id)
return render(request, "assignment_detail.html", {'post': post})
```
home.html
```
<div class="row">
{% for post in post.all %}
<div class="col-md-4">
<div class="thumbnail">
<div class="caption">
<p>Level: {{post.assignment_level}}</p>
<a href="{% url 'assignment_detail' post_id %}"><h3>{{ post.title }}</h3></a>
<p>by {{post.author}} from {{post.pub_date}}</p>
<h4>{{post.assignment_body}}</h4>
<p><a href="#" class="btn btn-primary" role="button">Read...</a></p>
</div>
</div>
</div>
{% endfor %}
</div>
{% endblock%}
```
myproject/urls.py
```
url(r'^assignment_detail/', views.assignment_detail,name='assignment_detail'),
```
What am I missing here?
Thank you in advance. | 2018/02/08 | [
"https://Stackoverflow.com/questions/48688693",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8929670/"
] | Your url pattern does not capture an id, but you're passing one in the template:
```
<a href="{% url 'assignment_detail' post_id %}"><h3>{{ post.title }}</h3></a>
```
It should be:
```
url(r'^assignment_detail/(?P<post_id>[0-9]+)', views.assignment_detail,name='assignment_detail'),
``` | That error is Django telling you that it can't find any URLs named 'assignment\_detail' that have an argument to pass in.
This is because your url entry in `myproject/urls.py` is missing the argument (`post_id`) that you use in your view. You'll need to update that url line to something similar to this:
```
url(r'^assignment_detail/(?P<post_id>[0-9]+)/$', views.assignment_detail, name='assignment_detail'),
```
The change at the end of the URL adds a named regular expression to capture the `post_id` value which will then be passed into the view.
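The named-group change can be sanity-checked outside Django with the stdlib `re` module, since a URL pattern is an ordinary regex (the sample path below is made up):

```python
import re

# The corrected pattern: a named group captures the post id from the path.
pattern = re.compile(r'^assignment_detail/(?P<post_id>[0-9]+)/$')

match = pattern.match('assignment_detail/42/')
print(match.groupdict())  # {'post_id': '42'}
```

Django hands the captured `post_id` to the view as a keyword argument, and `reverse()`/`{% url %}` fill the same slot in the other direction.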
Looking at your template code, you'll need to update your {% url %} block to use `post.id` (note the period), not `post_id`. |
36,620,175 | I am receiving a warning and I want to check if this will break. I am using np.where like this in a lot of cases (it is similar, for me, to an if statement in excel). Is there a better or more pythonic or pandas way to do this? I'm trying to turn one dimension into something I can easily do mathematical operations on.
```
df['closed_item'] = np.where(df['result']=='Action Taken', 1, 0)
FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
result = getattr(x, name)(y)
INSTALLED VERSIONS
------------------
python: 3.5.1.final.0
python-bits: 64
OS: Windows
OS-release: 10
pandas: 0.18.0
nose: 1.3.7
pip: 8.1.0
setuptools: 20.2.2
Cython: 0.23.4
numpy: 1.11.0
scipy: 0.17.0
statsmodels: 0.6.1
xarray: None
IPython: 4.0.0
sphinx: 1.3.1
patsy: 0.4.0
dateutil: 2.4.2
pytz: 2015.7
blosc: None
bottleneck: None
tables: 3.2.2
numexpr: 2.5.1
matplotlib: 1.5.1
openpyxl: 2.2.6
xlrd: 0.9.4
xlwt: 1.0.0
xlsxwriter: 0.7.7
lxml: 3.4.4
bs4: 4.4.1
html5lib: None
httplib2: None
apiclient: None
sqlalchemy: 1.0.9
pymysql: None
psycopg2: None
jinja2: 2.8
boto: 2.38.0
``` | 2016/04/14 | [
"https://Stackoverflow.com/questions/36620175",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3966601/"
] | This warning occurs when comparing "int" and "str" values in your dataset. Add .astype(str) to the comparison column so both sides are strings.
Try:
```
df['closed_item'] = np.where(df['result'].astype(str)=='Action Taken', 1, 0)
``` | The issue that you mentioned is actually quite complex, so let me divide it into parts using your words:
>
> I am receiving a warning and I want to check if this will *break*
>
>
>
A `Warning` is a statement telling you to be cautious with how you handle your coding logic. A well-designed warning is not going to break your code; if that were the case, it would be an `Exception`.
While you need to be concerned if there are problems with your output or performance, often you may ignore a warning *ceteris paribus*. So in your case, if everything else is OK and you do not plan to update the software, you do not need to do anything to suppress the warning. However, if you need to, you may use the following snippet:
```
import warnings
with warnings.catch_warnings():
warnings.filterwarnings('ignore', r'elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison(.*)')
```
>
> I am using np.where like this in a lot of cases (it is similar, for me, to an if statement in excel).
>
>
>
Note that there is a `DataFrame.where` method in pandas.
>
> Is there a better or more pythonic or pandas way to do this?
>
>
>
Yes, there are two ways that you can use to make your code more pandas-like:
If you want to get a number of columns that will work like dummies, you may use
```
pd.get_dummies(df.result)
```
It will produce a data frame with all possible dummy values it could find in a series.
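As a hedged illustration with made-up data (this assumes pandas is installed; newer pandas versions emit boolean dummy columns, hence the explicit cast):

```python
import pandas as pd

df = pd.DataFrame({"result": ["Action Taken", "Pending", "Action Taken"]})
dummies = pd.get_dummies(df.result).astype(int)  # one 0/1 column per distinct value
print(dummies)
```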
If this sounds to you like overkill, do not worry: there are ways to single out just one such variable.
---
In pandas, the booleans `True` and `False` are commonly used to binary-classify matches within a series or a dataframe, so in your case one could perform the following operation:
```py
df.closed_item = df.result == 'Action Taken'
```
>
> I'm trying to turn one dimension into something I can easily do mathematical operations on.
>
>
>
However, if you want the output to contain integer values so that it matches yours, you may use this piece of code:
```py
df.closed_item = (df.result == 'Action Taken').astype(int)
```
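The cast works because Python's `bool` is a subclass of `int`; here is the same per-element logic in plain Python, with made-up sample values:

```python
data = ["Action Taken", "Pending", "Action Taken"]

# Mirrors (df.result == 'Action Taken').astype(int), element by element.
closed_item = [int(value == "Action Taken") for value in data]
print(closed_item)  # [1, 0, 1]
```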
---
*As a side note, I do not think this warning propagates to newer versions, i.e. `0.13` and above (as expected, since it is a future warning), so you may also consider an update.*
4,976,776 | In my vim plugin, I have two files:
```
myplugin/plugin.vim
myplugin/plugin_helpers.py
```
I would like to import plugin\_helpers from plugin.vim (using the vim python support), so I believe I first need to put the directory of my plugin on python's sys.path.
How can I (in vimscript) get the path to the currently executing script? In python, this is `__file__`. In ruby, it's `__FILE__`. I couldn't find anything similar for vim by googling, can it be done?
**Note:** I am not looking for the currently *edited* file ("%:p" and friends). | 2011/02/12 | [
"https://Stackoverflow.com/questions/4976776",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/144135/"
] | ```
" Relative path of script file:
let s:path = expand('<sfile>')
" Absolute path of script file:
let s:path = expand('<sfile>:p')
" Absolute path of script file with symbolic links resolved:
let s:path = resolve(expand('<sfile>:p'))
" Folder in which script resides: (not safe for symlinks)
let s:path = expand('<sfile>:p:h')
" If you're using a symlink to your script, but your resources are in
" the same directory as the actual script, you'll need to do this:
" 1: Get the absolute path of the script
" 2: Resolve all symbolic links
" 3: Get the folder of the resolved absolute file
let s:path = fnamemodify(resolve(expand('<sfile>:p')), ':h')
```
I use that last one often because my `~/.vimrc` is a symbolic link to a script in a git repository. | Found it:
```
let s:current_file=expand("<sfile>")
``` |
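On the Python side of the plugin, the captured value is typically read with `vim.eval()` and used to extend `sys.path` before importing the helper module. A sketch of that glue code; the hard-coded path stands in for what Vim would supply, so it runs outside Vim:

```python
import os
import sys

# In the real plugin this would be: current_file = vim.eval('s:current_file')
current_file = "/home/user/.vim/myplugin/plugin.vim"

plugin_dir = os.path.dirname(os.path.abspath(current_file))
if plugin_dir not in sys.path:
    sys.path.insert(0, plugin_dir)

print(plugin_dir)  # the directory holding plugin.vim and plugin_helpers.py
# import plugin_helpers  # would now resolve when run inside Vim
```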
4,976,776 | In my vim plugin, I have two files:
```
myplugin/plugin.vim
myplugin/plugin_helpers.py
```
I would like to import plugin\_helpers from plugin.vim (using the vim python support), so I believe I first need to put the directory of my plugin on python's sys.path.
How can I (in vimscript) get the path to the currently executing script? In python, this is `__file__`. In ruby, it's `__FILE__`. I couldn't find anything similar for vim by googling, can it be done?
**Note:** I am not looking for the currently *edited* file ("%:p" and friends). | 2011/02/12 | [
"https://Stackoverflow.com/questions/4976776",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/144135/"
] | Found it:
```
let s:current_file=expand("<sfile>")
``` | It is worth mentioning that the above solution will only work outside of a function.
This will not give the desired result:
```
function! MyFunction()
let s:current_file=expand('<sfile>:p:h')
echom s:current_file
endfunction
```
But this will:
```
let s:current_file=expand('<sfile>')
function! MyFunction()
echom s:current_file
endfunction
```
Here's a full solution to OP's original question:
```
let s:path = expand('<sfile>:p:h')
function! MyPythonFunction()
python << EOF
import sys
import os
script_path = vim.eval('s:path')
lib_path = os.path.join(script_path, '.')
sys.path.insert(0, lib_path)
import vim
import plugin_helpers
plugin_helpers.do_some_cool_stuff_here()
vim.command("badd %(result)s" % {'result':plugin_helpers.get_result()})
EOF
endfunction
``` |
4,976,776 | In my vim plugin, I have two files:
```
myplugin/plugin.vim
myplugin/plugin_helpers.py
```
I would like to import plugin\_helpers from plugin.vim (using the vim python support), so I believe I first need to put the directory of my plugin on python's sys.path.
How can I (in vimscript) get the path to the currently executing script? In python, this is `__file__`. In ruby, it's `__FILE__`. I couldn't find anything similar for vim by googling, can it be done?
**Note:** I am not looking for the currently *edited* file ("%:p" and friends). | 2011/02/12 | [
"https://Stackoverflow.com/questions/4976776",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/144135/"
] | Found it:
```
let s:current_file=expand("<sfile>")
``` | If you really want to get the script path inside a function (which is what I'd like to), you can still use `<sfile>`'s second semantic, or its equivalent `<stack>` inside `expand()`.
>
>
> ```
> <sfile> ...
> When executing a legacy function, is replaced with the call
> stack, as with <stack>
> ...
> :<stack> <stack>
> <stack> is replaced with the call stack, using
> "function {function-name}[{lnum}]" for a function line
> and "script {file-name}[{lnum}]" for a script line, and
> ".." in between items. E.g.:
> "function {function-name1}[{lnum}]..{function-name2}[{lnum}]"
> If there is no call stack you get error E489 .
>
> ```
>
>
However, you possibly don't want to use this in a plugin, since plugins can use autoload functions via the `relative#path#to#plugin#root#script` notation.
I use this for sourcing purposes:
```
function! s:SourceLocal(script)
let l:callstack = expand("<stack>")
let l:list = split(l:callstack, '\.\.')
" list[-1] is SourceLocal function itself
" list[-2] is the calling script
let l:script_name = matchstr(l:list[-2], '^\(script \)\=\zs.\+\ze\[\d\+\]$')
let l:script_path = fnamemodify(l:script_name, ":p:h")
" l:script_path is the path where the script calling this function resides
execute printf("source %s/%s", l:script_path, a:script)
endfunction
command! -nargs=1 SourceLocal :call s:SourceLocal(<f-args>)
```
Then you can `SourceLocal` inside any script to source another script relative to it. |
4,976,776 | In my vim plugin, I have two files:
```
myplugin/plugin.vim
myplugin/plugin_helpers.py
```
I would like to import plugin\_helpers from plugin.vim (using the vim python support), so I believe I first need to put the directory of my plugin on python's sys.path.
How can I (in vimscript) get the path to the currently executing script? In python, this is `__file__`. In ruby, it's `__FILE__`. I couldn't find anything similar for vim by googling, can it be done?
**Note:** I am not looking for the currently *edited* file ("%:p" and friends). | 2011/02/12 | [
"https://Stackoverflow.com/questions/4976776",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/144135/"
] | ```
" Relative path of script file:
let s:path = expand('<sfile>')
" Absolute path of script file:
let s:path = expand('<sfile>:p')
" Absolute path of script file with symbolic links resolved:
let s:path = resolve(expand('<sfile>:p'))
" Folder in which script resides: (not safe for symlinks)
let s:path = expand('<sfile>:p:h')
" If you're using a symlink to your script, but your resources are in
" the same directory as the actual script, you'll need to do this:
" 1: Get the absolute path of the script
" 2: Resolve all symbolic links
" 3: Get the folder of the resolved absolute file
let s:path = fnamemodify(resolve(expand('<sfile>:p')), ':h')
```
I use that last one often because my `~/.vimrc` is a symbolic link to a script in a git repository. | It is worth mentioning that the above solution will only work outside of a function.
This will not give the desired result:
```
function! MyFunction()
let s:current_file=expand('<sfile>:p:h')
echom s:current_file
endfunction
```
But this will:
```
let s:current_file=expand('<sfile>')
function! MyFunction()
echom s:current_file
endfunction
```
Here's a full solution to OP's original question:
```
let s:path = expand('<sfile>:p:h')
function! MyPythonFunction()
python << EOF
import sys
import os
script_path = vim.eval('s:path')
lib_path = os.path.join(script_path, '.')
sys.path.insert(0, lib_path)
import vim
import plugin_helpers
plugin_helpers.do_some_cool_stuff_here()
vim.command("badd %(result)s" % {'result':plugin_helpers.get_result()})
EOF
endfunction
``` |
4,976,776 | In my vim plugin, I have two files:
```
myplugin/plugin.vim
myplugin/plugin_helpers.py
```
I would like to import plugin\_helpers from plugin.vim (using the vim python support), so I believe I first need to put the directory of my plugin on python's sys.path.
How can I (in vimscript) get the path to the currently executing script? In python, this is `__file__`. In ruby, it's `__FILE__`. I couldn't find anything similar for vim by googling, can it be done?
**Note:** I am not looking for the currently *edited* file ("%:p" and friends). | 2011/02/12 | [
"https://Stackoverflow.com/questions/4976776",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/144135/"
] | ```
" Relative path of script file:
let s:path = expand('<sfile>')
" Absolute path of script file:
let s:path = expand('<sfile>:p')
" Absolute path of script file with symbolic links resolved:
let s:path = resolve(expand('<sfile>:p'))
" Folder in which script resides: (not safe for symlinks)
let s:path = expand('<sfile>:p:h')
" If you're using a symlink to your script, but your resources are in
" the same directory as the actual script, you'll need to do this:
" 1: Get the absolute path of the script
" 2: Resolve all symbolic links
" 3: Get the folder of the resolved absolute file
let s:path = fnamemodify(resolve(expand('<sfile>:p')), ':h')
```
I use that last one often because my `~/.vimrc` is a symbolic link to a script in a git repository. | If you really want to get the script path inside a function (which is what I'd like to), you can still use `<sfile>`'s second semantic, or its equivalent `<stack>` inside `expand()`.
>
>
> ```
> <sfile> ...
> When executing a legacy function, is replaced with the call
> stack, as with <stack>
> ...
> :<stack> <stack>
> <stack> is replaced with the call stack, using
> "function {function-name}[{lnum}]" for a function line
> and "script {file-name}[{lnum}]" for a script line, and
> ".." in between items. E.g.:
> "function {function-name1}[{lnum}]..{function-name2}[{lnum}]"
> If there is no call stack you get error E489 .
>
> ```
>
>
However, you possibly don't want to use this in a plugin, since plugins can use autoload functions via the `relative#path#to#plugin#root#script` notation.
I use this for sourcing purposes:
```
function! s:SourceLocal(script)
let l:callstack = expand("<stack>")
let l:list = split(l:callstack, '\.\.')
" list[-1] is SourceLocal function itself
" list[-2] is the calling script
let l:script_name = matchstr(l:list[-2], '^\(script \)\=\zs.\+\ze\[\d\+\]$')
let l:script_path = fnamemodify(l:script_name, ":p:h")
" l:script_path is the path where the script calling this function resides
execute printf("source %s/%s", l:script_path, a:script)
endfunction
command! -nargs=1 SourceLocal :call s:SourceLocal(<f-args>)
```
Then you can `SourceLocal` inside any script to source another script relative to it. |
4,976,776 | In my vim plugin, I have two files:
```
myplugin/plugin.vim
myplugin/plugin_helpers.py
```
I would like to import plugin\_helpers from plugin.vim (using the vim python support), so I believe I first need to put the directory of my plugin on python's sys.path.
How can I (in vimscript) get the path to the currently executing script? In python, this is `__file__`. In ruby, it's `__FILE__`. I couldn't find anything similar for vim by googling, can it be done?
**Note:** I am not looking for the currently *edited* file ("%:p" and friends). | 2011/02/12 | [
"https://Stackoverflow.com/questions/4976776",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/144135/"
] | It is worth mentioning that the above solution will only work outside of a function.
This will not give the desired result:
```
function! MyFunction()
let s:current_file=expand('<sfile>:p:h')
echom s:current_file
endfunction
```
But this will:
```
let s:current_file=expand('<sfile>')
function! MyFunction()
echom s:current_file
endfunction
```
Here's a full solution to OP's original question:
```
let s:path = expand('<sfile>:p:h')
function! MyPythonFunction()
python << EOF
import sys
import os
script_path = vim.eval('s:path')
lib_path = os.path.join(script_path, '.')
sys.path.insert(0, lib_path)
import vim
import plugin_helpers
plugin_helpers.do_some_cool_stuff_here()
vim.command("badd %(result)s" % {'result':plugin_helpers.get_result()})
EOF
endfunction
``` | If you really want to get the script path inside a function (which is what I'd like to), you can still use `<sfile>`'s second semantic, or its equivalent `<stack>` inside `expand()`.
>
>
> ```
> <sfile> ...
> When executing a legacy function, is replaced with the call
> stack, as with <stack>
> ...
> :<stack> <stack>
> <stack> is replaced with the call stack, using
> "function {function-name}[{lnum}]" for a function line
> and "script {file-name}[{lnum}]" for a script line, and
> ".." in between items. E.g.:
> "function {function-name1}[{lnum}]..{function-name2}[{lnum}]"
> If there is no call stack you get error E489 .
>
> ```
>
>
However, you possibly don't want to use this in a plugin, since plugins can use autoload functions via the `relative#path#to#plugin#root#script` notation.
I use this for sourcing purposes:
```
function! s:SourceLocal(script)
let l:callstack = expand("<stack>")
let l:list = split(l:callstack, '\.\.')
" list[-1] is SourceLocal function itself
" list[-2] is the calling script
let l:script_name = matchstr(l:list[-2], '^\(script \)\=\zs.\+\ze\[\d\+\]$')
let l:script_path = fnamemodify(l:script_name, ":p:h")
" l:script_path is the path where the script calling this function resides
execute printf("source %s/%s", l:script_path, a:script)
endfunction
command! -nargs=1 SourceLocal :call s:SourceLocal(<f-args>)
```
Then you can `SourceLocal` inside any script to source another script relative to it. |
74,266,511 | I am making a blackjack simulator with python and am having problems with when the player wants another card. To begin with, the player gets a random sample of two numbers from a list and then gets the option to take another card or not. When the answer is yes, another card is added to the random sample, but it gets added as a list inside of the list.
This is the line for when the answer is yes to another card.
```
if svar == "JA":
handspelare.append(random.sample(kortlek,1))
print(handspelare)
```
This returns [5, 10, [13]], and it is this list inside of the list I want to get rid of so I can sum the numbers. Any suggestions on how I can get rid of this? | 2022/10/31 | [
"https://Stackoverflow.com/questions/74266511",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19321160/"
] | `random.sample(kortlek,1)`
`random.sample` returns a list, so you end up `append`ing a list to `handspelare` (which creates the sublists).
You could change `append` to `extend`, but `random.sample(..., 1)` is just `random.choice`, so it makes more sense to use `handspelare.append(random.choice(kortlek))`. | Use list concatenation rather than append.
```
handspelare += random.sample(kortlek,1)
```
`append` will not unbundle its argument
```
a = [1]
a.append([2]) # [1, [2]]
a = [1]
a += [2] # [1, 2]
``` |
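Both answers rest on the same two facts, which are easy to verify in isolation: `random.sample` always returns a list while `random.choice` returns a bare element, and `+=` (like `extend`) unpacks a list where `append` would nest it. A small sketch with made-up deck values:

```python
import random

kortlek = [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]

sampled = random.sample(kortlek, 1)   # always a one-element list, e.g. [13]
chosen = random.choice(kortlek)       # a bare value, e.g. 13

hand = [5, 10]
hand += sampled                       # unpacks the sampled list: stays flat
hand.append(chosen)                   # appending a bare value is also flat

print(hand, sum(hand))                # sum() works on the flat list
```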
74,266,511 | I am making a blackjack simulator with python and am having problems with when the player wants another card. To begin with, the player gets a random sample of two numbers from a list and then gets the option to take another card or not. When the answer is yes, another card is added to the random sample, but it gets added as a list inside of the list.
This is the line for when the answer is yes to another card.
```
if svar == "JA":
handspelare.append(random.sample(kortlek,1))
print(handspelare)
```
This returns [5, 10, [13]], and it is this list inside of the list I want to get rid of so I can sum the numbers. Any suggestions on how I can get rid of this? | 2022/10/31 | [
"https://Stackoverflow.com/questions/74266511",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19321160/"
] | `random.sample(kortlek,1)`
`random.sample` returns a list, so you end up `append`ing a list to `handspelare` (which creates the sublists).
You could change `append` to `extend`, but `random.sample(..., 1)` is just `random.choice`, so it makes more sense to use `handspelare.append(random.choice(kortlek))`. | you should create a deck then shuffle it
```
deck = [52 cards]
random.shuffle(deck)
```
then just draw off it like a normal deck in the real world
```
hand.append(deck.pop())
if len(deck) < 15: # or something
deck = [52 cards]
random.shuffle(deck)
``` |
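A runnable version of that sketch; the rank values 1-13 and the absence of suits are my own simplification to match the numbers in the question:

```python
import random

def new_deck():
    # Four copies of each rank 1-13 (ace kept as 1 for simplicity).
    return [rank for rank in range(1, 14) for _ in range(4)]

deck = new_deck()
random.shuffle(deck)              # shuffle in place with random.shuffle

hand = [deck.pop(), deck.pop()]   # deal the initial two cards
hand.append(deck.pop())           # the player said "JA": draw one more

print(hand, sum(hand))
if len(deck) < 15:                # start a fresh shuffled deck when running low
    deck = new_deck()
    random.shuffle(deck)
```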
74,266,511 | I am making a blackjack simulator with python and am having problems with when the player wants another card. To begin with, the player gets a random sample of two numbers from a list and then gets the option to take another card or not. When the answer is yes, another card is added to the random sample, but it gets added as a list inside of the list.
This is the line for when the answer is yes to another card.
```
if svar == "JA":
handspelare.append(random.sample(kortlek,1))
print(handspelare)
```
This returns [5, 10, [13]], and it is this list inside of the list I want to get rid of so I can sum the numbers. Any suggestions on how I can get rid of this? | 2022/10/31 | [
"https://Stackoverflow.com/questions/74266511",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19321160/"
] | you should create a deck then shuffle it
```
deck = [52 cards]
random.shuffle(deck)
```
then just draw off it like a normal deck in the real world
```
hand.append(deck.pop())
if len(deck) < 15: # or something
deck = [52 cards]
random.shuffle(deck)
``` | Use list concatenation rather than append.
```
handspelare += random.sample(kortlek,1)
```
`append` will not unbundle its argument
```
a = [1]
a.append([2]) # [1, [2]]
a = [1]
a += [2] # [1, 2]
``` |
19,616,168 | I am new to Django and I am trying to follow the official tutorial. Since I want to connect to mysql (installed on my computer, and I checked that the mysql module does exist in the python command line), I set the ENGINE in settings.py to be django.db.backends.mysql, and then I tried to run
```
python manage.py syncdb
```
then I got an error message like this:
```
Error loading MySQLdb module
```
and I cannot run
```
pip install mysql-python
```
the error message is:
```
Unable to find vcvarsall.bat
```
So what is this error? And honestly, I am not sure about the difference between mysql-python and mysql-connector-python, since I tried "pip install mysql-connector-python" and it tells me that the requirement is already satisfied... | 2013/10/27 | [
"https://Stackoverflow.com/questions/19616168",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2379736/"
] | You need to download the [windows binary installer](http://www.lfd.uci.edu/~gohlke/pythonlibs/#mysql-python) for the MySQL drivers for Python. Installing from source will not work since you do not have the development headers in Windows. | You need to install the mysql python connector
sudo apt-get install python-mysqldb |
13,877,907 | ```
# python
Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import os,sys
>>> import setup
..........
..........
..........
>>> reload(setup)
<module 'setup' from 'setup.pyc'>
>>>
```
But after executing reload, it's not picking up the updated 'setup' module.
For example:
I make a change to the 'setup' file in another session and reload it in interpreter mode, but I am unable to use the updated 'setup'.
Could anyone help me overcome this issue, or point out where I am going wrong?
Thanks in advance
Abhishek | 2012/12/14 | [
"https://Stackoverflow.com/questions/13877907",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1468198/"
] | `reload` reloads a module, but doesn't recompile it.
```
>>> reload(setup)
<module 'setup' from 'setup.pyc'>
```
It is reloading from the compiled `setup.pyc`, not `setup.py`. The easiest way to get around this is simply to delete `setup.pyc` after making changes. Then when it reloads `setup.py` it will first recompile it. | Try assigning the value returned by `reload` to the same variable:
```
setup = reload(setup)
``` |
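Both answers target Python 2, where `reload` is a builtin. On Python 3 it lives in `importlib`, but the idea is the same; here is a self-contained sketch that writes a throwaway module, edits it, and reloads it (bytecode writing is disabled so no stale `.pyc` can interfere, which is exactly the failure mode described above):

```python
import importlib
import os
import sys
import tempfile

sys.dont_write_bytecode = True      # no .pyc is written, so none can go stale
tmpdir = tempfile.mkdtemp()
sys.path.insert(0, tmpdir)
mod_path = os.path.join(tmpdir, "setup_demo.py")

with open(mod_path, "w") as f:
    f.write("VALUE = 1\n")
import setup_demo
print(setup_demo.VALUE)             # 1

with open(mod_path, "w") as f:      # simulate editing the module source
    f.write("VALUE = 2\n")
importlib.reload(setup_demo)        # Python 2's builtin reload() is analogous
print(setup_demo.VALUE)             # 2
```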
49,844,925 | I have the following python code to write processed words into an excel file. There are about 7729 words.
```
from openpyxl import *
book = Workbook()
sheet = book.active
sheet.title = "test"
for x in range(7729):
    sheet.cell(row=1, column=x+1).value = x
book.save('test.xlsx')
```
This is what the code I used looks like, but when I run it, it gives me an error that says
```
openpyxl.utils.exceptions.IllegalCharacterError
```
This is my first time using this module, I would appreciate any kind of help. | 2018/04/15 | [
"https://Stackoverflow.com/questions/49844925",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8616724/"
] | **Try this :**
This code works for me .
```
from openpyxl import *
book=Workbook ()
sheet=book.active
sheet.title="test"
x = 0
with open("temp.txt") as myfile :
text = myfile.readline()
while text !="":
sheet.cell (row=1,column=x+1).value=str(text).encode("ascii",errors="ignore")
x+=1
text = myfile.readline()
book.save ('test.xlsx')
``` | You forgot to assign the value for the cell: `sheet.cell (row=1,column=x+1).value =`
Try like this
```
from openpyxl import *
book = Workbook ()
sheet = book.active
sheet.title = "test"
for x in range (7):
sheet.cell (row=1,column=x+1).value = "Hello"
book.save ('test.xlsx')
``` |
49,844,925 | I have the following python code to write processed words into an excel file. There are about 7729 words.
```
from openpyxl import *
book = Workbook()
sheet = book.active
sheet.title = "test"
for x in range(7729):
    sheet.cell(row=1, column=x+1).value = x
book.save('test.xlsx')
```
This is what the code I used looks like, but when I run it, it gives me an error that says
```
openpyxl.utils.exceptions.IllegalCharacterError
```
This is my first time using this module, I would appreciate any kind of help. | 2018/04/15 | [
"https://Stackoverflow.com/questions/49844925",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8616724/"
] | I faced a similar issue and found out that it is because of the \x1a character, which is hex 0x1A (ASCII 26, SUB). Openpyxl does not allow writing such characters (ascii code < 32). I tried the [xlsxwriter](https://xlsxwriter.readthedocs.io/) library and it wrote this character to the xlsx file without any issue. | You forgot to assign the value for the cell: `sheet.cell (row=1,column=x+1).value =`
Try like this
```
from openpyxl import *
book = Workbook ()
sheet = book.active
sheet.title = "test"
for x in range (7):
sheet.cell (row=1,column=x+1).value = "Hello"
book.save ('test.xlsx')
``` |
49,844,925 | I have the following python code to write processed words into an excel file. There are about 7729 words.
```
from openpyxl import *
book = Workbook()
sheet = book.active
sheet.title = "test"
for x in range(7729):
    sheet.cell(row=1, column=x+1).value = x
book.save('test.xlsx')
```
This is what the code I used looks like, but when I run it, it gives me an error that says
```
openpyxl.utils.exceptions.IllegalCharacterError
```
This is my first time using this module, I would appreciate any kind of help. | 2018/04/15 | [
"https://Stackoverflow.com/questions/49844925",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8616724/"
] | `openpyxl` comes with an illegal characters regular expression, ready for you to use. Presuming you're happy to simply remove these characters, you can do:
```
import re
from openpyxl.cell.cell import ILLEGAL_CHARACTERS_RE
from openpyxl import *
book=Workbook ()
sheet=book.active
sheet.title="test"
for x in range (7729):
sheet.cell (row=1,column=x+1).value = ILLEGAL_CHARACTERS_RE.sub(r'',x)
book.save ('test.xlsx')
```
To speed it up, you could put the original cell value assignment inside a try/except and only run the re substitution when an `openpyxl.utils.exceptions.IllegalCharacterError` is caught.
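If importing the regex from `openpyxl` isn't an option, a stdlib approximation also works: the exception is triggered by control characters that XML 1.0 (the format inside `.xlsx`) forbids. The character class below is my own approximation, not openpyxl's exact pattern:

```python
import re

# Control characters disallowed in XML 1.0 (tab, LF and CR are allowed).
ILLEGAL_XLSX_CHARS = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f]")

dirty = "total\x1a value\x07"
clean = ILLEGAL_XLSX_CHARS.sub("", dirty)
print(repr(clean))  # 'total value'
```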
Source: <https://www.programmersought.com/article/43315246046/> | You forgot to assign the value for the cell: `sheet.cell (row=1,column=x+1).value =`
Try like this
```
from openpyxl import *
book = Workbook ()
sheet = book.active
sheet.title = "test"
for x in range (7):
sheet.cell (row=1,column=x+1).value = "Hello"
book.save ('test.xlsx')
``` |
49,844,925 | I have the following python code to write processed words into an excel file. There are about 7729 words.
```
from openpyxl import *
book = Workbook()
sheet = book.active
sheet.title = "test"
for x in range(7729):
    sheet.cell(row=1, column=x+1).value = x
book.save('test.xlsx')
```
This is what the code I used looks like, but when I run it, it gives me an error that says
```
openpyxl.utils.exceptions.IllegalCharacterError
```
This is my first time using this module, I would appreciate any kind of help. | 2018/04/15 | [
"https://Stackoverflow.com/questions/49844925",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8616724/"
] | I faced a similar issue and found out that it is because of the \x1a character, which is hex 0x1A (ASCII 26, SUB). Openpyxl does not allow writing such characters (ascii code < 32). I tried the [xlsxwriter](https://xlsxwriter.readthedocs.io/) library and it wrote this character to the xlsx file without any issue. | **Try this :**
This code works for me .
```
from openpyxl import *
book=Workbook ()
sheet=book.active
sheet.title="test"
x = 0
with open("temp.txt") as myfile :
text = myfile.readline()
while text !="":
sheet.cell (row=1,column=x+1).value=str(text).encode("ascii",errors="ignore")
x+=1
text = myfile.readline()
book.save ('test.xlsx')
``` |
49,844,925 | I have the following python code to write processed words into an excel file. There are about 7729 words.
```
from openpyxl import *
book = Workbook()
sheet = book.active
sheet.title = "test"
for x in range(7729):
    sheet.cell(row=1, column=x+1).value = x
book.save('test.xlsx')
```
This is what the code I used looks like, but when I run it, it gives me an error that says
```
openpyxl.utils.exceptions.IllegalCharacterError
```
This is my first time using this module, I would appreciate any kind of help. | 2018/04/15 | [
"https://Stackoverflow.com/questions/49844925",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8616724/"
] | `openpyxl` comes with an illegal characters regular expression, ready for you to use. Presuming you're happy to simply remove these characters, you can do:
```
import re
from openpyxl.cell.cell import ILLEGAL_CHARACTERS_RE
from openpyxl import *
book=Workbook ()
sheet=book.active
sheet.title="test"
for x in range (7729):
    sheet.cell(row=1, column=x+1).value = ILLEGAL_CHARACTERS_RE.sub(r'', str(x))
book.save ('test.xlsx')
```
To speed it up, you could put the original cell value assignment inside a try/except and only run the re substitution when an `openpyxl.utils.exceptions.IllegalCharacterError` is caught.
Source: <https://www.programmersought.com/article/43315246046/> | **Try this :**
This code works for me.
```
from openpyxl import *
book=Workbook ()
sheet=book.active
sheet.title="test"
x = 0
with open("temp.txt") as myfile :
text = myfile.readline()
while text !="":
sheet.cell (row=1,column=x+1).value=str(text).encode("ascii",errors="ignore")
x+=1
text = myfile.readline()
book.save ('test.xlsx')
``` |
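The try/except speed-up suggested in the neighbouring answer (attempt the plain assignment first, sanitize only when the write is rejected) can be shown without openpyxl at all. The "sheet" and writer below are toy stand-ins, not the real openpyxl API, so that only the control-flow pattern is demonstrated:

```python
import re

ILLEGAL_RE = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f]")

class IllegalCharacterError(ValueError):
    """Stand-in for openpyxl.utils.exceptions.IllegalCharacterError."""

def write_cell(sheet, value):
    # Stand-in for `cell.value = value`: reject raw control characters.
    if isinstance(value, str) and ILLEGAL_RE.search(value):
        raise IllegalCharacterError(value)
    sheet.append(value)

def write_cell_safe(sheet, value):
    # Fast path: most values are clean, so try the plain write first and
    # only pay for the regex substitution when the write is rejected.
    try:
        write_cell(sheet, value)
    except IllegalCharacterError:
        write_cell(sheet, ILLEGAL_RE.sub("", value))

sheet = []  # toy "sheet": a plain list standing in for a worksheet row
for value in ["clean", "bad\x02value", "also clean"]:
    write_cell_safe(sheet, value)
print(sheet)  # -> ['clean', 'badvalue', 'also clean']
```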
49,844,925 | I have the following python code to write processed words into an excel file. There are about 7729 words.
```
from openpyxl import *
book=Workbook ()
sheet=book.active
sheet.title="test"
for x in range (7729):
sheet.cell (row=1,column=x+1).value=x
book.save ('test.xlsx')
```
This is what the code I used looks like, but when I run it, it gives me an error that says
```
openpyxl.utils.exceptions.IllegalCharacterError
```
This is my first time using this module, I would appreciate any kind of help. | 2018/04/15 | [
"https://Stackoverflow.com/questions/49844925",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8616724/"
] | `openpyxl` comes with an illegal characters regular expression, ready for you to use. Presuming you're happy to simply remove these characters, you can do:
```
import re
from openpyxl.cell.cell import ILLEGAL_CHARACTERS_RE
from openpyxl import *
book=Workbook ()
sheet=book.active
sheet.title="test"
for x in range (7729):
    sheet.cell(row=1, column=x+1).value = ILLEGAL_CHARACTERS_RE.sub(r'', str(x))
book.save ('test.xlsx')
```
To speed it up, you could put the original cell value assignment inside a try/except and only run the re substitution when an `openpyxl.utils.exceptions.IllegalCharacterError` is caught.
Source: <https://www.programmersought.com/article/43315246046/> | I faced a similar issue and found out that it was caused by the \x1a character, the hex form of ASCII 26 (SUB). Openpyxl does not allow writing such characters (ASCII code < 32). I tried the [xlsxwriter](https://xlsxwriter.readthedocs.io/) library and it wrote this character to the xlsx file without any issue. |
58,007,418 | I've got a CASIO fx-CG50 with python running extended version of micropython 1.9.4
I decided to make a game, but I really need a sleep function; I cannot use any imports as everything is pretty barebones. Any help would be greatly appreciated.
I've tried downloading utilities but they're just extra applications, nothing seems to really exist for the casio.
Cheers! | 2019/09/19 | [
"https://Stackoverflow.com/questions/58007418",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8402836/"
] | If you cannot import time (or utime) in your code, you could always implement a simple function that loops for a certain number of steps:
```
def wait(step):
for i in range(step):
pass
wait(999999)
```
In that case, the actual time spent in the function will depend on the computational power of your device. | I am trying to do the exact same thing, and I tried to benchmark a wait function by animating a square across the screen. Here is what I have come up with:
```
from casioplot import *
def wait(milli):
time = milli*50
for i in range(time):
pass
def drawSquare(x,y,l):
for i in range(l):
for j in range(l):
set_pixel(x+j,y+i,(0,0,0))
draw_string(0, 0, "start.", (0, 0, 0,), "small")
show_screen()
waitMillis = 1000
screenWidth = 384
screenHeight = 192
xPos = 0
yPos = 0
squareSide = 24
while xPos + squareSide < screenWidth:
clear_screen()
drawSquare(xPos,yPos,squareSide)
show_screen()
wait(waitMillis)
xPos = xPos + 5
draw_string(int(screenWidth/2), 0, "done", (0, 0, 0,), "small")
show_screen()
```
Using a manual stopwatch app, I noticed that the wait function takes about 1 second for every 50k iterations, so I scaled the value by that ratio to get a convenient millisecond parameter. Unfortunately, performance degrades drastically, probably due to the drawing utilities and the constant clearing and setting of pixels. The degradation seems exponential and is hard to capture and control: the square moves well up to half of the screen, after which it moves very sluggishly. I am not sure this problem can be fixed easily... |
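The stopwatch calibration described above can be automated on a desktop machine with `time.monotonic()` (not available inside the calculator's stripped-down runtime, where manual stopwatch calibration is the only option). A hedged sketch of deriving an iterations-per-millisecond constant for a busy loop:

```python
import time

def busy_wait(iterations):
    # Pure busy loop - the only kind of "sleep" available when no timing
    # module can be imported, as on the calculator.
    for _ in range(iterations):
        pass

def calibrate():
    # On a PC, time the loop against the monotonic clock; on the CASIO you
    # would time it with a stopwatch instead, as the answer above does.
    steps = 100_000
    start = time.monotonic()
    busy_wait(steps)
    elapsed_ms = (time.monotonic() - start) * 1000 or 0.001  # avoid div by 0
    return max(1, int(steps / elapsed_ms))  # iterations per millisecond

iters_per_ms = calibrate()
busy_wait(iters_per_ms * 50)  # roughly a 50 ms pause on this machine
print(iters_per_ms > 0)
```

The constant is only as stable as the device's workload: anything else running (like the drawing calls above) will stretch the real duration of each iteration.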
65,559,632 | Seems to be impossible currently with Anaconda as well as with Xcode 12. Via idle, it runs via Rosetta. There seems to be no discussion of this so either I'm quite naive or maybe this will be useful to others as well.
Python says: "As of 3.9.1, Python now fully supports building and running on macOS 11.0 (Big Sur) and on Apple Silicon Macs (based on the ARM64 architecture). A new universal build variant, universal2, is now available to natively support both ARM64 and Intel 64 in one set of executables" <https://docs.python.org/3/whatsnew/3.9.html>
Please help a newbie figure out how to take advantage of his recent impulse-buy. | 2021/01/04 | [
"https://Stackoverflow.com/questions/65559632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14936216/"
] | You can now install Python 3.9.1 through multiple pathways, but the most comprehensive build environment for the full data-science suite for Python at the moment (Feb 2021) on the M1 ARM architecture is via miniforge.
e.g.
```
brew install --cask miniforge
conda init zsh
conda activate
conda install numpy scipy scikit-learn
``` | I am using Python 3.9.4. I installed it using Homebrew only.
```
brew install python@3.9
``` |
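To verify whether an installed interpreter really runs natively rather than under Rosetta, the standard library is enough. On Apple Silicon a native build reports `arm64` and a translated one reports `x86_64`; other machines report other values, so the script below simply prints whatever it finds:

```python
import platform
import sys

# "arm64" -> native Apple Silicon build; "x86_64" -> Intel or Rosetta 2.
print(platform.machine())
# The full platform string and version line give extra build context.
print(platform.platform())
print(sys.version.splitlines()[0])
```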
65,559,632 | Seems to be impossible currently with Anaconda as well as with Xcode 12. Via idle, it runs via Rosetta. There seems to be no discussion of this so either I'm quite naive or maybe this will be useful to others as well.
Python says: "As of 3.9.1, Python now fully supports building and running on macOS 11.0 (Big Sur) and on Apple Silicon Macs (based on the ARM64 architecture). A new universal build variant, universal2, is now available to natively support both ARM64 and Intel 64 in one set of executables" <https://docs.python.org/3/whatsnew/3.9.html>
Please help a newbie figure out how to take advantage of his recent impulse-buy. | 2021/01/04 | [
"https://Stackoverflow.com/questions/65559632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14936216/"
] | You can now install Python 3.9.1 through multiple pathways, but the most comprehensive build environment for the full data-science suite for Python at the moment (Feb 2021) on the M1 ARM architecture is via miniforge.
e.g.
```
brew install --cask miniforge
conda init zsh
conda activate
conda install numpy scipy scikit-learn
``` | I upgraded to 3.9.4
1. Download the Python universal installer -
<https://www.python.org/downloads/mac-osx/>
Note: I still could not get sudo pip install mysqlclient to install.
I had to:
1. update homebrew - See <https://brew.sh>
2. Add /opt/homebrew/bin to PATH in .bash\_profile (don't forget to source .bash\_profile)
or
Add /opt/homebrew/bin to PATH in .zprofile (don't forget to source .zprofile) if using zsh |
65,559,632 | Seems to be impossible currently with Anaconda as well as with Xcode 12. Via idle, it runs via Rosetta. There seems to be no discussion of this so either I'm quite naive or maybe this will be useful to others as well.
Python says: "As of 3.9.1, Python now fully supports building and running on macOS 11.0 (Big Sur) and on Apple Silicon Macs (based on the ARM64 architecture). A new universal build variant, universal2, is now available to natively support both ARM64 and Intel 64 in one set of executables" <https://docs.python.org/3/whatsnew/3.9.html>
Please help a newbie figure out how to take advantage of his recent impulse-buy. | 2021/01/04 | [
"https://Stackoverflow.com/questions/65559632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14936216/"
] | You can now install Python 3.9.1 through multiple pathways, but the most comprehensive build environment for the full data-science suite for Python at the moment (Feb 2021) on the M1 ARM architecture is via miniforge.
e.g.
```
brew install --cask miniforge
conda init zsh
conda activate
conda install numpy scipy scikit-learn
``` | You can now install Python 3.9.4 natively on Mac M1 (Apple Silicon). I'm using pyenv to install Python 3.7, 3.8 and 3.9 all native ARM. For example, to install 3.9.4:
```
$ pyenv install 3.9.4
python-build: use openssl@1.1 from homebrew
python-build: use readline from homebrew
Downloading Python-3.9.4.tar.xz...
-> https://www.python.org/ftp/python/3.9.4/Python-3.9.4.tar.xz
Installing Python-3.9.4...
python-build: use readline from homebrew
python-build: use zlib from xcode sdk
Installed Python-3.9.4 to /Users/squademy/.pyenv/versions/3.9.4
```
For a complete guide on installing pyenv and multiple Python versions, you might read this article: <https://squademy.medium.com/install-python-on-macos-11-m1-apple-silicon-using-pyenv-12e0729427a9>. |
65,559,632 | Seems to be impossible currently with Anaconda as well as with Xcode 12. Via idle, it runs via Rosetta. There seems to be no discussion of this so either I'm quite naive or maybe this will be useful to others as well.
Python says: "As of 3.9.1, Python now fully supports building and running on macOS 11.0 (Big Sur) and on Apple Silicon Macs (based on the ARM64 architecture). A new universal build variant, universal2, is now available to natively support both ARM64 and Intel 64 in one set of executables" <https://docs.python.org/3/whatsnew/3.9.html>
Please help a newbie figure out how to take advantage of his recent impulse-buy. | 2021/01/04 | [
"https://Stackoverflow.com/questions/65559632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14936216/"
] | I am using Python 3.9.4. I installed it using Homebrew only.
```
brew install python@3.9
``` | I upgraded to 3.9.4
1. Download the Python universal installer -
<https://www.python.org/downloads/mac-osx/>
Note: I still could not get sudo pip install mysqlclient to install.
I had to:
1. update homebrew - See <https://brew.sh>
2. Add /opt/homebrew/bin to PATH in .bash\_profile (don't forget to source .bash\_profile)
or
Add /opt/homebrew/bin to PATH in .zprofile (don't forget to source .zprofile) if using zsh |
65,559,632 | Seems to be impossible currently with Anaconda as well as with Xcode 12. Via idle, it runs via Rosetta. There seems to be no discussion of this so either I'm quite naive or maybe this will be useful to others as well.
Python says: "As of 3.9.1, Python now fully supports building and running on macOS 11.0 (Big Sur) and on Apple Silicon Macs (based on the ARM64 architecture). A new universal build variant, universal2, is now available to natively support both ARM64 and Intel 64 in one set of executables" <https://docs.python.org/3/whatsnew/3.9.html>
Please help a newbie figure out how to take advantage of his recent impulse-buy. | 2021/01/04 | [
"https://Stackoverflow.com/questions/65559632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14936216/"
] | You can now install Python 3.9.4 natively on Mac M1 (Apple Silicon). I'm using pyenv to install Python 3.7, 3.8 and 3.9 all native ARM. For example, to install 3.9.4:
```
$ pyenv install 3.9.4
python-build: use openssl@1.1 from homebrew
python-build: use readline from homebrew
Downloading Python-3.9.4.tar.xz...
-> https://www.python.org/ftp/python/3.9.4/Python-3.9.4.tar.xz
Installing Python-3.9.4...
python-build: use readline from homebrew
python-build: use zlib from xcode sdk
Installed Python-3.9.4 to /Users/squademy/.pyenv/versions/3.9.4
```
For a complete guide on installing pyenv and multiple Python versions, you might read this article: <https://squademy.medium.com/install-python-on-macos-11-m1-apple-silicon-using-pyenv-12e0729427a9>. |
1. Download the Python universal installer -
<https://www.python.org/downloads/mac-osx/>
Note: I still could not get sudo pip install mysqlclient to install.
I had to:
1. update homebrew - See <https://brew.sh>
2. Add /opt/homebrew/bin to PATH in .bash\_profile (don't forget to source .bash\_profile)
or
Add /opt/homebrew/bin to PATH in .zprofile (don't forget to source .zprofile) if using zsh |
23,936,239 | Strings are iterable.
Lists are iterable.
And with a List of Strings, both the List and the Strings can be iterated through with a nested loop.
For Example:
```
input = [ 'abcdefg', 'hijklmn', 'opqrstu']
for item in input:
for letter in item:
print letter
```
Out:
```
a
b
c
d
e
f
g
h
i
j
k
l
m
n
o
p
q
r
s
t
u
```
It seems we can then iterate within the iteration over the string, i.e. iterate over each letter of 'abcdefg'. Is there a more pythonic way to iterate as above, possibly in one statement?
I've re-read through Learning Python Chapter 4, Chapter 1 of the Python Cookbook, and looked here [in python: iterate over each string in a list](https://stackoverflow.com/questions/20968823/in-python-iterate-over-each-string-in-a-list), and here [Displaying and iterating a list within a string?](https://stackoverflow.com/questions/23146299/displaying-and-iterating-a-list-within-a-string). | 2014/05/29 | [
"https://Stackoverflow.com/questions/23936239",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1851961/"
] | You can use [`itertools.chain.from_iterable()`](https://docs.python.org/3/library/itertools.html#itertools.chain.from_iterable):
```
>>> from itertools import chain
>>> input = ['abcdefg', 'hijklmn', 'opqrstu']
>>>
>>> for letter in chain.from_iterable(input):
... print letter
...
a
b
c
d
e
f
g
h
i
j
k
l
m
n
o
p
q
r
s
t
u
``` | Use `itertools.chain`:
```
for letter in itertools.chain(*input):
print letter
``` |
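One difference between `chain(*input)` and `chain.from_iterable(input)` is worth flagging: the `*` unpacking consumes the outer iterable eagerly at call time, so it cannot be applied to an infinite or very large outer generator. A small illustration (the `many_words` generator is invented for the demo):

```python
from itertools import chain

data = ["abc", "def"]

# For a small, finite list the two spellings are interchangeable.
assert list(chain(*data)) == list(chain.from_iterable(data)) == list("abcdef")

def many_words():
    # An endless supply of strings; chain(*many_words()) would never return.
    while True:
        yield "ab"

lazy = chain.from_iterable(many_words())  # pulls inner strings on demand
print(next(lazy), next(lazy), next(lazy))  # -> a b a
```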
23,936,239 | Strings are iterable.
Lists are iterable.
And with a List of Strings, both the List and the Strings can be iterated through with a nested loop.
For Example:
```
input = [ 'abcdefg', 'hijklmn', 'opqrstu']
for item in input:
for letter in item:
print letter
```
Out:
```
a
b
c
d
e
f
g
h
i
j
k
l
m
n
o
p
q
r
s
t
u
```
It seems we can then iterate within the iteration over the string, i.e. iterate over each letter of 'abcdefg'. Is there a more pythonic way to iterate as above, possibly in one statement?
I've re-read through Learning Python Chapter 4, Chapter 1 of the Python Cookbook, and looked here [in python: iterate over each string in a list](https://stackoverflow.com/questions/20968823/in-python-iterate-over-each-string-in-a-list), and here [Displaying and iterating a list within a string?](https://stackoverflow.com/questions/23146299/displaying-and-iterating-a-list-within-a-string). | 2014/05/29 | [
"https://Stackoverflow.com/questions/23936239",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1851961/"
] | You can use [`itertools.chain.from_iterable()`](https://docs.python.org/3/library/itertools.html#itertools.chain.from_iterable):
```
>>> from itertools import chain
>>> input = ['abcdefg', 'hijklmn', 'opqrstu']
>>>
>>> for letter in chain.from_iterable(input):
... print letter
...
a
b
c
d
e
f
g
h
i
j
k
l
m
n
o
p
q
r
s
t
u
``` | What you've written is already the most pythonic way to do it; there are already two levels of nesting (letters within strings within a list) so it's correct to have two nested `for` loops.
If you really want to use a single `for` statement, you can collapse the loops with a generator comprehension:
```
for letter in (letter for item in input for letter in item):
print letter
``` |
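For this input, all of the spellings shown in the answers (nested loops / generator comprehension, `chain.from_iterable`, and `''.join`) visit the same letters in the same order, which is easy to confirm:

```python
from itertools import chain

data = ["abcdefg", "hijklmn", "opqrstu"]

nested = [letter for item in data for letter in item]
chained = list(chain.from_iterable(data))
joined = list("".join(data))

# All three spellings walk the same 21 letters in the same order.
assert nested == chained == joined
print("".join(nested))  # -> abcdefghijklmnopqrstu
```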
23,936,239 | Strings are iterable.
Lists are iterable.
And with a List of Strings, both the List and the Strings can be iterated through with a nested loop.
For Example:
```
input = [ 'abcdefg', 'hijklmn', 'opqrstu']
for item in input:
for letter in item:
print letter
```
Out:
```
a
b
c
d
e
f
g
h
i
j
k
l
m
n
o
p
q
r
s
t
u
```
It seems we can then iterate within the iteration over the string, i.e. iterate over each letter of 'abcdefg'. Is there a more pythonic way to iterate as above, possibly in one statement?
I've re-read through Learning Python Chapter 4, Chapter 1 of the Python Cookbook, and looked here [in python: iterate over each string in a list](https://stackoverflow.com/questions/20968823/in-python-iterate-over-each-string-in-a-list), and here [Displaying and iterating a list within a string?](https://stackoverflow.com/questions/23146299/displaying-and-iterating-a-list-within-a-string). | 2014/05/29 | [
"https://Stackoverflow.com/questions/23936239",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1851961/"
] | You can use [`itertools.chain.from_iterable()`](https://docs.python.org/3/library/itertools.html#itertools.chain.from_iterable):
```
>>> from itertools import chain
>>> input = ['abcdefg', 'hijklmn', 'opqrstu']
>>>
>>> for letter in chain.from_iterable(input):
... print letter
...
a
b
c
d
e
f
g
h
i
j
k
l
m
n
o
p
q
r
s
t
u
``` | That is good enough,
everything else is just an academic puzzle
```
for item in ''.join(input):
print item
``` |
23,936,239 | Strings are iterable.
Lists are iterable.
And with a List of Strings, both the List and the Strings can be iterated through with a nested loop.
For Example:
```
input = [ 'abcdefg', 'hijklmn', 'opqrstu']
for item in input:
for letter in item:
print letter
```
Out:
```
a
b
c
d
e
f
g
h
i
j
k
l
m
n
o
p
q
r
s
t
u
```
It seems we can then iterate within the iteration over the string, i.e. iterate over each letter of 'abcdefg'. Is there a more pythonic way to iterate as above, possibly in one statement?
I've re-read through Learning Python Chapter 4, Chapter 1 of the Python Cookbook, and looked here [in python: iterate over each string in a list](https://stackoverflow.com/questions/20968823/in-python-iterate-over-each-string-in-a-list), and here [Displaying and iterating a list within a string?](https://stackoverflow.com/questions/23146299/displaying-and-iterating-a-list-within-a-string). | 2014/05/29 | [
"https://Stackoverflow.com/questions/23936239",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1851961/"
] | Use `itertools.chain`:
```
for letter in itertools.chain(*input):
print letter
``` | That is good enough,
everything else is just an academic puzzle
```
for item in ''.join(input):
print item
``` |
23,936,239 | Strings are iterable.
Lists are iterable.
And with a List of Strings, both the List and the Strings can be iterated through with a nested loop.
For Example:
```
input = [ 'abcdefg', 'hijklmn', 'opqrstu']
for item in input:
for letter in item:
print letter
```
Out:
```
a
b
c
d
e
f
g
h
i
j
k
l
m
n
o
p
q
r
s
t
u
```
It seems we can then iterate within the iteration over the string, i.e. iterate over each letter of 'abcdefg'. Is there a more pythonic way to iterate as above, possibly in one statement?
I've re-read through Learning Python Chapter 4, Chapter 1 of the Python Cookbook, and looked here [in python: iterate over each string in a list](https://stackoverflow.com/questions/20968823/in-python-iterate-over-each-string-in-a-list), and here [Displaying and iterating a list within a string?](https://stackoverflow.com/questions/23146299/displaying-and-iterating-a-list-within-a-string). | 2014/05/29 | [
"https://Stackoverflow.com/questions/23936239",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1851961/"
] | What you've written is already the most pythonic way to do it; there are already two levels of nesting (letters within strings within a list) so it's correct to have two nested `for` loops.
If you really want to use a single `for` statement, you can collapse the loops with a generator comprehension:
```
for letter in (letter for item in input for letter in item):
print letter
``` | That is good enough,
everything else is just an academic puzzle
```
for item in ''.join(input):
print item
``` |
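One caveat with the `''.join` trick above: it only works when every element really is a string, whereas the `itertools` approaches flatten any iterable of iterables:

```python
from itertools import chain

words = ["abc", "def"]
numbers = [[1, 2], [3, 4]]

# join-based flattening works only for strings...
assert list("".join(words)) == ["a", "b", "c", "d", "e", "f"]

# ...while chain.from_iterable is type-agnostic.
assert list(chain.from_iterable(numbers)) == [1, 2, 3, 4]

try:
    "".join(numbers)  # raises: join expects str items, not lists
except TypeError as exc:
    print("join rejects non-strings:", exc)
```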
14,610,459 | Go's time package claims giving a nanosecond precision.
<http://golang.org/src/pkg/time/time.go>
I'm wondering how it is implemented and if I can trust it. My doubts come from Python, which clearly documents its difficulties and limitations with time:
>
> *From <http://docs.python.org/2/library/time.html>*
>
>
> The precision of the various real-time functions may be less than
> suggested by the units in which their value or argument is expressed.
> E.g. on most Unix systems, the clock “ticks” only 50 or 100 times a
> second.
>
>
> On the other hand, the precision of time() and sleep() is better than
> their Unix equivalents: times are expressed as floating point numbers,
> time() returns the most accurate time available (using Unix
> gettimeofday() where available), and sleep() will accept a time with a
> nonzero fraction (Unix select() is used to implement this, where
> available).
>
>
>
Since the operating systems are giving such a hard time to python, how does Go achieve its nanosecond precision? | 2013/01/30 | [
"https://Stackoverflow.com/questions/14610459",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/520159/"
] | Well as for the implementation, `time.Now()` falls back to a function implemented in the runtime.
You can review the [C time implementation](http://code.google.com/p/go/source/browse/src/pkg/runtime/time.goc) and the implementation for [`time·now` in assembly](http://code.google.com/p/go/source/browse/src/pkg/runtime/sys_linux_amd64.s?name=go1.1.2#103) (linux amd64 in this case). This then uses [`clock_gettime`](http://linux.die.net/man/3/clock_gettime), which provides nanosecond resolution. On Windows, this is realized by calling `GetSystemTimeAsFileTime`, which [also generates nanoseconds](https://stackoverflow.com/a/11743614/1643939) (not as high-resolution, but still nanoseconds).
So yes, the resolution depends on the operating system and you can't expect it to be accurate on every OS, but the developers are trying to make it as good as it can be. For example, in go1.0.3, `time·now` for FreeBSD [used `gettimeofday`](http://code.google.com/p/go/source/browse/src/pkg/runtime/sys_freebsd_386.s?name=go1.0.3#110) instead of `clock_gettime`, which only offers microsecond precision. You can see this by looking at the value stored in `AX`, as it is the [syscall id](http://www.acsu.buffalo.edu/~charngda/freebsd_syscalls.html). If you take a look at the referenced assembly, you can see that the microsecond value is multiplied by 1000 to get the nanoseconds. However, this is fixed [now](http://code.google.com/p/go/source/browse/src/pkg/runtime/sys_freebsd_386.s?name=go1.1#134).
If you want to be sure, check the corresponding implementations in the runtime source code and consult the manuals of your operating system. | One of the problems with Python's [time.time](http://docs.python.org/2/library/time.html#time.time) function is that it returns a [float](http://docs.python.org/2/library/functions.html#float). A float is an [IEEE 754 double-precision number](http://en.wikipedia.org/wiki/Double-precision_floating-point_format) which has 53 bits of precision.
Since it is now more than 2\*\*30 seconds since 1970-01-01 (the epoch) you need 61 (31 + 30) bits of precision to store time accurate to the nanosecond since 1970-01-01.
Unfortunately that is 7 or 8 bits short of what you can store in a python float, meaning that python floats will always be less precise than go time.
To quantify that, the demonstration below shows that Python time is at best accurate to about 100 ns, purely due to the limitations of the `float` type.
```
>>> t = time()
>>> t
1359587524.591781
>>> t == t + 1E-6
False
>>> t == t + 1E-7
True
```
So Go, starting with an `int64` and counting in ns, doesn't have these limitations and is limited only by the precision of the underlying OS, as explained very well by nemo. |
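Since these answers were written, CPython (3.7+) grew `time.time_ns()`, which returns an integer nanosecond count much like Go's internal representation and so sidesteps the float-mantissa limit described above. A quick check of the bit arithmetic in the answer:

```python
import math
import time

# Bits needed to hold nanoseconds-since-1970 today: comfortably more than
# the 53-bit mantissa of an IEEE 754 double, hence the ~100 ns float limit.
needed = math.log2(time.time() * 1e9)
print(round(needed))  # around 61 as of the 2020s

# An integer nanosecond count has no such limit.
t = time.time_ns()
print(type(t).__name__)  # -> int
```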