qid | question | date | metadata | response_j | response_k
---|---|---|---|---|---
31,471,751 | I would like to install netcdf4-python on my Ubuntu 14.04 system. The libhdf5-dev\_1.8.11\_5ubuntu7\_amd64.deb and libnetcdf-4.1.3-7ubuntu2\_amd64.deb packages are installed. I downloaded netcdf4-1.1.8.tar.gz from <https://pypi.python.org/pypi/netCDF4#downloads>
I tried to configure it with
```
./configure --enable-netcdf-4 --with-hdf5=/usr/include/ --enable-share --prefix=/usr
```
but I got the following message:
```
bash: ./configure: No such file or directory
```
I do not know how to install netcdf4-python.
I would appreciate it if someone could help me. | 2015/07/17 | [
"https://Stackoverflow.com/questions/31471751",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4265720/"
] | I would strongly recommend using the [Anaconda](https://store.continuum.io/cshop/anaconda/) Python distribution. The full Anaconda distribution includes netcdf4 and the required libraries. | The netCDF4 python module documentation can be found [here](http://unidata.github.io/netcdf4-python/ "here"). Check out the "Install" section; it'll have what you're looking for. But, if you satisfy all of the pre-requisites you can simply do the following:
```
python setup.py build && python setup.py install
``` |
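Whichever install route is taken, a quick way to confirm the module is importable afterwards is a small check with `importlib` (a sketch; `netCDF4` is the import name used by netcdf4-python):

```python
import importlib.util

def is_installed(module_name):
    """Return True if the named module can be found by the current interpreter."""
    return importlib.util.find_spec(module_name) is not None

# After a successful install this should print True:
print(is_installed("netCDF4"))
```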
31,471,751 | I would like to install netcdf4-python on my Ubuntu 14.04 system. The libhdf5-dev\_1.8.11\_5ubuntu7\_amd64.deb and libnetcdf-4.1.3-7ubuntu2\_amd64.deb packages are installed. I downloaded netcdf4-1.1.8.tar.gz from <https://pypi.python.org/pypi/netCDF4#downloads>
I tried to configure it with
```
./configure --enable-netcdf-4 --with-hdf5=/usr/include/ --enable-share --prefix=/usr
```
but I got the following message:
```
bash: ./configure: No such file or directory
```
I do not know how to install netcdf4-python.
I would appreciate it if someone could help me. | 2015/07/17 | [
"https://Stackoverflow.com/questions/31471751",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4265720/"
] | You can also use an alternative wrapper for netCDF4, like the [netcdf](https://pypi.python.org/pypi/netcdf) library (it compiles the hdf5 and netCDF4 libraries from source code automatically), using:
```
pip install netcdf
``` | The netCDF4 python module documentation can be found [here](http://unidata.github.io/netcdf4-python/ "here"). Check out the "Install" section; it'll have what you're looking for. But, if you satisfy all of the pre-requisites you can simply do the following:
```
python setup.py build && python setup.py install
``` |
31,471,751 | I would like to install netcdf4-python on my Ubuntu 14.04 system. The libhdf5-dev\_1.8.11\_5ubuntu7\_amd64.deb and libnetcdf-4.1.3-7ubuntu2\_amd64.deb packages are installed. I downloaded netcdf4-1.1.8.tar.gz from <https://pypi.python.org/pypi/netCDF4#downloads>
I tried to configure it with
```
./configure --enable-netcdf-4 --with-hdf5=/usr/include/ --enable-share --prefix=/usr
```
but I got the following message:
```
bash: ./configure: No such file or directory
```
I do not know how to install netcdf4-python.
I would appreciate it if someone could help me. | 2015/07/17 | [
"https://Stackoverflow.com/questions/31471751",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4265720/"
] | The instructions for Ubuntu are [here](https://code.google.com/p/netcdf4-python/wiki/UbuntuInstall) which are basically:
**HDF5**
Download the current HDF5 source release.
Unpack, go into the directory and execute:
```
./configure --prefix=/usr/local --enable-shared --enable-hl
make
sudo make install
```
To speed things up, compile on more than one processor using
```
make -j n
```
where n is the number of processes to be launched.
**netCDF4**
Download the current netCDF4 source release.
Unpack, go into the directory and execute:
```
LDFLAGS=-L/usr/local/lib CPPFLAGS=-I/usr/local/include ./configure --enable-netcdf-4 --enable-dap --enable-shared --prefix=/usr/local
make
make install
```
Installing netcdf4-python
When both HDF5 and netCDF4 are in /usr/local, make sure the linker will be able to find those libraries by executing
```
sudo ldconfig
```
then installing netcdf4-python is just a matter of doing
```
python setup.py install
```
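Taken together, the sequence is: build HDF5, build netCDF4, run `ldconfig`, then `python setup.py install`. As a hedged sketch (not from the linked wiki), the shell steps could also be driven from Python with `subprocess`; the step strings below are copied from the answer:

```python
import subprocess

NETCDF4_STEPS = [
    "LDFLAGS=-L/usr/local/lib CPPFLAGS=-I/usr/local/include "
    "./configure --enable-netcdf-4 --enable-dap --enable-shared --prefix=/usr/local",
    "make",
    "make install",
]

def run_steps(steps, dry_run=False):
    """Run shell steps in order; with dry_run=True just return them unexecuted."""
    if not dry_run:
        for step in steps:
            subprocess.check_call(step, shell=True)
    return steps
```

With `dry_run=True` this merely lists the commands, which is handy for checking the flags before running them for real.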
Make sure you actually **untar** the files and **cd** to the correct directories. | After much struggle with the installation and getting errors similar to the ones mentioned in this post, I ended up installing it as follows:
1) Installed HDF5
```
./configure --prefix=/usr/local --enable-shared --enable-hl
make
sudo make install
```
2) Installed netcdf4
```
sudo pip install netcdf4
```
I guess the pip command would have installed the pre-requisite HDF5 as well even if I didn't do step (1).
Btw, I have *pip version 8.0.2* and *python 2.7* |
31,471,751 | I would like to install netcdf4-python on my Ubuntu 14.04 system. The libhdf5-dev\_1.8.11\_5ubuntu7\_amd64.deb and libnetcdf-4.1.3-7ubuntu2\_amd64.deb packages are installed. I downloaded netcdf4-1.1.8.tar.gz from <https://pypi.python.org/pypi/netCDF4#downloads>
I tried to configure it with
```
./configure --enable-netcdf-4 --with-hdf5=/usr/include/ --enable-share --prefix=/usr
```
but I got the following message:
```
bash: ./configure: No such file or directory
```
I do not know how to install netcdf4-python.
I would appreciate it if someone could help me. | 2015/07/17 | [
"https://Stackoverflow.com/questions/31471751",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4265720/"
] | I would strongly recommend using the [Anaconda](https://store.continuum.io/cshop/anaconda/) Python distribution. The full Anaconda distribution includes netcdf4 and the required libraries. | After much struggle with the installation and getting errors similar to the ones mentioned in this post, I ended up installing it as follows:
1) Installed HDF5
```
./configure --prefix=/usr/local --enable-shared --enable-hl
make
sudo make install
```
2) Installed netcdf4
```
sudo pip install netcdf4
```
I guess the pip command would have installed the pre-requisite HDF5 as well even if I didn't do step (1).
Btw, I have *pip version 8.0.2* and *python 2.7* |
31,471,751 | I would like to install netcdf4-python on my Ubuntu 14.04 system. The libhdf5-dev\_1.8.11\_5ubuntu7\_amd64.deb and libnetcdf-4.1.3-7ubuntu2\_amd64.deb packages are installed. I downloaded netcdf4-1.1.8.tar.gz from <https://pypi.python.org/pypi/netCDF4#downloads>
I tried to configure it with
```
./configure --enable-netcdf-4 --with-hdf5=/usr/include/ --enable-share --prefix=/usr
```
but I got the following message:
```
bash: ./configure: No such file or directory
```
I do not know how to install netcdf4-python.
I would appreciate it if someone could help me. | 2015/07/17 | [
"https://Stackoverflow.com/questions/31471751",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4265720/"
] | You can also use an alternative wrapper for netCDF4, like the [netcdf](https://pypi.python.org/pypi/netcdf) library (it compiles the hdf5 and netCDF4 libraries from source code automatically), using:
```
pip install netcdf
``` | After much struggle with the installation and getting errors similar to the ones mentioned in this post, I ended up installing it as follows:
1) Installed HDF5
```
./configure --prefix=/usr/local --enable-shared --enable-hl
make
sudo make install
```
2) Installed netcdf4
```
sudo pip install netcdf4
```
I guess the pip command would have installed the pre-requisite HDF5 as well even if I didn't do step (1).
Btw, I have *pip version 8.0.2* and *python 2.7* |
59,843,953 | I am trying to solve the TSP as a linear programming task using cvxpy and am having a problem with it. This is my first experience with it, so thanks for any help. As a result I want a matrix of 0s and 1s that shows the next city for the salesman at each step.
I need to use cvxpy specifically.
[here you can read the theory](http://github.com/cochoa0x1/integer-programming-with-python/blob/master/05-routes-and-schedules/traveling_salesman.ipynb)
[cvxpy website](https://cvxpy.readthedocs.io/en/latest/index.html)
```
import cvxpy as cp
import numpy as np
np.random.seed(1)
N = 10
distances = np.random.rand(N, N)
x = cp.Variable((N, N), boolean=True)
u = cp.Variable(N, integer=True)
constraints = []
for j in range(N):
indices = list(range(0, j)) + list(range(j + 1, N))
constraints.append(cp.sum(x[indices, j]) == 1)
for i in range(N):
indices = list(range(0, i)) + list(range(i + 1, N))
constraints.append(cp.sum(x[i, indices]) == 1)
for i in range(1, N):
for j in range(1, N):
if i != j:
constraints.append(u[i] - u[j] + N*x[i, j] <= N-1)
for i in range(N):
for j in range(N):
if i != j:
сost += (x[i,j]*distances[i,j])
prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print(prob.value)
```
I receive "None".
I feel like the problem is in defining the cost, but I don't know how to make it correct.
Maybe I should use cvxpy.multiply or cvxpy.sum? | 2020/01/21 | [
"https://Stackoverflow.com/questions/59843953",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12754834/"
] | Restrict access to an IAM role for the S3 bucket, and use this role in the Android app. | You can do this in two ways:
1. creating a bucket policy for specific IAM user
2. creating an ACL policy for an account and then delegating permissions to a specific user |
59,843,953 | I am trying to solve the TSP as a linear programming task using cvxpy and am having a problem with it. This is my first experience with it, so thanks for any help. As a result I want a matrix of 0s and 1s that shows the next city for the salesman at each step.
I need to use cvxpy specifically.
[here you can read the theory](http://github.com/cochoa0x1/integer-programming-with-python/blob/master/05-routes-and-schedules/traveling_salesman.ipynb)
[cvxpy website](https://cvxpy.readthedocs.io/en/latest/index.html)
```
import cvxpy as cp
import numpy as np
np.random.seed(1)
N = 10
distances = np.random.rand(N, N)
x = cp.Variable((N, N), boolean=True)
u = cp.Variable(N, integer=True)
constraints = []
for j in range(N):
indices = list(range(0, j)) + list(range(j + 1, N))
constraints.append(cp.sum(x[indices, j]) == 1)
for i in range(N):
indices = list(range(0, i)) + list(range(i + 1, N))
constraints.append(cp.sum(x[i, indices]) == 1)
for i in range(1, N):
for j in range(1, N):
if i != j:
constraints.append(u[i] - u[j] + N*x[i, j] <= N-1)
for i in range(N):
for j in range(N):
if i != j:
сost += (x[i,j]*distances[i,j])
prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print(prob.value)
```
I receive "None".
I feel like the problem is in defining the cost, but I don't know how to make it correct.
Maybe I should use cvxpy.multiply or cvxpy.sum? | 2020/01/21 | [
"https://Stackoverflow.com/questions/59843953",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12754834/"
] | The normal architecture for this is:
* Keep the Amazon S3 bucket **private** (no Bucket Policy)
* Users of the Android app provide their login information to the app, which **authenticates against your back-end service**
* Once authenticated, your back-end service can **generate temporary credentials using AWS Security Token Service (STS)** — permissions are assigned to these credentials that grant access to Amazon S3 (eg only for a certain path within a particular bucket)
* The mobile app can then use these temporary credentials to **directly communicate with Amazon S3** to upload/download objects
You could use **Amazon Cognito** for authentication and provisioning of credentials, or you could code your own authentication process (eg checking against your own database).
References:
* [Using Amazon Cognito for Mobile Apps - AWS Identity and Access Management](https://docs.amazonaws.cn/en_us/IAM/latest/UserGuide/id_roles_providers_oidc_cognito.html)
* [AWS IAM Now Supports Amazon, Facebook, and Google Identity Federation | AWS News Blog](https://aws.amazon.com/blogs/aws/aws-iam-now-supports-amazon-facebook-and-google-identity-federation/)
* [About SAML 2.0-based Federation - AWS Identity and Access Management](https://docs.amazonaws.cn/en_us/IAM/latest/UserGuide/id_roles_providers_saml.html)
If you only wish to **view photos**, then a simpler method would be for the back-end to generate **Amazon S3 pre-signed URLs**, which permit time-limited access to private objects in Amazon S3.
See: [Amazon S3 pre-signed URLs](https://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html) | You can do this in two ways:
1. creating a bucket policy for specific IAM user
2. creating an ACL policy for an account and then delegating permissions to a specific user |
73,988,510 | I would like to use the apply and lambda methods in Python to change the pricing in a column. The column name is Price. If the price is less than 20, I would like to pass and keep it the same. If 30>price>20, I would like to add 1. If 40>price>30, then I would like to add 1.50. And so on. I am trying to figure out a way to apply these functions over a column and then send it back to Excel format in order to update the pricing. I am confused as to how to do so. I have tried putting this operation in a function using an if clause, but it is not producing the results that I need (k is the name of the dataframe):
```
def addition():
if k[k['Price']] < 20]:
pass
if k[(k['Price']] > 20) & (k['Price] < 30)]:
return k + 1
if k[(k['Price']] > 30.01) & (k['Price] < 40)]:
return k + 1.50
```
and so on. However, at the end, when I attempt to export (what I thought was the newly updated) k['Price'] in xlsx format, it doesn't even show up. I have tried to make the xlsx variable global as well, but still no luck. I think it is simpler to use the lambda function, but I am having trouble deciding how to separate and update the prices in that column based on the conditions. Much help would be appreciated.
This is the dataframe that I am trying to perform the different functions on:
```
0 23.198824
1 21.080706
2 15.810118
3 21.787059
4 18.821882
...
33525 20.347059
33526 25.665882
33527 33.077647
33528 21.803529
33529 23.043529
Name: Price, Length: 33530, dtype: float64
``` | 2022/10/07 | [
"https://Stackoverflow.com/questions/73988510",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18281733/"
] | If `k` is the dataframe, then `k+1` won't work; it will cause an error. You can write a function to change the price and apply it to the column:
```
def update_price(price):
if 20<price<30:
price += 1
elif 30<price<40:
price += 1.5
return price
df['Updated_Price'] = df['Price'].apply(lambda x: update_price(x))
```
```
In [39]: df
Out[39]:
Name Price
0 a 15
1 b 23
2 c 37
In [43]: df
Out[43]:
Name Price Updated_Price
0 a 15 15.0
1 b 23 24.0
2 c 37 38.5
``` | You can use the `apply` method and a `lambda` for this purpose, along with nested `if..else`s.
```
import pandas as pd
df = pd.DataFrame({
'Price': [10.0, 23.0, 50.0, 32.0, 12.0, 50.0]
})
df = df['Price'].apply(lambda x: x if x < 20.0 else (x + 1.0 if 30.0 > x > 20.0 else x + 1.5))
print(df)
```
Output:
```
0 10.0
1 24.0
2 51.5
3 33.5
4 12.0
5 51.5
Name: Price, dtype: float64
``` |
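The bucketing rule both answers implement can be sanity-checked without pandas; a standalone sketch of the same thresholds (note that the strict inequalities mean a price of exactly 20 or 30 falls through unchanged in the first answer's function, while the lambda version treats those boundaries differently):

```python
def update_price(price):
    # Same thresholds as the answers: (20, 30) adds 1, (30, 40) adds 1.5.
    if 20 < price < 30:
        return price + 1
    if 30 < price < 40:
        return price + 1.5
    return price

assert update_price(15) == 15
assert update_price(23) == 24
assert update_price(37) == 38.5
```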
37,770,872 | I wanted to find every instance of a file under different directories, search for the value 0 in each of those files, and replace it with 500.
Please find the code below:
#!/usr/bin/python
```py
import glob
import os
a = glob.glob('/home/*/hostresolve')
for i in a:
print i
```
=================================
Now that I have found all instances of the hostresolve file under home, I want to search for the value 0 and replace it with 500 in each of these files. I know there is a find-and-replace function in Python, but I want to know how to use it on the output that we got through glob. | 2016/06/12 | [
"https://Stackoverflow.com/questions/37770872",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6404505/"
] | As per the [Python docs](https://docs.python.org/2/library/glob.html), glob.glob returns a list. In your case, it's a list of matching files in the directory. Hence, to replace the required text in all the files, we should iterate over the list. Accordingly, the code would be:
```
import glob
import os
a = glob.glob('/home/*/host*')
for files in a:
with open(files, 'r') as writingfile:
read_data = writingfile.read()
with open(files, 'w') as writingfile:
write_data = read_data.replace('0', '500')
writingfile.write(write_data)
```
Also, using `with` to operate on file data is efficient, because it handles close() and flush() automatically, avoiding excess code; it has been suggested in [previous answers](https://stackoverflow.com/a/17141572/6005652).
To make this more reusable or efficient, you can refer to [map](https://docs.python.org/3/library/functions.html?highlight=map#map), since the list of files is an iterable object.
From my understanding, this suffices as an answer to your question. | It worked except for one thing.
There are three instances of the file; the code worked for the second and third instances, but the first file remains the same.
```
[root@localhost home]# cat /home/dir1/hostresolve
O
[root@localhost home]# cat /home/dir2/hostresolve
500
[root@localhost home]# cat /home/dir3/hostresolve
500
```
Please find the code below :
```py
#!/usr/bin/python
import glob
import os
a = glob.glob('/home/*/hostresolve')
for files in a:
print files
with open(files, 'r') as writingfile:
read_data = writingfile.read()
with open(files , 'w') as writingfile:
write_data = read_data.replace('0','500')
writingfile.write(write_data)
```
But when I print `files` I get all instances of the file, which means the for loop will process all 3 instances. I also checked the permissions of these files and found that all 3 have the same permissions. |
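One likely explanation is visible in the output above: the first file contains the letter `O`, not the digit `0`, so `replace('0', '500')` leaves it untouched. A quick check of that hypothesis:

```python
# The letter O looks like a zero but is a different character entirely.
assert "O".replace("0", "500") == "O"    # letter O: no match, unchanged
assert "0".replace("0", "500") == "500"  # digit zero: replaced as expected
```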
31,251,334 | I created a Python script that validates attributes based on a complex set of While/If/Elif/Else loops. For my purposes, I created a while loop to make sure the target row's attribute was an acceptable input, and then it follows the matrix after that. If the attribute is not accepted, it is supposed to use Tkinter to create a GUI with the accepted options; then you click one, it assigns it to that attribute, and continues along down the matrix!!
I was pretty stoked with it, and wrote it all out because I had faith in my coding, and wanted it to work the first time around (how foolish). But the time came to test it, and lo and behold, Tkinter doesn't work within ArcGIS/arcpy... So I was hoping you could suggest an alternate route for the following code:
```
def assign(value):
global x
x = value
mGui.destroy()
def gui3(CONVWGID, a, b, c):
global mGui
mGui = Tk()
mGui.geometry("600x50+500+300")
mGui.title("Attribute Selection Window")
labeltext = "Please select one of the following attributes to assign to the selected Convwks feature, CONVWGID: " + str(CONVWGID)
frame1 = Frame(mGui)
frame1.pack()
mLabel = Label(frame1, text = labeltext).grid(row=0, column=0)
frame2 = Frame(mGui)
frame2.pack()
mButton = Button(frame2, text = a, command = lambda: assign(a)).grid(row=0, column=0, padx=10)
mButton = Button(frame2, text = b, command = lambda: assign(b)).grid(row=0, column=1, padx=10)
mButton = Button(frame2, text = c, command = lambda: assign(c)).grid(row=0, column=2, padx=10)
mGui.Mainloop() #FOR WINDOWS ONLY
```
I was thinking of doing a raw\_input() command instead of the GUI, but I really like how the GUI looks and operates... Do you have any suggestions?
What I think is happening is that it's just kind of skipping the call to my Tkinter gui3() function, which skips the assign() function. Then right after those are called, I assign the global variable 'x' to the attribute, but since x hasn't been defined, it just stops. Screenshot: imgur.com/psLlnUD
I'm ultimately getting a "NameError: global name 'x' is not defined." error
Here is the first step in the actual decision matrix:
```
for row in ucursor:
while row[0] != "X" or row[0] != "Y" or row[0] != "Z":
gui3(row[8], "X", "Y", "Z")
row[0] = x
if row[0] == "X":
...
``` | 2015/07/06 | [
"https://Stackoverflow.com/questions/31251334",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2847985/"
] | As soon as I tried to run your code, the answer popped up in the traceback:
`AttributeError: '_tkinter.tkapp' object has no attribute 'Mainloop'`
The uncaught exception in gui3 kills the code that calls it, and `row[0] = x` is never executed, even though the gui remains displayed and even though assign might be called. With 'M' lowercased (and index 8 changed to 1),
```
for row in (('A', 1), ('X', 2), ('b', 3)):
if row[0] not in {"X", "Y", "Z"}:
gui3(row[1], "X", "Y", "Z")
print( x)
```
works fine.
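The `not in` membership test above also quietly fixes a logic bug in the question's loop condition: `row[0] != "X" or row[0] != "Y" or row[0] != "Z"` is true for every possible value, since no value can equal all three at once. A quick demonstration:

```python
row0 = "X"  # even a valid value...
assert (row0 != "X" or row0 != "Y" or row0 != "Z") is True   # ...passes the buggy test
assert (row0 not in {"X", "Y", "Z"}) is False                # membership test is correct
```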
Having the window disappear and reappear was visually annoying to me. I personally would put up a window -- 'Verifying data' (with line numbers rolling up -- and change the content when action is needed. | You can create GUI windows in wxPython that run inside ArcMap and interact with the map document, but you need to put them into an ArcMap Extension which loads when ArcMap starts. This needs to create an wx.App object ONCE when the extension loads and then you can make one or more action buttons, tools, or other components on a toolbar inside the extension. Any GUI windows that are opened by wxPython need to be hidden using wx.Show(False) and NOT closed using wx.Close() or wx.Destroy. I have a post on this forum about it on StackOverflow:
[Repeated calls to ArcMap Python Add-In with wxPython fail after one successful call](https://stackoverflow.com/questions/49887825/repeated-calls-to-arcmap-python-add-in-with-wxpython-fail-after-one-successful-c)
<https://stackoverflow.com/a/50377929/8766731>
There is a good video on how to do this on the ESRI website:
<https://www.esri.com/videos/watch?videoid=1229&isLegacy=true>
This shows how to build the ArcMap Extension, a Toolbar in the Extension, and a Tool on the Toolbar. In my case, I put an action Button on the Toolbar instead of a Tool, but the same approach works. |
**I ran into a problem with pickle. The code is:**
```
import cPickle
class A(object):
def __init__(self):
self.a = 1
def methoda(self):
print(self.a)
class B(object):
def __init__(self):
self.b = 2
a = A()
self.b_a = a.methoda
def methodb(self):
print(self.b)
if __name__ == '__main__':
b = B()
with open('best_model1.pkl', 'w') as f:
cPickle.dump(b, f)
```
**The error is:**
>
> File "/usr/lib/python2.7/copy\_reg.py", line 70, in \_reduce\_ex
> raise TypeError, "can't pickle %s objects" % base.\_\_name\_\_
> TypeError: can't pickle instancemethod objects
>
> | 2015/12/21 | [
"https://Stackoverflow.com/questions/34399656",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5704035/"
] | You can if you use `dill` instead of `cPickle`.
```
>>> import dill
>>>
>>> class A(object):
... def __init__(self):
... self.a = 1
... def methods(self):
... print(self.a)
...
>>>
>>> class B(object):
... def __init__(self):
... self.b = 2
... a = A()
... self.b_a = a.methods
... def methodb(self):
... print(self.b)
...
>>> b = B()
>>> b_ = dill.dumps(b)
>>> _b = dill.loads(b_)
>>> _b.methodb()
2
>>>
```
Also see:
[Can't pickle <type 'instancemethod'> when using python's multiprocessing Pool.map()](https://stackoverflow.com/questions/1816958/cant-pickle-type-instancemethod-when-using-pythons-multiprocessing-pool-ma?rq=1) | Also, when dill is installed, pickle will work, but (as usual) cPickle will not.
```
import cPickle, pickle
class A(object):
def __init__(self):
self.a = 1
def methoda(self):
print(self.a)
class B(object):
def __init__(self):
self.b = 2
a = A()
self.b_a = a.methoda
def methodb(self):
print(self.b)
b = B()

# try using cPickle
try:
c = cPickle.dumps(b)
d = cPickle.loads(c)
except Exception as err:
print('Unable to use cPickle (%s)'%err)
else:
print('Using cPickle was successful')
print(b)
print(d)
# try using pickle
try:
c = pickle.dumps(b)
d = pickle.loads(c)
except Exception as err:
print('Unable to use pickle (%s)'%err)
else:
print('Using pickle was successful')
print(b)
print(d)
>>> Unable to use cPickle (can't pickle instancemethod objects)
>>> Using pickle was successful
>>> <__main__.B object at 0x10e9b84d0>
>>> <__main__.B object at 0x13df07190>
```
For whatever reason, cPickle is not simply a C version of pickle that is 100 times faster; there are some differences. |
70,073,499 | I have a column which has data like this:
| Date |
| --- |
| '2021-01-01' |
| '2021-01-10' |
| '2021-01-09' |
| '2021-01-11' |
I need to get only the year and month as one column, stored as an integer instead of a string: for example, '2021-01-01' should be saved as 202101. (I don't need the day part.)
When I try to clean the data I am able to do it but it removes the leading zeroes.
```
df['period'] = df['Date'].str[:4] + df['Date'].str[6:7]
```
This gives me:
| Date |
| --- |
| 20211 |
| 202110 |
| 20219 |
| 202111 |
As you can see, for months Jan to Sept it returns only 1 to 9 instead of 01 to 09, which creates a discrepancy. If I add a zero manually as part of the merge, it will turn '2021-10' into 2021010. I want it simply as the year and month, without the hyphen and keeping the leading zeroes for months. See below how I would want it to appear in the new column.
| Date |
| --- |
| **202101** |
| 202110 |
| **202109** |
| 202111 |
I can do it using a loop, but that's not efficient. Is there a better way to do it in Python? | 2021/11/22 | [
"https://Stackoverflow.com/questions/70073499",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12203155/"
] | The leading zeros are being dropped because of a misunderstanding about the use of [slice notation](https://stackoverflow.com/questions/509211/understanding-slice-notation) in Python.
Try changing your code to:
```
df['period'] = df['Date'].str[:4] + df['Date'].str[5:7]
```
Note the change from [6:7] to [5:7]. | Strip the inverted commas, coerce the date to datetime, format it as desired, and convert it to an integer. Code below:
```
df['Date_edited']=pd.to_datetime(df['Date'].str.strip("''")).dt.strftime('%Y%m').astype(int)
Date Date_edited
0 '2021-01-01' 202101
1 '2021-01-10' 202101
2 '2021-01-09' 202101
3 '2021-01-11' 202101
``` |
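The slice indices in the first answer can be checked directly on a sample value; Python slices are half-open, so `[5:7]` takes the characters at positions 5 and 6 (a standalone sketch, assuming the stored strings have no surrounding quotes):

```python
date = "2021-01-09"
assert date[:4] == "2021"   # year
assert date[6:7] == "1"     # single character: the leading zero at index 5 is skipped
assert date[5:7] == "01"    # both month digits, zero preserved
assert int(date[:4] + date[5:7]) == 202101
```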
72,553,699 | If I have a python generator function, let's say this one:
```py
def gen():
x = 0
    while True:
yield x
x += 1
```
This function remembers its current state, and every time you call `gen()`, yields a new value. Essentially, I would like a Kotlin sequence which can remember its state. | 2022/06/09 | [
"https://Stackoverflow.com/questions/72553699",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7060972/"
] | >
>
> ```python
> def gen():
> x = 0
> while (true):
> yield x
> x += 1
>
> ```
>
> This function remembers its current state, and every time you call `gen()`, yields a new value.
>
>
>
This is incorrect. Every time you call `gen()` you get a new "generator object" whose state is independent of any other generator object created by this function. You then query the generator object to get the next number. For example:
```python
def demo():
numbers = gen() # 'gen()' from your question
for _ in range(0, 3):
next_number = next(numbers)
print(next_number)
if __name__ == '__main__'
demo()
print()
demo()
```
Output:
```none
0
1
2
0
1
2
```
As you can see, the sequence of numbers "starts over" when you call `gen()` again (though if you kept a reference to the old generator object it would continue from 2, even after calling `gen()` again).
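That continuation behaviour is easy to demonstrate: each call to `gen()` yields an independent generator object, while an existing one keeps its own state:

```python
def gen():
    x = 0
    while True:
        yield x
        x += 1

numbers = gen()
assert [next(numbers) for _ in range(3)] == [0, 1, 2]

fresh = gen()                 # a brand-new generator object...
assert next(fresh) == 0       # ...starts over from 0
assert next(numbers) == 3     # the original continues from where it left off
```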
In Kotlin, you can use the [kotlin.sequences.iterator](https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.sequences/iterator.html) function. It creates an `Iterator` which lazily yields the next value, just like a Python generator object. For example:
```kotlin
fun gen() = iterator {
var x = 0
while (true) {
yield(x)
x++
}
}
fun demo() {
val numbers = gen()
repeat(3) {
val nextNumber = numbers.next()
println(nextNumber)
}
}
fun main() {
demo()
println()
demo()
}
```
Which will output:
```none
0
1
2
0
1
2
```
Just like the Python code.
Note you can do the essentially the same thing with a Kotlin `Sequence`, you just have to convert the `Sequence` into an `Iterator` if you want to use it like a Python generator object. Though keep in mind that Kotlin sequences are meant more for defining a series of operations and then lazily processing a group of elements in one go (sort of like Java streams, if you're familiar with them). | As stated before in the comments <https://kotlinlang.org/docs/sequences.html> are the answer, and you don't even need an iterator. You can generate sequence using <https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.sequences/generate-sequence.html>
and here is a little playground witch produces similar sequence as your generator <https://pl.kotl.in/LdboRzAzr> |
17,197,446 | So I have an Android app, and a Google App Engine server written in Python.
The Android app needs to send some **sensitive** information to the server, and the way I do that is by doing an HTTP POST.
Now I have been thinking about encrypting the data in Android before sending it, and decrypting it once it is on the GAE server.
This is how I encrypt and decrypt in Java:
```
private static final String ALGO = "AES";
public static String encrypt(String Data) throws Exception {
Key key = generateKey();
Cipher c = Cipher.getInstance(ALGO);
c.init(Cipher.ENCRYPT_MODE, key);
byte[] encVal = c.doFinal(Data.getBytes());
// String encryptedValue = new BASE64Encoder().encode(encVal);
byte[] decoded = Base64.encodeBase64(encVal);
return (new String(decoded, "UTF-8") + "\n");
}
public static String decrypt(String encryptedData) throws Exception {
Key key = generateKey();
Cipher c = Cipher.getInstance(ALGO);
c.init(Cipher.DECRYPT_MODE, key);
byte[] decordedValue =Base64.decodeBase64(encryptedData);
byte[] decValue = c.doFinal(decordedValue);
String decryptedValue = new String(decValue);
return decryptedValue;
}
private static Key generateKey() throws Exception {
Key key = new SecretKeySpec(Constant.keyValue, ALGO);
return key;
}
```
And this is how I try to decrypt on the server (I don't know yet how to do the encryption; maybe you guys can help with that too):
```
def decrypt(value):
key = b'1234567891234567'
cipher = AES.new(key, AES.MODE_ECB)
msg = cipher.decrypt(value)
return msg
```
As I looked in the logs, the test string that I get is: `xVF79DzOplxBTMCwAx+hoeDJhyhifPZEoACQJcFhrXA=`, and because it is not a multiple of 16 (I don't know why; I guess this is because of the Java encryption) I get the error:
ValueError: Input strings must be a multiple of 16 in length
What am i doing wrong? | 2013/06/19 | [
"https://Stackoverflow.com/questions/17197446",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1206610/"
] | Why are you not using ssl (aka https)? That should provide all the encryption needed to transport data securely and privately between the phone and App Engine.
The basics of it: Instead of sending data to <http://yourapp.appspot.com/>, send it to <https://yourapp.appspot.com/>.
For a completely secure and authenticated channel between App Engine and Android, you can use Google Cloud Endpoints. It will even generate the Android-side code to call it.
Java:
* <https://developers.google.com/appengine/docs/java/endpoints/>
* <https://developers.google.com/appengine/docs/java/endpoints/consume_android>
Python:
* <https://developers.google.com/appengine/docs/python/endpoints/>
* <https://developers.google.com/appengine/docs/python/endpoints/consume_android>
For a longer show and tell, check the IO 13 talk: <https://www.youtube.com/watch?v=v5u_Owtbfew> | This string "xVF79DzOplxBTMCwAx+hoeDJhyhifPZEoACQJcFhrXA=" is a base64-encoded value.
<https://en.wikipedia.org/wiki/Base64>
Base64 encoding is widely used in lots of applications; it's a good way to encode binary data into text. If you're looking at a long encoded value, the "=" at the end can be a good indicator of base64 encoding.
In your python code you probably need to base64 decode the data before handing it to the decryption function.
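A minimal sketch of that decode step, using the string from the question (the AES decryption itself is left out; only the length problem is shown):

```python
import base64

token = "xVF79DzOplxBTMCwAx+hoeDJhyhifPZEoACQJcFhrXA="
raw = base64.b64decode(token)

# The text form is 44 characters long (not a multiple of 16), but the
# decoded bytes are 32 -- a clean multiple of the AES block size of 16.
assert len(token) == 44
assert len(raw) == 32
assert len(raw) % 16 == 0
```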
I have two recommendations:
1. If crypto isn't a comfort zone for you, consult with someone who is good in this area for your project.
2. Be aware that embedding a symmetric encryption key in an Android app that you distribute is a bad idea. Anyone that can get a copy of your app can extract that key and use it to decrypt or spoof your messages. |
24,468,794 | I'm asking for some help showing notifications using [python-crontab](https://pypi.python.org/pypi/python-crontab), because everything I've tried does not work. The display is not initialised when the script is launched by cron; when I start it manually, it works.
The codes I've tried:
```
#!/usr/bin/env python
# coding: utf8
import subprocess
import os
#os.environ.setdefault("XAUTHORITY", "/home/guillaume" + "/.Xauthority")
#os.environ.setdefault('DISPLAY', ':0.0') # do not work
#os.environ['DISPLAY'] = ':0.0' # do not work
print os.environ
cmd2 = 'notify-send test'
subprocess.call(cmd2, shell=True)
# more code, which is working (using VLC)
cmd3 = "cvlc rtp://232.0.2.183:8200 --sout file/mkv:/path/save/file.mkv" # to download the TV stream
with open("/path/debug_cvlc.log", 'w') as out:
proc = subprocess.Popen(cmd3, stderr=out, shell=True, preexec_fn=os.setsid)
pid = proc.pid # to get the pid
with open("/path/pid.log", "w") as f:
f.write(str(pid)) # to write the pid in a file
# I'm using the pid to stop the download with another cron task, and to display another notify message.
# Download and stop are working very well, and zenity too. But not notify-send
```
Thanks
### Edit: here are the environment variables I have for this cron script:
```
{'LANG': 'fr_FR.UTF-8', 'SHELL': '/bin/sh', 'PWD': '/home/guillaume', 'LOGNAME': 'guillaume', 'PATH': '/usr/bin:/bin', 'HOME': '/home/guillaume', 'DISPLAY': ':0.0'}
```
### Edit2: I'm calling my script in cron like this:
```
45 9 30 6 * export DISPLAY=:0.0 && python /home/path/script.py > /home/path/debug_cron_on.log 2>&1
```
Note that I have two screens, so I think DISPLAY=:0.0 is the right display for this notification.
But I don't see it.
### Edit3: It appears that I have a problem with notify-send specifically, because it works using zenity:
```
subprocess.call("zenity --warning --timeout 5 --text='this test is working'", shell=True)
```
I have notify-send version 0.7.3, and I note that notify-send works when run from the terminal.
### Edit4: Next try with python-notify.
```
import pynotify
pynotify.init("Basic")
n = pynotify.Notification("Title", "TEST")
n.show()
```
The log file shows this (in French):
```
Traceback (most recent call last):
File "/home/path/script.py", line 22, in <module>
n.show()
gio.Error: Impossible de se connecter : Connexion refusée
#Translating: Unable to connect : Connection refused
```
So, I have a problem with D-Bus? What is this?
Solution: Get the DBUS\_SESSION\_BUS\_ADDRESS before creating the cron entry:
-----------------------------------------------------------------------------
```
cron = CronTab()
dbus = os.getenv("DBUS_SESSION_BUS_ADDRESS") # get the dbus
# creating cron
cmd_start = "export DBUS_SESSION_BUS_ADDRESS=" + str(dbus) + " && export DISPLAY=:0.0 && cd /path && python /path/script.py > path/debug_cron.log 2>&1"
job = cron.new(cmd_start)
job.day.on(self.day_on) # and all the lines to set the cron schedule, with hours etc.
cron.write() # write the cron's file
```
Finally, the cron line looks like this:
```
20 15 1 7 * export DBUS_SESSION_BUS_ADDRESS=unix:abstract=/tmp/dbus-M0JCXXbuhC && export DISPLAY=:0.0 && python script.py
```
Then the notification is displayed. Problem resolved!! :) | 2014/06/28 | [
"https://Stackoverflow.com/questions/24468794",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3314648/"
] | You are calling the cron like
```
45 9 30 6 * DISPLAY=:0.0 python /home/path/script.py > /home/path/debug_cron_on.log 2>&1
```
which is incorrect, since you are not exporting the `DISPLAY` variable, and the subsequent command does not run.
Try this instead
```
45 9 30 6 * export DISPLAY=:0.0 && cd /home/path/ && python script.py >> debug_cron.log 2>&1
```
Also, you are setting the `DISPLAY` variable within your cron job as well, so check whether the cron job works without exporting it in the job line:
```
45 9 30 6 * cd /home/path/ && python script.py >> debug_cron.log 2>&1
```
**EDIT**
While debugging, run the cron job every minute. The following worked for me:
Cron entry
```
* * * * * cd /home/user/Desktop/test/send-notify && python script.py
```
script.py
```
#!/usr/bin/env python
import subprocess
import os
os.environ.setdefault('DISPLAY', ':0.0')
print os.environ
cmd2 = 'notify-send test'
subprocess.call(cmd2, shell=True)
```
**EDIT 2**
Using `pynotify`, script.py becomes
```
#!/usr/bin/env python
import pynotify
import os
os.environ.setdefault('DISPLAY', ':0.0')
pynotify.init("Basic")
n = pynotify.Notification("Title", "TEST123")
n.show()
```
and cron entry becomes
```
* * * * * cd /home/user/Desktop/test/send-notify && python script.py
```
**EDIT 3**
One environment variable `DBUS_SESSION_BUS_ADDRESS` is missing from the cron environment.
It can be set in [this](https://unix.stackexchange.com/questions/28463/run-a-dbus-program-in-crontab-how-to-know-about-the-session-id/28496#28496) and [this](https://stackoverflow.com/questions/3302240/running-a-python-script-from-crontab) fashion | crontab is considered an external host -- it doesn't have permission to write to your display.
Workaround: allow anyone to write to your display. Type this in your shell when you're logged in:
```
xhost +
``` |
12,735,224 | Is it possible to make a list comprehension stop when n items have been added to the new list?
e.g.,
```
[x for x in xrange(20) if len(<this>) < 10 ]
```
To confirm: I want to limit the length of the resultant list to 10. "<this>" is pseudocode for the current list being made. I'm using Python 2.7 | 2012/10/04 | [
"https://Stackoverflow.com/questions/12735224",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/904887/"
] | No. But [`itertools.islice()`](http://docs.python.org/library/itertools.html#itertools.islice) can give you a generator that only yields a given number of items.
```
>>> list(itertools.islice('foobar', 3))
['f', 'o', 'o']
``` | If `xrange` is the generator you use, just do
```
list(xrange(20))[:10]
``` |
12,735,224 | Is it possible to make a list comprehension stop when n items have been added to the new list?
e.g.,
```
[x for x in xrange(20) if len(<this>) < 10 ]
```
To confirm: I want to limit the length of the resultant list to 10. "<this>" is pseudocode for the current list being made. I'm using Python 2.7 | 2012/10/04 | [
"https://Stackoverflow.com/questions/12735224",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/904887/"
] | No. But [`itertools.islice()`](http://docs.python.org/library/itertools.html#itertools.islice) can give you a generator that only yields a given number of items.
```
>>> list(itertools.islice('foobar', 3))
['f', 'o', 'o']
``` | This can be done, but it will depend on the implementation of Python you are using, since it depends on the implementation of list comprehensions. The name of "this list comprehension" in CPython is "`_[1]`", so for example you could do the following (try it yourself, it really will work):
```
[i for i in xrange(20) if len(locals().get("_[1]")) < 10]
```
**You should almost certainly not do this in real production code.** It's absurdly obscure, and on any other implementation it could break. Instead, just use a loop:
```
l = []
for i in xrange(20):
l.append(i)
if len(l) >= 10:
break
```
`locals().get("_[1]")` works because Python needs to store the current list in some place during the list comprehension construction. "`_[1]`" was chosen somewhat arbitrarily since it is highly unlikely that anyone would use that as a variable name. If you happen to build a nested list comprehension, those will store lists in higher incremented variable names, eg, "`_[2]`" and "`_[3]`" and so on. |
12,735,224 | Is it possible to make a list comprehension stop when n items have been added to the new list?
e.g.,
```
[x for x in xrange(20) if len(<this>) < 10 ]
```
To confirm: I want to limit the length of the resultant list to 10. "<this>" is pseudocode for the current list being made. I'm using Python 2.7 | 2012/10/04 | [
"https://Stackoverflow.com/questions/12735224",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/904887/"
] | This can be done, but it will depend on the implementation of Python you are using, since it depends on the implementation of list comprehensions. The name of "this list comprehension" in CPython is "`_[1]`", so for example you could do the following (try it yourself, it really will work):
```
[i for i in xrange(20) if len(locals().get("_[1]")) < 10]
```
**You should almost certainly not do this in real production code.** It's absurdly obscure, and on any other implementation it could break. Instead, just use a loop:
```
l = []
for i in xrange(20):
l.append(i)
if len(l) >= 10:
break
```
`locals().get("_[1]")` works because Python needs to store the current list in some place during the list comprehension construction. "`_[1]`" was chosen somewhat arbitrarily since it is highly unlikely that anyone would use that as a variable name. If you happen to build a nested list comprehension, those will store lists in higher incremented variable names, e.g., "`_[2]`" and "`_[3]`" and so on. | If `xrange` is the generator you use, just do
```
list(xrange(20))[:10]
``` |
13,116,543 | Up to now, I've been peppering my code with 'print debug message' and even 'if condition: print debug message'. But a number of people have told me that's not the best way to do it, and I really should learn how to use the logging module. After a quick read, it looks as though it does everything I could possibly want, and then some. It looks like a learning project in its own right, and I want to work on other projects now and simply use the minimum functionality to help me. If it makes any difference, I am on Python 2.6 and will be for the foreseeable future, due to library and legacy compatibilities.
All I want to do at the moment is pepper my code with messages that I can turn on and off section by section, as I manage to debug specific regions. As a 'hello\_log\_world', I tried this, and it doesn't do what I expected
```
import logging
# logging.basicConfig(level=logging.DEBUG)
logging.error('first error')
logging.debug('first debug')
logging.basicConfig(level=logging.DEBUG)
logging.error('second error')
logging.debug('second debug')
```
You'll notice I'm using the really basic config, using as many defaults as possible, to keep things simple. But it appears that it's too simple, or that I don't understand the programming model behind logging.
I had expected that sys.stderr would end up with
```
ERROR:root:first error
ERROR:root:second error
DEBUG:root:second debug
```
... but only the two error messages appear. Setting level=DEBUG doesn't make the second one appear. If I uncomment the basicConfig call at the start of the program, all four get output.
Am I trying to run it at too simple a level?
What's the simplest thing I can add to what I've written there to get my expected behaviour? | 2012/10/29 | [
"https://Stackoverflow.com/questions/13116543",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1305287/"
] | Logging actually follows a particular hierarchy (`DEBUG -> INFO -> WARNING -> ERROR -> CRITICAL`), and the default level is `WARNING`. You therefore see the two ERROR messages because `ERROR` is above `WARNING` in the hierarchy.
As for the odd commenting behavior, the explanation is found in the [logging docs](http://docs.python.org/2/library/logging.html#logging.basicConfig) (which as you say are a task unto themselves :) ):
>
> The call to basicConfig() should come before any calls to debug(),
> info() etc. As it’s intended as a one-off simple configuration
> facility, only the first call will actually do anything: subsequent
> calls are effectively no-ops.
>
>
>
However you can use the `setLevel` parameter to get what you desire:
```
import logging
logging.getLogger().setLevel(logging.ERROR)
logging.error('first error')
logging.debug('first debug')
logging.getLogger().setLevel(logging.DEBUG)
logging.error('second error')
logging.debug('second debug')
```
The lack of an argument to `getLogger()` means that the root logger is modified. This is essentially one step before @del's (good) answer, where you start getting into multiple loggers, each with their own specific properties/output levels/etc. | Rather than modifying the logging levels in your code to control the output, you should consider creating multiple loggers, and setting the logging level for each one individually. For example:
```
import logging
first_logger = logging.getLogger('first')
second_logger = logging.getLogger('second')
logging.basicConfig()
first_logger.setLevel(logging.ERROR)
second_logger.setLevel(logging.DEBUG)
first_logger.error('first error')
first_logger.debug('first debug')
second_logger.error('second error')
second_logger.debug('second debug')
```
This outputs:
```
ERROR:first:first error
ERROR:second:second error
DEBUG:second:second debug
``` |
5,124,232 | If I were to import some module called modx, how would that be different from saying
```
from modx import *
```
Wouldn't all the contents be imported either way? This is in Python, just to clarify. | 2011/02/26 | [
"https://Stackoverflow.com/questions/5124232",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/440062/"
] | If you `import somemodule` the contained globals will be available via `somemodule.someglobal`. If you `from somemodule import *` ALL its globals (or those listed in `__all__` if it exists) will be made globals, i.e. you can access them using `someglobal` without the module name in front of it.
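A tiny sketch of the difference, using the standard `math` module in place of `modx`:

```python
import math              # names stay behind the module prefix
print(math.sqrt(16))     # 4.0

from math import sqrt    # pull one name into the current namespace
print(sqrt(16))          # 4.0

from math import *       # pull in ALL public names (discouraged)
print(floor(3.7))        # 3 -- floor() is now a bare global
```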
Using `from module import *` is discouraged as it clutters the global scope, and if you import stuff from multiple modules you are likely to get conflicts and overwrite existing classes/functions. | Common question with many FAQs to answer... here is one: <http://effbot.org/zone/import-confusion.htm>
Essentially, to answer your specific question: with the second form (`from modx import *`) you get only the public items in modx |
5,124,232 | If I were to import some module called modx, how would that be different from saying
```
from modx import *
```
Wouldn't all the contents be imported either way? This is in Python, just to clarify. | 2011/02/26 | [
"https://Stackoverflow.com/questions/5124232",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/440062/"
] | If `a` defines `a.b` and `a.c`...
```
import a
a.b()
a.c()
```
vs.
```
from a import b
b()
c() # fails because c isn't imported
```
vs.
```
from a import *
b()
c()
```
Note that `from foo import *` is generally frowned upon since:
1. It puts things into the global namespace without giving you fine control
2. It can cause collisions, *due* to everything being in the global namespace
3. It makes it unclear what is actually defined in the current file, since the list of what it defines can vary depending on what's imported. | Common question with many FAQs to answer... here is one: <http://effbot.org/zone/import-confusion.htm>
Essentially, to answer your specific question: with the second form (`from modx import *`) you get only the public items in modx |
5,124,232 | If I were to import some module called modx, how would that be different from saying
```
from modx import *
```
Wouldn't all the contents be imported either way? This is in Python, just to clarify. | 2011/02/26 | [
"https://Stackoverflow.com/questions/5124232",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/440062/"
] | If you `import somemodule` the contained globals will be available via `somemodule.someglobal`. If you `from somemodule import *` ALL its globals (or those listed in `__all__` if it exists) will be made globals, i.e. you can access them using `someglobal` without the module name in front of it.
Using `from module import *` is discouraged as it clutters the global scope and if you import stuff from multiple modules you are likely to get conflicts and overwrite existing classes/functions. | If `a` defines `a.b` and `a.c`...
```
import a
a.b()
a.c()
```
vs.
```
from a import b
b()
c() # fails because c isn't imported
```
vs.
```
from a import *
b()
c()
```
Note that `from foo import *` is generally frowned upon since:
1. It puts things into the global namespace without giving you fine control
2. It can cause collisions, *due* to everything being in the global namespace
3. It makes it unclear what is actually defined in the current file, since the list of what it defines can vary depending on what's imported. |
63,885,189 | This code works once: it shows the current datetime and waits for the user to press 'q' to quit:
```
#!/usr/bin/python
import curses
import datetime
import traceback
from curses import wrapper
def schermo(scr, *args):
try:
stdscr = curses.initscr()
stdscr.clear()
curses.cbreak()
stdscr.addstr(3, 2, f'{datetime.datetime.now()}', curses.A_NORMAL)
while True:
ch = stdscr.getch()
if ch == ord('q'):
break
stdscr.refresh()
except:
traceback.print_exc()
finally:
curses.echo()
curses.nocbreak()
curses.endwin()
curses.wrapper(schermo)
```
What is the best practice to make data on the screen change each second? | 2020/09/14 | [
"https://Stackoverflow.com/questions/63885189",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2239318/"
] | If you replace:
```
movie_list = html.select("#page_filling_chart > table > tbody > tr > td > b > a")
```
With:
```
movie_list = html.select("#page_filling_chart table tr > td > b > a")
```
You get what I think you're looking for. The primary change here is replacing child-selectors (`parent > child`) with descendant selectors (`ancestor descendant`), which is a lot more forgiving with respect to what the intervening content looks like.
---
Update: this is interesting. Your choice of `BeautifulSoup` parser seems to lead to different behavior.
Compare:
```
>>> html = BeautifulSoup(raw, 'html.parser')
>>> html.select('#page_filling_chart > table')
[]
```
With:
```
>>> html = BeautifulSoup(raw, 'lxml')
>>> html.select('#page_filling_chart > table')
[<table>
<tr><th>Rank</th><th>Movie</th><th>Release<br/>Date</th><th>Distributor</th><th>Genre</th><th>2019 Gross</th><th>Tickets Sold</th></tr>
<tr>
[...]
```
In fact, using the `lxml` parser you can *almost* use your original selector. This works:
```
html.select("#page_filling_chart > table > tr > td > b > a")
```
After parsing, a `table` has no `tbody`.
After experimenting for a bit, you would have to rewrite your original query like this to get it to work with `html.parser`:
```
html.select("#page_filling_chart2 > p > p > p > p > p > table > tr > td > b > a")
```
It looks like `html.parser` doesn't synthesize closing `</p>` elements when they are missing from the source, so all the unclosed `<p>` tags result in a weird parsed document structure. | Here is the solution for this question:
```
from bs4 import BeautifulSoup
import requests
url = "https://www.the-numbers.com/market/" + "2019" + "/top-grossing-movies"
raw = requests.get(url, headers={'User-Agent':'Mozilla/5.0'})
html = BeautifulSoup(raw.text, "html.parser")
movie_table_rows = html.findAll("table")[0].findAll('tr')
movie_list = []
for tr in movie_table_rows[1:]:
tds = tr.findAll('td')
movie_list.append(tds[1].text) #Extract Movie Names
print(movie_list)
```
Basically, the way you are trying to extract the text is incorrect as selectors are different for each movie name anchor tag. |
63,885,189 | This code works once: it shows the current datetime and waits for the user to press 'q' to quit:
```
#!/usr/bin/python
import curses
import datetime
import traceback
from curses import wrapper
def schermo(scr, *args):
try:
stdscr = curses.initscr()
stdscr.clear()
curses.cbreak()
stdscr.addstr(3, 2, f'{datetime.datetime.now()}', curses.A_NORMAL)
while True:
ch = stdscr.getch()
if ch == ord('q'):
break
stdscr.refresh()
except:
traceback.print_exc()
finally:
curses.echo()
curses.nocbreak()
curses.endwin()
curses.wrapper(schermo)
```
What is the best practice to make data on the screen change each second? | 2020/09/14 | [
"https://Stackoverflow.com/questions/63885189",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2239318/"
] | If you replace:
```
movie_list = html.select("#page_filling_chart > table > tbody > tr > td > b > a")
```
With:
```
movie_list = html.select("#page_filling_chart table tr > td > b > a")
```
You get what I think you're looking for. The primary change here is replacing child-selectors (`parent > child`) with descendant selectors (`ancestor descendant`), which is a lot more forgiving with respect to what the intervening content looks like.
---
Update: this is interesting. Your choice of `BeautifulSoup` parser seems to lead to different behavior.
Compare:
```
>>> html = BeautifulSoup(raw, 'html.parser')
>>> html.select('#page_filling_chart > table')
[]
```
With:
```
>>> html = BeautifulSoup(raw, 'lxml')
>>> html.select('#page_filling_chart > table')
[<table>
<tr><th>Rank</th><th>Movie</th><th>Release<br/>Date</th><th>Distributor</th><th>Genre</th><th>2019 Gross</th><th>Tickets Sold</th></tr>
<tr>
[...]
```
In fact, using the `lxml` parser you can *almost* use your original selector. This works:
```
html.select("#page_filling_chart > table > tr > td > b > a")
```
After parsing, a `table` has no `tbody`.
After experimenting for a bit, you would have to rewrite your original query like this to get it to work with `html.parser`:
```
html.select("#page_filling_chart2 > p > p > p > p > p > table > tr > td > b > a")
```
It looks like `html.parser` doesn't synthesize closing `</p>` elements when they are missing from the source, so all the unclosed `<p>` tags result in a weird parsed document structure. | This should work:
```py
url = 'https://www.the-numbers.com/market/2019/top-grossing-movies'
raw = requests.get(url)
html = BeautifulSoup(raw.text, "html.parser")
movie_list = html.select("table > tr > td > b > a")
for i in range(len(movie_list)):
print(movie_list[i].text)
``` |
35,854,289 | I got a brand-new laptop with a resolution of 3840 x 2160 running Windows 10. After I installed Anaconda + Spyder for python coding, I noticed that the icons are extremely small, as well as the text on the Object Inspector section:
[](https://i.stack.imgur.com/pDDjD.png)
Is there a way to fix this? It seems closely related to the resolution as [it is also happening with Eclipse.](https://stackoverflow.com/questions/20718093/eclipse-interface-icons-very-small-on-high-resolution-screen-in-windows-8-1)
PS: The Eclipse workarounds includes lowering resolution which is far from ideal. I was wondering if this can be fixed maintaining the resolution settings. | 2016/03/07 | [
"https://Stackoverflow.com/questions/35854289",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1193136/"
] | (*Spyder maintainer here*) Please use the screen resolution options provided by Spyder to fix this. They are present in
`Tools > Preferences > Application > Interface > Screen resolution`
for Spyder 5
`Tools > Preferences > General > Interface > Screen resolution`.
for Spyder 4 and
`Tools > Preferences > General > Appearance > Screen resolution`
for Spyder 3. | I know this question is old but I ran into the same problem. I fixed it by adding
```
[Platforms]
WindowsArguments = dpiawareness=0
```
to qt.conf in my Anaconda3 installation folder (C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3\_64, as it is part of my MS VS installation). |
35,854,289 | I got a brand-new laptop with a resolution of 3840 x 2160 running Windows 10. After I installed Anaconda + Spyder for python coding, I noticed that the icons are extremely small, as well as the text on the Object Inspector section:
[](https://i.stack.imgur.com/pDDjD.png)
Is there a way to fix this? It seems closely related to the resolution as [it is also happening with Eclipse.](https://stackoverflow.com/questions/20718093/eclipse-interface-icons-very-small-on-high-resolution-screen-in-windows-8-1)
PS: The Eclipse workarounds includes lowering resolution which is far from ideal. I was wondering if this can be fixed maintaining the resolution settings. | 2016/03/07 | [
"https://Stackoverflow.com/questions/35854289",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1193136/"
] | I know this question is old but I ran into the same problem. I fixed it by adding
```
[Platforms]
WindowsArguments = dpiawareness=0
```
to qt.conf in my Anaconda3 installation folder (C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3\_64, as it is part of my MS VS installation). | Sorry, can't comment on Djerro Neth's answer above, so here goes:
I did his suggestion of adding
```
[Platforms]
WindowsArguments = dpiawareness=0
```
and this sort of solved the issue for me. However, the whole UI looked blurry. So I closed spyder, changed `dpiawareness` to `1`
```
[Platforms]
WindowsArguments = dpiawareness=1
```
...and, voilà, everything looks just right.
Weird. |
35,854,289 | I got a brand-new laptop with a resolution of 3840 x 2160 running Windows 10. After I installed Anaconda + Spyder for python coding, I noticed that the icons are extremely small, as well as the text on the Object Inspector section:
[](https://i.stack.imgur.com/pDDjD.png)
Is there a way to fix this? It seems closely related to the resolution as [it is also happening with Eclipse.](https://stackoverflow.com/questions/20718093/eclipse-interface-icons-very-small-on-high-resolution-screen-in-windows-8-1)
PS: The Eclipse workarounds includes lowering resolution which is far from ideal. I was wondering if this can be fixed maintaining the resolution settings. | 2016/03/07 | [
"https://Stackoverflow.com/questions/35854289",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1193136/"
] | (*Spyder maintainer here*) Please use the screen resolution options provided by Spyder to fix this. They are present in
`Tools > Preferences > Application > Interface > Screen resolution`
for Spyder 5
`Tools > Preferences > General > Interface > Screen resolution`.
for Spyder 4 and
`Tools > Preferences > General > Appearance > Screen resolution`
for Spyder 3. | Sorry, can't comment on Djerro Neth's answer above, so here goes:
I did his suggestion of adding
```
[Platforms]
WindowsArguments = dpiawareness=0
```
and this sort of solved the issue for me. However, the whole UI looked blurry. So I closed spyder, changed `dpiawareness` to `1`
```
[Platforms]
WindowsArguments = dpiawareness=1
```
...and, voilà, everything looks just right.
Weird. |
45,713,599 | I have following 2D array
```
name_list = [['Orange', '5'],['Mango','6'],['Banana','3']]
```
I want to get each fruit name along with its count and print it using Python code. So how do I read the above array to extract the data (inside a for loop)?
I need the printout to be:
```
Name:Orange
Count:5
Name:Mango
Count:6
Name:Banana
Count:3
``` | 2017/08/16 | [
"https://Stackoverflow.com/questions/45713599",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8097728/"
] | You can unpack your list like this:
```
for name, amount in name_list:
print("Name:{}".format(name))
print("Count:{}".format(amount))
``` | Try this:
```
name_list = [['Orange', '5'],['Mango','6'],['Banana','3']]
for item in name_list:
print("Name: {}".format(item[0]))
print("Count: {}".format(item[1]))
``` |
44,679,883 | **Problem statement:** Create a dataframe with multiple columns and populate one column with a date-range series at 5-minute intervals.
**Tried solution:**
1. Created a dataframe initially with just one row / 5 columns (all "NaN").
2. **Command used to generate daterange:**
```
rf = pd.date_range('2000-1-1', periods=5, freq='5min')
```
**Output of rf:**
```
DatetimeIndex(['2000-01-01 00:00:00', '2000-01-01 00:05:00',
'2000-01-01 00:10:00', '2000-01-01 00:15:00',
'2000-01-01 00:20:00'],
dtype='datetime64[ns]', freq='5T')
```
3. When I try to assign rf to one of the columns of df (`df['column1'] = rf`), it throws an exception as shown below (copying the last line of the exception).
```
Traceback (most recent call last):
File "/root/miniconda3/lib/python3.6/site-packages/pandas/core/series.py", line 2879, in _sanitize_index
raise ValueError('Length of values does not match length of ' 'index')
```
Though I understood the issue, I don't know the solution. I'm looking for an easy way to achieve this. | 2017/06/21 | [
"https://Stackoverflow.com/questions/44679883",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4942655/"
] | I think I'm slowly coming to understand the power/usage of dataframes.
* Initially create a dataframe :
```
df = pd.DataFrame(index=range(100),columns=['A','B','C'])
```
* Then created a date\_range.
```
date = pd.date_range('2000-1-1', periods=100, freq='5T')
```
* Using the "assign" function, I added the date\_range as a new column to the already created dataframe (df).
```
df = df.assign(D=date)
```
**Final output of df:**
```
df[:5]
A B C D
0 NaN NaN NaN 2000-01-01 00:00:00
1 NaN NaN NaN 2000-01-01 00:05:00
2 NaN NaN NaN 2000-01-01 00:10:00
3 NaN NaN NaN 2000-01-01 00:15:00
4 NaN NaN NaN 2000-01-01 00:20:00
Your dataframe has only one row, but you are trying to insert data for five rows; the length of the assigned values must match the length of the index. |
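A minimal sketch of that length-matching rule (the column names here are just illustrative):

```python
import pandas as pd

df = pd.DataFrame(index=range(5), columns=['A'])   # five rows
rf = pd.date_range('2000-1-1', periods=5, freq='5min')

df['column1'] = rf        # works: 5 values assigned to 5 rows
print(len(df))            # 5

# With a single-row frame the same assignment raises
# "Length of values does not match length of index".
```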
19,042,353 | Is there a module for Python 3.3 to connect to Oracle databases? Which is the easiest to use? Something like the mysql module, but for Oracle.
Preferably version 10g, but 11g will do just fine. | 2013/09/27 | [
"https://Stackoverflow.com/questions/19042353",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2106009/"
] | **There is:** `cx_Oracle`
```
# Install --> you should have the Oracle client installed, otherwise an exception will be raised
pip install cx_Oracle
import cx_Oracle
con = cx_Oracle.connect('pythonhol/welcome@127.0.0.1/orcl')
print con.version
con.close()
```
<http://www.orafaq.com/wiki/Python>
<http://www.oracle.com/technetwork/articles/dsl/python-091105.html> | if you're using python3
```
pip3 install cx_Oracle
```
How to connect to Oracle and get the Oracle time:
```
#!/usr/bin/python3
#coding=utf8
# import module
import cx_Oracle
con = cx_Oracle.connect('username/password@databasename')
# create cursor
cursor = con.cursor()
# execute sql
cursor.execute('select sysdate from dual')
# fetch one data, or fetchall()
data = cursor.fetchone()
print('Database time:%s' % data)
# close cursor and oracle
cursor.close()
con.close()
``` |
71,367,651 | I'm studying how Selenium works. There are some elements that are fully loaded but still not clickable. Here's an example of the case: I give Selenium time until the page is fully loaded, but the button is still not clickable. I think Selenium can't find the element. How can I resolve this issue?
FYI, a pop-up opens when I click the button manually. Thank you very much for your assistance.
url
```
https://cafe.naver.com/codeuniv
```
my python code
```
from selenium import webdriver
import time
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome()
driver.get("https://cafe.naver.com/codeuniv")
time.sleep(1)
WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, '//*[@id="menuLink107"]'))).click()
time.sleep(1)
WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, '//*[@id="upperArticleList"]/table/tbody/tr[1]/td[2]/div/table/tbody/tr/td/a'))).click()
```
button
```
<a href="#" class="m-tcol-c" onclick="ui(event, 'ftKPZDy0W2UYCbEIxQ-50g',3,'코뮤','30026525','', 'false', 'true', 'codeuniv', 'false', '107'); return false;">코뮤</a>
```
ERROR
```
Traceback (most recent call last):
File "C:\project\naver.py", line 12, in <module>
WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, '//*[@id="upperArticleList"]/table/tbody/tr[1]/td[2]/div/table/tbody/tr/td/a'))).click()
File "C:\project\venv\lib\site-packages\selenium\webdriver\support\wait.py", line 89, in until
raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message:
``` | 2022/03/06 | [
"https://Stackoverflow.com/questions/71367651",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18186131/"
] | As @JohanC points out, if you're trying to draw elements that are close to or smaller than the resolution of your raster graphic, you have to expect some artifacts. But it also seems like you'd have an easier time making this plot directly in matplotlib, since `catplot` is not designed to make histograms:
```
f, ax = plt.subplots(figsize=(8, 4), dpi=96)
ax.bar(
bins[:-1], counts,
yerr=[i**(1/2) for i in counts],
width=(bins[1] - bins[0]), align="edge",
linewidth=0, error_kw=dict(linewidth=1),
)
ax.set(
xmargin=.01,
xlabel='Muon decay times ($\mu s$)',
ylabel='Count',
title='Distribution for muon decay times'
)
```
[](https://i.stack.imgur.com/ItFns.png) | Matplotlib doesn't have a good way to deal with bars that are thinner than one pixel. If you save to an image file, you can increase the dpi and/or the figsize.
Some white space is due to the bars being `0.8` wide, leaving a gap of `0.2`. Seaborn's barplot doesn't let you set the bar widths, but you could iterate through the generated bars and change their width (also updating their x-value to keep them centered around the tick position).
The edges of the bars get a fixed color (default 'none', or fully transparent). While iterating through the generated bars, you could set the edge color equal to the face color.
```py
from matplotlib import pyplot as plt
from matplotlib.ticker import MultipleLocator
import seaborn as sns
import pandas as pd
import numpy as np
bins = np.linspace(0, 20, 401)
x = np.random.exponential(2.2, 3000)
counts, _ = np.histogram(x, bins)
df = pd.DataFrame({'bin': bins[:-1], 'count': counts})
g = sns.catplot(data=df, x='bin', y='count', yerr=[i ** (1 / 2) for i in counts], kind='bar',
height=4, aspect=2, palette='Dark2_r', lw=0.5)
g.set(xlabel='Muon decay times ($\mu s$)', ylabel='Count', title='Distribution for muon decay times')
for ax in g.axes.flat:
ax.xaxis.set_major_locator(MultipleLocator(40))
ax.tick_params(axis='x', labelrotation=30)
for bar in ax.patches:
bar.set_edgecolor(bar.get_facecolor())
bar.set_x(bar.get_x() - (1 - bar.get_width()) / 2)
bar.set_width(1)
plt.tight_layout()
plt.show()
```
[](https://i.stack.imgur.com/uM8A9.png) |
51,483,774 | My input data looks like this:
```
cat start target
0 1 2016-09-01 00:00:00 4.370279
1 1 2016-09-01 00:00:00 1.367778
2 1 2016-09-01 00:00:00 0.385834
```
I want to build out a series using "start" for the start date and "target" for the series values. The iterrows() call is pulling the correct values for "imp", but when appending to the time\_series, only the first value is carried through to all series points. What's the reason for "data=imp" pulling the 0th row every time?
```
t0 = model_input_test['start'][0] # t0 = 2016-09-01 00:00:00
num_ts = len(model_input_test.index) # num_ts = 1348
time_series = []
for i, row in model_input_test.iterrows():
imp = row.loc['target']
print(imp)
index = pd.DatetimeIndex(start=t0, freq='H', periods=num_ts)
time_series.append(pd.Series(data=imp, index=index))
```
[A screenshot can be seen here](https://github.com/datavizhokie/datavizhokie.github.io/blob/master/Jupyter_ss.JPG).
Series "time\_series" should look like this:
```
2016-09-01 00:00:00 4.370279
2016-09-01 01:00:00 1.367778
2016-09-01 02:00:00 0.385834
```
But ends up looking like this:
```
2016-09-01 00:00:00 4.370279
2016-09-01 01:00:00 4.370279
2016-09-01 02:00:00 4.370279
```
I'm using Jupyter conda\_python3 on Sagemaker. | 2018/07/23 | [
"https://Stackoverflow.com/questions/51483774",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8691976/"
] | When using dataframes, there are usually better ways to go about tasks then iterating through the dataframe. For example, in your case, you can create your series like this:
```
time_series = (df.set_index(pd.date_range(pd.to_datetime(df.start).iloc[0],
periods = len(df), freq='H')))['target']
>>> time_series
2016-09-01 00:00:00 4.370279
2016-09-01 01:00:00 1.367778
2016-09-01 02:00:00 0.385834
Freq: H, Name: target, dtype: float64
>>> type(time_series)
<class 'pandas.core.series.Series'>
```
Essentially, this says: "set the index to be a date range incremented hourly from your first date, then take the `target` column" | Given a dataframe `df` and series `start` and `target`, you can simply use `set_index`:
```
time_series = df.set_index('start')['target']
``` |
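The symptom described in the question — every timestamp carrying the same first value — comes from pandas' scalar broadcasting: `pd.Series(data=imp, index=index)` with a scalar `imp` fills the whole index with that single number. A minimal illustration, using values taken from the sample data:

```python
import pandas as pd

index = pd.date_range('2016-09-01', periods=3, freq='h')

# A scalar is broadcast to every entry of the index...
s_scalar = pd.Series(data=4.370279, index=index)

# ...whereas a sequence maps one value per timestamp.
s_values = pd.Series(data=[4.370279, 1.367778, 0.385834], index=index)

print(s_scalar.nunique())  # -> 1
print(s_values.nunique())  # -> 3
```

This is why building one Series from the whole `target` column, as in the answers above, gives the expected result while appending a per-row scalar does not.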
18,434,302 | I wrote a program in Ruby that takes the user's weight/height as input. I am stuck converting it to Python. Here is my Ruby code, which works fine:
```
print "How tall are you?"
height = gets.chomp()
if height.include? "centimeters"
#truncates everything but numbers and changes the user's input to an integer
height = height.gsub(/[^0-9]/,"").to_i / 2.54
else
height = height
end
print "How much do you weigh?"
weight = gets.chomp()
if weight.include? "kilograms"
weight = weight.gsub(/[^0-9]/,"").to_i * 2.2
else
weight = weight
end
puts "So, you're #{height} inches tall and #{weight} pounds heavy."
```
Does anyone have any hints or pointers on how I can translate this? Here's my Python code:
```
print "How tall are you?",
height = raw_input()
if height.find("centimeters" or "cm")
height = int(height) / 2.54
else
height = height
print "How much do you weight?",
weight = raw_input()
if weight.find("kilograms" or "kg")
weight = int(height) * 2.2
else
weight = weight
print "So, you're %r inches tall and %r pounds heavy." %(height, weight)
```
It's not running. Here is the error I'm getting:
```
MacBook-Air:Python bdeely$ python ex11.py
How old are you? 23
How tall are you? 190cm
Traceback (most recent call last):
File "ex11.py", line 10, in <module>
height = int(height) / 2.54
ValueError: invalid literal for int() with base 10: '190cm'
``` | 2013/08/25 | [
"https://Stackoverflow.com/questions/18434302",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1940832/"
] | If you don't care too much about understanding the underlying problem, this might help:
<http://johann.loefflmann.net/en/software/jarfix/index.html>
--and a double-clickable .jar needs to have Main-Class correctly set in MANIFEST.MF | Just to reiterate - Meghan's answer fixed my problem of not being able to double-click to start a jar in Windows 7.
* Open Registry editor
* Navigate to HKEY\_CLASSES\_ROOT->jarfile->shell->open->command
* Modify (Default) to `"[Path to working JRE]/bin/javaw.exe" -jar "%1" %*`
* Make sure .jar files are opened by [Path to working JRE]/bin/javaw.exe by default |
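The same registry change can be applied in one step by merging a .reg file — a sketch, assuming a JRE installed at `C:\Program Files\Java\jre7` (adjust the path to your installation):

```
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\jarfile\shell\open\command]
@="\"C:\\Program Files\\Java\\jre7\\bin\\javaw.exe\" -jar \"%1\" %*"
```

Double-clicking the .reg file merges it after a confirmation prompt; the value can then be verified in the Registry Editor as described above.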
18,434,302 | I wrote a program in Ruby that takes the user's weight/height as input. I am stuck converting it to Python. Here is my Ruby code, which works fine:
```
print "How tall are you?"
height = gets.chomp()
if height.include? "centimeters"
#truncates everything but numbers and changes the user's input to an integer
height = height.gsub(/[^0-9]/,"").to_i / 2.54
else
height = height
end
print "How much do you weigh?"
weight = gets.chomp()
if weight.include? "kilograms"
weight = weight.gsub(/[^0-9]/,"").to_i * 2.2
else
weight = weight
end
puts "So, you're #{height} inches tall and #{weight} pounds heavy."
```
Does anyone have any hints or pointers on how I can translate this? Here's my Python code:
```
print "How tall are you?",
height = raw_input()
if height.find("centimeters" or "cm")
height = int(height) / 2.54
else
height = height
print "How much do you weight?",
weight = raw_input()
if weight.find("kilograms" or "kg")
weight = int(height) * 2.2
else
weight = weight
print "So, you're %r inches tall and %r pounds heavy." %(height, weight)
```
It's not running. Here is the error I'm getting:
```
MacBook-Air:Python bdeely$ python ex11.py
How old are you? 23
How tall are you? 190cm
Traceback (most recent call last):
File "ex11.py", line 10, in <module>
height = int(height) / 2.54
ValueError: invalid literal for int() with base 10: '190cm'
``` | 2013/08/25 | [
"https://Stackoverflow.com/questions/18434302",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1940832/"
] | Just to reiterate - Meghan's answer fixed my problem of not being able to double-click to start a jar in Windows 7.
* Open Registry editor
* Navigate to HKEY\_CLASSES\_ROOT->jarfile->shell->open->command
* Modify (Default) to `"[Path to working JRE]/bin/javaw.exe" -jar "%1" %*`
* Make sure .jar files are opened by [Path to working JRE]/bin/javaw.exe by default | You have to right-click on the jar file and select `open with...`, then `Choose default program...` (I think that's the correct translation for "Elegir programa predeterminado" in Spanish). On the `Open with` dialog select `Java(TM) Platform SE binary`. If you don't see that option, search for javaw.exe in your Java installation folder and select it. Also check `Use the selected program to open this kind of file`.
Best regards. |
18,434,302 | I wrote a program in Ruby that takes the user's weight/height as input. I am stuck converting it to Python. Here is my Ruby code, which works fine:
```
print "How tall are you?"
height = gets.chomp()
if height.include? "centimeters"
#truncates everything but numbers and changes the user's input to an integer
height = height.gsub(/[^0-9]/,"").to_i / 2.54
else
height = height
end
print "How much do you weigh?"
weight = gets.chomp()
if weight.include? "kilograms"
weight = weight.gsub(/[^0-9]/,"").to_i * 2.2
else
weight = weight
end
puts "So, you're #{height} inches tall and #{weight} pounds heavy."
```
Does anyone have any hints or pointers on how I can translate this? Here's my Python code:
```
print "How tall are you?",
height = raw_input()
if height.find("centimeters" or "cm")
height = int(height) / 2.54
else
height = height
print "How much do you weight?",
weight = raw_input()
if weight.find("kilograms" or "kg")
weight = int(height) * 2.2
else
weight = weight
print "So, you're %r inches tall and %r pounds heavy." %(height, weight)
```
It's not running. Here is the error I'm getting:
```
MacBook-Air:Python bdeely$ python ex11.py
How old are you? 23
How tall are you? 190cm
Traceback (most recent call last):
File "ex11.py", line 10, in <module>
height = int(height) / 2.54
ValueError: invalid literal for int() with base 10: '190cm'
``` | 2013/08/25 | [
"https://Stackoverflow.com/questions/18434302",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1940832/"
] | You have to right-click on the jar file and select `open with...`, then `Choose default program...` (I think that's the correct translation for "Elegir programa predeterminado" in Spanish). On the `Open with` dialog select `Java(TM) Platform SE binary`. If you don't see that option, search for javaw.exe in your Java installation folder and select it. Also check `Use the selected program to open this kind of file`.
Best regards. | According to Johann Loefflmann, the root cause of the problem above is that a program has stolen the .jar association. And he is right! Just follow the link below and install the small JARFIX tool; the association will be set back and you can double-click any jar file and run it in Windows. It worked for me (I had the same problem, and before this I tried all the solutions mentioned in the forum without success).
<https://johann.loefflmann.net/en/software/jarfix/index.html> |
18,434,302 | I wrote a program in Ruby that takes the user's weight/height as input. I am stuck converting it to Python. Here is my Ruby code, which works fine:
```
print "How tall are you?"
height = gets.chomp()
if height.include? "centimeters"
#truncates everything but numbers and changes the user's input to an integer
height = height.gsub(/[^0-9]/,"").to_i / 2.54
else
height = height
end
print "How much do you weigh?"
weight = gets.chomp()
if weight.include? "kilograms"
weight = weight.gsub(/[^0-9]/,"").to_i * 2.2
else
weight = weight
end
puts "So, you're #{height} inches tall and #{weight} pounds heavy."
```
Does anyone have any hints or pointers on how I can translate this? Here's my Python code:
```
print "How tall are you?",
height = raw_input()
if height.find("centimeters" or "cm")
height = int(height) / 2.54
else
height = height
print "How much do you weight?",
weight = raw_input()
if weight.find("kilograms" or "kg")
weight = int(height) * 2.2
else
weight = weight
print "So, you're %r inches tall and %r pounds heavy." %(height, weight)
```
It's not running. Here is the error I'm getting:
```
MacBook-Air:Python bdeely$ python ex11.py
How old are you? 23
How tall are you? 190cm
Traceback (most recent call last):
File "ex11.py", line 10, in <module>
height = int(height) / 2.54
ValueError: invalid literal for int() with base 10: '190cm'
``` | 2013/08/25 | [
"https://Stackoverflow.com/questions/18434302",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1940832/"
] | I had the same problem too. I tried to reinstall my JRE, and nothing happened. I also changed `"C:\Program Files\Java\jre1.8.0_271\bin\javaw.exe" -jar "%1" %*`, but got the same result.
In the second experiment, I ran it in CMD using `java -jar [myFileName].jar` in the directory where it was saved, and it worked. Then I realized that it worked using java.exe instead of javaw.exe. So, I changed the registry to `"C:\Program Files\Java\jre1.8.0_271\bin\java.exe" -jar "%1" %*` and it solved my problem.
I hope it helps your problem too. | You have to right-click on the jar file and select `open with...`, then `Choose default program...` (I think that's the correct translation for "Elegir programa predeterminado" in Spanish). On the `Open with` dialog select `Java(TM) Platform SE binary`. If you don't see that option, search for javaw.exe in your Java installation folder and select it. Also check `Use the selected program to open this kind of file`.
Best regards. |
18,434,302 | I wrote a program in Ruby that takes the user's weight/height as input. I am stuck converting it to Python. Here is my Ruby code, which works fine:
```
print "How tall are you?"
height = gets.chomp()
if height.include? "centimeters"
#truncates everything but numbers and changes the user's input to an integer
height = height.gsub(/[^0-9]/,"").to_i / 2.54
else
height = height
end
print "How much do you weigh?"
weight = gets.chomp()
if weight.include? "kilograms"
weight = weight.gsub(/[^0-9]/,"").to_i * 2.2
else
weight = weight
end
puts "So, you're #{height} inches tall and #{weight} pounds heavy."
```
Does anyone have any hints or pointers on how I can translate this? Here's my Python code:
```
print "How tall are you?",
height = raw_input()
if height.find("centimeters" or "cm")
height = int(height) / 2.54
else
height = height
print "How much do you weight?",
weight = raw_input()
if weight.find("kilograms" or "kg")
weight = int(height) * 2.2
else
weight = weight
print "So, you're %r inches tall and %r pounds heavy." %(height, weight)
```
It's not running. Here is the error I'm getting:
```
MacBook-Air:Python bdeely$ python ex11.py
How old are you? 23
How tall are you? 190cm
Traceback (most recent call last):
File "ex11.py", line 10, in <module>
height = int(height) / 2.54
ValueError: invalid literal for int() with base 10: '190cm'
``` | 2013/08/25 | [
"https://Stackoverflow.com/questions/18434302",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1940832/"
] | I had the same problem too. I tried to reinstall my JRE, and nothing happened. I also changed `"C:\Program Files\Java\jre1.8.0_271\bin\javaw.exe" -jar "%1" %*`, but got the same result.
In the second experiment, I ran it in CMD using `java -jar [myFileName].jar` in the directory where it was saved, and it worked. Then I realized that it worked using java.exe instead of javaw.exe. So, I changed the registry to `"C:\Program Files\Java\jre1.8.0_271\bin\java.exe" -jar "%1" %*` and it solved my problem.
I hope it helps your problem too. | According to Johann Loefflmann, the root cause of the problem above is that a program has stolen the .jar association. And he is right! Just follow the link below and install the small JARFIX tool; the association will be set back and you can double-click any jar file and run it in Windows. It worked for me (I had the same problem, and before this I tried all the solutions mentioned in the forum without success).
<https://johann.loefflmann.net/en/software/jarfix/index.html> |
18,434,302 | I wrote a program in Ruby that takes the user's weight/height as input. I am stuck converting it to Python. Here is my Ruby code, which works fine:
```
print "How tall are you?"
height = gets.chomp()
if height.include? "centimeters"
#truncates everything but numbers and changes the user's input to an integer
height = height.gsub(/[^0-9]/,"").to_i / 2.54
else
height = height
end
print "How much do you weigh?"
weight = gets.chomp()
if weight.include? "kilograms"
weight = weight.gsub(/[^0-9]/,"").to_i * 2.2
else
weight = weight
end
puts "So, you're #{height} inches tall and #{weight} pounds heavy."
```
Does anyone have any hints or pointers on how I can translate this? Here's my Python code:
```
print "How tall are you?",
height = raw_input()
if height.find("centimeters" or "cm")
height = int(height) / 2.54
else
height = height
print "How much do you weight?",
weight = raw_input()
if weight.find("kilograms" or "kg")
weight = int(height) * 2.2
else
weight = weight
print "So, you're %r inches tall and %r pounds heavy." %(height, weight)
```
It's not running. Here is the error I'm getting:
```
MacBook-Air:Python bdeely$ python ex11.py
How old are you? 23
How tall are you? 190cm
Traceback (most recent call last):
File "ex11.py", line 10, in <module>
height = int(height) / 2.54
ValueError: invalid literal for int() with base 10: '190cm'
``` | 2013/08/25 | [
"https://Stackoverflow.com/questions/18434302",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1940832/"
] | If you don't care too much about understanding the underlying problem, this might help:
<http://johann.loefflmann.net/en/software/jarfix/index.html>
--and a double-clickable .jar needs to have Main-Class correctly set in MANIFEST.MF | I had the same problem too. I tried to reinstall my JRE, and nothing happened. I also changed `"C:\Program Files\Java\jre1.8.0_271\bin\javaw.exe" -jar "%1" %*`, but got the same result.
In the second experiment, I ran it in CMD using `java -jar [myFileName].jar` in the directory where it was saved, and it worked. Then I realized that it worked using java.exe instead of javaw.exe. So, I changed the registry to `"C:\Program Files\Java\jre1.8.0_271\bin\java.exe" -jar "%1" %*` and it solved my problem.
I hope it helps your problem too. |
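For reference, the Main-Class entry mentioned in the jarfix answer above lives in META-INF/MANIFEST.MF inside the jar. A minimal sketch (the class name is a placeholder, not from the question):

```
Manifest-Version: 1.0
Main-Class: com.example.MainApp
```

Without a valid Main-Class line, double-clicking the jar fails even when the .jar file association itself is correct.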
18,434,302 | I wrote a program in Ruby that takes the user's weight/height as input. I am stuck converting it to Python. Here is my Ruby code, which works fine:
```
print "How tall are you?"
height = gets.chomp()
if height.include? "centimeters"
#truncates everything but numbers and changes the user's input to an integer
height = height.gsub(/[^0-9]/,"").to_i / 2.54
else
height = height
end
print "How much do you weigh?"
weight = gets.chomp()
if weight.include? "kilograms"
weight = weight.gsub(/[^0-9]/,"").to_i * 2.2
else
weight = weight
end
puts "So, you're #{height} inches tall and #{weight} pounds heavy."
```
Does anyone have any hints or pointers on how I can translate this? Here's my Python code:
```
print "How tall are you?",
height = raw_input()
if height.find("centimeters" or "cm")
height = int(height) / 2.54
else
height = height
print "How much do you weight?",
weight = raw_input()
if weight.find("kilograms" or "kg")
weight = int(height) * 2.2
else
weight = weight
print "So, you're %r inches tall and %r pounds heavy." %(height, weight)
```
It's not running. Here is the error I'm getting:
```
MacBook-Air:Python bdeely$ python ex11.py
How old are you? 23
How tall are you? 190cm
Traceback (most recent call last):
File "ex11.py", line 10, in <module>
height = int(height) / 2.54
ValueError: invalid literal for int() with base 10: '190cm'
``` | 2013/08/25 | [
"https://Stackoverflow.com/questions/18434302",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1940832/"
] | I had the same problem, and it turns out that the .jar file association in the registry was broken. The following steps fixed it:
* Open the Registry Editor, and navigate to `HKEY_CLASSES_ROOT\jarfile\shell\open\command`, and modify the value of the Default key as follows
* Replace `[Location of your JRE]` in `"[Location of your JRE]\bin\javaw.exe" -jar "%1" %*` with the root directory of your JRE
installation. For example, `"C:\Program Files\Java\jre7\bin\javaw.exe" -jar "%1" %*`.
* Repeat the above steps for `HKEY_LOCAL_MACHINE\SOFTWARE\Classes\jarfile\shell\open\command`.
I have a Windows 7 machine, but I assume this would work for 32-bit environments as well.
EDIT: Fixed the path in the example - it should use the "Program Files" directory and not "Program Files (x86)". | I had the same problem too. I tried to reinstall my JRE, and nothing happened. I also changed `"C:\Program Files\Java\jre1.8.0_271\bin\javaw.exe" -jar "%1" %*`, but got the same result.
In the second experiment, I ran it in CMD using `java -jar [myFileName].jar` in the directory where it was saved, and it worked. Then I realized that it worked using java.exe instead of javaw.exe. So, I changed the registry to `"C:\Program Files\Java\jre1.8.0_271\bin\java.exe" -jar "%1" %*` and it solved my problem.
I hope it helps your problem too. |
18,434,302 | I wrote a program in Ruby that takes the user's weight/height as input. I am stuck converting it to Python. Here is my Ruby code, which works fine:
```
print "How tall are you?"
height = gets.chomp()
if height.include? "centimeters"
#truncates everything but numbers and changes the user's input to an integer
height = height.gsub(/[^0-9]/,"").to_i / 2.54
else
height = height
end
print "How much do you weigh?"
weight = gets.chomp()
if weight.include? "kilograms"
weight = weight.gsub(/[^0-9]/,"").to_i * 2.2
else
weight = weight
end
puts "So, you're #{height} inches tall and #{weight} pounds heavy."
```
Does anyone have any hints or pointers on how I can translate this? Here's my Python code:
```
print "How tall are you?",
height = raw_input()
if height.find("centimeters" or "cm")
height = int(height) / 2.54
else
height = height
print "How much do you weight?",
weight = raw_input()
if weight.find("kilograms" or "kg")
weight = int(height) * 2.2
else
weight = weight
print "So, you're %r inches tall and %r pounds heavy." %(height, weight)
```
It's not running. Here is the error I'm getting:
```
MacBook-Air:Python bdeely$ python ex11.py
How old are you? 23
How tall are you? 190cm
Traceback (most recent call last):
File "ex11.py", line 10, in <module>
height = int(height) / 2.54
ValueError: invalid literal for int() with base 10: '190cm'
``` | 2013/08/25 | [
"https://Stackoverflow.com/questions/18434302",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1940832/"
] | You have to right-click on the jar file and select `open with...`, then `Choose default program...` (I think that's the correct translation for "Elegir programa predeterminado" in Spanish). On the `Open with` dialog select `Java(TM) Platform SE binary`. If you don't see that option, search for javaw.exe in your Java installation folder and select it. Also check `Use the selected program to open this kind of file`.
Best regards. | Right-click the jar and open it with WinZip or WinRAR.
18,434,302 | I wrote a program in Ruby that takes the user's weight/height as input. I am stuck converting it to Python. Here is my Ruby code, which works fine:
```
print "How tall are you?"
height = gets.chomp()
if height.include? "centimeters"
#truncates everything but numbers and changes the user's input to an integer
height = height.gsub(/[^0-9]/,"").to_i / 2.54
else
height = height
end
print "How much do you weigh?"
weight = gets.chomp()
if weight.include? "kilograms"
weight = weight.gsub(/[^0-9]/,"").to_i * 2.2
else
weight = weight
end
puts "So, you're #{height} inches tall and #{weight} pounds heavy."
```
Does anyone have any hints or pointers on how I can translate this? Here's my Python code:
```
print "How tall are you?",
height = raw_input()
if height.find("centimeters" or "cm")
height = int(height) / 2.54
else
height = height
print "How much do you weight?",
weight = raw_input()
if weight.find("kilograms" or "kg")
weight = int(height) * 2.2
else
weight = weight
print "So, you're %r inches tall and %r pounds heavy." %(height, weight)
```
It's not running. Here is the error I'm getting:
```
MacBook-Air:Python bdeely$ python ex11.py
How old are you? 23
How tall are you? 190cm
Traceback (most recent call last):
File "ex11.py", line 10, in <module>
height = int(height) / 2.54
ValueError: invalid literal for int() with base 10: '190cm'
``` | 2013/08/25 | [
"https://Stackoverflow.com/questions/18434302",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1940832/"
] | I had the same problem, and it turns out that the .jar file association in the registry was broken. The following steps fixed it:
* Open the Registry Editor, and navigate to `HKEY_CLASSES_ROOT\jarfile\shell\open\command`, and modify the value of the Default key as follows
* Replace `[Location of your JRE]` in `"[Location of your JRE]\bin\javaw.exe" -jar "%1" %*` with the root directory of your JRE
installation. For example, `"C:\Program Files\Java\jre7\bin\javaw.exe" -jar "%1" %*`.
* Repeat the above steps for `HKEY_LOCAL_MACHINE\SOFTWARE\Classes\jarfile\shell\open\command`.
I have a Windows 7 machine, but I assume this would work for 32-bit environments as well.
EDIT: Fixed the path in the example - it should use the "Program Files" directory and not "Program Files (x86)". | You have to right-click on the jar file and select `open with...`, then `Choose default program...` (I think that's the correct translation for "Elegir programa predeterminado" in Spanish). On the `Open with` dialog select `Java(TM) Platform SE binary`. If you don't see that option, search for javaw.exe in your Java installation folder and select it. Also check `Use the selected program to open this kind of file`.
Best regards. |
18,434,302 | I wrote a program in Ruby that takes the user's weight/height as input. I am stuck converting it to Python. Here is my Ruby code, which works fine:
```
print "How tall are you?"
height = gets.chomp()
if height.include? "centimeters"
#truncates everything but numbers and changes the user's input to an integer
height = height.gsub(/[^0-9]/,"").to_i / 2.54
else
height = height
end
print "How much do you weigh?"
weight = gets.chomp()
if weight.include? "kilograms"
weight = weight.gsub(/[^0-9]/,"").to_i * 2.2
else
weight = weight
end
puts "So, you're #{height} inches tall and #{weight} pounds heavy."
```
Does anyone have any hints or pointers on how I can translate this? Here's my Python code:
```
print "How tall are you?",
height = raw_input()
if height.find("centimeters" or "cm")
height = int(height) / 2.54
else
height = height
print "How much do you weight?",
weight = raw_input()
if weight.find("kilograms" or "kg")
weight = int(height) * 2.2
else
weight = weight
print "So, you're %r inches tall and %r pounds heavy." %(height, weight)
```
It's not running. Here is the error I'm getting:
```
MacBook-Air:Python bdeely$ python ex11.py
How old are you? 23
How tall are you? 190cm
Traceback (most recent call last):
File "ex11.py", line 10, in <module>
height = int(height) / 2.54
ValueError: invalid literal for int() with base 10: '190cm'
``` | 2013/08/25 | [
"https://Stackoverflow.com/questions/18434302",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1940832/"
] | If you don't care too much about understanding the underlying problem, this might help:
<http://johann.loefflmann.net/en/software/jarfix/index.html>
--and a double-clickable .jar needs to have Main-Class correctly set in MANIFEST.MF | You have to right-click on the jar file and select `open with...`, then `Choose default program...` (I think that's the correct translation for "Elegir programa predeterminado" in Spanish). On the `Open with` dialog select `Java(TM) Platform SE binary`. If you don't see that option, search for javaw.exe in your Java installation folder and select it. Also check `Use the selected program to open this kind of file`.
Best regards. |
71,350,406 | I have this Python function to upload a file through SFTP. It works fine.
```
def sftp_upload(destination, username, password,
remote_loc, source_file):
import pysftp
with pysftp.Connection(destination, username,
password, log="pysftp.log") as sftp:
sftp.cwd(remote_loc)
sftp.put(source_file)
sftp.close()
return None
```
The code works as expected. However, I always receive this error at the end `ImportError: sys.meta_path is None, Python is likely shutting down`.
How do I remove this error? I'm also puzzled why the code runs smoothly to the end despite the error.
In the log file, I saw the following;
```
INF [20220304-18:49:14.727] thr=2 paramiko.transport.sftp: [chan 0] sftp session closed.
DEB [20220304-18:49:14.727] thr=2 paramiko.transport: [chan 0] EOF sent (0)
DEB [20220304-18:49:14.728] thr=1 paramiko.transport: EOF in transport thread
```
Here's the stack trace;
```
Exception ignored in: <function Connection.__del__ at 0x000001A8B08765E0>
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\site-packages\pysftp\__init__.py", line 1013, in __del__
File "C:\ProgramData\Anaconda3\lib\site-packages\pysftp\__init__.py", line 795, in close
ImportError: sys.meta_path is None, Python is likely shutting down
```
I am using python v3.9 | 2022/03/04 | [
"https://Stackoverflow.com/questions/71350406",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3848207/"
] | *Note: `import pysftp` outside the function somehow resolves the issue for me.*
---
It's a bug in `pysftp==0.2.9`.
You can fix it by overriding `close()` to only run once:
```py
class SFTPConnection(pysftp.Connection):
def close(self):
if getattr(self, '_closed', False):
return
self._closed = True
super().close()
```
Usage:
```py
# with pysftp.Connection(destination, username, password=password, log="pysftp.log") as sftp: # -
with SFTPConnection(destination, username, password=password, log="pysftp.log") as sftp: # +
```
References:
* [https://bitbucket.org/dundeemt/pysftp/src/1c07917/pysftp/\_\_init\_\_.py#lines-795](https://bitbucket.org/dundeemt/pysftp/src/1c0791759688a733a558b1a25d9ae04f52cf6a64/pysftp/__init__.py#lines-795)
* <https://github.com/paramiko/paramiko/issues/1948>
* [https://stackoverflow.com/questions/68737761/upload-file-to-sftp-directly-without-storing-it-into-local-system/68738689#68738689](https://stackoverflow.com/a/68738689/8601760) | It looks like your program ends before the `sftp` object is garbage-collected.
Then, the `sftp.__del__` method is called at the middle of the program's teardown, which is causing the error.
From [pysftp.py](https://github.com/ryhsiao/pysftp/blob/685231499b2aea897b01cab370d847907647e40b/pysftp.py) code:
```
def __del__(self):
"""Attempt to clean up if not explicitly closed."""
self.close()
```
I personally think that it should be considered a bug in the *pysftp* project.
**I can think of two workarounds:**
1. Override the `__del__` method:
`pysftp.Connection.__del__ = lambda x: None`
2. (Less recommended - less efficient) Explicitly delete the `sftp` object and trigger garbage collection:
`del sftp; import gc; gc.collect()` right after the `with` block |
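Both answers above boil down to making teardown idempotent — guard `close()` with a flag so that `__del__` can fire after an explicit `close()` without re-running the cleanup. The pattern can be illustrated without pysftp (`Resource` here is a made-up stand-in, not a pysftp class):

```python
class Resource:
    """Made-up stand-in for pysftp.Connection to show the teardown pattern."""

    def __init__(self):
        self.close_calls = 0
        self._closed = False

    def close(self):
        # Guard with a flag so repeated calls (explicit close, then
        # __del__ during interpreter shutdown) never re-run teardown.
        if self._closed:
            return
        self._closed = True
        self.close_calls += 1  # real cleanup would happen here

    def __del__(self):
        # Safe even when close() was already called explicitly.
        self.close()


r = Resource()
r.close()
r.close()  # second call is a no-op
print(r.close_calls)  # -> 1
```

Since `__del__` now hits the guard instead of touching modules that may already be torn down, the "Python is likely shutting down" error cannot be raised from the second close.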
24,092,084 | I've been trying to port a Maya-based Python project over to PyCharm, but I'm having trouble running unit tests.
Maya provides its own Python interpreter (mayapy.exe) with a zipped version of the Python stdlib (in this case, 'Python27.zip'). AFAIK there's nothing special about the stdlib here, but to run the native Maya functions you have to use MayaPy rather than a generic Python.
The problem appears to be that the jetBrains test runner (utRunner.py) wants to get os.system and it's barfing because it uses a specific import routine that doesn't allow for zip files. It tries this:
```
def import_system_module(name):
    if sys.platform == "cli":  # hack for the ironpython
        return __import__(name)
    f, filename, desc = imp.find_module(name)
    return imp.load_module('pycharm_' + name, f, filename, desc)
```
and fails with this error:
```
ImportError: No module named os
```
I think because this is bypassing the zip import hook.
There's [one solution posted here](http://eoyilmaz.blogspot.com/2014/02/pycharm-pymel-and-maya-and-you-know.html), which is basically to unzip the standard library zip. I'm reluctant to do that because I might need to run the tests on machines where I don't have admin rights. I'm also reluctant to patch the code above since I'm not clear how it fits in to the whole test process.
So: how to run tests with a zipped standardlib using PyCharm, without unzipping the library or tweaking the PyCharm install too much? | 2014/06/06 | [
"https://Stackoverflow.com/questions/24092084",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1936075/"
] | For lurkers: I was unable to find a better solution than the one linked above, so it was necessary to unzip the 2.7 standard library into a loose folder. Inelegant, but it works.
There was a further wrinkle that Maya users need to watch out for: PyCharm does not like tests which run Maya.standalone -- the standalone session did not exit properly, so when running tests (in one or more files) that called
```
import maya.standalone
maya.standalone.initialize()
```
The pycharm test runner would hang on completion. After much frustration I found that adding an `atexit` handler to the test code would allow the standalone to exit in a way that PyCharm could tolerate:
```
def get_out_of_maya():
    try:
        import maya.cmds as cmds
        cmds.file(new=True, force=True)
    except:
        pass
    os._exit(0)  # note underscore
import atexit
atexit.register(get_out_of_maya)
```
This pre-empts the atexit hook in Maya and allows the tests to complete to the satisfaction of the Pycharm runner. FWIW, it also helps if you are running MayaPy.exe from a subprocess and executing your tests that way. | I ended up just editing Pycharm's utrunner.py file. It already imports os at the top of the file, so I'm not sure why it calls import\_system\_module. The import command automatically handles zip files. Also if you put the maya.standalone in the runner file, you don't need to call it in any of your test files.
```
#os = import_system_module("os")
#re = import_system_module("re")
import re
try:
    import maya.standalone
    maya.standalone.initialize()
except ImportError:
    pass
```
I'm using Pycharm 5.0.1. |
64,345,222 | I'm a python beginner, I was writing a program to get a particular sequence. For example, given the input `5`, it should output the following:
```
12345
2345
345
45
5
```
This is my program:
```
b = int(input("Enter the value"))
i = 0
c = 1
while i <= b:
    for g in range(c, b+1):
        print(g, end='')
    c = c + 1
    i = i + 1
```
This program returns the following:
```
123452345345455
```
How do I print this like the pattern above? Note that if I don't use `end=''`, then all the numbers are printed individually. | 2020/10/14 | [
"https://Stackoverflow.com/questions/64345222",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14445761/"
] | Very close!
With `end=''` everything was conjoined; adding a print statement to the `while` loop spaces each iteration out.
```
while i <= b:
    for g in range(c, b+1):
        print(g, end='')
    c = c + 1
    i = i + 1
    print()
``` | Add this line at the bottom:
```
print()
```
It will add a newline, separating out the numbers. |
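Putting the pieces together, here is a sketch of a corrected program (with `b = 5` hard-coded instead of `input()` for illustration, and the `print()` placed after the inner loop so each row ends with a newline):

```python
b = 5  # stand-in for int(input("Enter the value"))
for start in range(1, b + 1):
    for g in range(start, b + 1):
        print(g, end='')
    print()  # newline after each row
```

This prints the expected triangle: `12345`, `2345`, `345`, `45`, `5`, one row per line.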
64,345,222 | I'm a python beginner, I was writing a program to get a particular sequence. For example, given the input `5`, it should output the following:
```
12345
2345
345
45
5
```
This is my program:
```
b = int(input("Enter the value"))
i = 0
c = 1
while i <= b:
    for g in range(c, b+1):
        print(g, end='')
    c = c + 1
    i = i + 1
```
This program returns the following:
```
123452345345455
```
How do I print this like the pattern above? Note that if I don't use `end=''`, then all the numbers are printed individually. | 2020/10/14 | [
"https://Stackoverflow.com/questions/64345222",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14445761/"
] | Very close!
With `end=''` everything was conjoined; adding a print statement to the `while` loop spaces each iteration out.
```
while i <= b:
    for g in range(c, b+1):
        print(g, end='')
    c = c + 1
    i = i + 1
    print()
``` | For b from 0 to 9:
```
b = int(input("Enter the value:"))
s=''.join([str(i) for i in range(1,b+1)])
for i in range(b):
    print(s[i:b])
``` |
64,345,222 | I'm a python beginner, I was writing a program to get a particular sequence. For example, given the input `5`, it should output the following:
```
12345
2345
345
45
5
```
This is my program:
```
b = int(input("Enter the value"))
i = 0
c = 1
while i <= b:
    for g in range(c, b+1):
        print(g, end='')
    c = c + 1
    i = i + 1
```
This program returns the following:
```
123452345345455
```
How do I print this like the pattern above? Note that if I don't use `end=''`, then all the numbers are printed individually. | 2020/10/14 | [
"https://Stackoverflow.com/questions/64345222",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14445761/"
] | Very close!
With `end=''` everything was conjoined; adding a print statement to the `while` loop spaces each iteration out.
```
while i <= b:
    for g in range(c, b+1):
        print(g, end='')
    c = c + 1
    i = i + 1
    print()
``` | What do you think about this solution?
```
>>> mynumber=12345
>>> a = str(mynumber)
>>> list(str(a[i:]) for i in range(len(a)))
```
Result should be:
`['12345', '2345', '345', '45', '5']`
[cmd image of the result](https://i.stack.imgur.com/W02gv.png) |
29,934,201 | I had thought that if you ran perhaps `print mdarray[::][1]`, you would print the first sub-element of every element in the array. Where did I go wrong with this?
I especially need this for a `p.plot(x,y[::][1])` where I definitely do *not* want to use a for loop, as it is horribly slow, unless I'm getting things confused.
What am I getting wrong? Thanks!
**EDIT**
I still don't know where I got the [::] thing but I solved my problem with either
`p.plot(x,c[:,1],color='g',label="Closing value")`
or
```
p.plot(x,[i[1] for i in c],color='g',label="Closing value")
```
There doesn't seem to be any appreciable difference in time, so I guess I'll use the second because it looks more pythonic/readable to me. Or am I missing something?
Thanks for all of the help! | 2015/04/29 | [
"https://Stackoverflow.com/questions/29934201",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4333608/"
] | What you did:
You used `mdarray[::]`. That makes a (shallow) copy of `mdarray`. Then you accessed the **second** element of it with `[1]`. `[0]` would be the first.
What you can do is a list comprehension:
```
[item[0] for item in mdarray]
```
This will return a list of the first elements of the lists in `mdarray`.
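A quick demonstration with toy data of why `[::]` followed by `[1]` picks a row rather than a column:

```python
mdarray = [[1, 2], [3, 4], [5, 6]]

print(mdarray[::] == mdarray)        # True: [::] is just a shallow copy
print(mdarray[::][1])                # [3, 4]: so [1] selects the *second row*
print([row[0] for row in mdarray])   # [1, 3, 5]: first element of every row
```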
Talking about loops: A (one time) loop is rather effective to access something. Internally all the magic functions (like the comprehension above) are iterating over the data. | How about:
```
>>> Matrix = [[x for x in range(5)] for x in range(5)]
>>> Matrix
[[0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4]]
>>> [item[0] for item in Matrix]
[0, 0, 0, 0, 0]
```
As for `::`, you can read more about it [here](https://stackoverflow.com/questions/3453085/what-is-double-colon-in-python-when-subscripting-sequences), It will return the same list. |
29,934,201 | I had thought that if you ran perhaps `print mdarray[::][1]`, you would print the first sub-element of every element in the array. Where did I go wrong with this?
I especially need this for a `p.plot(x,y[::][1])` where I definitely do *not* want to use a for loop, as it is horribly slow, unless I'm getting things confused.
What am I getting wrong? Thanks!
**EDIT**
I still don't know where I got the [::] thing but I solved my problem with either
`p.plot(x,c[:,1],color='g',label="Closing value")`
or
```
p.plot(x,[i[1] for i in c],color='g',label="Closing value")
```
There doesn't seem to be any appreciable difference in time, so I guess I'll use the second because it looks more pythonic/readable to me. Or am I missing something?
Thanks for all of the help! | 2015/04/29 | [
"https://Stackoverflow.com/questions/29934201",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4333608/"
] | If `mdarray` is a numpy array you can access first column of it with `mdarray[:,0]`
```
In [8]: mdarray = np.array([[1, 2, 4], [4, 5, 6], [7, 8, 9]])
In [9]: mdarray
Out[9]:
array([[1, 2, 4],
[4, 5, 6],
[7, 8, 9]])
In [10]: mdarray[:,0]
Out[10]: array([1, 4, 7])
```
**UPD**
Quick and dirty test
```
In [28]: mdarray = np.zeros((10000,10000))
In [29]: %timeit -n1000 [x[0] for x in mdarray]
1000 loops, best of 3: 2.7 ms per loop
In [30]: %timeit -n1000 mdarray[:,0]
1000 loops, best of 3: 567 ns per loop
``` | How about:
```
>>> Matrix = [[x for x in range(5)] for x in range(5)]
>>> Matrix
[[0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4]]
>>> [item[0] for item in Matrix]
[0, 0, 0, 0, 0]
```
As for `::`, you can read more about it [here](https://stackoverflow.com/questions/3453085/what-is-double-colon-in-python-when-subscripting-sequences), It will return the same list. |
29,934,201 | I had thought that if you ran perhaps `print mdarray[::][1]`, you would print the first sub-element of every element in the array. Where did I go wrong with this?
I especially need this for a `p.plot(x,y[::][1])` where I definitely do *not* want to use a for loop, as it is horribly slow, unless I'm getting things confused.
What am I getting wrong? Thanks!
**EDIT**
I still don't know where I got the [::] thing but I solved my problem with either
`p.plot(x,c[:,1],color='g',label="Closing value")`
or
```
p.plot(x,[i[1] for i in c],color='g',label="Closing value")
```
There doesn't seem to be any appreciable difference in time, so I guess I'll use the second because it looks more pythonic/readable to me. Or am I missing something?
Thanks for all of the help! | 2015/04/29 | [
"https://Stackoverflow.com/questions/29934201",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4333608/"
] | How about:
```
>>> Matrix = [[x for x in range(5)] for x in range(5)]
>>> Matrix
[[0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4]]
>>> [item[0] for item in Matrix]
[0, 0, 0, 0, 0]
```
As for `::`, you can read more about it [here](https://stackoverflow.com/questions/3453085/what-is-double-colon-in-python-when-subscripting-sequences), It will return the same list. | Not sure whether you use array or list, but for Python's lists:
Python 2:
```
>>> mdarray = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
>>> zip(*mdarray)[0]
(1, 4, 7)
```
Python 3:
```
>>> mdarray = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
>>> list(zip(*mdarray))[0]
(1, 4, 7)
```
Or for the special case of index 0:
```
>>> mdarray = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
>>> next(zip(*mdarray))
(1, 4, 7)
``` |
29,934,201 | I had thought that if you ran perhaps `print mdarray[::][1]`, you would print the first sub-element of every element in the array. Where did I go wrong with this?
I especially need this for a `p.plot(x,y[::][1])` where I definitely do *not* want to use a for loop, as it is horribly slow, unless I'm getting things confused.
What am I getting wrong? Thanks!
**EDIT**
I still don't know where I got the [::] thing but I solved my problem with either
`p.plot(x,c[:,1],color='g',label="Closing value")`
or
```
p.plot(x,[i[1] for i in c],color='g',label="Closing value")
```
There doesn't seem to be any appreciable difference in time, so I guess I'll use the second because it looks more pythonic/readable to me. Or am I missing something?
Thanks for all of the help! | 2015/04/29 | [
"https://Stackoverflow.com/questions/29934201",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4333608/"
] | What you did:
You used `mdarray[::]`. That makes a (shallow) copy of `mdarray`. Then you accessed the **second** element of it with `[1]`. `[0]` would be the first.
What you can do is a list comprehension:
```
[item[0] for item in mdarray]
```
This will return a list of the first elements of the lists in `mdarray`.
Talking about loops: A (one time) loop is rather effective to access something. Internally all the magic functions (like the comprehension above) are iterating over the data. | Not sure whether you use array or list, but for Python's lists:
Python 2:
```
>>> mdarray = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
>>> zip(*mdarray)[0]
(1, 4, 7)
```
Python 3:
```
>>> mdarray = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
>>> list(zip(*mdarray))[0]
(1, 4, 7)
```
Or for the special case of index 0:
```
>>> mdarray = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
>>> next(zip(*mdarray))
(1, 4, 7)
``` |
29,934,201 | I had thought that if you ran perhaps `print mdarray[::][1]`, you would print the first sub-element of every element in the array. Where did I go wrong with this?
I especially need this for a `p.plot(x,y[::][1])` where I definitely do *not* want to use a for loop, as it is horribly slow, unless I'm getting things confused.
What am I getting wrong? Thanks!
**EDIT**
I still don't know where I got the [::] thing but I solved my problem with either
`p.plot(x,c[:,1],color='g',label="Closing value")`
or
```
p.plot(x,[i[1] for i in c],color='g',label="Closing value")
```
There doesn't seem to be any appreciable difference in time, so I guess I'll use the second because it looks more pythonic/readable to me. Or am I missing something?
Thanks for all of the help! | 2015/04/29 | [
"https://Stackoverflow.com/questions/29934201",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4333608/"
] | If `mdarray` is a numpy array you can access first column of it with `mdarray[:,0]`
```
In [8]: mdarray = np.array([[1, 2, 4], [4, 5, 6], [7, 8, 9]])
In [9]: mdarray
Out[9]:
array([[1, 2, 4],
[4, 5, 6],
[7, 8, 9]])
In [10]: mdarray[:,0]
Out[10]: array([1, 4, 7])
```
**UPD**
Quick and dirty test
```
In [28]: mdarray = np.zeros((10000,10000))
In [29]: %timeit -n1000 [x[0] for x in mdarray]
1000 loops, best of 3: 2.7 ms per loop
In [30]: %timeit -n1000 mdarray[:,0]
1000 loops, best of 3: 567 ns per loop
``` | Not sure whether you use array or list, but for Python's lists:
Python 2:
```
>>> mdarray = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
>>> zip(*mdarray)[0]
(1, 4, 7)
```
Python 3:
```
>>> mdarray = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
>>> list(zip(*mdarray))[0]
(1, 4, 7)
```
Or for the special case of index 0:
```
>>> mdarray = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
>>> next(zip(*mdarray))
(1, 4, 7)
``` |
65,035,634 | When using an IDE like PyCharm, Python is called via pydevd.py (as its parent?).
On a command prompt like cmd.exe, scripts can use the prompt color sequences, e.g. "\033[0m".
Well, within PyCharm it looks strange when using those.
The question is: can you find out who called the script, i.e. whether it is running on the command line or not, so you can use those sequences properly. | 2020/11/27 | [
"https://Stackoverflow.com/questions/65035634",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9266851/"
] | It seems like Swashbuckle doesn't use the `JsonSerializerOptions` to generate the docs. One workaround I found is to handle the types manually:
```cs
public class FieldsSchemaFilter : ISchemaFilter
{
    public void Apply(OpenApiSchema schema, SchemaFilterContext context)
    {
        var fields = context.Type.GetFields();
        if (fields == null) return;
        if (fields.Length == 0) return;
        foreach (var field in fields)
        {
            schema.Properties[field.Name] = new OpenApiSchema
            {
                // this should be mapped to an OpenApiSchema type
                Type = field.FieldType.Name
            };
        }
    }
}
```
Then in your Startup.cs ConfigureServices:
```cs
services.AddSwaggerGen(c =>
{
    c.SwaggerDoc("v1", new OpenApiInfo { Title = "WebApplication1", Version = "v1" });
    c.SchemaFilter<FieldsSchemaFilter>();
});
```
When stepping through, you'll see the `JsonSerializerOptions` used in the `SchemaFilterContext` (`SchemaGenerator`). `IncludeFields` is set to true. Still only properties are used for docs, so I guess a filter like that is your best bet. | The issue has nothing to do with Swagger, it is a pure serialization issue.
You have 3 solutions:
1. Write your own customized json for vector. (just concept)
2. Use a customized object with primitive types and map it. (just concept)
3. Use Newtonsoft.Json (suggested solution)
According to the Microsoft [doc](https://learn.microsoft.com/en-us/dotnet/standard/serialization/system-text-json-migrate-from-newtonsoft-how-to?pivots=dotnet-5-0), you can see in the comparison list that `System.Text.Json` might have some limitations.
>
> **If you want the suggested solution, jump directly to solution 3.**
>
>
>
Let's take the first concept, a custom [serializer](https://learn.microsoft.com/en-us/dotnet/standard/serialization/system-text-json-converters-how-to?pivots=dotnet-5-0). Btw, this custom example is just for demonstration and not a full solution.
So what you can do is the following:
1. Create a custom vector model, `CustomVector`.
2. Create a custom `VectorConverter` class that extends `JsonConverter`.
3. Add some mapping.
4. Put the `VectorConverter` attribute on the vector property.
Here is my attempt at a CustomVector:
```
public class CustomVector
{
    public float? X { get; set; }
    public float? Y { get; set; }
    public float? Z { get; set; }
}
```
And custom VectorConverter:
```
public class VectorConverter : JsonConverter<Vector3>
{
    public override Vector3 Read(ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options)
    {
        // just for now
        return new Vector3();
    }

    public override void Write(Utf8JsonWriter writer, Vector3 data, JsonSerializerOptions options)
    {
        // just for now
        var customVector = new CustomVector
        {
            X = data.X,
            Y = data.Y,
            Z = data.Z
        };
        var result = JsonSerializer.Serialize(customVector);
        writer.WriteStringValue(result);
    }
}
```
And to your vector property, add the following attribute:
```
[JsonConverter(typeof(VectorConverter))]
public Vector3 Vector { get; set; }
```
This will return the following result:
[](https://i.stack.imgur.com/0nwkg.png)
Now this solves part of the issue; if you want to post a vector object, you will have another challenge, which also depends on your implementation logic.
Therefore comes my second solution attempt, where we expose our custom vector, ignore Vector3 in JSON, and map it to/from Vector3 in our code:
Since we have introduced a `CustomVector`, we can use that instead of Vector3 in our model, then map it to our Vector3.
```
public class Test
{
    public string Field { get; set; }
    public string Property { get; set; }

    [JsonIgnore]
    public Vector3 Vector { get; set; }

    public CustomVector CustomVector { get; set; }
}
```
[](https://i.stack.imgur.com/YbGG8.png)
Here is a get and post method with mapping example:
```
[HttpGet]
public Test Get()
{
    var vector = new CustomVector() { X = 1, Y = 1, Z = 1 };
    var test = new Test
    {
        Field = "Field",
        Property = "Property",
        CustomVector = vector
    };
    VectorMapping(test);
    return test;
}

[HttpPost]
public Test Post(Test test)
{
    VectorMapping(test);
    return test;
}

private static void VectorMapping(Test test)
{
    test.Vector = new Vector3
    {
        X = test.CustomVector.X.GetValueOrDefault(),
        Y = test.CustomVector.Y.GetValueOrDefault(),
        Z = test.CustomVector.Z.GetValueOrDefault()
    };
}
```
The downside of the first solution is that we need to write fully customized serialization, and in our second solution we have introduced an extra model and mapping.
### The suggested solution
Therefore I suggest the following 3rd attempt:
Keep everything you have as it is in your solution, just add the nuget `Swashbuckle.AspNetCore.Newtonsoft` to your project, like:
```
<PackageReference Include="Swashbuckle.AspNetCore.Newtonsoft" Version="5.6.3" />
```
And in your startup
```
services.AddSwaggerGenNewtonsoftSupport();
```
Fire up, and this will generate the documentation, as it allows serializing and deserializing Vector3 and other class types that are not supported by `System.Text.Json`.
As you can see, this now includes Vector3 in the documentation:
[](https://i.stack.imgur.com/K4X3C.png)
I am pretty sure this can be done in other ways. So these are my attempts at solving it. |
13,137,449 | I'm working on a scientific experiment where about two dozen test persons play a turn-based game with/against each other. Right now, it's a Python web app with a WSGI interface. I'd like to augment the usability with websockets: When all players have finished their turns, I'd like to notify all clients to update their status. Right now, everyone has to either wait for the turn timeout, or continually reload and wait for the "turn is still in progress" error message not to appear again (busy waiting, effectively).
I read through multiple websocket libraries' documentation and I understand how websockets work, but I'm not sure about the architecture for mixing WSGI and websockets: Can I have a websockets and a WSGI server in the same process (and if so, how, using really any websockets library) and just call `my_websocket.send_message()` from a WSGI handler, or should I have a separate websockets server and do some IPC? Or should I not mix them at all?
**edit, 6 months later:** I ended up starting a separate websockets server process (using Autobahn), instead of integrating it with the WSGI server. The reason was that it's much easier and cleaner to separate the two of them, and talking to the websockets server from the WSGI process (server to server communication) was straight forward and worked on the first attempt using [websocket-client](https://pypi.python.org/pypi/websocket-client). | 2012/10/30 | [
"https://Stackoverflow.com/questions/13137449",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/22404/"
] | Here is an example that does what you want:
* <https://github.com/tavendo/AutobahnPython/tree/master/examples/twisted/websocket/echo_wsgi>
It runs a WSGI web app (Flask-based in this case, but can be anything WSGI conforming) plus a WebSocket server under 1 server and 1 port.
You can send WS messages from within Web handlers. Autobahn also provides PubSub on top of WebSocket, which greatly simplifies the sending of notifications (via `WampServerProtocol.dispatch`) like in your case.
* <http://autobahn.ws/python>
Disclosure: I am author of Autobahn and work for Tavendo. | >
> but I'm not sure about the architecture for mixing WSGI and websockets
>
>
>
**I made it**
**use [WSocket](https://wsocket.gitbook.io/)**
Simple WSGI HTTP + Websocket Server, Framework, Middleware And App.
-------------------------------------------------------------------
Includes
--------
* Server(WSGI) included - works with any WSGI framework
* Middleware - adds Websocket support for any WSGI framework
* Framework - simple Websocket WSGI web application framework
* App - Event based app for Websocket communication
**When an external server is used, some clients like Firefox require an `http 1.1` server (for Middleware, Framework, App).**
* Handler - adds Websocket support to [wsgiref](https://docs.python.org/3/library/wsgiref.html "python builtin WSGI server")(python builtin WSGI server)
* Client -Coming soon...
Common Features
---------------
* only single file less than 1000 lines
* websocket sub protocol supported
* websocket message compression supported (works if client asks)
* receive and send pong and ping messages(with automatic pong sender)
* receive and send binary or text messages
* works for messages with or without mask
* closing messages supported
* auto and manual close
example using bottle web framework and WSocket middleware
```py
from bottle import request, Bottle
from wsocket import WSocketApp, WebSocketError, logger, run
from time import sleep
logger.setLevel(10) # for debugging
bottle = Bottle()
app = WSocketApp(bottle)
# app = WSocketApp(bottle, "WAMP")
@bottle.route("/")
def handle_websocket():
    wsock = request.environ.get("wsgi.websocket")
    if not wsock:
        return "Hello World!"
    while True:
        try:
            message = wsock.receive()
            if message != None:
                print("participator : " + message)
                wsock.send("you : " + message)
                sleep(2)
                wsock.send("you : " + message)
        except WebSocketError:
            break

run(app)
``` |
35,991,852 | This is the observed behavior:
```
In [4]: x = itertools.groupby(range(10), lambda x: True)
In [5]: y = next(x)
In [6]: next(x)
---------------------------------------------------------------------------
StopIteration Traceback (most recent call last)
<ipython-input-6-5e4e57af3a97> in <module>()
----> 1 next(x)
StopIteration:
In [7]: y
Out[7]: (True, <itertools._grouper at 0x10a672e80>)
In [8]: list(y[1])
Out[8]: [9]
```
The expected output of `list(y[1])` is `[0,1,2,3,4,5,6,7,8,9]`
What's going on here?
I observed this on `cpython 3.4.2`, but others have seen this with `cpython 3.5` and `IronPython 2.9.9a0 (2.9.0.0) on Mono 4.0.30319.17020 (64-bit)`.
The observed behavior on `Jython 2.7.0` and pypy:
```
Python 2.7.10 (5f8302b8bf9f, Nov 18 2015, 10:46:46)
[PyPy 4.0.1 with GCC 4.8.4]
>>>> x = itertools.groupby(range(10), lambda x: True)
>>>> y = next(x)
>>>> next(x)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
>>>> y
(True, <itertools._groupby object at 0x00007fb1096039a0>)
>>>> list(y[1])
[]
``` | 2016/03/14 | [
"https://Stackoverflow.com/questions/35991852",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/198633/"
] | The [`itertools.groupby`](https://docs.python.org/3/library/itertools.html#itertools.groupby) documentation says:
>
> `itertools.groupby(iterable, key=None)`
>
>
> [...]
>
>
> The operation of `groupby()` is similar to the uniq filter in Unix. It generates a break or new group every time the value of the key function changes (which is why it is usually necessary to have sorted the data using the same key function). That behavior differs from SQL’s GROUP BY which aggregates common elements regardless of their input order.
>
>
> The returned group is itself an iterator that shares the underlying iterable with `groupby()`. Because the source is shared, when the `groupby()` object is advanced, the previous group is no longer visible. So, if that data is needed later, **it should be stored as a list** [--]
>
>
>
So the **assumption** from the last paragraph is that the generated list would be the empty list `[]`, since the iterator has already advanced and met `StopIteration`; but instead in CPython the result is the surprising `[9]`.
---
This is because the [`_grouper` iterator](https://github.com/python/cpython/blob/a3922b02c124b5e564d7ea5a3cb62256b988de28/Modules/itertoolsmodule.c#L275) lags one item behind the original iterator: `groupby` needs to peek one item ahead to see whether it belongs to the current or the next group, yet it must be able to later yield this item as the first item of the new group.
However the `currkey` and `currvalue` attributes of the `groupby` are *not* reset when the [original iterator is exhausted](https://github.com/python/cpython/blob/a3922b02c124b5e564d7ea5a3cb62256b988de28/Modules/itertoolsmodule.c#L98), so `currvalue` still points to the last item from the iterator.
The CPython documentation actually contains this equivalent code, which has the exact same behaviour as the C version:
```
class groupby:
    # [k for k, g in groupby('AAAABBBCCDAABBB')] --> A B C D A B
    # [list(g) for k, g in groupby('AAAABBBCCD')] --> AAAA BBB CC D
    def __init__(self, iterable, key=None):
        if key is None:
            key = lambda x: x
        self.keyfunc = key
        self.it = iter(iterable)
        self.tgtkey = self.currkey = self.currvalue = object()

    def __iter__(self):
        return self

    def __next__(self):
        while self.currkey == self.tgtkey:
            self.currvalue = next(self.it)  # Exit on StopIteration
            self.currkey = self.keyfunc(self.currvalue)
        self.tgtkey = self.currkey
        return (self.currkey, self._grouper(self.tgtkey))

    def _grouper(self, tgtkey):
        while self.currkey == tgtkey:
            yield self.currvalue
            try:
                self.currvalue = next(self.it)
            except StopIteration:
                return
            self.currkey = self.keyfunc(self.currvalue)
```
Notably, `__next__` finds the first item of the next group, and stores its key into `self.currkey` and its value into `self.currvalue`. But the key is the line
```
self.currvalue = next(self.it) # Exit on StopIteration
```
When `next` throws `StopIteration`, `self.currvalue` still contains the last value of the previous group. Now, when `y[1]` is made into a `list`, it **first** yields the value of `self.currvalue`, and only then runs `next()` on the underlying iterator (and meets `StopIteration` again).
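In other words, the safe pattern suggested by the quoted documentation is to materialize every group before the `groupby` object is advanced:

```python
import itertools

x = itertools.groupby(range(10), lambda v: True)
# Store each group as a list *before* advancing to the next group.
groups = [(key, list(group)) for key, group in x]
print(groups)  # [(True, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9])]
```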
---
Even though there is a Python equivalent in the documentation that behaves exactly like the authoritative C implementation in CPython, IronPython, Jython and PyPy give different results. | The problem is that you group all of them into one group, so after the first `next` call everything is already grouped:
```
import itertools
x = itertools.groupby(range(10), lambda x: True)
key, elements = next(x)
```
but `elements` is a generator, so you need to pass it immediately into some structure taking an iterable to "print" or "save" it, e.g. a `list`:
```
print('Key: "{}" with value "{}"'.format(key, list(elements)))
```
and then your `range(10)` is empty and the groupby generator is finished:
```
Key: "True" with value "[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]"
``` |
68,582,040 | I've just started learning python recently because of work, and I'm struggling with a for loop in my code. I want my program to output values for amount of ammonia produced hourly, which is dependent on the cells in Excel (attached below), and I need the columns for 'H2 consumed by NH3' and 'Actual Stored' as lists in Python in order to move forward. However, when I run this code, the process is constantly running and doesn't stop, which made me realize I've messed up somewhere but I can't tell where.
```
hourly_H2_prod = [hourly_elec*1000/avg_pwr_use for hourly_elec in elec_list]
H2_sum = sum(hourly_H2_prod)
avg_hourly_H2_consumed = avg_hourly_NH3_prod*3/17.31
init_H2_stored = H2_sum/365*Storage_days #highlighted cell in Excel screenshot
actual_H2_stored = [init_H2_stored]
hourly_H2_consumed = []
for i in range(1, len(hourly_H2_prod)):
    for j in range(len(actual_H2_stored)):
        hourly_H2_consumed.append(max(min(avg_hourly_H2_consumed, actual_H2_stored[j]), 0))
        actual_H2_stored.append(max(min(init_H2_stored, actual_H2_stored[j] + hourly_H2_prod[i] - hourly_H2_consumed[j]), 0))
```
I have already managed to get the list format for the 'hourly H2 production' column
[(this is to show how the values are dependent on each other, for actual-H2-stored)](https://i.stack.imgur.com/SM7yz.png)
[(how values depend for hourly-H2-consumed)](https://i.stack.imgur.com/kHbbq.png)
I think I'm messing up when I need to iterate using the previous variable as shown in the excel formula. I also hope I've explained everything fully, I appreciate any help! | 2021/07/29 | [
"https://Stackoverflow.com/questions/68582040",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15584532/"
] | I figured it out, @VonWooDSoN you were right the nested loop did mess everything up. I removed the line altogether and instead just put in this, and it worked perfectly and now I'm getting the values in the list form I need, so thank you for the tip!
```
for i in range(1,len(hourly_H2_prod)):
hourly_H2_consumed.append(max(min(avg_hourly_H2_consumed,actual_H2_stored[i-1]),0))
actual_H2_stored.append(max(min(init_H2_stored,actual_H2_stored[i-1]+hourly_H2_prod[i]-hourly_H2_consumed[i-1]),0))
``` | I'm trying to see where your error is, but I cannot run your code.
I think that the issue is *truly* in that nested "j" for loop. As of python 3 a `range` is a generator, so it's possible that adding to that list in the nested loop is going to create an endless loop because the `len` function *may* be called each iteration. Try replacing it with this :`for j in list(range(len(actual_H2_stored))):`
I added the `list` around the `range` so that it fully evaluates your list's length before you start iterating. |
864,883 | I need to write a series of matrices out to a plain text file from Python. All my matrices are in float format so the simple
file.write() and file.writelines()
do not work. Is there a conversion method I can employ that doesn't have me looping through all the lists (matrix = list of lists in my case) converting the individual values?
I guess I should clarify, that it needn't look like a matrix, just the associated values in an easy to parse list, as I will be reading in later. All on one line may actually make this easier! | 2009/05/14 | [
"https://Stackoverflow.com/questions/864883",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/23553/"
] | ```
m = [[1.1, 2.1, 3.1], [4.1, 5.1, 6.1], [7.1, 8.1, 9.1]]
file.write(str(m))
```
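(An aside, not part of the original answer: since the asker wants to read the values back in later, the `str(m)` output above can be parsed back into nested lists with the stdlib `ast` module; a minimal sketch:)

```python
import ast

m = [[1.1, 2.1, 3.1], [4.1, 5.1, 6.1], [7.1, 8.1, 9.1]]
text = str(m)                      # what file.write(str(m)) stores
restored = ast.literal_eval(text)  # safely evaluate the Python literal
assert restored == m
```

Note that the formatted variant below stores quoted strings, so reading it back would additionally need a `float()` pass.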
If you want more control over the format of each value:
```
def format(value):
return "%.3f" % value
formatted = [[format(v) for v in r] for r in m]
file.write(str(formatted))
``` | the following works for me:
```
with open(fname, 'w') as f:
    f.writelines(','.join(str(j) for j in i) + '\n' for i in matrix)
``` |
864,883 | I need to write a series of matrices out to a plain text file from Python. All my matrices are in float format so the simple
file.write() and file.writelines()
do not work. Is there a conversion method I can employ that doesn't have me looping through all the lists (matrix = list of lists in my case) converting the individual values?
I guess I should clarify, that it needn't look like a matrix, just the associated values in an easy to parse list, as I will be reading in later. All on one line may actually make this easier! | 2009/05/14 | [
"https://Stackoverflow.com/questions/864883",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/23553/"
] | the following works for me:
```
with open(fname, 'w') as f:
    f.writelines(','.join(str(j) for j in i) + '\n' for i in matrix)
``` | Why not use [pickle](http://docs.python.org/library/pickle.html)?
```
import cPickle as pickle
pckl_file = file("test.pckl", "w")
pickle.dump([1,2,3], pckl_file)
pckl_file.close()
``` |
864,883 | I need to write a series of matrices out to a plain text file from Python. All my matrices are in float format so the simple
file.write() and file.writelines()
do not work. Is there a conversion method I can employ that doesn't have me looping through all the lists (matrix = list of lists in my case) converting the individual values?
I guess I should clarify, that it needn't look like a matrix, just the associated values in an easy to parse list, as I will be reading in later. All on one line may actually make this easier! | 2009/05/14 | [
"https://Stackoverflow.com/questions/864883",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/23553/"
] | the following works for me:
```
with open(fname, 'w') as f:
    f.writelines(','.join(str(j) for j in i) + '\n' for i in matrix)
``` | ```
import pickle
# write object to file
a = ['hello', 'world']
pickle.dump(a, open('delme.txt', 'wb'))
# read object from file
b = pickle.load(open('delme.txt', 'rb'))
print b # ['hello', 'world']
```
At this point you can look at the file 'delme.txt' with vi
```
vi delme.txt
1 (lp0
2 S'hello'
3 p1
4 aS'world'
5 p2
6 a.
``` |
864,883 | I need to write a series of matrices out to a plain text file from Python. All my matrices are in float format so the simple
file.write() and file.writelines()
do not work. Is there a conversion method I can employ that doesn't have me looping through all the lists (matrix = list of lists in my case) converting the individual values?
I guess I should clarify, that it needn't look like a matrix, just the associated values in an easy to parse list, as I will be reading in later. All on one line may actually make this easier! | 2009/05/14 | [
"https://Stackoverflow.com/questions/864883",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/23553/"
] | the following works for me:
```
with open(fname, 'w') as f:
    f.writelines(','.join(str(j) for j in i) + '\n' for i in matrix)
``` | ```
for row in matrix:
    file.write(" ".join(map(str,row))+"\n")
```
This works for me... and writes the output in matrix format |
864,883 | I need to write a series of matrices out to a plain text file from Python. All my matrices are in float format so the simple
file.write() and file.writelines()
do not work. Is there a conversion method I can employ that doesn't have me looping through all the lists (matrix = list of lists in my case) converting the individual values?
I guess I should clarify, that it needn't look like a matrix, just the associated values in an easy to parse list, as I will be reading in later. All on one line may actually make this easier! | 2009/05/14 | [
"https://Stackoverflow.com/questions/864883",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/23553/"
] | ```
m = [[1.1, 2.1, 3.1], [4.1, 5.1, 6.1], [7.1, 8.1, 9.1]]
file.write(str(m))
```
If you want more control over the format of each value:
```
def format(value):
return "%.3f" % value
formatted = [[format(v) for v in r] for r in m]
file.write(str(formatted))
``` | Why not use [pickle](http://docs.python.org/library/pickle.html)?
```
import cPickle as pickle
pckl_file = file("test.pckl", "w")
pickle.dump([1,2,3], pckl_file)
pckl_file.close()
``` |
864,883 | I need to write a series of matrices out to a plain text file from Python. All my matrices are in float format so the simple
file.write() and file.writelines()
do not work. Is there a conversion method I can employ that doesn't have me looping through all the lists (matrix = list of lists in my case) converting the individual values?
I guess I should clarify, that it needn't look like a matrix, just the associated values in an easy to parse list, as I will be reading in later. All on one line may actually make this easier! | 2009/05/14 | [
"https://Stackoverflow.com/questions/864883",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/23553/"
] | ```
m = [[1.1, 2.1, 3.1], [4.1, 5.1, 6.1], [7.1, 8.1, 9.1]]
file.write(str(m))
```
If you want more control over the format of each value:
```
def format(value):
return "%.3f" % value
formatted = [[format(v) for v in r] for r in m]
file.write(str(formatted))
``` | ```
import pickle
# write object to file
a = ['hello', 'world']
pickle.dump(a, open('delme.txt', 'wb'))
# read object from file
b = pickle.load(open('delme.txt', 'rb'))
print b # ['hello', 'world']
```
At this point you can look at the file 'delme.txt' with vi
```
vi delme.txt
1 (lp0
2 S'hello'
3 p1
4 aS'world'
5 p2
6 a.
``` |
864,883 | I need to write a series of matrices out to a plain text file from Python. All my matrices are in float format so the simple
file.write() and file.writelines()
do not work. Is there a conversion method I can employ that doesn't have me looping through all the lists (matrix = list of lists in my case) converting the individual values?
I guess I should clarify, that it needn't look like a matrix, just the associated values in an easy to parse list, as I will be reading in later. All on one line may actually make this easier! | 2009/05/14 | [
"https://Stackoverflow.com/questions/864883",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/23553/"
] | ```
m = [[1.1, 2.1, 3.1], [4.1, 5.1, 6.1], [7.1, 8.1, 9.1]]
file.write(str(m))
```
If you want more control over the format of each value:
```
def format(value):
return "%.3f" % value
formatted = [[format(v) for v in r] for r in m]
file.write(str(formatted))
``` | ```
for row in matrix:
    file.write(" ".join(map(str,row))+"\n")
```
This works for me... and writes the output in matrix format |
864,883 | I need to write a series of matrices out to a plain text file from Python. All my matrices are in float format so the simple
file.write() and file.writelines()
do not work. Is there a conversion method I can employ that doesn't have me looping through all the lists (matrix = list of lists in my case) converting the individual values?
I guess I should clarify, that it needn't look like a matrix, just the associated values in an easy to parse list, as I will be reading in later. All on one line may actually make this easier! | 2009/05/14 | [
"https://Stackoverflow.com/questions/864883",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/23553/"
] | Why not use [pickle](http://docs.python.org/library/pickle.html)?
```
import cPickle as pickle
pckl_file = file("test.pckl", "w")
pickle.dump([1,2,3], pckl_file)
pckl_file.close()
``` | ```
import pickle
# write object to file
a = ['hello', 'world']
pickle.dump(a, open('delme.txt', 'wb'))
# read object from file
b = pickle.load(open('delme.txt', 'rb'))
print b # ['hello', 'world']
```
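(An aside, not from the original answer: since the question asks for a plain text file, the stdlib `json` module gives a human-readable file that also round-trips floats; a minimal sketch:)

```python
import json

m = [[1.1, 2.1], [3.1, 4.1]]
with open('matrix.json', 'w') as f:
    json.dump(m, f)        # writes [[1.1, 2.1], [3.1, 4.1]] as plain text
with open('matrix.json') as f:
    assert json.load(f) == m
```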
At this point you can look at the file 'delme.txt' with vi
```
vi delme.txt
1 (lp0
2 S'hello'
3 p1
4 aS'world'
5 p2
6 a.
``` |
864,883 | I need to write a series of matrices out to a plain text file from Python. All my matrices are in float format so the simple
file.write() and file.writelines()
do not work. Is there a conversion method I can employ that doesn't have me looping through all the lists (matrix = list of lists in my case) converting the individual values?
I guess I should clarify, that it needn't look like a matrix, just the associated values in an easy to parse list, as I will be reading in later. All on one line may actually make this easier! | 2009/05/14 | [
"https://Stackoverflow.com/questions/864883",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/23553/"
] | Why not use [pickle](http://docs.python.org/library/pickle.html)?
```
import cPickle as pickle
pckl_file = file("test.pckl", "w")
pickle.dump([1,2,3], pckl_file)
pckl_file.close()
``` | ```
for row in matrix:
    file.write(" ".join(map(str,row))+"\n")
```
This works for me... and writes the output in matrix format |
864,883 | I need to write a series of matrices out to a plain text file from Python. All my matrices are in float format so the simple
file.write() and file.writelines()
do not work. Is there a conversion method I can employ that doesn't have me looping through all the lists (matrix = list of lists in my case) converting the individual values?
I guess I should clarify, that it needn't look like a matrix, just the associated values in an easy to parse list, as I will be reading in later. All on one line may actually make this easier! | 2009/05/14 | [
"https://Stackoverflow.com/questions/864883",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/23553/"
] | ```
for row in matrix:
    file.write(" ".join(map(str,row))+"\n")
```
This works for me... and writes the output in matrix format | ```
import pickle
# write object to file
a = ['hello', 'world']
pickle.dump(a, open('delme.txt', 'wb'))
# read object from file
b = pickle.load(open('delme.txt', 'rb'))
print b # ['hello', 'world']
```
At this point you can look at the file 'delme.txt' with vi
```
vi delme.txt
1 (lp0
2 S'hello'
3 p1
4 aS'world'
5 p2
6 a.
``` |
3,743,708 | Hi, I have written a program in Python to get the fundamental frequencies from audio files, and I want to use it to get the tones from DTMF audio!
But how do I get the low tones from the audio?
Thanks!!
---
To be exact: I am applying an FFT, but it always returns the high frequency.
The table of DTMF frequencies is here:
<http://www.mediacollege.com/audio/tone/dtmf.html>
For example, when I analyze a .wav audio file of the key "1", I only get the frequency 1209.
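To make the question concrete, here is a minimal sketch of the kind of detection I mean (hypothetical: a pure-Python Goertzel power check at each DTMF frequency, run on a synthetic key-"1" tone rather than my real .wav data):

```python
import math

def goertzel_power(samples, fs, freq):
    """Power of the single DFT bin nearest `freq` (Goertzel algorithm)."""
    n = len(samples)
    k = round(n * freq / fs)       # nearest bin index
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# synthetic key "1": 697 Hz (low group) + 1209 Hz (high group), 0.1 s at 8 kHz
fs, n = 8000, 800
samples = [math.sin(2 * math.pi * 697 * t / fs)
           + math.sin(2 * math.pi * 1209 * t / fs) for t in range(n)]
low = {f: goertzel_power(samples, fs, f) for f in (697, 770, 852, 941)}
high = {f: goertzel_power(samples, fs, f) for f in (1209, 1336, 1477, 1633)}
print(max(low, key=low.get), max(high, key=high.get))  # → 697 1209
```

On real data, `samples` would be replaced by the decoded .wav samples (e.g. via the stdlib `wave` module).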
How do I get the low frequency? In this case, for key "1", it is 697; the FFT doesn't give me this :-( | 2010/09/18 | [
"https://Stackoverflow.com/questions/3743708",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/451639/"
] | I have handled this using Spring's PropertyPlaceholderConfigurer and including property files on the classpath and one on the filesystem:
```
<context:property-placeholder
    location="classpath*:META-INF/spring/*.properties,file:myapp*.properties"/>
```
If there is a myapp\*.properties file in the current directory when the app starts (or tests are run etc.) it will override properties from files baked into the war/ear/whatever. You could take out the star but then you will have to have the file present. | This will probably affect (break) various [`database` commands](http://static.springsource.org/spring-roo/reference/html/command-index.html) like
* `database properties set`
* `database properties remove`
* `database properties list`
IMO, you should keep it under `META-INF/spring`. |
63,962,448 | I recently installed the Spinnaker SDK and the PySpin library for use with Flir cameras (I followed the instructions on the website). However, when I tried to run the supplied Acquisition.py example file, I got the error "module 'pyspin' has no attribute 'System'"
A few other notes:
1. For some reason, when I used pip to install PySpin, the module was installed with the name "pyspin" rather than "PySpin", even though it was written in the script as PySpin (e.g. import PySpin). Initially the script wasn't even able to find the module until I changed all the library names in the script to the lowercase version.
2. I had an unrelated library, also called PySpin, on my computer from before. I used pip to uninstall the old PySpin (python -m pip uninstall PySpin) when I realized this and then reinstalled the "proper" library. | 2020/09/18 | [
"https://Stackoverflow.com/questions/63962448",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13751367/"
] | This problem seems to continue to exist: a library called 'pyspin' can be installed via pip, but that's not the correct library for spinnaker-operated hardware: "PySpin" - i.e. there seems to be a naming conflict.
The 'pyspin' (lowercase) module seems to be a library that only generates a spinning text graphic, and nothing else; it has nothing at all to do with FLIR, and that's why you can't access the `System` etc. calls.
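A quick diagnostic sketch can tell you whether the module importable as `PySpin` really exposes FLIR's API or is the unrelated lowercase spinner package (the helper below is hypothetical, not part of the SDK):

```python
import importlib
import importlib.util

def has_flir_pyspin():
    """True only if an importable module named PySpin exposes System."""
    spec = importlib.util.find_spec("PySpin")
    if spec is None:
        return False
    mod = importlib.import_module("PySpin")
    return hasattr(mod, "System")

print(has_flir_pyspin())
```

If this prints `False`, the "module 'pyspin' has no attribute 'System'" error is coming from the wrong package.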
Typically, the advice to install the FLIR PySpin library seems to be "go and get the entire spinnaker package from FLIR and install it", though having done that process upwards of six times, I've still not managed to obtain the correct 'PySpin' library, and as far as I can find out, it's not available anywhere else.
If anyone at all can point to a github repo, FLIR repository or anything at all that has the correct 'PySpin' library, please post it here. | Assuming that you already installed the latest version of the Spinnaker SDK, you have to download the Python Spinnaker SDK package (for me it was sitting in the Linux -> Ubuntu 20.04 -> python repository of this website: <https://flir.app.boxcn.net/v/SpinnakerSDK>).
The version you should download depends on your system architecture (use "uname -m" in the terminal to find out).
Once you downloaded the archive, simply extract it anywhere and follow the installation procedure in README.txt (for me, it meant running python3.8 -m pip install --user spinnaker\_python-2.x.x.x-cp38-cp38-linux\_x86\_64.whl after moving the .whl file to /opt/spinnaker/doc, where x.x.x.x should match the name of the file in the archive AND the version of the Spinnaker SDK that you currently have installed).
Once the installation is successful, you can use the PySpin library in Python. |
54,174,830 | I'd like to call my `cdef` methods and improve the speed of my program as much as possible. I do not want to use `cpdef` (I explain why below). Ultimately, I'd like to access `cdef` methods (some of which return void) that are members of my Cython extensions.
I tried following [this example](https://notes-on-cython.readthedocs.io/en/latest/fibo_speed.html), which gives me the impression that I can call a `cdef` function by making a Python (`def`) wrapper for it.
I can't reproduce these results, so I tried a different problem for myself (summing all the numbers from 0 to n).
Of course, I'm looking at the [documentation](https://cython.readthedocs.io/en/latest/src/tutorial/cdef_classes.html), which says
>
> The directive cpdef makes two versions of the method available; one fast for use from Cython and one slower for use from Python.
>
>
>
and later (emphasis mine),
>
> This does **slightly more than providing a python wrapper** for a cdef method: unlike a cdef method, a cpdef method is fully overridable by methods and instance attributes in Python subclasses. **It adds a little calling overhead compared to a cdef method**.
>
>
>
So how does one use a `cdef` function without the extra calling overhead of a `cpdef` function?
With the code at the end of this question, I get the following results:
```
def/cdef:
273.04207632583245
def/cpdef:
304.4114626176919
cpdef/cdef:
0.8969507060538783
```
Somehow, `cpdef` is faster than `cdef`. For n < 100, I can occasionally get `cpdef/cdef` > 1, but it's rare. I think it has to do with wrapping the `cdef` function in a `def` function. This is what the example I link to does, but they claim better performance from using `cdef` than from using `cpdef`.
I'm pretty sure this is not how you wrap a `cdef` function while avoiding the additional overhead (the source of which is not clearly documented) of a `cpdef`.
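(An aside: timing a single call with `timeit.default_timer`, as in `test.py` below, is dominated by per-call noise; a minimal sketch of a steadier measurement, using a plain-Python stand-in for the compiled functions, would be:)

```python
import timeit

def foo(n):
    # plain-Python stand-in; the real benchmark would import
    # foo_def / foo_cdef / foo_cpdef instead
    total = 0
    for i in range(n):
        total += i
    return total

# best of 5 runs of 20 calls each smooths out timer and scheduler noise
best = min(timeit.repeat(lambda: foo(10000), number=20, repeat=5))
print(best / 20)  # seconds per call
```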
And now, the code:
**setup.py**
```py
from setuptools import setup, Extension
from Cython.Build import cythonize
pkg_name = "tmp"
compile_args=['-std=c++17']
cy_foo = Extension(
    name=pkg_name + '.core.cy_foo',
    sources=[
        pkg_name + '/core/cy_foo.pyx',
    ],
    language='c++',
    extra_compile_args=compile_args,
)

setup(
    name=pkg_name,
    ext_modules=cythonize(cy_foo,
                          annotate=True,
                          build_dir='build'),
    packages=[
        pkg_name,
        pkg_name + '.core',
    ],
)
```
**foo.py**
```py
def foo_def(n):
    sum = 0
    for i in range(n):
        sum += i
    return sum
```
**cy\_foo.pyx**
```py
def foo_cdef(n):
    return foo_cy(n)

cdef int foo_cy(int n):
    cdef int sum = 0
    cdef int i = 0
    for i in range(n):
        sum += i
    return sum

cpdef int foo_cpdef(int n):
    cdef int sum = 0
    cdef int i = 0
    for i in range(n):
        sum += i
    return sum
```
**test.py**
```py
import timeit
from tmp.core.foo import foo_def
from tmp.core.cy_foo import foo_cdef
from tmp.core.cy_foo import foo_cpdef
n = 10000
# Python call
start_time = timeit.default_timer()
a = foo_def(n)
pyTime = timeit.default_timer() - start_time
# Call Python wrapper for C function
start_time = timeit.default_timer()
b = foo_cdef(n)
cTime = timeit.default_timer() - start_time
# Call cpdef function, which does more than wrap a cdef function (whatever that means)
start_time = timeit.default_timer()
c = foo_cpdef(n)
cpTime = timeit.default_timer() - start_time
print("def/cdef:")
print(pyTime/cTime)
print("def/cpdef:")
print(pyTime/cpTime)
print("cpdef/cdef:")
print(cpTime/cTime)
``` | 2019/01/14 | [
"https://Stackoverflow.com/questions/54174830",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5670215/"
] | The precisions are incompatible. You are calling `sgemv` which takes single precision arguments but you are passing double precision arrays and vectors. | Perhaps the `trans` parameter is required?
```
trans: Must be 'N', 'C', or 'T'.
```
(As per the note at the bottom of [Developer Reference for Intel® Math Kernel Library - Fortran](https://software.intel.com/en-us/mkl-developer-reference-fortran-gemv).) |
54,174,830 | I'd like to call my `cdef` methods and improve the speed of my program as much as possible. I do not want to use `cpdef` (I explain why below). Ultimately, I'd like to access `cdef` methods (some of which return void) that are members of my Cython extensions.
I tried following [this example](https://notes-on-cython.readthedocs.io/en/latest/fibo_speed.html), which gives me the impression that I can call a `cdef` function by making a Python (`def`) wrapper for it.
I can't reproduce these results, so I tried a different problem for myself (summing all the numbers from 0 to n).
Of course, I'm looking at the [documentation](https://cython.readthedocs.io/en/latest/src/tutorial/cdef_classes.html), which says
>
> The directive cpdef makes two versions of the method available; one fast for use from Cython and one slower for use from Python.
>
>
>
and later (emphasis mine),
>
> This does **slightly more than providing a python wrapper** for a cdef method: unlike a cdef method, a cpdef method is fully overridable by methods and instance attributes in Python subclasses. **It adds a little calling overhead compared to a cdef method**.
>
>
>
So how does one use a `cdef` function without the extra calling overhead of a `cpdef` function?
With the code at the end of this question, I get the following results:
```
def/cdef:
273.04207632583245
def/cpdef:
304.4114626176919
cpdef/cdef:
0.8969507060538783
```
Somehow, `cpdef` is faster than `cdef`. For n < 100, I can occasionally get `cpdef/cdef` > 1, but it's rare. I think it has to do with wrapping the `cdef` function in a `def` function. This is what the example I link to does, but they claim better performance from using `cdef` than from using `cpdef`.
I'm pretty sure this is not how you wrap a `cdef` function while avoiding the additional overhead (the source of which is not clearly documented) of a `cpdef`.
And now, the code:
**setup.py**
```py
from setuptools import setup, Extension
from Cython.Build import cythonize
pkg_name = "tmp"
compile_args=['-std=c++17']
cy_foo = Extension(
    name=pkg_name + '.core.cy_foo',
    sources=[
        pkg_name + '/core/cy_foo.pyx',
    ],
    language='c++',
    extra_compile_args=compile_args,
)

setup(
    name=pkg_name,
    ext_modules=cythonize(cy_foo,
                          annotate=True,
                          build_dir='build'),
    packages=[
        pkg_name,
        pkg_name + '.core',
    ],
)
```
**foo.py**
```py
def foo_def(n):
    sum = 0
    for i in range(n):
        sum += i
    return sum
```
**cy\_foo.pyx**
```py
def foo_cdef(n):
    return foo_cy(n)

cdef int foo_cy(int n):
    cdef int sum = 0
    cdef int i = 0
    for i in range(n):
        sum += i
    return sum

cpdef int foo_cpdef(int n):
    cdef int sum = 0
    cdef int i = 0
    for i in range(n):
        sum += i
    return sum
```
**test.py**
```py
import timeit
from tmp.core.foo import foo_def
from tmp.core.cy_foo import foo_cdef
from tmp.core.cy_foo import foo_cpdef
n = 10000
# Python call
start_time = timeit.default_timer()
a = foo_def(n)
pyTime = timeit.default_timer() - start_time
# Call Python wrapper for C function
start_time = timeit.default_timer()
b = foo_cdef(n)
cTime = timeit.default_timer() - start_time
# Call cpdef function, which does more than wrap a cdef function (whatever that means)
start_time = timeit.default_timer()
c = foo_cpdef(n)
cpTime = timeit.default_timer() - start_time
print("def/cdef:")
print(pyTime/cTime)
print("def/cpdef:")
print(pyTime/cpTime)
print("cpdef/cdef:")
print(cpTime/cTime)
``` | 2019/01/14 | [
"https://Stackoverflow.com/questions/54174830",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5670215/"
] | When you look at BLAS or LAPACK routines then you should always have a look at the first letter:
* `S`: single precision
* `D`: double precision
* `C`: single precision complex
* `Z`: double precision complex
You defined your matrix `A` as well as the vectors `XP` and `YP` as a double precision number using the statement:
```
integer, parameter :: DP = selected_real_kind(15)
```
So for this, you need to use `dgemv` or define your precision as single precision.
There is also a difference between calling `dgemv` and `dgemv_f95`. `dgemv_f95` is part of Intel MKL and not really a common naming. For portability reasons, I would not use that notation but stick to the classic `dgemv` which is also part of Intel MKL.
>
> `DGEMV` performs one of the matrix-vector operations
>
>
>
> ```
> y := alpha*A*x + beta*y, or y := alpha*A**T*x + beta*y,
>
> ```
>
> where `alpha` and `beta` are scalars, `x` and `y` are vectors and `A` is an
> `m` by `n` matrix.
>
>
>
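(An aside not in the original answer: as a cross-check of those semantics, the same operation written out in NumPy, assuming NumPy is available, looks like this:)

```python
import numpy as np

# y := alpha*A*x + beta*y, i.e. what DGEMV('N', ...) computes
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])
x = np.array([1.0, 0.0, -1.0])
y = np.zeros(3)
alpha, beta = 1.0, 0.0
y = alpha * (A @ x) + beta * y
print(y)  # → [-2. -2. -2.]
```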
If you want to know how to call the function, I suggest having a look [here](http://www.netlib.org/lapack/explore-html/dc/da8/dgemv_8f_source.html), but it should, in the end, look something like this:
```
call DGEMV('N',3,3,ALPHA,A,3,XP,1,BETA,YP,1)
``` | Perhaps the `trans` parameter is required?
```
trans: Must be 'N', 'C', or 'T'.
```
(As per the note at the bottom of [Developer Reference for Intel® Math Kernel Library - Fortran](https://software.intel.com/en-us/mkl-developer-reference-fortran-gemv).) |
54,174,830 | I'd like to call my `cdef` methods and improve the speed of my program as much as possible. I do not want to use `cpdef` (I explain why below). Ultimately, I'd like to access `cdef` methods (some of which return void) that are members of my Cython extensions.
I tried following [this example](https://notes-on-cython.readthedocs.io/en/latest/fibo_speed.html), which gives me the impression that I can call a `cdef` function by making a Python (`def`) wrapper for it.
I can't reproduce these results, so I tried a different problem for myself (summing all the numbers from 0 to n).
Of course, I'm looking at the [documentation](https://cython.readthedocs.io/en/latest/src/tutorial/cdef_classes.html), which says
>
> The directive cpdef makes two versions of the method available; one fast for use from Cython and one slower for use from Python.
>
>
>
and later (emphasis mine),
>
> This does **slightly more than providing a python wrapper** for a cdef method: unlike a cdef method, a cpdef method is fully overridable by methods and instance attributes in Python subclasses. **It adds a little calling overhead compared to a cdef method**.
>
>
>
So how does one use a `cdef` function without the extra calling overhead of a `cpdef` function?
With the code at the end of this question, I get the following results:
```
def/cdef:
273.04207632583245
def/cpdef:
304.4114626176919
cpdef/cdef:
0.8969507060538783
```
Somehow, `cpdef` is faster than `cdef`. For n < 100, I can occasionally get `cpdef/cdef` > 1, but it's rare. I think it has to do with wrapping the `cdef` function in a `def` function. This is what the example I link to does, but they claim better performance from using `cdef` than from using `cpdef`.
I'm pretty sure this is not how you wrap a `cdef` function while avoiding the additional overhead (the source of which is not clearly documented) of a `cpdef`.
And now, the code:
**setup.py**
```py
from setuptools import setup, Extension
from Cython.Build import cythonize
pkg_name = "tmp"
compile_args=['-std=c++17']
cy_foo = Extension(
    name=pkg_name + '.core.cy_foo',
    sources=[
        pkg_name + '/core/cy_foo.pyx',
    ],
    language='c++',
    extra_compile_args=compile_args,
)

setup(
    name=pkg_name,
    ext_modules=cythonize(cy_foo,
                          annotate=True,
                          build_dir='build'),
    packages=[
        pkg_name,
        pkg_name + '.core',
    ],
)
```
**foo.py**
```py
def foo_def(n):
    sum = 0
    for i in range(n):
        sum += i
    return sum
```
**cy\_foo.pyx**
```py
def foo_cdef(n):
    return foo_cy(n)

cdef int foo_cy(int n):
    cdef int sum = 0
    cdef int i = 0
    for i in range(n):
        sum += i
    return sum

cpdef int foo_cpdef(int n):
    cdef int sum = 0
    cdef int i = 0
    for i in range(n):
        sum += i
    return sum
```
**test.py**
```py
import timeit
from tmp.core.foo import foo_def
from tmp.core.cy_foo import foo_cdef
from tmp.core.cy_foo import foo_cpdef
n = 10000
# Python call
start_time = timeit.default_timer()
a = foo_def(n)
pyTime = timeit.default_timer() - start_time
# Call Python wrapper for C function
start_time = timeit.default_timer()
b = foo_cdef(n)
cTime = timeit.default_timer() - start_time
# Call cpdef function, which does more than wrap a cdef function (whatever that means)
start_time = timeit.default_timer()
c = foo_cpdef(n)
cpTime = timeit.default_timer() - start_time
print("def/cdef:")
print(pyTime/cTime)
print("def/cpdef:")
print(pyTime/cpTime)
print("cpdef/cdef:")
print(cpTime/cTime)
``` | 2019/01/14 | [
"https://Stackoverflow.com/questions/54174830",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5670215/"
] | When you look at BLAS or LAPACK routines then you should always have a look at the first letter:
* `S`: single precision
* `D`: double precision
* `C`: single precision complex
* `Z`: double precision complex
You defined your matrix `A` as well as the vectors `XP` and `YP` as a double precision number using the statement:
```
integer, parameter :: DP = selected_real_kind(15)
```
So for this, you need to use `dgemv` or define your precision as single precision.
There is also a difference between calling `dgemv` and `dgemv_f95`. `dgemv_f95` is part of Intel MKL and not really a common naming. For portability reasons, I would not use that notation but stick to the classic `dgemv` which is also part of Intel MKL.
>
> `DGEMV` performs one of the matrix-vector operations
>
>
>
> ```
> y := alpha*A*x + beta*y, or y := alpha*A**T*x + beta*y,
>
> ```
>
> where `alpha` and `beta` are scalars, `x` and `y` are vectors and `A` is an
> `m` by `n` matrix.
>
>
>
If you want to know how to call the function, I suggest having a look [here](http://www.netlib.org/lapack/explore-html/dc/da8/dgemv_8f_source.html), but it should, in the end, look something like this:
```
call DGEMV('N',3,3,ALPHA,A,3,XP,1,BETA,YP,1)
``` | The precisions are incompatible. You are calling `sgemv` which takes single precision arguments but you are passing double precision arrays and vectors. |
26,987,643 | I'm trying to write a simple smtp server program. I've written a simple smtp client (in C#) which sends an email. I've tested the program with [smtp4dev](https://smtp4dev.codeplex.com/). So far it all works fine.
I'd also like to write my own simple program which receives the email (instead of smtp4dev). I've tried a number of different code snippets (eg: [Here](https://djangosnippets.org/snippets/96/)) that I've found around the web but I can't seem to get them working.
I've also tried using [twisted](http://twistedmatrix.com/documents/current/_downloads/emailserver.tac).
To start with, I can see using [TCPView](http://technet.microsoft.com/en-au/sysinternals/bb897437.aspx) that the port numbers in the code are not the ones being used.
I get the feeling that I'm missing something conceptual though and heading in the wrong direction.
**EDIT**
Here's the C# code in case you are interested
```
MailMessage mail = new MailMessage();
mail.Subject = "Your Subject";
mail.From = new MailAddress("test@test.com.au");
mail.To.Add("soslab@soslab.lab");
mail.Body = "Hello! your mail content goes here...";
mail.IsBodyHtml = true;
SmtpClient smtp = new SmtpClient("LOCALHOST", 26);
smtp.EnableSsl = false;
try
{
    smtp.Send(mail);
}
catch (Exception ex)
{
    MessageBox.Show(ex.ToString());
}
```
---
here's the python code
```
import smtpd
import asyncore
class EmailServer(smtpd.SMTPServer):
    def process_message(self, peer, mailfrom, rcpttos, data):
        print 'a'

def run():
    foo = EmailServer(('localhost', 26), None)
    try:
        asyncore.loop()
    except KeyboardInterrupt:
        pass

if __name__ == '__main__':
    run()
```
 | 2014/11/18 | [
"https://Stackoverflow.com/questions/26987643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2040876/"
] | For some reason this program runs fine when I run it from the command line
```
import smtpd
import asyncore
import winsound
class PYEmailServer(smtpd.SMTPServer):
    def __init__(self, *args, **kwargs):
        smtpd.SMTPServer.__init__(self, *args, **kwargs)

    def process_message(self, peer, mailfrom, rcpttos, data):
        winsound.Beep(2500, 1000)

def run():
    foo = PYEmailServer(('localhost', 26), None)
    try:
        asyncore.loop()
    except KeyboardInterrupt:
        foo.close()

if __name__ == '__main__':
    run()
```
It does not work when I run it from IDLE. (The C# program just throws an exception as if the service isn't there.) I don't know why this would be, but I have my original problem working. | To test your SMTP server, you need to set up the smtplib client object in another terminal:
```
import smtplib
smtpclient = smtplib.SMTP('127.0.0.1', 8001)
smtpclient.sendmail('sender@gmail.com', 'recivers@gmail.com', 'sadfsdf')
``` |
51,925,941 | I have a Travis CI project connected to GitHub that tries to update content in the GitHub repo and push it back to GitHub, to both the master and gh-pages branches.
However, although my Travis CI log files say everything is OK, I only see the gh-pages branch updated, but not the master branch.
My travis.yml file is:
```
language: node_js
node_js: stable

language: python
python: 3.6

# Travis-CI Caching
cache:
  directories:
    - node_modules
    - pip

# S: Build Lifecycle
install:
  - npm install
  - npm install -g gulp
  - python -m pip install requests
  - python -m pip install bs4
  - python -m pip install lxml

before_script:
  - cd archive_builder
  - python build_archive.py
  - cd ..

script:
  - gulp dist

after_script:
  - cd dist
  - git init
  - git config user.name "my git name"
  - git config user.email "my git email"
  - git add -A
  - git commit -m "travis -- update gh-page"
  - git push --force --quiet "https://${GH_TOKEN}@${GH_REF}" master:gh-pages
  - sh ../purgeCF.sh $CF_ZONE $CF_KEY $CF_EMAIL
  - cd ..
  - git add -A
  - git commit -m "travis -- update master files"
  - git push --quiet "https://${GH_TOKEN}@${GH_REF}" HEAD:master
# E: Build LifeCycle

branches:
  only:
    - master

env:
  global:
    - GH_REF: github.com/mygitname/myprojectname.git
```
In this script, I first update and build website sourcefiles with gulp, storing them into "dist" folder. Then I push content in "dist" to my gh-pages branch, and push everything else to my master branch.
The credentials are stored as security keys with Travis and should work correctly.
To push "dist/", I created a new ".git/" under "dist/" and force push it as new.
To push everything else, I could not do it because the root repository already contains ".git" folder and I do not want to lose my previous commits. It should work.
Thanks for help. | 2018/08/20 | [
"https://Stackoverflow.com/questions/51925941",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1316781/"
] | For some reason `heroku auth:token` was returning a wrong token for me, even after making sure that I was logged in to heroku on the command line.
After trying all the solutions posted, what worked for me was:
* Go to [Heroku Account](https://dashboard.heroku.com/account)
* Manually copy the API Key and then paste it into command line:
+ For the ones hosted at [travis-ci.com](https://travis-ci.com):
+ `travis encrypt pasteAPIKeyHere --add deploy.api_key --pro`
+ For the ones hosted at [travis-ci.org](https://travis-ci.org):
+ `travis encrypt pasteAPIKeyHere --add deploy.api_key --org`
Hope this helps. | This is an encryption issue, depending on where your Travis account is hosted (`travis-ci.com` or `travis-ci.org`) & if your project is public.
I use a public project connected on `travis-ci.com`, and the command given by the [Travis-ci doc about deploy to Heroku](https://docs.travis-ci.com/user/deployment/heroku/) isn't working:
`travis encrypt $(heroku auth:token) --add deploy.api_key`
To guarantee a correct encryption, use the `--org` (for `travis-ci.org`) or `--pro` (for `travis-ci.com`) flag; in my case:
`travis encrypt $(heroku auth:token) --add deploy.api_key --pro`
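If the encryption succeeds, the `travis encrypt ... --add deploy.api_key` command writes a `deploy` section into `.travis.yml` along these lines (the app and repo names below are placeholders, and `ENCRYPTED_API_KEY` stands for the generated ciphertext):

```
deploy:
  provider: heroku
  api_key:
    secure: ENCRYPTED_API_KEY
  app: your-heroku-app-name
  on:
    repo: your-github-user/your-repo
```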
See [Travis-CI issue #10018](https://github.com/travis-ci/travis-ci/issues/10018). |
51,925,941 | I have a Travis CI project connected to GitHub that tries to update content in the GitHub repo and push it back to GitHub, to both the master and gh-pages branches.
However, although my travis-ci log file says everything is ok, I only see the gh-pages branch updated, not the master branch.
My travis.yml file is:
```
language: node_js
node_js: stable
language: python
python: 3.6

# Travis-CI Caching
cache:
  directories:
    - node_modules
    - pip

# S: Build Lifecycle
install:
  - npm install
  - npm install -g gulp
  - python -m pip install requests
  - python -m pip install bs4
  - python -m pip install lxml

before_script:
  - cd archive_builder
  - python build_archive.py
  - cd ..

script:
  - gulp dist

after_script:
  - cd dist
  - git init
  - git config user.name "my git name"
  - git config user.email "my git email"
  - git add -A
  - git commit -m "travis -- update gh-page"
  - git push --force --quiet "https://${GH_TOKEN}@${GH_REF}" master:gh-pages
  - sh ../purgeCF.sh $CF_ZONE $CF_KEY $CF_EMAIL
  - cd ..
  - git add -A
  - git commit -m "travis -- update master files"
  - git push --quiet "https://${GH_TOKEN}@${GH_REF}" HEAD:master
# E: Build LifeCycle

branches:
  only:
    - master

env:
  global:
    - GH_REF: github.com/mygitname/myprojectname.git
```
In this script, I first update and build website sourcefiles with gulp, storing them into "dist" folder. Then I push content in "dist" to my gh-pages branch, and push everything else to my master branch.
The credentials are stored as security keys with Travis and should work correctly.
To push "dist/", I created a new ".git/" under "dist/" and force push it as new.
To push everything else, I could not do it because the root repository already contains ".git" folder and I do not want to lose my previous commits. It should work.
Thanks for help. | 2018/08/20 | [
"https://Stackoverflow.com/questions/51925941",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1316781/"
] | This is an encryption issue, depending on where your Travis account is hosted (`travis-ci.com` or `travis-ci.org`) & if your project is public.
I use a public project connected on `travis-ci.com`, and the command given by the [Travis-ci doc about deploy to Heroku](https://docs.travis-ci.com/user/deployment/heroku/) isn't working:
`travis encrypt $(heroku auth:token) --add deploy.api_key`
To guarantee a correct encryption, use the `--org` (for `travis-ci.org`) or `--pro` (for `travis-ci.com`) flag; in my case:
`travis encrypt $(heroku auth:token) --add deploy.api_key --pro`
See [Travis-CI issue #10018](https://github.com/travis-ci/travis-ci/issues/10018). | Fixed it by doing this:
```
deploy:
  provider: heroku
  api_key:
    secure: ENCRYPTED_API_KEY
  app: simple-e-shop   # <--------------------
  on:
    repo: john-doe/simple-eshop
```
Before, I had the app name set to the name of the GitHub repo, but it should be the name of the Heroku app. The difference was a single dash character, and it worked!
51,925,941 | I have a Travis CI project connected to GitHub that tries to update content in the GitHub repo and push it back to GitHub, to both the master and gh-pages branches.
However, although my travis-ci log file says everything is ok, I only see the gh-pages branch updated, not the master branch.
My travis.yml file is:
```
language: node_js
node_js: stable
language: python
python: 3.6

# Travis-CI Caching
cache:
  directories:
    - node_modules
    - pip

# S: Build Lifecycle
install:
  - npm install
  - npm install -g gulp
  - python -m pip install requests
  - python -m pip install bs4
  - python -m pip install lxml

before_script:
  - cd archive_builder
  - python build_archive.py
  - cd ..

script:
  - gulp dist

after_script:
  - cd dist
  - git init
  - git config user.name "my git name"
  - git config user.email "my git email"
  - git add -A
  - git commit -m "travis -- update gh-page"
  - git push --force --quiet "https://${GH_TOKEN}@${GH_REF}" master:gh-pages
  - sh ../purgeCF.sh $CF_ZONE $CF_KEY $CF_EMAIL
  - cd ..
  - git add -A
  - git commit -m "travis -- update master files"
  - git push --quiet "https://${GH_TOKEN}@${GH_REF}" HEAD:master
# E: Build LifeCycle

branches:
  only:
    - master

env:
  global:
    - GH_REF: github.com/mygitname/myprojectname.git
```
In this script, I first update and build website sourcefiles with gulp, storing them into "dist" folder. Then I push content in "dist" to my gh-pages branch, and push everything else to my master branch.
The credentials are stored as security keys with Travis and should work correctly.
To push "dist/", I created a new ".git/" under "dist/" and force push it as new.
To push everything else, I could not do it because the root repository already contains ".git" folder and I do not want to lose my previous commits. It should work.
Thanks for help. | 2018/08/20 | [
"https://Stackoverflow.com/questions/51925941",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1316781/"
] | This is an encryption issue, depending on where your Travis account is hosted (`travis-ci.com` or `travis-ci.org`) & if your project is public.
I use a public project connected on `travis-ci.com`, and the command given by the [Travis-ci doc about deploy to Heroku](https://docs.travis-ci.com/user/deployment/heroku/) isn't working:
`travis encrypt $(heroku auth:token) --add deploy.api_key`
To guarantee a correct encryption, use the `--org` (for `travis-ci.org`) or `--pro` (for `travis-ci.com`) flag; in my case:
`travis encrypt $(heroku auth:token) --add deploy.api_key --pro`
See [Travis-CI issue #10018](https://github.com/travis-ci/travis-ci/issues/10018). | In my case I had to log in again using `travis login --pro` and then run `travis encrypt $(heroku auth:token) --add deploy.api_key --pro` to generate a proper API key. I use the free Travis version.
This is mentioned in Travis Heroku doc: <https://docs.travis-ci.com/user/deployment/heroku/>
>
> travis command defaults to using travis-ci.org as the API endpoint. If
> your build runs on travis-ci.com (even if your repository is public),
> add --pro flag to override this:
> `travis encrypt $(heroku auth:token) --add deploy.api_key --pro`
>
>
> |
51,925,941 | I have a Travis CI project connected to GitHub that tries to update content in the GitHub repo and push it back to GitHub, to both the master and gh-pages branches.
However, although my travis-ci log file says everything is ok, I only see the gh-pages branch updated, not the master branch.
My travis.yml file is:
```
language: node_js
node_js: stable
language: python
python: 3.6

# Travis-CI Caching
cache:
  directories:
    - node_modules
    - pip

# S: Build Lifecycle
install:
  - npm install
  - npm install -g gulp
  - python -m pip install requests
  - python -m pip install bs4
  - python -m pip install lxml

before_script:
  - cd archive_builder
  - python build_archive.py
  - cd ..

script:
  - gulp dist

after_script:
  - cd dist
  - git init
  - git config user.name "my git name"
  - git config user.email "my git email"
  - git add -A
  - git commit -m "travis -- update gh-page"
  - git push --force --quiet "https://${GH_TOKEN}@${GH_REF}" master:gh-pages
  - sh ../purgeCF.sh $CF_ZONE $CF_KEY $CF_EMAIL
  - cd ..
  - git add -A
  - git commit -m "travis -- update master files"
  - git push --quiet "https://${GH_TOKEN}@${GH_REF}" HEAD:master
# E: Build LifeCycle

branches:
  only:
    - master

env:
  global:
    - GH_REF: github.com/mygitname/myprojectname.git
```
In this script, I first update and build website sourcefiles with gulp, storing them into "dist" folder. Then I push content in "dist" to my gh-pages branch, and push everything else to my master branch.
The credentials are stored as security keys with Travis and should work correctly.
To push "dist/", I created a new ".git/" under "dist/" and force push it as new.
To push everything else, I could not do it because the root repository already contains ".git" folder and I do not want to lose my previous commits. It should work.
Thanks for help. | 2018/08/20 | [
"https://Stackoverflow.com/questions/51925941",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1316781/"
] | For some reason `heroku auth:token` was returning a wrong token for me, even after making sure that I was logged in to heroku on the command line.
After trying all the solutions posted, what worked for me was:
* Go to [Heroku Account](https://dashboard.heroku.com/account)
* Manually copy the API Key and then paste it into command line:
+ For the ones hosted at [travis-ci.com](https://travis-ci.com):
+ `travis encrypt pasteAPIKeyHere --add deploy.api_key --pro`
+ For the ones hosted at [travis-ci.org](https://travis-ci.org):
+ `travis encrypt pasteAPIKeyHere --add deploy.api_key --org`
Hope this helps. | Fixed it by doing this:
```
deploy:
  provider: heroku
  api_key:
    secure: ENCRYPTED_API_KEY
  app: simple-e-shop   # <--------------------
  on:
    repo: john-doe/simple-eshop
```
Before, I had the app name set to the name of the GitHub repo, but it should be the name of the Heroku app. The difference was a single dash character, and it worked!
51,925,941 | I have a Travis CI project connected to GitHub that tries to update content in the GitHub repo and push it back to GitHub, to both the master and gh-pages branches.
However, although my travis-ci log file says everything is ok, I only see the gh-pages branch updated, not the master branch.
My travis.yml file is:
```
language: node_js
node_js: stable
language: python
python: 3.6

# Travis-CI Caching
cache:
  directories:
    - node_modules
    - pip

# S: Build Lifecycle
install:
  - npm install
  - npm install -g gulp
  - python -m pip install requests
  - python -m pip install bs4
  - python -m pip install lxml

before_script:
  - cd archive_builder
  - python build_archive.py
  - cd ..

script:
  - gulp dist

after_script:
  - cd dist
  - git init
  - git config user.name "my git name"
  - git config user.email "my git email"
  - git add -A
  - git commit -m "travis -- update gh-page"
  - git push --force --quiet "https://${GH_TOKEN}@${GH_REF}" master:gh-pages
  - sh ../purgeCF.sh $CF_ZONE $CF_KEY $CF_EMAIL
  - cd ..
  - git add -A
  - git commit -m "travis -- update master files"
  - git push --quiet "https://${GH_TOKEN}@${GH_REF}" HEAD:master
# E: Build LifeCycle

branches:
  only:
    - master

env:
  global:
    - GH_REF: github.com/mygitname/myprojectname.git
```
In this script, I first update and build website sourcefiles with gulp, storing them into "dist" folder. Then I push content in "dist" to my gh-pages branch, and push everything else to my master branch.
The credentials are stored as security keys with Travis and should work correctly.
To push "dist/", I created a new ".git/" under "dist/" and force push it as new.
To push everything else, I could not do it because the root repository already contains ".git" folder and I do not want to lose my previous commits. It should work.
Thanks for help. | 2018/08/20 | [
"https://Stackoverflow.com/questions/51925941",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1316781/"
] | For some reason `heroku auth:token` was returning a wrong token for me, even after making sure that I was logged in to heroku on the command line.
After trying all the solutions posted, what worked for me was:
* Go to [Heroku Account](https://dashboard.heroku.com/account)
* Manually copy the API Key and then paste it into command line:
+ For the ones hosted at [travis-ci.com](https://travis-ci.com):
+ `travis encrypt pasteAPIKeyHere --add deploy.api_key --pro`
+ For the ones hosted at [travis-ci.org](https://travis-ci.org):
+ `travis encrypt pasteAPIKeyHere --add deploy.api_key --org`
Hope this helps. | In my case I had to log in again using `travis login --pro` and then run `travis encrypt $(heroku auth:token) --add deploy.api_key --pro` to generate a proper API key. I use the free Travis version.
This is mentioned in Travis Heroku doc: <https://docs.travis-ci.com/user/deployment/heroku/>
>
> travis command defaults to using travis-ci.org as the API endpoint. If
> your build runs on travis-ci.com (even if your repository is public),
> add --pro flag to override this:
> `travis encrypt $(heroku auth:token) --add deploy.api_key --pro`
>
>
> |
35,911,557 | I am using this [library](https://pypi.python.org/pypi/django-datatables) for **datatables** in **django-rest**. Everything is working fine except the **request.user session** in views. It seems to me that **django-datatable** is not authenticating the user token, and therefore request.user returns an anonymous user. The same view is even accessible without sending a user token in the headers.
Here is my code:
```
class MyDataTableView(BaseDatatableView):
    """
    """
    model = MyModel
    columns = [***columns** ]
    order_columns = [***columns**]

    def get_initial_queryset(self):
        """
        initial queryset for
        """
        self.request.user  # -----> returns anonymous user
        queryset = self.model.objects
        return queryset
``` | 2016/03/10 | [
"https://Stackoverflow.com/questions/35911557",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5594493/"
] | Have you tried subclassing `BaseDatatableView` and overriding its `.get()` method, like:
```
def get(self, *args, **kwargs):
    response = super().get(*args, **kwargs)
    print(self.request)
    return response
```
My guess is that `get_initial_queryset` can be invoked before the actual request dispatch, so the user is anonymous there. When you look into the code of `django_datatables/mixins.py`, there is a mixin called `JsonResponseMixin`. Its GET method is directly responsible for request processing, so you should look for your answers there. The easiest way is to subclass it and overwrite the method. | Have you added the token JS to the Datatables initialization JS file? django-datatables just creates the correct JSON string. Initializing the cookie is different.
I fought with this a while and my missing piece was that I had to get and set the cookie:
```
// using jQuery
function getCookie(name) {
    var cookieValue = null;
    if (document.cookie && document.cookie !== '') {
        var cookies = document.cookie.split(';');
        for (var i = 0; i < cookies.length; i++) {
            var cookie = jQuery.trim(cookies[i]);
            // Does this cookie string begin with the name we want?
            if (cookie.substring(0, name.length + 1) === (name + '=')) {
                cookieValue = decodeURIComponent(cookie.substring(name.length + 1));
                break;
            }
        }
    }
    return cookieValue;
}

var csrftoken = getCookie('csrftoken');

function csrfSafeMethod(method) {
    // these HTTP methods do not require CSRF protection
    return (/^(GET|HEAD|OPTIONS|TRACE)$/.test(method));
}

$.ajaxSetup({
    beforeSend: function(xhr, settings) {
        if (!csrfSafeMethod(settings.type) && !this.crossDomain) {
            xhr.setRequestHeader("X-CSRFToken", csrftoken);
        }
    }
});
```
This is above where I set the Datatables params, for example:
```
let table = $('#datatables').DataTable({
    "processing": true,
    "serverSide": true,
    stateSave: true,
    "ajax": {
```
........ |
733,574 | I have a list that contains several tuples, like:
```
[('a_key', 'a value'), ('another_key', 'another value')]
```
where the first tuple-values act as dictionary-keys.
I'm now searching for a python-like way to access the key/value-pairs, like:
`"mylist.a_key"` or `"mylist['a_key']"`
without iterating over the list. Any ideas? | 2009/04/09 | [
"https://Stackoverflow.com/questions/733574",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/89006/"
] | You can't do it without any iteration. You will either need iteration to convert it into a dict, at which point key access will become possible sans iteration, or you will need to iterate over it for each key access. Converting to a dict seems the better idea-- in the long run it is more efficient, but more importantly, it represents how you actually see this data structure-- as pairs of keys and values.
```
>>> x = [('a_key', 'a value'), ('another_key', 'another value')]
>>> y = dict(x)
>>> y['a_key']
'a value'
>>> y['another_key']
'another value'
``` | If you're generating the list yourself, you might be able to create it as a dictionary at source (which allows for key, value pairs).
Otherwise, Van Gale's defaultdict is the way to go, I would think.
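If you do hit that duplicate-key corner case, here is a minimal sketch of the `defaultdict` approach (the sample data is illustrative):

```
from collections import defaultdict

pairs = [('a_key', 'a value'), ('a_key', 'extra'), ('another_key', 'another value')]

grouped = defaultdict(list)
for key, value in pairs:  # a single pass over the list
    grouped[key].append(value)

print(grouped['a_key'])  # ['a value', 'extra']
```

Each key then maps to the list of all values that shared it.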
Edit:
As mentioned in the comments, defaultdict is not required here unless you need to deal with corner cases like several values with the same key in your list. Still, if you can originally generate the "list" as a dictionary, you save yourself having to iterate back over it afterwards. |
59,234,661 | I have a simple python script which moves files from my download folder. The script works fine when I run it via the terminal. The issue happens when it gets run through launchd:
```
Traceback (most recent call last):
File "/Users/ben/Project/Automation/CleanDownload.py", line 11, in <module>
for f in listdir(downloadFolder):
OSError: [Errno 1] Operation not permitted: '/Users/ben/Downloads/'
```
Any idea why? | 2019/12/08 | [
"https://Stackoverflow.com/questions/59234661",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1237007/"
] | Here is the workaround that I used to circumvent this issue. I was trying to run a simple shell script, but the same workaround would apply to a Python script. To summarize, the steps involved are:
* Create an automator application called e.g. run-script.app that has a single bash script which runs whatever file is passed to it
* Either give the full automator application **Full Disk Access** via **Security & Privacy** or run it once manually and click **Allow** when macOS prompts for permissions
* Call the automator application with whatever script you want to run
More details:
* Whatever script you're wanting to run, make sure it's executable (using `chmod +x`) and that you have the right `#!` line at the top of the script (e.g. `#!/bin/bash`). In this example, I'll use a script at `~/scripts/organize-screenshots.sh` that moves screenshots from my desktop to my Google Drive directory:
```
#!/bin/bash
user_dir="/Users/soxley"
find "$user_dir"/Desktop -name 'Screen Shot *.png' -exec mv {} "$user_dir"/Google\ Drive/pictures/screenshots/ \;
```
* Next, create an Automator application:
+ Open **Automator**
+ Click **New Document**
+ Select **Application**
+ Click **Choose**
+ Select **Utilities > Run Shell Script**
+ Select **Pass Input: as arguments**
+ Enter `/bin/bash -c "$1"` as the body of the script (see screenshot below)
+ Click **File > Save** and save the application wherever you'd like (`run-script.app` in this example)
* Next, run the application that was just created manually to make sure it has the permissions it needs (you could also grant **Full Disk Access** to the new application in **Security & Privacy**):
+ Open **Terminal.app**
+ Execute the command `open -a run-script.app organize-screenshots.sh`
+ Click **Allow** when macOS asks if the application can access your Desktop
* Now you're ready to configure your script in launchd. Update your .plist with the following `ProgramArguments`:
```
<key>ProgramArguments</key>
<array>
    <string>open</string>
    <string>-a</string>
    <string>/Users/soxley/scripts/run-script.app</string>
    <string>/Users/soxley/scripts/organize-screenshots.sh</string>
</array>
```
Now you should be able to run whatever script you want using this application as a wrapper.
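For reference, a minimal complete launchd `.plist` that this `ProgramArguments` fragment could live in might look like the sketch below; the `Label` and `StartInterval` values are illustrative assumptions, not part of the original setup:

```
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.organize-screenshots</string>
    <key>ProgramArguments</key>
    <array>
        <string>open</string>
        <string>-a</string>
        <string>/Users/soxley/scripts/run-script.app</string>
        <string>/Users/soxley/scripts/organize-screenshots.sh</string>
    </array>
    <key>StartInterval</key>
    <integer>300</integer>
</dict>
</plist>
```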
[](https://i.stack.imgur.com/8VqAB.jpg)
[](https://i.stack.imgur.com/Xi9pR.jpg) | If you haven't seen it, check out ["OSError: [Errno 1] Operation not permitted" when installing Scrapy in OSX 10.11 (El Capitan) (System Integrity Protection)](https://stackoverflow.com/questions/31900008/oserror-errno-1-operation-not-permitted-when-installing-scrapy-in-osx-10-11)
and this
<https://apple.stackexchange.com/questions/339862/ls-operation-not-permitted-mojave-security>
According to **Roger**,
Go to **System Preferences** -> **Security & Privacy**
and give Full Disk Access to `Terminal`.
<https://discussions.apple.com/thread/8637915> |
59,234,661 | I have a simple python script which moves files from my download folder. The script works fine when I run it via the terminal. The issue happens when it gets run through launchd:
```
Traceback (most recent call last):
File "/Users/ben/Project/Automation/CleanDownload.py", line 11, in <module>
for f in listdir(downloadFolder):
OSError: [Errno 1] Operation not permitted: '/Users/ben/Downloads/'
```
Any idea why? | 2019/12/08 | [
"https://Stackoverflow.com/questions/59234661",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1237007/"
] | Here is the workaround that I used to circumvent this issue. I was trying to run a simple shell script, but the same workaround would apply to a Python script. To summarize, the steps involved are:
* Create an automator application called e.g. run-script.app that has a single bash script which runs whatever file is passed to it
* Either give the full automator application **Full Disk Access** via **Security & Privacy** or run it once manually and click **Allow** when macOS prompts for permissions
* Call the automator application with whatever script you want to run
More details:
* Whatever script you're wanting to run, make sure it's executable (using `chmod +x`) and that you have the right `#!` line at the top of the script (e.g. `#!/bin/bash`). In this example, I'll use a script at `~/scripts/organize-screenshots.sh` that moves screenshots from my desktop to my Google Drive directory:
```
#!/bin/bash
user_dir="/Users/soxley"
find "$user_dir"/Desktop -name 'Screen Shot *.png' -exec mv {} "$user_dir"/Google\ Drive/pictures/screenshots/ \;
```
* Next, create an Automator application:
+ Open **Automator**
+ Click **New Document**
+ Select **Application**
+ Click **Choose**
+ Select **Utilities > Run Shell Script**
+ Select **Pass Input: as arguments**
+ Enter `/bin/bash -c "$1"` as the body of the script (see screenshot below)
+ Click **File > Save** and save the application wherever you'd like (`run-script.app` in this example)
* Next, run the application that was just created manually to make sure it has the permissions it needs (you could also grant **Full Disk Access** to the new application in **Security & Privacy**):
+ Open **Terminal.app**
+ Execute the command `open -a run-script.app organize-screenshots.sh`
+ Click **Allow** when macOS asks if the application can access your Desktop
* Now you're ready to configure your script in launchd. Update your .plist with the following `ProgramArguments`:
```
<key>ProgramArguments</key>
<array>
    <string>open</string>
    <string>-a</string>
    <string>/Users/soxley/scripts/run-script.app</string>
    <string>/Users/soxley/scripts/organize-screenshots.sh</string>
</array>
```
Now you should be able to run whatever script you want using this application as a wrapper.
[](https://i.stack.imgur.com/8VqAB.jpg)
[](https://i.stack.imgur.com/Xi9pR.jpg) | Have you tried giving `/sbin/launchd` Full Disk Access? |
59,234,661 | I have a simple python script which moves files from my download folder. The script works fine when I run it via the terminal. The issue happens when it gets run through launchd:
```
Traceback (most recent call last):
File "/Users/ben/Project/Automation/CleanDownload.py", line 11, in <module>
for f in listdir(downloadFolder):
OSError: [Errno 1] Operation not permitted: '/Users/ben/Downloads/'
```
Any idea why? | 2019/12/08 | [
"https://Stackoverflow.com/questions/59234661",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1237007/"
] | Here is the workaround that I used to circumvent this issue. I was trying to run a simple shell script, but the same workaround would apply to a Python script. To summarize, the steps involved are:
* Create an automator application called e.g. run-script.app that has a single bash script which runs whatever file is passed to it
* Either give the full automator application **Full Disk Access** via **Security & Privacy** or run it once manually and click **Allow** when macOS prompts for permissions
* Call the automator application with whatever script you want to run
More details:
* Whatever script you're wanting to run, make sure it's executable (using `chmod +x`) and that you have the right `#!` line at the top of the script (e.g. `#!/bin/bash`). In this example, I'll use a script at `~/scripts/organize-screenshots.sh` that moves screenshots from my desktop to my Google Drive directory:
```
#!/bin/bash
user_dir="/Users/soxley"
find "$user_dir"/Desktop -name 'Screen Shot *.png' -exec mv {} "$user_dir"/Google\ Drive/pictures/screenshots/ \;
```
* Next, create an Automator application:
+ Open **Automator**
+ Click **New Document**
+ Select **Application**
+ Click **Choose**
+ Select **Utilities > Run Shell Script**
+ Select **Pass Input: as arguments**
+ Enter `/bin/bash -c "$1"` as the body of the script (see screenshot below)
+ Click **File > Save** and save the application wherever you'd like (`run-script.app` in this example)
* Next, run the application that was just created manually to make sure it has the permissions it needs (you could also grant **Full Disk Access** to the new application in **Security & Privacy**):
+ Open **Terminal.app**
+ Execute the command `open -a run-script.app organize-screenshots.sh`
+ Click **Allow** when macOS asks if the application can access your Desktop
* Now you're ready to configure your script in launchd. Update your .plist with the following `ProgramArguments`:
```
<key>ProgramArguments</key>
<array>
    <string>open</string>
    <string>-a</string>
    <string>/Users/soxley/scripts/run-script.app</string>
    <string>/Users/soxley/scripts/organize-screenshots.sh</string>
</array>
```
Now you should be able to run whatever script you want using this application as a wrapper.
[](https://i.stack.imgur.com/8VqAB.jpg)
[](https://i.stack.imgur.com/Xi9pR.jpg) | I was banging my head against this issue on Big Sur for a long time. What worked for me was the following:
1. Grant Full Disk Access to Python3
2. Grant Full Disk Access to launchd & launchctl |
55,670,030 | I'm trying to do some design automation in CATIA. I'm using Python: I record macros in CATIA and then translate the code there into Python code. Now I have stumbled upon a problem.
Below is the macro from CATIA that I want to translate into Python code.
```
Language="VBSCRIPT"

Sub CATMain()
    Set productDocument1 = CATIA.ActiveDocument
    Set product1 = productDocument1.Product
    Set product1 = product1.ReferenceProduct
    Set constraints1 = product1.Connections("CATIAConstraints")
    Set reference1 = product1.CreateReferenceFromName("ContainerSchiff/Container1/!yz plane")
    Set reference2 = product1.CreateReferenceFromName("ContainerSchiff/Container0/!Geometrical Set.1/Point.2")
    ' ---- Here is the problem ----
    Set constraint1 = constraints1.AddBiEltCst(catCstTypeDistance, reference1, reference2)
    Set length1 = constraint1.Dimension
    length1.Value = 300.000000
    product1.Update
End Sub
```
When translating this, I have no idea what to do with `catCstTypeDistance`.
If I leave it as is, then Python will obviously complain about the name not being defined. If I pass it as a string, it complains as well. Below is the relevant part of the Python code:
```py
else:
    add_container_skeleton(product1, i)
    product1.ReferenceProduct
    constraints1 = product1.Connections("CATIAConstraints")
    Name1 = "ContainerSchiff/Container" + str(i-1) + "/!Container1/yz plane"
    Name2 = "ContainerSchiff/Container" + str(i) + "/!Geometrical Set.1/Point.2"
    reference1 = product1.CreateReferenceFromName(Name1)
    reference2 = product1.CreateReferenceFromName(Name2)
    constraint1 = constraints1.AddBiEltCst('catCstTypeDistance', reference1, reference2)
    length1 = constraint1.Dimension
    length1.Value = 300.000000
and the two different error messages that I get.
```
File "C:\Users\Mange\Documents\LIU\Catia part 2\first_draft.py", line 179, in place_containers
constraint1 = constraints1.AddBiEltCst('catCstTypeDistance', reference1, reference2)
File "<COMObject Connections>", line 3, in AddBiEltCst
ValueError: invalid literal for int() with base 10: 'catCstTypeDistance'
```
```
File "C:\Users\Mange\Documents\LIU\Catia part 2\first_draft.py", line 179, in place_containers
constraint1 = constraints1.AddBiEltCst(catCstTypeDistance(), reference1, reference2)
NameError: name 'catCstTypeDistance' is not defined
```
How can I "access" this thing/object(?) from python? | 2019/04/13 | [
"https://Stackoverflow.com/questions/55670030",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9778872/"
] | Using preventDefault() on parent elements prevents the original event from being fired, but does not stop propagation. Understanding hierarchy and event propagation is crucial.
You have part of the solution in your code snippet. If you comment out that particular line, the code works properly, like you would expect.
But if you use
```
e.stopPropagation();
```
it also works.
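To see why the two calls behave differently, here is a toy simulation of the event flow. This is not real DOM code, just an illustrative model of bubbling with `preventDefault` and `stopPropagation` flags:

```
// Toy model of DOM event semantics (bubbling, preventDefault, stopPropagation).
function dispatch(path, listeners) {
  const event = {
    defaultPrevented: false,
    stopped: false,
    preventDefault() { this.defaultPrevented = true; },
    stopPropagation() { this.stopped = true; }
  };
  for (const node of path) {        // bubble from target up through ancestors
    if (event.stopped) break;       // stopPropagation halts the bubbling
    (listeners[node] || []).forEach(fn => fn(event));
  }
  return event;
}

// A click on the radio bubbles to the parent div, whose handler calls preventDefault:
const e1 = dispatch(['radio', 'div'], { div: [ev => ev.preventDefault()] });
console.log(e1.defaultPrevented); // true, so the radio never becomes checked

// stopPropagation on the radio keeps the event from reaching the div's handler:
const e2 = dispatch(['radio', 'div'], {
  radio: [ev => ev.stopPropagation()],
  div: [ev => ev.preventDefault()]
});
console.log(e2.defaultPrevented); // false, so the default (checking) proceeds
```

In the real DOM the same logic applies: `preventDefault` travels with the event object, while `stopPropagation` keeps it from ever reaching the parent's handler.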
In order not to repeat information already on the web, I found a very similar case here ([Why does preventDefault() on a parent element's click 'disable' a checkbox?](https://stackoverflow.com/questions/15767083/why-does-preventdefault-on-a-parent-elements-click-disable-a-checkbox)), which may help you understand event propagation and bubbling better.
You can also find an even better explanation here (<https://medium.freecodecamp.org/a-simplified-explanation-of-event-propagation-in-javascript-f9de7961a06e>).
MDN documentation also rarely fails to impress (<https://developer.mozilla.org/en-US/docs/Web/API/Event/preventDefault>). | You just need to add `e.stopPropagation()` to restore the radio button's default functionality, which is becoming `checked`.
The behaviour is this: the `radio` is a child of the `div`, your `click` listener is bound to the parent `div`, and when you call `preventDefault` the children inherit the prevention as well.
check [that](https://developer.mozilla.org/en-US/docs/Web/API/Element/click_event)
>
>
> >
> > If the button is pressed on one element and released on a different one, the event is fired on the most specific ancestor element that contained both.
> >
> >
> >
>
>
> |
55,670,030 | I'm trying to do some design automation in CATIA. I'm using Python: I record macros in CATIA and then translate that code to Python code. Now I have stumbled upon a problem.
Below is the macro from CATIA that I want to translate into Python code.
```
Language="VBSCRIPT"
Sub CATMain()
Set productDocument1 = CATIA.ActiveDocument
Set product1 = productDocument1.Product
Set product1 = product1.ReferenceProduct
Set constraints1 = product1.Connections("CATIAConstraints")
Set reference1 = product1.CreateReferenceFromName("ContainerSchiff/Container1/!yz plane")
Set reference2 = product1.CreateReferenceFromName("ContainerSchiff/Container0/!Geometrical Set.1/Point.2")
----Here is the problem ---- Set constraint1 = constraints1.AddBiEltCst(catCstTypeDistance, reference1, reference2)
Set length1 = constraint1.Dimension
length1.Value = 300.000000
product1.Update
End Sub
```
When translating this, I have no idea what to do with `catCstTypeDistance`.
If I leave it as is, Python will obviously complain about the name not being defined. If I pass it as a string, it complains as well. Below is the relevant part of the Python code:
```py
else:
add_container_skeleton(product1,i)
product1.ReferenceProduct
constraints1=product1.Connections("CATIAConstraints")
Name1="ContainerSchiff/Container" + str(i-1) + "/!Container1/yz plane"
Name2="ContainerSchiff/Container" + str(i) + "/!Geometrical Set.1/Point.2"
reference1= product1.CreateReferenceFromName(Name1)
reference2 = product1.CreateReferenceFromName(Name2)
constraint1 = constraints1.AddBiEltCst('catCstTypeDistance', reference1, reference2)
length1 = constraint1.Dimension
length1.Value = 300.000000
```
and the two different error messages that I get.
```
File "C:\Users\Mange\Documents\LIU\Catia part 2\first_draft.py", line 179, in place_containers
constraint1 = constraints1.AddBiEltCst('catCstTypeDistance', reference1, reference2)
File "<COMObject Connections>", line 3, in AddBiEltCst
ValueError: invalid literal for int() with base 10: 'catCstTypeDistance'
```
```
File "C:\Users\Mange\Documents\LIU\Catia part 2\first_draft.py", line 179, in place_containers
constraint1 = constraints1.AddBiEltCst(catCstTypeDistance(), reference1, reference2)
NameError: name 'catCstTypeDistance' is not defined
```
How can I "access" this thing/object(?) from python? | 2019/04/13 | [
"https://Stackoverflow.com/questions/55670030",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9778872/"
] | Here is how I see this situation (inspired by <https://stackoverflow.com/a/15767580/11357125>):
1. You click on the radio
2. **It gets checked**
3. The event is dispatched on the document root
4. Capture phase, nothing happens with your handlers
5. The event arrives at the `<input>`
6. …and begins to bubble
7. On the `<div>`, it is handled. The event listener calls the `preventDefault` method, setting an internal `cancelled` flag. The `<div>` gets the class '.checked' and the radio is set as **checked again, now programmatically**.
8. Event bubbles on, but nothing happens any more.
9. Since the event was cancelled, the default action should not occur and the checkbox is reset to its previous state **even after it was checked programmatically**. | You just need to add `e.stopPropagation()` to restore the radio button's default behaviour, which is becoming `checked`.
The behaviour is this: the `radio` is a child of the `div`, your `click` listener is bound to the parent `div`, and when you call `preventDefault` the children inherit the prevention as well.
check [that](https://developer.mozilla.org/en-US/docs/Web/API/Element/click_event)
>
>
> >
> > If the button is pressed on one element and released on a different one, the event is fired on the most specific ancestor element that contained both.
> >
> >
> >
>
>
> |
55,670,030 | I'm trying to do some design automation in CATIA. I'm using Python: I record macros in CATIA and then translate that code to Python code. Now I have stumbled upon a problem.
Below is the macro from CATIA that I want to translate into Python code.
```
Language="VBSCRIPT"
Sub CATMain()
Set productDocument1 = CATIA.ActiveDocument
Set product1 = productDocument1.Product
Set product1 = product1.ReferenceProduct
Set constraints1 = product1.Connections("CATIAConstraints")
Set reference1 = product1.CreateReferenceFromName("ContainerSchiff/Container1/!yz plane")
Set reference2 = product1.CreateReferenceFromName("ContainerSchiff/Container0/!Geometrical Set.1/Point.2")
----Here is the problem ---- Set constraint1 = constraints1.AddBiEltCst(catCstTypeDistance, reference1, reference2)
Set length1 = constraint1.Dimension
length1.Value = 300.000000
product1.Update
End Sub
```
When translating this, I have no idea what to do with `catCstTypeDistance`.
If I leave it as is, Python will obviously complain about the name not being defined. If I pass it as a string, it complains as well. Below is the relevant part of the Python code:
```py
else:
add_container_skeleton(product1,i)
product1.ReferenceProduct
constraints1=product1.Connections("CATIAConstraints")
Name1="ContainerSchiff/Container" + str(i-1) + "/!Container1/yz plane"
Name2="ContainerSchiff/Container" + str(i) + "/!Geometrical Set.1/Point.2"
reference1= product1.CreateReferenceFromName(Name1)
reference2 = product1.CreateReferenceFromName(Name2)
constraint1 = constraints1.AddBiEltCst('catCstTypeDistance', reference1, reference2)
length1 = constraint1.Dimension
length1.Value = 300.000000
```
and the two different error messages that I get.
```
File "C:\Users\Mange\Documents\LIU\Catia part 2\first_draft.py", line 179, in place_containers
constraint1 = constraints1.AddBiEltCst('catCstTypeDistance', reference1, reference2)
File "<COMObject Connections>", line 3, in AddBiEltCst
ValueError: invalid literal for int() with base 10: 'catCstTypeDistance'
```
```
File "C:\Users\Mange\Documents\LIU\Catia part 2\first_draft.py", line 179, in place_containers
constraint1 = constraints1.AddBiEltCst(catCstTypeDistance(), reference1, reference2)
NameError: name 'catCstTypeDistance' is not defined
```
How can I "access" this thing/object(?) from python? | 2019/04/13 | [
"https://Stackoverflow.com/questions/55670030",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9778872/"
] | Here is how I see this situation (inspired by <https://stackoverflow.com/a/15767580/11357125>):
1. You click on the radio
2. **It gets checked**
3. The event is dispatched on the document root
4. Capture phase, nothing happens with your handlers
5. The event arrives at the `<input>`
6. …and begins to bubble
7. On the `<div>`, it is handled. The event listener calls the `preventDefault` method, setting an internal `cancelled` flag. The `<div>` gets the class '.checked' and the radio is set as **checked again, now programmatically**.
8. Event bubbles on, but nothing happens any more.
9. Since the event was cancelled, the default action should not occur and the checkbox is reset to its previous state **even after it was checked programmatically**. | Using preventDefault() on parent elements prevents the event's default action from firing, but does not stop propagation. Understanding hierarchy and event propagation is crucial.
You have part of the solution in your code snippet. If you comment out that particular line, the code works properly, like you would expect.
But if you use
```
e.stopPropagation();
```
it also works.
In order not to repeat information already on the web, I found a very similar case here ([Why does preventDefault() on a parent element's click 'disable' a checkbox?](https://stackoverflow.com/questions/15767083/why-does-preventdefault-on-a-parent-elements-click-disable-a-checkbox)) which may help you understand better event propagation and bubbling.
You can also find an even better explanation here (<https://medium.freecodecamp.org/a-simplified-explanation-of-event-propagation-in-javascript-f9de7961a06e>).
MDN documentation also rarely fails to impress (<https://developer.mozilla.org/en-US/docs/Web/API/Event/preventDefault>). |
60,903,896 | First off I apologize if my terminology is way off and this is a basic question that has been answered a million times! I am trying to figure this out without knowing what it is called, so my searches have not been turning anything useful up...
I often find myself assigning certain "properties" to class instances in python which I will want to modify and reference later. A good example would be the "status" of an instance like in the following code:
```
class Example:
def __init__(self):
self.status = "NORMAL"
a = Example()
print(a.status)
a.status = "CANCELLED"
print(a.status)
```
While this certainly works, it requires that the property be a string, which is not very maintainable and is quite prone to error. Is there some way of assigning an object to the class which can be passed to an attribute? For example (and I know this does not work):
```
class ExampleWithProperty:
NORMAL
CANCELLED
def __init__(self):
self.status = self.NORMAL
b = ExampleWithProperty()
print(b.status)
# would expect: ExampleWithProperty.NORMAL or b.NORMAL
b.status = b.CANCELLED
print(b.status)
# would expect: ExampleWithProperty.CANCELLED or b.CANCELLED
```
I believe I've seen a similar functionality in other languages but I wasn't able to think of how to do this in python! | 2020/03/28 | [
"https://Stackoverflow.com/questions/60903896",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2909253/"
] | I think you're looking for Enums
```
>>> from enum import Enum
>>> class Color(Enum):
... RED = 1
... GREEN = 2
... BLUE = 3
...
```
<https://docs.python.org/3/library/enum.html>
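As a quick illustration of how enum members behave (an editor's sketch; the `State` names below simply mirror the statuses from the question):

```python
from enum import Enum

class State(Enum):
    NORMAL = "NORMAL"
    CANCELLED = "CANCELLED"

s = State.NORMAL
print(s is State.NORMAL)      # True -- members are singletons, so identity checks work
print(s == State.CANCELLED)   # False
print(s.name, s.value)        # NORMAL NORMAL
```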
Edit:
```py
from enum import Enum
class Status(Enum):
NORMAL = 1
CHANGED = 2
class MyClass:
def __init__(self):
self.status = Status.NORMAL
instance = MyClass()
instance.status = Status.CHANGED
``` | To add to sergenp's helpful answer, this is how I would add an Enum to an existing class in a visually "cleaner" way which will be easier to reference from outside the class:
```
from enum import Enum

class Example():
class StatusOptions(Enum):
NORMAL = 0
CANCELLED = 1
def __init__(self):
self.NORMAL = self.StatusOptions.NORMAL
self.CANCELLED = self.StatusOptions.CANCELLED
self.status = self.StatusOptions(self.NORMAL)
a = Example()
print(a.status)
a.status = a.CANCELLED
print(a.status)
print(a.status == a.CANCELLED)
``` |
62,146,235 | I would like to make a python script that can either process a string fed into the command line
`python my_script.py "Hello World"`
or a set of strings inside a file (e.g. input\_file.txt)
`python my_script.py -i input_file.txt`
Is there a way to do this via `argparse`? So far I can handle input files, but I don't know how to add the option of just processing a string fed directly in the command line. | 2020/06/02 | [
"https://Stackoverflow.com/questions/62146235",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | Make argparse accept one positional argument (filename or string) and one flag (-i). If the flag is present, treat the argument as a file. Take a look at the [argparse tutorial](https://docs.python.org/3/howto/argparse.html#combining-positional-and-optional-arguments) for more info. I've modified the example to fit your needs.
```
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("inp", help="input string or input file")
parser.add_argument("-i", "--treat-as-file", action="store_true",
help="reads from file if specified")
args = parser.parse_args()
if args.treat_as_file:
print("Treating {} as input file".format(args.inp))
else:
print("Treating {} as the input".format(args.inp))
```
Output:
```
/tmp $ python test.py abcde
Treating abcde as the input
/tmp $ python test.py -i abcde
Treating abcde as input file
``` | To process a command line string, you can use `sys.argv`, which is a list of all the arguments fed into the command line.
main.py:
```
import sys
print(sys.argv)
```
Running the following line in the CLI
`>> python main.py foo bar "hello world"`
would output:
`['main.py', 'foo', 'bar', 'hello world']` |
47,523,741 | I have list A, which contains values separated by "|" and looks like this
```
df1['original']= ['aaa|bbb|ccc','aaa|ccc','aaa|ccc|ddd']
```
output:
```
aaa|bbb|ccc
aaa|ccc
aaa|ccc|ddd
```
In another df, I have two more columns. The first one looks like this: `df1['title']=['aaa','bbb','ccc','ddd']`.
And correspondingly, I have another column next to df1['title'].
`df1['new_value']=['x','y','z','w']`.
output:
```
title new_value
aaa x
bbb y
ccc z
ddd w
```
I want to look up every value in `df1['original']` and replace it with the corresponding result based on `df1['title']` and `df1['new_value']`. I want to keep the `|` in the original column. The final output should look like this:
```
df1['New']=['x|y|z','x|z','x|z|w']
```
output:
New
```
x|y|z
x|z
x|z|w
```
This is like a conditional lookup in Excel. Any ideas how to achieve this in Python?
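The transformation described above can be sketched in plain Python (an editor's illustration, not part of the original question; with a DataFrame, the same mapping can be applied via `df1['original'].apply(...)`):

```python
# Build a title -> new_value mapping, then translate each '|'-separated token.
original = ['aaa|bbb|ccc', 'aaa|ccc', 'aaa|ccc|ddd']
mapping = {'aaa': 'x', 'bbb': 'y', 'ccc': 'z', 'ddd': 'w'}

new = ['|'.join(mapping.get(part, part) for part in row.split('|'))
       for row in original]
print(new)  # ['x|y|z', 'x|z', 'x|z|w']
```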
Thanks for your time!
The link below can only match the ones that contain one value.
Look up and replace values in a list (pandas) | 2017/11/28 | [
"https://Stackoverflow.com/questions/47523741",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6933497/"
] | You can also achieve this manually by using a [named constructor](https://www.dartlang.org/resources/dart-tips/dart-tips-ep-11), as in this simple example:
```
import 'package:flutter/material.dart';
Map myMap = {"Users": [
{"Name": "Mark", "Email": "mark@email"},
{"Name": "Rick", "Email": "rick@email"},
]
};
class MyData {
String name;
String email;
MyData.fromJson(Map json){
this.name = json["Name"];
this.email = json["Email"];
}
}
class UserList extends StatelessWidget {
MyData data;
@override
Widget build(BuildContext context) {
return new Scaffold(
appBar: new AppBar(title: new Text("User List"),),
body:
new ListView.builder(
shrinkWrap: true,
itemCount: myMap["Users"].length,
itemBuilder: (BuildContext context, int index) {
data = new MyData.fromJson(myMap["Users"][index]);
return new Text("${data.name} ${data.email}");
})
);
}
}
```
[](https://i.stack.imgur.com/j3Yyh.png) | There is no built-in way.
You can use one of the serialization packages like
* <https://pub.dev/packages/json_serializable>
* <https://pub.dev/packages/built_value>
* ... |
47,523,741 | I have list A, which contains values separated by "|" and looks like this
```
df1['original']= ['aaa|bbb|ccc','aaa|ccc','aaa|ccc|ddd']
```
output:
```
aaa|bbb|ccc
aaa|ccc
aaa|ccc|ddd
```
In another df, I have two more columns. The first one looks like this: `df1['title']=['aaa','bbb','ccc','ddd']`.
And correspondingly, I have another column next to df1['title'].
`df1['new_value']=['x','y','z','w']`.
output:
```
title new_value
aaa x
bbb y
ccc z
ddd w
```
I want to look up every value in `df1['original']` and replace it with the corresponding result based on `df1['title']` and `df1['new_value']`. I want to keep the `|` in the original column. The final output should look like this:
```
df1['New']=['x|y|z','x|z','x|z|w']
```
output:
New
```
x|y|z
x|z
x|z|w
```
This is like a conditional lookup in Excel. Any ideas how to achieve this in Python?
Thanks for your time!
The link below can only match the ones that contain one value.
Look up and replace values in a list (pandas) | 2017/11/28 | [
"https://Stackoverflow.com/questions/47523741",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6933497/"
] | There is no built-in way.
You can use one of the serialization packages like
* <https://pub.dev/packages/json_serializable>
* <https://pub.dev/packages/built_value>
* ... | [You can use Function.apply](https://stackoverflow.com/a/16690274/2234013). I made a [gist](https://gist.github.com/micimize/67e428dca235e26b84c193d403a6b643) to test it out the linked answer |
47,523,741 | I have list A, which contains values separated by "|" and looks like this
```
df1['original']= ['aaa|bbb|ccc','aaa|ccc','aaa|ccc|ddd']
```
output:
```
aaa|bbb|ccc
aaa|ccc
aaa|ccc|ddd
```
In another df, I have two more columns. The first one looks like this: `df1['title']=['aaa','bbb','ccc','ddd']`.
And correspondingly, I have another column next to df1['title'].
`df1['new_value']=['x','y','z','w']`.
output:
```
title new_value
aaa x
bbb y
ccc z
ddd w
```
I want to look up every value in `df1['original']` and replace it with the corresponding result based on `df1['title']` and `df1['new_value']`. I want to keep the `|` in the original column. The final output should look like this:
```
df1['New']=['x|y|z','x|z','x|z|w']
```
output:
New
```
x|y|z
x|z
x|z|w
```
This is like a conditional lookup in Excel. Any ideas how to achieve this in Python?
Thanks for your time!
The link below can only match the ones that contain one value.
Look up and replace values in a list (pandas) | 2017/11/28 | [
"https://Stackoverflow.com/questions/47523741",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6933497/"
] | You can also achieve this manually by using a [named constructor](https://www.dartlang.org/resources/dart-tips/dart-tips-ep-11), as in this simple example:
```
import 'package:flutter/material.dart';
Map myMap = {"Users": [
{"Name": "Mark", "Email": "mark@email"},
{"Name": "Rick", "Email": "rick@email"},
]
};
class MyData {
String name;
String email;
MyData.fromJson(Map json){
this.name = json["Name"];
this.email = json["Email"];
}
}
class UserList extends StatelessWidget {
MyData data;
@override
Widget build(BuildContext context) {
return new Scaffold(
appBar: new AppBar(title: new Text("User List"),),
body:
new ListView.builder(
shrinkWrap: true,
itemCount: myMap["Users"].length,
itemBuilder: (BuildContext context, int index) {
data = new MyData.fromJson(myMap["Users"][index]);
return new Text("${data.name} ${data.email}");
})
);
}
}
```
[](https://i.stack.imgur.com/j3Yyh.png) | [You can use Function.apply](https://stackoverflow.com/a/16690274/2234013). I made a [gist](https://gist.github.com/micimize/67e428dca235e26b84c193d403a6b643) to test it out the linked answer |
36,422,068 | I am using the Apache-Spark service at Bluemix. I am currently having trouble accessing my Object Storage through spark-submit.
I know that the file exists and is accessible through Swift in the Jupyter notebook. I use the following to verify:
```
file_name = "swift://notebooks.spark/small.verbatim"
text_file = sc.textFile(file_name)
print "number of verbatims", text_file.count()
```
and the output is:
```
number of verbatims 100
```
but when I try to do the same with spark-submit, I get an error.
This is the code that I am submitting through spark-submit:
```
import sys, traceback
from pymongo import MongoClient
import time
from datetime import datetime
from pyspark import SparkContext
sc = SparkContext('local', 'Schedule Insight Extractor')
try:
file_name = "swift://notebooks.spark/small.verbatim" # small dataset
text_file = sc.textFile(file_name)
r = None
r = "number of verbatims", text_file.count()
except:
e = sys.exc_info()[0]
print ("ERROR %s", e)
traceback.print_exc(file=sys.stdout)
```
And this throws the following exception:
```
Traceback (most recent call last):
File "/gpfs/fs01/user/sf6d-7c3a9c08343577-05540e1c503a/data/workdir/spark-driver-cece5080-17dd-48e4-9036-52788e5a7b77/test_spark_submit.py", line 20, in <module>
r = "number of verbatims", text_file.count()
File "/usr/local/src/spark160master/spark-1.6.0-bin-2.6.0/python/lib/pyspark.zip/pyspark/rdd.py", line 1004, in count
return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
File "/usr/local/src/spark160master/spark-1.6.0-bin-2.6.0/python/lib/pyspark.zip/pyspark/rdd.py", line 995, in sum
return self.mapPartitions(lambda x: [sum(x)]).fold(0, operator.add)
File "/usr/local/src/spark160master/spark-1.6.0-bin-2.6.0/python/lib/pyspark.zip/pyspark/rdd.py", line 869, in fold
vals = self.mapPartitions(func).collect()
File "/usr/local/src/spark160master/spark-1.6.0-bin-2.6.0/python/lib/pyspark.zip/pyspark/rdd.py", line 771, in collect
port = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
File "/usr/local/src/spark160master/spark-1.6.0-bin-2.6.0/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__
answer, self.gateway_client, self.target_id, self.name)
File "/usr/local/src/spark160master/spark-1.6.0-bin-2.6.0/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value
format(target_id, ".", name), value)
Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: java.lang.NullPointerException
at org.apache.commons.httpclient.HttpMethodBase.getStatusCode(HttpMethodBase.java:570)
at org.apache.hadoop.fs.swift.exceptions.SwiftInvalidResponseException.<init>(SwiftInvalidResponseException.java:53)
at org.apache.hadoop.fs.swift.http.SwiftRestClient.buildException(SwiftRestClient.java:1827)
at org.apache.hadoop.fs.swift.http.SwiftRestClient.perform(SwiftRestClient.java:1728)
at org.apache.hadoop.fs.swift.http.SwiftRestClient.perform(SwiftRestClient.java:1662)
at org.apache.hadoop.fs.swift.http.SwiftRestClient.authenticate(SwiftRestClient.java:1154)
at org.apache.hadoop.fs.swift.http.SwiftRestClient.authIfNeeded(SwiftRestClient.java:1618)
at org.apache.hadoop.fs.swift.http.SwiftRestClient.preRemoteCommand(SwiftRestClient.java:1634)
at org.apache.hadoop.fs.swift.http.SwiftRestClient.headRequest(SwiftRestClient.java:1085)
at org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystemStore.stat(SwiftNativeFileSystemStore.java:258)
at org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystemStore.getObjectMetadata(SwiftNativeFileSystemStore.java:213)
at org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystemStore.getObjectMetadata(SwiftNativeFileSystemStore.java:182)
at org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem.getFileStatus(SwiftNativeFileSystem.java:174)
at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
at org.apache.hadoop.fs.Globber.glob(Globber.java:252)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1644)
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:257)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:199)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.api.python.PythonRDD.getPartitions(PythonRDD.scala:58)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1934)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:927)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.collect(RDD.scala:926)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:405)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:95)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:507)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
at py4j.Gateway.invoke(Gateway.java:259)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:209)
at java.lang.Thread.run(Thread.java:785)
```
What am I doing wrong? Am I supposed to use the SwiftClient? Is the URL not well structured? | 2016/04/05 | [
"https://Stackoverflow.com/questions/36422068",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3501515/"
] | You are trying to access the notebooks container using the default Hadoop configuration of 'spark' that can be used in your Notebook Environment on the Bluemix service.
With spark-submit, you are actually invoking a new Spark context to work with.
You would need to explicitly set the new Object Storage credentials:
```
def set_hadoop_config(creds):
    prefix = "fs.swift.service." + creds['name']
    hconf = sc._jsc.hadoopConfiguration()
    hconf.set(prefix + ".auth.url", creds['auth_url'] + '/v2.0/tokens')
    hconf.set(prefix + ".auth.endpoint.prefix", "endpoints")
    hconf.set(prefix + ".tenant", creds['project_id'])
    hconf.set(prefix + ".username", creds['user_id'])
    hconf.set(prefix + ".password", creds['password'])
    hconf.setInt(prefix + ".http.port", 8080)
    hconf.set(prefix + ".region", creds['region'])
    hconf.setBoolean(prefix + ".public", True)

ObjectSCredentials = {
    'username': 'XXXXXXXXXXX',
    'password': 'XXXXXXXXXXX',
    'auth_url': 'https://identity.open.softlayer.com',
    'project': 'XXXXXXXXXXX',
    'project_id': 'XXXXXXXXXXX',
    'region': 'dallas',
    'user_id': 'XXXXXXXXXXX',
    'domain_id': 'XXXXXXXXXXX',
    'domain_name': 'XXXXXXXXXXX',
    'filename': 'small.verbatim',
    'container': 'notebooks',
    'tenantId': 'XXXXXXXXX'
}

ObjectSCredentials['name'] = 'TEST'

set_hadoop_config(ObjectSCredentials)

rdddata = sc.textFile("swift://notebooks." + ObjectSCredentials['name'] + "/small.verbatim")
```
Thanks,
Charles. | Can you use the following for auth url property:
Replace
```
hconf.set(prefix + ".auth.url", creds['auth_url'] + '/v2.0/tokens')
```
WITH
```
hconf.set(prefix + ".auth.url", creds['auth_url']+'/v3/auth/tokens')
``` |
38,665,823 | ```
{
"Steps": [
{
"Status": {
"State": "PENDING",
"StateChangeReason": {}
},
"ActionOnFailure": "CANCEL_AND_WAIT",
"Name": "ABCD"
},
{
"Status": {
"State": "COMPLETED",
"StateChangeReason": {}
},
"ActionOnFailure": "CANCEL_AND_WAIT",
"Name": "KLMN"
},
{
"Status": {
"Timeline": {
"CreationDateTime": 1469815629.4289999
},
"State": "PENDING",
"StateChangeReason": {}
},
"ActionOnFailure": "TERMINATE_CLUSTER",
"Name": "XYZ"
}
]
}
```
I want to check whether the status of the step with name = "KLMN" is completed or not. How can I do that in Python?
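For reference, the lookup itself can be sketched like this (an editor's illustration, not from the original thread; `document` stands in for the JSON shown above):

```python
import json

# Trimmed-down version of the JSON shown above.
document = '''{"Steps": [
  {"Status": {"State": "PENDING"},   "Name": "ABCD"},
  {"Status": {"State": "COMPLETED"}, "Name": "KLMN"}
]}'''

obj = json.loads(document)
state = next(step["Status"]["State"]
             for step in obj["Steps"] if step["Name"] == "KLMN")
print(state)  # COMPLETED
```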
>
> python -c 'import json,sys;obj=json.load(sys.stdin);print
> obj["Steps"]....'
>
>
>
How should I complete the `print` expression above so that it prints `COMPLETED`? | 2016/07/29 | [
"https://Stackoverflow.com/questions/38665823",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2514978/"
] | Try this regex, as demonstrated on [regex101](https://regex101.com/r/eK8lN1/1):
`(\n[0-9]{3})[ 0-9]{4}([^\n]+(?:\n\s+[^\n]+)*)`
The capture group `([^\n]+(?:\n\s+[^\n]+)*)` matches
* any non-linebreaks: `[^\n]+`
* then any number of extra lines: `(?:\n\s+[^\n]+)*` | Add a negative lookahead to the end to make sure the line break is followed by 3 digits. There are also a few things that could be done to shorten the regular expression.
```
(\n\d{3})[ \d]{4}((?:(?!\n\d{3}).)*)
``` |
19,724,083 | I was working to install OpenCV to work with Python on my Mac
<http://www.jeffreythompson.org/blog/2013/08/22/update-installing-opencv-on-mac-mountain-lion/>
I got everything installed without problems, but by the end I got this weird problem mentioning that the OpenCV library is not linked
```
$ pkg-config --cflags opencv
-I/usr/local/include/opencv -I/usr/local/include
$ brew install opencv
Warning: opencv-2.4.6.1 already installed, it's just not linked
$ python
Python 2.7.3 (v2.7.3:70274d53c1dd, Apr 9 2012, 20:52:43)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named cv
>>> import cv
``` | 2013/11/01 | [
"https://Stackoverflow.com/questions/19724083",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2939173/"
] | For what it's worth, I think you just missed a step in [my instructions](http://www.jeffreythompson.org/blog/2013/08/22/update-installing-opencv-on-mac-mountain-lion/). You need to add the following line to your `.bash_profile` file in your Home folder (or create it if the file doesn't exist):
```
export PYTHONPATH=/usr/local/lib/python2.7/site-packages:$PYTHONPATH
```
Be sure to restart Terminal before starting Python. | see if this works for you....
<http://samkhan13.wordpress.com/2012/06/18/using-opencv-with-python-on-your-mac-os-x/> |
34,877,445 | Is there a graceful way to get names of named `%s`-like variables of string object?
Like this:
```
string = '%(a)s and %(b)s are friends.'
names = get_names(string) # ['a', 'b']
```
Known alternative ways:
1. Parse names using regular expression, e.g.:
```
import re
names = re.findall(r'%\((\w+)\)[sdf]', string) # ['a', 'b']
```
2. Use `.format()`-compatible formatting and `Formatter().parse(string)`.
[How to get the variable names from the string for the format() method](https://stackoverflow.com/questions/22830226/how-to-get-the-variable-names-from-the-string-for-the-format-method)
But what about a string with %s-like variables?
**PS**: python 2.7 | 2016/01/19 | [
"https://Stackoverflow.com/questions/34877445",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5810218/"
] | In order to answer this question, you need to define "graceful". Several factors might be worth considering:
1. Is the code short, easy to remember, easy to write, and self explanatory?
2. Does it reuse the underlying logic (i.e. follow the DRY principle)?
3. Does it implement exactly the same parsing logic?
Unfortunately, the "%" formatting for strings is implemented in the C routine "PyString\_Format" in stringobject.c. This routine does not provide an API or hooks that allow access to a parsed form of the format string. It simply builds up the result as it is parsing the format string. Thus any solution will need to duplicate the parsing logic from the C routine. This means DRY is not followed and exposes any solution to breaking if a change is made to the formatting specification.
The parsing algorithm in PyString\_Format includes a fair bit of complexity, including handling nested parentheses in key names, so cannot be fully implemented using regular expression nor using string "split()". Short of copying the C code from PyString\_Format and converting it to Python code, I do not see any remotely easy way of correctly extracting the names of the mapping keys under **all** circumstances.
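For instance, a mapping key may itself contain balanced parentheses, which the '%' parser handles by counting nesting; a simple regex that stops at the first ')' cannot reproduce this (editor's illustration, with a made-up key name):

```python
# The key 'outer(inner)' contains parentheses; '%' parsing tracks the
# nesting, so the whole name is used as the mapping key.
result = '%(outer(inner))s' % {'outer(inner)': 'matched'}
print(result)  # matched
```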
So my conclusion is that there is no "graceful" way to obtain the names of the mapping keys for a Python 2.7 "%" format string.
The following code uses a regular expression to provide a partial solution that covers most common usage:
```
import re
class StringFormattingParser(object):
__matcher = re.compile(r'(?<!%)%\(([^)]+)\)[-# +0-9.hlL]*[diouxXeEfFgGcrs]')
@classmethod
def getKeyNames(klass, formatString):
return klass.__matcher.findall(formatString)
# Demonstration of use with some sample format strings
for value in [
'%(a)s and %(b)s are friends.',
'%%(nomatch)i',
'%%',
'Another %(matched)+4.5f%d%% example',
'(%(should_match(but does not))s',
]:
print StringFormattingParser.getKeyNames(value)
# Note the following prints out "really does match"!
print '%(should_match(but does not))s' % {'should_match(but does not)': 'really does match'}
```
P.S. DRY = Don't Repeat Yourself (<https://en.wikipedia.org/wiki/Don%27t_repeat_yourself>) | You could also do this:
```
[y[0] for y in [x.split(')') for x in s.split('%(')] if len(y)>1]
``` |
34,877,445 | Is there a graceful way to get names of named `%s`-like variables of string object?
Like this:
```
string = '%(a)s and %(b)s are friends.'
names = get_names(string) # ['a', 'b']
```
Known alternative ways:
1. Parse names using regular expression, e.g.:
```
import re
names = re.findall(r'%\((\w)\)[sdf]', string) # ['a', 'b']
```
2. Use `.format()`-compatible formatting and `Formatter().parse(string)`.
[How to get the variable names from the string for the format() method](https://stackoverflow.com/questions/22830226/how-to-get-the-variable-names-from-the-string-for-the-format-method)
But what about a string with %s-like variables?
**PS**: python 2.7 | 2016/01/19 | [
"https://Stackoverflow.com/questions/34877445",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5810218/"
] | In order to answer this question, you need to define "graceful". Several factors might be worth considering:
1. Is the code short, easy to remember, easy to write, and self explanatory?
2. Does it reuse the underlying logic (i.e. follow the DRY principle)?
3. Does it implement exactly the same parsing logic?
Unfortunately, the "%" formatting for strings is implemented in the C routine "PyString\_Format" in stringobject.c. This routine does not provide an API or hooks that allow access to a parsed form of the format string. It simply builds up the result as it is parsing the format string. Thus any solution will need to duplicate the parsing logic from the C routine. This means DRY is not followed and exposes any solution to breaking if a change is made to the formatting specification.
The parsing algorithm in PyString\_Format includes a fair bit of complexity, including handling nested parentheses in key names, so cannot be fully implemented using regular expression nor using string "split()". Short of copying the C code from PyString\_Format and converting it to Python code, I do not see any remotely easy way of correctly extracting the names of the mapping keys under **all** circumstances.
So my conclusion is that there is no "graceful" way to obtain the names of the mapping keys for a Python 2.7 "%" format string.
The following code uses a regular expression to provide a partial solution that covers most common usage:
```
import re
class StringFormattingParser(object):
__matcher = re.compile(r'(?<!%)%\(([^)]+)\)[-# +0-9.hlL]*[diouxXeEfFgGcrs]')
@classmethod
def getKeyNames(klass, formatString):
return klass.__matcher.findall(formatString)
# Demonstration of use with some sample format strings
for value in [
'%(a)s and %(b)s are friends.',
'%%(nomatch)i',
'%%',
'Another %(matched)+4.5f%d%% example',
'(%(should_match(but does not))s',
]:
print StringFormattingParser.getKeyNames(value)
# Note the following prints out "really does match"!
print '%(should_match(but does not))s' % {'should_match(but does not)': 'really does match'}
```
P.S. DRY = Don't Repeat Yourself (<https://en.wikipedia.org/wiki/Don%27t_repeat_yourself>) | Don't know if this qualifies as graceful in your book, but here's a short function that parses out the names. No error checking, so it will fail for malformed format strings.
```
def get_names(s):
i = s.find('%')
while 0 <= i < len(s) - 3:
if s[i+1] == '(':
yield(s[i+2:s.find(')', i)])
i = s.find('%', i+2)
string = 'abd %(one) %%(two) 99 %%%(three)'
list(get_names(string))  #=> ['one', 'three']
``` |
34,877,445 | Is there a graceful way to get names of named `%s`-like variables of string object?
Like this:
```
string = '%(a)s and %(b)s are friends.'
names = get_names(string) # ['a', 'b']
```
Known alternative ways:
1. Parse names using regular expression, e.g.:
```
import re
names = re.findall(r'%\((\w)\)[sdf]', string) # ['a', 'b']
```
2. Use `.format()`-compatible formatting and `Formatter().parse(string)`.
[How to get the variable names from the string for the format() method](https://stackoverflow.com/questions/22830226/how-to-get-the-variable-names-from-the-string-for-the-format-method)
But what about a string with %s-like variables?
**PS**: python 2.7 | 2016/01/19 | [
"https://Stackoverflow.com/questions/34877445",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5810218/"
] | In order to answer this question, you need to define "graceful". Several factors might be worth considering:
1. Is the code short, easy to remember, easy to write, and self explanatory?
2. Does it reuse the underlying logic (i.e. follow the DRY principle)?
3. Does it implement exactly the same parsing logic?
Unfortunately, the "%" formatting for strings is implemented in the C routine "PyString\_Format" in stringobject.c. This routine does not provide an API or hooks that allow access to a parsed form of the format string. It simply builds up the result as it is parsing the format string. Thus any solution will need to duplicate the parsing logic from the C routine. This means DRY is not followed and exposes any solution to breaking if a change is made to the formatting specification.
The parsing algorithm in PyString\_Format includes a fair bit of complexity, including handling nested parentheses in key names, so cannot be fully implemented using regular expression nor using string "split()". Short of copying the C code from PyString\_Format and converting it to Python code, I do not see any remotely easy way of correctly extracting the names of the mapping keys under **all** circumstances.
So my conclusion is that there is no "graceful" way to obtain the names of the mapping keys for a Python 2.7 "%" format string.
The following code uses a regular expression to provide a partial solution that covers most common usage:
```
import re
class StringFormattingParser(object):
__matcher = re.compile(r'(?<!%)%\(([^)]+)\)[-# +0-9.hlL]*[diouxXeEfFgGcrs]')
@classmethod
def getKeyNames(klass, formatString):
return klass.__matcher.findall(formatString)
# Demonstration of use with some sample format strings
for value in [
'%(a)s and %(b)s are friends.',
'%%(nomatch)i',
'%%',
'Another %(matched)+4.5f%d%% example',
'(%(should_match(but does not))s',
]:
print StringFormattingParser.getKeyNames(value)
# Note the following prints out "really does match"!
print '%(should_match(but does not))s' % {'should_match(but does not)': 'really does match'}
```
P.S. DRY = Don't Repeat Yourself (<https://en.wikipedia.org/wiki/Don%27t_repeat_yourself>) | Also, you can reduce this `%` task to a `Formatter`-based solution.
```
>>> import re
>>> from string import Formatter
>>>
>>> string = '%(a)s and %(b)s are friends.'
>>>
>>> string = re.sub('((?<!%)%(\((\w)\)s))', '{\g<3>}', string)
>>>
>>> tuple(fn[1] for fn in Formatter().parse(string) if fn[1] is not None)
('a', 'b')
>>>
```
In this case you can use both variants of formatting, I suppose.
The regular expression in it depends on what you want.
```
>>> re.sub('((?<!%)%(\((\w)\)s))', '{\g<3>}', '%(a)s and %(b)s are %(c)s friends.')
'{a} and {b} are {c} friends.'
>>> re.sub('((?<!%)%(\((\w)\)s))', '{\g<3>}', '%(a)s and %(b)s are %%(c)s friends.')
'{a} and {b} are %%(c)s friends.'
>>> re.sub('((?<!%)%(\((\w)\)s))', '{\g<3>}', '%(a)s and %(b)s are %%%(c)s friends.')
'{a} and {b} are %%%(c)s friends.'
``` |
70,599,392 | This is probably a naive question from a programming languages standpoint (the answer must be no).
Was there ever a version of python that had case-insensitive dicts by default?
Ie, if I queried `dict['value']` it would be the same as `dict['VALUE']`?
I just ask because I am working with some code written in Python 3.7, but I am working in Python 3.8.5. I could either just rewrite the code, or try a different version of Python - not sure which will take longer.
Perhaps this is also a function of pandas, which went from pandas 1.0.4 to '1.1.3'. I will check more on this. | 2022/01/05 | [
"https://Stackoverflow.com/questions/70599392",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2795733/"
] | I don't know of any previous version of Python that did this, but you could probably make your own dictionary type that does it pretty easily.
```
class UncasedDict(dict):
def __getitem__(self, key):
if isinstance(key, str):
key = key.lower()
return super().__getitem__(key)
def __setitem__(self, key, value):
if isinstance(key, str):
key = key.lower()
return super().__setitem__(key, value)
d = UncasedDict()
d["hello"] = 1
print(f'{d["hello"]=}')
print(f'{d["helLo"]=}')
d["GOODBYE"] = 2
print(f'{d["GOODBYE"]=}')
print(f'{d["GoOdByE"]=}')
# d["hello"]=1
# d["helLo"]=1
# d["GOODBYE"]=2
# d["GoOdByE"]=2
```
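The same interception pattern extends to the other lookup paths; a minimal, hypothetical sketch covering membership tests and deletion as well (the class name and `_norm` helper here are illustrative, not part of the answer above):

```python
class UncasedDict2(dict):
    """Illustrative variant: normalizes string keys on every access path."""

    @staticmethod
    def _norm(key):
        return key.lower() if isinstance(key, str) else key

    def __getitem__(self, key):
        return super().__getitem__(self._norm(key))

    def __setitem__(self, key, value):
        super().__setitem__(self._norm(key), value)

    def __contains__(self, key):
        return super().__contains__(self._norm(key))

    def __delitem__(self, key):
        super().__delitem__(self._norm(key))


d = UncasedDict2()
d["Hello"] = 1
assert "HELLO" in d      # membership is case-insensitive too
del d["heLLo"]           # deletion accepts any casing
assert "hello" not in d
```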
The idea is to just intercept `key` when you get/set the dictionary values and replace it with `key.lower()`. You would want to do this for each capability of `dict`s that you use, e.g., `__delitem__()`, `__contains__()`, etc. | Dictionaries are mutable unordered collections (they do not record element position or order of insertion) of key-value pairs. Keys within the dictionary must be unique and must be hashable. That includes types like numbers, strings and tuples. Lists and dicts can not be used as keys since they are mutable. Dictionaries in other languages are also called hash tables or associative arrays.
Numeric types used for keys obey the normal rules for numeric comparison: if two numbers compare equal (such as 1 and 1.0) then they can be used interchangeably to index the same dictionary entry.
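A quick sketch demonstrating that numeric-key equivalence:

```python
d = {1: "int key"}
d[1.0] = "float key"   # 1 == 1.0 and hash(1) == hash(1.0), so this overwrites d[1]
assert d == {1: "float key"}
assert len(d) == 1
```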
Reference: [dict](https://python-reference.readthedocs.io/en/latest/docs/dict/) |
51,408,907 | I have a multi-level groupby that returns the count of grouped rows per grouping from my dataframe. It displays it in a new column without a label. I am trying to filter for counts NOT equal to 6. I tried creating an index of True/False for that, but I do not know how to get back the results from the index. I also tried filter and lambda combinations without success.
Here is code, where person, WL (wavelength), file and threshold are columns in my dataframe (df\_new).
```
df_new.groupby([df_new['Person'], df_new['WL'], df_new['File'],
df_new['Threshold']])['RevNum'].count()
```
I get back a list of the counts, however, that's as far as I can get. I'm not able to figure out how to see only the records that are NOT equal to 6.
For example, towards the bottom of the results there is this entry:
```
656 TRW-2017-04-25_60_584 0 5
```
A larger example of results:
```
Person WL File Threshold
AEM 440 AEM-2018-05-23_11_440 0 6
1 6
AEM-2018-05-23_50_440 0 6
1 6
452 AEM-2018-05-23_11_440 0 6
1 6
AEM-2018-05-23_50_440 0 6
1 6
464 AEM-2018-05-23_11_440 0 6
1 6
AEM-2018-05-23_50_440 0 6
1 6
476 AEM-2018-05-23_11_440 0 6
1 6
AEM-2018-05-23_50_440 0 6
1 6
488 AEM-2018-05-23_11_440 0 6
1 6
AEM-2018-05-23_50_440 0 6
1 6
AGC 440 AGC-2018-05-25_12_440 0 6
1 6
AGC-2018-05-25_50_440 0 6
1 6
452 AGC-2018-05-25_12_440 0 6
1 6
AGC-2018-05-25_50_440 0 6
1 6
464 AGC-2018-05-25_12_440 0 6
1 6
..
TRW 620 TRW-2017-04-08_60_572 0 6
1 6
632 TRW-2017-04-25_60_584 0 6
1 6
644 TRW-2017-04-08_60_572 0 6
1 6
656 TRW-2017-04-25_60_584 0 5
1 6
TRW-2017-04-25_60_656 0 6
1 6
```
When I change my code to:
```
df_counts = df_new.groupby([df_new['Person'], df_new['WL'], df_new['File'],
df_new['Threshold']])['RevNum'].count()
```
It stores it as a series and not a dataframe, and I cannot access the last column with the values (the count results of my groupby).
When I try:
```
df_counts_grouped = df_new.groupby([df_new['Person'], df_new['WL'],
df_new['File'], df_new['Threshold']])['RevNum'].count()
df_counts_grouped.filter(lambda x: x['B'].max() != 6)
```
I tried .max, .min, .count etc.
It says that 'function' object is not iterable.
I believe that a series is not iterable?
Any help filtering my groupby results is appreciated.
If I could get the results of the groupby into a new dataframe and rename the resulting "count" column, I could access it. Not sure how to send the results of my groupby with count to a new dataframe. Alternatively, I am not sure how to use the results to only select the appropriate rows from the first dataframe, since it's a count of many of the rows in the original dataframe.
The dataframe looks like this to begin with, before doing any groupby stuff.
```
File Threshold StepSize RevNum WL RevPos BkgdLt Person Date AbRevPos ExpNum EarlyEnd
48 AEM-2018-05-23_11_440 1 1.50 7.0 464 -2.07 11 AEM 2018-05-23 2.07 Two NaN
49 AEM-2018-05-23_11_440 1 0.82 8.0 464 -3.57 11 AEM 2018-05-23 3.57 Two NaN
50 AEM-2018-05-23_11_440 1 1.50 7.0 488 -2.58 11 AEM 2018-05-23 2.58 Two NaN
54 AEM-2018-05-23_11_440 1 0.82 8.0 488 -5.58 11 AEM 2018-05-23 5.58 Two NaN
55 AEM-2018-05-23_11_440 1 1.50 7.0 440 -3.00 11 AEM 2018-05-23 3.00 Two NaN
<class 'pandas.core.frame.DataFrame'>
Int64Index: 3286 entries, 48 to 7839
Data columns (total 12 columns):
File 3286 non-null object
Threshold 3286 non-null int64
StepSize 3286 non-null float64
RevNum 3286 non-null float64
WL 3286 non-null int64
RevPos 3286 non-null float64
BkgdLt 3286 non-null int32
Person 3286 non-null object
Date 3286 non-null datetime64[ns]
AbRevPos 3286 non-null float64
ExpNum 3286 non-null object
EarlyEnd 0 non-null float64
dtypes: datetime64[ns](1), float64(5), int32(1), int64(2), object(3)
memory usage: 320.9+ KB
```
This code:
```
df_counts_grouped = df_new.groupby([df_new['Person'], df_new['WL'], df_new['File'], df_new['Threshold']])['RevNum'].count()
df_counts_grouped.head(10)
```
Produces this output:
```
Person WL File Threshold
AEM 440 AEM-2018-05-23_11_440 0 6
1 6
AEM-2018-05-23_50_440 0 6
1 6
452 AEM-2018-05-23_11_440 0 6
1 6
AEM-2018-05-23_50_440 0 6
1 6
464 AEM-2018-05-23_11_440 0 6
1 6
Name: RevNum, dtype: int64
```
I have found the beginning of an answer to my question, which lies in syntax. It lies in the difference between Pandas Series and Pandas DataFrames!
```
df_new.groupby('Person')['WL'].count() # produces Pandas Series
df_new.groupby('Person')[['WL']].count() # Produces Pandas DataFrame
```
Found at: <https://shanelynn.ie/summarising-aggregation-and-grouping-data-in-python-pandas/> | 2018/07/18 | [
"https://Stackoverflow.com/questions/51408907",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9932738/"
] | I have created a quick minimal complete and verifiable example for you:
```
In [1]: import pandas as pd
In [2]: df = pd.DataFrame({'Letter':['a', 'b']*2, 'Number':[1]*3+[2]})
In [3]: df
Out[3]:
Letter Number
0 a 1
1 b 1
2 a 1
3 b 2
In [4]: df.groupby(['Letter', 'Number'])['Number'].count()
Out[4]:
Letter Number
a 1 2
b 1 1
2 1
Name: Number, dtype: int64
In [5]: grouped_counts = df.groupby(['Letter', 'Number'])['Number'].count()
In [6]: type(grouped_counts)
Out[6]: pandas.core.series.Series
```
As you can see, the maximum number of counts is 2, so let's filter for all groups that gave counts lower than 2:
```
In [7]: grouped_counts.loc[grouped_counts<2]
Out[7]:
Letter Number
b 1 1
2 1
``` | I figured it out! It was a super simple syntax issue of changing from a Series to a DataFrame!
```
df_new.groupby('Person')['WL'].count() # produces Pandas Series
df_new.groupby('Person')[['WL']].count() # Produces Pandas DataFrame
```
Found at: <https://shanelynn.ie/summarising-aggregation-and-grouping-data-in-python-pandas/>
My code now looks like this and I can get back only the entries where the Reversal Number (RevNum) is not 6.
```
df_counts_grouped = df_new.groupby([df_new['Person'], df_new['WL'], df_new['File'], df_new['Threshold']])[['RevNum']].count()
df_counts_grouped[df_counts_grouped['RevNum'] != 6]
```
The simple change from single brackets around 'RevNum':
```
df_counts_grouped = df_new.groupby([df_new['Person'], df_new['WL'], df_new['File'], df_new['Threshold']])['RevNum'].count()
```
To double brackets around my column label, 'RevNum':
```
df_counts_grouped = df_new.groupby([df_new['Person'], df_new['WL'], df_new['File'], df_new['Threshold']])[['RevNum']].count()
```
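The distinction can be checked on a tiny frame (hypothetical column names, not the original data):

```python
import pandas as pd

df = pd.DataFrame({"Person": ["a", "a", "b"], "WL": [1, 2, 3]})
s = df.groupby("Person")["WL"].count()    # single brackets -> Series
d = df.groupby("Person")[["WL"]].count()  # double brackets -> DataFrame

assert type(s).__name__ == "Series"
assert type(d).__name__ == "DataFrame"
# The column label survives in the DataFrame, so filtering by it works:
assert d[d["WL"] != 1].index.tolist() == ["a"]
```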
Fixed everything! |
64,234,972 | I'm working in python using numpy (could be a pandas series too) and am trying to make the following calculation:
Lets say I have an array corresponding to points on the x axis:
```
2, 9, 5, 6, 55, 8
```
For each element in this array I would like to get the distance to the closest element so the output would look like the following:
```
3, 1, 1, 1, 46, 1
```
I am trying to find a solution that can scale to 2D (distance to nearest XY point) and ideally would avoid a for loop. Is that possible? | 2020/10/06 | [
"https://Stackoverflow.com/questions/64234972",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7754184/"
] | **Approach 1**
You can use broadcasting in order to get matrix of distances:
```
>>> data = np.array([2,9,5,6,55,8])
>>> dst_matrix = data - data[:, None]
>>> dst_matrix
array([[ 0, 7, 3, 4, 53, 6],
[ -7, 0, -4, -3, 46, -1],
[ -3, 4, 0, 1, 50, 3],
[ -4, 3, -1, 0, 49, 2],
[-53, -46, -50, -49, 0, -47],
[ -6, 1, -3, -2, 47, 0]])
```
Then we can eliminate diagonal as proposed [in this post](https://stackoverflow.com/questions/46736258/deleting-diagonal-elements-of-a-numpy-array):
```
dst_matrix = dst_matrix[~np.eye(dst_matrix.shape[0],dtype=bool)].reshape(dst_matrix.shape[0],-1)
>>> dst_matrix
array([[ 7, 3, 4, 53, 6],
[ -7, -4, -3, 46, -1],
[ -3, 4, 1, 50, 3],
[ -4, 3, -1, 49, 2],
[-53, -46, -50, -49, -47],
[ -6, 1, -3, -2, 47]])
```
Finally, minimum items can be found:
```
>>> np.min(np.abs(dst_matrix), axis=1)
array([ 3, 1, 1, 1, 46, 1])
```
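Approach 1 above is written for 1D; a hedged sketch of the same broadcasting idea in 2D (illustrative points, and the O(n²) memory cost still applies). Masking the diagonal with `inf` avoids the reshape step:

```python
import numpy as np

pts = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
diff = pts[:, None, :] - pts[None, :, :]   # pairwise differences, shape (n, n, 2)
d = np.linalg.norm(diff, axis=-1)          # pairwise distances, shape (n, n)
np.fill_diagonal(d, np.inf)                # mask self-distances instead of deleting them
nearest = d.min(axis=1)                    # [1.0, 1.0, sqrt(41)]
```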
**Approach 2**
If you're looking for a time- and memory-efficient solution, the best option is `scipy.spatial.cKDTree`, which packs points (of any dimension) into a data structure optimized for querying closest points. It can also be extended to 2D or 3D.
```
import scipy.spatial
data = np.array([2,9,5,6,55,8])
ckdtree = scipy.spatial.cKDTree(data[:,None])
distances, idx = ckdtree.query(data[:,None], k=2)
output = distances[:,1]  # distances to nearest non-coincident points
```
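Since the `k=2` trick assumes no coincident points, one hedged way to honor the warning below about duplicates is `np.unique` (assumption: losing the original row order and the duplicate entries themselves is acceptable):

```python
import numpy as np
import scipy.spatial

data = np.array([2, 9, 5, 6, 55, 8, 8])      # note the duplicated 8
uniq = np.unique(data)                        # sorted, duplicates removed
tree = scipy.spatial.cKDTree(uniq[:, None])
distances, _ = tree.query(uniq[:, None], k=2)
nearest = distances[:, 1]                     # distance to the nearest distinct point
```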
For each point, the two closest points must be queried here because the first result is the point itself (distance zero). This is the only solution among all the proposed answers that doesn't take ages (the average performance is 4 secs for 1M points). **Warning:** you need to filter out duplicated points before applying this method. | You can do some list comprehension on a pandas series:
```
s = pd.Series([2,9,5,6,55,8])
s.apply(lambda x: min([abs(x - s[y]) for y in s.index if s[y] != x]))
Out[1]:
0 3
1 1
2 1
3 1
4 46
5 1
```
Then you can just add `.to_list()` or `.to_numpy()` to the end to get rid of the series index:
```
s.apply(lambda x: min([abs(x - s[y]) for y in s.index if s[y] != x])).to_numpy()
array([ 3, 1, 1, 1, 46, 1], dtype=int64)
``` |
64,234,972 | I'm working in python using numpy (could be a pandas series too) and am trying to make the following calculation:
Lets say I have an array corresponding to points on the x axis:
```
2, 9, 5, 6, 55, 8
```
For each element in this array I would like to get the distance to the closest element so the output would look like the following:
```
3, 1, 1, 1, 46, 1
```
I am trying to find a solution that can scale to 2D (distance to nearest XY point) and ideally would avoid a for loop. Is that possible? | 2020/10/06 | [
"https://Stackoverflow.com/questions/64234972",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7754184/"
] | There seems to be a theme with O(N^2) solutions here. For 1D, it's quite simple to get O(N log N):
```
x = np.array([2, 9, 5, 6, 55, 8])
i = np.argsort(x)
dist = np.diff(x[i])
min_dist = np.r_[dist[0], np.minimum(dist[1:], dist[:-1]), dist[-1]]
min_dist = min_dist[np.argsort(i)]
```
This clearly won't scale well to multiple dimensions, so use [`scipy.spatial.KDTree`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.KDTree.html) instead. Assuming your data is N-dimensional and has shape `(M, N)`, you can do
```
k = KDTree(data)
dist = k.query(data, k=2)[0][:, -1]
```
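A hedged usage sketch on the question's data (the 1D array must be reshaped to a column so `KDTree` sees shape `(M, 1)`; taking the second of `k=2` hits skips the point itself):

```python
import numpy as np
from scipy.spatial import KDTree

x = np.array([2, 9, 5, 6, 55, 8], dtype=float)
k = KDTree(x[:, None])                     # KDTree expects shape (M, N)
dist = k.query(x[:, None], k=2)[0][:, -1]  # distance to nearest other point
# dist corresponds elementwise to [3, 1, 1, 1, 46, 1]
```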
Scipy has a Cython implementation of `KDTree`, [`cKDTree`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.cKDTree.html). Sklearn has a [`sklearn.neighbors.KDTree`](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KDTree.html) with a similar interface as well. | **Approach 1**
You can use broadcasting in order to get matrix of distances:
```
>>> data = np.array([2,9,5,6,55,8])
>>> dst_matrix = data - data[:, None]
>>> dst_matrix
array([[ 0, 7, 3, 4, 53, 6],
[ -7, 0, -4, -3, 46, -1],
[ -3, 4, 0, 1, 50, 3],
[ -4, 3, -1, 0, 49, 2],
[-53, -46, -50, -49, 0, -47],
[ -6, 1, -3, -2, 47, 0]])
```
Then we can eliminate diagonal as proposed [in this post](https://stackoverflow.com/questions/46736258/deleting-diagonal-elements-of-a-numpy-array):
```
dst_matrix = dst_matrix[~np.eye(dst_matrix.shape[0],dtype=bool)].reshape(dst_matrix.shape[0],-1)
>>> dst_matrix
array([[ 7, 3, 4, 53, 6],
[ -7, -4, -3, 46, -1],
[ -3, 4, 1, 50, 3],
[ -4, 3, -1, 49, 2],
[-53, -46, -50, -49, -47],
[ -6, 1, -3, -2, 47]])
```
Finally, minimum items can be found:
```
>>> np.min(np.abs(dst_matrix), axis=1)
array([ 3, 1, 1, 1, 46, 1])
```
**Approach 2**
If you're looking for a time- and memory-efficient solution, the best option is `scipy.spatial.cKDTree`, which packs points (of any dimension) into a data structure optimized for querying closest points. It can also be extended to 2D or 3D.
```
import scipy.spatial
data = np.array([2,9,5,6,55,8])
ckdtree = scipy.spatial.cKDTree(data[:,None])
distances, idx = ckdtree.query(data[:,None], k=2)
output = distances[:,1]  # distances to nearest non-coincident points
```
For each point, the two closest points must be queried here because the first result is the point itself (distance zero). This is the only solution among all the proposed answers that doesn't take ages (the average performance is 4 secs for 1M points). **Warning:** you need to filter out duplicated points before applying this method. |