50,305,112
I am trying to install pandas on my company computer. I tried ``` pip install pandas ``` but the operation retries and then times out. I then downloaded the package pandas-0.22.0-cp27-cp27m-win_amd64.whl and installed it with: ``` pip install pandas-0.22.0-cp27-cp27m-win_amd64 ``` But I get the following error: > > > ``` > Retrying (Retry(total=4, connect=None, read=None, redirect=None, > status=None)) after connection broken by > 'ConnectTimeoutError(<pip._vendor.urllib3.connection.VerifiedHTTPSConnection > object at 0x0000000003F16320>, 'Connection to pypi.python.org timed > out. (connect timeout=15)')': /simple/pytz/ > Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by > 'ConnectTimeoutError(<pip._vendor.urllib3.connection.VerifiedHTTPSConnection > object at 0x0000000003F16C50>, 'Connection to pypi.python.org timed > out. (connect timeout=15)')': /simple/pytz/ > Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by > 'ConnectTimeoutError(<pip._vendor.urllib3.connection.VerifiedHTTPSConnection > object at 0x0000000003F16C18>, 'Connection to pypi.python.org timed > out. (connect timeout=15)')': /simple/pytz/ > Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by > 'ConnectTimeoutError(<pip._vendor.urllib3.connection.VerifiedHTTPSConnection > object at 0x0000000003F16780>, 'Connection to pypi.python.org timed > out. (connect timeout=15)')': /simple/pytz/ > Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by > 'ConnectTimeoutError(<pip._vendor.urllib3.connection.VerifiedHTTPSConnection > object at 0x0000000003F16898>, 'Connection to pypi.python.org timed > out. 
(connect timeout=15)')': /simple/pytz/ > Could not find a version that satisfies the requirement pytz>=2011k (from pandas==0.22.0) (from versions: ) > No matching distribution found for pytz>=2011k (from pandas==0.22.0) > > ``` > > I did the same with the package `pandas-0.22.0-cp27-cp27m-win_amd64.whl`. I also tried to use a proxy: ``` pip --proxy=IND\namit.kewat:xl123456@192.168.180.150:8880 install numpy ``` But I am still unable to get pandas. When I try to access the site <https://pypi.org/project/pandas/#files>, I can access it without any problem in Internet Explorer.
2018/05/12
[ "https://Stackoverflow.com/questions/50305112", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4570833/" ]
This works for me: ``` pip --default-timeout=1000 install pandas ```
I fixed this issue on my server with the following command, because increasing the timeout did not help me: ``` sudo ip link set eth0 mtu 1450 ``` In my case the problem was the network and the DDoS protection on my Ubuntu 20 server. I hope it'll be helpful for someone. More about MTU here: <https://ru.wikipedia.org/wiki/Maximum_segment_size>
`pip` has a default timeout of `15 sec` ([reference guide](https://pip.pypa.io/en/stable/cli/pip/)). `pandas` is a relatively big download, at about 10 MB, and its dependency `numpy`, at about 20 MB, may also need to be fetched (if it is not installed already). In addition, your network connection may be slow. Therefore, give `pip` more time by, for example, setting the timeout to `1000 sec`: ``` pip --default-timeout=1000 install pandas ``` as suggested by @Pouya Khalilzad.
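If you don't want to pass the flag on every install, recent pip versions (10 and later) can persist the setting in pip's configuration file instead (a sketch; `1000` is just an example value, pick one that suits your network):

```shell
# Store a 1000-second default timeout in pip's user config
# (pip.ini on Windows, pip.conf on Linux/macOS)
pip config set global.timeout 1000

# Confirm what pip will use from now on
pip config list
```

After this, a plain `pip install pandas` uses the stored timeout.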
In my case, my network was configured to use IPv6 by default, so I changed it to work with IPv4 only. You can do that in the Network Connections section of the Control Panel: `'Control Panel\All Control Panel Items\Network Connections'` [![enter image description here](https://i.stack.imgur.com/agR8k.png)](https://i.stack.imgur.com/agR8k.png) Then disable the IPv6 option: [![enter image description here](https://i.stack.imgur.com/CN9fw.png)](https://i.stack.imgur.com/CN9fw.png)
37,015,123
I have a user defined dictionary (sub-classing python's built-in dict object), which does not allow modifying the dict directly: ``` class customDict(dict): """ This dict does not allow the direct modification of its entries(e.g., d['a'] = 5 or del d['a']) """ def __init__(self, *args, **kwargs): self.update(*args, **kwargs) def __setitem__(self,key,value): raise Exception('You cannot directly modify this dictionary. Use set_[property_name] method instead') def __delitem__(self,key): raise Exception('You cannot directly modify this dictionary. Use set_[property_name] method instead') ``` My problem is that I am not able to deep copy this dictionary using copy.deepcopy. Here's an example: ``` d1 = customDict({'a':1,'b':2,'c':3}) print d1 d2 = deepcopy(d1) print d2 ``` where it throws the exception I've defined myself for setitem: ``` Exception: You cannot directly modify this dictionary. Use set_[property_name] method instead ``` I tried overwriting deepcopy method as follows as suggested [here](https://stackoverflow.com/questions/1500718/what-is-the-right-way-to-override-the-copy-deepcopy-operations-on-an-object-in-p): ``` def __deepcopy__(self, memo): cls = self.__class__ result = cls.__new__(cls) memo[id(self)] = result for k, v in self.__dict__.items(): setattr(result, k, deepcopy(v, memo)) return result ``` This doesn't throw any errors but it returns an empty dictionary: ``` d1 = customDict({'a':1,'b':2,'c':3}) print d1 d2 = deepcopy(d1) print d2 {'a': 1, 'c': 3, 'b': 2} {} ``` Any ideas how to fix this?
2016/05/03
[ "https://Stackoverflow.com/questions/37015123", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3076813/" ]
Your `deepcopy` implementation does not work because the values of a `dict` are not stored in `__dict__`; `dict` is a special class. You can make it work by calling `__init__` with a deep copy of the dict. ``` def __deepcopy__(self, memo): def _deepcopy_dict(x, memo): y = {} memo[id(x)] = y for key, value in x.iteritems(): y[deepcopy(key, memo)] = deepcopy(value, memo) return y cls = self.__class__ result = cls.__new__(cls) result.__init__(_deepcopy_dict(self, memo)) memo[id(self)] = result for k, v in self.__dict__.items(): setattr(result, k, deepcopy(v, memo)) return result ``` This program ``` d1 = customDict({'a': 2,'b': [3, 4]}) d2 = deepcopy(d1) d2['b'].append(5) print d1 print d2 ``` outputs ``` {'a': 2, 'b': [3, 4]} {'a': 2, 'b': [3, 4, 5]} ```
Something like this should work without having to change `deepcopy`: ``` d2 = customDict(copy.deepcopy(dict(d1))) ``` This casts `d1` to a plain `dict`, deep-copies it, then makes the result a `customDict` again before assigning it to `d2`.
66,469,499
I made a memory game in Python where players take turns picking two tiles in a grid to see if the revealed letters match. I used two lists for this: one to store the letters, e.g. `letters = ['A', 'A', 'B', 'B']`, and the other to record the letters correctly revealed so far in the game, e.g. `correctly_revealed = ['A', 'A', ' ', ' ']`, and then an `if letters == correctly_revealed` condition to end the game. Letters only stay revealed if both chosen tiles match. The letters do not always come in pairs, however, meaning that the remaining unrevealed letters may all be different, e.g. `letters = ['B', 'B', 'C', 'D']` and `correctly_revealed = ['B', 'B', ' ', ' ']`. So I'm not sure how to set an `if` condition to end the game if it comes to that point.
2021/03/04
[ "https://Stackoverflow.com/questions/66469499", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14026994/" ]
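The end condition asked about above can be sketched directly from the two lists (a sketch, assuming the `letters`/`correctly_revealed` lists and the `' '` blank marker from the question): the game can stop either when every tile is revealed, or when each still-hidden letter occurs only once, since then no pair can ever be matched.

```python
from collections import Counter

def game_over(letters, correctly_revealed):
    """True when all tiles are revealed, or no matchable pair remains."""
    hidden = [l for l, r in zip(letters, correctly_revealed) if r == ' ']
    if not hidden:
        return True  # everything has been revealed
    # If every hidden letter occurs only once, no pair can ever match
    return all(count == 1 for count in Counter(hidden).values())

print(game_over(['A', 'A', 'B', 'B'], ['A', 'A', ' ', ' ']))  # False: a B pair remains
print(game_over(['B', 'B', 'C', 'D'], ['B', 'B', ' ', ' ']))  # True: only singles left
```

This replaces the strict `letters == correctly_revealed` test with one that also covers the unpaired-leftovers case.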
This is indeed Red, Green, Blue, and Alpha, mapped to the 0.0 to 1.0 range, but with an additional transformation as well: These values have been converted from the sRGB colorspace to linear using the [sRGB transfer function](https://en.wikipedia.org/wiki/SRGB). (The back story here is, the [baseColorTexture](https://github.com/KhronosGroup/glTF/blob/master/specification/2.0/README.md#pbrmetallicroughnessbasecolortexture) is supposed to be stored in the sRGB colorspace and subject to hardware sRGB decoding, but the [baseColorFactor](https://github.com/KhronosGroup/glTF/blob/master/specification/2.0/README.md#pbrmetallicroughnessbasecolorfactor) has no hardware decoder and therefore is specified as linear values directly.) The simple version of this, if you have a value between 0 and 255, is to divide by 255.0 and raise that to the power 2.2. This is an approximation, but works well. So for example if your Red value was 200, you could run the following formula, shown here as JavaScript but could be adapted to any language: ``` Math.pow(200 / 255, 2.2) ``` This would give a linear red value of about `0.58597`. Note that the alpha values are not subject to the sRGB transfer function, so for them you simply divide by 255 and stop there. Some packages will do this conversion automatically. For example, in Blender if you click on the Base Color color picker, you'll see it has a "Hex" tab that shows a typical CSS-style color, and an "RGB" tab that has the numeric linear values. [![Blender color picker](https://i.stack.imgur.com/ClgrJ.png)](https://i.stack.imgur.com/ClgrJ.png) This can be used to quickly convert typical CSS colors to linear space. The VSCode [glTF Tools](https://github.com/AnalyticalGraphicsInc/gltf-vscode) (for which I'm a contributor) can also [show glTF color factors](https://twitter.com/emackey/status/1353792898370830340) as CSS values (going the other way).
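The same gamma-2.2 shortcut can be written in Python (an approximation, as noted above, not the exact piecewise sRGB transfer function; the function names are made up for illustration):

```python
def srgb_byte_to_linear(value):
    """Approximate sRGB 0-255 color channel -> linear 0.0-1.0 (gamma 2.2 shortcut)."""
    return (value / 255.0) ** 2.2

def alpha_byte_to_linear(value):
    """Alpha is not gamma-encoded, so a plain divide is enough."""
    return value / 255.0

print(round(srgb_byte_to_linear(200), 5))  # ~0.58597, matching the example above
```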
It is RGBA format, but with numbers between 0 and 1. If you want to insert a color in the format: * RGB (255, 255, 255) [=white]: divide all values by `255` and use `1` (= fully opaque) for the last value * RGBA (255, 0, 0, 255) [=fully opaque red]: divide all components by `255` Documentation can be found [here](http://here). Actually, the only difference is that you can express more color nuances, because you have more than `255` possible values per channel.
64,399,807
I am learning Python web automation using Selenium, but when I try to add an input with find_element_by_name it is not working. ``` from selenium import webdriver PATH = 'C:\Program Files (x86)\chromedriver.exe' driver = webdriver.Chrome(PATH) driver.get('https://kahoot.it') codeInput = driver.find_element_by_name('gadmeId') codeInput = 202206 ``` I have downloaded the chromedriver but it is still not working.
2020/10/17
[ "https://Stackoverflow.com/questions/64399807", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14466617/" ]
First, make sure that you spelled it "gameId" and not "gadmeId". Also import `Keys`: ``` from selenium.webdriver.common.keys import Keys ``` Then you can send the gameId: ``` codeInput = driver.find_element_by_name('gameId') codeInput.send_keys('202206') ```
Use `send_keys` to send a value to the input element: ``` codeInput.send_keys('202206') ``` Also, the name in ``` driver.find_element_by_name('gameId') ``` is supposed to be `gameId`. I would also use a wait after the `driver.get()` for page loading.
Use `send_keys` to send input to the element; otherwise you are just overwriting the variable: ``` codeInput.send_keys('202206') ``` Finding the element works fine; still, double-check the `name` attribute: it should be `gameId`, not `gadmeId`.
61,122,276
So I've been following Google's official tensorflow guide and trying to build a simple neural network using Keras. But when it comes to training the model, it does not use the entire dataset (with 60000 entries) and instead uses only 1875 entries for training. Any possible fix? ```py import tensorflow as tf from tensorflow import keras import numpy as np fashion_mnist = keras.datasets.fashion_mnist (train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data() train_images = train_images / 255.0 test_images = test_images / 255.0 class_names = ['T-shirt', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle Boot'] model = keras.Sequential([ keras.layers.Flatten(input_shape=(28, 28)), keras.layers.Dense(128, activation='relu'), keras.layers.Dense(10) ]) model.compile(optimizer='adam', loss= tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) model.fit(train_images, train_labels, epochs=10) ``` Output: ``` Epoch 1/10 1875/1875 [==============================] - 3s 2ms/step - loss: 0.3183 - accuracy: 0.8866 Epoch 2/10 1875/1875 [==============================] - 3s 2ms/step - loss: 0.3169 - accuracy: 0.8873 Epoch 3/10 1875/1875 [==============================] - 3s 2ms/step - loss: 0.3144 - accuracy: 0.8885 Epoch 4/10 1875/1875 [==============================] - 3s 2ms/step - loss: 0.3130 - accuracy: 0.8885 Epoch 5/10 1875/1875 [==============================] - 3s 2ms/step - loss: 0.3110 - accuracy: 0.8883 Epoch 6/10 1875/1875 [==============================] - 3s 2ms/step - loss: 0.3090 - accuracy: 0.8888 Epoch 7/10 1875/1875 [==============================] - 3s 2ms/step - loss: 0.3073 - accuracy: 0.8895 Epoch 8/10 1875/1875 [==============================] - 3s 2ms/step - loss: 0.3057 - accuracy: 0.8900 Epoch 9/10 1875/1875 [==============================] - 3s 2ms/step - loss: 0.3040 - accuracy: 0.8905 Epoch 10/10 1875/1875 [==============================] - 3s 2ms/step - loss: 
0.3025 - accuracy: 0.8915 <tensorflow.python.keras.callbacks.History at 0x7fbe0e5aebe0> ``` Here's the original google colab notebook where I've been working on this: <https://colab.research.google.com/drive/1NdtzXHEpiNnelcMaJeEm6zmp34JMcN38>
2020/04/09
[ "https://Stackoverflow.com/questions/61122276", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5935310/" ]
The number `1875` shown while fitting the model is not the number of training samples; it is the number of *batches*. `model.fit` includes an optional argument `batch_size`, which, according to the [documentation](https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit): > > If unspecified, `batch_size` will default to 32. > > > So what happens here is that you fit with the default batch size of 32 (since you have not specified anything different), and the total number of batches for your data is ``` 60000/32 = 1875 ```
It does not train on 1875 samples. ``` Epoch 1/10 1875/1875 [=== ``` 1875 here is the number of steps, not samples. In the `fit` method there is an argument `batch_size`, whose default value is `32`, so `1875*32=60000`. The implementation is correct. If you train with `batch_size=16`, you will see the number of steps become `3750` instead of `1875`, since `60000/16=3750`.
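The arithmetic in these answers is easy to check yourself (a small sketch; Keras rounds the step count up when the sample count is not an exact multiple of the batch size):

```python
import math

def steps_per_epoch(num_samples, batch_size):
    """Number of progress-bar steps Keras shows per epoch."""
    return math.ceil(num_samples / batch_size)

print(steps_per_epoch(60000, 32))  # 1875 (the default batch size)
print(steps_per_epoch(60000, 16))  # 3750
print(steps_per_epoch(60000, 64))  # 938 (60000/64 = 937.5, rounded up)
```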
Just use `batch_size=1` if you want the progress bar to count all 60000 samples as individual steps.
24,070,856
I have a problem with QCheckBox. I am trying to connect a boolean variable to a QCheckBox so that **when I change the boolean variable, the QCheckBox will be automatically checked or unchecked.** My question is similar to the question below, but in the opposite direction. [question: Python3 PyQt4 Creating a simple QCheckBox and changing a Boolean variable](https://stackoverflow.com/questions/12736825/python3-pyqt4-creating-a-simple-qcheckbox-and-changing-a-boolean-variable) I just copy one solution from that question here. ``` import sys from PyQt4.QtGui import * from PyQt4.QtCore import * class SelectionWindow(QMainWindow): def __init__(self, parent=None): super().__init__(parent) self.ILCheck = False ILCheckbox = QCheckBox(self) ILCheckbox.setCheckState(Qt.Unchecked) ILCheckbox.stateChanged.connect(self.ILCheckbox_changed) MainLayout = QGridLayout() MainLayout.addWidget(ILCheckbox, 0, 0, 1, 1) self.setLayout(MainLayout) def ILCheckbox_changed(self, state): self.ILCheck = (state == Qt.Checked) print(self.ILCheck) if __name__ == '__main__': app = QApplication(sys.argv) window = SelectionWindow() window.show() window.ILCheck = True sys.exit(app.exec_()) ``` In this case, once I set ILCheck to True, the QCheckBox should be checked. Any help would be appreciated! Thanks! --- Update: I am using MVC in my project; the code above is just an example of what I need. The bool value `ILCheck` will be used elsewhere, and I don't want to call `ILCheckBox` in my model. I expect that if I modify the value of `ILCheck`, `ILCheckBox` will react correctly. --- Update: Thanks for all your replies and help. All your solutions are great! What I need is more of a **Model-View** solution, so that I can separate the modeling part from the GUI part. When I want to update something, I just update the model and don't need to pay attention to what the GUI looks like. I can't set this bool property in the View class, so I can't use this solution. I am not sure MVC is suitable in PyQt.
I have a close-to-working solution below, but with one problem. ``` from PyQt4 import QtGui, QtCore, uic import sys class CellList(QtGui.QStandardItemModel): def __init__(self, cells = [], parent = None): QtGui.QStandardItemModel.__init__(self, parent) self.__cells = cells self.add(cells) def headerData(self, section, orientation, role): if role == QtCore.Qt.DisplayRole: return QtCore.QString("Cell id List") def flags(self, index): return QtCore.Qt.ItemIsUserCheckable | QtCore.Qt.ItemIsEnabled | QtCore.Qt.ItemIsSelectable def add(self, cells): for i in xrange(0, len(cells)): item = QtGui.QStandardItem('Cell %s' % cells[i][0]) if (cells[i][1]): item.setCheckState(QtCore.Qt.Checked) else: item.setCheckState(QtCore.Qt.Unchecked) item.setCheckable(True) self.appendRow(item) def update(self, cells = None): # TODO: Making this working with out clean all old Cell self.clear() if cells is None: cells = self.__cells else: print "hi" self.__cells = cells print cells self.add(cells) if __name__ == '__main__': app = QtGui.QApplication(sys.argv) listView = QtGui.QListView() listView.show() data = [[85, True], (105, True), (123, False)] model = CellList(data) listView.setModel(model) data[0][1] = False model.update(data) sys.exit(app.exec_()) ``` There is a problem with this solution that I can't solve: I think only a view can set a model, and I am not sure if I can set a model on a single `QCheckBox`.
2014/06/05
[ "https://Stackoverflow.com/questions/24070856", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2727296/" ]
[`property`](https://docs.python.org/2/library/functions.html#property) is the way to define a variable that does additional work upon assigning/accessing. Below is the code modified for that purpose. It changes `ILCheck` to a property such that it'll also update the checkbox upon assigning. Proper error checking for `.setter` is left out but most probably needed. ``` import sys from PyQt4.QtGui import * from PyQt4.QtCore import * class SelectionWindow(QWidget): def __init__(self, parent=None): super(SelectionWindow, self).__init__(parent) self._ILCheck = False self.ILCheckbox = QCheckBox(self) self.ILCheckbox.setCheckState(Qt.Unchecked) self.ILCheckbox.stateChanged.connect(self.ILCheckbox_changed) MainLayout = QGridLayout() MainLayout.addWidget(self.ILCheckbox, 0, 0, 1, 1) self.setLayout(MainLayout) def ILCheckbox_changed(self, state): self._ILCheck = (state == Qt.Checked) print(self.ILCheck) @property def ILCheck(self): return self._ILCheck @ILCheck.setter def ILCheck(self, value): self._ILCheck = value self.ILCheckbox.setChecked(value) if __name__ == '__main__': app = QApplication(sys.argv) window = SelectionWindow() window.show() window.ILCheck = True sys.exit(app.exec_()) ```
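The `property` mechanism this answer relies on can be exercised without Qt at all. Below is a minimal sketch with a plain callback standing in for `ILCheckbox.setChecked`; the class and callback names are illustrative only, not part of any library:

```python
class SelectionModel:
    """Plain-Python sketch of the property-based setter (no Qt).

    `on_change` is a hypothetical stand-in for QCheckBox.setChecked:
    it is invoked every time the property is assigned.
    """

    def __init__(self, on_change):
        self._il_check = False
        self._on_change = on_change

    @property
    def il_check(self):
        return self._il_check

    @il_check.setter
    def il_check(self, value):
        self._il_check = bool(value)
        self._on_change(self._il_check)  # mirror the value into the "widget"


# A list records every state the "checkbox" was set to.
widget_states = []
model = SelectionModel(widget_states.append)
model.il_check = True   # assigning triggers the side effect
model.il_check = False
print(widget_states)    # [True, False]
```

Replacing `widget_states.append` with the real `QCheckBox.setChecked` gives exactly the structure of the answer above.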
Just use `ILCheckbox.setCheckState(Qt.Checked)` after setting ILCheck. You don't need signals here since you can call a slot directly. If you want to use this feature more than once, you should consider writing a setter which changes the state of `self.ILCheck` and emits a signal. Edit after your clarification: * You can use the setter approach, but instead of setting the value of ILCheckbox, you should call `your_properly_named_and_defined_signal.emit()`. For more information about signal definitions see e.g. <http://www.pythoncentral.io/pysidepyqt-tutorial-creating-your-own-signals-and-slots/>. * You'll have to connect your signal to a slot which will set the checkbox correctly. This connection could be made in the `__init__()` of your controller class.
24,070,856
I have a problem with QCheckBox. I am trying to connect a boolean variable to a QCheckBox so that **when I change the boolean variable, the QCheckBox will be automatically checked or unchecked.** My question is similar to the question below, but in the opposite direction. [question: Python3 PyQt4 Creating a simple QCheckBox and changing a Boolean variable](https://stackoverflow.com/questions/12736825/python3-pyqt4-creating-a-simple-qcheckbox-and-changing-a-boolean-variable) I have copied one solution from that question here. ``` import sys from PyQt4.QtGui import * from PyQt4.QtCore import * class SelectionWindow(QMainWindow): def __init__(self, parent=None): super().__init__(parent) self.ILCheck = False ILCheckbox = QCheckBox(self) ILCheckbox.setCheckState(Qt.Unchecked) ILCheckbox.stateChanged.connect(self.ILCheckbox_changed) MainLayout = QGridLayout() MainLayout.addWidget(ILCheckbox, 0, 0, 1, 1) self.setLayout(MainLayout) def ILCheckbox_changed(self, state): self.ILCheck = (state == Qt.Checked) print(self.ILCheck) if __name__ == '__main__': app = QApplication(sys.argv) window = SelectionWindow() window.show() window.ILCheck = True sys.exit(app.exec_()) ``` In this case, once I set ILCheck to True, the QCheckBox should be checked. Any help would be appreciated! Thanks! --- Update: I am using MVC in my project; the code above is just an example showing what I need. The bool value `ILCheck` will be used in other places, and I don't want to reference `ILCheckBox` in my model. I expect that if I modify the value of `ILCheck`, `ILCheckBox` will react correctly. --- Update: Thanks for all your replies and help. All the solutions are great! What I need is more like a **Model-View** solution, so that I can separate the modeling part from the GUI part. When I want to update something, I just update the model and don't need to pay attention to what the GUI looks like. I can't set this bool property in the View class, so I can't use this solution. I am not sure MVC is suitable in PyQt.
I have a close-to-working solution below, but with one problem. ``` from PyQt4 import QtGui, QtCore, uic import sys class CellList(QtGui.QStandardItemModel): def __init__(self, cells = [], parent = None): QtGui.QStandardItemModel.__init__(self, parent) self.__cells = cells self.add(cells) def headerData(self, section, orientation, role): if role == QtCore.Qt.DisplayRole: return QtCore.QString("Cell id List") def flags(self, index): return QtCore.Qt.ItemIsUserCheckable | QtCore.Qt.ItemIsEnabled | QtCore.Qt.ItemIsSelectable def add(self, cells): for i in xrange(0, len(cells)): item = QtGui.QStandardItem('Cell %s' % cells[i][0]) if (cells[i][1]): item.setCheckState(QtCore.Qt.Checked) else: item.setCheckState(QtCore.Qt.Unchecked) item.setCheckable(True) self.appendRow(item) def update(self, cells = None): # TODO: Making this working with out clean all old Cell self.clear() if cells is None: cells = self.__cells else: print "hi" self.__cells = cells print cells self.add(cells) if __name__ == '__main__': app = QtGui.QApplication(sys.argv) listView = QtGui.QListView() listView.show() data = [[85, True], (105, True), (123, False)] model = CellList(data) listView.setModel(model) data[0][1] = False model.update(data) sys.exit(app.exec_()) ``` There is a problem with this solution that I can't solve: I think only a view can set a model, and I am not sure if I can set a model on a single `QCheckBox`.
2014/06/05
[ "https://Stackoverflow.com/questions/24070856", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2727296/" ]
As Avaris shows in his/her answer, emulating the overload of the `=` operator is a good start for solving the question. But there is still the problem of the code being added to the `SelectionWindow` class. Since we are using `Qt`, let's implement a custom `QObject` that represents our "smart" boolean variable and emits a signal when its value changes. ``` class SmartBool(QObject): valueChanged = pyqtSignal(bool) # Signal to be emitted when value changes. def __init__(self): super(SmartBool, self).__init__() # Call QObject constructor. self.__value = False # False initialized by default. @property def value(self): return self.__value @value.setter def value(self, value): if self.__value != value: self.valueChanged.emit(value) # If value changes, emit signal. self.__value = value ``` Now your code needs only a couple of changes. Replace the line: ``` self.ILCheck = False ``` with: ``` self.ILCheck = SmartBool() ``` and connect the signal and the slot by adding the line below to `__init__()` somewhere after the line above. **IMPORTANT: you are not bound to make the connection from within the `SelectionWindow` class.** ``` self.connect(self.ILCheck, SIGNAL("valueChanged(bool)"), ILCheckbox, SLOT("setChecked(bool)")) ``` For testing the result, just add: ``` window.ILCheck.value = True ``` to your "main" and you will see the checkbox checked the next time you run the example. **The full code example was added at the end for aesthetic reasons.** ``` import sys from PyQt4.QtGui import * from PyQt4.QtCore import * class SmartBool(QObject): valueChanged = pyqtSignal(bool) # Signal to be emitted when value changes. def __init__(self, value=False): super(SmartBool, self).__init__() # Call QObject constructor. self.__value = value # False initialized by default. @property def value(self): return self.__value @value.setter def value(self, value): if self.__value != value: self.valueChanged.emit(value) # If value changes, emit signal.
self.__value = value class SelectionWindow(QMainWindow): def __init__(self, parent=None): super().__init__(parent) self.ILCheck = SmartBool() # Your steroides bool variable. ILCheckbox = QCheckBox(self) self.connect(self.ILCheck, SIGNAL("valueChanged(bool)"), ILCheckbox, SLOT("setChecked(bool)")) ILCheckbox.setCheckState(Qt.Unchecked) ILCheckbox.stateChanged.connect(self.ILCheckbox_changed) MainLayout = QGridLayout() MainLayout.addWidget(ILCheckbox, 0, 0, 1, 1) self.setLayout(MainLayout) def ILCheckbox_changed(self, state): self.ILCheck = (state == Qt.Checked) print(self.ILCheck) if __name__ == '__main__': app = QApplication(sys.argv) window = SelectionWindow() window.show() window.ILCheck.value = True sys.exit(app.exec_()) ```
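The same "smart boolean" idea can be sketched without any Qt dependency, with a plain callback list standing in for `pyqtSignal`. Everything below is illustrative, not part of any library:

```python
class ObservableBool:
    """Qt-free sketch of the SmartBool idea: callbacks replace pyqtSignal."""

    def __init__(self, value=False):
        self._value = value
        self._callbacks = []  # subscribers, analogous to connected slots

    def connect(self, callback):
        self._callbacks.append(callback)

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new_value):
        if self._value != new_value:
            self._value = new_value
            for callback in self._callbacks:  # "emit" to every subscriber
                callback(new_value)


seen = []
flag = ObservableBool()
flag.connect(seen.append)   # like connecting valueChanged to setChecked
flag.value = True           # value changed -> subscribers notified
flag.value = True           # no change -> no notification
print(seen)                 # [True]
```

One deliberate difference from the `SmartBool` listing above: this sketch assigns the new value *before* notifying, so a subscriber that reads `flag.value` inside its callback already sees the updated state.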
Just use `ILCheckbox.setCheckState(Qt.Checked)` after setting ILCheck. You don't need signals here since you can call a slot directly. If you want to use this feature more than once, you should consider writing a setter which changes the state of `self.ILCheck` and emits a signal. Edit after your clarification: * You can use the setter approach, but instead of setting the value of ILCheckbox, you should call `your_properly_named_and_defined_signal.emit()`. For more information about signal definitions see e.g. <http://www.pythoncentral.io/pysidepyqt-tutorial-creating-your-own-signals-and-slots/>. * You'll have to connect your signal to a slot which will set the checkbox correctly. This connection could be made in the `__init__()` of your controller class.
24,070,856
I have a problem with QCheckBox. I am trying to connect a boolean variable to a QCheckBox so that **when I change the boolean variable, the QCheckBox will be automatically checked or unchecked.** My question is similar to the question below, but in the opposite direction. [question: Python3 PyQt4 Creating a simple QCheckBox and changing a Boolean variable](https://stackoverflow.com/questions/12736825/python3-pyqt4-creating-a-simple-qcheckbox-and-changing-a-boolean-variable) I have copied one solution from that question here. ``` import sys from PyQt4.QtGui import * from PyQt4.QtCore import * class SelectionWindow(QMainWindow): def __init__(self, parent=None): super().__init__(parent) self.ILCheck = False ILCheckbox = QCheckBox(self) ILCheckbox.setCheckState(Qt.Unchecked) ILCheckbox.stateChanged.connect(self.ILCheckbox_changed) MainLayout = QGridLayout() MainLayout.addWidget(ILCheckbox, 0, 0, 1, 1) self.setLayout(MainLayout) def ILCheckbox_changed(self, state): self.ILCheck = (state == Qt.Checked) print(self.ILCheck) if __name__ == '__main__': app = QApplication(sys.argv) window = SelectionWindow() window.show() window.ILCheck = True sys.exit(app.exec_()) ``` In this case, once I set ILCheck to True, the QCheckBox should be checked. Any help would be appreciated! Thanks! --- Update: I am using MVC in my project; the code above is just an example showing what I need. The bool value `ILCheck` will be used in other places, and I don't want to reference `ILCheckBox` in my model. I expect that if I modify the value of `ILCheck`, `ILCheckBox` will react correctly. --- Update: Thanks for all your replies and help. All the solutions are great! What I need is more like a **Model-View** solution, so that I can separate the modeling part from the GUI part. When I want to update something, I just update the model and don't need to pay attention to what the GUI looks like. I can't set this bool property in the View class, so I can't use this solution. I am not sure MVC is suitable in PyQt.
I have a close-to-working solution below, but with one problem. ``` from PyQt4 import QtGui, QtCore, uic import sys class CellList(QtGui.QStandardItemModel): def __init__(self, cells = [], parent = None): QtGui.QStandardItemModel.__init__(self, parent) self.__cells = cells self.add(cells) def headerData(self, section, orientation, role): if role == QtCore.Qt.DisplayRole: return QtCore.QString("Cell id List") def flags(self, index): return QtCore.Qt.ItemIsUserCheckable | QtCore.Qt.ItemIsEnabled | QtCore.Qt.ItemIsSelectable def add(self, cells): for i in xrange(0, len(cells)): item = QtGui.QStandardItem('Cell %s' % cells[i][0]) if (cells[i][1]): item.setCheckState(QtCore.Qt.Checked) else: item.setCheckState(QtCore.Qt.Unchecked) item.setCheckable(True) self.appendRow(item) def update(self, cells = None): # TODO: Making this working with out clean all old Cell self.clear() if cells is None: cells = self.__cells else: print "hi" self.__cells = cells print cells self.add(cells) if __name__ == '__main__': app = QtGui.QApplication(sys.argv) listView = QtGui.QListView() listView.show() data = [[85, True], (105, True), (123, False)] model = CellList(data) listView.setModel(model) data[0][1] = False model.update(data) sys.exit(app.exec_()) ``` There is a problem with this solution that I can't solve: I think only a view can set a model, and I am not sure if I can set a model on a single `QCheckBox`.
2014/06/05
[ "https://Stackoverflow.com/questions/24070856", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2727296/" ]
As Avaris shows in his/her answer, emulating the overload of the `=` operator is a good start for solving the question. But there is still the problem of the code being added to the `SelectionWindow` class. Since we are using `Qt`, let's implement a custom `QObject` that represents our "smart" boolean variable and emits a signal when its value changes. ``` class SmartBool(QObject): valueChanged = pyqtSignal(bool) # Signal to be emitted when value changes. def __init__(self): super(SmartBool, self).__init__() # Call QObject constructor. self.__value = False # False initialized by default. @property def value(self): return self.__value @value.setter def value(self, value): if self.__value != value: self.valueChanged.emit(value) # If value changes, emit signal. self.__value = value ``` Now your code needs only a couple of changes. Replace the line: ``` self.ILCheck = False ``` with: ``` self.ILCheck = SmartBool() ``` and connect the signal and the slot by adding the line below to `__init__()` somewhere after the line above. **IMPORTANT: you are not bound to make the connection from within the `SelectionWindow` class.** ``` self.connect(self.ILCheck, SIGNAL("valueChanged(bool)"), ILCheckbox, SLOT("setChecked(bool)")) ``` For testing the result, just add: ``` window.ILCheck.value = True ``` to your "main" and you will see the checkbox checked the next time you run the example. **The full code example was added at the end for aesthetic reasons.** ``` import sys from PyQt4.QtGui import * from PyQt4.QtCore import * class SmartBool(QObject): valueChanged = pyqtSignal(bool) # Signal to be emitted when value changes. def __init__(self, value=False): super(SmartBool, self).__init__() # Call QObject constructor. self.__value = value # False initialized by default. @property def value(self): return self.__value @value.setter def value(self, value): if self.__value != value: self.valueChanged.emit(value) # If value changes, emit signal.
self.__value = value class SelectionWindow(QMainWindow): def __init__(self, parent=None): super().__init__(parent) self.ILCheck = SmartBool() # Your steroides bool variable. ILCheckbox = QCheckBox(self) self.connect(self.ILCheck, SIGNAL("valueChanged(bool)"), ILCheckbox, SLOT("setChecked(bool)")) ILCheckbox.setCheckState(Qt.Unchecked) ILCheckbox.stateChanged.connect(self.ILCheckbox_changed) MainLayout = QGridLayout() MainLayout.addWidget(ILCheckbox, 0, 0, 1, 1) self.setLayout(MainLayout) def ILCheckbox_changed(self, state): self.ILCheck = (state == Qt.Checked) print(self.ILCheck) if __name__ == '__main__': app = QApplication(sys.argv) window = SelectionWindow() window.show() window.ILCheck.value = True sys.exit(app.exec_()) ```
[`property`](https://docs.python.org/2/library/functions.html#property) is the way to define a variable that does additional work upon assigning/accessing. Below is the code modified for that purpose. It changes `ILCheck` to a property such that it'll also update the checkbox upon assigning. Proper error checking for `.setter` is left out but most probably needed. ``` import sys from PyQt4.QtGui import * from PyQt4.QtCore import * class SelectionWindow(QWidget): def __init__(self, parent=None): super(SelectionWindow, self).__init__(parent) self._ILCheck = False self.ILCheckbox = QCheckBox(self) self.ILCheckbox.setCheckState(Qt.Unchecked) self.ILCheckbox.stateChanged.connect(self.ILCheckbox_changed) MainLayout = QGridLayout() MainLayout.addWidget(self.ILCheckbox, 0, 0, 1, 1) self.setLayout(MainLayout) def ILCheckbox_changed(self, state): self._ILCheck = (state == Qt.Checked) print(self.ILCheck) @property def ILCheck(self): return self._ILCheck @ILCheck.setter def ILCheck(self, value): self._ILCheck = value self.ILCheckbox.setChecked(value) if __name__ == '__main__': app = QApplication(sys.argv) window = SelectionWindow() window.show() window.ILCheck = True sys.exit(app.exec_()) ```
3,014,223
We build software using Hudson and Maven. We have C#, Java and, last but not least, PL/SQL sources (sprocs, packages, DDL, crud). For C# and Java we do unit tests and code analysis, but we don't really know the health of our PL/SQL sources before we actually publish them to the target database. ### Requirements There are a couple of things we want to test, in the following priority: 1. Are the sources valid, hence "compilable"? 2. For packages, with respect to a certain database, would they compile? 3. Code quality: Do we have code flaws like duplicates, too-complex methods or other violations of a defined set of rules? Also, * the tool must run headless (command line, ant, ...) * we want to do analysis on a partial code base (changed sources only) ### Tools We did a little research and found the following tools that could potentially help: * **[Cast Application Intelligence Platform (AIP)](http://www.castsoftware.com/Product/Application-Intelligence-Platform.aspx):** Seems to be a server that grasps information about "anything". Couldn't find a console version that would export in a readable format. * **[Toad for Oracle](http://www.toadsoft.com/toad_oracle.htm):** The Professional version is said to include something called Xpert, which validates a set of rules against a code base. * **[Sonar](http://www.sonarsource.com) + [PL/SQL-Plugin](http://www.sonarsource.com/plugins/plugin-plsql/):** Uses Toad for Oracle to display code health the Sonar way. This is for browsing the current state of the code base. * **[Semantic Designs DMSToolkit](http://www.semanticdesigns.com/Products/DMS/DMSToolkit.html?Home=PLSQLTools):** Quite general analysis of the source code base. Command line available? * **[Semantic Designs Clones Detector](http://www.semanticdesigns.com/Products/Clone/index.html?Home=PLSQLTools):** Detects clones. But also via command line? * **[Fortify Source Code Analyzer](http://www.fortify.com/products/detect/in_development.jsp):** Seems to be focused on security issues.
But maybe it is extensible? *[more...](http://products.enterpriseitplanet.com/security/security/1101145596.html)* So far, Toad for Oracle together with Sonar seems to be an elegant solution. But maybe we are missing something here? Any ideas? Other products? Experiences? ### Related Questions on SO: * [Any Static Code Analysis Tools for Stored Procedures?](https://stackoverflow.com/questions/531430/any-static-code-analysis-tools-for-stored-procedures) * <https://stackoverflow.com/questions/839707/any-code-quality-tool-for-pl-sql> * [Is there a static analysis tool for Python, Ruby, Sql, Cobol, Perl, and PL/SQL?](https://stackoverflow.com/questions/956104/is-there-a-static-analysis-tool-for-python-ruby-sql-cobol-perl-and-pl-sql)
2010/06/10
[ "https://Stackoverflow.com/questions/3014223", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11562/" ]
I think that this blog describes the needed process: <http://www.theserverlabs.com/blog/?p=435> Please check and let me know what you think about it.
Our approach is to keep each database object (tables, views, functions, packages, sprocs etc) in its own file under source control and have an integration server ([TeamCity](http://www.jetbrains.com/teamcity/), [Hudson](http://hudson-ci.org/) etc) do a nightly build of the database - from source - where it drops and recreates the schema before checking for compilation errors in the `user_errors` system table. This lets you know when someone has introduced compilation errors into the build. The next step is to use something like [PLUTO](http://code.google.com/p/pluto-test-framework/) to add unit tests to your PL/SQL code and add those into the nightly build task. For us, this has involved having sample test datasets (also under source control) that allow us to get the database to a "known state" for the purposes of testing. I've not found anything that helps us much with any of the above so it's mainly a collection of Ant tasks, custom shell scripts and wizardry, which basically apply the required DDL to an empty database and use `DBMS_UTILITY.COMPILE_SCHEMA()` to, uh, compile the schema. You can add more fancy stuff later, like back-tracing objects which fail to compile or fail tests to a specific *submit* in source control, and issue "blame mail". I'd be really interested to see if anyone else has a better approach or if there's an off-the-shelf product that does this for me!
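The reporting half of such a nightly build can be sketched in plain Python. The rows below are hand-written samples shaped like the output of `SELECT name, type, line, position, text FROM user_errors`; a real run would fetch them through a database driver instead:

```python
def format_user_errors(rows):
    """Turn rows shaped like Oracle's USER_ERRORS view
    (name, type, line, position, text) into readable build messages."""
    return [
        "%s %s: line %d, col %d: %s" % (obj_type, name, line, pos, text)
        for name, obj_type, line, pos, text in rows
    ]

# Hypothetical sample standing in for a real USER_ERRORS query result.
sample = [
    ("MY_PKG", "PACKAGE BODY", 42, 7,
     "PLS-00201: identifier 'FOO' must be declared"),
]
report = format_user_errors(sample)
print(report[0])
```

In the nightly build described above, a non-empty report would simply fail the CI step (and feed the "blame mail").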
3,014,223
We build software using Hudson and Maven. We have C#, Java and, last but not least, PL/SQL sources (sprocs, packages, DDL, crud). For C# and Java we do unit tests and code analysis, but we don't really know the health of our PL/SQL sources before we actually publish them to the target database. ### Requirements There are a couple of things we want to test, in the following priority: 1. Are the sources valid, hence "compilable"? 2. For packages, with respect to a certain database, would they compile? 3. Code quality: Do we have code flaws like duplicates, too-complex methods or other violations of a defined set of rules? Also, * the tool must run headless (command line, ant, ...) * we want to do analysis on a partial code base (changed sources only) ### Tools We did a little research and found the following tools that could potentially help: * **[Cast Application Intelligence Platform (AIP)](http://www.castsoftware.com/Product/Application-Intelligence-Platform.aspx):** Seems to be a server that grasps information about "anything". Couldn't find a console version that would export in a readable format. * **[Toad for Oracle](http://www.toadsoft.com/toad_oracle.htm):** The Professional version is said to include something called Xpert, which validates a set of rules against a code base. * **[Sonar](http://www.sonarsource.com) + [PL/SQL-Plugin](http://www.sonarsource.com/plugins/plugin-plsql/):** Uses Toad for Oracle to display code health the Sonar way. This is for browsing the current state of the code base. * **[Semantic Designs DMSToolkit](http://www.semanticdesigns.com/Products/DMS/DMSToolkit.html?Home=PLSQLTools):** Quite general analysis of the source code base. Command line available? * **[Semantic Designs Clones Detector](http://www.semanticdesigns.com/Products/Clone/index.html?Home=PLSQLTools):** Detects clones. But also via command line? * **[Fortify Source Code Analyzer](http://www.fortify.com/products/detect/in_development.jsp):** Seems to be focused on security issues.
But maybe it is extensible? *[more...](http://products.enterpriseitplanet.com/security/security/1101145596.html)* So far, Toad for Oracle together with Sonar seems to be an elegant solution. But maybe we are missing something here? Any ideas? Other products? Experiences? ### Related Questions on SO: * [Any Static Code Analysis Tools for Stored Procedures?](https://stackoverflow.com/questions/531430/any-static-code-analysis-tools-for-stored-procedures) * <https://stackoverflow.com/questions/839707/any-code-quality-tool-for-pl-sql> * [Is there a static analysis tool for Python, Ruby, Sql, Cobol, Perl, and PL/SQL?](https://stackoverflow.com/questions/956104/is-there-a-static-analysis-tool-for-python-ruby-sql-cobol-perl-and-pl-sql)
2010/06/10
[ "https://Stackoverflow.com/questions/3014223", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11562/" ]
I think that this blog describes the needed process: <http://www.theserverlabs.com/blog/?p=435> Please check and let me know what you think about it.
Our [DMS Software Reengineering Toolkit](http://www.semanticdesigns.com/Products/DMS/DMSToolkit.html) is the foundation for arbitrary customizable tools. It has a PL/SQL front end that can be used to build arbitrary source code quality checks. Yes, it has a command-line version. There are a variety of [PL/SQL COTS tools](http://www.semanticdesigns.com/Products/LanguageTools/PLSQLTools.html) based on DMS that could be used to check quality: * Formatter - Cleans up layout. Side effect: static check for legal PL/SQL syntax * Source Code Search Engine - enables fast search of indexed source code base. Computes Halstead and Cyclomatic metrics as a side effect of setting up the index. * CloneDR - finds and reports duplicated PL/SQL code * Test Coverage - determines part of PL/SQL code not executed by tests (ad hoc, unit, or functional tests) All these have command line versions.
17,410,970
In my program, many processes may try to create a file if it doesn't currently exist. Now I want to ensure that only one of the processes is able to create the file and the rest get an exception if it has already been created (a kind of process-safe and thread-safe open() implementation). How can I achieve this in Python? Just for clarity: what I want is that the file is created if it doesn't exist, but if it already exists an exception is thrown, and all of this should happen atomically.
2013/07/01
[ "https://Stackoverflow.com/questions/17410970", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1421499/" ]
In Python 2.x: ``` import os fd = os.open('filename', os.O_CREAT|os.O_EXCL) with os.fdopen(fd, 'w') as f: .... ``` In Python 3.3+: ``` with open('filename', 'x') as f: .... ```
If you're running on a Unix-like system, open the file like this: ``` f = os.fdopen(os.open(filename, os.O_CREAT | os.O_WRONLY | os.O_EXCL), 'w') ``` The `O_EXCL` flag to `os.open` ensures that the file will only be created (and opened) if it doesn't already exist, otherwise an `OSError` exception will be raised. The existence check and creation will be performed atomically, so you can have multiple threads or processes contend to create the file, and only one will come out successful.
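Both variants can be exercised in a quick sketch: the second creation attempt on the same path fails, which is exactly the "only one process wins" behaviour asked for. The path below is a throwaway temp file:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "lockfile")

# First creation succeeds: O_CREAT|O_EXCL (like mode 'x' in open()) is atomic.
fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_EXCL)
with os.fdopen(fd, "w") as f:
    f.write("winner\n")

# Any later attempt fails, because the file now exists.
try:
    os.open(path, os.O_CREAT | os.O_WRONLY | os.O_EXCL)
    second_attempt = "succeeded"
except FileExistsError:  # the OSError subclass raised for EEXIST
    second_attempt = "failed"

print(second_attempt)  # failed
```

The same check-and-create happens in a single system call, so two processes racing on the path cannot both succeed.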
69,499,962
So I have this big .csv at work that looks something like this: ``` Name| Adress| Email| Paid Value John| x street | John@dmail.com| 0| Chris| c street | Chris@dmail.com| 100| Rebecca| y street| RebeccaFML|@dmail.com|177| Bozo | z street| BozoSMH|@yahow.com|976| ``` As you can see, the .csv is separated by pipes, and the emails of the last two people contain pipes, causing formatting problems. There are only 2 customers with this problem, but they will have more and more entries every month, and we have to manually find them in the csv and change the email by hand. It is a very boring and time-consuming process because the file is that big. We use Python to deal with data; I researched a bit and couldn't find anything to help me with it. Any ideas? Edit: What I want is a way to change these email addresses automatically through code (like RebeccaFML|@dmail.com -> RebeccaFML@dmail.com). It doesn't need to be pandas or anything; I am accepting ideas of any sort. The main thing is that I only know how to do replacements once I have read the file in Python, but since these records contain the pipes, they don't parse properly. Thanks in advance
2021/10/08
[ "https://Stackoverflow.com/questions/69499962", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14166159/" ]
Using `subset` within a `dplyr` pipe, you can use the code below: ``` library(dplyr) df %>% subset(!is.na(value) & bs_Scores != "bs_24" ) ```
A `dplyr` solution: ```r library(tidyverse) bs_scores <- tibble::tribble( ~bs_Scores, ~value, "bs_0", 16.7, "bs_1", 41.7, "bs_12", 33.3, "bs_24", NA, "bs_0", 25, "bs_1", 41.7, "bs_12", NA, "bs_24", 0, "bs_0", 16.7, "bs_1", 41.7, "bs_12", 16.7, "bs_24", 16.7, "bs_0", NA ) bs_scores %>% filter(!(bs_Scores == "bs_24" & (is.na(value)))) #> # A tibble: 12 × 2 #> bs_Scores value #> <chr> <dbl> #> 1 bs_0 16.7 #> 2 bs_1 41.7 #> 3 bs_12 33.3 #> 4 bs_0 25 #> 5 bs_1 41.7 #> 6 bs_12 NA #> 7 bs_24 0 #> 8 bs_0 16.7 #> 9 bs_1 41.7 #> 10 bs_12 16.7 #> 11 bs_24 16.7 #> 12 bs_0 NA ``` Created on 2021-10-11 by the [reprex package](https://reprex.tidyverse.org) (v2.0.1)
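The transformation described in the question's edit (RebeccaFML|@dmail.com -> RebeccaFML@dmail.com) can be sketched in Python, assuming the stray pipe always sits directly immediately before the `@`, as in the sample rows:

```python
import re

# A stray '|' directly before the '@' splits the email across two fields;
# deleting just that pipe rejoins it. Lookahead keeps the '@' itself.
PIPE_BEFORE_AT = re.compile(r"\|(?=@)")

def fix_line(line):
    """Remove any pipe that immediately precedes an '@' in a raw CSV line."""
    return PIPE_BEFORE_AT.sub("", line)

broken = "Rebecca| y street| RebeccaFML|@dmail.com|177|"
print(fix_line(broken))  # Rebecca| y street| RebeccaFML@dmail.com|177|
```

Run as a pre-processing pass over the raw lines (before any CSV parsing), well-formed rows pass through unchanged, so the file can then be parsed normally with `|` as the delimiter.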
69,499,962
So I have this big .csv at work that looks something like this: ``` Name| Adress| Email| Paid Value John| x street | John@dmail.com| 0| Chris| c street | Chris@dmail.com| 100| Rebecca| y street| RebeccaFML|@dmail.com|177| Bozo | z street| BozoSMH|@yahow.com|976| ``` As you can see, the .csv is separated by pipes, and the emails of the last two people contain pipes, causing formatting problems. There are only 2 customers with this problem, but they will have more and more entries every month, and we have to manually find them in the csv and change the email by hand. It is a very boring and time-consuming process because the file is that big. We use Python to deal with data; I researched a bit and couldn't find anything to help me with it. Any ideas? Edit: What I want is a way to change these email addresses automatically through code (like RebeccaFML|@dmail.com -> RebeccaFML@dmail.com). It doesn't need to be pandas or anything; I am accepting ideas of any sort. The main thing is that I only know how to do replacements once I have read the file in Python, but since these records contain the pipes, they don't parse properly. Thanks in advance
2021/10/08
[ "https://Stackoverflow.com/questions/69499962", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14166159/" ]
Using `base R` with `subset` ``` subset(df1, !((is.na(value) & bs_Scores == 'bs_24')|bs_Scores == "")) ``` -output ``` bs_Scores value 1 bs_0 16.7 2 bs_1 41.7 3 bs_12 33.3 5 bs_0 25.0 6 bs_1 41.7 7 bs_12 NA 8 bs_24 0.0 9 bs_0 16.7 10 bs_1 41.7 11 bs_12 16.7 12 bs_24 16.7 13 bs_0 NA ``` ### data ``` df1 <- structure(list(bs_Scores = c("bs_0", "bs_1", "bs_12", "bs_24", "bs_0", "bs_1", "bs_12", "bs_24", "bs_0", "bs_1", "bs_12", "bs_24", "bs_0"), value = c(16.7, 41.7, 33.3, NA, 25, 41.7, NA, 0, 16.7, 41.7, 16.7, 16.7, NA)), class = "data.frame", row.names = c(NA, -13L)) ```
A `dplyr` solution: ```r library(tidyverse) bs_scores <- tibble::tribble( ~bs_Scores, ~value, "bs_0", 16.7, "bs_1", 41.7, "bs_12", 33.3, "bs_24", NA, "bs_0", 25, "bs_1", 41.7, "bs_12", NA, "bs_24", 0, "bs_0", 16.7, "bs_1", 41.7, "bs_12", 16.7, "bs_24", 16.7, "bs_0", NA ) bs_scores %>% filter(!(bs_Scores == "bs_24" & (is.na(value)))) #> # A tibble: 12 × 2 #> bs_Scores value #> <chr> <dbl> #> 1 bs_0 16.7 #> 2 bs_1 41.7 #> 3 bs_12 33.3 #> 4 bs_0 25 #> 5 bs_1 41.7 #> 6 bs_12 NA #> 7 bs_24 0 #> 8 bs_0 16.7 #> 9 bs_1 41.7 #> 10 bs_12 16.7 #> 11 bs_24 16.7 #> 12 bs_0 NA ``` Created on 2021-10-11 by the [reprex package](https://reprex.tidyverse.org) (v2.0.1)
50,675,758
Please help me understand some asyncio things. I want to work out whether the following is possible: I have a synchronous function that, for example, creates some data in a remote API (the API can return success or fail): ``` def sync_func(url): ... do something return result ``` I have a coroutine to run that sync operation in an executor: ``` async def coro_func(url): loop = asyncio.get_event_loop() fn = functools.partial(sync_func, url) return await loop.run_in_executor(None, fn) ``` Next I want to do something like this: if the remote API does not respond within 1 second, I want to start processing the next url, but I still want to know the result of that first task (when the API finally sends a response) that was broken by the timeout. I wrap coro_func() in shield() to protect it from cancellation, but I have no idea how I can check the result afterwards... ``` list_of_urls = [url1, ... urlN] map_of_task_results = {} async def task_processing(): for url in list_of_urls: res = asyncio.wait_for(shield(coro_func(url), timeout=1)) if res == 'success': return res break else: map_of_task_results[url] = res return "all tasks were processed" ``` P.S. When I try to access the shield(coro) result, it has a CancelledError exception... but I expected there might be a result, because I 'shielded' the task. ``` try: task = asyncio.shield(coro_func(url)) result = await asyncio.wait_for(task, timeout=API_TIMEOUT) except TimeoutError as e: import ipdb; ipdb.set_trace() pending_tasks[api_details['api_url']] = task ``` ``` ipdb> task <Future cancelled created at /usr/lib/python3.6/asyncio/base_events.py:276> ipdb> task.exception <built-in method exception of _asyncio.Future object at 0x7f7d41eeb588> ipdb> task.exception() *** concurrent.futures._base.CancelledError ```
2018/06/04
[ "https://Stackoverflow.com/questions/50675758", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2235755/" ]
If you create a future (task) out of your coroutine before you shield it, you can always check it later. For example: ``` coro_task = loop.create_task(coro_func(url)) try: result = await asyncio.wait_for(asyncio.shield(coro_task), API_TIMEOUT) except asyncio.TimeoutError: pending_tasks[api_details['api_url']] = coro_task ``` You can use [`coro_task.done()`](https://docs.python.org/3/library/asyncio-task.html#asyncio.Future.done) to check if the task has completed in the meantime and call [`result()`](https://docs.python.org/3/library/asyncio-task.html#asyncio.Future.result) if so or `await` it if not. If needed you can even use `shield`/`wait_for` on it again, and so on.
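A minimal, self-contained sketch of that pattern (using Python 3.7+'s `asyncio.run` for brevity, and made-up timeout values in place of the thread's `API_TIMEOUT`):

```python
import asyncio

async def slow_call():
    await asyncio.sleep(0.2)      # stands in for the slow remote API call
    return "success"

async def main():
    task = asyncio.ensure_future(slow_call())
    try:
        # on timeout, wait_for cancels the shield wrapper, not the task
        return await asyncio.wait_for(asyncio.shield(task), timeout=0.05)
    except asyncio.TimeoutError:
        assert not task.done()    # still running: the shield absorbed the cancel
        return await task         # the result is recoverable later

print(asyncio.run(main()))        # success
```

The key point from the answer is visible here: because the future was created *before* shielding, the `task` reference survives the timeout and can be checked with `done()` or awaited again later.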
Ok, thanks @user4815162342, I figured out how to process the tasks that were interrupted by the timeout - overall my solution now looks like: ``` def sync_func(url): ... do something probably long return result async def coro_func(url): loop = asyncio.get_event_loop() fn = functools.partial(sync_func, url) return await loop.run_in_executor(None, fn) async def waiter(pending_tasks): count = 60 while not all(map(lambda x: x.done(), pending_tasks.values())) and count > 0: logger.info("Waiting for pending tasks..") await asyncio.sleep(1) count -= 1 # Finally process results that were pending print([task.result() for task in pending_tasks.values()]) async def task_processing(...): list_of_urls = [url1, ... urlN] pending_tasks = {} for url in list_of_urls: try: task = asyncio.Task(coro_func(url)) result = await asyncio.wait_for(asyncio.shield(task), timeout=API_TIMEOUT) except TimeoutError as e: pending_tasks[url] = task result = None # avoid a NameError in the check below when the attempt timed out if not result or result != 'success': continue else: print('Do something good here on first fast success, response to user ASAP in my case.') break # here start of pending task processing loop = asyncio.get_event_loop() loop.create_task(waiter(pending_tasks)) ``` So I'm collecting the tasks that were interrupted by concurrent.futures.TimeoutError in a dict, and then I start the waiter() coro as a task; it waits until the pending tasks are all done, or until 60 seconds run out. In other words, my code is placed in a Tornado RequestHandler, and Tornado uses the asyncio event loop. So after N attempts to get a fast response from one url in the list, I can answer the user without losing the results of the tasks that were initiated and then interrupted with TimeoutError. (I can process them after I respond to the user - that was my main idea.) I hope it saves a lot of time for somebody looking for the same :)
64,341,672
``` totalquestions = int(5) while totalquestions > 0 : num1 = randint(0,9) num2 = randint(0,9) print(num1) print(num2) answer = input(str("What is num1 ** num2?")) if answer == (num1 ** num2): print("correct") else: print("false") ``` I'm trying to create a quiz program where the user is given 2 random numbers and has to find the correct exponentiation of the 2 numbers given. Whenever I try to run this program I always get a false print statement even if the value I've inputted is correct. Sorry if this has a very simple solution, I'm still a noob at Python.
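One thing worth demonstrating alongside this snippet: `input()` returns a string in Python 3, and a `str` never compares equal to an `int`, so a comparison like `answer == (num1 ** num2)` is always `False` until the input is converted:

```python
answer = "8"                # what input() hands back: always a str
print(answer == 8)          # False - a str never equals an int
print(int(answer) == 8)     # True once the string is converted
```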
2020/10/13
[ "https://Stackoverflow.com/questions/64341672", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14444439/" ]
You need to collect the arguments first, *then* pass them to `Person`. ``` def getPeople(num): people = [] for i in range(num): name = input("What is the persons name?: ") age = input("What is the persons age?: ") computing = input("What is the persons Computing score?: ") maths = input("What is the persons Maths score?: ") english = input("What is the persons English score?: ") people.append(Person(name, age, computing, maths, english)) return people people = getPeople(5) ``` Note that there is a good case for using a class method here. ``` class Person: def __init__(self, name, age, computing, maths, english): self.name = name self.age = age self.computing = computing self.maths = maths self.english = english @classmethod def from_input(cls): name = input("What is the persons name?: ") age = input("What is the persons age?: ") computing = input("What is the persons Computing score?: ") maths = input("What is the persons Maths score?: ") english = input("What is the persons English score?: ") return cls(name, age, computing, maths, english) def getPeople(num): return [Person.from_input() for _ in range(num)] ```
You have added an `__init__` method to the class, so you need to pass all those variables as arguments when you instantiate `Person`. As an example: ``` name = input() age = input() .... new_person = Person(name, age, ...) people.append(new_person) ```
64,341,672
``` totalquestions = int(5) while totalquestions > 0 : num1 = randint(0,9) num2 = randint(0,9) print(num1) print(num2) answer = input(str("What is num1 ** num2?")) if answer == (num1 ** num2): print("correct") else: print("false") ``` I'm trying to create a quiz program where the user is given 2 random numbers and has to find the correct exponentiation of the 2 numbers given. Whenever I try to run this program I always get a false print statement even if the value I've inputted is correct. Sorry if this has a very simple solution, I'm still a noob at Python.
2020/10/13
[ "https://Stackoverflow.com/questions/64341672", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14444439/" ]
You need to collect the arguments first, *then* pass them to `Person`. ``` def getPeople(num): people = [] for i in range(num): name = input("What is the persons name?: ") age = input("What is the persons age?: ") computing = input("What is the persons Computing score?: ") maths = input("What is the persons Maths score?: ") english = input("What is the persons English score?: ") people.append(Person(name, age, computing, maths, english)) return people people = getPeople(5) ``` Note that there is a good case for using a class method here. ``` class Person: def __init__(self, name, age, computing, maths, english): self.name = name self.age = age self.computing = computing self.maths = maths self.english = english @classmethod def from_input(cls): name = input("What is the persons name?: ") age = input("What is the persons age?: ") computing = input("What is the persons Computing score?: ") maths = input("What is the persons Maths score?: ") english = input("What is the persons English score?: ") return cls(name, age, computing, maths, english) def getPeople(num): return [Person.from_input() for _ in range(num)] ```
The arguments to the `__init__()` method specify the arguments you have to supply when you call `Person()` (except that `self` is passed automatically). So you need to pass all the attribute values there, not assign them after creating the person. ``` def getPeople(num): people = [] for i in range(num): name = input("What is the persons name?: ") age = input("What is the persons age?: ") computing = input("What is the persons Computing score?: ") maths = input("What is the persons Maths score?: ") english = input("What is the persons English score?: ") newPerson = Person(name, age, computing, maths, english) people.append(newPerson) return people ```
64,341,672
``` totalquestions = int(5) while totalquestions > 0 : num1 = randint(0,9) num2 = randint(0,9) print(num1) print(num2) answer = input(str("What is num1 ** num2?")) if answer == (num1 ** num2): print("correct") else: print("false") ``` I'm trying to create a quiz program where the user is given 2 random numbers and has to find the correct exponentiation of the 2 numbers given. Whenever I try to run this program I always get a false print statement even if the value I've inputted is correct. Sorry if this has a very simple solution, I'm still a noob at Python.
2020/10/13
[ "https://Stackoverflow.com/questions/64341672", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14444439/" ]
You need to collect the arguments first, *then* pass them to `Person`. ``` def getPeople(num): people = [] for i in range(num): name = input("What is the persons name?: ") age = input("What is the persons age?: ") computing = input("What is the persons Computing score?: ") maths = input("What is the persons Maths score?: ") english = input("What is the persons English score?: ") people.append(Person(name, age, computing, maths, english)) return people people = getPeople(5) ``` Note that there is a good case for using a class method here. ``` class Person: def __init__(self, name, age, computing, maths, english): self.name = name self.age = age self.computing = computing self.maths = maths self.english = english @classmethod def from_input(cls): name = input("What is the persons name?: ") age = input("What is the persons age?: ") computing = input("What is the persons Computing score?: ") maths = input("What is the persons Maths score?: ") english = input("What is the persons English score?: ") return cls(name, age, computing, maths, english) def getPeople(num): return [Person.from_input() for _ in range(num)] ```
Did you mean to collect the arguments first and then supply them to the new instance? ```py def getPeople(num): people = [] for i in range(num): name = input("What is the persons name?: ") age = input("What is the persons age?: ") computing = input("What is the persons Computing score?: ") maths = input("What is the persons Maths score?: ") english = input("What is the persons English score?: ") people.append(Person(name, age, computing, maths, english)) return people ```
39,225,263
The bottleneck of my code is currently a conversion from a Python list to a C array using ctypes, as described [in this question](https://stackoverflow.com/questions/4145775/how-do-i-convert-a-python-list-into-a-c-array-by-using-ctypes). A small experiment shows that it is indeed very slow compared to other Python instructions: ``` import timeit setup="from array import array; import ctypes; t = [i for i in range(1000000)];" print(timeit.timeit(stmt='(ctypes.c_uint32 * len(t))(*t)',setup=setup,number=10)) print(timeit.timeit(stmt='array("I",t)',setup=setup,number=10)) print(timeit.timeit(stmt='set(t)',setup=setup,number=10)) ``` Gives: ``` 1.790962941000089 0.0911122129996329 0.3200237319997541 ``` I obtained these results with CPython 3.4.2. I get similar times with CPython 2.7.9 and Pypy 2.4.0. I tried running the above code with `perf`, commenting the `timeit` instructions to run only one at a time. I get these results: **ctypes** ``` Performance counter stats for 'python3 perf.py': 1807,891637 task-clock (msec) # 1,000 CPUs utilized 8 context-switches # 0,004 K/sec 0 cpu-migrations # 0,000 K/sec 59 523 page-faults # 0,033 M/sec 5 755 704 178 cycles # 3,184 GHz 13 552 506 138 instructions # 2,35 insn per cycle 3 217 289 822 branches # 1779,581 M/sec 748 614 branch-misses # 0,02% of all branches 1,808349671 seconds time elapsed ``` **array** ``` Performance counter stats for 'python3 perf.py': 144,678718 task-clock (msec) # 0,998 CPUs utilized 0 context-switches # 0,000 K/sec 0 cpu-migrations # 0,000 K/sec 12 913 page-faults # 0,089 M/sec 458 284 661 cycles # 3,168 GHz 1 253 747 066 instructions # 2,74 insn per cycle 325 528 639 branches # 2250,011 M/sec 708 280 branch-misses # 0,22% of all branches 0,144966969 seconds time elapsed ``` **set** ``` Performance counter stats for 'python3 perf.py': 369,786395 task-clock (msec) # 0,999 CPUs utilized 0 context-switches # 0,000 K/sec 0 cpu-migrations # 0,000 K/sec 108 584 page-faults # 0,294 M/sec 1 175 946 161 cycles # 3,180 GHz 2 086 554 968 instructions # 1,77 insn per cycle 422 531 402 branches # 1142,636 M/sec 768 338 branch-misses # 0,18% of all branches 0,370103043 seconds time elapsed ``` The code with `ctypes` has fewer page faults than the code with `set` and the same number of branch misses as the other two. The only thing I see is that there are more instructions and branches (but I still don't know why) and more context switches (but that is certainly a consequence of the longer run time rather than a cause). I therefore have two questions: 1. Why is ctypes so slow? 2. Is there a way to improve performance, either with ctypes or with another library?
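An aside not taken from the thread itself: one workaround often suggested for this bottleneck is to let `array` do the fast bulk copy (the second, much faster timing above) and then hand its buffer to ctypes, avoiding the per-element unpacking that `(ctypes.c_uint32 * len(t))(*t)` performs:

```python
import ctypes
from array import array

t = list(range(1000000))

# array() does the fast list-to-buffer copy; from_buffer then wraps that
# same memory as a ctypes array with no per-element Python-level work
arr = array("I", t)
c_arr = (ctypes.c_uint32 * len(arr)).from_buffer(arr)
print(c_arr[0], c_arr[len(arr) - 1])   # 0 999999
```

Note that `from_buffer` shares memory with `arr` rather than copying it, so `arr` must be kept alive for as long as `c_arr` is used.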
2016/08/30
[ "https://Stackoverflow.com/questions/39225263", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4110059/" ]
Here's a little trick; it works for all sorts of situations including yours, and also for trailing commas, for example. Concept ------- Instead of printing your text directly, store it in an array like so: ``` $information_to_print = ['col1', 'col2', 'col3']; $cols = []; foreach ($information_to_print as $col) { $cols[] = 'This is: ' . $col; } ``` Now all you have to do is implode the array, using the closing and opening tags as glue, and wrap it in the corresponding elements: ``` echo '<tr><td>' . implode('</td><td>', $cols) . '</td></tr>'; ``` Implementation -------------- In your particular case it would look something like this: ``` <?php $entriesConverted = [ ['column_size' => 1, 'content' => 'Item 1', 'text_align' => 'Center'], ['column_size' => 0.5, 'content' => 'Item 2', 'text_align' => 'Center'], ['column_size' => 0.75, 'content' => 'Item 3', 'text_align' => 'Center'], ]; // Set the sum to 0 to keep things clean and simple $sum = 0; $blocks = []; $block_i = 0; // Echo the starting div echo '<div class="content-block homepage-block row">', PHP_EOL; // Loop through the new columns foreach($entriesConverted as $newEntry){ for ($i=$newEntry['column_size']; $i <= 1; $i++) { $sum += $i; $newEntry['column_size'] = str_replace([0.25, 0.33, 0.5, 0.67, 0.75, 1], ['col-md-3', 'col-md-4', 'col-md-6', 'col-md-8', 'col-md-9', 'col-md-12'], $newEntry['column_size']); $newEntry['text_align'] = str_replace(['Left', 'Center', 'Right', 'Justified'], ['text-left', 'text-center', 'text-right', 'text-justify'], $newEntry['text_align']); if (!isset($blocks[$block_i])) { $blocks[] = ''; } $blocks[$block_i] .= '<div class="' . $newEntry['column_size'] . ' ' . $newEntry['text_align'] . '">' . $newEntry['content'] . '</div>'; } if($sum == 1){ $sum = 0; ++$block_i; } } echo implode("\n</div>\n<div class=\"content-block homepage-block row\">\n", $blocks); // Echo closing div echo PHP_EOL, '</div>'; ``` See a working version here: <http://ideone.com/28uXCT> *Note: I added some newlines to keep the output readable* **warning:** Be aware of a bug in your code. As you can see in the output of ideone, the total column span of the second row exceeds 12.
I think this might be easier if the row elements are inside the loop rather than outside. For example here's a quick pseudocode: ``` array items sum = 0 loop through items open row print output for this item increment sum if sum is 1 set sum 0 close row if this is not the last item in the array open next row ```
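That pseudocode could be sketched in Python (names are made up) just to make the row-closing control flow concrete:

```python
def build_rows(items):
    """items is a list of (column_size, content) pairs; a row closes
    once its sizes sum to 1 (i.e. a full grid row), as in the pseudocode."""
    rows, current, total = [], [], 0.0
    for size, content in items:
        current.append(content)
        total += size
        if total >= 1:            # row is full: close it and start the next
            rows.append(current)
            current, total = [], 0.0
    if current:                   # flush a partially filled final row
        rows.append(current)
    return rows

print(build_rows([(0.5, "Item 1"), (0.5, "Item 2"), (1, "Item 3")]))
# [['Item 1', 'Item 2'], ['Item 3']]
```

Opening and closing the row markup then happens once per element of `rows`, which is what keeps the open/close tags balanced.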
18,785,063
I've created a virtualenv for Python 2.7.4 on Ubuntu 13.04. I've installed python-dev. I get [this error](http://pastebin.com/YQfdYDVK) when installing numpy in the virtualenv. Do you have any ideas how to fix it?
2013/09/13
[ "https://Stackoverflow.com/questions/18785063", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1212100/" ]
The problem is `SystemError: Cannot compile 'Python.h'. Perhaps you need to install python-dev|python-devel.` So, in order to obtain 'Python.h', first make sure apt-get and gcc are up to date: ``` sudo apt-get update sudo apt-get upgrade gcc ``` then install python2.7-dev: ``` sudo apt-get install python2.7-dev ``` and I see that you have most probably already done the above things. pip will eventually spit out another error for not being able to write into `/user/bin/blahBlah/dist-packages/` or something like that, because it couldn't figure out that it was supposed to install your desired package (e.g. numpy) within the active env (the env created by virtualenv, which you might have even changed directory to while doing all this), so do this: ``` pip -E /some/path/env install desiredPackage ``` That should get the job done... hopefully :) **---Edit---** From pip version 1.1 onward, the `pip -E` command doesn't work. The following is an excerpt from the release notes of version 1.1 (<https://pip.pypa.io/en/latest/news.html>): Removed `-E/--environment` option and `PIP_RESPECT_VIRTUALENV`; both use a restart-in-venv mechanism that's broken, and neither one is useful since every virtualenv now has pip inside it. Replace `pip -E path/to/venv install Foo` with `virtualenv path/to/venv && path/to/venv/pip install Foo`
This is probably because you do not have the `python-dev` package installed. You can install it like this: ``` sudo apt-get install python-dev ``` You can also install it via the Software Center: ![enter image description here](https://i.stack.imgur.com/mNiu0.png)
18,785,063
I've created a virtualenv for Python 2.7.4 on Ubuntu 13.04. I've installed python-dev. I get [this error](http://pastebin.com/YQfdYDVK) when installing numpy in the virtualenv. Do you have any ideas how to fix it?
2013/09/13
[ "https://Stackoverflow.com/questions/18785063", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1212100/" ]
If you're hitting this issue even though you've installed all OS dependencies (python-devel, fortran compiler, etc.), the issue might instead be related to the following bug: ["numpy installation thru install\_requires directive issue..."](http://github.com/numpy/numpy/issues/2434) The workaround is to manually install numpy in your (virtual) environment before running setup.py to install whatever depends on numpy, e.g. `pip install numpy`, then `python ./setup.py install`
@samkhan13's solution didn't work for me, as pip said it doesn't have the -E option. I was still getting the same error, but what worked for me was to install matplotlib, which installed numpy.
18,785,063
I've created a virtualenv for Python 2.7.4 on Ubuntu 13.04. I've installed python-dev. I get [this error](http://pastebin.com/YQfdYDVK) when installing numpy in the virtualenv. Do you have any ideas how to fix it?
2013/09/13
[ "https://Stackoverflow.com/questions/18785063", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1212100/" ]
The problem is `SystemError: Cannot compile 'Python.h'. Perhaps you need to install python-dev|python-devel.` So, in order to obtain 'Python.h', first make sure apt-get and gcc are up to date: ``` sudo apt-get update sudo apt-get upgrade gcc ``` then install python2.7-dev: ``` sudo apt-get install python2.7-dev ``` and I see that you have most probably already done the above things. pip will eventually spit out another error for not being able to write into `/user/bin/blahBlah/dist-packages/` or something like that, because it couldn't figure out that it was supposed to install your desired package (e.g. numpy) within the active env (the env created by virtualenv, which you might have even changed directory to while doing all this), so do this: ``` pip -E /some/path/env install desiredPackage ``` That should get the job done... hopefully :) **---Edit---** From pip version 1.1 onward, the `pip -E` command doesn't work. The following is an excerpt from the release notes of version 1.1 (<https://pip.pypa.io/en/latest/news.html>): Removed `-E/--environment` option and `PIP_RESPECT_VIRTUALENV`; both use a restart-in-venv mechanism that's broken, and neither one is useful since every virtualenv now has pip inside it. Replace `pip -E path/to/venv install Foo` with `virtualenv path/to/venv && path/to/venv/pip install Foo`
If you're hitting this issue even though you've installed all OS dependencies (python-devel, fortran compiler, etc.), the issue might instead be related to the following bug: ["numpy installation thru install\_requires directive issue..."](http://github.com/numpy/numpy/issues/2434) The workaround is to manually install numpy in your (virtual) environment before running setup.py to install whatever depends on numpy, e.g. `pip install numpy`, then `python ./setup.py install`
18,785,063
I've created a virtualenv for Python 2.7.4 on Ubuntu 13.04. I've installed python-dev. I get [this error](http://pastebin.com/YQfdYDVK) when installing numpy in the virtualenv. Do you have any ideas how to fix it?
2013/09/13
[ "https://Stackoverflow.com/questions/18785063", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1212100/" ]
If you're on Python3 you'll need to do `sudo apt-get install python3-dev`. Took me a little while to figure it out.
This is probably because you do not have the `python-dev` package installed. You can install it like this: ``` sudo apt-get install python-dev ``` You can also install it via the Software Center: ![enter image description here](https://i.stack.imgur.com/mNiu0.png)
18,785,063
I've created a virtualenv for Python 2.7.4 on Ubuntu 13.04. I've installed python-dev. I get [this error](http://pastebin.com/YQfdYDVK) when installing numpy in the virtualenv. Do you have any ideas how to fix it?
2013/09/13
[ "https://Stackoverflow.com/questions/18785063", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1212100/" ]
If you're on Python3 you'll need to do `sudo apt-get install python3-dev`. Took me a little while to figure it out.
This answer is for those of us that compiled python from source or installed it to a non standard directory. In my case, python2.7 was installed to /usr/local and the include files were installed to /usr/local/include/python2.7 ``` C_INCLUDE_PATH=/usr/local/include/python2.7:$C_INCLUDE_PATH pip install numpy ```
18,785,063
I've created a virtualenv for Python 2.7.4 on Ubuntu 13.04. I've installed python-dev. I get [this error](http://pastebin.com/YQfdYDVK) when installing numpy in the virtualenv. Do you have any ideas how to fix it?
2013/09/13
[ "https://Stackoverflow.com/questions/18785063", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1212100/" ]
If you're on Python3 you'll need to do `sudo apt-get install python3-dev`. Took me a little while to figure it out.
I recently had the same problem. I run Debian Jessie and tried to install numpy from a Python 2.7.9 virtualenv. I got the same error -- numpy complaining that Python.h is missing while python2.7-dev and gcc are already installed. ``` File "numpy/core/setup.py", line 42, in check_types ], File "numpy/core/setup.py", line 293, in check_types SystemError: Cannot compile 'Python.h'. Perhaps you need to install python-dev|python-devel. ``` I'm running pip 1.5.6 and it doesn't appear to have command line option '-E' ``` $ pip -V pip 1.5.6 from /home/alex/.virtualenvs/myenv/local/lib/python2.7/site- packages (python 2.7) ``` Upgrading pip to the latest verson 7.0.3 solves the problem ``` $ pip install --upgrade pip Downloading/unpacking pip from https://pypi.python.org/packages/py2.py3/p/pip/pip-7.0.3-py2.py3-none-any.whl#md5=6950e1d775fea7ea50af690f72589dbd Downloading pip-7.0.3-py2.py3-none-any.whl (1.1MB): 1.1MB downloaded Installing collected packages: pip Found existing installation: pip 1.5.6 Uninstalling pip: Successfully uninstalled pip Successfully installed pip Cleaning up... ``` Now it is possible to install numpy ``` $ pip install numpy Collecting numpy Downloading numpy-1.9.2.tar.gz (4.0MB) 100% |████████████████████████████████| 4.0MB 61kB/s Installing collected packages: numpy Running setup.py install for numpy Successfully installed numpy-1.9.2 ```
18,785,063
I've created a virtualenv for Python 2.7.4 on Ubuntu 13.04. I've installed python-dev. I get [this error](http://pastebin.com/YQfdYDVK) when installing numpy in the virtualenv. Do you have any ideas how to fix it?
2013/09/13
[ "https://Stackoverflow.com/questions/18785063", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1212100/" ]
The problem is `SystemError: Cannot compile 'Python.h'. Perhaps you need to install python-dev|python-devel.` So, in order to obtain 'Python.h', first make sure apt-get and gcc are up to date: ``` sudo apt-get update sudo apt-get upgrade gcc ``` then install python2.7-dev: ``` sudo apt-get install python2.7-dev ``` and I see that you have most probably already done the above things. pip will eventually spit out another error for not being able to write into `/user/bin/blahBlah/dist-packages/` or something like that, because it couldn't figure out that it was supposed to install your desired package (e.g. numpy) within the active env (the env created by virtualenv, which you might have even changed directory to while doing all this), so do this: ``` pip -E /some/path/env install desiredPackage ``` That should get the job done... hopefully :) **---Edit---** From pip version 1.1 onward, the `pip -E` command doesn't work. The following is an excerpt from the release notes of version 1.1 (<https://pip.pypa.io/en/latest/news.html>): Removed `-E/--environment` option and `PIP_RESPECT_VIRTUALENV`; both use a restart-in-venv mechanism that's broken, and neither one is useful since every virtualenv now has pip inside it. Replace `pip -E path/to/venv install Foo` with `virtualenv path/to/venv && path/to/venv/pip install Foo`
This answer is for those of us that compiled python from source or installed it to a non standard directory. In my case, python2.7 was installed to /usr/local and the include files were installed to /usr/local/include/python2.7 ``` C_INCLUDE_PATH=/usr/local/include/python2.7:$C_INCLUDE_PATH pip install numpy ```
18,785,063
I've created a virtualenv for Python 2.7.4 on Ubuntu 13.04. I've installed python-dev. I get [this error](http://pastebin.com/YQfdYDVK) when installing numpy in the virtualenv. Do you have any ideas how to fix it?
2013/09/13
[ "https://Stackoverflow.com/questions/18785063", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1212100/" ]
If you're hitting this issue even though you've installed all OS dependencies (python-devel, fortran compiler, etc.), the issue might instead be related to the following bug: ["numpy installation thru install\_requires directive issue..."](http://github.com/numpy/numpy/issues/2434) The workaround is to manually install numpy in your (virtual) environment before running setup.py to install whatever depends on numpy, e.g. `pip install numpy`, then `python ./setup.py install`
This answer is for those of us that compiled python from source or installed it to a non standard directory. In my case, python2.7 was installed to /usr/local and the include files were installed to /usr/local/include/python2.7 ``` C_INCLUDE_PATH=/usr/local/include/python2.7:$C_INCLUDE_PATH pip install numpy ```
18,785,063
I've created a virtualenv for Python 2.7.4 on Ubuntu 13.04. I've installed python-dev. I get [this error](http://pastebin.com/YQfdYDVK) when installing numpy in the virtualenv. Do you have any ideas how to fix it?
2013/09/13
[ "https://Stackoverflow.com/questions/18785063", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1212100/" ]
The problem is `SystemError: Cannot compile 'Python.h'. Perhaps you need to install python-dev|python-devel.` So, in order to obtain 'Python.h', first make sure apt-get and gcc are up to date: ``` sudo apt-get update sudo apt-get upgrade gcc ``` then install python2.7-dev: ``` sudo apt-get install python2.7-dev ``` and I see that you have most probably already done the above things. pip will eventually spit out another error for not being able to write into `/user/bin/blahBlah/dist-packages/` or something like that, because it couldn't figure out that it was supposed to install your desired package (e.g. numpy) within the active env (the env created by virtualenv, which you might have even changed directory to while doing all this), so do this: ``` pip -E /some/path/env install desiredPackage ``` That should get the job done... hopefully :) **---Edit---** From pip version 1.1 onward, the `pip -E` command doesn't work. The following is an excerpt from the release notes of version 1.1 (<https://pip.pypa.io/en/latest/news.html>): Removed `-E/--environment` option and `PIP_RESPECT_VIRTUALENV`; both use a restart-in-venv mechanism that's broken, and neither one is useful since every virtualenv now has pip inside it. Replace `pip -E path/to/venv install Foo` with `virtualenv path/to/venv && path/to/venv/pip install Foo`
@samkhan13's solution didn't work for me, as pip said it doesn't have the -E option. I was still getting the same error, but what worked for me was to install matplotlib, which installed numpy.
18,785,063
I've created a virtualenv for Python 2.7.4 on Ubuntu 13.04. I've installed python-dev. I get [this error](http://pastebin.com/YQfdYDVK) when installing numpy in the virtualenv. Do you have any ideas how to fix it?
2013/09/13
[ "https://Stackoverflow.com/questions/18785063", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1212100/" ]
If you're hitting this issue even though you've installed all OS dependencies (python-devel, fortran compiler, etc.), the issue might instead be related to the following bug: ["numpy installation thru install\_requires directive issue..."](http://github.com/numpy/numpy/issues/2434) The workaround is to manually install numpy in your (virtual) environment before running setup.py to install whatever depends on numpy, e.g. `pip install numpy`, then `python ./setup.py install`
This is probably because you do not have the `python-dev` package installed. You can install it like this: ``` sudo apt-get install python-dev ``` You can also install it via the Software Center: ![enter image description here](https://i.stack.imgur.com/mNiu0.png)
22,099,882
I need some help with the encoding of a list. I'm new to Python, sorry. First, I'm using Python 2.7.3. I have two lists (entidad & valores), and I need to get them encoded or something like that. My code: ``` import urllib from bs4 import BeautifulSoup import csv sock = urllib.urlopen("http://www.fatm.com.es/Datos_Equipo.asp?Cod=01HU0010") htmlSource = sock.read() sock.close() soup = BeautifulSoup(htmlSource) form = soup.find("form", {'id': "FORM1"}) table = form.find("table") entidad = [item.text.strip() for item in table.find_all('td')] valores = [item.get('value') for item in form.find_all('input')] valores.remove('Imprimir') valores.remove('Cerrar') header = entidad values = valores print values out = open('tomate.csv', 'w') w = csv.writer(out) w.writerow(header) w.writerow(values) out.close() ``` The log: *UnicodeEncodeError: 'ascii' codec can't encode character* Any ideas? Thanks in advance!!
2014/02/28
[ "https://Stackoverflow.com/questions/22099882", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3361555/" ]
You should encode your data to utf-8 manually; `csv.writer` doesn't do it for you: ``` w.writerow([s.encode("utf-8") for s in header]) w.writerow([s.encode("utf-8") for s in values]) #w.writerow(header) #w.writerow(values) ```
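For readers on Python 3 the encode-each-cell step is unnecessary; here is a minimal sketch of the same idea that runs on Python 3 (the sample cells are invented for illustration), encoding the finished CSV text once at the end:

```python
import csv
import io

# Hypothetical sample cells containing non-ASCII text.
header = [u"Equipo", u"Categor\u00eda"]
values = [u"Mart\u00edn", u"A\u00f1o 2014"]

# On Python 2, each unicode cell had to be encoded before csv.writer saw it.
# On Python 3, csv works with text directly, so we build the CSV in memory
# and encode the whole result to UTF-8 bytes once, when writing to disk.
buf = io.StringIO()
w = csv.writer(buf)
w.writerow(header)
w.writerow(values)
data = buf.getvalue().encode("utf-8")  # UTF-8 bytes ready for open(..., "wb")
```

Writing `data` with `open('tomate.csv', 'wb')` then produces the same UTF-8 file the per-cell encoding did on Python 2.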
This appears to be the same type of problem as had been found here [UnicodeEncodeError in csv writer in Python](http://love-python.blogspot.com/2012/04/unicodeencodeerror-in-csv-writer-in.html) > > UnicodeEncodeError in csv writer in Python > > Today I was writing a > program that generates a csv file after some processing. But I got the > following error while trying on some test data: > > > writer.writerow(csv\_li) UnicodeEncodeError: 'ascii' codec can't encode > character u'\xbf' in position 5: ordinal not in range(128) > > > I looked into the documentation of csv module in Python and found a > class named UnicodeWriter. So I changed my code to > > > writer = UnicodeWriter(open("filename.csv", "wb")) > > > Then I tried to run it again. It got rid of the previous > UnicodeEncodeError but got into another error. > > > self.writer.writerow([s.encode("utf-8") for s in row]) AttributeError: > 'int' object has no attribute 'encode' > > > So, before writing the list, I had to change every value to string. > > > row = [str(item) for item in row] > > > I think this line can be added in the writerow function of > UnicodeWriter class. > > >
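The str-coercion step quoted above can be wrapped in one helper; a sketch (the helper name is invented) that coerces every cell to text before encoding, so mixed rows of ints and unicode don't raise AttributeError:

```python
def encode_row(row, encoding="utf-8"):
    """Coerce every cell to text, then encode it, mirroring the fix quoted
    above: non-string values (e.g. ints) have no .encode(), so they must
    become strings first."""
    out = []
    for item in row:
        if not isinstance(item, str):
            item = str(item)  # e.g. 42 -> "42"
        out.append(item.encode(encoding))
    return out
```

This keeps the coercion and the encoding in one place instead of scattering both fixes through the writer calls.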
41,286,526
I am trying to setup a queue listener for laravel and cannot seem to get supervisor working correctly. I get the following error when I run `supervisorctl reload`: `error: <class 'socket.error'>, [Errno 2] No such file or directory: file: /usr/lib/python2.7/socket.py line: 228` The file DOES exist. If try to run `sudo supervisorctl` I get this `unix:///var/run/supervisor.sock no such file`. I've tried reinstall supervisor and that did not work either. Not sure what to do here. I'm running Laravel Homestead (Ubuntu 16.04). Result of `service supervisor status`: `vagrant@homestead:~/Code$ sudo service supervisor status ● supervisor.service - Supervisor process control system for UNIX Loaded: loaded (/lib/systemd/system/supervisor.service; enabled; vendor preset: enabled) Active: activating (auto-restart) (Result: exit-code) since Thu 2016-12-22 11:06:21 EST; 41s ago Docs: http://supervisord.org Process: 23154 ExecStop=/usr/bin/supervisorctl $OPTIONS shutdown (code=exited, status=0/SUCCESS) Process: 23149 ExecStart=/usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf (code=exited, status=2) Main PID: 23149 (code=exited, status=2)`
2016/12/22
[ "https://Stackoverflow.com/questions/41286526", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1965066/" ]
You should run `sudo service supervisor start` when you are in the supervisor dir. Worked for me.
I had a very similar problem (Ubuntu 18.04) and searched similar threads to no avail, so I'm answering here with some more comprehensive answers. The lack of a sock file or a socket error is only an indicator that supervisor is not running. If a simple restart doesn't work, it's either 1. not installed, or 2. failing to start. In my case nothing was being logged to the supervisor.log file for me to know why it was failing until I ran the following command (-n to run in the foreground), only to find out that there was a leftover configuration file for a deleted project that I had missed. ``` /usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf ``` Once I deleted the bad/leftover file in the conf.d folder and started it back up with `sudo service supervisor start`, everything worked. Here are some comprehensive steps you can take. 1. Is supervisor installed? `dpkg -l | grep supervisor` If not, reinstall: `sudo apt install supervisor` 2. Are all instances of supervisor stopped? `systemctl stop supervisor` Lingering supervisor processes can be found with `ps aux | grep supervisor`, then `kill -9 PID`. 3. Is supervisor.conf in the right location, `/etc/supervisor/supervisor.conf`, with no syntax errors? Reinstalling from the package would correct this. 4. Move your specific files in conf.d/ temporarily out of the folder to try starting with no additional config files. If it starts right up with `sudo service supervisor start`, an error likely exists in one of your project .conf files. 5. Check status with `sudo service supervisor status`. 6. Move your .conf files one by one back into conf.d/ and restart with `sudo service supervisor restart`. Be sure to check with `sudo service supervisor status` in between. If it fails, you know which .conf file has an issue and can ask for specific help. 7. Check that everything is running with `supervisorctl status`, and if not, start with `supervisorctl start all`.
41,286,526
I am trying to setup a queue listener for laravel and cannot seem to get supervisor working correctly. I get the following error when I run `supervisorctl reload`: `error: <class 'socket.error'>, [Errno 2] No such file or directory: file: /usr/lib/python2.7/socket.py line: 228` The file DOES exist. If try to run `sudo supervisorctl` I get this `unix:///var/run/supervisor.sock no such file`. I've tried reinstall supervisor and that did not work either. Not sure what to do here. I'm running Laravel Homestead (Ubuntu 16.04). Result of `service supervisor status`: `vagrant@homestead:~/Code$ sudo service supervisor status ● supervisor.service - Supervisor process control system for UNIX Loaded: loaded (/lib/systemd/system/supervisor.service; enabled; vendor preset: enabled) Active: activating (auto-restart) (Result: exit-code) since Thu 2016-12-22 11:06:21 EST; 41s ago Docs: http://supervisord.org Process: 23154 ExecStop=/usr/bin/supervisorctl $OPTIONS shutdown (code=exited, status=0/SUCCESS) Process: 23149 ExecStart=/usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf (code=exited, status=2) Main PID: 23149 (code=exited, status=2)`
2016/12/22
[ "https://Stackoverflow.com/questions/41286526", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1965066/" ]
None of the above answers helped me. The problem was that I hadn't followed the [supervisor documentation](http://supervisord.org/installing.html), and the step I had missed was running the `echo_supervisord_conf` command that generates the configuration file. **Steps I took on Ubuntu 18.04** — installing supervisor (without pip): 1. `sudo apt-get install supervisor` 2. `echo_supervisord_conf > /etc/supervisord.conf` (with root access: first run `sudo -i`, then `echo_supervisord_conf > /etc/supervisord.conf`) 3. Change the Python dependency to python2 > > (`Depends: python-pkg-resources, init-system-helpers (>= 1.18~), python-meld3, python:any (<< 2.8), python:any (>= 2.7.5-5~)`) > > > in these files: `/usr/bin/supervisord` | `/usr/bin/supervisorctl` | `/usr/bin/echo_supervisord_conf`. Just change the first line from `#!/usr/bin/python` to `#!/usr/bin/python2` 4. Run `supervisord` 5. Done — hope this helps!
Check the *supervisord.conf* file. Look for the following: ``` [unix_http_server] file=/path/to/supervisor.sock/file ; (the path to the socket file) chmod=0700 ; socket file mode (default 0700) ``` Go to the path mentioned above and check if the file is present. If it is present, try re-installing supervisor. If not, search for the *supervisor.sock* file using either the command line or a file explorer GUI. Copy the file found in the above step to the location specified in [unix\_http\_server] using the cp command or the GUI. For me, *supervisor.sock* was present in the /run folder.
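The manual check described above can be scripted; a sketch (Python, helper names invented) that parses `[unix_http_server]` out of a supervisord-style config and reports whether the sock file actually exists on disk:

```python
import configparser
import os

def socket_path(conf_file):
    """Return the sock file path from [unix_http_server], stripping the
    inline ';' comments supervisord allows after a value."""
    cp = configparser.ConfigParser(interpolation=None,
                                   inline_comment_prefixes=(";",))
    cp.read(conf_file)
    return cp.get("unix_http_server", "file")

def socket_exists(conf_file):
    """True if the configured sock file is actually present on disk."""
    return os.path.exists(socket_path(conf_file))
```

Caveat: supervisord's real config format supports `%(here)s`-style expansion, which this sketch deliberately does not resolve (`interpolation=None`).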
41,286,526
I am trying to setup a queue listener for laravel and cannot seem to get supervisor working correctly. I get the following error when I run `supervisorctl reload`: `error: <class 'socket.error'>, [Errno 2] No such file or directory: file: /usr/lib/python2.7/socket.py line: 228` The file DOES exist. If try to run `sudo supervisorctl` I get this `unix:///var/run/supervisor.sock no such file`. I've tried reinstall supervisor and that did not work either. Not sure what to do here. I'm running Laravel Homestead (Ubuntu 16.04). Result of `service supervisor status`: `vagrant@homestead:~/Code$ sudo service supervisor status ● supervisor.service - Supervisor process control system for UNIX Loaded: loaded (/lib/systemd/system/supervisor.service; enabled; vendor preset: enabled) Active: activating (auto-restart) (Result: exit-code) since Thu 2016-12-22 11:06:21 EST; 41s ago Docs: http://supervisord.org Process: 23154 ExecStop=/usr/bin/supervisorctl $OPTIONS shutdown (code=exited, status=0/SUCCESS) Process: 23149 ExecStart=/usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf (code=exited, status=2) Main PID: 23149 (code=exited, status=2)`
2016/12/22
[ "https://Stackoverflow.com/questions/41286526", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1965066/" ]
You can try removing all of the related supervisor folders and uninstalling supervisor completely. ``` sudo rm -rf /var/log/supervisor/supervisord.log sudo rm -rf /etc/supervisor/conf.d/ ``` After doing this, reinstall supervisor with ``` sudo apt install supervisor ``` Now it should run correctly. Check with ``` sudo systemctl status supervisor ```
Check the *supervisord.conf* file. Look for the following: ``` [unix_http_server] file=/path/to/supervisor.sock/file ; (the path to the socket file) chmod=0700 ; socket file mode (default 0700) ``` Go to the path mentioned above and check if the file is present. If it is present, try re-installing supervisor. If not, search for the *supervisor.sock* file using either the command line or a file explorer GUI. Copy the file found in the above step to the location specified in [unix\_http\_server] using the cp command or the GUI. For me, *supervisor.sock* was present in the /run folder.
41,286,526
I am trying to setup a queue listener for laravel and cannot seem to get supervisor working correctly. I get the following error when I run `supervisorctl reload`: `error: <class 'socket.error'>, [Errno 2] No such file or directory: file: /usr/lib/python2.7/socket.py line: 228` The file DOES exist. If try to run `sudo supervisorctl` I get this `unix:///var/run/supervisor.sock no such file`. I've tried reinstall supervisor and that did not work either. Not sure what to do here. I'm running Laravel Homestead (Ubuntu 16.04). Result of `service supervisor status`: `vagrant@homestead:~/Code$ sudo service supervisor status ● supervisor.service - Supervisor process control system for UNIX Loaded: loaded (/lib/systemd/system/supervisor.service; enabled; vendor preset: enabled) Active: activating (auto-restart) (Result: exit-code) since Thu 2016-12-22 11:06:21 EST; 41s ago Docs: http://supervisord.org Process: 23154 ExecStop=/usr/bin/supervisorctl $OPTIONS shutdown (code=exited, status=0/SUCCESS) Process: 23149 ExecStart=/usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf (code=exited, status=2) Main PID: 23149 (code=exited, status=2)`
2016/12/22
[ "https://Stackoverflow.com/questions/41286526", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1965066/" ]
I had a very similar problem (Ubuntu 18.04) and searched similar threads to no avail, so I'm answering here with some more comprehensive answers. The lack of a sock file or a socket error is only an indicator that supervisor is not running. If a simple restart doesn't work, it's either 1. not installed, or 2. failing to start. In my case nothing was being logged to the supervisor.log file for me to know why it was failing until I ran the following command (-n to run in the foreground), only to find out that there was a leftover configuration file for a deleted project that I had missed. ``` /usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf ``` Once I deleted the bad/leftover file in the conf.d folder and started it back up with `sudo service supervisor start`, everything worked. Here are some comprehensive steps you can take. 1. Is supervisor installed? `dpkg -l | grep supervisor` If not, reinstall: `sudo apt install supervisor` 2. Are all instances of supervisor stopped? `systemctl stop supervisor` Lingering supervisor processes can be found with `ps aux | grep supervisor`, then `kill -9 PID`. 3. Is supervisor.conf in the right location, `/etc/supervisor/supervisor.conf`, with no syntax errors? Reinstalling from the package would correct this. 4. Move your specific files in conf.d/ temporarily out of the folder to try starting with no additional config files. If it starts right up with `sudo service supervisor start`, an error likely exists in one of your project .conf files. 5. Check status with `sudo service supervisor status`. 6. Move your .conf files one by one back into conf.d/ and restart with `sudo service supervisor restart`. Be sure to check with `sudo service supervisor status` in between. If it fails, you know which .conf file has an issue and can ask for specific help. 7. Check that everything is running with `supervisorctl status`, and if not, start with `supervisorctl start all`.
I was facing a `python file not found` error (`code=exited, status=2`) when I tried the steps from the official documentation, but got the same result. I tried many solutions for my Laravel application, and in the end my own configuration worked. Here is an example: ``` [program:dev-worker] process_name=%(program_name)s_%(process_num)02d command=php /var/www/html/example.com/artisan queue:work --sleep=3 --tries=3 autostart=true autorestart=true user=ubuntu numprocs=8 redirect_stderr=true stdout_logfile=/var/www/html/example.com/storage/logs/laravel.log stopwaitsecs=3600 ``` Ref: <https://laravel.com/docs/7.x/queues#supervisor-configuration>
41,286,526
I am trying to setup a queue listener for laravel and cannot seem to get supervisor working correctly. I get the following error when I run `supervisorctl reload`: `error: <class 'socket.error'>, [Errno 2] No such file or directory: file: /usr/lib/python2.7/socket.py line: 228` The file DOES exist. If try to run `sudo supervisorctl` I get this `unix:///var/run/supervisor.sock no such file`. I've tried reinstall supervisor and that did not work either. Not sure what to do here. I'm running Laravel Homestead (Ubuntu 16.04). Result of `service supervisor status`: `vagrant@homestead:~/Code$ sudo service supervisor status ● supervisor.service - Supervisor process control system for UNIX Loaded: loaded (/lib/systemd/system/supervisor.service; enabled; vendor preset: enabled) Active: activating (auto-restart) (Result: exit-code) since Thu 2016-12-22 11:06:21 EST; 41s ago Docs: http://supervisord.org Process: 23154 ExecStop=/usr/bin/supervisorctl $OPTIONS shutdown (code=exited, status=0/SUCCESS) Process: 23149 ExecStart=/usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf (code=exited, status=2) Main PID: 23149 (code=exited, status=2)`
2016/12/22
[ "https://Stackoverflow.com/questions/41286526", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1965066/" ]
I had a very similar problem (Ubuntu 18.04) and searched similar threads to no avail, so I'm answering here with some more comprehensive answers. The lack of a sock file or a socket error is only an indicator that supervisor is not running. If a simple restart doesn't work, it's either 1. not installed, or 2. failing to start. In my case nothing was being logged to the supervisor.log file for me to know why it was failing until I ran the following command (-n to run in the foreground), only to find out that there was a leftover configuration file for a deleted project that I had missed. ``` /usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf ``` Once I deleted the bad/leftover file in the conf.d folder and started it back up with `sudo service supervisor start`, everything worked. Here are some comprehensive steps you can take. 1. Is supervisor installed? `dpkg -l | grep supervisor` If not, reinstall: `sudo apt install supervisor` 2. Are all instances of supervisor stopped? `systemctl stop supervisor` Lingering supervisor processes can be found with `ps aux | grep supervisor`, then `kill -9 PID`. 3. Is supervisor.conf in the right location, `/etc/supervisor/supervisor.conf`, with no syntax errors? Reinstalling from the package would correct this. 4. Move your specific files in conf.d/ temporarily out of the folder to try starting with no additional config files. If it starts right up with `sudo service supervisor start`, an error likely exists in one of your project .conf files. 5. Check status with `sudo service supervisor status`. 6. Move your .conf files one by one back into conf.d/ and restart with `sudo service supervisor restart`. Be sure to check with `sudo service supervisor status` in between. If it fails, you know which .conf file has an issue and can ask for specific help. 7. Check that everything is running with `supervisorctl status`, and if not, start with `supervisorctl start all`.
If by running `sudo service supervisor status` you get the following: `ExecStart=/usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf (code=exited, status=2)` Try running `/usr/bin/supervisord`; it will give you a clear message telling you where the error is.
41,286,526
I am trying to setup a queue listener for laravel and cannot seem to get supervisor working correctly. I get the following error when I run `supervisorctl reload`: `error: <class 'socket.error'>, [Errno 2] No such file or directory: file: /usr/lib/python2.7/socket.py line: 228` The file DOES exist. If try to run `sudo supervisorctl` I get this `unix:///var/run/supervisor.sock no such file`. I've tried reinstall supervisor and that did not work either. Not sure what to do here. I'm running Laravel Homestead (Ubuntu 16.04). Result of `service supervisor status`: `vagrant@homestead:~/Code$ sudo service supervisor status ● supervisor.service - Supervisor process control system for UNIX Loaded: loaded (/lib/systemd/system/supervisor.service; enabled; vendor preset: enabled) Active: activating (auto-restart) (Result: exit-code) since Thu 2016-12-22 11:06:21 EST; 41s ago Docs: http://supervisord.org Process: 23154 ExecStop=/usr/bin/supervisorctl $OPTIONS shutdown (code=exited, status=0/SUCCESS) Process: 23149 ExecStart=/usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf (code=exited, status=2) Main PID: 23149 (code=exited, status=2)`
2016/12/22
[ "https://Stackoverflow.com/questions/41286526", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1965066/" ]
On Centos 7 I use the following... ``` supervisord -c /path/to/supervisord.conf ``` followed by... ``` supervisorctl -c /path/to/supervisord.conf ``` This gets rid of the ".sock file not found" error. Now you have to kill old processes using... ``` ps aux|grep gunicorn ``` Kill the offending processes using... ``` kill <pid> ``` Then again... ``` supervisorctl -c /path/to/supervisord.conf ``` Supervisor should now be running properly if your config is good.
I ran into this issue because we were using supervisorctl to manage gunicorn. The root of my problem had nothing to do with supervisor (it was handling other processes just fine) or the python sock.py file (file was there, permissions were correct), but rather the gunicorn config file `/etc/supervisor/conf.d/gunicorn.conf`. This configuration file was managed by a source-controlled template with environment variables and when we updated the template on the server, the template variables were never replaced with the actual data. So for example something in the gunicorn.conf file read `user={{ user }}` instead of `user=gunicorn`. When supervisor tried to parse this config when running `supervisorctl start gunicorn` it would crash with this socket error. Repairing the gunicorn.conf file resolved the supervisor issue.
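The failure mode described here — template variables never substituted into the rendered config — is easy to detect mechanically; a sketch (Python, helper name invented) that flags config lines still containing `{{ ... }}` placeholders:

```python
import re

# Matches unrendered template variables such as "{{ user }}".
PLACEHOLDER = re.compile(r"\{\{.*?\}\}")

def unrendered_lines(text):
    """Return 1-based line numbers that still contain a {{ ... }}
    placeholder, i.e. lines the template engine never filled in."""
    return [i for i, line in enumerate(text.splitlines(), 1)
            if PLACEHOLDER.search(line)]
```

Running such a check against the deployed files in conf.d/ after each template update would have caught the broken `user={{ user }}` line before supervisor tried to parse it.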
41,286,526
I am trying to setup a queue listener for laravel and cannot seem to get supervisor working correctly. I get the following error when I run `supervisorctl reload`: `error: <class 'socket.error'>, [Errno 2] No such file or directory: file: /usr/lib/python2.7/socket.py line: 228` The file DOES exist. If try to run `sudo supervisorctl` I get this `unix:///var/run/supervisor.sock no such file`. I've tried reinstall supervisor and that did not work either. Not sure what to do here. I'm running Laravel Homestead (Ubuntu 16.04). Result of `service supervisor status`: `vagrant@homestead:~/Code$ sudo service supervisor status ● supervisor.service - Supervisor process control system for UNIX Loaded: loaded (/lib/systemd/system/supervisor.service; enabled; vendor preset: enabled) Active: activating (auto-restart) (Result: exit-code) since Thu 2016-12-22 11:06:21 EST; 41s ago Docs: http://supervisord.org Process: 23154 ExecStop=/usr/bin/supervisorctl $OPTIONS shutdown (code=exited, status=0/SUCCESS) Process: 23149 ExecStart=/usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf (code=exited, status=2) Main PID: 23149 (code=exited, status=2)`
2016/12/22
[ "https://Stackoverflow.com/questions/41286526", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1965066/" ]
I had a very similar problem (Ubuntu 18.04) and searched similar threads to no avail, so I'm answering here with some more comprehensive answers. The lack of a sock file or a socket error is only an indicator that supervisor is not running. If a simple restart doesn't work, it's either 1. not installed, or 2. failing to start. In my case nothing was being logged to the supervisor.log file for me to know why it was failing until I ran the following command (-n to run in the foreground), only to find out that there was a leftover configuration file for a deleted project that I had missed. ``` /usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf ``` Once I deleted the bad/leftover file in the conf.d folder and started it back up with `sudo service supervisor start`, everything worked. Here are some comprehensive steps you can take. 1. Is supervisor installed? `dpkg -l | grep supervisor` If not, reinstall: `sudo apt install supervisor` 2. Are all instances of supervisor stopped? `systemctl stop supervisor` Lingering supervisor processes can be found with `ps aux | grep supervisor`, then `kill -9 PID`. 3. Is supervisor.conf in the right location, `/etc/supervisor/supervisor.conf`, with no syntax errors? Reinstalling from the package would correct this. 4. Move your specific files in conf.d/ temporarily out of the folder to try starting with no additional config files. If it starts right up with `sudo service supervisor start`, an error likely exists in one of your project .conf files. 5. Check status with `sudo service supervisor status`. 6. Move your .conf files one by one back into conf.d/ and restart with `sudo service supervisor restart`. Be sure to check with `sudo service supervisor status` in between. If it fails, you know which .conf file has an issue and can ask for specific help. 7. Check that everything is running with `supervisorctl status`, and if not, start with `supervisorctl start all`.
I ran into this issue because we were using supervisorctl to manage gunicorn. The root of my problem had nothing to do with supervisor (it was handling other processes just fine) or the python sock.py file (file was there, permissions were correct), but rather the gunicorn config file `/etc/supervisor/conf.d/gunicorn.conf`. This configuration file was managed by a source-controlled template with environment variables and when we updated the template on the server, the template variables were never replaced with the actual data. So for example something in the gunicorn.conf file read `user={{ user }}` instead of `user=gunicorn`. When supervisor tried to parse this config when running `supervisorctl start gunicorn` it would crash with this socket error. Repairing the gunicorn.conf file resolved the supervisor issue.
41,286,526
I am trying to setup a queue listener for laravel and cannot seem to get supervisor working correctly. I get the following error when I run `supervisorctl reload`: `error: <class 'socket.error'>, [Errno 2] No such file or directory: file: /usr/lib/python2.7/socket.py line: 228` The file DOES exist. If try to run `sudo supervisorctl` I get this `unix:///var/run/supervisor.sock no such file`. I've tried reinstall supervisor and that did not work either. Not sure what to do here. I'm running Laravel Homestead (Ubuntu 16.04). Result of `service supervisor status`: `vagrant@homestead:~/Code$ sudo service supervisor status ● supervisor.service - Supervisor process control system for UNIX Loaded: loaded (/lib/systemd/system/supervisor.service; enabled; vendor preset: enabled) Active: activating (auto-restart) (Result: exit-code) since Thu 2016-12-22 11:06:21 EST; 41s ago Docs: http://supervisord.org Process: 23154 ExecStop=/usr/bin/supervisorctl $OPTIONS shutdown (code=exited, status=0/SUCCESS) Process: 23149 ExecStart=/usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf (code=exited, status=2) Main PID: 23149 (code=exited, status=2)`
2016/12/22
[ "https://Stackoverflow.com/questions/41286526", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1965066/" ]
You should run `sudo service supervisor start` when you are in the supervisor dir. Worked for me.
> > Source of answer : <http://supervisord.org/installing.html> > > > 1. Run command : `echo_supervisord_conf` 2. Once you see the file echoed to your terminal, reinvoke the command as `echo_supervisord_conf > /etc/supervisord.conf`. This won’t work if you do not have root access. 3. If you don’t have root access, or you’d rather not put the `supervisord.conf` file in `/etc/supervisord.conf`, you can place it in the current directory (`echo_supervisord_conf > supervisord.conf`) and start supervisord with the `-c` flag in order to specify the configuration file location. The error should've been resolved by now.
41,286,526
I am trying to setup a queue listener for laravel and cannot seem to get supervisor working correctly. I get the following error when I run `supervisorctl reload`: `error: <class 'socket.error'>, [Errno 2] No such file or directory: file: /usr/lib/python2.7/socket.py line: 228` The file DOES exist. If try to run `sudo supervisorctl` I get this `unix:///var/run/supervisor.sock no such file`. I've tried reinstall supervisor and that did not work either. Not sure what to do here. I'm running Laravel Homestead (Ubuntu 16.04). Result of `service supervisor status`: `vagrant@homestead:~/Code$ sudo service supervisor status ● supervisor.service - Supervisor process control system for UNIX Loaded: loaded (/lib/systemd/system/supervisor.service; enabled; vendor preset: enabled) Active: activating (auto-restart) (Result: exit-code) since Thu 2016-12-22 11:06:21 EST; 41s ago Docs: http://supervisord.org Process: 23154 ExecStop=/usr/bin/supervisorctl $OPTIONS shutdown (code=exited, status=0/SUCCESS) Process: 23149 ExecStart=/usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf (code=exited, status=2) Main PID: 23149 (code=exited, status=2)`
2016/12/22
[ "https://Stackoverflow.com/questions/41286526", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1965066/" ]
I had a very similar problem (Ubuntu 18.04) and searched similar threads to no avail, so I'm answering here with some more comprehensive answers. The lack of a sock file or a socket error is only an indicator that supervisor is not running. If a simple restart doesn't work, it's either 1. not installed, or 2. failing to start. In my case nothing was being logged to the supervisor.log file for me to know why it was failing until I ran the following command (-n to run in the foreground), only to find out that there was a leftover configuration file for a deleted project that I had missed. ``` /usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf ``` Once I deleted the bad/leftover file in the conf.d folder and started it back up with `sudo service supervisor start`, everything worked. Here are some comprehensive steps you can take. 1. Is supervisor installed? `dpkg -l | grep supervisor` If not, reinstall: `sudo apt install supervisor` 2. Are all instances of supervisor stopped? `systemctl stop supervisor` Lingering supervisor processes can be found with `ps aux | grep supervisor`, then `kill -9 PID`. 3. Is supervisor.conf in the right location, `/etc/supervisor/supervisor.conf`, with no syntax errors? Reinstalling from the package would correct this. 4. Move your specific files in conf.d/ temporarily out of the folder to try starting with no additional config files. If it starts right up with `sudo service supervisor start`, an error likely exists in one of your project .conf files. 5. Check status with `sudo service supervisor status`. 6. Move your .conf files one by one back into conf.d/ and restart with `sudo service supervisor restart`. Be sure to check with `sudo service supervisor status` in between. If it fails, you know which .conf file has an issue and can ask for specific help. 7. Check that everything is running with `supervisorctl status`, and if not, start with `supervisorctl start all`.
I did the following to solve the issue on CentOS Linux 7: ``` sudo systemctl status supervisord.service ``` With the above command, I realised that the service was inactive. ``` sudo systemctl start supervisord.service ``` I now use the command above to start the service, and everything works well.
41,286,526
I am trying to setup a queue listener for laravel and cannot seem to get supervisor working correctly. I get the following error when I run `supervisorctl reload`: `error: <class 'socket.error'>, [Errno 2] No such file or directory: file: /usr/lib/python2.7/socket.py line: 228` The file DOES exist. If try to run `sudo supervisorctl` I get this `unix:///var/run/supervisor.sock no such file`. I've tried reinstall supervisor and that did not work either. Not sure what to do here. I'm running Laravel Homestead (Ubuntu 16.04). Result of `service supervisor status`: `vagrant@homestead:~/Code$ sudo service supervisor status ● supervisor.service - Supervisor process control system for UNIX Loaded: loaded (/lib/systemd/system/supervisor.service; enabled; vendor preset: enabled) Active: activating (auto-restart) (Result: exit-code) since Thu 2016-12-22 11:06:21 EST; 41s ago Docs: http://supervisord.org Process: 23154 ExecStop=/usr/bin/supervisorctl $OPTIONS shutdown (code=exited, status=0/SUCCESS) Process: 23149 ExecStart=/usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf (code=exited, status=2) Main PID: 23149 (code=exited, status=2)`
2016/12/22
[ "https://Stackoverflow.com/questions/41286526", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1965066/" ]
> > **2020 UPDATE** > > > Try running `sudo service supervisor start` in your terminal before using the below solution. I found out that the issue sometimes occurs when `supervisor` is not running, nothing complicated. > > > I am using `Ubuntu 18.04`. I had the same problem and re-installing supervisor did not solve my problem. I ended up completely removing the conf.d directory and recreating it with new configs. **Make sure you back up your configurations before trying this**: 1. `sudo rm -rf /etc/supervisor/conf.d/` 2. `sudo mkdir /etc/supervisor/conf.d` 3. `sudo nano /etc/supervisor/conf.d/my-file.conf` 4. Copy+Paste your configuration into your new file. `sudo supervisorctl reread` started working again.
I had a very similar problem (Ubuntu 18.04) and searched similar threads to no avail, so I'm answering here with some more comprehensive answers. The lack of a sock file or a socket error is only an indicator that supervisor is not running. If a simple restart doesn't work, it's either 1. not installed, or 2. failing to start. In my case nothing was being logged to the supervisor.log file for me to know why it was failing until I ran the following command (-n to run in the foreground), only to find out that there was a leftover configuration file for a deleted project that I had missed. ``` /usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf ``` Once I deleted the bad/leftover file in the conf.d folder and started it back up with `sudo service supervisor start`, everything worked. Here are some comprehensive steps you can take. 1. Is supervisor installed? `dpkg -l | grep supervisor` If not, reinstall: `sudo apt install supervisor` 2. Are all instances of supervisor stopped? `systemctl stop supervisor` Lingering supervisor processes can be found with `ps aux | grep supervisor`, then `kill -9 PID`. 3. Is supervisor.conf in the right location, `/etc/supervisor/supervisor.conf`, with no syntax errors? Reinstalling from the package would correct this. 4. Move your specific files in conf.d/ temporarily out of the folder to try starting with no additional config files. If it starts right up with `sudo service supervisor start`, an error likely exists in one of your project .conf files. 5. Check status with `sudo service supervisor status`. 6. Move your .conf files one by one back into conf.d/ and restart with `sudo service supervisor restart`. Be sure to check with `sudo service supervisor status` in between. If it fails, you know which .conf file has an issue and can ask for specific help. 7. Check that everything is running with `supervisorctl status`, and if not, start with `supervisorctl start all`.
18,995,555
I'm trying to check whether the short int's digits are contained in the long int. Instead this came out:

```
long int: 198381998
short int: 19
Found a match at 0
Found a match at 1
Found a match at 2
Found a match at 3
Found a match at 4
Found a match at 5
Found a match at 6
Found a match at 7
```

It's supposed to look like this (correct output):

```
long int: 198381998
short int: 19
Found a match at 0
Found a match at 5
```

Code:

```
import math

longInt = ( input ("long int: "))
floatLong = float (longInt)
shortInt = ( input ("short int: "))
floatShort = float (shortInt)
max_digit = int (math.log10(floatLong))  # Count the no. of digits of the long int
i = int(math.log10(floatShort))  # Count the no. of digits of the shortInt being input

for string in range (max_digit):
    if ( shortInt in longInt):  # Check whether there is any digit in shortInt
                                # that matches anything inside longInt
        print ( "Found a match at ", string)
```

Without using any built-in Python functions, list or string methods, etc.
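For reference, a sketch of a purely arithmetic approach to what the question is attempting (assuming `math.log10` is allowed, as the question's own code already uses it; the function name, variable names, and windowing structure are mine, not the asker's): slide a digit window across the long int with integer division and modulo instead of the membership test above.

```python
import math

def find_matches(long_int, short_int):
    """Report left-based positions where short_int's digit sequence
    occurs inside long_int, using only arithmetic (no strings or lists)."""
    n_long = int(math.log10(long_int)) + 1    # number of digits in long_int
    n_short = int(math.log10(short_int)) + 1  # number of digits in short_int
    positions = []
    for pos in range(n_long - n_short + 1):
        # Slice out the n_short-digit window starting at offset pos from the left.
        shift = 10 ** (n_long - n_short - pos)
        window = (long_int // shift) % (10 ** n_short)
        if window == short_int:
            positions.append(pos)
    return positions

print(find_matches(198381998, 19))  # [0, 5]
```

With the question's sample input (198381998 and 19) this reports matches at positions 0 and 5, the expected output.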
2013/09/25
[ "https://Stackoverflow.com/questions/18995555", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2811732/" ]
You're passing `allData` as an argument to the mapping, but it isn't defined anywhere. You want `data.users` instead (*not* `data` because then `ko.mapping.fromJSON` will return a single object with one key, `users` whose value will be an `observableArray`; you'll confuse Knockout if you try to use that object as the value of another `observableArray`, namely `self.users`).
Switching to this .ajax call seemed to resolve the issue.

```
// Load initial state from server, convert it to User instances, then populate self.users
$.ajax({
    url: '/sws/users/index',
    dataType: 'json',
    type: 'POST',
    success: function (data) {
        self.users(data['users']);
        console.log(data['users']);
    }
});
```
63,087,586
In the views.py file of my Django application I'm trying to load the 'transformers' library with the following command:

```
from transformers import pipeline
```

This works in my local environment, but on my Linux server at Linode, when I try to load my website, the page tries to load for 5 minutes and then I get a Timeout error. I don't understand what is going on; I know I have installed the library correctly. I have also run the same code in the python shell on my server and it loads fine; it's just that if I load it in my Django views.py file, no page of my website loads.

My server: Ubuntu 20.04 LTS, Nanode 1GB: 1 CPU, 25GB Storage, 1GB RAM

Library: transformers==3.0.2

I also have the same problem when I try to load tensorflow. All the other libraries are loading fine, like pytorch and pandas etc. I've been trying to solve this problem for more than a week; I've also changed hosts from GCP to Linode, but it's still the same.

**Edit:** I created a new server and installed everything from scratch and used a virtualenv this time, but it's still the same problem. Following are the installed libraries outputted from `pip freeze`:

```
asgiref==3.2.10
certifi==2020.6.20
chardet==3.0.4
click==7.1.2
Django==3.0.7
djangorestframework==3.11.0
filelock==3.0.12
future==0.18.2
idna==2.10
joblib==0.16.0
numpy==1.19.1
packaging==20.4
Pillow==7.2.0
pyparsing==2.4.7
pytz==2020.1
regex==2020.7.14
requests==2.24.0
sacremoses==0.0.43
sentencepiece==0.1.91
six==1.15.0
sqlparse==0.3.1
tokenizers==0.8.1rc1
torch==1.5.1+cpu
torchvision==0.6.1+cpu
tqdm==4.48.0
transformers==3.0.2
urllib3==1.25.10
```

I also know the transformers library is installed because if I try to import some library that doesn't exist then I simply get an error, like I should. But in this case it just loads forever and doesn't output any error. This is so bizarre.
2020/07/25
[ "https://Stackoverflow.com/questions/63087586", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4823067/" ]
Maybe you just have to create or update your *requirements.txt* file. Here is the command: `pip freeze > requirements.txt`
Based on this [answer](https://serverfault.com/a/514251):

> Some third party packages for Python which use C extension modules, and this includes scipy and numpy, will only work in the Python main interpreter and cannot be used in sub interpreters as mod\_wsgi by default uses.

The `transformers` library uses numpy, so you should force the WSGI application to run in the main interpreter of the process by changing the apache config:

```
## open apache config
$ nano /etc/apache2/sites-enabled/000-default.conf

## add this line to apache config
WSGIApplicationGroup %{GLOBAL}

## restart apache
$ systemctl restart apache2
```

Now it works!! For more information visit the link above.
23,728,065
I have been banging my head against the wall with this for long enough that I am okay to turn here at this point. I have a page with an iframe:

```
<iframe frameborder="0" allowtransparency="true" tabindex="0" src="" title="Rich text editor, listing_description" aria-describedby="cke_18" style="width:100%;height:100%">
```

When I get it by xpath using `'//*[@aria-describedby="cke_18"]'`, I get a web element where:

```
>>> elem
<selenium.webdriver.remote.webelement.WebElement object at 0x104327b50>
>>> elem.id
u'{3dfc8264-71bc-c948-882a-acd6a8b93ab5}'
>>> elem.is_displayed
<bound method WebElement.is_displayed of <selenium.webdriver.remote.webelement.WebElement object at 0x104327b50>>
```

Now, when I try to put information into this iframe, I get something along the lines of the following error:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Applications/Spyder.app/Contents/Resources/lib/python2.7/spyderlib/widgets/externalshell/sitecustomize.py", line 560, in debugfile
    debugger.run("runfile(%r, args=%r, wdir=%r)" % (filename, args, wdir))
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/bdb.py", line 400, in run
    exec cmd in globals, locals
  File "<string>", line 1, in <module>
  File "/Applications/Spyder.app/Contents/Resources/lib/python2.7/spyderlib/widgets/externalshell/sitecustomize.py", line 540, in runfile
    execfile(filename, namespace)
  File "/Users/jasonmellone/Documents/PythonProjects/nakedApts.py", line 88, in <module>
    a = elem.find_element_by_xpath(".//*")
  File "/Library/Python/2.7/site-packages/selenium-2.41.0-py2.7.egg/selenium/webdriver/remote/webelement.py", line 201, in find_element_by_xpath
    return self.find_element(by=By.XPATH, value=xpath)
  File "/Library/Python/2.7/site-packages/selenium-2.41.0-py2.7.egg/selenium/webdriver/remote/webelement.py", line 377, in find_element
    {"using": by, "value": value})['value']
  File "/Library/Python/2.7/site-packages/selenium-2.41.0-py2.7.egg/selenium/webdriver/remote/webelement.py", line 370, in _execute
    return self._parent.execute(command, params)
  File "/Library/Python/2.7/site-packages/selenium-2.41.0-py2.7.egg/selenium/webdriver/remote/webdriver.py", line 166, in execute
    self.error_handler.check_response(response)
  File "/Library/Python/2.7/site-packages/selenium-2.41.0-py2.7.egg/selenium/webdriver/remote/errorhandler.py", line 164, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: u'Unable to locate element: {"method":"xpath","selector":".//*"}' ; Stacktrace:
    at FirefoxDriver.prototype.findElementInternal_ (file:///var/folders/8x/0msd5dd13l9453ff9739rj7w0000gn/T/tmpmH4ARe/extensions/fxdriver@googlecode.com/components/driver_component.js:8905)
    at FirefoxDriver.prototype.findChildElement (file:///var/folders/8x/0msd5dd13l9453ff9739rj7w0000gn/T/tmpmH4ARe/extensions/fxdriver@googlecode.com/components/driver_component.js:8917)
    at DelayedCommand.prototype.executeInternal_/h (file:///var/folders/8x/0msd5dd13l9453ff9739rj7w0000gn/T/tmpmH4ARe/extensions/fxdriver@googlecode.com/components/command_processor.js:10884)
    at DelayedCommand.prototype.executeInternal_ (file:///var/folders/8x/0msd5dd13l9453ff9739rj7w0000gn/T/tmpmH4ARe/extensions/fxdriver@googlecode.com/components/command_processor.js:10889)
    at DelayedCommand.prototype.execute/< (file:///var/folders/8x/0msd5dd13l9453ff9739rj7w0000gn/T/tmpmH4ARe/extensions/fxdriver@googlecode.com/components/command_processor.js:10831)
```

Now, I, not being a selenium developer, have no idea what this means.
When I run the following code:

```
elem = Helper.getElementByxPath(mydriver,'//*[@aria-describedby="cke_18"]',"ABC");
mydriver.switch_to_frame(elem);
```

The above runs, where `Helper.getElementByxPath` is:

```
def getElementByxPath(mydriver,xPath,valueString):
    try:
        a = mydriver.find_element_by_xpath(xPath);
        a.send_keys(valueString);
        return a;
    except:
        print "Unexpected error:", sys.exc_info()[0];
        return 0;
```

Giving me the following:

```
>>> elem.id
u'{8be4819b-f828-534a-9eb2-5b791f42b99a}'
```

And the following statement:

```
a = elem.find_element_by_xpath(".//*")
```

Gives me another huge error. The frustrating part to me is the following:

1. I don't need to get information out of the embedded input in the iframe, I just want to send keys.
2. I am **HAPPY** to just "Keys.TAB" until I reach the proper box, and Cursor.location.element.send\_keys (pseudo code).
3. I just want to type text on the page as the CURSOR IS ALREADY IN THE RIGHT PLACE (can't I just do this easily?)

My goal is to just send keys here, not to do anything deeper, and I cannot seem to solve this problem without getting something like the above issue. Is there a way to solve this? I am quite defeated and hope someone has an answer. Thank you!
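The pattern being attempted here (find the iframe element, switch the driver's context into it, then send keys to the editable content inside) can be sketched as follows, using the Selenium 2.x-era API that appears in the traceback. The helper name is mine, and the assumption that the rich-text editor exposes a contenteditable `<body>` inside the frame is mine as well, so treat this as a sketch rather than a verified recipe:

```python
def type_into_cke_iframe(driver, text, frame_xpath='//*[@aria-describedby="cke_18"]'):
    """Switch into the rich-text editor's iframe, type text, and switch back out.

    After switch_to_frame, all element lookups resolve INSIDE the iframe,
    which is why find_element_by_xpath on the outer document kept failing.
    """
    frame = driver.find_element_by_xpath(frame_xpath)
    driver.switch_to_frame(frame)                    # enter the iframe's document
    body = driver.find_element_by_tag_name("body")   # editor's editable body (assumption)
    body.send_keys(text)
    driver.switch_to_default_content()               # return to the top-level document
```

The key point is that the element search must happen after the frame switch; searching `".//*"` from the outer document finds nothing because the iframe's children live in a separate document.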
2014/05/19
[ "https://Stackoverflow.com/questions/23728065", "https://Stackoverflow.com", "https://Stackoverflow.com/users/360826/" ]
You need Wget for Windows; you can download it from here: <http://gnuwin32.sourceforge.net/packages/wget.htm>

Open Notepad and paste your code, then save it as "myscript.bat" (make sure it doesn't get a .txt extension). Put your "myscript.bat" in the same folder as wget.exe.

Now try it; it should work.
For a newer firmware version, you need to add a referer and a user-agent. Try this; it works for me:

```
wget -qO- --user=admin --password=admin --referer http://192.168.0.1 --user-agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:21.0) Gecko/20100101 Firefox/21.0" http://192.168.0.1/userRpm/SysRebootRpm.htm?Reboot=Reboot
```
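If you'd rather do the same thing from Python, the standard library can build an equivalent request. This is only a sketch: the address, credentials, and path simply mirror the wget invocation above and are not verified against any particular firmware.

```python
import base64
import urllib.request

# Mirrors the wget call above: basic auth + Referer + a browser-like User-Agent.
url = "http://192.168.0.1/userRpm/SysRebootRpm.htm?Reboot=Reboot"
req = urllib.request.Request(url)
req.add_header("Referer", "http://192.168.0.1")
req.add_header("User-Agent", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:21.0) "
                             "Gecko/20100101 Firefox/21.0")
# Pre-build the Basic auth header instead of waiting for a 401 challenge,
# since some firmwares expect credentials on the first request.
token = base64.b64encode(b"admin:admin").decode("ascii")
req.add_header("Authorization", "Basic " + token)

# urllib.request.urlopen(req)  # uncomment to actually send the reboot request
```

Sending is left commented out so the snippet is safe to run anywhere; uncomment the last line on the machine that can reach the router.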
55,603,451
I am trying to make a program that analyzes stocks, and right now I wrote a simple Python script to plot moving averages. Extracting the CSV file from the native path works fine, but when I get it from the web, it doesn't work. It keeps displaying an error: 'list' object has no attribute 'Date'. It worked fine with the .CSV, but the web thing is messed up. If I run print(df), it displays the table really weirdly.

```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np

df = pd.read_html("https://finance.yahoo.com/quote/AAPL/history?period1=1428469200&period2=1554699600&interval=1d&filter=history&frequency=1d")
x = df.Date
y = df.Close
a = df['Close'].rolling(50, min_periods=50).mean()
b = df['Close'].rolling(200, min_periods=200).mean()
plt.plot(x, y)
plt.plot(a)
plt.plot(b)
plt.savefig("AAPL Stuff")
```

I ran it in a Jupyter Notebook. I expected the output to be an image of the chart, but I got the error:

```
AttributeError                            Traceback (most recent call last)
<ipython-input-18-d97fbde31cef> in <module>
      4
      5 df = pd.read_html("https://finance.yahoo.com/quote/AAPL/history?period1=1428469200&period2=1554699600&interval=1d&filter=history&frequency=1d")
----> 6 x = df.Date
      7 y = df.Close
      8
AttributeError: 'list' object has no attribute 'Date'
```
2019/04/10
[ "https://Stackoverflow.com/questions/55603451", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11337553/" ]
The data got placed in a (one-element) list. If you do this, after the `read_html` call, it should work: ``` df = df[0] ```
Did you mean to access the `Date` feature from the DataFrame object? If that is the case, then change `x = df.Date` to `x = df['Date']` and `y = df.Close` to `y = df['Close']`.

EDIT: Also, `df.plot(x='Date', y='Close', style='o')` works instead of `plt.plot`.
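To illustrate both fixes at once, here is a minimal sketch. A hand-built DataFrame stands in for the table `read_html` would parse (the real call needs network access and an HTML parser), and the dates are placeholders; the point is that `read_html` returns a *list* you must index first, after which bracket-style column access works.

```python
import pandas as pd

# pd.read_html returns a LIST of DataFrames, one per <table> found on the page.
# A hand-built frame stands in for the parser's output here; dates are made up.
tables = [pd.DataFrame({"Date": ["2019-04-09", "2019-04-10", "2019-04-11",
                                 "2019-04-12", "2019-04-15"],
                        "Close": [1.0, 2.0, 3.0, 4.0, 5.0]})]

df = tables[0]                       # take the first (here, only) table
x = df['Date']                       # bracket access works for any column name
ma = df['Close'].rolling(3, min_periods=3).mean()
print(ma.iloc[-1])                   # 4.0, the mean of the last three closes
```

The same `rolling(...).mean()` pattern scales directly to the 50- and 200-period moving averages in the question.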
3,949,727
For code:

```
#!/usr/bin/python

src = """
print '!!!'
import os
"""
obj = compile(src, '', 'exec')
eval(obj, {'__builtins__': False})
```

I get the output:

```
!!!
Traceback (most recent call last):
  File "./test.py", line 9, in <module>
    eval(obj, {'__builtins__': False})
  File "", line 3, in <module>
ImportError: __import__ not found
```

Both 'print' and 'import' are language constructs. Why does 'eval' restrict the use of 'import' but not of 'print'?

P.S. I'm using python 2.6

UPDATE: The question is not "Why does import not work?" but "Why does print work?" Are there some architectural restrictions or something else?
2010/10/16
[ "https://Stackoverflow.com/questions/3949727", "https://Stackoverflow.com", "https://Stackoverflow.com/users/23712/" ]
The `__import__` method is invoked by the `import` keyword: [python.org](http://docs.python.org/library/functions.html?highlight=import#__import__)

If you want to be able to import a module, you need to leave the `__import__` method in the builtins:

```
src = """
print '!!!'
import os
"""
obj = compile(src, '', 'exec')
eval(obj, {'__builtins__': {'__import__':__builtins__.__import__}})
```
In your `eval` the call to `import` is made successfully; however, `import` makes use of the `__import__` method in builtins, which you have made unavailable in your restricted globals. This is the reason why you are seeing

```
ImportError: __import__ not found
```

`print` doesn't depend on any builtins, so it works OK. You could pass just `__import__` from builtins with something like:

```
eval(obj, {'__builtins__' : {'__import__' :__builtins__.__import__}})
```
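The same mechanism can be demonstrated on Python 3, where `exec` is a function. (Note that only the `import` half of the experiment carries over: in Python 3, `print` is itself a builtin function, so it would *also* fail under an empty `__builtins__`, unlike the Python 2 `print` statement discussed in the question.)

```python
import builtins

# Without __import__ in the supplied builtins, `import` fails...
try:
    exec(compile("import os", "<src>", "exec"), {"__builtins__": {}})
except ImportError as exc:
    print(exc)  # __import__ not found

# ...but exposing just __import__ makes the same statement work again.
env = {"__builtins__": {"__import__": builtins.__import__}}
exec(compile("import math\nresult = math.sqrt(16)", "<src>", "exec"), env)
print(env["result"])  # 4.0
```

This mirrors the answer's point: the `import` statement compiles to bytecode that looks up `__import__` in the current builtins at run time, so whitelisting that single name is enough to re-enable it.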
3,949,727
For code:

```
#!/usr/bin/python

src = """
print '!!!'
import os
"""
obj = compile(src, '', 'exec')
eval(obj, {'__builtins__': False})
```

I get the output:

```
!!!
Traceback (most recent call last):
  File "./test.py", line 9, in <module>
    eval(obj, {'__builtins__': False})
  File "", line 3, in <module>
ImportError: __import__ not found
```

Both 'print' and 'import' are language constructs. Why does 'eval' restrict the use of 'import' but not of 'print'?

P.S. I'm using python 2.6

UPDATE: The question is not "Why does import not work?" but "Why does print work?" Are there some architectural restrictions or something else?
2010/10/16
[ "https://Stackoverflow.com/questions/3949727", "https://Stackoverflow.com", "https://Stackoverflow.com/users/23712/" ]
In your `eval` the call to `import` is made successfully; however, `import` makes use of the `__import__` method in builtins, which you have made unavailable in your restricted globals. This is the reason why you are seeing

```
ImportError: __import__ not found
```

`print` doesn't depend on any builtins, so it works OK. You could pass just `__import__` from builtins with something like:

```
eval(obj, {'__builtins__' : {'__import__' :__builtins__.__import__}})
```
`print` works because you specified `'exec'` in the `compile` function call.
3,949,727
For code:

```
#!/usr/bin/python

src = """
print '!!!'
import os
"""
obj = compile(src, '', 'exec')
eval(obj, {'__builtins__': False})
```

I get the output:

```
!!!
Traceback (most recent call last):
  File "./test.py", line 9, in <module>
    eval(obj, {'__builtins__': False})
  File "", line 3, in <module>
ImportError: __import__ not found
```

Both 'print' and 'import' are language constructs. Why does 'eval' restrict the use of 'import' but not of 'print'?

P.S. I'm using python 2.6

UPDATE: The question is not "Why does import not work?" but "Why does print work?" Are there some architectural restrictions or something else?
2010/10/16
[ "https://Stackoverflow.com/questions/3949727", "https://Stackoverflow.com", "https://Stackoverflow.com/users/23712/" ]
In your `eval` the call to `import` is made successfully; however, `import` makes use of the `__import__` method in builtins, which you have made unavailable in your restricted globals. This is the reason why you are seeing

```
ImportError: __import__ not found
```

`print` doesn't depend on any builtins, so it works OK. You could pass just `__import__` from builtins with something like:

```
eval(obj, {'__builtins__' : {'__import__' :__builtins__.__import__}})
```
`import` calls the global/builtin `__import__` function; if there isn't one to be found, `import` fails. `print` does not rely on any globals to do its work. That is why `print` works in your example, even though you do not use the available `__builtins__`.
3,949,727
For code:

```
#!/usr/bin/python

src = """
print '!!!'
import os
"""
obj = compile(src, '', 'exec')
eval(obj, {'__builtins__': False})
```

I get the output:

```
!!!
Traceback (most recent call last):
  File "./test.py", line 9, in <module>
    eval(obj, {'__builtins__': False})
  File "", line 3, in <module>
ImportError: __import__ not found
```

Both 'print' and 'import' are language constructs. Why does 'eval' restrict the use of 'import' but not of 'print'?

P.S. I'm using python 2.6

UPDATE: The question is not "Why does import not work?" but "Why does print work?" Are there some architectural restrictions or something else?
2010/10/16
[ "https://Stackoverflow.com/questions/3949727", "https://Stackoverflow.com", "https://Stackoverflow.com/users/23712/" ]
The `__import__` method is invoked by the `import` keyword: [python.org](http://docs.python.org/library/functions.html?highlight=import#__import__)

If you want to be able to import a module, you need to leave the `__import__` method in the builtins:

```
src = """
print '!!!'
import os
"""
obj = compile(src, '', 'exec')
eval(obj, {'__builtins__': {'__import__':__builtins__.__import__}})
```
`print` works because you specified `'exec'` in the `compile` function call.
3,949,727
For code:

```
#!/usr/bin/python

src = """
print '!!!'
import os
"""
obj = compile(src, '', 'exec')
eval(obj, {'__builtins__': False})
```

I get the output:

```
!!!
Traceback (most recent call last):
  File "./test.py", line 9, in <module>
    eval(obj, {'__builtins__': False})
  File "", line 3, in <module>
ImportError: __import__ not found
```

Both 'print' and 'import' are language constructs. Why does 'eval' restrict the use of 'import' but not of 'print'?

P.S. I'm using python 2.6

UPDATE: The question is not "Why does import not work?" but "Why does print work?" Are there some architectural restrictions or something else?
2010/10/16
[ "https://Stackoverflow.com/questions/3949727", "https://Stackoverflow.com", "https://Stackoverflow.com/users/23712/" ]
The `__import__` method is invoked by the `import` keyword: [python.org](http://docs.python.org/library/functions.html?highlight=import#__import__)

If you want to be able to import a module, you need to leave the `__import__` method in the builtins:

```
src = """
print '!!!'
import os
"""
obj = compile(src, '', 'exec')
eval(obj, {'__builtins__': {'__import__':__builtins__.__import__}})
```
`import` calls the global/builtin `__import__` function; if there isn't one to be found, `import` fails. `print` does not rely on any globals to do its work. That is why `print` works in your example, even though you do not use the available `__builtins__`.
26,797,378
I'm developing an application built on the Python Bottle framework. It's amazing and I want to host it somewhere, but of course I need a service where I can run the server. What are the best options available (cost-wise too)? Thanks!
2014/11/07
[ "https://Stackoverflow.com/questions/26797378", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2344954/" ]
I work for [PythonAnywhere](http://www.pythonanywhere.com/), a Python hosting company, so obviously I think we have a great Bottle hosting solution -- check it out :-) Some of our competitors -- ones where we feel proud if a customer leaves them to move to us, or where we don't feel too depressed if they leave us to go to them, are Webfaction, Heroku, and (if you don't mind doing OS maintenance yourself and setting up your own front-end webserver) Digital Ocean and Amazon Web Services. There are definitely other good hosts, but those are the ones that spring to mind.
The easiest way is to use ngrok, an open-source tool that gives you a temporary link through which you can access your localhost server from any machine. You can download it from here: <https://ngrok.com/>
12,667,057
I want my Python program to be deterministic, so I have been using `OrderedDicts` extensively throughout the code. Unfortunately, while debugging memory leaks today, I discovered that OrderedDicts have a custom `__del__` method, making them uncollectable whenever there's a cycle. It's rather unfortunate that there's no warning in the documentation about this. So what can I do? Is there any deterministic dictionary in the Python standard library that plays nicely with gc? I'd really hate to have to roll my own, especially over a stupid one line function like this. Also, is this something I should file a bug report for? I'm not familiar with the Python library's procedures, and what they consider a bug. Edit: [It appears that this is a known bug that was fixed back in 2010. I must have somehow gotten a really old version of 2.7 installed.](http://bugs.python.org/issue9825) I guess the best approach is to just include a monkey patch in case the user happens to be running a broken version like me.
2012/10/01
[ "https://Stackoverflow.com/questions/12667057", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1420279/" ]
If the presence of the `__del__` method is problematic for you, just remove it:

```
>>> import collections
>>> del collections.OrderedDict.__del__
```

You will gain the ability to use OrderedDicts in a reference cycle. You will lose having the OrderedDict free all its resources immediately upon deletion.
It sounds like you've tracked down a bug in `OrderedDict` that was fixed at some point after your version of 2.7. If it wasn't in any actual released versions, maybe you can just ignore it. But otherwise, yeah, you need a workaround. I would suggest that, instead of monkeypatching `collections.OrderedDict`, you should instead use the [Equivalent OrderedDict recipe that runs on Python 2.4 or later](http://code.activestate.com/recipes/576693/) linked in [the documentation](http://docs.python.org/library/collections.html#collections.OrderedDict) for `collections.OrderedDict` (which does not have the excess `__del__`). If nothing else, when someone comes along and says "I need to run this on 2.6, how much work is it to port" the answer will be "a little less"… But two more points: > > rewriting everything to avoid cycles is a huge amount of effort. > > > The fact that you've got cycles in your dictionaries is a red flag that you're doing something wrong (typically using strong refs for a cache or for back-pointers), which is likely to lead to other memory problems, and possibly other bugs. So that effort may turn out to be necessary anyway. You still haven't explained what you're trying to accomplish; I suspect the "deterministic" thing is just a red herring (especially since `dict`s actually are deterministic), so the best solution is `s/OrderedDict/dict/g`. But if determinism is necessary, you can't depend on the cycle collector, because it's not deterministic, and that means your finalizer ordering and so on all become non-deterministic. It also means your memory usage is non-deterministic—you may end up with a program that stays within your desired memory bounds 99.999% of the time, but not 100%; if those bounds are critically important, that can be worse than failing every time. 
Meanwhile, the iteration order of dictionaries isn't specified, but in practice, CPython and PyPy iterate in the order of the hash buckets, not the id (memory location) of either the value or the key, and whatever Jython and IronPython do (they may be using some underlying Java or .NET collection that has different behavior; I haven't tested), it's unlikely that the memory order of the keys would be relevant. (How could you efficiently iterate a hash table based on something like that?) You may have confused yourself by testing with objects that use `id` for `hash`, but most objects hash based on value. For example, take this simple program:

```
d={}
d[0] = 0
d[1] = 1
d[2] = 2
for k in d:
    print(k, d[k], id(k), id(d[k]), hash(k))
```

If you run it repeatedly with CPython 2.7, CPython 3.2, and PyPy 1.9, the keys will always be iterated in order 0, 1, 2. The `id` columns may *also* be the same each time (that depends on your platform), but you can fix that in a number of ways: insert in a different order, reverse the order of the values, use string values instead of ints, assign the values to variables and then insert those variables instead of the literals, etc. Play with it enough and you can get every possible order for the `id` columns, and yet the keys are still iterated in the same order every time.

The order of iteration is not *predictable*, because to predict it you need the function for converting `hash(k)` into a bucket index, which depends on information you don't have access to from Python. Even if it's just `hash(k) % self._table_size`, unless that `_table_size` is exposed to the Python interface, it's not helpful. (It's a complex function of the sequence of inserts and deletes that could in principle be calculated, but in practice it's silly to try.) But it is *deterministic*; if you insert and delete the same keys in the same order every time, the iteration order will be the same every time.
12,667,057
I want my Python program to be deterministic, so I have been using `OrderedDicts` extensively throughout the code. Unfortunately, while debugging memory leaks today, I discovered that OrderedDicts have a custom `__del__` method, making them uncollectable whenever there's a cycle. It's rather unfortunate that there's no warning in the documentation about this. So what can I do? Is there any deterministic dictionary in the Python standard library that plays nicely with gc? I'd really hate to have to roll my own, especially over a stupid one line function like this. Also, is this something I should file a bug report for? I'm not familiar with the Python library's procedures, and what they consider a bug. Edit: [It appears that this is a known bug that was fixed back in 2010. I must have somehow gotten a really old version of 2.7 installed.](http://bugs.python.org/issue9825) I guess the best approach is to just include a monkey patch in case the user happens to be running a broken version like me.
2012/10/01
[ "https://Stackoverflow.com/questions/12667057", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1420279/" ]
It sounds like you've tracked down a bug in `OrderedDict` that was fixed at some point after your version of 2.7. If it wasn't in any actual released versions, maybe you can just ignore it. But otherwise, yeah, you need a workaround. I would suggest that, instead of monkeypatching `collections.OrderedDict`, you should instead use the [Equivalent OrderedDict recipe that runs on Python 2.4 or later](http://code.activestate.com/recipes/576693/) linked in [the documentation](http://docs.python.org/library/collections.html#collections.OrderedDict) for `collections.OrderedDict` (which does not have the excess `__del__`). If nothing else, when someone comes along and says "I need to run this on 2.6, how much work is it to port" the answer will be "a little less"… But two more points: > > rewriting everything to avoid cycles is a huge amount of effort. > > > The fact that you've got cycles in your dictionaries is a red flag that you're doing something wrong (typically using strong refs for a cache or for back-pointers), which is likely to lead to other memory problems, and possibly other bugs. So that effort may turn out to be necessary anyway. You still haven't explained what you're trying to accomplish; I suspect the "deterministic" thing is just a red herring (especially since `dict`s actually are deterministic), so the best solution is `s/OrderedDict/dict/g`. But if determinism is necessary, you can't depend on the cycle collector, because it's not deterministic, and that means your finalizer ordering and so on all become non-deterministic. It also means your memory usage is non-deterministic—you may end up with a program that stays within your desired memory bounds 99.999% of the time, but not 100%; if those bounds are critically important, that can be worse than failing every time. 
Meanwhile, the iteration order of dictionaries isn't specified, but in practice, CPython and PyPy iterate in the order of the hash buckets, not the id (memory location) of either the value or the key, and whatever Jython and IronPython do (they may be using some underlying Java or .NET collection that has different behavior; I haven't tested), it's unlikely that the memory order of the keys would be relevant. (How could you efficiently iterate a hash table based on something like that?) You may have confused yourself by testing with objects that use `id` for `hash`, but most objects hash based on value. For example, take this simple program:

```
d={}
d[0] = 0
d[1] = 1
d[2] = 2
for k in d:
    print(k, d[k], id(k), id(d[k]), hash(k))
```

If you run it repeatedly with CPython 2.7, CPython 3.2, and PyPy 1.9, the keys will always be iterated in order 0, 1, 2. The `id` columns may *also* be the same each time (that depends on your platform), but you can fix that in a number of ways: insert in a different order, reverse the order of the values, use string values instead of ints, assign the values to variables and then insert those variables instead of the literals, etc. Play with it enough and you can get every possible order for the `id` columns, and yet the keys are still iterated in the same order every time.

The order of iteration is not *predictable*, because to predict it you need the function for converting `hash(k)` into a bucket index, which depends on information you don't have access to from Python. Even if it's just `hash(k) % self._table_size`, unless that `_table_size` is exposed to the Python interface, it's not helpful. (It's a complex function of the sequence of inserts and deletes that could in principle be calculated, but in practice it's silly to try.) But it is *deterministic*; if you insert and delete the same keys in the same order every time, the iteration order will be the same every time.
Note that [the fix made in Python 2.7](https://github.com/python/cpython/commit/2039753a9ab9d41375ba17877e231e8d53e17749#diff-52502c75edd9dd62aa7a817dbab542d2) to eliminate the `__del__` method and so stop them from being uncollectable does unfortunately mean that every use of an `OrderedDict` (even an empty one) results in a reference cycle which must be garbage collected. See [this answer](https://stackoverflow.com/a/46935255/445073) for more details.
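The mechanism behind that note can be illustrated directly with `gc`. A generic stand-in class is used here rather than `OrderedDict` itself, since the C implementation in current CPython behaves differently from the old pure-Python one; the point is only that an object participating in a cycle cannot be freed by reference counting alone, so every such instance becomes work for the cycle collector.

```python
import gc

class Node:
    def __init__(self):
        # Deliberate self-reference, analogous to the sentinel root node
        # inside the pure-Python OrderedDict's doubly-linked list.
        self.self_ref = self

gc.collect()          # clear out any pre-existing garbage first
n = Node()
del n                 # refcounting alone cannot free it: the cycle keeps it alive
freed = gc.collect()  # the cycle collector has to step in and reclaim it
print(freed)          # at least 1 unreachable object found
```

`gc.collect()` returns the number of unreachable objects found, so a nonzero result after `del n` shows the cycle collector, not reference counting, did the reclamation.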
12,667,057
I want my Python program to be deterministic, so I have been using `OrderedDicts` extensively throughout the code. Unfortunately, while debugging memory leaks today, I discovered that OrderedDicts have a custom `__del__` method, making them uncollectable whenever there's a cycle. It's rather unfortunate that there's no warning in the documentation about this. So what can I do? Is there any deterministic dictionary in the Python standard library that plays nicely with gc? I'd really hate to have to roll my own, especially over a stupid one line function like this. Also, is this something I should file a bug report for? I'm not familiar with the Python library's procedures, and what they consider a bug. Edit: [It appears that this is a known bug that was fixed back in 2010. I must have somehow gotten a really old version of 2.7 installed.](http://bugs.python.org/issue9825) I guess the best approach is to just include a monkey patch in case the user happens to be running a broken version like me.
2012/10/01
[ "https://Stackoverflow.com/questions/12667057", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1420279/" ]
If the presence of the `__del__` method is problematic for you, just remove it:

```
>>> import collections
>>> del collections.OrderedDict.__del__
```

You will gain the ability to use OrderedDicts in a reference cycle. You will lose having the OrderedDict free all its resources immediately upon deletion.
Note that [the fix made in Python 2.7](https://github.com/python/cpython/commit/2039753a9ab9d41375ba17877e231e8d53e17749#diff-52502c75edd9dd62aa7a817dbab542d2) to eliminate the `__del__` method and so stop them from being uncollectable does unfortunately mean that every use of an `OrderedDict` (even an empty one) results in a reference cycle which must be garbage collected. See [this answer](https://stackoverflow.com/a/46935255/445073) for more details.
63,336,512
I have a python flask application which uses tabula internally to extract tables from pdf files.After I do 'cf push' and run the application on PCF,i load the pdf file to the application to read the table. When the app tries to extract the tabular data,I get the below error. ``` 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] [2020-08-10 08:08:40,134] ERROR in app: Exception on / [POST] 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] Traceback (most recent call last): 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] File "/home/vcap/deps/0/python/lib/python3.8/site-packages/tabula/io.py", line 80, in _run 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] result = subprocess.run( 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] File "/home/vcap/deps/0/python/lib/python3.8/subprocess.py", line 489, in run 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] with Popen(*popenargs, **kwargs) as process: 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] File "/home/vcap/deps/0/python/lib/python3.8/subprocess.py", line 854, in __init__ 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] self._execute_child(args, executable, preexec_fn, close_fds, 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] File "/home/vcap/deps/0/python/lib/python3.8/subprocess.py", line 1702, in _execute_child 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] raise child_exception_type(errno_num, err_msg, err_filename) 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] FileNotFoundError: [Errno 2] No such file or directory: 'java' 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] During handling of the above exception, another exception occurred: 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] Traceback (most recent call last): 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] File "/home/vcap/deps/0/python/lib/python3.8/site-packages/flask/app.py", line 2446, in wsgi_app 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] response = 
self.full_dispatch_request() 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] File "/home/vcap/deps/0/python/lib/python3.8/site-packages/flask/app.py", line 1951, in full_dispatch_request 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] rv = self.handle_user_exception(e) 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] File "/home/vcap/deps/0/python/lib/python3.8/site-packages/flask/app.py", line 1820, in handle_user_exception 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] reraise(exc_type, exc_value, tb) 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] File "/home/vcap/deps/0/python/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] raise value 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] File "/home/vcap/deps/0/python/lib/python3.8/site-packages/flask/app.py", line 1949, in full_dispatch_request 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] rv = self.dispatch_request() 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] File "/home/vcap/deps/0/python/lib/python3.8/site-packages/flask/app.py", line 1935, in dispatch_request 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] return self.view_functions[rule.endpoint](**req.view_args) 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] File "app.py", line 55, in index 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] wireListDF = pdfExtractorOBJ.getWireListDataFrame() 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] File "/home/vcap/app/WireHarnessPDFExtractor.py", line 158, in getWireListDataFrame 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] self.readBTPPDF() 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] File "/home/vcap/app/WireHarnessPDFExtractor.py", line 31, in readBTPPDF 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] df = tabula.read_pdf(self.pdf_path, pages='all', stream=True ,guess=True, encoding="utf-8", 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] File 
"/home/vcap/deps/0/python/lib/python3.8/site-packages/tabula/io.py", line 322, in read_pdf 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] output = _run(java_options, kwargs, path, encoding) 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] File "/home/vcap/deps/0/python/lib/python3.8/site-packages/tabula/io.py", line 91, in _run 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] raise JavaNotFoundError(JAVA_NOT_FOUND_ERROR) 2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] tabula.errors.JavaNotFoundError: `java` command is not found from this Python process.Please ensure Java is installed and PATH is set for `java` 2020-08-10T13:38:40.136+05:30 [APP/PROC/WEB/0] [ERR] 10.255.10.112 - - [10/Aug/2020 08:08:40] "[35m[1mPOST / HTTP/1.1[0m" 500 - ``` I know tabula has java dependencies, any suggestions on how to have the setup for the python flask application with tabula so that it can be used on PCF platform.
2020/08/10
[ "https://Stackoverflow.com/questions/63336512", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12403005/" ]
This is a Java path error: your Python runtime cannot find the `java` executable at all. You need to ensure the Java `bin` directory is on your `PATH`. If you are running this process on Linux, you can add `export PATH=<your java bin dir>:$PATH` to your shell profile.
The highlights: * You need multiple buildpacks, one for Java and one for Python * You want to use apt-buildpack, not the Java buildpack though * You need to set PATH to point to the location where the apt-buildpack installs Java (or have your app look for Java in this specific place) * You can set PATH in a `.profile` file. All of this is explained in [my answer to this similar question](https://stackoverflow.com/a/62927340/1585136).
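To make the `.profile` step concrete, here is a hedged sketch; the JVM directory below is an assumption (the real path depends on where the apt-buildpack unpacked the JRE in your droplet — verify it, e.g. with `cf ssh`):

```shell
# Hypothetical .profile for a CF-deployed Python app that shells out to Java
# (tabula-py does this). JAVA_HOME here is only a plausible example --
# adjust it to the actual install location inside your container.
JAVA_HOME="/home/vcap/deps/1/apt/usr/lib/jvm/default-java"
export JAVA_HOME
export PATH="$JAVA_HOME/bin:$PATH"
```

Cloud Foundry sources `.profile` before starting the app, so `subprocess` calls made by tabula then find `java` on `PATH`.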
27,718,277
Well I have an assignment to implement DES and I chose python, only problem is I can't figure out how to XOR bits of a String or Byte String, I can manually XOR them if only I can manage to read the 1s and 0s in them. Example: ``` s1 = b'abc' s2 = b'efg' s3 = XOR(s1,s2) // my own method ``` How can I XOR them or how can I get the binary values of 1s and 0s that represent them? If you use any python methods explain them, I'm relatively new to this language.
2014/12/31
[ "https://Stackoverflow.com/questions/27718277", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3417451/" ]
First you need to `zip` your strings, then use `ord` (in `python 2`) and `^` for each pair of characters: ``` >>> s1 = b'abc' >>> s2 = b'efg' >>> ''.join(chr(ord(i)^ord(j)) for i,j in zip(s1,s2)) '\x04\x04\x04' ``` the [`ord()`](https://docs.python.org/2/library/functions.html#ord) function returns the value of the byte when the argument is an 8-bit string. But if you are using `python 3` you don't need `ord`: ``` >>> ''.join(chr(i^j) for i,j in zip(s1,s2)) '\x04\x04\x04' ``` > > Since bytes objects are sequences of integers (akin to a tuple), for a bytes object b, `b[0]` will be an integer, while `b[0:1]` will be a bytes object of length 1. (This contrasts with text strings, where both indexing and slicing will produce a string of length 1) > > > ``` example : >>> s1[0] 97 >>> s1[0:1] b'a' ``` --- and if you want to convert your strings back, you first need to convert the `XOR`ed string back to binary, which you can do with the `binascii.a2b_qp` function: ``` >>> import binascii >>> s=''.join(chr(i^j) for i,j in zip(s1,s2)) >>> s4=binascii.a2b_qp(s) >>> ''.join(chr(i^j) for i,j in zip(s1,s4)) 'efg' ```
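In Python 3 the same idea works on `bytes` directly, since iterating a `bytes` object yields integers; a minimal helper along these lines (the name `xor_bytes` is just illustrative) avoids the `chr`/`ord` round trip entirely:

```python
def xor_bytes(a, b):
    # Iterating bytes in Python 3 yields ints, so ^ applies directly;
    # zip stops at the shorter input.
    return bytes(x ^ y for x, y in zip(a, b))

masked = xor_bytes(b'abc', b'efg')      # b'\x04\x04\x04'
# XOR is its own inverse: applying the same key again restores the input.
restored = xor_bytes(masked, b'efg')    # b'abc'
```

Because XOR is involutive, the same helper both applies and removes the mask, which is exactly the property DES-style round functions rely on.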
``` >>> b''.join(chr(ord(a) ^ ord(b)) for a, b in zip(b'abc', b'efg')) '\x04\x04\x04' ``` (Python 2 — in Python 3, iterating bytes yields ints and `b''.join` needs bytes, so use `bytes(a ^ b for a, b in zip(b'abc', b'efg'))` instead.)
27,718,277
Well I have an assignment to implement DES and I chose python, only problem is I can't figure out how to XOR bits of a String or Byte String, I can manually XOR them if only I can manage to read the 1s and 0s in them. Example: ``` s1 = b'abc' s2 = b'efg' s3 = XOR(s1,s2) // my own method ``` How can I XOR them or how can I get the binary values of 1s and 0s that represent them? If you use any python methods explain them, I'm relatively new to this language.
2014/12/31
[ "https://Stackoverflow.com/questions/27718277", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3417451/" ]
``` >>> b''.join(chr(ord(a) ^ ord(b)) for a, b in zip(b'abc', b'efg')) '\x04\x04\x04' ``` (Python 2 — in Python 3, iterating bytes yields ints and `b''.join` needs bytes, so use `bytes(a ^ b for a, b in zip(b'abc', b'efg'))` instead.)
Not really efficient, but this should work in Python 2, where iterating a byte string yields one-character strings (in Python 3 it yields ints, so use `bytes(c1 ^ c2 for c1, c2 in zip(s1, s2))` instead). ``` s1 = b'abc' s2 = b'efg' s3 = b'' for c1,c2 in zip(s1, s2): s3 += chr( ord(c1) ^ ord(c2) ) >>> s3 '\x04\x04\x04' ```
27,718,277
Well I have an assignment to implement DES and I chose python, only problem is I can't figure out how to XOR bits of a String or Byte String, I can manually XOR them if only I can manage to read the 1s and 0s in them. Example: ``` s1 = b'abc' s2 = b'efg' s3 = XOR(s1,s2) // my own method ``` How can I XOR them or how can I get the binary values of 1s and 0s that represent them? If you use any python methods explain them, I'm relatively new to this language.
2014/12/31
[ "https://Stackoverflow.com/questions/27718277", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3417451/" ]
First you need to `zip` your strings, then use `ord` (in `python 2`) and `^` for each pair of characters: ``` >>> s1 = b'abc' >>> s2 = b'efg' >>> ''.join(chr(ord(i)^ord(j)) for i,j in zip(s1,s2)) '\x04\x04\x04' ``` the [`ord()`](https://docs.python.org/2/library/functions.html#ord) function returns the value of the byte when the argument is an 8-bit string. But if you are using `python 3` you don't need `ord`: ``` >>> ''.join(chr(i^j) for i,j in zip(s1,s2)) '\x04\x04\x04' ``` > > Since bytes objects are sequences of integers (akin to a tuple), for a bytes object b, `b[0]` will be an integer, while `b[0:1]` will be a bytes object of length 1. (This contrasts with text strings, where both indexing and slicing will produce a string of length 1) > > > ``` example : >>> s1[0] 97 >>> s1[0:1] b'a' ``` --- and if you want to convert your strings back, you first need to convert the `XOR`ed string back to binary, which you can do with the `binascii.a2b_qp` function: ``` >>> import binascii >>> s=''.join(chr(i^j) for i,j in zip(s1,s2)) >>> s4=binascii.a2b_qp(s) >>> ''.join(chr(i^j) for i,j in zip(s1,s4)) 'efg' ```
Not really efficient, but this should work in Python 2, where iterating a byte string yields one-character strings (in Python 3 it yields ints, so use `bytes(c1 ^ c2 for c1, c2 in zip(s1, s2))` instead). ``` s1 = b'abc' s2 = b'efg' s3 = b'' for c1,c2 in zip(s1, s2): s3 += chr( ord(c1) ^ ord(c2) ) >>> s3 '\x04\x04\x04' ```
13,768,118
I'm building a python app using the UPS Shipping API. On sending the request (see below) I keep getting the following error: ``` UPS Error 9370701: Invalid processing option. ``` I'm not sure what this means and there isn't much more info in the API documentation. Could someone help me figure out what's going wrong here or give some more information about the cause of this error. ``` <?xml version="1.0" encoding="UTF-8"?> <SOAP-ENV:Envelope xmlns:ns0="http://www.ups.com/XMLSchema/XOLTWS/Common/v1.0" xmlns:ns1="http://schemas.xmlsoap.org/soap/envelope/" xmlns:security="http://www.ups.com/XMLSchema/XOLTWS/UPSS/v1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns2="http://www.ups.com/XMLSchema/XOLTWS/FreightShip/v1.0" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"> <SOAP-ENV:Header> <security:UPSSecurity> <security:UsernameToken> <security:Username>winkerVSbecks</security:Username> <security:Password>myPassword</security:Password> </security:UsernameToken> <security:ServiceAccessToken> <security:AccessLicenseNumber>myLicenseNumber</security:AccessLicenseNumber> </security:ServiceAccessToken> </security:UPSSecurity> </SOAP-ENV:Header> <ns1:Body> <ns2:FreightShipRequest> <ns2:Request> <ns0:RequestOption>1</ns0:RequestOption> <ns0:RequestOption>Shipping</ns0:RequestOption> </ns2:Request> <ns2:Shipment> <ns2:ShipFrom> <ns2:Name>Adobe</ns2:Name> <ns2:Address> <ns2:AddressLine>560 Front St. W</ns2:AddressLine> <ns2:AddressLine></ns2:AddressLine> <ns2:City>Toronto</ns2:City> <ns2:StateProvinceCode>ON</ns2:StateProvinceCode> <ns2:PostalCode>M5V1C1</ns2:PostalCode> <ns2:CountryCode>CA</ns2:CountryCode> </ns2:Address> <ns2:Phone> <ns2:Number>6478340000</ns2:Number> </ns2:Phone> </ns2:ShipFrom> <ns2:ShipperNumber>535T8T</ns2:ShipperNumber> <ns2:ShipTo> <ns2:Name>Apple</ns2:Name> <ns2:Address> <ns2:AddressLine>313 Richmond St. 
E</ns2:AddressLine> <ns2:AddressLine></ns2:AddressLine> <ns2:City>Toronto</ns2:City> <ns2:StateProvinceCode>ON</ns2:StateProvinceCode> <ns2:PostalCode>M5V4S7</ns2:PostalCode> <ns2:CountryCode>CA</ns2:CountryCode> </ns2:Address> <ns2:Phone> <ns2:Number>4164530000</ns2:Number> </ns2:Phone> </ns2:ShipTo> <ns2:PaymentInformation> <ns2:Payer> <ns2:Name>Spiderman</ns2:Name> <ns2:Address> <ns2:AddressLine>560 Front St. W</ns2:AddressLine> <ns2:City>Toronto</ns2:City> <ns2:StateProvinceCode>ON</ns2:StateProvinceCode> <ns2:PostalCode>M5V1C1</ns2:PostalCode> <ns2:CountryCode>CA</ns2:CountryCode> </ns2:Address> <ns2:ShipperNumber>535T8T</ns2:ShipperNumber> <ns2:AttentionName>He-Man</ns2:AttentionName> <ns2:Phone> <ns2:Number>6478343039</ns2:Number> </ns2:Phone> </ns2:Payer> <ns2:ShipmentBillingOption> <ns2:Code>10</ns2:Code> </ns2:ShipmentBillingOption> </ns2:PaymentInformation> <ns2:Service> <ns2:Code>308</ns2:Code> </ns2:Service> <ns2:HandlingUnitOne> <ns2:Quantity>16</ns2:Quantity> <ns2:Type> <ns2:Code>PLT</ns2:Code> </ns2:Type> </ns2:HandlingUnitOne> <ns2:Commodity> <ns2:CommodityID>22</ns2:CommodityID> <ns2:Description>These are some fancy widgets!</ns2:Description> <ns2:Weight> <ns2:UnitOfMeasurement> <ns2:Code>LBS</ns2:Code> </ns2:UnitOfMeasurement> <ns2:Value>511.25</ns2:Value> </ns2:Weight> <ns2:Dimensions> <ns2:UnitOfMeasurement> <ns2:Code>IN</ns2:Code> </ns2:UnitOfMeasurement> <ns2:Length>1.25</ns2:Length> <ns2:Width>1.2</ns2:Width> <ns2:Height>5</ns2:Height> </ns2:Dimensions> <ns2:NumberOfPieces>1</ns2:NumberOfPieces> <ns2:PackagingType> <ns2:Code>PLT</ns2:Code> </ns2:PackagingType> <ns2:CommodityValue> <ns2:CurrencyCode>USD</ns2:CurrencyCode> <ns2:MonetaryValue>265.2</ns2:MonetaryValue> </ns2:CommodityValue> <ns2:FreightClass>60</ns2:FreightClass> <ns2:NMFCCommodityCode>566</ns2:NMFCCommodityCode> </ns2:Commodity> <ns2:Reference> <ns2:Number> <ns2:Code>PM</ns2:Code> <ns2:Value>1651651616</ns2:Value> </ns2:Number> <ns2:NumberOfCartons>5</ns2:NumberOfCartons> 
<ns2:Weight> <ns2:UnitOfMeasurement> <ns2:Code>LBS</ns2:Code> </ns2:UnitOfMeasurement> <ns2:Value>2</ns2:Value> </ns2:Weight> </ns2:Reference> </ns2:Shipment> </ns2:FreightShipRequest> </ns1:Body> </SOAP-ENV:Envelope> ```
2012/12/07
[ "https://Stackoverflow.com/questions/13768118", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1365008/" ]
Try this ``` DirectoryInfo dir = new DirectoryInfo(Path.GetFullPath(fp)); lb_Files.Items.Clear(); foreach (FileInfo file in dir.GetFiles()) { lb_Files.Items.Add(new RadListBoxItem(file.ToString(), file.ToString())); } ```
No you cannot cast a `String` object into a `RadListBoxItem`, you must create a `RadListBoxItem` using that string as your Value and Text properties: So replace this: ``` RadListBoxItem rlb = new RadListBoxItem(); rlb = (RadListBoxItem)file.ToString(); //radListBox lb_Files.Items.Add(rlb.ToString()); ``` With this: ``` lb_Files.Items.Add(new RadListBoxItem { Value = file.ToString(), Text = file.ToString() }); ```
2,100,233
I have a javascript which takes two variables i.e two lists one is a list of numbers and the other list of strings from django/python ``` numbersvar = [0,1,2,3] stringsvar = ['a','b','c'] ``` The numbersvar is rendered perfectly but when I do {{stringsvar}} it does not render it.
2010/01/20
[ "https://Stackoverflow.com/questions/2100233", "https://Stackoverflow.com", "https://Stackoverflow.com/users/228741/" ]
It may be better to use the **[json](http://www.google.com/url?sa=t&source=web&ct=res&cd=1&ved=0CDkQFjAA&url=http%3A%2F%2Fdocs.python.org%2Flibrary%2Fjson.html&ei=b9pWS7mmO8eK_Abcppz-Aw&usg=AFQjCNG7_WS8ga_dF7-eGjquJarNhC5Eew&sig2=_SZrRNdbAGmbGuW9tVYgHw)** module to create the JavaScript lists: ``` >>> a = ['stste', 'setset', 'serthjsetj'] >>> b = json.dumps(a) >>> b '["stste", "setset", "serthjsetj"]' >>> json.loads(b) [u'stste', u'setset', u'serthjsetj'] ```
What does stringsvar contain? The list, or the string representation of the list? I suggest you pass the correct javascript string representation of the list from the view method to the template to render. Python and javascript array literals have the same syntax, so you could do: ``` def my_view(request): return render_template("...", stringsvar=str(the_list)) ``` And in the template: ``` <script language="javascript"> stringsvar = {{ stringsvar }}; ... </script> ``` Or you can use the json serializer, this has the added benefit that you will be able to pass other kinds of values as well. ``` from django.core import serializers def my_view(request): return render_template("...", stringsvar=serializers.serialize("json", the_list)) ```
10,393,385
> > **Possible Duplicate:** > > [Instance variables vs. class variables in Python](https://stackoverflow.com/questions/2714573/instance-variables-vs-class-variables-in-python) > > > What is the difference between these two situations and how is it treated with in Python? Ex1 ``` class MyClass: anArray = {} ``` Ex2 ``` class MyClass: __init__(self): self.anArray = {} ``` It seems like the in the first example the array is being treated like a static variable. How does Python treat this and what is the reason for this?
2012/05/01
[ "https://Stackoverflow.com/questions/10393385", "https://Stackoverflow.com", "https://Stackoverflow.com/users/751467/" ]
In the first example, `anArray` (which in Python is called a dictionary, not an array) is a class attribute. It can be accessed using `MyClass.anArray`. It exists as soon as the class is defined. In the second example, `anArray` is an instance attribute. It can be accessed using `MyClass().anArray`. (But note that doing that just throws away the `MyClass` instance created; a more sensible example is `mc = MyClass(); mc.anArray['a'] = 5`.) It doesn't exist until an instance of the class is created.
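A short runnable demonstration of that difference (the attribute names here are illustrative, not from the question):

```python
class MyClass(object):
    shared = {}            # class attribute: one dict for the whole class

    def __init__(self):
        self.own = {}      # instance attribute: a fresh dict per instance

a = MyClass()
b = MyClass()
a.shared['k'] = 1          # mutates the single class-level dict
a.own['k'] = 1             # only touches a's own dict
```

After these two assignments, `b.shared` also contains `'k'` (both instances see the same class-level dict), while `b.own` is still empty.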
They are declared in different areas. In Ex1, `anArray` behaves like a global or static variable — both instances share one object, so this prints `True`: ``` obj = MyClass() obj2 = MyClass() print "IS one instance ", id(obj.anArray) == id(obj2.anArray) ``` In Ex2, `anArray` is a per-instance attribute, so the same check prints `False`: ``` obj = MyClass() obj2 = MyClass() print "IS one instance ", id(obj.anArray) == id(obj2.anArray) ```
46,374,747
it's kind of very daunting now. I've tried all I could possibly figure out, to no avail. I am using ElementaryOS Loki, based on Ubuntu 16.04 LTS. I have `boost 1.65.1` installed under `/usr/local` I am using `cmake 3.9.3` which is supporting building boost 1.65.0 and forward. I have tried every possible way to mess with my `CMakeLists.txt`, which as of now, looks like this ``` cmake_minimum_required( VERSION 2.8 FATAL_ERROR ) project( boostpythondemo ) set( Boost_DEBUG ON ) MESSAGE("Boost Debugging is on.") set( Boost_NO_SYSTEM_PATHS TRUE ) if( Boost_NO_SYSTEM_PATHS) set( BOOST_ROOT "/usr/local/boost_1_65_1" ) set( BOOST_INCLUDEDIR "/usr/local/boost_1_65_1/boost" ) set( BOOST_LIBRARYDIR "/usr/local/boost_1_65_1/stage/lib" ) endif( Boost_NO_SYSTEM_PATHS ) find_package( PythonLibs 3.6 REQUIRED ) include_directories( ${PYTHON_INCLUDE_DIRS} ) find_package( Boost COMPONENTS python REQUIRED ) if( Boost_FOUND ) MESSAGE("******************************BOOST FOUND*******************") endif( Boost_FOUND ) include_directories( ${Boost_INCLUDE_DIRS} ) link_directories( ${Boost_LIBRARIES} ) add_library( heyall SHARED heyall.cpp ) add_library( heyall_ext SHARED heyall_ext.cpp ) target_link_libraries( heyall_ext ${BOOST_LIBRARIES} heyall ) set_target_properties( heyall_ext PROPERTIES PREFIX "" ) ``` from the [command line output](https://codeshare.io/5M9AzJ) I can see I am setting the boost variables to the correct locations. However, cmake just can't find boost\_python. I really can't figure out what's going on now. the line says "BOOST FOUND" never got printed. here is also the full [cmake output log](https://gist.github.com/stucash/5297f5c03fb447ab89cb119b25e39979). I built boost with python 3.6.2 which will be used to build boost\_python as well, so this way I can use python 3 against boost\_python. Anyone has bumped into this before?
2017/09/23
[ "https://Stackoverflow.com/questions/46374747", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4436572/" ]
Thanks @JohnZwinck for pointing out the obvious overlooked error I had, and @James for sharing his answer. But it seems his answer is for Boost 1.63.0, so I wanted to post a solution here so anyone having problems with the latest CMake and Boost Python (up to today) can save some head-scratching time. Some prep work first: go ahead and download CMake 3.9.3. Please be aware that if you are using Boost 1.65.0 or above, you will need at least CMake 3.9.3; CMake explicitly bundles support for specific Boost versions, and 3.9.3 is the one shipped with support for 1.65.0 and above. Otherwise, you may get an error from CMake saying > imported targets not available for boost version > **Install Boost 1.65.1 (with python3.6.2)** Download Boost 1.65.1 and extract it under `/usr/local`. You can just follow the official Boost getting-started guide to install Boost with python2; it should be hassle-free. But to install Boost with python3, you will first need to add a user-config.jam file and specify the Python version you want to use to build Boost (Boost Python). You will need to specify the parameter on the command line like James did (`./bootstrap --with-python=Python3`), and add a user-config.jam in your home directory. First, create a user-config.jam under `/home/$USERNAME/` (a subdirectory of `/home`). In it you can specify your compiler (gcc, clang, etc.) and some other settings; for us it is the Python version. To create a user-config.jam, you can do `$ sudo cp /usr/local/boost_1_65_1/tools/build/example/user-config.jam $HOME/user-config.jam` Inside your user-config.jam file, add this line: `using python : 3.6 : /usr/bin/python3.6 : /usr/include/python3.6 : /usr/lib ;` replacing the paths with those of your Python 3 version.
Now we build and install Boost 1.65.1: `$ ./bootstrap.sh --prefix=/usr/local --with-python=python3` `$ ./b2 --install -j 8 # build boost in parallel using all cores available` Once it finishes, make sure you add the following to your `.profile`: `export INCLUDE="/usr/local/include/boost:$INCLUDE"` `export LIBRARY_PATH="/usr/local/lib:$LIBRARY_PATH"` `export LD_LIBRARY_PATH="/usr/local/lib:$LD_LIBRARY_PATH"` **Setting up CMakeLists.txt** The one in the question works just fine, but once you have followed the above steps, a simple CMakeLists.txt like the one below should suffice. ``` cmake_minimum_required( VERSION 2.8 FATAL_ERROR ) project( boostpythondemo ) find_package( PythonLibs 3.6 REQUIRED ) include_directories( ${PYTHON_INCLUDE_DIRS} ) find_package( Boost COMPONENTS python3 REQUIRED ) if( Boost_FOUND ) MESSAGE("********************************FOUND BOOST***********************") endif( Boost_FOUND ) include_directories( ${Boost_INCLUDE_DIRS} ) link_directories( ${Boost_LIBRARIES} ) add_library( heyall SHARED heyall.cpp ) add_library( heyall_ext SHARED heyall_ext.cpp ) target_link_libraries( heyall_ext ${BOOST_LIBRARIES} heyall ) set_target_properties( heyall_ext PROPERTIES PREFIX "" ) ``` The BOOST\_FOUND message is only for debugging; you can safely remove it. Now you should just go ahead and build using `cmake` & `make`.
There are some dependencies for both CMake and Boost, so I am removing my old answer and providing a link to the bash script on GitHubGist. The script can be found [here](https://gist.github.com/JamesKBowler/24228a401230c0279d9d966a18abc9e6) To run the script first make it executable ``` chmod +x boost_python3_install.sh ``` then run with sudo. ``` sudo ./boost_python3_install.sh ``` enjoy!
32,462,512
I'm trying to create a simple markdown to latex converter, just to learn python and basic regex, but I'm stuck trying to figure out why the below code doesn't work: ``` re.sub (r'\[\*\](.*?)\[\*\]: ?(.*?)$', r'\\footnote{\2}\1', s, flags=re.MULTILINE|re.DOTALL) ``` I want to convert something like: ``` s = """This is a note[*] and this is another[*] [*]: some text [*]: other text""" ``` to: ``` This is a note\footnote{some text} and this is another\footnote{other text} ``` this is what I got (from using my regex above): ``` This is a note\footnote{some text} and this is another[*] [*]: note 2 ``` Why is the pattern only been matched once? EDIT: ----- I tried the following lookahead assertion: ``` re.sub(r'\[\*\](?!:)(?=.+?\[\*\]: ?(.+?)$',r'\\footnote{\1}',flags=re.DOTALL|re.MULTILINE) #(?!:) is to prevent [*]: to be matched ``` now it matches all the footnotes, however they're not matched correctly. ``` s = """This is a note[*] and this is another[*] [*]: some text [*]: other text""" ``` is giving me ``` This is a note\footnote{some text} and this is another\footnote{some text} [*]: note 1 [*]: note 2 ``` Any thoughts about it?
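The single-pattern approach keeps failing because with `re.DOTALL` the first `[*]` greedily pairs with a definition and swallows everything in between, so markers and definitions can never be matched up one-to-one in a single pass. A hedged sketch of a two-pass alternative (the function name is illustrative, not the questioner's `XOR`-style API):

```python
import re

def convert_footnotes(s):
    # First pass: collect the definition lines ("[*]: text"), in order.
    defs = re.findall(r'^\[\*\]: ?(.*)$', s, flags=re.MULTILINE)
    # Second pass: drop those definition lines from the body.
    body = re.sub(r'^\[\*\]: ?.*\n?', '', s, flags=re.MULTILINE).rstrip()
    # Third pass: replace each remaining [*] marker with the next definition.
    # A callable replacement avoids backslash-escaping issues in \footnote{}.
    it = iter(defs)
    return re.sub(r'\[\*\]', lambda m: r'\footnote{%s}' % next(it), body)
```

This pairs the first marker with the first definition, the second with the second, and so on, which is the Markdown-footnote convention being emulated.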
2015/09/08
[ "https://Stackoverflow.com/questions/32462512", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4699624/" ]
Just lost a week trying to find a suitable tool for Neo4j. It has somehow gotten more difficult. My experience, updated from the last post here (2015): Gephi: 2015: supported Neo4j; 2017: doesn't support Neo4j. Linkurious: 2015: free; 2017: discontinued and doesn't list the price. Neoclipse: 2017: no updates since 2014; doesn't work with the current version of Neo4j. Structr: looks promising, but requires a lot of Java knowledge just to get it running. I have lost days on this and still have not successfully installed it. It does not look good for Neo4j tools. It was actually much better 2 years ago.
There are at least 3 GUI tools for neo4j that allow editing: * [neoclipse](https://github.com/neo4j-contrib/neoclipse/wiki) * [Gephi](http://gephi.github.io/) * [linkurious](http://linkurio.us/) `neoclipse` and `Gephi` are open source and free. `linkurious` has a free open-source community edition.
32,462,512
I'm trying to create a simple markdown to latex converter, just to learn python and basic regex, but I'm stuck trying to figure out why the below code doesn't work: ``` re.sub (r'\[\*\](.*?)\[\*\]: ?(.*?)$', r'\\footnote{\2}\1', s, flags=re.MULTILINE|re.DOTALL) ``` I want to convert something like: ``` s = """This is a note[*] and this is another[*] [*]: some text [*]: other text""" ``` to: ``` This is a note\footnote{some text} and this is another\footnote{other text} ``` this is what I got (from using my regex above): ``` This is a note\footnote{some text} and this is another[*] [*]: note 2 ``` Why is the pattern only been matched once? EDIT: ----- I tried the following lookahead assertion: ``` re.sub(r'\[\*\](?!:)(?=.+?\[\*\]: ?(.+?)$',r'\\footnote{\1}',flags=re.DOTALL|re.MULTILINE) #(?!:) is to prevent [*]: to be matched ``` now it matches all the footnotes, however they're not matched correctly. ``` s = """This is a note[*] and this is another[*] [*]: some text [*]: other text""" ``` is giving me ``` This is a note\footnote{some text} and this is another\footnote{some text} [*]: note 1 [*]: note 2 ``` Any thoughts about it?
2015/09/08
[ "https://Stackoverflow.com/questions/32462512", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4699624/" ]
@Zuriar Two years after your original post :) but nevertheless .. Now there is also Graphileon InterActor (<http://www.graphileon.com>), an enhanced user interface for Neo4j. Multi-panel; create / update nodes and relations without writing a single line of code. **UPDATE August 15th, 2018** We have replaced the Sandbox and Community Edition by the Personal Edition. This version is free as well, and is distributed as a desktop app for MacOS, Windows and Linux. For more info, visit our [blog](https://graphileon.com/graphileon-personal-edition/). **UPDATE June 22nd, 2020** We released version 2.7.0 of the Personal Edition, which supports Neo4j 4.0. For release notes, go here <https://docs.graphileon.com/graphileon/Release_notes.html> **UPDATE Aug 8th, 2022** Graphileon is now also available as a fully managed Cloud Service. Read more about it here <https://graphileon.com/graphileon-cloud-has-arrived/> Disclosure: I work for Graphileon
There are at least 3 GUI tools for neo4j that allow editing: * [neoclipse](https://github.com/neo4j-contrib/neoclipse/wiki) * [Gephi](http://gephi.github.io/) * [linkurious](http://linkurio.us/) `neoclipse` and `Gephi` are open source and free. `linkurious` has a free open-source community edition.
32,462,512
I'm trying to create a simple markdown to latex converter, just to learn python and basic regex, but I'm stuck trying to figure out why the below code doesn't work: ``` re.sub (r'\[\*\](.*?)\[\*\]: ?(.*?)$', r'\\footnote{\2}\1', s, flags=re.MULTILINE|re.DOTALL) ``` I want to convert something like: ``` s = """This is a note[*] and this is another[*] [*]: some text [*]: other text""" ``` to: ``` This is a note\footnote{some text} and this is another\footnote{other text} ``` this is what I got (from using my regex above): ``` This is a note\footnote{some text} and this is another[*] [*]: note 2 ``` Why is the pattern only been matched once? EDIT: ----- I tried the following lookahead assertion: ``` re.sub(r'\[\*\](?!:)(?=.+?\[\*\]: ?(.+?)$',r'\\footnote{\1}',flags=re.DOTALL|re.MULTILINE) #(?!:) is to prevent [*]: to be matched ``` now it matches all the footnotes, however they're not matched correctly. ``` s = """This is a note[*] and this is another[*] [*]: some text [*]: other text""" ``` is giving me ``` This is a note\footnote{some text} and this is another\footnote{some text} [*]: note 1 [*]: note 2 ``` Any thoughts about it?
2015/09/08
[ "https://Stackoverflow.com/questions/32462512", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4699624/" ]
There are at least 3 GUI tools for neo4j that allow editing: * [neoclipse](https://github.com/neo4j-contrib/neoclipse/wiki) * [Gephi](http://gephi.github.io/) * [linkurious](http://linkurio.us/) `neoclipse` and `Gephi` are open source and free. `linkurious` has a free open-source community edition.
It seems that Neo4j's new Bloom product would be suitable: neo4j.com/bloom
32,462,512
I'm trying to create a simple markdown to latex converter, just to learn python and basic regex, but I'm stuck trying to figure out why the below code doesn't work: ``` re.sub (r'\[\*\](.*?)\[\*\]: ?(.*?)$', r'\\footnote{\2}\1', s, flags=re.MULTILINE|re.DOTALL) ``` I want to convert something like: ``` s = """This is a note[*] and this is another[*] [*]: some text [*]: other text""" ``` to: ``` This is a note\footnote{some text} and this is another\footnote{other text} ``` this is what I got (from using my regex above): ``` This is a note\footnote{some text} and this is another[*] [*]: note 2 ``` Why is the pattern only been matched once? EDIT: ----- I tried the following lookahead assertion: ``` re.sub(r'\[\*\](?!:)(?=.+?\[\*\]: ?(.+?)$',r'\\footnote{\1}',flags=re.DOTALL|re.MULTILINE) #(?!:) is to prevent [*]: to be matched ``` now it matches all the footnotes, however they're not matched correctly. ``` s = """This is a note[*] and this is another[*] [*]: some text [*]: other text""" ``` is giving me ``` This is a note\footnote{some text} and this is another\footnote{some text} [*]: note 1 [*]: note 2 ``` Any thoughts about it?
2015/09/08
[ "https://Stackoverflow.com/questions/32462512", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4699624/" ]
@Zuriar Two years after your original post :) but nevertheless .. Now there is also Graphileon InterActor (<http://www.graphileon.com>), an enhanced user interface for Neo4j. Multi-panel; create / update nodes and relations without writing a single line of code. **UPDATE August 15th, 2018** We have replaced the Sandbox and Community Edition by the Personal Edition. This version is free as well, and is distributed as a desktop app for MacOS, Windows and Linux. For more info, visit our [blog](https://graphileon.com/graphileon-personal-edition/). **UPDATE June 22nd, 2020** We released version 2.7.0 of the Personal Edition, which supports Neo4j 4.0. For release notes, go here <https://docs.graphileon.com/graphileon/Release_notes.html> **UPDATE Aug 8th, 2022** Graphileon is now also available as a fully managed Cloud Service. Read more about it here <https://graphileon.com/graphileon-cloud-has-arrived/> Disclosure: I work for Graphileon
Just lost a week trying to find a suitable tool for Neo4j. It has somehow gotten more difficult. My experience, updated from the last post here (2015): Gephi: 2015: supported Neo4j; 2017: doesn't support Neo4j. Linkurious: 2015: free; 2017: discontinued and doesn't list the price. Neoclipse: 2017: no updates since 2014; doesn't work with the current version of Neo4j. Structr: looks promising, but requires a lot of Java knowledge just to get it running. Have lost days on this and still have not successfully installed it. It does not look good for Neo4j tools. It was actually much better 2 years ago.
32,462,512
I'm trying to create a simple markdown to latex converter, just to learn python and basic regex, but I'm stuck trying to figure out why the below code doesn't work: ``` re.sub (r'\[\*\](.*?)\[\*\]: ?(.*?)$', r'\\footnote{\2}\1', s, flags=re.MULTILINE|re.DOTALL) ``` I want to convert something like: ``` s = """This is a note[*] and this is another[*] [*]: some text [*]: other text""" ``` to: ``` This is a note\footnote{some text} and this is another\footnote{other text} ``` this is what I got (from using my regex above): ``` This is a note\footnote{some text} and this is another[*] [*]: note 2 ``` Why is the pattern only been matched once? EDIT: ----- I tried the following lookahead assertion: ``` re.sub(r'\[\*\](?!:)(?=.+?\[\*\]: ?(.+?)$',r'\\footnote{\1}',flags=re.DOTALL|re.MULTILINE) #(?!:) is to prevent [*]: to be matched ``` now it matches all the footnotes, however they're not matched correctly. ``` s = """This is a note[*] and this is another[*] [*]: some text [*]: other text""" ``` is giving me ``` This is a note\footnote{some text} and this is another\footnote{some text} [*]: note 1 [*]: note 2 ``` Any thoughts about it?
2015/09/08
[ "https://Stackoverflow.com/questions/32462512", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4699624/" ]
Just lost a week trying to find a suitable tool for Neo4j. It has somehow gotten more difficult. My experience, updated from the last post here (2015): Gephi: 2015: supported Neo4j; 2017: doesn't support Neo4j. Linkurious: 2015: free; 2017: discontinued and doesn't list the price. Neoclipse: 2017: no updates since 2014; doesn't work with the current version of Neo4j. Structr: looks promising, but requires a lot of Java knowledge just to get it running. Have lost days on this and still have not successfully installed it. It does not look good for Neo4j tools. It was actually much better 2 years ago.
It seems that Neo4j's new bloom product would be suitable. neo4j.com/bloom
32,462,512
I'm trying to create a simple markdown to latex converter, just to learn python and basic regex, but I'm stuck trying to figure out why the below code doesn't work: ``` re.sub (r'\[\*\](.*?)\[\*\]: ?(.*?)$', r'\\footnote{\2}\1', s, flags=re.MULTILINE|re.DOTALL) ``` I want to convert something like: ``` s = """This is a note[*] and this is another[*] [*]: some text [*]: other text""" ``` to: ``` This is a note\footnote{some text} and this is another\footnote{other text} ``` this is what I got (from using my regex above): ``` This is a note\footnote{some text} and this is another[*] [*]: note 2 ``` Why is the pattern only been matched once? EDIT: ----- I tried the following lookahead assertion: ``` re.sub(r'\[\*\](?!:)(?=.+?\[\*\]: ?(.+?)$',r'\\footnote{\1}',flags=re.DOTALL|re.MULTILINE) #(?!:) is to prevent [*]: to be matched ``` now it matches all the footnotes, however they're not matched correctly. ``` s = """This is a note[*] and this is another[*] [*]: some text [*]: other text""" ``` is giving me ``` This is a note\footnote{some text} and this is another\footnote{some text} [*]: note 1 [*]: note 2 ``` Any thoughts about it?
2015/09/08
[ "https://Stackoverflow.com/questions/32462512", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4699624/" ]
@Zuriar Two years after your original post :) but nevertheless .. Now there is also Graphileon InterActor (<http://www.graphileon.com>), an enhanced user-interface for Neo4j. Multi-panel, create / update nodes and relations without writing a single line of code. **UPDATE August 15th, 2018** We have replaced the Sandbox and Community Edition by the Personal Edition. This version is free as well, and is distributed as a desktop app for MacOS, Windows and Linux. For more info, visit our [blog](https://graphileon.com/graphileon-personal-edition/). **UPDATE June 22nd, 2020** We released version 2.7.0 of the Personal Edition, which supports Neo4j 4.0. For release notes, go here: <https://docs.graphileon.com/graphileon/Release_notes.html> **UPDATE Aug 8th, 2022** Graphileon is now also available as a fully managed Cloud Service. Read more about it here: <https://graphileon.com/graphileon-cloud-has-arrived/> Disclosure: I work for Graphileon
It seems that Neo4j's new bloom product would be suitable. neo4j.com/bloom
63,756,753
I need to be able to run python code on each "node" of the network so that I can test out the code properly. I can't use different port numbers and run the code since I need to handle various other things which kind of force using unique IP addresses.
2020/09/05
[ "https://Stackoverflow.com/questions/63756753", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6740018/" ]
In my DHT p2p project, I have a specific object that abstract the network communication. During testing I mock that object with an object that operate in memory: ``` class MockProtocol: def __init__(self, network, peer): self.network = network self.peer = peer async def rpc(self, address, name, *args): peer = self.network.peers[address[0]] proc = getattr(peer, name) start = time() out = await proc((self.peer._uid, None), *args) delta = time() - start assert delta < 5, "RPCProtocol allows 5s delay only" return out class MockNetwork: def __init__(self): self.peers = dict() def add(self, peer): peer._protocol = MockProtocol(self, peer) self.peers[peer._uid] = peer def choice(self): return random.choice(list(self.peers.values())) async def simple_network(): network = MockNetwork() for i in range(5): peer = make_peer() network.add(peer) bootstrap = peer for peer in network.peers.values(): await peer.bootstrap((bootstrap._uid, None)) for peer in network.peers.values(): await peer.bootstrap((bootstrap._uid, None)) # run connect, this simulate the peers connecting to an existing # network. for peer in network.peers.values(): await peer.connect() return network @pytest.mark.asyncio async def test_dict(make_network): network = await make_network() # setup value = b'test value' key = peer.hash(value) # make network and peers one = network.choice() two = network.choice() three = network.choice() four = network.choice() # exec out = await three.set(value) # check assert out == key fallback = list() for xxx in (one, two, three, four): try: out = await xxx.get(key) except KeyError: fallback.append(xxx) else: assert out == value for xxx in fallback: log.warning('fallback for peer %r', xxx) out = await xxx.get_at(key, three._uid) assert out == value ```
I think VMware or VirtualBox can help you.
62,813,690
I am writing a script which will poll Jenkins plugin API to fetch a list of plugin dependencies. For this I have used `requests` module of python. It keeps returning empty response, whereas I am getting a JSON response in Postman. ``` import requests def get_deps(): url = "https://plugins.jenkins.io/api/plugin/CFLint" headers = { "Connection": "keep-alive", "Accept": "*/*", "Accept-Encoding": "gzip, deflate, br" } reqs = requests.get(url, headers) return reqs.status_code return reqs.json() get_deps() ``` [![Postman_Result](https://i.stack.imgur.com/xQKMG.png)](https://i.stack.imgur.com/xQKMG.png) The output is as follows. ``` C:\Users\krisT\eclipse-workspace\jenkins>python jenkins.py C:\Users\krisT\eclipse-workspace\jenkins> ``` Where am I making a mistake? Everything looks correct to me. --- **Instead of return I had to save the response to a variable and print the response. My question felt like a noob.** ``` s = requests.Session() def get_deps(): url = "https://plugins.jenkins.io/api/plugin/CFLint" reqs = s.get(url) res = reqs.json() print(res) get_deps() ```
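For reference, the first snippet's bug generalizes: a Python function returns at the first `return` it reaches, so any statement after it in the same branch is dead code. A tiny illustration:

```python
def first_return_wins():
    # Execution leaves the function here; the next line never runs.
    return "status"
    return "json"  # unreachable dead code

print(first_return_wins())  # status
```

This is why `return reqs.status_code` followed by `return reqs.json()` only ever produced the status code, and why the fix was to keep a single code path that uses the response.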
2020/07/09
[ "https://Stackoverflow.com/questions/62813690", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5649739/" ]
what do you think about this code: First I calculate the hash and send it to server A for signature ``` PdfReader reader = new PdfReader(SRC); FileOutputStream os = new FileOutputStream(TEMP); PdfStamper stamper = PdfStamper.createSignature(reader, os, '\0'); PdfSignatureAppearance appearance = stamper.getSignatureAppearance(); appearance.setVisibleSignature(new Rectangle(36, 748, 144, 780), 1, "sig"); //appearance.setCertificate(chain[0]); ExternalSignatureContainer external = new ExternalBlankSignatureContainer(PdfName.ADOBE_PPKLITE, PdfName.ADBE_PKCS7_DETACHED); MakeSignature.signExternalContainer(appearance, external, 8192); InputStream inp = appearance.getRangeStream(); BouncyCastleDigest digest = new BouncyCastleDigest(); byte[] hash = DigestAlgorithms.digest(inp, digest.getMessageDigest("SHA256")); System.out.println("hash to sign : "+ hash); bytesToFile(hash, HASH); byte[] hashdocumentByte = TEST.signed_hash(hash); PdfReader reader2 = new PdfReader(TEMP); FileOutputStream os2 = new FileOutputStream(DEST); ExternalSignatureContainer external2 = new MyExternalSignatureContainer(hashdocumentByte,null); MakeSignature.signDeferred(reader2, "sig", os2, external2); ``` And on server B where I sign the hash: ``` BouncyCastleProvider providerBC = new BouncyCastleProvider(); Security.addProvider(providerBC); // we load our private key from the key store KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType()); ks.load(new FileInputStream(CERTIFICATE), PIN); String alias = (String)ks.aliases().nextElement(); Certificate[] chain = ks.getCertificateChain(alias); PrivateKey pk = (PrivateKey) ks.getKey(alias, PIN); PrivateKeySignature signature = new PrivateKeySignature(pk, "SHA256", null); BouncyCastleDigest digest = new BouncyCastleDigest(); Calendar cal = Calendar.getInstance(); String hashAlgorithm = signature.getHashAlgorithm(); System.out.println(hashAlgorithm); PdfPKCS7 sgn = new PdfPKCS7(null, chain, "SHA256", null, digest, false); byte[] sh = sgn.getAuthenticatedAttributeBytes(hash, null, null, CryptoStandard.CMS); byte[] extSignature = signature.sign(sh); System.out.println(signature.getEncryptionAlgorithm()); sgn.setExternalDigest(extSignature, null, signature.getEncryptionAlgorithm()); return sgn.getEncodedPKCS7(hash, null, null, null, CryptoStandard.CMS); ```
Your `signDocument` method apparently does not accept a pre-calculated hash value but seems to calculate the hash of the data you give it, in your case the (lower case) hex presentation of the hash value you already calculated. In your first example document you have these values (all hashes are SHA256 hashes): * Hash of the byte ranges to sign: ``` 91A9F5EBC4F2ECEC819898824E00ECD9194C3E85E4410A3EFCF5193ED7739119 ``` * Hash of `"91a9f5ebc4f2ecec819898824e00ecd9194c3e85e4410a3efcf5193ed7739119".getBytes()`: ``` 2F37FE82F4F71770C2B33FB8787DE29627D7319EE77C6B5C48152F6E420A3242 ``` * Hash value signed by the embedded signature container: ``` 2F37FE82F4F71770C2B33FB8787DE29627D7319EE77C6B5C48152F6E420A3242 ``` And in your second example document you have these values (all hashes also are SHA256 hashes): * Hash of the byte ranges to sign: ``` 79793C58489EB94A17C365445622B7F7945972A5A0BC4C93B6444BEDFFA5A5BB ``` * Hash of `"79793c58489eb94a17c365445622b7f7945972a5a0bc4c93b6444bedffa5a5bb".getBytes()`: ``` A8BCBC6F9619ECB950864BFDF41D1B5B7CD33D035AF95570C426CF4B0405949B ``` * Hash value signed by the embedded signature container: ``` A8BCBC6F9619ECB950864BFDF41D1B5B7CD33D035AF95570C426CF4B0405949B ``` Thus, you have to correct your `signDocument` method to interpret the data correctly, or you have to give it a byte array containing the whole range stream to digest.
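The mismatch is easy to reproduce in a few lines; Python is used here for brevity (the thread's code is Java), and the input bytes are made up:

```python
import hashlib

data = b"example byte range to sign"  # made-up stand-in for the PDF byte ranges
digest = hashlib.sha256(data).digest()

# Hashing the lower-case hex *rendering* of the digest, as the signing
# service apparently did, yields a completely different value.
rehashed = hashlib.sha256(digest.hex().encode()).digest()
print(digest.hex())
print(rehashed.hex())
```

The second value is what ends up signed when the hex string, rather than the raw digest bytes, is fed to the hash function again.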
60,468,634
I'm fairly new to python and am doing some basic code. I need to know if i can repeat my iteration if the answer is not yes or no. Here is the code (sorry to those of you that think that im doing bad habits). I need the iteration to repeat during else. (The function just outputs text at the moment) ``` if remove_char1_attr1 = 'yes': char1_attr1.remove(min(char1_attr1)) char1_attr1_5 = random.randint(1,6) char1_attr1.append(char1_attr1_5) print("The numbers are now as follows: " +char1_attr1 ) elif remove_char1_attr1 = 'no' break else: incorrect_response() ```
2020/02/29
[ "https://Stackoverflow.com/questions/60468634", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12985589/" ]
Just put the code into a loop: ``` while True: if remove_char1_attr1 == 'yes': char1_attr1.remove(min(char1_attr1)) char1_attr1_5 = random.randint(1,6) char1_attr1.append(char1_attr1_5) print("The numbers are now as follows: " + str(char1_attr1)) elif remove_char1_attr1 == 'no': break else: #incorrect_response() print("Incorrect") ``` then it will run until `remove_char1_attr1` is "no"
You can try looping while it's not yes or no ```py while remove_char1_attr1 not in ('yes', 'no'): if remove_char1_attr1 == 'yes': char1_attr1.remove(min(char1_attr1)) char1_attr1_5 = random.randint(1,6) char1_attr1.append(char1_attr1_5) print("The numbers are now as follows: " + str(char1_attr1)) elif remove_char1_attr1 == 'no': break else: incorrect_response() ```
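Both answers loop over a value that is read only once, so in a real program the answer should be re-read inside the loop. A minimal, testable version of that re-ask pattern (function and prompt names are illustrative):

```python
def ask_yes_no(prompt, reader=input):
    # Re-ask until the reply is exactly "yes" or "no"
    # (case-insensitive, surrounding whitespace ignored).
    while True:
        answer = reader(prompt).strip().lower()
        if answer in ("yes", "no"):
            return answer
        print("Please answer 'yes' or 'no'.")

# Example with a scripted reader instead of real keyboard input:
replies = iter(["maybe", " YES "])
print(ask_yes_no("Remove lowest roll? ", lambda _: next(replies)))  # yes
```

Passing the reader in as a parameter keeps the loop unit-testable without patching `input`.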
52,415,096
I am calling a new object to manage an Azure Resource and using the Azure python packages. While calling it, I get a maximum recursion depth exceeded error; however, if I step through the code in a Python shell I don't get this issue. Below is the `__init__` method ``` class WindowsDeployer(object): def __init__(self, params): try: print("executes class init") self.subscription_id = '{SUBSCRIPTION-ID}' self.vmName = params["vmName"] self.location = params["location"] self.resource_group = "{}-rg".format(self.vmName) print("sets variables") # Error is in the below snippet, while calling ServicePrincipalCredentials self.credentials = ServicePrincipalCredentials( client_id='{CLIENT-ID}', secret='{SECRET}', tenant='{TENANT-ID}' ) # Does not reach here... print("creates a credential") self.client = ResourceManagementClient(self.credentials, self.subscription_id) ``` Instead, it exits with the following message: `maximum recursion depth exceeded` I have tried to increase the recursion limit to 10000 and that has not solved the issue.
Pip freeze: `azure==4.0.0 azure-common==1.1.4 azure-mgmt==4.0.0` Traceback: ``` Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/flask/app.py", line 1985, in wsgi_app response = self.handle_exception(e) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/flask/app.py", line 1540, in handle_exception reraise(exc_type, exc_value, tb) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/flask/_compat.py", line 33, in reraise raise value File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/flask/app.py", line 1982, in wsgi_app response = self.full_dispatch_request() File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/flask/app.py", line 1614, in full_dispatch_request rv = self.handle_user_exception(e) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/flask/app.py", line 1517, in handle_user_exception reraise(exc_type, exc_value, tb) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/flask/_compat.py", line 33, in reraise raise value File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/flask/app.py", line 1612, in full_dispatch_request rv = self.dispatch_request() File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/flask/app.py", line 1598, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "/Users/valana/Projects/wolfinterface/Code/wolfinterface/app.py", line 87, in wrap return f(*args, **kwargs) File "/Users/valana/Projects/wolfinterface/Code/wolfinterface/app.py", line 134, in provision return provision_page(request, session) File "/Users/valana/Projects/wolfinterface/Code/wolfinterface/provision.py", line 104, in provision_page deployer = WindowsDeployer(json.loads(params)) File "/Users/valana/Projects/wolfinterface/Code/wolfinterface/AzureProvision.py", line 30, in __init__ tenant='{TENNANT-ID}' File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/msrestazure/azure_active_directory.py", line 453, in __init__ self.set_token() File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/msrestazure/azure_active_directory.py", line 478, in set_token proxies=self.proxies) File "/Users/valana/Library/Python/3.6/lib/python/site-packages/requests_oauthlib/oauth2_session.py", line 221, in fetch_token verify=verify, proxies=proxies) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/sessions.py", line 555, in post return self.request('POST', url, data=data, json=json, **kwargs) File "/Users/valana/Library/Python/3.6/lib/python/site-packages/requests_oauthlib/oauth2_session.py", line 360, in request headers=headers, data=data, **kwargs) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/sessions.py", line 508, in request resp = self.send(prep, **send_kwargs) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/sessions.py", line 618, in send r = adapter.send(request, **kwargs) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/adapters.py", line 440, in send timeout=timeout File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/connectionpool.py", line 601, in urlopen chunked=chunked) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/connectionpool.py", line 346, in _make_request self._validate_conn(conn) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/connectionpool.py", line 850, in _validate_conn conn.connect() File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/connection.py", line 314, in connect cert_reqs=resolve_cert_reqs(self.cert_reqs), File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/util/ssl_.py", line 269, in create_urllib3_context context.options |= options File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 465, in options super(SSLContext, SSLContext).options.__set__(self, value) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 465, in options super(SSLContext, SSLContext).options.__set__(self, value) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 465, in options ``` The last line keeps going until it hits the recursion limit.
2018/09/19
[ "https://Stackoverflow.com/questions/52415096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6781059/" ]
Thanks for the help above. The issue was with my gevent packages (not sure exactly what) however adding upgrading gevent and adding the following lines fixed it. ``` import gevent.monkey gevent.monkey.patch_all() ```
I had a similar problem when using the `azure-storage-blob` module, and adding the following lines fixed it. I do not know why; it still confuses me. Exception: > > maximum recursion depth exceeded while calling a Python object > > > Solution: ``` import gevent.monkey gevent.monkey.patch_all() ```
72,921,087
Taking this command to start local server for example, the command includes -m, what is the meaning of -m in genearl? ``` python3 -m http.server ```
2022/07/09
[ "https://Stackoverflow.com/questions/72921087", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4877535/" ]
From the documentation, which can be invoked using `python3 --help`. ``` -m mod : run library module as a script (terminates option list) ``` Instead of importing the module in another script (like `import <module-name>`), you directly run it as a script.
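To see `-m` in action without starting a server, any runnable stdlib module works; `python -m base64` runs the `base64` module as a script (with no arguments it base64-encodes stdin), just as `python3 -m http.server` runs `http.server`:

```python
import subprocess
import sys

# Run the stdlib "base64" module as a script via -m; with no
# arguments it base64-encodes stdin to stdout.
out = subprocess.run(
    [sys.executable, "-m", "base64"],
    input=b"hello\n",
    capture_output=True,
).stdout.decode().strip()
print(out)  # aGVsbG8K
```

The advantage over running the file path directly is that Python locates the module on `sys.path` for you.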
The -m stands for module-name in Python.
27,793,025
I can't use Java and Python at the same time. When I set ``` %JAVAHOME%\bin; %PYTHONPATH%; ``` I can use java, but not python. When I set ``` %PYTHONPATH%; %JAVAHOME%\bin; ``` I can use python, but not java. I'm using windows 7. How can I go about fixing this problem?
2015/01/06
[ "https://Stackoverflow.com/questions/27793025", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4422583/" ]
Don't put a space in your `PATH` entries ``` set "PATH=%JAVAHOME%\bin;%PYTHONPATH%;%PATH%" ```
1. Select Start, select Control Panel. double click System, and select the Advanced tab. 2. Click Environment Variables. In the section System Variables, find the PATH environment variable and select it. ... 3. In the Edit System Variable (or New System Variable) window, specify the value of the PATH environment variable. for more use this link <http://docs.oracle.com/javase/tutorial/essential/environment/paths.html>
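A quick, illustrative way to spot the stray-space problem from the question (the path value here is made up): an entry that starts with a space names a directory that does not exist, so whichever entry comes after the `"; "` is effectively ignored.

```python
# Simulated PATH value with a space after the semicolon,
# like "%JAVAHOME%\bin; %PYTHONPATH%" in the question.
path = r"D:\java\bin; C:\python27"

# Flag entries that carry leading or trailing whitespace.
bad = [entry for entry in path.split(";") if entry != entry.strip()]
print(bad)  # [' C:\\python27']
```

Removing the space (as the accepted answer does) makes `bad` empty.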
27,793,025
I can't use Java and Python at the same time. When I set ``` %JAVAHOME%\bin; %PYTHONPATH%; ``` I can use java, but not python. When I set ``` %PYTHONPATH%; %JAVAHOME%\bin; ``` I can use python, but not java. I'm using windows 7. How can I go about fixing this problem?
2015/01/06
[ "https://Stackoverflow.com/questions/27793025", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4422583/" ]
Don't put a space in your `PATH` entries ``` set "PATH=%JAVAHOME%\bin;%PYTHONPATH%;%PATH%" ```
Have you tried removing the space after the semicolon ``` %JAVAHOME%\bin;%PYTHONPATH%; ```
27,793,025
I can't use Java and Python at the same time. When I set ``` %JAVAHOME%\bin; %PYTHONPATH%; ``` I can use java, but not python. When I set ``` %PYTHONPATH%; %JAVAHOME%\bin; ``` I can use python, but not java. I'm using windows 7. How can I go about fixing this problem?
2015/01/06
[ "https://Stackoverflow.com/questions/27793025", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4422583/" ]
Don't put a space in your `PATH` entries ``` set "PATH=%JAVAHOME%\bin;%PYTHONPATH%;%PATH%" ```
Select Start, then Control Panel. Double-click System and select the Advanced tab. Click Environment Variables. In the System Variables or User Variables section, find the PATH environment variable and edit it to include the path of each compiler. Assume your Java compiler path is `D:\java\bin` and your Python path is `C:\python27`; then set the variable like this: Variable name: `PATH`, Variable value: `D:\java\bin;C:\python27`. You don't have to leave any space between the paths, and you can add as many paths as you want. Have a look at this: [http://i.stack.imgur.com/N2C10.png]
40,652,793
I run a bash script with which start a python script to run in background ``` #!/bin/bash python test.py & ``` So how i can i kill the script with bash script also? I used the following command to kill but output `no process found` ``` killall $(ps aux | grep test.py | grep -v grep | awk '{ print $1 }') ``` I try to check the running processes by `ps aux | less` and found that the running script having command of `python test.py` Please assist, thank you!
2016/11/17
[ "https://Stackoverflow.com/questions/40652793", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6180818/" ]
Use `pkill` command as ``` pkill -f test.py ``` (or) a more fool-proof way using `pgrep` to search for the actual process-id ``` kill $(pgrep -f 'python test.py') ``` Or if more than one instance of the running program is identified and all of them needs to be killed, use [killall(1)](https://linux.die.net/man/1/killall) on Linux and BSD ``` killall test.py ```
You can use `$!` to get the PID of the last command. I would suggest something similar to the following, which also checks whether the process you want to run is already running: ``` #!/bin/bash if [[ ! -e /tmp/test.py.pid ]]; then # Check if the file already exists python test.py & #+and if so do not run another process. echo $! > /tmp/test.py.pid else echo -n "ERROR: The process is already running with pid " cat /tmp/test.py.pid echo fi ``` Then, when you want to kill it: ``` #!/bin/bash if [[ -e /tmp/test.py.pid ]]; then # If the file does not exist, then kill `cat /tmp/test.py.pid` #+the process is not running. Useless rm /tmp/test.py.pid #+trying to kill it. else echo "test.py is not running" fi ``` Of course if the killing must take place some time after the command has been launched, you can put everything in the same script: ``` #!/bin/bash python test.py & # This does not check if the command echo $! > /tmp/test.py.pid #+has already been executed. But, #+would have problems if more than 1 sleep <number_of_seconds_to_wait> #+have been started since the pid file would #+be overwritten. if [[ -e /tmp/test.py.pid ]]; then kill `cat /tmp/test.py.pid` else echo "test.py is not running" fi ``` If you want to be able to run more commands with the same name simultaneously and be able to kill them selectively, a small edit of the script is needed. Tell me, I will try to help you! With something like this you are sure you are killing what you want to kill. Commands like `pkill` or grepping the `ps aux` can be risky.
40,652,793
I run a bash script with which start a python script to run in background ``` #!/bin/bash python test.py & ``` So how i can i kill the script with bash script also? I used the following command to kill but output `no process found` ``` killall $(ps aux | grep test.py | grep -v grep | awk '{ print $1 }') ``` I try to check the running processes by `ps aux | less` and found that the running script having command of `python test.py` Please assist, thank you!
2016/11/17
[ "https://Stackoverflow.com/questions/40652793", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6180818/" ]
Use `pkill` command as ``` pkill -f test.py ``` (or) a more fool-proof way using `pgrep` to search for the actual process-id ``` kill $(pgrep -f 'python test.py') ``` Or if more than one instance of the running program is identified and all of them needs to be killed, use [killall(1)](https://linux.die.net/man/1/killall) on Linux and BSD ``` killall test.py ```
``` ps -ef | grep python ``` it will return the "pid" then kill the process by ``` sudo kill -9 pid ``` eg output of ps command: user 13035 4729 0 13:44 pts/10 00:00:00 python (here 13035 is pid)
40,652,793
I run a bash script with which start a python script to run in background ``` #!/bin/bash python test.py & ``` So how i can i kill the script with bash script also? I used the following command to kill but output `no process found` ``` killall $(ps aux | grep test.py | grep -v grep | awk '{ print $1 }') ``` I try to check the running processes by `ps aux | less` and found that the running script having command of `python test.py` Please assist, thank you!
2016/11/17
[ "https://Stackoverflow.com/questions/40652793", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6180818/" ]
Use `pkill` command as ``` pkill -f test.py ``` (or) a more fool-proof way using `pgrep` to search for the actual process-id ``` kill $(pgrep -f 'python test.py') ``` Or if more than one instance of the running program is identified and all of them needs to be killed, use [killall(1)](https://linux.die.net/man/1/killall) on Linux and BSD ``` killall test.py ```
With the use of bashisms. ``` #!/bin/bash python test.py & kill $! ``` `$!` is the PID of the last process started in background. You can also save it in another variable if you start multiple scripts in the background.
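If you prefer to manage the child from Python instead of the shell, the same start-in-background / remember-PID / kill pattern looks like this (illustrative; POSIX signal semantics assumed for the exit code):

```python
import signal
import subprocess
import sys

# Start a long-running child in the background (the "python test.py &"
# part) and remember its PID (the "$!" part).
child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
pid = child.pid

child.terminate()          # sends SIGTERM, like "kill $pid"
child.wait()
print(child.returncode)    # -signal.SIGTERM on POSIX
```

`subprocess.Popen` keeps the handle around, so unlike the bare `$!` approach there is no risk of the PID being reused before you kill it.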
40,652,793
I run a bash script with which start a python script to run in background ``` #!/bin/bash python test.py & ``` So how i can i kill the script with bash script also? I used the following command to kill but output `no process found` ``` killall $(ps aux | grep test.py | grep -v grep | awk '{ print $1 }') ``` I try to check the running processes by `ps aux | less` and found that the running script having command of `python test.py` Please assist, thank you!
2016/11/17
[ "https://Stackoverflow.com/questions/40652793", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6180818/" ]
Use `pkill` command as ``` pkill -f test.py ``` (or) a more fool-proof way using `pgrep` to search for the actual process-id ``` kill $(pgrep -f 'python test.py') ``` Or if more than one instance of the running program is identified and all of them needs to be killed, use [killall(1)](https://linux.die.net/man/1/killall) on Linux and BSD ``` killall test.py ```
``` killall python3 ``` will interrupt ***any and all*** python3 scripts running.
40,652,793
I run a bash script with which start a python script to run in background ``` #!/bin/bash python test.py & ``` So how i can i kill the script with bash script also? I used the following command to kill but output `no process found` ``` killall $(ps aux | grep test.py | grep -v grep | awk '{ print $1 }') ``` I try to check the running processes by `ps aux | less` and found that the running script having command of `python test.py` Please assist, thank you!
2016/11/17
[ "https://Stackoverflow.com/questions/40652793", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6180818/" ]
You can use `$!` to get the PID of the last command. I would suggest something similar to the following, which also checks whether the process you want to run is already running: ``` #!/bin/bash if [[ ! -e /tmp/test.py.pid ]]; then # Check if the file already exists python test.py & #+and if so do not run another process. echo $! > /tmp/test.py.pid else echo -n "ERROR: The process is already running with pid " cat /tmp/test.py.pid echo fi ``` Then, when you want to kill it: ``` #!/bin/bash if [[ -e /tmp/test.py.pid ]]; then # If the file does not exist, then kill `cat /tmp/test.py.pid` #+the process is not running. Useless rm /tmp/test.py.pid #+trying to kill it. else echo "test.py is not running" fi ``` Of course if the killing must take place some time after the command has been launched, you can put everything in the same script: ``` #!/bin/bash python test.py & # This does not check if the command echo $! > /tmp/test.py.pid #+has already been executed. But, #+would have problems if more than 1 sleep <number_of_seconds_to_wait> #+have been started since the pid file would #+be overwritten. if [[ -e /tmp/test.py.pid ]]; then kill `cat /tmp/test.py.pid` else echo "test.py is not running" fi ``` If you want to be able to run more commands with the same name simultaneously and be able to kill them selectively, a small edit of the script is needed. Tell me, I will try to help you! With something like this you are sure you are killing what you want to kill. Commands like `pkill` or grepping the `ps aux` can be risky.
``` ps -ef | grep python ``` it will return the "pid" then kill the process by ``` sudo kill -9 pid ``` eg output of ps command: user 13035 4729 0 13:44 pts/10 00:00:00 python (here 13035 is pid)
40,652,793
I run a bash script with which start a python script to run in background ``` #!/bin/bash python test.py & ``` So how i can i kill the script with bash script also? I used the following command to kill but output `no process found` ``` killall $(ps aux | grep test.py | grep -v grep | awk '{ print $1 }') ``` I try to check the running processes by `ps aux | less` and found that the running script having command of `python test.py` Please assist, thank you!
2016/11/17
[ "https://Stackoverflow.com/questions/40652793", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6180818/" ]
You can use `$!` to get the PID of the last command. I would suggest something similar to the following, which also checks whether the process you want to run is already running: ``` #!/bin/bash if [[ ! -e /tmp/test.py.pid ]]; then # Check if the file already exists python test.py & #+and if so do not run another process. echo $! > /tmp/test.py.pid else echo -n "ERROR: The process is already running with pid " cat /tmp/test.py.pid echo fi ``` Then, when you want to kill it: ``` #!/bin/bash if [[ -e /tmp/test.py.pid ]]; then # If the file does not exist, then kill `cat /tmp/test.py.pid` #+the process is not running. Useless rm /tmp/test.py.pid #+trying to kill it. else echo "test.py is not running" fi ``` Of course if the killing must take place some time after the command has been launched, you can put everything in the same script: ``` #!/bin/bash python test.py & # This does not check if the command echo $! > /tmp/test.py.pid #+has already been executed. But, #+would have problems if more than 1 sleep <number_of_seconds_to_wait> #+have been started since the pid file would #+be overwritten. if [[ -e /tmp/test.py.pid ]]; then kill `cat /tmp/test.py.pid` else echo "test.py is not running" fi ``` If you want to be able to run more commands with the same name simultaneously and be able to kill them selectively, a small edit of the script is needed. Tell me, I will try to help you! With something like this you are sure you are killing what you want to kill. Commands like `pkill` or grepping the `ps aux` can be risky.
With the use of bashisms. ``` #!/bin/bash python test.py & kill $! ``` `$!` is the PID of the last process started in background. You can also save it in another variable if you start multiple scripts in the background.
40,652,793
I run a bash script which starts a python script in the background ``` #!/bin/bash python test.py & ``` So how can I kill that script with a bash script as well? I used the following command to kill it, but it outputs `no process found` ``` killall $(ps aux | grep test.py | grep -v grep | awk '{ print $1 }') ``` I tried to check the running processes with `ps aux | less` and found that the running script has the command `python test.py` Please assist, thank you!
2016/11/17
[ "https://Stackoverflow.com/questions/40652793", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6180818/" ]
You can use `$!` to get the PID of the last background command. I would suggest something similar to the following, which also checks whether the process you want to run is already running: ``` #!/bin/bash if [[ ! -e /tmp/test.py.pid ]]; then # Check if the file already exists python test.py & #+and if so do not run another process. echo $! > /tmp/test.py.pid else echo -n "ERROR: The process is already running with pid " cat /tmp/test.py.pid echo fi ``` Then, when you want to kill it: ``` #!/bin/bash if [[ -e /tmp/test.py.pid ]]; then # If the file does not exist, the kill `cat /tmp/test.py.pid` #+process is not running; there is no rm /tmp/test.py.pid #+point trying to kill it. else echo "test.py is not running" fi ``` Of course, if the kill must take place some time after the command has been launched, you can put everything in the same script: ``` #!/bin/bash python test.py & # This does not check whether the command echo $! > /tmp/test.py.pid #+has already been executed, and it would sleep <number_of_seconds_to_wait> #+have problems if more than one instance #+were started, since the pid file would #+be overwritten. if [[ -e /tmp/test.py.pid ]]; then kill `cat /tmp/test.py.pid` else echo "test.py is not running" fi ``` If you want to be able to run more commands with the same name simultaneously and kill them selectively, a small edit of the script is needed. Tell me, I will try to help you! With something like this you are sure you are killing what you want to kill. Commands like `pkill` or grepping `ps aux` can be risky.
``` killall python3 ``` will interrupt ***any and all*** python3 scripts running.
25,065,017
I'm learning objective c a little bit to write an iPad app. I've mostly done some html5/php projects and learned some python at university. But one thing that really blows my mind is how hard it is to just style some text in an objective C label. Maybe I'm coming from a lazy markdown generation, but really, if I want to let an UILabel look like: > > > > > > **Objective:** Construct an *equilateral* triangle from the line segment AB. > > > > > > > > > In markdown this is as simple as: `**Objective:** Construct an *equilateral* triangle from the line segment AB.` Is there really no pain free objective C way to do this ? All the tutorials I read really wanted me to write like 15 lines of code. For something as simple as this. So my question is, what is the easiest way to do this, if you have a lot of styling to do in your app ? Will styling text become more natural with swift in iOS8 ?
2014/07/31
[ "https://Stackoverflow.com/questions/25065017", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2517546/" ]
You can use `NSAttributedString`'s `data:options:documentAttributes:error:` initializer (first available in iOS 7.0 SDK). ``` import UIKit let htmlString = "<b>Objective</b>: Construct an <i>equilateral</i> triangle from the line segment AB." let htmlData = htmlString.dataUsingEncoding(NSUTF8StringEncoding) let options = [NSDocumentTypeDocumentAttribute: NSHTMLTextDocumentType] var error : NSError? = nil let attributedString = NSAttributedString(data: htmlData, options: options, documentAttributes: nil, error: &error) if error == nil { // we're good } ``` ***Note:*** You might also want to include `NSDefaultAttributesDocumentAttribute` option in the `options` dictionary to provide additional global styling (such as telling not to use Times New Roman). Take a look into **[NSAttributedString UIKit Additions Reference](https://developer.apple.com/library/ios/documentation/uikit/reference/NSAttributedString_UIKit_Additions/Reference/Reference.html)** for more information.
I faced similar frustrations while trying to use attributed text in Xcode, so I feel your pain. You can definitely use multiple `NSMutableAttributedString`'s to get the job done, but this is very rigid. ``` UIFont *normalFont = [UIFont fontWithName:@"..." size:20]; UIFont *boldFont = [UIFont fontWithName:@"..." size:20]; UIFont *italicizedFont = [UIFont fontWithName:@"..." size:20]; NSMutableAttributedString *total = [[NSMutableAttributedString alloc]init]; NSAttributedString *string1 = [[NSAttributedString alloc] initWithString:[NSString stringWithFormat:@"Objective"] attributes:@{NSFontAttributeName:boldFont}]; NSAttributedString *string2 = [[NSAttributedString alloc] initWithString:[NSString stringWithFormat:@": Construct an "] attributes:@{NSFontAttributeName:normalFont}]; NSAttributedString *string3 = [[NSAttributedString alloc] initWithString:[NSString stringWithFormat:@"equilateral "] attributes:@{NSFontAttributeName:italicizedFont}]; NSAttributedString *string4 = [[NSAttributedString alloc] initWithString:[NSString stringWithFormat:@"triangle from the line segment AB."] attributes:@{NSFontAttributeName:normalFont}]; [total appendAttributedString:string1]; [total appendAttributedString:string2]; [total appendAttributedString:string3]; [total appendAttributedString:string4]; [self.someLabel setAttributedText: total]; ``` Another option is to use `NSRegularExpression`. While this requires more lines of code, it is a more fluid way of bolding, changing color, etc. across an entire string at once. For your purposes, however, using `appendAttributedString` will be the shortest way with a label. ``` UIFont *normalFont = [UIFont fontWithName:@"..." size:20]; UIFont *boldFont = [UIFont fontWithName:@"..." size:20]; UIFont *italicizedFont = [UIFont fontWithName:@"..." size:20]; NSMutableAttributedString *attributedString = [[NSMutableAttributedString alloc] initWithString:[NSString stringWithFormat: @"Objective: Construct an equilateral triangle from the line segment AB."] attributes:@{NSFontAttributeName:normalFont}]; NSError *regexError; NSRegularExpression *regex1 = [NSRegularExpression regularExpressionWithPattern:@"Objective" options:NSRegularExpressionCaseInsensitive error:&regexError]; NSRegularExpression *regex2 = [NSRegularExpression regularExpressionWithPattern:@"equilateral" options:NSRegularExpressionCaseInsensitive error:&regexError]; if (!regexError) { NSArray *matches1 = [regex1 matchesInString:[attributedString string] options:0 range:NSMakeRange(0, [[attributedString string] length])]; NSArray *matches2 = [regex2 matchesInString:[attributedString string] options:0 range:NSMakeRange(0, [[attributedString string] length])]; for (NSTextCheckingResult *aMatch in matches1) { NSRange matchRange = [aMatch range]; [attributedString setAttributes:@{NSFontAttributeName:boldFont} range:matchRange]; } for (NSTextCheckingResult *aMatch in matches2) { NSRange matchRange = [aMatch range]; [attributedString setAttributes:@{NSFontAttributeName:italicizedFont} range:matchRange]; } [self.someLabel setAttributedText: attributedString]; } ```
25,065,017
I'm learning objective c a little bit to write an iPad app. I've mostly done some html5/php projects and learned some python at university. But one thing that really blows my mind is how hard it is to just style some text in an objective C label. Maybe I'm coming from a lazy markdown generation, but really, if I want to let an UILabel look like: > > > > > > **Objective:** Construct an *equilateral* triangle from the line segment AB. > > > > > > > > > In markdown this is as simple as: `**Objective:** Construct an *equilateral* triangle from the line segment AB.` Is there really no pain free objective C way to do this ? All the tutorials I read really wanted me to write like 15 lines of code. For something as simple as this. So my question is, what is the easiest way to do this, if you have a lot of styling to do in your app ? Will styling text become more natural with swift in iOS8 ?
2014/07/31
[ "https://Stackoverflow.com/questions/25065017", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2517546/" ]
I faced similar frustrations while trying to use attributed text in Xcode, so I feel your pain. You can definitely use multiple `NSMutableAttributedString`'s to get the job done, but this is very rigid. ``` UIFont *normalFont = [UIFont fontWithName:@"..." size:20]; UIFont *boldFont = [UIFont fontWithName:@"..." size:20]; UIFont *italicizedFont = [UIFont fontWithName:@"..." size:20]; NSMutableAttributedString *total = [[NSMutableAttributedString alloc]init]; NSAttributedString *string1 = [[NSAttributedString alloc] initWithString:[NSString stringWithFormat:@"Objective"] attributes:@{NSFontAttributeName:boldFont}]; NSAttributedString *string2 = [[NSAttributedString alloc] initWithString:[NSString stringWithFormat:@": Construct an "] attributes:@{NSFontAttributeName:normalFont}]; NSAttributedString *string3 = [[NSAttributedString alloc] initWithString:[NSString stringWithFormat:@"equilateral "] attributes:@{NSFontAttributeName:italicizedFont}]; NSAttributedString *string4 = [[NSAttributedString alloc] initWithString:[NSString stringWithFormat:@"triangle from the line segment AB."] attributes:@{NSFontAttributeName:normalFont}]; [total appendAttributedString:string1]; [total appendAttributedString:string2]; [total appendAttributedString:string3]; [total appendAttributedString:string4]; [self.someLabel setAttributedText: total]; ``` Another option is to use `NSRegularExpression`. While this requires more lines of code, it is a more fluid way of bolding, changing color, etc. across an entire string at once. For your purposes, however, using `appendAttributedString` will be the shortest way with a label. ``` UIFont *normalFont = [UIFont fontWithName:@"..." size:20]; UIFont *boldFont = [UIFont fontWithName:@"..." size:20]; UIFont *italicizedFont = [UIFont fontWithName:@"..." size:20]; NSMutableAttributedString *attributedString = [[NSMutableAttributedString alloc] initWithString:[NSString stringWithFormat: @"Objective: Construct an equilateral triangle from the line segment AB."] attributes:@{NSFontAttributeName:normalFont}]; NSError *regexError; NSRegularExpression *regex1 = [NSRegularExpression regularExpressionWithPattern:@"Objective" options:NSRegularExpressionCaseInsensitive error:&regexError]; NSRegularExpression *regex2 = [NSRegularExpression regularExpressionWithPattern:@"equilateral" options:NSRegularExpressionCaseInsensitive error:&regexError]; if (!regexError) { NSArray *matches1 = [regex1 matchesInString:[attributedString string] options:0 range:NSMakeRange(0, [[attributedString string] length])]; NSArray *matches2 = [regex2 matchesInString:[attributedString string] options:0 range:NSMakeRange(0, [[attributedString string] length])]; for (NSTextCheckingResult *aMatch in matches1) { NSRange matchRange = [aMatch range]; [attributedString setAttributes:@{NSFontAttributeName:boldFont} range:matchRange]; } for (NSTextCheckingResult *aMatch in matches2) { NSRange matchRange = [aMatch range]; [attributedString setAttributes:@{NSFontAttributeName:italicizedFont} range:matchRange]; } [self.someLabel setAttributedText: attributedString]; } ```
Just to update the [akashivskyy’s answer](https://stackoverflow.com/a/25068643/1271826) (+1) with contemporary Swift syntax: ```swift guard let data = htmlString.data(using: .utf8) else { return } do { let attributedString = try NSAttributedString( data: data, options: [.documentType: NSAttributedString.DocumentType.html], documentAttributes: nil ) ... } catch { print(error) } ```
25,065,017
I'm learning objective c a little bit to write an iPad app. I've mostly done some html5/php projects and learned some python at university. But one thing that really blows my mind is how hard it is to just style some text in an objective C label. Maybe I'm coming from a lazy markdown generation, but really, if I want to let an UILabel look like: > > > > > > **Objective:** Construct an *equilateral* triangle from the line segment AB. > > > > > > > > > In markdown this is as simple as: `**Objective:** Construct an *equilateral* triangle from the line segment AB.` Is there really no pain free objective C way to do this ? All the tutorials I read really wanted me to write like 15 lines of code. For something as simple as this. So my question is, what is the easiest way to do this, if you have a lot of styling to do in your app ? Will styling text become more natural with swift in iOS8 ?
2014/07/31
[ "https://Stackoverflow.com/questions/25065017", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2517546/" ]
You can use `NSAttributedString`'s `data:options:documentAttributes:error:` initializer (first available in iOS 7.0 SDK). ``` import UIKit let htmlString = "<b>Objective</b>: Construct an <i>equilateral</i> triangle from the line segment AB." let htmlData = htmlString.dataUsingEncoding(NSUTF8StringEncoding) let options = [NSDocumentTypeDocumentAttribute: NSHTMLTextDocumentType] var error : NSError? = nil let attributedString = NSAttributedString(data: htmlData, options: options, documentAttributes: nil, error: &error) if error == nil { // we're good } ``` ***Note:*** You might also want to include `NSDefaultAttributesDocumentAttribute` option in the `options` dictionary to provide additional global styling (such as telling not to use Times New Roman). Take a look into **[NSAttributedString UIKit Additions Reference](https://developer.apple.com/library/ios/documentation/uikit/reference/NSAttributedString_UIKit_Additions/Reference/Reference.html)** for more information.
Just to update the [akashivskyy’s answer](https://stackoverflow.com/a/25068643/1271826) (+1) with contemporary Swift syntax: ```swift guard let data = htmlString.data(using: .utf8) else { return } do { let attributedString = try NSAttributedString( data: data, options: [.documentType: NSAttributedString.DocumentType.html], documentAttributes: nil ) ... } catch { print(error) } ```
22,071,987
I haven't been able to find a function to generate an array of random floats of a given length between a certain range. I've looked at [Random sampling](http://docs.scipy.org/doc/numpy/reference/routines.random.html) but no function seems to do what I need. [random.uniform](http://docs.python.org/2/library/random.html#random.uniform) comes close but it only returns a single element, not a specific number. This is what I'm after: ``` ran_floats = some_function(low=0.5, high=13.3, size=50) ``` which would return an array of 50 random non-unique floats (ie: repetitions are allowed) uniformly distributed in the range `[0.5, 13.3]`. Is there such a function?
2014/02/27
[ "https://Stackoverflow.com/questions/22071987", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1391441/" ]
[`np.random.uniform`](https://numpy.org/doc/stable/reference/random/generated/numpy.random.uniform.html) fits your use case: ``` sampl = np.random.uniform(low=0.5, high=13.3, size=(50,)) ``` **Update Oct 2019:** While the syntax is still supported, it looks like the API changed with NumPy 1.17 to support greater control over the random number generator. Going forward the API has changed and you should look at <https://docs.scipy.org/doc/numpy/reference/random/generated/numpy.random.Generator.uniform.html> The enhancement proposal is here: <https://numpy.org/neps/nep-0019-rng-policy.html>
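As a sketch of the newer Generator-based API mentioned in the update (assuming NumPy >= 1.17; the seed is optional and shown only for reproducibility):

```python
import numpy as np

# Create a Generator and draw 50 uniform floats from [0.5, 13.3).
rng = np.random.default_rng(42)  # 42 is an arbitrary seed choice
sampl = rng.uniform(low=0.5, high=13.3, size=50)
```

`default_rng` replaces the legacy global state of `np.random.uniform` with an explicit generator object, which makes seeding and parallel use easier to reason about.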
This is the simplest way ``` np.random.uniform(start,stop,(rows,columns)) ```
22,071,987
I haven't been able to find a function to generate an array of random floats of a given length between a certain range. I've looked at [Random sampling](http://docs.scipy.org/doc/numpy/reference/routines.random.html) but no function seems to do what I need. [random.uniform](http://docs.python.org/2/library/random.html#random.uniform) comes close but it only returns a single element, not a specific number. This is what I'm after: ``` ran_floats = some_function(low=0.5, high=13.3, size=50) ``` which would return an array of 50 random non-unique floats (ie: repetitions are allowed) uniformly distributed in the range `[0.5, 13.3]`. Is there such a function?
2014/02/27
[ "https://Stackoverflow.com/questions/22071987", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1391441/" ]
Why not use a list comprehension? In Python 2 ``` ran_floats = [random.uniform(low,high) for _ in xrange(size)] ``` In Python 3, `range` works like `xrange`([ref](https://www.geeksforgeeks.org/range-vs-xrange-python/)) ``` ran_floats = [random.uniform(low,high) for _ in range(size)] ```
The for loop in a list comprehension takes time and makes it slow. It is better to use numpy's parameters (low, high, size, etc.): ``` import numpy as np import time rang = 10000 tic = time.time() for i in range(rang): sampl = np.random.uniform(low=0, high=2, size=(182)) print("it took: ", time.time() - tic) tic = time.time() for i in range(rang): ran_floats = [np.random.uniform(0,2) for _ in range(182)] print("it took: ", time.time() - tic) ``` sample output: ('it took: ', 0.06406784057617188) ('it took: ', 1.7253198623657227)
22,071,987
I haven't been able to find a function to generate an array of random floats of a given length between a certain range. I've looked at [Random sampling](http://docs.scipy.org/doc/numpy/reference/routines.random.html) but no function seems to do what I need. [random.uniform](http://docs.python.org/2/library/random.html#random.uniform) comes close but it only returns a single element, not a specific number. This is what I'm after: ``` ran_floats = some_function(low=0.5, high=13.3, size=50) ``` which would return an array of 50 random non-unique floats (ie: repetitions are allowed) uniformly distributed in the range `[0.5, 13.3]`. Is there such a function?
2014/02/27
[ "https://Stackoverflow.com/questions/22071987", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1391441/" ]
There may already be a function to do what you're looking for, but I don't know about it (yet?). In the meantime, I would suggest using: ``` ran_floats = numpy.random.rand(50) * (13.3-0.5) + 0.5 ``` This will produce an array of shape (50,) with a uniform distribution between 0.5 and 13.3. You could also define a function: ``` def random_uniform_range(shape=(1,), low=0, high=1): """ Random uniform range Produces a random uniform distribution of specified shape, with arbitrary max and min values. Default shape is (1,), and default range is [0,1]. """ return numpy.random.rand(*shape) * (high - low) + low ``` **EDIT**: Hmm, yeah, so I missed it, there is numpy.random.uniform() with the same exact call you want! Try `import numpy; help(numpy.random.uniform)` for more information.
The for loop in a list comprehension takes time and makes it slow. It is better to use numpy's parameters (low, high, size, etc.): ``` import numpy as np import time rang = 10000 tic = time.time() for i in range(rang): sampl = np.random.uniform(low=0, high=2, size=(182)) print("it took: ", time.time() - tic) tic = time.time() for i in range(rang): ran_floats = [np.random.uniform(0,2) for _ in range(182)] print("it took: ", time.time() - tic) ``` sample output: ('it took: ', 0.06406784057617188) ('it took: ', 1.7253198623657227)
22,071,987
I haven't been able to find a function to generate an array of random floats of a given length between a certain range. I've looked at [Random sampling](http://docs.scipy.org/doc/numpy/reference/routines.random.html) but no function seems to do what I need. [random.uniform](http://docs.python.org/2/library/random.html#random.uniform) comes close but it only returns a single element, not a specific number. This is what I'm after: ``` ran_floats = some_function(low=0.5, high=13.3, size=50) ``` which would return an array of 50 random non-unique floats (ie: repetitions are allowed) uniformly distributed in the range `[0.5, 13.3]`. Is there such a function?
2014/02/27
[ "https://Stackoverflow.com/questions/22071987", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1391441/" ]
Why not to combine [random.uniform](http://docs.python.org/2/library/random.html#random.uniform) with a list comprehension? ``` >>> def random_floats(low, high, size): ... return [random.uniform(low, high) for _ in xrange(size)] ... >>> random_floats(0.5, 2.8, 5) [2.366910411506704, 1.878800401620107, 1.0145196974227986, 2.332600336488709, 1.945869474662082] ```
This should work for your example ``` sample = np.random.random([50, ]) * (13.3 - 0.5) + 0.5 ```
22,071,987
I haven't been able to find a function to generate an array of random floats of a given length between a certain range. I've looked at [Random sampling](http://docs.scipy.org/doc/numpy/reference/routines.random.html) but no function seems to do what I need. [random.uniform](http://docs.python.org/2/library/random.html#random.uniform) comes close but it only returns a single element, not a specific number. This is what I'm after: ``` ran_floats = some_function(low=0.5, high=13.3, size=50) ``` which would return an array of 50 random non-unique floats (ie: repetitions are allowed) uniformly distributed in the range `[0.5, 13.3]`. Is there such a function?
2014/02/27
[ "https://Stackoverflow.com/questions/22071987", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1391441/" ]
[`np.random.uniform`](https://numpy.org/doc/stable/reference/random/generated/numpy.random.uniform.html) fits your use case: ``` sampl = np.random.uniform(low=0.5, high=13.3, size=(50,)) ``` **Update Oct 2019:** While the syntax is still supported, it looks like the API changed with NumPy 1.17 to support greater control over the random number generator. Going forward the API has changed and you should look at <https://docs.scipy.org/doc/numpy/reference/random/generated/numpy.random.Generator.uniform.html> The enhancement proposal is here: <https://numpy.org/neps/nep-0019-rng-policy.html>
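As a sketch of the newer Generator-based API mentioned in the update (assuming NumPy >= 1.17; the seed is optional and shown only for reproducibility):

```python
import numpy as np

# Create a Generator and draw 50 uniform floats from [0.5, 13.3).
rng = np.random.default_rng(42)  # 42 is an arbitrary seed choice
sampl = rng.uniform(low=0.5, high=13.3, size=50)
```

`default_rng` replaces the legacy global state of `np.random.uniform` with an explicit generator object, which makes seeding and parallel use easier to reason about.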
`np.random.random_sample(size)` will generate random floats in the half-open interval [0.0, 1.0).
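Since `random_sample` only covers the half-open interval [0.0, 1.0), you would scale and shift the result yourself to reach the asker's range (a minimal sketch, assuming NumPy is imported as `np`):

```python
import numpy as np

# Stretch [0.0, 1.0) onto [0.5, 13.3) by scaling and shifting.
low, high = 0.5, 13.3
ran_floats = low + (high - low) * np.random.random_sample(50)
```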
22,071,987
I haven't been able to find a function to generate an array of random floats of a given length between a certain range. I've looked at [Random sampling](http://docs.scipy.org/doc/numpy/reference/routines.random.html) but no function seems to do what I need. [random.uniform](http://docs.python.org/2/library/random.html#random.uniform) comes close but it only returns a single element, not a specific number. This is what I'm after: ``` ran_floats = some_function(low=0.5, high=13.3, size=50) ``` which would return an array of 50 random non-unique floats (ie: repetitions are allowed) uniformly distributed in the range `[0.5, 13.3]`. Is there such a function?
2014/02/27
[ "https://Stackoverflow.com/questions/22071987", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1391441/" ]
Alternatively you could use [SciPy](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.uniform.html) ``` from scipy import stats stats.uniform(loc=0.5, scale=13.3-0.5).rvs(50) ``` (note that `stats.uniform` takes `loc` and `scale`, so it samples from `[loc, loc + scale]`, not `[low, high]`) and for the record to sample integers it's ``` stats.randint(10, 20).rvs(50) ```
The for loop in a list comprehension takes time and makes it slow. It is better to use numpy's parameters (low, high, size, etc.): ``` import numpy as np import time rang = 10000 tic = time.time() for i in range(rang): sampl = np.random.uniform(low=0, high=2, size=(182)) print("it took: ", time.time() - tic) tic = time.time() for i in range(rang): ran_floats = [np.random.uniform(0,2) for _ in range(182)] print("it took: ", time.time() - tic) ``` sample output: ('it took: ', 0.06406784057617188) ('it took: ', 1.7253198623657227)
22,071,987
I haven't been able to find a function to generate an array of random floats of a given length between a certain range. I've looked at [Random sampling](http://docs.scipy.org/doc/numpy/reference/routines.random.html) but no function seems to do what I need. [random.uniform](http://docs.python.org/2/library/random.html#random.uniform) comes close but it only returns a single element, not a specific number. This is what I'm after: ``` ran_floats = some_function(low=0.5, high=13.3, size=50) ``` which would return an array of 50 random non-unique floats (ie: repetitions are allowed) uniformly distributed in the range `[0.5, 13.3]`. Is there such a function?
2014/02/27
[ "https://Stackoverflow.com/questions/22071987", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1391441/" ]
The for loop in a list comprehension takes time and makes it slow. It is better to use numpy's parameters (low, high, size, etc.): ``` import numpy as np import time rang = 10000 tic = time.time() for i in range(rang): sampl = np.random.uniform(low=0, high=2, size=(182)) print("it took: ", time.time() - tic) tic = time.time() for i in range(rang): ran_floats = [np.random.uniform(0,2) for _ in range(182)] print("it took: ", time.time() - tic) ``` sample output: ('it took: ', 0.06406784057617188) ('it took: ', 1.7253198623657227)
This should work for your example ``` sample = np.random.random([50, ]) * (13.3 - 0.5) + 0.5 ```
22,071,987
I haven't been able to find a function to generate an array of random floats of a given length between a certain range. I've looked at [Random sampling](http://docs.scipy.org/doc/numpy/reference/routines.random.html) but no function seems to do what I need. [random.uniform](http://docs.python.org/2/library/random.html#random.uniform) comes close but it only returns a single element, not a specific number. This is what I'm after: ``` ran_floats = some_function(low=0.5, high=13.3, size=50) ``` which would return an array of 50 random non-unique floats (ie: repetitions are allowed) uniformly distributed in the range `[0.5, 13.3]`. Is there such a function?
2014/02/27
[ "https://Stackoverflow.com/questions/22071987", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1391441/" ]
Why not use a list comprehension? In Python 2 ``` ran_floats = [random.uniform(low,high) for _ in xrange(size)] ``` In Python 3, `range` works like `xrange`([ref](https://www.geeksforgeeks.org/range-vs-xrange-python/)) ``` ran_floats = [random.uniform(low,high) for _ in range(size)] ```
This should work for your example ``` sample = np.random.random([50, ]) * (13.3 - 0.5) + 0.5 ```
22,071,987
I haven't been able to find a function to generate an array of random floats of a given length between a certain range. I've looked at [Random sampling](http://docs.scipy.org/doc/numpy/reference/routines.random.html) but no function seems to do what I need. [random.uniform](http://docs.python.org/2/library/random.html#random.uniform) comes close but it only returns a single element, not a specific number. This is what I'm after: ``` ran_floats = some_function(low=0.5, high=13.3, size=50) ``` which would return an array of 50 random non-unique floats (ie: repetitions are allowed) uniformly distributed in the range `[0.5, 13.3]`. Is there such a function?
2014/02/27
[ "https://Stackoverflow.com/questions/22071987", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1391441/" ]
Why not use a list comprehension? In Python 2 ``` ran_floats = [random.uniform(low,high) for _ in xrange(size)] ``` In Python 3, `range` works like `xrange`([ref](https://www.geeksforgeeks.org/range-vs-xrange-python/)) ``` ran_floats = [random.uniform(low,high) for _ in range(size)] ```
This is the simplest way ``` np.random.uniform(start,stop,(rows,columns)) ```
22,071,987
I haven't been able to find a function to generate an array of random floats of a given length between a certain range. I've looked at [Random sampling](http://docs.scipy.org/doc/numpy/reference/routines.random.html) but no function seems to do what I need. [random.uniform](http://docs.python.org/2/library/random.html#random.uniform) comes close but it only returns a single element, not a specific number. This is what I'm after: ``` ran_floats = some_function(low=0.5, high=13.3, size=50) ``` which would return an array of 50 random non-unique floats (ie: repetitions are allowed) uniformly distributed in the range `[0.5, 13.3]`. Is there such a function?
2014/02/27
[ "https://Stackoverflow.com/questions/22071987", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1391441/" ]
This is the simplest way ``` np.random.uniform(start,stop,(rows,columns)) ```
The for loop in a list comprehension takes time and makes it slow. It is better to use numpy's parameters (low, high, size, etc.): ``` import numpy as np import time rang = 10000 tic = time.time() for i in range(rang): sampl = np.random.uniform(low=0, high=2, size=(182)) print("it took: ", time.time() - tic) tic = time.time() for i in range(rang): ran_floats = [np.random.uniform(0,2) for _ in range(182)] print("it took: ", time.time() - tic) ``` sample output: ('it took: ', 0.06406784057617188) ('it took: ', 1.7253198623657227)