35,869,561
For a task I am to use ConditionalProbDist with LidstoneProbDist as the estimator, adding +0.01 to the sample count for each bin. I thought the following line of code would achieve this, but it produces a ValueError:

```
fd = nltk.ConditionalProbDist(fd, nltk.probability.LidstoneProbDist, 0.01)
```

I'm not sure how to format the arguments within ConditionalProbDist, and I haven't had much luck finding out how to do so via Python's help feature or Google, so if anyone could set me right, it would be much appreciated!
2016/03/08
[ "https://Stackoverflow.com/questions/35869561", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3255571/" ]
I found [the probability tutorial](http://www.nltk.org/howto/probability.html) on the NLTK website quite helpful as a reference. As mentioned in the answer above, using a lambda expression is a good idea, since the `ConditionalProbDist` will generate a frequency distribution (`nltk.FreqDist`) on the fly that's passed through to the estimator. A more subtle point is that passing through the bins parameter can't be done if you don't know how many bins you originally have in your input sample! However, a `FreqDist` has the number of bins available as `FreqDist.B()` ([docs](http://www.nltk.org/api/nltk.html#nltk.probability.FreqDist.B)). Instead use `FreqDist` as the only parameter to your lambda: ``` from nltk.probability import * # ... # Using the given parameters of one extra bin and a gamma of 0.01 lidstone_estimator = lambda fd: LidstoneProbDist(fd, 0.01, fd.B() + 1) conditional_pd = ConditionalProbDist(conditional_fd, lidstone_estimator) ``` I know this question is very old now, but I too struggled to find documentation, so I'm documenting it here in case someone else down the line runs into a similar struggle. Good luck (with fnlp)!
You probably don't need this anymore as the question is very old, but still: you can pass LidstoneProbDist arguments to ConditionalProbDist with the help of a lambda:

```
estimator = lambda fdist, bins: nltk.LidstoneProbDist(fdist, 0.01, bins)
cpd = nltk.ConditionalProbDist(fd, estimator, bins)  # bins must already be defined, e.g. the vocabulary size
```
2,442
68,293,321
In Python/Pandas, I want to create a column in my dataframe that shows the average number of days between customer visits at a venue. That is, for each customer, what are the average number of days between that customer's visits? Data looks like [Image of My Data](https://i.stack.imgur.com/NPFMU.png) Sorry I'm really inexperienced and don't know how to type the data up other than this. I am following the solution in [this StackOverflow answer](https://stackoverflow.com/questions/45241221/python-pandas-calculate-average-days-between-dates), except that that person wanted the average number of days between visits in general, and I want days between visits for each customer. Thank you.
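A minimal sketch of the per-customer variant, using hypothetical column names (`customer_id`, `visit_date`), since the real names are only visible in the screenshot:

```python
import pandas as pd

# Hypothetical example data standing in for the screenshot
df = pd.DataFrame({
    'customer_id': [1, 1, 1, 2, 2],
    'visit_date': pd.to_datetime(
        ['2021-01-01', '2021-01-05', '2021-01-11', '2021-02-01', '2021-02-08']),
})

df = df.sort_values(['customer_id', 'visit_date'])
# Days between consecutive visits, computed within each customer group
df['days_between'] = df.groupby('customer_id')['visit_date'].diff().dt.days
# Broadcast each customer's average back onto every row
df['avg_days_between'] = df.groupby('customer_id')['days_between'].transform('mean')
```

The only change from the linked answer is the `groupby('customer_id')`, which restricts both the `diff()` and the `mean()` to each customer's own rows.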
2021/07/07
[ "https://Stackoverflow.com/questions/68293321", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14814034/" ]
On windows linking DLLs goes through a trampoline library (.lib file) which generates the right bindings. The convention for these is to prefix the function names with `__imp__` ([there is a related C++ answer](https://stackoverflow.com/a/5159395/1818675)). There is an [open issue](https://github.com/rust-lang/reference/issues/638) that explains some of the difficulties creating and linking rust dlls under windows. Here are the relevant bits: > > If you start developing on Windows, Rust will produce a mylib.dll and mylib.dll.lib. To use this lib again from Rust you will have to specify #[link(name = "mylib.dll")], thus giving the impression that the full file name has to be specified. On Mac, however, #[link(name = "libmylib.dylib"] will fail (likewise Linux). > > > > > If you start developing on Mac and Linux, #[link(name = "mylib")] just works, giving you the impression Rust handles the name resolution (fully) automatically like other platforms that just require the base name. > > > > > In fact, the correct way to cross platform link against a dylib produced by Rust seems to be: > > > ```rust #[cfg_attr(all(target_os = "windows", target_env = "msvc"), link(name = "dylib.dll"))] #[cfg_attr(not(all(target_os = "windows", target_env = "msvc")), link(name = "dylib"))] extern "C" {} ```
This is not my ideal answer, but it is how I solved the problem. What I'm still looking for is a way to get the Microsoft linker (I believe) to output full verbosity in the Rust build, as it can do when doing C++ builds. There are options to the build that might trigger this, but I haven't found them yet. That, plus this name munging explained in maybe 80% less text than I write here, would be an ideal answer I think.

The users.rust-lang.org user chrefr helped by asking some clarifying questions which jogged my brain. He mentioned that "*name mangling schema is unspecified in C++*", which was my aha moment. I was trying to force Rust to make the Rust linker look for my external output() API function, expecting it to look for the mangled name, as the native API call I am accessing was not declared with "cdecl" to prevent name mangling. I simply forced Rust to use the mangled name I found with dumpbin.exe (code below).

What I was hoping for as an answer was a way to get linker.exe to output all the symbols it is looking for, which would have been "output", which was what the compiler error was stating. I was thinking it was looking for a mangled name and wanted to compare the two mangled names by getting the Microsoft linker to output what it was attempting to match.

So my solution was to use the dumpbin munged name in my #[link] directive:

```
//#[link(name="myNativeLib")]
//#[link(name="myNativeLib", kind="dylib")] // prepends _imp to symbol below
#[link(name="myNativeLib", kind="static")] // I'm linking with a DLL
extern {
    //#[link_name = "output"]
    #[link_name = "?output@@YAXPEBDZZ"] // Name found via DUMPBIN.exe /Exports
    fn output( format:LPCTSTR, ...);
}
```

Although I have access to the sources of myNativeLib, these are not distributed and not going to change. The \*.lib and \*.exp are only available internally, so long term I will need a solution to bind to these modules that relies only on the \*.dll being present. That suggests I might need to dynamically load the DLL instead of doing what I consider "implicit" linking of the DLL, as I suspect Rust is looking just at the \*.lib module to resolve the symbols. I need a kind="dylibOnly" for Windows DLLs that are distributed without \*.lib and \*.exp modules.

But for the moment I was able to get all my link issues resolved. I can now call my Rust DLL from a VS2019 Platform Toolset V142 "main", and the Rust DLL can call a 'C' DLL function "output", and the data goes to the proprietary stream that the native "output" function was designed to send data to. There were several hoops involved, but generally cargo/rustc/cbindgen worked well for this newbie.

Now I'm trying to consider any compute-intensive task where multithreading is being avoided in 'C' that could be safely implemented in Rust and benchmarked to illustrate that all this pain is worthwhile.
2,443
13,217,434
I'm planning to insert data into the below CF that has compound keys:

```
CREATE TABLE event_attend (
  event_id int,
  event_type varchar,
  event_user_id int,
  PRIMARY KEY (event_id, event_type) #compound keys...
);
```

But I can't insert data into this CF from Python using cql. (http://code.google.com/a/apache-extras.org/p/cassandra-dbapi2/)

```
import cql
connection = cql.connect(host, port, keyspace)
cursor = connection.cursor()
cursor.execute("INSERT INTO event_attend (event_id, event_type, event_user_id) VALUES (1, 'test', 2)", dict({}) )
```

I get the following traceback:

```
Traceback (most recent call last):
  File "./v2_initial.py", line 153, in <module>
    db2cass.execute()
  File "./v2_initial.py", line 134, in execute
    cscursor.execute("insert into event_attend (event_id, event_type, event_user_id ) values (1, 'test', 2)", dict({}))
  File "/usr/local/pythonbrew/pythons/Python-2.7.2/lib/python2.7/site-packages/cql-1.4.0-py2.7.egg/cql/cursor.py", line 80, in execute
    response = self.get_response(prepared_q, cl)
  File "/usr/local/pythonbrew/pythons/Python-2.7.2/lib/python2.7/site-packages/cql-1.4.0-py2.7.egg/cql/thrifteries.py", line 80, in get_response
    return self.handle_cql_execution_errors(doquery, compressed_q, compress)
  File "/usr/local/pythonbrew/pythons/Python-2.7.2/lib/python2.7/site-packages/cql-1.4.0-py2.7.egg/cql/thrifteries.py", line 98, in handle_cql_execution_errors
    raise cql.ProgrammingError("Bad Request: %s" % ire.why)
cql.apivalues.ProgrammingError: Bad Request: unable to make int from 'event_user_id'
```

What am I doing wrong?
2012/11/04
[ "https://Stackoverflow.com/questions/13217434", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1797779/" ]
It looks like you are trying to follow the example in: <http://pypi.python.org/pypi/cql/1.4.0> ``` import cql con = cql.connect(host, port, keyspace) cursor = con.cursor() cursor.execute("CQL QUERY", dict(kw='Foo', kw2='Bar', kwn='etc...')) ``` However, if you only need to insert one row (like in your question), just drop the empty dict() parameter. Also, since you are using composite keys, make sure you use CQL3 <http://www.datastax.com/dev/blog/whats-new-in-cql-3-0> ``` connection = cql.connect('localhost:9160', cql_version='3.0.0') ``` The following code should work (just adapt it to localhost if needed): ``` import cql con = cql.connect('172.24.24.24', 9160, keyspace, cql_version='3.0.0') print ("Connected!") cursor = con.cursor() CQLString = "INSERT INTO event_attend (event_id, event_type, event_user_id) VALUES (131, 'Party', 3156);" cursor.execute(CQLString) ```
For Python 2.7, 3.3, 3.4, 3.5, and 3.6, you can install the current driver with

```
$ pip install cassandra-driver
```

And in Python:

```
import cassandra
```

Documentation can be found under <https://datastax.github.io/python-driver/getting_started.html#passing-parameters-to-cql-queries>
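Building on that link, a minimal sketch of the same insert with the modern `cassandra-driver` (the contact point and keyspace name are assumptions; the `%s` placeholders are the driver's parameter style):

```python
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])       # adjust to your cluster's contact points
session = cluster.connect('my_keyspace')  # assumed keyspace name

# Parameters are passed separately, so the driver handles quoting and types
session.execute(
    "INSERT INTO event_attend (event_id, event_type, event_user_id) VALUES (%s, %s, %s)",
    (1, 'test', 2),
)
cluster.shutdown()
```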
2,445
41,351,431
Suppose I have the following numpy structured array: ``` In [250]: x Out[250]: array([(22, 2, -1000000000, 2000), (22, 2, 400, 2000), (22, 2, 804846, 2000), (44, 2, 800, 4000), (55, 5, 900, 5000), (55, 5, 1000, 5000), (55, 5, 8900, 5000), (55, 5, 11400, 5000), (33, 3, 14500, 3000), (33, 3, 40550, 3000), (33, 3, 40990, 3000), (33, 3, 44400, 3000)], dtype=[('f1', '<i4'), ('f2', '<f4'), ('f3', '<f4'), ('f4', '<i4')]) ``` I am trying to modify a subset of the above array to a regular numpy array. It is essential for my application that no copies are created (only views). Fields are retrieved from the above structured array by using the following function: ``` def fields_view(array, fields): return array.getfield(numpy.dtype( {name: array.dtype.fields[name] for name in fields} )) ``` If I am interested in fields 'f2' and 'f3', I would do the following: ``` In [251]: y=fields_view(x,['f2','f3']) In [252]: y Out [252]: array([(2.0, -1000000000.0), (2.0, 400.0), (2.0, 804846.0), (2.0, 800.0), (5.0, 900.0), (5.0, 1000.0), (5.0, 8900.0), (5.0, 11400.0), (3.0, 14500.0), (3.0, 40550.0), (3.0, 40990.0), (3.0, 44400.0)], dtype={'names':['f2','f3'], 'formats':['<f4','<f4'], 'offsets':[4,8], 'itemsize':12}) ``` There is a way to directly get an ndarray from the 'f2' and 'f3' fields of the original structured array. However, for my application, it is necessary to build this intermediary structured array as this data subset is an attribute of a class. I can't convert the intermediary structured array to a regular numpy array without doing a copy. ``` In [253]: y.view(('<f4', len(y.dtype.names))) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-54-f8fc3a40fd1b> in <module>() ----> 1 y.view(('<f4', len(y.dtype.names))) ValueError: new type not compatible with array. 
``` This function can also be used to convert a record array to an ndarray: ``` def recarr_to_ndarr(x,typ): fields = x.dtype.names shape = x.shape + (len(fields),) offsets = [x.dtype.fields[name][1] for name in fields] assert not any(np.diff(offsets, n=2)) strides = x.strides + (offsets[1] - offsets[0],) y = np.ndarray(shape=shape, dtype=typ, buffer=x, offset=offsets[0], strides=strides) return y ``` However, I get the following error: ``` In [254]: recarr_to_ndarr(y,'<f4') --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-65-2ebda2a39e9f> in <module>() ----> 1 recarr_to_ndarr(y,'<f4') <ipython-input-62-8a9eea8e7512> in recarr_to_ndarr(x, typ) 8 strides = x.strides + (offsets[1] - offsets[0],) 9 y = np.ndarray(shape=shape, dtype=typ, buffer=x, ---> 10 offset=offsets[0], strides=strides) 11 return y 12 TypeError: expected a single-segment buffer object ``` The function works fine if I create a copy: ``` In [255]: recarr_to_ndarr(np.array(y),'<f4') Out[255]: array([[ 2.00000000e+00, -1.00000000e+09], [ 2.00000000e+00, 4.00000000e+02], [ 2.00000000e+00, 8.04846000e+05], [ 2.00000000e+00, 8.00000000e+02], [ 5.00000000e+00, 9.00000000e+02], [ 5.00000000e+00, 1.00000000e+03], [ 5.00000000e+00, 8.90000000e+03], [ 5.00000000e+00, 1.14000000e+04], [ 3.00000000e+00, 1.45000000e+04], [ 3.00000000e+00, 4.05500000e+04], [ 3.00000000e+00, 4.09900000e+04], [ 3.00000000e+00, 4.44000000e+04]], dtype=float32) ``` There seems to be no difference between the two arrays: ``` In [66]: y Out[66]: array([(2.0, -1000000000.0), (2.0, 400.0), (2.0, 804846.0), (2.0, 800.0), (5.0, 900.0), (5.0, 1000.0), (5.0, 8900.0), (5.0, 11400.0), (3.0, 14500.0), (3.0, 40550.0), (3.0, 40990.0), (3.0, 44400.0)], dtype={'names':['f2','f3'], 'formats':['<f4','<f4'], 'offsets':[4,8], 'itemsize':12}) In [67]: np.array(y) Out[67]: array([(2.0, -1000000000.0), (2.0, 400.0), (2.0, 804846.0), (2.0, 800.0), (5.0, 900.0), (5.0, 1000.0), (5.0, 8900.0), (5.0, 11400.0), (3.0, 14500.0), (3.0, 40550.0), (3.0, 40990.0), (3.0, 44400.0)], dtype={'names':['f2','f3'], 'formats':['<f4','<f4'], 'offsets':[4,8], 'itemsize':12}) ```
2016/12/27
[ "https://Stackoverflow.com/questions/41351431", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2407231/" ]
This answer is a bit long and rambling. I started with what I knew from previous work on taking array views, and then tried to relate that to your functions. ================ In your case, all fields are 4 bytes long, both floats and ints. I can then view it as all ints or all floats: ``` In [1431]: x Out[1431]: array([(22, 2.0, -1000000000.0, 2000), (22, 2.0, 400.0, 2000), (22, 2.0, 804846.0, 2000), (44, 2.0, 800.0, 4000), (55, 5.0, 900.0, 5000), (55, 5.0, 1000.0, 5000), (55, 5.0, 8900.0, 5000), (55, 5.0, 11400.0, 5000), (33, 3.0, 14500.0, 3000), (33, 3.0, 40550.0, 3000), (33, 3.0, 40990.0, 3000), (33, 3.0, 44400.0, 3000)], dtype=[('f1', '<i4'), ('f2', '<f4'), ('f3', '<f4'), ('f4', '<i4')]) In [1432]: x.view('i4') Out[1432]: array([ 22, 1073741824, -831624408, 2000, 22, 1073741824, 1137180672, 2000, 22, 1073741824, 1229225696, 2000, 44, 1073741824, 1145569280, .... 3000]) In [1433]: x.view('f4') Out[1433]: array([ 3.08285662e-44, 2.00000000e+00, -1.00000000e+09, 2.80259693e-42, 3.08285662e-44, 2.00000000e+00, .... 4.20389539e-42], dtype=float32) ``` This view is 1d. I can reshape and slice the 2 float columns ``` In [1434]: x.shape Out[1434]: (12,) In [1435]: x.view('f4').reshape(12,-1) Out[1435]: array([[ 3.08285662e-44, 2.00000000e+00, -1.00000000e+09, 2.80259693e-42], [ 3.08285662e-44, 2.00000000e+00, 4.00000000e+02, 2.80259693e-42], ... [ 4.62428493e-44, 3.00000000e+00, 4.44000000e+04, 4.20389539e-42]], dtype=float32) In [1437]: x.view('f4').reshape(12,-1)[:,1:3] Out[1437]: array([[ 2.00000000e+00, -1.00000000e+09], [ 2.00000000e+00, 4.00000000e+02], [ 2.00000000e+00, 8.04846000e+05], [ 2.00000000e+00, 8.00000000e+02], ... [ 3.00000000e+00, 4.44000000e+04]], dtype=float32) ``` That this is a view can be verified by doing a bit of inplace math, and seeing the results in `x`: ``` In [1439]: y=x.view('f4').reshape(12,-1)[:,1:3] In [1440]: y[:,0] += .5 In [1441]: y Out[1441]: array([[ 2.50000000e+00, -1.00000000e+09], [ 2.50000000e+00, 4.00000000e+02], ... [ 3.50000000e+00, 4.44000000e+04]], dtype=float32) In [1442]: x Out[1442]: array([(22, 2.5, -1000000000.0, 2000), (22, 2.5, 400.0, 2000), (22, 2.5, 804846.0, 2000), (44, 2.5, 800.0, 4000), (55, 5.5, 900.0, 5000), (55, 5.5, 1000.0, 5000), (55, 5.5, 8900.0, 5000), (55, 5.5, 11400.0, 5000), (33, 3.5, 14500.0, 3000), (33, 3.5, 40550.0, 3000), (33, 3.5, 40990.0, 3000), (33, 3.5, 44400.0, 3000)], dtype=[('f1', '<i4'), ('f2', '<f4'), ('f3', '<f4'), ('f4', '<i4')]) ``` If the field sizes differed this might be impossible. For example if the floats were 8 bytes. The key is picturing how the structured data is stored, and imagining whether that can be viewed as a simple dtype of multiple columns. And field choice has to be equivalent to a basic slice. Working with ['f1','f4'] would be equivalent to advanced indexing with [:,[0,3], which has to be a copy. ========== The 'direct' field indexing is: ``` z = x[['f2','f3']].view('f4').reshape(12,-1) z -= .5 ``` modifies `z` but with a `futurewarning`. Also it does not modify `x`; `z` has become a copy. I can also see this by looking at `z.__array_interface__['data']`, the data buffer location (and comparing with that of `x` and `y`). ================= Your `fields_view` does create a structured view: ``` In [1480]: w=fields_view(x,['f2','f3']) In [1481]: w.__array_interface__['data'] Out[1481]: (151950184, False) In [1482]: x.__array_interface__['data'] Out[1482]: (151950184, False) ``` which can be used to modify `x`, `w['f2'] -= .5`. So it is more versatile than the 'direct' `x[['f2','f3']]`. 
The `w` dtype is

```
dtype({'names':['f2','f3'], 'formats':['<f4','<f4'], 'offsets':[4,8], 'itemsize':12})
```

Adding `print(shape, typ, offsets, strides)` to your `recarr_to_ndarr`, I get (py3)

```
In [1499]: recarr_to_ndarr(w,'<f4')
(12, 2) <f4 [4, 8] (16, 4)
....
ValueError: ndarray is not contiguous

In [1500]: np.ndarray(shape=(12,2), dtype='<f4', buffer=w.data, offset=4, strides=(16,4))
...
BufferError: memoryview: underlying buffer is not contiguous
```

That `contiguous` problem must be referring to the values shown in `w.flags`:

```
In [1502]: w.flags
Out[1502]:
  C_CONTIGUOUS : False
  F_CONTIGUOUS : False
  ....
```

It's interesting that `w.dtype.descr` converts the 'offsets' into an unnamed field:

```
In [1506]: w.__array_interface__
Out[1506]:
{'data': (151950184, False),
 'descr': [('', '|V4'), ('f2', '<f4'), ('f3', '<f4')],
 'shape': (12,),
 'strides': (16,),
 'typestr': '|V12',
 'version': 3}
```

One way or another, `w` has a non-contiguous data buffer, which can't be used to create a new array. Flattened, the data buffer looks something like

```
xoox|xoox|xoox|...
# x 4 bytes we want to skip
# o 4 bytes we want to use
# | invisible bdry between records in x
```

The `y` I constructed above has:

```
In [1511]: y.__array_interface__
Out[1511]:
{'data': (151950188, False),
 'descr': [('', '<f4')],
 'shape': (12, 2),
 'strides': (16, 4),
 'typestr': '<f4',
 'version': 3}
```

So it accesses the `o` bytes with a 4-byte offset, and then (16,4) strides and (12,2) shape.

If I modify your `ndarray` call to use the original `x.data`, it works:

```
In [1514]: xx=np.ndarray(shape=(12,2), dtype='<f4', buffer=x.data, offset=4, strides=(16,4))

In [1515]: xx
Out[1515]:
array([[  2.00000000e+00,  -1.00000000e+09],
       [  2.00000000e+00,   4.00000000e+02],
       ....
       [  3.00000000e+00,   4.44000000e+04]], dtype=float32)
```

with the same array\_interface as my `y`:

```
In [1516]: xx.__array_interface__
Out[1516]:
{'data': (151950188, False),
 'descr': [('', '<f4')],
 'shape': (12, 2),
 'strides': (16, 4),
 'typestr': '<f4',
 'version': 3}
```
hpaulj was right in saying that the problem is that the subset of the structured array is not contiguous. Interestingly, I figured out a way to make the array subset contiguous with the following function: ``` def view_fields(a, fields): """ `a` must be a numpy structured array. `names` is the collection of field names to keep. Returns a view of the array `a` (not a copy). """ dt = a.dtype formats = [dt.fields[name][0] for name in fields] offsets = [dt.fields[name][1] for name in fields] itemsize = a.dtype.itemsize newdt = np.dtype(dict(names=fields, formats=formats, offsets=offsets, itemsize=itemsize)) b = a.view(newdt) return b In [5]: view_fields(x,['f2','f3']).flags Out[5]: C_CONTIGUOUS : True F_CONTIGUOUS : True OWNDATA : False WRITEABLE : True ALIGNED : True UPDATEIFCOPY : False ``` The old function: ``` In [10]: fields_view(x,['f2','f3']).flags Out[10]: C_CONTIGUOUS : False F_CONTIGUOUS : False OWNDATA : False WRITEABLE : True ALIGNED : True UPDATEIFCOPY : False ```
2,446
62,980,784
I'm importing skimage in a python code. ``` from skimage.feature import greycomatrix, greycoprops ``` and I get this error > > ***No module named 'skimage'*** > > > Although I've already installed the scikit-image. Can anyone help ? This is the output of pip freeze [![enter image description here](https://i.stack.imgur.com/cC9k8.png)](https://i.stack.imgur.com/cC9k8.png) [![enter image description here](https://i.stack.imgur.com/rnE9b.png)](https://i.stack.imgur.com/rnE9b.png) [![enter image description here](https://i.stack.imgur.com/jXl7N.png)](https://i.stack.imgur.com/jXl7N.png)
2020/07/19
[ "https://Stackoverflow.com/questions/62980784", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8151481/" ]
You can use `pip install scikit-image`. Also, see the [recommended procedure](http://scikit-image.org/docs/dev/install.html).
If you are using python3 you should install the package using `python3 -m pip install package_name` or `pip3 install package_name` Using the `pip` binary will install the package for `python2` on some systems.
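When `pip freeze` shows the package but the import still fails, the usual cause is that pip and the running script belong to different interpreters. A quick check from inside the failing script:

```python
import sys

print(sys.executable)  # the interpreter actually running this script
print(sys.path)        # the directories it searches for packages
```

If `sys.executable` differs from the interpreter your `pip` belongs to (compare with `pip -V`), install with that exact interpreter: `<path-from-sys.executable> -m pip install scikit-image`.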
2,447
69,465,428
I have a dictionary that looks like this: d = {key1 : {(key2,key3) : value}, ...} so it is a dictionary of dictionaries and in the inside dict the keys are tuples. I would like to get a triple nested dict: {key1 : {key2 : {key3 : value}, ...} I know how to do it with 2 loops and a condition: ``` new_d = {} for key1, inside_dict in d.items(): new_d[key1] = {} for (key2,key3), value in inside_dict.items(): if key2 in new_d[key1].keys(): new_d[key1][key2][key3] = value else: new_d[key1][key2] = {key3 : value} ``` Edit: key2 values are not guaranteed to be unique. This is why I added the condition It feels very unpythonic to me. Is there a faster and/or shorter way to do this?
2021/10/06
[ "https://Stackoverflow.com/questions/69465428", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11930768/" ]
You could use the common trick for nesting dicts arbitrarily, using `collections.defaultdict`: ``` from collections import defaultdict tree = lambda: defaultdict(tree) new_d = tree() for k1, dct in d.items(): for (k2, k3), val in dct.items(): new_d[k1][k2][k3] = val ```
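If downstream code expects plain dicts rather than defaultdicts (for example, so that missing keys raise `KeyError` instead of silently creating nodes), one small helper converts the result back:

```python
from collections import defaultdict

def to_plain_dict(d):
    """Recursively convert nested defaultdicts into ordinary dicts."""
    if isinstance(d, defaultdict):
        return {k: to_plain_dict(v) for k, v in d.items()}
    return d

new_d = to_plain_dict(new_d)
```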
If I understand the problem correctly, for this case you can wrap all the looping up in a dict comprehension. Note that this assumes your data is unique; per the question's edit, key2 values can repeat, in which case later entries would silently overwrite earlier ones:

```py
data = {"key1": {("key2", "key3"): "val"}}

{k: {keys[0]: {keys[1]: val}} for k,v in data.items() for keys, val in v.items()}
```
2,450
52,029,026
I am developing a Python script for my Telegram bot right now. The problem is: how do I know when my bot is added to a group? Is there an event or something else for that? I want the bot to send a message to the group it's being added to, saying hi and listing the functions it offers. I don't know if any kind of handler is able to deal with this.
2018/08/26
[ "https://Stackoverflow.com/questions/52029026", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4847304/" ]
Very roughly, you would need to do something like this: register a handler that filters only service messages about new chat members, then check if the bot is one of the new chat members.

```
from telegram.ext import Updater, MessageHandler, Filters

def new_member(bot, update):
    for member in update.message.new_chat_members:
        if member.username == 'YourBot':
            update.message.reply_text('Welcome')

updater = Updater('TOKEN')
updater.dispatcher.add_handler(MessageHandler(Filters.status_update.new_chat_members, new_member))
updater.start_polling()
updater.idle()
```
With callbacks (preferred) ========================== As of version 12, the preferred way to handle updates is via callbacks. To use them prior to version 13 state `use_context=True` in your `Updater`. Version 13 will have this as default. ``` from telegram.ext import Updater, MessageHandler, Filters def new_member(update, context): for member in update.message.new_chat_members: if member.username == 'YourBot': update.message.reply_text('Welcome') updater = Updater('TOKEN', use_context=True) # use_context will be True by default in version 13+ updater.dispatcher.add_handler(MessageHandler(Filters.status_update.new_chat_members, new_member)) updater.start_polling() updater.idle() ``` Please note that the order changed here. Instead of having the update as second, it is now the first argument. Executing the code below will result in an Exception like this: ``` AttributeError: 'CallbackContext' object has no attribute 'message' ``` Without callbacks (deprecated in version 12) ============================================ Blatantly copying from [mcont's answer](https://stackoverflow.com/a/52093608/11739543): ``` from telegram.ext import Updater, MessageHandler, Filters def new_member(bot, update): for member in update.message.new_chat_members: if member.username == 'YourBot': update.message.reply_text('Welcome') updater = Updater('TOKEN') updater.dispatcher.add_handler(MessageHandler(Filters.status_update.new_chat_members, new_member)) updater.start_polling() updater.idle() ```
2,451
58,491,838
I was setting up to use Numba along with my AMD GPU. I started out with the most basic example available on their website, calculating the value of Pi using a Monte Carlo simulation. I made some changes to the code so that it can run on the GPU first and then on the CPU. By doing this, I just wanted to compare the time taken to execute the code and verify the results. Below is the code:

```
from numba import jit
import random
from timeit import default_timer as timer

@jit(nopython=True)
def monte_carlo_pi(nsamples):
    acc = 0
    for i in range(nsamples):
        x = random.random()
        y = random.random()
        if (x ** 2 + y ** 2) < 1.0:
            acc += 1
    return 4.0 * acc / nsamples

def monte_carlo_pi_cpu(nsamples):
    acc = 0
    for i in range(nsamples):
        x = random.random()
        y = random.random()
        if (x ** 2 + y ** 2) < 1.0:
            acc += 1
    return 4.0 * acc / nsamples

num = int(input())

start = timer()
random.seed(0)
print(monte_carlo_pi(num))
print("with gpu", timer()-start)

start = timer()
random.seed(0)
print(monte_carlo_pi_cpu(num))
print("without gpu", timer()-start)
```

I was expecting the GPU to perform better, and so it did. However, some results for the GPU and the CPU were not matching:

```
1000000 # input parameter
3.140836 # gpu_out
with gpu 0.2317520289998356
3.14244 # cpu_out
without gpu 0.39849199899981613
```

I am aware that Python does not handle long floating-point operations that well, but these are only 6 decimal places, and I was not expecting such a large discrepancy. Can anyone explain why this difference comes up?
2019/10/21
[ "https://Stackoverflow.com/questions/58491838", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8726146/" ]
I've reorganized your code a bit: ``` import numpy from numba import jit import random from timeit import default_timer as timer @jit(nopython=True) def monte_carlo_pi(nsamples): random.seed(0) acc = 0 for i in range(nsamples): x = random.random() y = random.random() if (x ** 2 + y ** 2) < 1.0: acc += 1 return 4.0 * acc / nsamples num = 1000000 # run the jitted code once to remove compile time from timing monte_carlo_pi(10) start = timer() print(monte_carlo_pi(num)) print("jitted code", timer()-start) start = timer() print(monte_carlo_pi.py_func(num)) print("non-jitted", timer()-start) ``` results in: ``` 3.140936 jitted code 0.01403845699996964 3.14244 non-jitted 0.39901430800000526 ``` Note, you are **not** running the jitted code on your GPU. The code is compiled, but for your CPU. The reason for the difference in the computed value of Pi is likely due to differing implementations of the underlying random number generator. Numba isn't actually using Python's `random` module, but has its own implementation that is meant to mimic it. In fact, if you look at the source code, it appears as if the numba implementation is primarily designed based on numpy's random module, and then just aliases the `random` module from that, so if you swap out `random.random` for `np.random.random`, with the same seed, you get the same results: ``` @jit(nopython=True) def monte_carlo_pi2(nsamples): np.random.seed(0) acc = 0 for i in range(nsamples): x = np.random.random() y = np.random.random() if (x ** 2 + y ** 2) < 1.0: acc += 1 return 4.0 * acc / nsamples ``` Results in: ``` 3.140936 jitted code 0.013946142999998301 3.140936 non-jitted 0.9277294739999888 ``` And just a few other notes: * When timing numba jitted functions, always run the function once to compile it before doing benchmarking so you don't include the one-time compile time cost in the timing * You can access the pure python version of a numba jitted function using `.py_func`, so you don't have to duplicate the code twice.
> > **Q** : *Can anyone explain as to **why** this difference comes up?* > > > The availability and almost pedantic care of systematic use of re-setting the same state via the PRNG-of-choice **`.seed( aRepeatableExperimentSeedNUMBER )`**-method is the root-cause of all these surprises. Proper seeding works **if and only if** the same PRNG-algorithm is used - being principally different in **`random`**-module's `.random()`-method than the one in **`numpy.random`**-module's `.random()`. Another sort of observed artifact ( different values of the dart-throwing **`pi`**-guesstimates ) is related to a rather tiny scale ( yes, `1E6`-points is a tiny amount, compared to the initial axiom of the art of statistics - which is "using **infinitely and only infinitely** sized populations" ), where ***different* order** of using thenumbers that have been ( thanks to a pedantic and systematic re-`seed(0)`-ing the PRNG-FSA ) reproducibly generated into the always the same sequence of values, produces different results ( see difference of values in yesterday's experiments ). These artifacts, however, play less and less important role as the size grows ( as was shown at the very bottom, reproducible experiment ): ``` # 1E+6: 3.138196 # block-wise generation in np.where().sum() # 3.140936 # pair-wise generation in monte_carlo_pi2() # 1E+7: 3.142726 # block-wise generation in np.where().sum() # 3.142358 # pair-wise generation in monte_carlo_pi2() # 3E+7: 3.1421996 # block-wise generation in np.where().sum() # 3.1416629333333335 # pair-wise generation in monte_carlo_pi2() # 1E+8: 3.14178916 # block-wise generation in np.where().sum() # 3.14167324 # pair-wise generation in monte_carlo_pi2() # 1E+9: -. # block-wise generation in np.where().sum() -x-RAM-SWAP- # 3.141618484 # pair-wise generation in monte_carlo_pi2() # 1E10 -. # block-wise generation in np.where().sum() -x-RAM-SWAP- # 3.1415940572 # pair-wise generation in monte_carlo_pi2() # 1E11 -. # block-wise generation in np.where().sum() -x-RAM-SWAP- # 3.14159550084 # pair-wise generation in monte_carlo_pi2() ``` --- Next, let me show another aspect: What are the actual costs of doing so and where do they come from ?!? --------------------------------------------------------------------- A plain pure-**`numpy`** code was to compute this in on *`localhost`* in about **`108 [ms]`** ``` >>> from zmq import Stopwatch; clk = Stopwatch() # [us]-clock resolution >>> np.random.seed(0); clk.start();x = np.random.random( 1000000 ); y = np.random.random( 1000000 ); _ = ( np.where( x**2 + y**2 < 1.0, 1, 0 ).sum() * 4.0 / 1000000 );clk.stop() 108444 >>> _ 3.138196 ``` Here the most of the "costs" are related to the memory-I/O traffic ( for storing twice the 1E6-elements and making them squared ) "halved" problem has been "twice" as fast **`~ 52.7 [ms]`** ``` >>> np.random.seed(0); clk.start(); _ = ( np.where( np.random.random( 1000000 )**2 ... + np.random.random()**2 < 1.0, ... 1, ... 0 ... ).sum() * 4.0 / 1000000 ); clk.stop() 52696 ``` An interim-storage-less **`numpy`**-code was slower a bit on *`localhost`* in about **`~115 [ms]`** ``` >>> np.random.seed(0); clk.start(); _ = ( np.where( np.random.random( 1000000 )**2 ... + np.random.random( 1000000 )**2 < 1.0, ... 1, ... 0 ... 
).sum() * 4.0 / 1000000 ); clk.stop(); print _ 114501 3.138196 ``` An ordinary python code with `numpy.random` PRNG-generator was able to compute the same but in more than **`3,937.9+ [ms]`** ( here you see the python's **`for`**-iterators' looping pains - **4 seconds** compared to **`~ 50 [ms]`** ) plus you can detect a different order of how PRNG-numbers sequence were generated and pair-wise consumed (seen in the result difference) : ``` >>> def monte_carlo_pi2(nsamples): ... np.random.seed(0) ... acc = 0 ... for i in range(nsamples): ... if ( np.random.random()**2 ... + np.random.random()**2 ) < 1.0: ... acc += 1 ... return 4.0 * acc / nsamples >>> np.random.seed( 0 ); clk.start(); _ = monte_carlo_pi2( 1000000 ); clk.stop(); print _ 3937892 3.140936 ``` A **`numba.jit()`**-compiled code was to compute the same in about **`692 [ms]`** as it has to bear and bears also the ***cost-of*-`jit`-*compilation*** ( only the next call will harvest the fruits of this one-stop-cost, executing in about **`~ 50 [ms]`** ): ``` >>> @jit(nopython=True) # COPY/PASTE ... def monte_carlo_pi2(nsamples): ... np.random.seed(0) ... acc = 0 ... for i in range(nsamples): ... x = np.random.random() ... y = np.random.random() ... if (x ** 2 + y ** 2) < 1.0: ... acc += 1 ... return 4.0 * acc / nsamples ... >>> np.random.seed( 0 ); clk.start(); _ = monte_carlo_pi2( 1000000 ); clk.stop(); print _ 692811 3.140936 >>> np.random.seed( 0 ); clk.start(); _ = monte_carlo_pi2( 1000000 ); clk.stop(); print _ 50193 3.140936 ``` --- EPILOGUE : ---------- Costs matter. Always. A `jit`-compiled code can help **if and only if** the LLVM-compiled code is re-used so often, that it can adjust the costs of the initial compilation. > > ( In case arcane gurus present a fair objection: a trick with a pre-compiled code is still paying that cost, isn't it? ) > > > And the values ? ---------------- Using just as few as **`1E6`** samples is not very convincing, neither for the pi-dart-throwing experiment, nor for the performance benchmarking (as the indeed tiny small-scale of the data samples permits in-cache introduced timing artefacts, that do not scale or fail to generalise ). 
The larger the scale, the closer the **`pi`**-guesstimate gets and the better will perform data-efficient computing ( stream / pair-wise will get better than block-wise ( due to data-instantiation costs and later the memory swapping-related suffocation ) **as shown in the** [**online** reproducible-experimentation **sandbox IDE**](https://tio.run/##xVbbctpADH33V2gmD7FJstmLvWtD88gPNNNnxiGmuMGXep1Jk5@nMtjG8W5ayLSDYNhBK0tH0pGgfK3XRS622wtYVUUGAG/ZT0izsqhquK@L8iWul@sZLDdPcHdQuJ7TPpA/Zw9x98CPtHbKWOsZtNLq0ah8hVhDXjrOxcmCsWyi07cEylrDtNM8bIrl081LqpNPRclLUsX5Y5ERnSSPLlDwZvsod5DmtQtirhrVAr@j7cs6qRIXwx6e2x9uB86bTDhcWe4Pt618AUboNX5cAyXgEf2cuR5MwCdwu7OeQVntMCzAc8ZATaRsHppIXQsSPjlg8UiV6HVcoim/bjHukED8K9XoCsOciPTfdbuRXMdZuUl02/UyTqtdt@EzYS4vL@ExWUFW5HWyWMbVpliUKXcHYbyp04c2udHfxctlU5z@@6qoIMU@AD7xPXE7dwNvu9lY2RrSNMIZpX1lN@uaMR3bN3iu7oD1@iqpn6vcxR5R7FRzfTtI0nHahhmVOJC@eTUVw@awuZy2BwjCRMgiOQj@NYk3UKdZglNJCZccNMA3nVRDZdgo7181aZWUMEZRd/MHORC5I50pwwXwIYtGCAWzIBTKghANnYvjHQc2xzIwHVNxdOrQj2hXhO78TOq@NXXfRMj9YerYdZ9G4qOuMxJINXaMSmWmTiP1t9THpPTQfPzuF8GRqSOYQFoQBpauU96mzuZq2h67CnDFP@Z9pCylVSwy/AuuzsN7ZZtMuaPie4Rcncb7kCubY5NVQgTn4b2SFoRBRM3Uw3DMey6C0O6Y@UQZrEKltHT9iJH/D7xHMKGKTITKp5Ztp9rUxZ73ouc9i4YL//1cRcw350oo07/0g3PwnhEVWSZfhCbCgMkTeM9xkgPTsWTm0lPRWXjPcdtGlqXHzMmUSo15z6TkkWglGDoOkCzheJf4EZGWmjJ6jn3vK/ypo2OEEvcxs09mu@/DaXvsK6DCiEnrXHHc92P/EBC@p9rAPwJR4hy8DzGwQU9JaDj@J8YJU9jM7fY3) ``` # 1E6: # 1E6: 3.138196 Real time: 0.262 s User time: 0.268 s Sys. time: 0.110 s ---------------------------- np.where().sum() block-wise # Real time: 0.231 s User time: 0.237 s Sys. time: 0.111 s # # Real time: 0.251 s User time: 0.265 s Sys. time: 0.103 s ---------------------------- np.where( .reshape().sum() ).sum() block-wise # Real time: 0.241 s User time: 0.234 s Sys. time: 0.124 s # # 3.140936 Real time: 1.567 s User time: 1.575 s Sys. time: 0.097 s ---------------------------- monte_carlo_pi2() -- -- -- -- -- -- pair-wise # Real time: 1.556 s User time: 1.557 s Sys. time: 0.102 s # # 1E7: # 1E7: 3.142726 Real time: 0.971 s User time: 0.719 s Sys. time: 0.327 s ---------------------------- np.where().sum() block-wise # Real time: 0.762 s User time: 0.603 s Sys. time: 0.271 s # # Real time: 0.827 s User time: 0.604 s Sys. time: 0.335 s ---------------------------- np.where( .reshape().sum() ).sum() block-wise # Real time: 0.767 s User time: 0.590 s Sys. time: 0.288 s # # 3.142358 Real time: 14.756 s User time: 14.619 s Sys. time: 0.103 s ---------------------------- monte_carlo_pi2() -- -- -- -- -- -- pair-wise # Real time: 14.879 s User time: 14.740 s Sys. time: 0.117 s # # 3E7: # 3E7: 3.1421996 Real time: 1.914 s User time: 1.370 s Sys. time: 0.645 s ---------------------------- np.where().sum() block-wise # Real time: 1.796 s User time: 1.380 s Sys. time: 0.516 s # # Real time: 2.325 s User time: 1.615 s Sys. time: 0.795 s ---------------------------- np.where( .reshape().sum() ).sum() block-wise # Real time: 2.099 s User time: 1.514 s Sys. time: 0.677 s # # 3.1416629333333335 Real time: 50.182 s User time: 49.680 s Sys. time: 0.107 s ---------------------------- monte_carlo_pi2() -- -- -- -- -- -- pair-wise # Real time: 47.240 s User time: 46.711 s Sys. time: 0.103 s # # 1E8: # 1E8: 3.14178916 Real time: 12.970 s User time: 5.296 s Sys. time: 7.273 s ---------------------------- np.where().sum() block-wise # Real time: 8.275 s User time: 6.088 s Sys. 
time: 2.172 s
```

And we did not speak about the ultimate performance edge yet - have a read about [**cython**](http://docs.cython.org/en/latest/src/userguide/numpy_tutorial.html), with an option to harness OpenMP code as a next dose of performance-boosting steroids for python
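For completeness, Numba itself can also parallelise the pair-wise loop across CPU cores with `prange`; a hedged sketch (note that under `parallel=True` each thread keeps its own PRNG state, so the result will no longer be bit-reproducible against the serial, seeded runs above):

```python
import numpy as np
from numba import njit, prange

@njit(parallel=True)
def monte_carlo_pi_par(nsamples):
    acc = 0
    for i in prange(nsamples):      # iterations are distributed over CPU threads
        x = np.random.random()
        y = np.random.random()
        if (x ** 2 + y ** 2) < 1.0:
            acc += 1                # scalar += is recognised by numba as a reduction
    return 4.0 * acc / nsamples
```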
2,452
43,810,256
In DOS or a batch file on Windows we can access multiple consecutive files fieldgen1.txt, fieldgen2.txt, etc. as follows:

```
for /L %%i in (1,1,250) do (
copy fieldgen%%i.txt hk.ref
Process the file and go to next file.
```

I have 250 files named like fieldgen1.ref, fieldgen2.ref, etc. Now I want to access one file, process that file, and access another file whenever processing is done. As far as I know, Python does it like this:

```
with open('fieldgen1.txt', 'r') as inpfile, with open('fieldgen2.txt', 'r') as inpfile:
```

I can access only two files this way. Is there any short way to access multiple consecutive files in Python?
2017/05/05
[ "https://Stackoverflow.com/questions/43810256", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6210264/" ]
Yes, you can access and process consecutive files in python ``` for i in range(1, 251): with open('fieldgen%s.txt' % i, 'r') as fp: lines = fp.readlines() # Do all your processing here ``` The code will loop and read each file. You can then do your processing once you have read all the lines. You didn't mention if you needed to alter the file as part of your processing so I am just including the reading part. If you do need to write back to the file make sure you do that after all the processing is done.
You could do something like ``` import os files = os.listdir(".") for f in files: print (str(f)) ``` This will print all files and directories in the current run directory. Once you have the file name you can use that to process the content.
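If the files don't follow a strictly numeric pattern, `glob` is another option; a small sketch (note that `sorted()` sorts lexicographically, so fieldgen10.txt sorts before fieldgen2.txt; prefer the `range`-based loop above when numeric order matters):

```python
import glob

# Matches fieldgen1.txt ... fieldgen250.txt in the current directory
for path in sorted(glob.glob('fieldgen*.txt')):
    with open(path, 'r') as fp:
        lines = fp.readlines()
        # process lines here before moving to the next file
```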
2,453
52,621,859
I am a new Python learner and I want to write a program that reads a text file, saves the value from the line containing "width", and prints it. The file looks like:

```
width: 10128
nlines: 7101
```

I am trying something like:

```
filename = "text.txtr"

# open the file for reading
filehandle = open(filename, 'r')
while True:
    # read a single line
    line = filehandle.readline()
    if " width " in line:
        num = str(num)  # type:
        print (num)

# close the pointer to that file
filehandle.close()
```
2018/10/03
[ "https://Stackoverflow.com/questions/52621859", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9084038/" ]
Your approach to opening the file is not ideal; try using a `with` statement whenever you open a file. Afterwards you can iterate over each line of the file and check whether it contains "width", and if it does, extract the number, which can be done using a regex. See the code below.

```
import re

filename = "text.txtr"

with open(filename, 'r') as filehandle:
    # read the file line by line
    for line in filehandle:
        if "width" in line:
            num = re.search(r'(\d+)\D+$', line).group(1)
            num = str(num)  # type:
            print (num)
```

Please see Matt's comment below for another solution to get the number.
It's not returning results because of the line `if " width " in line:`. As you can see from your file, there is no line containing `" width "` (with surrounding spaces); maybe you want:

```
if "width:" in line:
    #Do things
```

Also note there are a few other issues with the code. For example, your program will never finish because of your `while True:` loop, so you'll never actually reach the line `filehandle.close()`, and the manner in which you open the file (using `with` is preferred). Also, you write `num = str(num)`, but `num` isn't already defined, so you will run into issues there too.
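Putting both answers together, a minimal corrected version of the asker's loop might look like this (assuming the `.txtr` extension in the question is a typo for `.txt`):

```python
filename = "text.txt"

with open(filename, 'r') as filehandle:
    for line in filehandle:
        if "width" in line:
            # "width: 10128" -> take the part after the colon
            num = int(line.split(':')[1])
            print(num)
```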
2,455
56,561,072
I'm trying to upgrade pip, and also install pywinusb, but I'm getting the error: "**UnicodeDecodeError: 'ascii' codec can't decode byte 0xe9 in position 8: ordinal not in range(128)**". Pip upgrade: ``` PS C:\Python27> pip --version pip 18.1 from c:\python27\lib\site-packages\pip (python 2.7) PS C:\Python27> python -m pip install --upgrade pip Collecting pip Exception: Traceback (most recent call last): File "C:\Python27\lib\site-packages\pip\_internal\cli\base_command.py", line 143, in main status = self.run(options, args) File "C:\Python27\lib\site-packages\pip\_internal\commands\install.py", line 318, in run resolver.resolve(requirement_set) File "C:\Python27\lib\site-packages\pip\_internal\resolve.py", line 102, in resolve self._resolve_one(requirement_set, req) File "C:\Python27\lib\site-packages\pip\_internal\resolve.py", line 256, in _resolve_one abstract_dist = self._get_abstract_dist_for(req_to_install) File "C:\Python27\lib\site-packages\pip\_internal\resolve.py", line 209, in _get_abstract_dist_for self.require_hashes File "C:\Python27\lib\site-packages\pip\_internal\operations\prepare.py", line 283, in prepare_linked_requirement progress_bar=self.progress_bar File "C:\Python27\lib\site-packages\pip\_internal\download.py", line 836, in unpack_url progress_bar=progress_bar File "C:\Python27\lib\site-packages\pip\_internal\download.py", line 673, in unpack_http_url progress_bar) File "C:\Python27\lib\site-packages\pip\_internal\download.py", line 895, in _download_http_url file_path = os.path.join(temp_dir, filename) File "C:\Python27\lib\ntpath.py", line 85, in join result_path = result_path + p_path UnicodeDecodeError: 'ascii' codec can't decode byte 0xe9 in position 8: ordinal not in range(128) You are using pip version 18.1, however version 19.1.1 is available. You should consider upgrading via the 'python -m pip install --upgrade pip' command. ``` And in "pywinusb" install: ``` PS C:\Python27> pip install pywinusb Collecting pywinusb Exception: Traceback (most recent call last): File "c:\python27\lib\site-packages\pip\_internal\cli\base_command.py", line 143, in main status = self.run(options, args) File "c:\python27\lib\site-packages\pip\_internal\commands\install.py", line 318, in run resolver.resolve(requirement_set) File "c:\python27\lib\site-packages\pip\_internal\resolve.py", line 102, in resolve self._resolve_one(requirement_set, req) File "c:\python27\lib\site-packages\pip\_internal\resolve.py", line 256, in _resolve_one abstract_dist = self._get_abstract_dist_for(req_to_install) File "c:\python27\lib\site-packages\pip\_internal\resolve.py", line 209, in _get_abstract_dist_for self.require_hashes File "c:\python27\lib\site-packages\pip\_internal\operations\prepare.py", line 283, in prepare_linked_requirement progress_bar=self.progress_bar File "c:\python27\lib\site-packages\pip\_internal\download.py", line 836, in unpack_url progress_bar=progress_bar File "c:\python27\lib\site-packages\pip\_internal\download.py", line 673, in unpack_http_url progress_bar) File "c:\python27\lib\site-packages\pip\_internal\download.py", line 895, in _download_http_url file_path = os.path.join(temp_dir, filename) File "c:\python27\lib\ntpath.py", line 85, in join result_path = result_path + p_path UnicodeDecodeError: 'ascii' codec can't decode byte 0xe9 in position 8: ordinal not in range(128) You are using pip version 18.1, however version 19.1.1 is available. You should consider upgrading via the 'python -m pip install --upgrade pip' command. 
```

Before this I installed the package "pyusb" without any problem, without getting any error. I've searched Google for this error but haven't found a good explanation. How can I solve this error?
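For what it's worth, the traceback ends in `os.path.join(temp_dir, filename)`, and byte `0xe9` is `é` in Latin-1, which suggests a non-ASCII character (for example in the Windows user name) somewhere in the temporary-files path that Python 2 cannot join as ASCII. A common workaround, under that assumption, is to point the temp directory at an ASCII-only path for the session, e.g. in PowerShell set `$env:TEMP = 'C:\tmp'` and `$env:TMP = 'C:\tmp'` (after creating `C:\tmp`) before re-running `python -m pip install --upgrade pip`.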
2019/06/12
[ "https://Stackoverflow.com/questions/56561072", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6078511/" ]
Seems to be a specific issue concerning `Button` when contained in a `List` row. **Workaround**: ```swift List { HStack { Text("One").onTapGesture { print("One") } Text("Two").onTapGesture { print("Two") } } } ``` This yields the desired output. You can also use a `Group` instead of `Text` to have a sophisticated design for the "buttons".
One of the differences with SwiftUI is that you are not creating specific instances of, for example UIButton, because you might be in a Mac app. With SwiftUI, you are requesting a button type thing. In this case since you are in a list row, the system gives you a full size, tap anywhere to trigger the action, button. And since you've added two of them, both are triggered when you tap anywhere. You can add two separate Views and give them a `.onTapGesture` to have them act essentially as buttons, but you would lose the tap flash of the cell row and any other automatic button like features SwiftUI would give. ```swift List { HStack { Text("One").onTapGesture { print("Button 1 tapped") } Spacer() Text("Two").onTapGesture { print("Button 2 tapped") } } } ```
2,459
45,906,144
I was trying to open Stack Overflow, search for a query, and then click the search button. Almost everything went fine, except that I was not able to click the submit button; I encountered this error:

> WebDriverException: unknown error: Element ... is not clickable at point (608, 31). Other element would receive the click: (Session info: chrome=60.0.3112.101) (Driver info: chromedriver=2.29.461591 (62ebf098771772160f391d75e589dc567915b233),platform=Windows NT 6.1.7601 SP1 x86)

```
browser=webdriver.Chrome()
browser.get("https://stackoverflow.com/questions/19035186/how-to-select-element-with-selenium-python-xpath")
z=browser.find_element_by_css_selector(".f-input.js-search-field") #use . for class and replace space with .
z.send_keys("geckodriver not working")
submi=browser.find_element_by_css_selector(".svg-icon.iconSearch")
submi.click()
```
2017/08/27
[ "https://Stackoverflow.com/questions/45906144", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7698247/" ]
``` <button type="submit" class="btn js-search-submit"> <svg role="icon" class="svg-icon iconSearch" width="18" height="18" viewBox="0 0 18 18"> <path d="..."></path> </svg> </button> ``` You are trying to click on the `svg`. That icon is not clickable, but the button is. So change the button selector to `.btn.js-search-submit` will work.
Click the element with the right locator; your button locator is wrong. The rest of the code looks good. Try this:

```
browser=webdriver.Chrome()
browser.get("https://stackoverflow.com/questions/19035186/how-to-select-element-with-selenium-python-xpath")
z=browser.find_element_by_css_selector(".f-input.js-search-field") #use . for class and replace space with .
z.send_keys("geckodriver not working")
submi=browser.find_element_by_css_selector(".btn.js-search-submit")
submi.click()
```
2,465
16,647,186
I have a bunch of functions that I've written in C and I'd like some code I've written in Python to be able to access those functions. I've read several questions on here that deal with a similar problem ([here](https://stackoverflow.com/questions/145270/calling-c-c-from-python) and [here](https://stackoverflow.com/questions/5090585/segfault-when-trying-to-call-a-python-function-from-c) for example) but I'm confused about which approach I need to take. One question recommends ctypes and another recommends cython. I've read a bit of the documentation for both, and I'm completely unclear about which one will work better for me. Basically I've written some python code to do some two dimensional FFTs and I'd like the C code to be able to see that result and then process it through the various C functions I've written. I don't know if it will be easier for me to call the Python from C or vice versa.
2013/05/20
[ "https://Stackoverflow.com/questions/16647186", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1765768/" ]
You should call C from Python by writing a **ctypes** wrapper. Cython is for making python-like code run faster, ctypes is for making C functions callable from python. What you need to do is the following: 1. Write the C functions you want to use. (You probably did this already) 2. Create a shared object (.so, for linux, os x, etc) or dynamically loaded library (.dll, for windows) for those functions. (Maybe you already did this, too) 3. Write the ctypes wrapper (It's easier than it sounds, [I wrote a how-to for that](https://pgi-jcns.fz-juelich.de/portal/pages/using-c-from-python.html "Using C from Python: How to create a ctypes wrapper")) 4. Call a function from that wrapper in Python. (This is just as simple as calling any other python function)
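As a minimal sketch of steps 2-4, assuming a hypothetical C function `double scale(double x, double factor)` compiled into `mylib.so` (the names are illustrative, not from the question):

```python
import ctypes

# Step 2 produced the shared object, e.g.:  gcc -shared -fPIC -o mylib.so mylib.c
lib = ctypes.CDLL('./mylib.so')

# Step 3: declare the C signature so ctypes converts arguments correctly
lib.scale.argtypes = [ctypes.c_double, ctypes.c_double]
lib.scale.restype = ctypes.c_double

# Step 4: call it like any Python function
print(lib.scale(2.0, 1.5))
```

For array data such as FFT results, `numpy.ctypeslib.ndpointer` can be used in `argtypes` to hand a NumPy array's buffer straight to the C function without copying.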
It'll be easier to call C from python. Your scenario sounds weird - normally people write most of the code in python except for the processor-intensive portion, which is written in C. Is the two-dimensional FFT the computationally-intensive part of your code?
2,468
50,874,453
Hi, I am new to both Python and q/KDB. I am using qpython to get results from a kdb database by doing the following:

```
q = qconnection.QConnection(host=self.host, port=self.port, username=self.username, password=self.password)
results = q.sync(query)
```

The result is a qtable. I need to convert the qtable into a string, which is straightforward. I just need to do this:

```
resultString = str(results)
```

However, the string is somewhat convoluted. On top of that, the table contains dates, and they come back in a numeric format. resultsString looks like this:

```
[(6606, b'XX', b'5Y', 26.67, 0.023, 4.833, -22.88, 0.4, b'sx, 570869003211035000)
 (6607, b'XX', b'5Y', 28.40, 0.025, 4.824, -22.75, 0.4, b'sx, 571128191858653000)]
```

I would like to know if there is a straightforward conversion of the qtable to turn the string into something like this:

```
2018-02-01,XX,5Y,26.67,0.023,4.83,-22.88,0.4,sx,2018-02-02D06:43:23\n
2018-02-02,XX,5Y,28.40,0.025,4.82,-22.75,0.4,sx,2018-02-05D06:43:11\n
```
2018/06/15
[ "https://Stackoverflow.com/questions/50874453", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9946190/" ]
You might just want to string the table on the way out from kdb rather than in python. It'll get you what you want but the data won't be easy or efficient to deal with on the python side ``` q)csv 0: select from t "col1,col2" "a,1" "b,2" "c,3" ``` Try issuing `q.sync("csv 0: select from t")`
Converting the numerical columns to `string` can achieve the results you are after. ``` results = q.sync('t:([] 2?.z.d;2?.z.t;2?`3;p:2?100.);update string d, string t, string p from t') for item in results: t = () for x in item: t = t + (x.decode(),) print(t) ('2017.05.31', '16:46:10.161', 'jgj', '43.9081') ('2006.09.28', '19:44:11.560', 'cfl', '57.59051') ```
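If the eventual goal is friendlier types rather than a raw string, qpython can also hand results back as pandas objects; a hedged sketch (assuming a qpython version with pandas support, where q dates and timestamps come back as datetimes):

```python
from qpython import qconnection

q = qconnection.QConnection(host='localhost', port=5000, pandas=True)
q.open()
df = q.sync('select from t')      # returns a pandas.DataFrame
print(df.to_csv(index=False))     # comma-separated rows, close to the desired output
q.close()
```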
2,478
51,434,538
I am looking for a way to understand [ioloop in tornado](http://www.tornadoweb.org/en/stable/ioloop.html#tornado.ioloop.IOLoop), since I have read the official doc several times but can't understand it. Specifically, why it exists.

```
from tornado.concurrent import Future
from tornado.httpclient import AsyncHTTPClient
from tornado.ioloop import IOLoop

def async_fetch_future():
    http_client = AsyncHTTPClient()
    future = Future()
    fetch_future = http_client.fetch(
        "http://mock.kite.com/text")
    fetch_future.add_done_callback(
        lambda f: future.set_result(f.result()))
    return future

response = IOLoop.current().run_sync(async_fetch_future)
# why get current IO of this thread? display IO, hard drive IO, or network IO?

print response.body
```

I know what IO is: input and output, e.g. reading a hard drive, displaying a graph on the screen, getting keyboard input. By definition, `IOLoop.current()` returns the current IO loop of this thread. But there are many IO devices on my laptop running this Python code; which IO does this `IOLoop.current()` return? I have never heard of an IO loop in JavaScript/Node.js. Furthermore, why do I care about this low-level thing if I just want to do a database query or read a file?
2018/07/20
[ "https://Stackoverflow.com/questions/51434538", "https://Stackoverflow.com", "https://Stackoverflow.com/users/887103/" ]
Rather than "IOLoop", the name "EventLoop" may be clearer for you to understand. `IOLoop.current()` doesn't really return an IO device, just a pure Python event loop which is basically the same as `asyncio.get_event_loop()` or the underlying event loop in `nodejs`. The reason why you need an event loop just to do a database query is that you are using an event-driven structure to do the database query (in your example, you are doing an HTTP request). Most of the time you do not need to care about this low-level structure. Instead you just need to use the `async&await` keywords. Let's say there is a lib which supports asynchronous database access:

```
async def get_user(user_id):
    user = await async_cursor.execute("select * from user where user_id = %s" % user_id)
    return user
```

Then you just need to use this function in your handler:

```
class YourHandler(tornado.web.RequestHandler):

    async def get():
        user = await get_user(self.get_cookie("user_id"))
        if user is None:
            return self.finish("No such user")
        return self.finish("Your are %s" % user.user_name)
```
> > I never heard of IO loop in javascript nodejs. > > > In node.js, the equivalent concept is the [event loop](https://nodejs.org/en/docs/guides/event-loop-timers-and-nexttick/). The node event loop is mostly invisible because all programs use it - it's what's running in between your callbacks. In Python, most programs don't use an event loop, so when you want one, you have to run it yourself. This can be a Tornado IOLoop, a Twisted Reactor, or an asyncio event loop (all of these are specific types of event loops). Tornado's IOLoop is perhaps confusingly named - it doesn't do any IO directly. Instead, it coordinates all the different IO (mainly network IO) that may be happening in the program. It may help you to think of it as an "event loop" or "callback runner".
2,479
68,472,830
Today I tried to send email with Python:

```
import smtplib

EMAIL_HOST = 'smtp.google.com'
EMAIL_PORT = 587
EMAIL_FROM_LOGIN = 'sender@gmail.com'
EMAIL_FROM_PASSWORD = 'password'
MESSAGE = 'Hi!'
EMAIL_TO_LOGIN = 'recipient@gmail.com'

print('starting...')
server = smtplib.SMTP(EMAIL_HOST, EMAIL_PORT)
server.starttls()
print('logging...')
server.login(EMAIL_FROM_LOGIN, EMAIL_FROM_PASSWORD)
print('sending message...')
server.sendmail(EMAIL_FROM_LOGIN, EMAIL_TO_LOGIN, MESSAGE)
```

This script doesn't get further than the `starting...` print. I searched about this issue the whole day, but found only advice like:

> "check that port isn't blocked..."

At least I got info about blocked/disabled ports; what I don't have is the specifics of the problem.

---

Additional info: following some advice I found earlier, I checked the output of `telnet smtp.google.com 587`. The output is a static `Connecting to smtp.google.com...`. It remains for about 2 minutes and then prints:

> Could not open a connection to this host, on port 587: Connection failed

---

**UPD 1** I tried to open the ports manually on the machine the Python script runs on, but nothing changed...

---

So, my question is: what should I do? Where can I find those blocked ports, and how do I unblock/enable them?
2021/07/21
[ "https://Stackoverflow.com/questions/68472830", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10872199/" ]
Enable less secure app access in your Gmail account and fix your SMTP address: it should be '**smtp.gmail.com**'. My sample:

```
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

mail_content = 'Sample text'
sender_address = 'xxx@xxx'
sender_pass = 'xxxx'
receiver_address = 'xxx@xxx'
message = MIMEMultipart()
message['From'] = sender_address
message['To'] = receiver_address
message['Subject'] = 'Test mail'
message.attach(MIMEText(mail_content, 'plain'))
session = smtplib.SMTP('smtp.gmail.com', 587)
session.starttls()
session.login(sender_address, sender_pass)
session.sendmail(sender_address, receiver_address, message.as_string())
session.quit()
print('Mail Sent')
```
---

Have you checked your code? There is **smtp.google.com** instead of **smtp.gmail.com**. Before executing the script:

---

1. First of all, ensure that you are logged in with the mail account you are going to use to send mail in your script.
2. The second and important thing: you must have [Less Secure App access](https://myaccount.google.com/lesssecureapps?pli=1&rapt=AEjHL4NW9JsHpBngFJepAMWVt38ISamxkCE1oZCeN2JLrrJhjrv23mFLGCXpwzF9ZZEqzjykTOjTvr286mEHEyd65j4OHLMpYg) enabled or use [2-Step Verification](https://myaccount.google.com/signinoptions/two-step-verification/enroll-welcome).

---

Now you can also use the following **Python 3 code**:

---

```
import smtplib as s

obj = s.SMTP("smtp.gmail.com", 587)   #smtp server host and port no. 587

obj.starttls() #tls is a way of encryption

obj.login("sender@gmail.com","password")# login credential email and password by which you want to send mail.

subject = "Write your subject"  #Subject of mail

message_body = " Hello Dear...\n Write your message here... "

msg = "Subject:{}\n\n{}".format(subject,message_body)  # complete mail subject + message body

list_of_address = ["mail1@gmail.com",
                   "mail2@gmail.com",
                   "mail3@yahoo.com",
                   "mail4@outlook.in"]# list of email address

obj.sendmail("sender@gmail.com", list_of_address, msg)

print("Send Successfully...")

obj.quit()
```

---

Now coming to your second question: **what should I do? Where can I find those blocked ports, how to unblock/enable them?**

**Unblock Ports**

---

**Warning!** Opening or unblocking ports is very dangerous; it's like breaking the firewall. It's up to you to be careful.

---

You can try unblocking the following ports: **80, 433, 443, 3478, 3479, 5060, 5062, 5222, 6250, and 12000-65000.**

---

1. Go to **Control Panel** -> Click **System and Security** -> Click **Windows Firewall** -> Select **Advanced settings**, and then select **Inbound Rules** in the left pane -> Right-click **Inbound Rules** and then select **New Rule** -> Select **Port** -> click **Next**.
2. Select **TCP** as the protocol to apply the rule to.
3. Select **Specific Local Ports**, add all the above ports, and then click **Next**.
4. Select **Allow the connection**.
5. Leave **Domain**, **Private**, and **Public** checked to apply the rule to all types of networks.
6. Name the rule, then click **Finish**.

---

I hope your problem will be solved.
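Before touching any firewall rules, a quick way to test the asker's actual symptom from Python itself is a plain socket connect; this is a minimal sketch, assuming the corrected Gmail host:

```python
import socket

# Tries a raw TCP connection to the SMTP port; if the port is blocked
# this raises an OSError (often a timeout) instead of connecting.
try:
    conn = socket.create_connection(("smtp.gmail.com", 587), timeout=10)
    print("port 587 is reachable")
    conn.close()
except OSError as exc:
    print("connection failed:", exc)
```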
2,480
46,143,079
I have written code for linear search in Python. The code works fine for single-digit numbers, but it's not working for double-digit numbers or anything larger. Here is my code.

```
def linear_search(x,sort_lst):
    i = 0
    c= 0
    for i in range(len(sort_lst)):
        if sort_lst[i] == x :
            c= c+1
    if (c > 0):
        print ("item found")
    else :
        print ("not found")

sort_lst= input("enter an array of numbers:")
item= input("enter the number to searched :")
linear_search(item,sort_lst)
```

Any suggestions?
2017/09/10
[ "https://Stackoverflow.com/questions/46143079", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8279672/" ]
`redux-promise` will handle only a promise, but

```
{
    pass : Promise,
    fail : Promise,
    exempt : Promise,
}
```

is not a promise. You have to convert it to a single promise so that `redux-promise` can handle it. I think you need `Promise.all` for this task. Try something like:

```
const payload = Promise.all([ pass, fail, exempt ])
    .then( ([ pass, fail, exempt ]) => {
        return { pass, fail, exempt }
    });

// now payload will be a single promise and you can pass it on normally.
return {
    type: FETCH_RATINGS,
    payload: payload
};
```

`Promise.all` will convert your multiple promises into a single promise that resolves only when all of them resolve; otherwise it gets rejected.

**Reference:** Read more about [Promise.all()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/all)
**Edit**: the answer from [Raghavgarg](https://stackoverflow.com/users/3439731/raghavgarg) is probably better if you already have logic that depends on your final payload (the one in the reducer) having the same structure as before.

The middleware you use for promises probably expects the payload to be a promise, not an object that happens to contain promises. To solve this you could wrap them all in [`Promise.all`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/all):

```js
return {
    type: FETCH_RATINGS,
    payload: Promise.all([pass, fail, exempt])
};
```

Then in your reducer the payload will be an array where the responses are ordered the same way you passed them in (above).
2,481
73,513,397
I am having issues with email addresses that, with a small correction, can be converted to valid email addresses. For example:

```
%20adi@gmail.com,  --- Not valid
'sam@tell.net,  --- Not valid
(hi@telligen.com),  --- Not valid
(gii@weerte.com),  --- Not valid
:qwert34@embright.com,  --- Not valid
//24adifrmaes@microsot.com  --- Not valid
tellei@apple.com  --- valid 
...
```

I could write "if else" checks, but if a new email address comes along with new issues, I need to write another "if else" and update the code every time.

What is the best way to clean up all these small issues: some Python packages, or regex? Please suggest.
2022/08/27
[ "https://Stackoverflow.com/questions/73513397", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17867413/" ]
You can do this (I basically check whether each character in the email is alphanumeric, a dot, or an @, and remove it if not):

```
emails = [
'sam@tell.net',
'(hi@telligen.com)',
'(gii@weerte.com)',
':qwert34@embright.com',
'//24adifrmaes@microsot.com',
'tellei@apple.com',
]

def correct_email_format(email):
    # keep only letters, digits, dots and the @ sign
    return ''.join(e for e in email if (e.isalnum() or e in ['.', '@']))

for email in emails:
    corrected_email = correct_email_format(email)
    print(corrected_email)
```

output:

```
sam@tell.net
hi@telligen.com
gii@weerte.com
qwert34@embright.com
24adifrmaes@microsot.com
tellei@apple.com
```
Data clean-up is messy, but I found that defining a set of rules is an easy way to manage this (the order of the rules matters):

```
rules = [
    lambda s: s.replace('%20', ' '),
    lambda s: s.strip(" ,'"),
]

addresses = [
    '%20adi@gmail.com,',
    'sam@tell.net,'
]

for a in addresses:
    for r in rules:
        a = r(a)
    print(a)
```

and here is the resulting output:

```
adi@gmail.com
sam@tell.net
```

Make sure you write a test suite that covers both invalid and valid data. It's easy to break, and you may be tweaking the rules often. While I used lambdas for the rules above, a rule can be an arbitrarily complex function that accepts and returns a string.
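Since the question explicitly asks about regex, here is one more hedged sketch: instead of stripping junk character by character, decode URL escapes and then pull out the first email-shaped substring. The pattern is deliberately simple, not a full RFC 5322 validator:

```python
import re
from urllib.parse import unquote

# simple "local@domain.tld" shape; good enough for junk-stripping
EMAIL_RE = re.compile(r"[A-Za-z0-9._+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def extract_email(raw):
    match = EMAIL_RE.search(unquote(raw))  # unquote turns %20 into a space
    return match.group(0) if match else None

for raw in ["%20adi@gmail.com,", "'sam@tell.net,", "(hi@telligen.com),"]:
    print(extract_email(raw))
# adi@gmail.com
# sam@tell.net
# hi@telligen.com
```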
2,482
20,262,552
I have an embedded system with a Python interface. Currently the system uses a (system-local) XML file to persist data in case the system gets turned off, but normally the system is running the entire time. When the system starts, the XML file is read in and the information is stored in Python objects. The information is then used for processing. My aim is to edit this information remotely (over TCP/IP), even while the process is running. I would like to use Java to get this done, and I have been thinking about some way to share the objects. The problem is that I'm missing some keywords to find the right technologies to get this done. What I found is SOAP, but I think it is not the right thing for this case; is that true? I'm grateful for any tips.
2013/11/28
[ "https://Stackoverflow.com/questions/20262552", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2127432/" ]
You can use [FileVersionInfo](http://msdn.microsoft.com/en-us/library/system.diagnostics.fileversioninfo%28v=vs.110%29.aspx) class to get the version of another program. ``` FileVersionInfo myFileVersionInfo = FileVersionInfo.GetVersionInfo(Environment.SystemDirectory + "\\Notepad.exe"); Console.WriteLine("File: " + myFileVersionInfo.FileDescription + '\n' + "Version number: " + myFileVersionInfo.FileVersion); ```
If I'm not wrong, you are trying to fetch the version number of a file using C#. You can try the example below:

```
using System;
using System.IO;
using System.Diagnostics;

class Class1
{
    public static void Main(string[] args)
    {
        // Get the file version for the notepad.
        // Use either of the two following commands.
        FileVersionInfo.GetVersionInfo(Path.Combine(Environment.SystemDirectory, "Notepad.exe"));
        FileVersionInfo myFileVersionInfo = FileVersionInfo.GetVersionInfo(Environment.SystemDirectory + "\\Notepad.exe");

        // Print the file name and version number.
        Console.WriteLine("File: " + myFileVersionInfo.FileDescription + '\n' +
           "Version number: " + myFileVersionInfo.FileVersion);
    }
}
```

Source: <http://msdn.microsoft.com/en-us/library/system.diagnostics.fileversioninfo(v=vs.110).aspx>
2,485
9,598,739
I have two versions of python installed on Win7. (Python 2.5 and Python 2.7). These are located in 'C:/Python25' and 'C:/Python27' respectively. I am trying to run a file using Python 2.5 but by default Cygwin picks up 2.7. How do I change which version Cygwin uses?
2012/03/07
[ "https://Stackoverflow.com/questions/9598739", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1145456/" ]
The fast way is to reorder your $PATH so that 2.5 is picked up first. The correct way is to use virtualenv to create a jail environment that's specific to a python version.
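Either way, a quick sanity check (a trivial sketch) run from inside the interpreter tells you which Python Cygwin actually picked up:

```python
import sys

print(sys.version)      # e.g. 2.5.x vs. 2.7.x
print(sys.executable)   # full path of the interpreter that is running
```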
As an addition to Bon's post: if you're not sandboxing, you're not doing it right. Why would you want to put your global install of Python at risk of anything? With virtualenv you can select which Python interpreter is used for that particular sandbox.

Virtualenv and virtualenvwrapper (or a custom solution) are two of the most essential tools a Python developer can have. You can view your virtualenvs, create, delete, and activate them all with ease. You can get both pieces of software from pip.

If you're not using those, I assume you're not using requirements files either? `$ pip freeze > requirements.txt` will generate a requirements.txt with all the exact versions and dependencies of your project. That way you can do fast deployment. If your current project requires 10 dependencies from pip and you deploy a lot, then requirements files will help you tremendously.

You can have a good beginner's look at virtualenv and pip [here](http://www.saltycrane.com/blog/2009/05/notes-using-pip-and-virtualenv-django/ "here").
2,487
38,967,402
I'm trying to multiply two pandas dataframes with each other. Specifically, I want to multiply every column with every column of the other df. The dataframes are one-hot encoded, so they look like this: ``` col_1, col_2, col_3, ... 0 1 0 1 0 0 0 0 1 ... ``` I could just iterate through each of the columns using a for loop, but in python that is computationally expensive, and I'm hoping there's an easier way. One of the dataframes has 500 columns, the other has 100 columns. This is the fastest version that I've been able to write so far: ``` interact_pd = pd.DataFrame(index=df_1.index) df1_columns = [column for column in df_1] for column in df_2: col_pd = df_1[df1_columns].multiply(df_2[column], axis="index") interact_pd = interact_pd.join(col_pd, lsuffix='_' + column) ``` I iterate over each column in df\_2 and multiply all of df\_1 by that column, then I append the result to interact\_pd. I would rather not do it using a for loop however, as this is very computationally costly. Is there a faster way of doing it? EDIT: example df\_1: ``` 1col_1, 1col_2, 1col_3 0 1 0 1 0 0 0 0 1 ``` df\_2: ``` 2col_1, 2col_2 0 1 1 0 0 0 ``` interact\_pd: ``` 1col_1_2col_1, 1col_2_2col_1,1col_3_2col_1, 1col_1_2col_2, 1col_2_2col_2,1col_3_2col_2 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 ```
2016/08/16
[ "https://Stackoverflow.com/questions/38967402", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3950550/" ]
``` # use numpy to get a pair of indices that map out every # combination of columns from df_1 and columns of df_2 pidx = np.indices((df_1.shape[1], df_2.shape[1])).reshape(2, -1) # use pandas MultiIndex to create a nice MultiIndex for # the final output lcol = pd.MultiIndex.from_product([df_1.columns, df_2.columns], names=[df_1.columns.name, df_2.columns.name]) # df_1.values[:, pidx[0]] slices df_1 values for every combination # like wise with df_2.values[:, pidx[1]] # finally, I marry up the product of arrays with the MultiIndex pd.DataFrame(df_1.values[:, pidx[0]] * df_2.values[:, pidx[1]], columns=lcol) ``` [![enter image description here](https://i.stack.imgur.com/YaMNM.png)](https://i.stack.imgur.com/YaMNM.png) --- ### Timing **code** ``` from string import ascii_letters df_1 = pd.DataFrame(np.random.randint(0, 2, (1000, 26)), columns=list(ascii_letters[:26])) df_2 = pd.DataFrame(np.random.randint(0, 2, (1000, 52)), columns=list(ascii_letters)) def pir1(df_1, df_2): pidx = np.indices((df_1.shape[1], df_2.shape[1])).reshape(2, -1) lcol = pd.MultiIndex.from_product([df_1.columns, df_2.columns], names=[df_1.columns.name, df_2.columns.name]) return pd.DataFrame(df_1.values[:, pidx[0]] * df_2.values[:, pidx[1]], columns=lcol) def Test2(DA,DB): MA = DA.as_matrix() MB = DB.as_matrix() MM = np.zeros((len(MA),len(MA[0])*len(MB[0]))) Col = [] for i in range(len(MB[0])): for j in range(len(MA[0])): MM[:,i*len(MA[0])+j] = MA[:,j]*MB[:,i] Col.append('1col_'+str(i+1)+'_2col_'+str(j+1)) return pd.DataFrame(MM,dtype=int,columns=Col) ``` **results** [![enter image description here](https://i.stack.imgur.com/WJ7KH.png)](https://i.stack.imgur.com/WJ7KH.png)
You can use numpy. Consider this example code, I did modify the variable names, but `Test1()` is essentially your code. I didn't bother create the correct column names in that function though: ``` import pandas as pd import numpy as np A = [[1,0,1,1],[0,1,1,0],[0,1,0,1]] B = [[0,0,1,0],[1,0,1,0],[1,1,0,0],[1,0,0,1],[1,0,0,0]] DA = pd.DataFrame(A).T DB = pd.DataFrame(B).T def Test1(DA,DB): E = pd.DataFrame(index=DA.index) DAC = [column for column in DA] for column in DB: C = DA[DAC].multiply(DB[column], axis="index") E = E.join(C, lsuffix='_' + str(column)) return E def Test2(DA,DB): MA = DA.as_matrix() MB = DB.as_matrix() MM = np.zeros((len(MA),len(MA[0])*len(MB[0]))) Col = [] for i in range(len(MB[0])): for j in range(len(MA[0])): MM[:,i*len(MA[0])+j] = MA[:,j]*MB[:,i] Col.append('1col_'+str(i+1)+'_2col_'+str(j+1)) return pd.DataFrame(MM,dtype=int,columns=Col) print Test1(DA,DB) print Test2(DA,DB) ``` Output: ``` 0_1 1_1 2_1 0 1 2 0_3 1_3 2_3 0 1 2 0 1 2 0 0 0 0 1 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 2 1 1 0 1 1 0 0 0 0 0 0 0 0 0 0 3 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 1col_1_2col_1 1col_1_2col_2 1col_1_2col_3 1col_2_2col_1 1col_2_2col_2 \ 0 0 0 0 1 0 1 0 0 0 0 0 2 1 1 0 1 1 3 0 0 0 0 0 1col_2_2col_3 1col_3_2col_1 1col_3_2col_2 1col_3_2col_3 1col_4_2col_1 \ 0 0 1 0 0 1 1 0 0 1 1 0 2 0 0 0 0 0 3 0 0 0 0 1 1col_4_2col_2 1col_4_2col_3 1col_5_2col_1 1col_5_2col_2 1col_5_2col_3 0 0 0 1 0 0 1 0 0 0 0 0 2 0 0 0 0 0 3 0 1 0 0 0 ``` Performance of your function: ``` %timeit(Test1(DA,DB)) 100 loops, best of 3: 11.1 ms per loop ``` Performance of my function: ``` %timeit(Test2(DA,DB)) 1000 loops, best of 3: 464 µs per loop ``` It's not beautiful, but it's efficient.
2,490
23,237,692
I use PyDev in Eclipse and have a custom source path for my Python project: *src/main/python*/. The path is added to the PYTHONPATH. Now, I want to use the library pyMIR: <https://github.com/jsawruk/pymir>, which doesn't have any install script. So I downloaded it and included it directly into my project as a PyDev package; the complete path to pyMIR is: *src/main/python/music/pymir*.

In the music package (*src/main/python/music*), I now want to use the library and import it via: `from pymir import AudioFile`. No error appears, so the class AudioFile is found. Afterward, I want to read an audio file via: `AudioFile.open(path)`, and there I get the error "Undefined variable from import: open". But when I run the script, it works; no error occurs.

Furthermore, when I look in the pyMIR package, there are also unresolved import errors. For example: `from pymir import Frame` in the class AudioFile produces the error: "Unresolved import: Frame". When I change it to `from music.pymir import Frame`, the error disappears, but then I get an error when it runs: "type object 'Frame' has no attribute 'Frame'".

1. What do I have to change: another import, or the way I include a PyDev package?
2. When I make a new project with a standard path "src", no "unresolved import" errors appear. Where is the difference to *src/main/python*? Because I changed the path of the source folders to *src/main/python*.

Thanks in advance.
2014/04/23
[ "https://Stackoverflow.com/questions/23237692", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
I tried to download and install the pymir package. There is one project structure that works for me:

```
project/music/
project/music/pymir/
project/music/pymir/AudioFile
project/music/pymir/...
project/music/audio_files/01.wav
project/music/test.py
```

The test.py:

```
import numpy
from pymir import AudioFile

filename = "audio_files/01.wav"

print "Opening File: " + filename
audiofile = AudioFile.open(filename)
frames = audiofile.frames(2048, numpy.hamming)
print len(frames)
```

If I move 'test.py' out of the 'music' package, I haven't found a way to make it work.

The reason the project structure is so sensitive and tricky is, in my opinion, that the pymir package is not well structured. E.g., the author named a module "Frame.py" and inside the module a class is named "Frame". Then in "\_\_init\_\_.py", the code reads "from Frame import Frame". And in "AudioFile.py", the code reads "from pymir import Frame". I really think the naming and structure of the current pymir is messy. I suggest you use this package carefully.
Add an empty **"\_\_init\_\_.py"** file in the base folder location and it works.
2,493
20,154,490
I am trying to use `RotatingFileHandler` for our logging purposes in Python. I have kept the backup file count as 500, which means it will create a maximum of 500 files, I guess, and the size that I have set is 2000 bytes (not sure what the recommended size limit is). If I run my code below, it doesn't log everything into the file. I want to log everything into a file -

```
#!/usr/bin/python

import logging
import logging.handlers

LOG_FILENAME = 'testing.log'

# Set up a specific logger with our desired output level
my_logger = logging.getLogger('agentlogger')

# Add the log message handler to the logger
handler = logging.handlers.RotatingFileHandler(LOG_FILENAME, maxBytes=2000, backupCount=100)

# create a logging format
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)

my_logger.addHandler(handler)

my_logger.debug('debug message')
my_logger.info('info message')
my_logger.warn('warn message')
my_logger.error('error message')
my_logger.critical('critical message')

# Log some messages
for i in range(10):
    my_logger.error('i = %d' % i)
```

This is what gets printed out in my `testing.log` file -

```
2013-11-22 12:59:34,782 - agentlogger - WARNING - warn message
2013-11-22 12:59:34,782 - agentlogger - ERROR - error message
2013-11-22 12:59:34,782 - agentlogger - CRITICAL - critical message
2013-11-22 12:59:34,782 - agentlogger - ERROR - i = 0
2013-11-22 12:59:34,782 - agentlogger - ERROR - i = 1
2013-11-22 12:59:34,783 - agentlogger - ERROR - i = 2
2013-11-22 12:59:34,783 - agentlogger - ERROR - i = 3
2013-11-22 12:59:34,783 - agentlogger - ERROR - i = 4
2013-11-22 12:59:34,783 - agentlogger - ERROR - i = 5
2013-11-22 12:59:34,783 - agentlogger - ERROR - i = 6
2013-11-22 12:59:34,784 - agentlogger - ERROR - i = 7
2013-11-22 12:59:34,784 - agentlogger - ERROR - i = 8
2013-11-22 12:59:34,784 - agentlogger - ERROR - i = 9
```

It somehow doesn't print the `INFO` and `DEBUG` messages into the file. Any thoughts on why it is not working?

Also, right now I have defined everything in this Python file for logging purposes. I want to define the above things in a `logging conf` file and read it using the `fileConfig()` function. I am not sure how to use the `RotatingFileHandler` example in the `logging.conf` file.

**UPDATE:-**

Below is my updated Python code that I have modified to use with the `log.conf` file -

```
#!/usr/bin/python

import logging
import logging.handlers

my_logger = logging.getLogger(' ')
my_logger.config.fileConfig('log.conf')

my_logger.debug('debug message')
my_logger.info('info message')
my_logger.warn('warn message')
my_logger.error('error message')
my_logger.critical('critical message')

# Log some messages
for i in range(10):
    my_logger.error('i = %d' % i)
```

And below is my `log.conf` file -

```
[loggers]
keys=root

[handlers]
keys=logfile

[formatters]
keys=logfileformatter

[logger_root]
level=DEBUG
handlers=logfile

[logger_zkagentlogger]
level=DEBUG
handlers=logfile
qualname=zkagentlogger
propagate=0

[formatter_logfileformatter]
format=%(asctime)s %(name)-12s: %(levelname)s %(message)s

[handler_logfile]
class=handlers.RotatingFileHandler
level=NOTSET
args=('testing.log',2000,100)
formatter=logfileformatter
```

But whenever I run it, this is the error I get on my console -

```
$ python logtest3.py
Traceback (most recent call last):
  File "logtest3.py", line 6, in <module>
    my_logger.config.fileConfig('log.conf')
AttributeError: 'Logger' object has no attribute 'config'
```

Any idea what I am doing wrong here?
2013/11/22
[ "https://Stackoverflow.com/questions/20154490", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
> It doesn't print out INFO, DEBUG message into the file somehow.. Any
> thoughts why it is not working out?

You don't seem to set a log level, so the default (WARNING) is used. From <http://docs.python.org/2/library/logging.html>:

> Note that the root logger is created with level WARNING.

As for your second question, something like this should do the trick (I haven't tested it, just adapted it from my config which uses the TimedRotatingFileHandler):

```
[loggers]
keys=root

[handlers]
keys=logfile

[formatters]
keys=logfileformatter

[logger_root]
level=DEBUG
handlers=logfile

[formatter_logfileformatter]
format=%(asctime)s %(name)-12s: %(levelname)s %(message)s

[handler_logfile]
class=handlers.RotatingFileHandler
level=NOTSET
args=('testing.log','a',2000,100)
formatter=logfileformatter
```
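For the original (non-config-file) script, the fix is a single line, using the names from the question; without it only WARNING and above reach the handler:

```python
# allow DEBUG and INFO records through; the default effective level is WARNING
my_logger.setLevel(logging.DEBUG)
```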
I know it is very late, but I just got the same error, and while searching around I found your problem. I was able to resolve my problem, and I thought it might be helpful for other users as well:

You have created a logger object and are trying to access **my\_logger.config.fileConfig('log.conf')**, which is wrong. You should use **logging.config.fileConfig('log.conf')** as I show below, and you need to import **logging.config** as well:

```
#!/usr/bin/python
import logging
import logging.handlers
import logging.config

logging.config.fileConfig('log.conf', disable_existing_loggers=0)
my_logger = logging.getLogger('your logger name as mentioned in your conf file')
my_logger.debug('debug message')
my_logger.info('info message')
my_logger.warn('warn message')
my_logger.error('error message')
my_logger.critical('critical message')
```

After doing these changes, the **AttributeError: 'Logger' object has no attribute 'config'** error should be gone.
2,495
20,420,937
I have a Python script that sets the IPv4 address for my wireless and wired interfaces. So far, I use a `subprocess` command like:

```
subprocess.call(["ip addr add local 192.168.1.2/24 broadcast 192.168.1.255 dev wlan0"])
```

How can I set the IPv4 address of an interface using Python libraries? And is there any way to get the already existing IP configuration using Python libraries?
2013/12/06
[ "https://Stackoverflow.com/questions/20420937", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2468276/" ]
Set an address via the older `ioctl` interface:

```
import socket, struct, fcntl

# SIOCSIFADDR is the "set interface address" ioctl from <linux/sockios.h>
SIOCSIFADDR = 0x8916
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

def setIpAddr(iface, ip):
    bin_ip = socket.inet_aton(ip)
    # pack a struct ifreq: 16-byte interface name followed by a sockaddr_in
    # (2-byte family, 2 bytes padding, 4-byte address, 8 bytes padding)
    ifreq = struct.pack('16sH2s4s8s', iface, socket.AF_INET, '\x00' * 2, bin_ip, '\x00' * 8)
    fcntl.ioctl(sock, SIOCSIFADDR, ifreq)

setIpAddr('em1', '192.168.0.1')
```

(setting the netmask is done with `SIOCSIFNETMASK = 0x891C`)

Ip addresses can be retrieved in the same way: [Finding local IP addresses using Python's stdlib](https://stackoverflow.com/questions/166506/finding-local-ip-addresses-using-pythons-stdlib/9267833#9267833)

I believe there is a Python implementation of Netlink should you want to use that over `ioctl`.
You have multiple options to do it from your Python program.

One option is to use the `ip` tool like you showed. While this is not the best option, it usually does the job, even if it is a little slow and awkward to program.

Another way is to do what `ip` does on your own by using the kernel netlink interface directly. I know that [libnl](http://www.carisma.slowglass.com/~tgr/libnl/) has some experimental (?) Python bindings. This may work, but you will have to deal with a lot of low-level stuff. I wouldn't recommend this way for simple "set and get" of IPs, but it's the most "correct" and reliable way to do so.

The best way in my opinion (if you only want to set and get IPs) would be to use the NetworkManager dbus interface. While this is very limited and may have its own problems (it might not behave the way you would like it to), it is the most straightforward way if the NetworkManager is running anyway.

So, choose the `libnl` approach if you want to get your hands dirty; it's clearly superior but also way more work. You may also get away with the NetworkManager dbus interface, depending on your needs and general system setup. Otherwise you can just leave it the way it is.
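As a middle ground between shelling out to `ip` and hand-rolling netlink, a hedged sketch with the third-party pyroute2 library (a pure-Python netlink client, not mentioned in the answers above) might look like this, reusing the interface and address from the question:

```python
from pyroute2 import IPRoute

# talks netlink directly, no subprocess involved; needs root like `ip` does
ip = IPRoute()
idx = ip.link_lookup(ifname='wlan0')[0]            # interface name -> index
ip.addr('add', index=idx, address='192.168.1.2', mask=24)
ip.close()
```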
2,496
48,756,249
I have 2 models `Task` and `TaskImage` which is a collection of images belonging to `Task` object. What I want is to be able to add multiple images to my `Task` object, but I can only do it using 2 models. Currently, when I add images, it doesn't let me upload them and save new objects. **settings.py** ``` MEDIA_ROOT = os.path.join(BASE_DIR, 'media') MEDIA_URL = '/media/' ``` **serializers.py** ``` class TaskImageSerializer(serializers.ModelSerializer): class Meta: model = TaskImage fields = ('image',) class TaskSerializer(serializers.HyperlinkedModelSerializer): user = serializers.ReadOnlyField(source='user.username') images = TaskImageSerializer(source='image_set', many=True, read_only=True) class Meta: model = Task fields = '__all__' def create(self, validated_data): images_data = validated_data.pop('images') task = Task.objects.create(**validated_data) for image_data in images_data: TaskImage.objects.create(task=task, **image_data) return task ``` **models.py** ``` class Task(models.Model): title = models.CharField(max_length=100, blank=False) user = models.ForeignKey(User) def save(self, *args, **kwargs): super(Task, self).save(*args, **kwargs) class TaskImage(models.Model): task = models.ForeignKey(Task, on_delete=models.CASCADE) image = models.FileField(blank=True) ``` However, when I do a post request: [![enter image description here](https://i.stack.imgur.com/NndxK.png)](https://i.stack.imgur.com/NndxK.png) I get the following traceback: > > File > "/Applications/Anaconda/anaconda/envs/godo/lib/python3.6/site-packages/django/core/handlers/exception.py" > in inner > 41. response = get\_response(request) > > > File > "/Applications/Anaconda/anaconda/envs/godo/lib/python3.6/site-packages/django/core/handlers/base.py" > in \_get\_response > 187. response = self.process\_exception\_by\_middleware(e, request) > > > File > "/Applications/Anaconda/anaconda/envs/godo/lib/python3.6/site-packages/django/core/handlers/base.py" > in \_get\_response > 185. response = wrapped\_callback(request, \*callback\_args, \*\*callback\_kwargs) > > > File > "/Applications/Anaconda/anaconda/envs/godo/lib/python3.6/site-packages/django/views/decorators/csrf.py" > in wrapped\_view > 58. return view\_func(\*args, \*\*kwargs) > > > File > "/Applications/Anaconda/anaconda/envs/godo/lib/python3.6/site-packages/rest\_framework/viewsets.py" > in view > 95. return self.dispatch(request, \*args, \*\*kwargs) > > > File > "/Applications/Anaconda/anaconda/envs/godo/lib/python3.6/site-packages/rest\_framework/views.py" > in dispatch > 494. response = self.handle\_exception(exc) > > > File > "/Applications/Anaconda/anaconda/envs/godo/lib/python3.6/site-packages/rest\_framework/views.py" > in handle\_exception > 454. self.raise\_uncaught\_exception(exc) > > > File > "/Applications/Anaconda/anaconda/envs/godo/lib/python3.6/site-packages/rest\_framework/views.py" > in dispatch > 491. response = handler(request, \*args, \*\*kwargs) > > > File > "/Applications/Anaconda/anaconda/envs/godo/lib/python3.6/site-packages/rest\_framework/mixins.py" > in create > 21. self.perform\_create(serializer) > > > File "/Users/gr/Desktop/PycharmProjects/godo/api/views.py" in > perform\_create > 152. serializer.save(user=self.request.user) > > > File > "/Applications/Anaconda/anaconda/envs/godo/lib/python3.6/site-packages/rest\_framework/serializers.py" > in save > 214. self.instance = self.create(validated\_data) > > > File "/Users/gr/Desktop/PycharmProjects/godo/api/serializers.py" in > create > 67. 
images\_data = validated\_data.pop('images')
>
> Exception Type: KeyError at /api/tasks/
> Exception Value: 'images'
2018/02/12
[ "https://Stackoverflow.com/questions/48756249", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4729764/" ]
**Description of the issue**

The origin of the exception was a `KeyError`, because of this statement:

```
images_data = validated_data.pop('images')
```

This happens because the validated data has no key `images`: the `images` field doesn't validate the image inputs from Postman. Django POST requests store `InMemoryUploadedFile` objects in `request.FILES`, so we use it for fetching the files. Also, you want to upload multiple images at once, so you have to use different image names in the upload (in Postman).

Change your `serializer` to this:

```
class TaskSerializer(serializers.HyperlinkedModelSerializer):
    user = serializers.ReadOnlyField(source='user.username')
    images = TaskImageSerializer(source='taskimage_set', many=True, read_only=True)

    class Meta:
        model = Task
        fields = ('id', 'title', 'user', 'images')

    def create(self, validated_data):
        images_data = self.context.get('view').request.FILES
        task = Task.objects.create(title=validated_data.get('title', 'no-title'),
                                   user_id=1)
        for image_data in images_data.values():
            TaskImage.objects.create(task=task, image=image_data)
        return task
```

I don't know about your view, but I'd suggest using a `ModelViewSet` as the view class:

```
class Upload(ModelViewSet):
    serializer_class = TaskSerializer
    queryset = Task.objects.all()
```

Postman console:

[![enter image description here](https://i.stack.imgur.com/QeTGn.png)](https://i.stack.imgur.com/QeTGn.png)

DRF result:

```
{
    "id": 12,
    "title": "This Is Task Title",
    "user": "admin",
    "images": [
        {
            "image": "http://127.0.0.1:8000/media/Screenshot_from_2017-12-20_07-18-43_tNIbUXV.png"
        },
        {
            "image": "http://127.0.0.1:8000/media/game-of-thrones-season-valar-morghulis-wallpaper-1366x768_3bkMk78.jpg"
        },
        {
            "image": "http://127.0.0.1:8000/media/IMG_212433_lZ2Mijj.jpg"
        }
    ]
}
```

**UPDATE**

This is the answer to your comment. In Django, a reverse ForeignKey is accessed using `_set`; see the [official doc](https://docs.djangoproject.com/en/dev/topics/db/queries/#following-relationships-backward). Here, `Task` and `TaskImage` are in a one-to-many relationship, so if you have one `Task` instance, you can get all related `TaskImage` instances through this reverse look-up feature. Here is an example:

```
task_instance = Task.objects.get(id=1)
task_img_set_all = task_instance.taskimage_set.all()
```

Here `task_img_set_all` will be equal to `TaskImage.objects.filter(task_id=1)`.
You have `read_only` set to true on the nested `TaskImageSerializer` field, so the images are stripped during input validation and never appear in `validated_data`.
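To make that concrete, here is a hedged sketch of the alternative this implies: letting DRF validate the nested data itself instead of reading `request.FILES`. Note that with `source='taskimage_set'` the validated items land under that source key, not under `images`; `Meta` is assumed unchanged from the question:

```python
class TaskSerializer(serializers.HyperlinkedModelSerializer):
    user = serializers.ReadOnlyField(source='user.username')
    # writable nested field: dropping read_only=True lets the items
    # pass validation and reach create()
    images = TaskImageSerializer(source='taskimage_set', many=True,
                                 required=False)

    def create(self, validated_data):
        # nested data is keyed by the field's source, not its name
        images_data = validated_data.pop('taskimage_set', [])
        task = Task.objects.create(**validated_data)
        for image_data in images_data:
            TaskImage.objects.create(task=task, **image_data)
        return task
```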
2,498
7,461,570
I'm trying to build the most recent version of OpenCV on a minimal enough VPS but am running into trouble with CMake. I'm not familiar with CMake so I'm finding it difficult to interpret the log output and thus how to proceed to debug the problem. From the command line (x11 isn't installed) and within devel/OpenCV/-2.3.1/release I issue the following ``` sudo cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local .. ``` and the result of this is the following: ``` -- Extracting svn version, please wait... -- SVNVERSION: exported -- Detected version of GNU GCC: 44 (404) -- Could NOT find ZLIB (missing: ZLIB_LIBRARY ZLIB_INCLUDE_DIR) -- Could NOT find ZLIB (missing: ZLIB_LIBRARY ZLIB_INCLUDE_DIR) -- Could NOT find PNG (missing: PNG_LIBRARY PNG_PNG_INCLUDE_DIR) -- Could NOT find TIFF (missing: TIFF_LIBRARY TIFF_INCLUDE_DIR) -- Could NOT find JPEG (missing: JPEG_LIBRARY JPEG_INCLUDE_DIR) -- Use NumPy headers from: /usr/lib/python2.6/site-packages/numpy-1.6.1-py2.6-linux-i686.egg/numpy/core/include -- Found Sphinx 0.6.6: /usr/bin/sphinx-build -- Parsing 'cvconfig.h.cmake' -- -- General configuration for opencv 2.3.1 ===================================== -- -- Built as dynamic libs?: YES -- Compiler: /usr/bin/c++ -- C++ flags (Release): -Wall -pthread -march=i686 -ffunction-sections -O3 -DNDEBUG -fomit-frame-pointer -msse -msse2 -mfpmath=387 -DNDEBUG\ -- C++ flags (Debug): -Wall -pthread -march=i686 -ffunction-sections -g -O0 -DDEBUG -D_DEBUG -ggdb3 -- Linker flags (Release): -- Linker flags (Debug): -- -- GUI: -- GTK+ 2.x: NO -- GThread: NO -- -- Media I/O: -- ZLib: build -- JPEG: build -- PNG: build -- TIFF: build -- JPEG 2000: FALSE -- OpenEXR: NO -- OpenNI: NO -- OpenNI PrimeSensor Modules: NO -- XIMEA: NO -- -- Video I/O: -- DC1394 1.x: NO -- DC1394 2.x: NO -- FFMPEG: NO -- codec: NO -- format: NO -- util: NO -- swscale: NO -- gentoo-style: NO -- GStreamer: NO -- UniCap: NO -- PvAPI: NO -- V4L/V4L2: FALSE/FALSE -- Xine: NO -- -- Other third-party libraries: -- Use IPP: NO -- Use TBB: NO -- Use ThreadingFramework: NO -- Use Cuda: NO -- Use Eigen: NO -- -- Interfaces: -- Python: NO -- Python interpreter: /usr/bin/python2.6 -B (ver 2.6) -- Python numpy: YES -- Java: NO -- -- Documentation: -- Sphinx: /usr/bin/sphinx-build (ver 0.6.6) -- PdfLaTeX compiler: NO -- Build Documentation: NO -- -- Tests and samples: -- Tests: YES -- Examples: NO -- -- Install path: /usr/local -- -- cvconfig.h is in: /home/ec2-user/OpenCV-2.3.1/release -- ----------------------------------------------------------------- -- -- Configuring incomplete, errors occurred! ``` Also when I run the command I also seem to be getting the following error message CMake Error at CMakeLists.txt:44 (set\_property): set\_property given invalid scope CACHE. Valid scopes are GLOBAL, DIRECTORY, TARGET, SOURCE, TEST. Line 42-45 is the following: ``` set(CMAKE_CONFIGURATION_TYPES "Debug;Release" CACHE STRING "Configs" FORCE) if(DEFINED CMAKE_BUILD_TYPE) set_property( CACHE CMAKE_BUILD_TYPE PROPERTY STRINGS ${CMAKE_CONFIGURATION_TYPES} ) endif() ``` However I'm not sure what this means? Does aNyone have any pointers? Many thanks
2011/09/18
[ "https://Stackoverflow.com/questions/7461570", "https://Stackoverflow.com", "https://Stackoverflow.com/users/413797/" ]
Check your CMake version. Support for `set_property(CACHE ... )` was implemented in 2.8.0.

If upgrading CMake is not an option for you, I guess it's safe to comment out line #44. It seems to be used only to create the values for a drop-down list in the GUI.

<http://www.kitware.com/blog/home/post/82>

<http://blog.bethcodes.com/cmake-tips-tricks-drop-down-list>
I've experienced lots of errors building OpenCV that were caused by the wrong version of CMake. I successfully built OpenCV 3.0 using CMake 3.0 (though CMake 2.6 did not work for me). Then, when I found I had to downgrade to OpenCV 2.4.9, I had to go back to my system's default CMake 2.6, as CMake 3.0 did not work. The first thing to check if you get errors when running CMake on OpenCV is the version.
2,499
24,044,734
I'm looking for a way to use pandas and Python to combine several columns in an Excel sheet with known column names into a new, single one, keeping all the important information, as in the example below:

input:

```
ID,tp_c,tp_b,tp_p
0,transportation - cars,transportation - boats,transportation - planes
1,checked,-,-
2,-,checked,-
3,checked,checked,-
4,-,checked,checked
5,checked,checked,checked
```

desired output:

```
ID,tp_all
0,transportation
1,cars
2,boats
3,cars+boats
4,boats+planes
5,cars+boats+planes
```

The row with ID 0 contains a description of the contents of the column. Ideally the code would parse the description in the second row, look after the '-', and concatenate those values in the new "tp_all" column.
2014/06/04
[ "https://Stackoverflow.com/questions/24044734", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3700450/" ]
OK a more dynamic method: ``` In [63]: # get a list of the columns col_list = list(df.columns) # remove 'ID' column col_list.remove('ID') # create a dict as a lookup col_dict = dict(zip(col_list, [df.iloc[0][col].split(' - ')[1] for col in col_list])) col_dict Out[63]: {'tp_b': 'boats', 'tp_c': 'cars', 'tp_p': 'planes'} In [64]: # define a func that tests the value and uses the dict to create our string def func(x): temp = '' for col in col_list: if x[col] == 'checked': if len(temp) == 0: temp = col_dict[col] else: temp = temp + '+' + col_dict[col] return temp df['combined'] = df[1:].apply(lambda row: func(row), axis=1) df Out[64]: ID tp_c tp_b tp_p \ 0 0 transportation - cars transportation - boats transportation - planes 1 1 checked NaN NaN 2 2 NaN checked NaN 3 3 checked checked NaN 4 4 NaN checked checked 5 5 checked checked checked combined 0 NaN 1 cars 2 boats 3 cars+boats 4 boats+planes 5 cars+boats+planes [6 rows x 5 columns] In [65]: df = df.ix[1:,['ID', 'combined']] df Out[65]: ID combined 1 1 cars 2 2 boats 3 3 cars+boats 4 4 boats+planes 5 5 cars+boats+planes [5 rows x 2 columns] ```
Here is one way: ``` newCol = pandas.Series('',index=d.index) for col in d.ix[:, 1:]: name = '+' + col.split('-')[1].strip() newCol[d[col]=='checked'] += name newCol = newCol.str.strip('+') ``` Then: ``` >>> newCol 0 cars 1 boats 2 cars+boats 3 boats+planes 4 cars+boats+planes dtype: object ``` You can create a new DataFrame with this column or do what you like with it. Edit: I see that you have edited your question so that the names of the modes of transportation are now in row 0 instead of in the column headers. It is easier if they're in the column headers (as my answer assumes), and your new column headers don't seem to contain any additional useful information, so you should probably start by just setting the column names to the info from row 0, and deleting row 0.
2,500
43,470,010
I am getting acquainted with Haskell, currently writing my third "homework" for a course I found on the web. The homework assignment needs to be presented\* in a file, named `Golf.hs`, starting with `module Golf where`. All well and good; this seems to be idiomatic in the language. However, I am used to Python modules ending in `if __name__ == "__main__":`, where one can put tests over the module, including during module development. `ghc` doesn't seem happy with such an approach:

```
$ ghc Golf.hs -o Golf && ./Golf

<no location info>: error:
    output was redirected with -o, but no output will be generated
```

Even though using `cabal` seems to be the norm, I would also like to understand the raw command-line invocations that make programs work. `ghci` seems to be another approach to testing newly written code, yet reloading modules is a pain. What is the easiest way to write some invocations of my functions with predefined test data and observe the result on `stdout`?

\* - for students who actually attend the course; I just follow the lecture notes and strive to complete the homework

Golf2.hs:

```
{-# OPTIONS_GHC -Wall #-}

module Golf2 where

foo :: Int -> Int
foo n = 42

main = putStr "Hello"
```

The output:

```
$ ghc Golf2.hs -o Golf2
[1 of 1] Compiling Golf             ( Golf2.hs, Golf2.o )

Golf2.hs:6:5: warning: [-Wunused-matches]
    Defined but not used: ‘n’

Golf2.hs:8:1: warning: [-Wmissing-signatures]
    Top-level binding with no type signature: main :: IO ()

<no location info>: error:
    output was redirected with -o, but no output will be generated
    because there is no Main module.
```
2017/04/18
[ "https://Stackoverflow.com/questions/43470010", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1145760/" ]
Firstly, you are using **imgpt1** in every case, which should not be the scenario. Rather, use

```
v.getTag().equals("xxx")
```

After resolving that, try to follow the best Android practice for comparing strings. The best practice when checking strings in Android (Java) is to first check for null and empty strings using:

```
String string1 = "abc", string2 = "Abc";

TextUtils.isEmpty(string1); // Returns true if the string is empty or null
```

Then check for equality by using the code below:

```
string1.equals(string2) //Checks with case sensitivity

string1.equalsIgnoreCase(string2) // Checks without case sensitivity. Here this will return true.
```
You should check the tag of the `view`, not the static item, as those conditions will always be true! Look at your first condition: `imgpt1`'s tag is "`frontbumpers`", thus that condition is always true, hence it shows the same message every time.

```
@Override
public void onClick(View v) {
    String message="";
    // v is the view that was clicked, e.g. imgpt1, 2, 3, 4... any view which
    // has this onClick listener assigned and was clicked will call this method.
    if(v.getTag()=="frontbumpers") {
        message="This is Bumper";
    } else if(v.getTag()=="frontfenders") {
        message="This is Fenders";
    } else if(v.getTag()=="frontheadlight") {
        message="This is headlight";
    } else if(v.getTag()=="frontgrilles") {
        message="This is grilles";
    }
}
```

> \*\* It is recommended to use `equals()` rather than `==` for `String`: change to `v.getTag().equals("someValueYouWantToCheck")`
2,502
17,659,010
I'm trying to use the ctypes module to call, from within a python program, a (fortran) library of linear algebra routines that I have written. I have successfully imported the library and can call my *subroutines* and functions that return a single value. My problem is calling functions that return an array of doubles. I can't figure out how to specify the return type. As a result, I get segfaults whenever I call a function like that. Here's a minimum working example, a routine to take the cross product between two 3-vectors: ``` !**************************************************************************************** ! Given vectors a and b, c = a x b function cross_product(a,b) real(dp) a(3), b(3), cross_product(3) cross_product = (/a(2)*b(3) - a(3)*b(2), & a(3)*b(1) - a(1)*b(3), & a(1)*b(2) - a(2)*b(1)/) end function cross_product ``` Here's my python script: ``` #!/usr/bin/python from ctypes import byref, cdll, c_double testlib = cdll.LoadLibrary('/Users/hart/codes/celib/trunk/libutils.so') cross = testlib.vector_matrix_utilities_mp_cross_product_ a = (c_double * 3)() b = (c_double * 3)() a[0] = c_double(0.0) a[1] = c_double(1.0) a[2] = c_double(2.0) b[0] = c_double(1.0) b[1] = c_double(3.0) b[2] = c_double(2.0) print a,b cross.restype = c_double * 3 print cross.restype print cross(byref(a),byref(b)) ``` And here's the output: ``` goku:~/python/ctypes> ./test_example.py <__main__.c_double_Array_3 object at 0x10399b710> <__main__.c_double_Array_3 object at 0x10399b7a0> <class '__main__.c_double_Array_3'> Segmentation fault: 11 goku:~/python/ctypes> ``` I've tried different permutations for the line "cross.restype = ..." but I can't figure out what should actually go there. Thanks for reading this question. --Gus
2013/07/15
[ "https://Stackoverflow.com/questions/17659010", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1733205/" ]
The compiler may return a pointer to the array, or the array descriptor... So, when mixing languages, you should always use `bind(C)`, except when the wrapper specifically supports Fortran. And (not surprisingly) `bind(C)` functions cannot return arrays.

You could theoretically allocate the array and return a `type(c_ptr)` to it, but how would you deallocate it after use? So my suggestion is to use a subroutine.
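For illustration, here is a hedged sketch of the Python side of that subroutine approach, assuming the Fortran routine is rewritten as a `bind(C)` subroutine that fills a caller-provided output argument (the assumed interface is shown in the comment):

```python
import ctypes

# Assumed Fortran side (not the original function from the question):
#   subroutine cross_product(a, b, c) bind(C, name="cross_product")
#     real(c_double), intent(in)  :: a(3), b(3)
#     real(c_double), intent(out) :: c(3)
lib = ctypes.CDLL('./libutils.so')

Vec3 = ctypes.c_double * 3
a = Vec3(0.0, 1.0, 2.0)
b = Vec3(1.0, 3.0, 2.0)
c = Vec3()                  # output buffer owned by Python: nothing to free
lib.cross_product(a, b, c)  # ctypes passes arrays as pointers automatically
print(list(c))              # expected: [-4.0, 2.0, -1.0]
```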
With gfortran the function call has a hidden argument: ``` >>> from ctypes import * >>> testlib = CDLL('./libutils.so') >>> cross = testlib.cross_product_ >>> a = (c_double * 3)(*[0.0, 1.0, 2.0]) >>> b = (c_double * 3)(*[1.0, 3.0, 2.0]) >>> c = (c_double * 3)() >>> pc = pointer(c) >>> cross(byref(pc), a, b) 3 >>> c[:] [-4.0, 2.0, -1.0] ``` But [Vladimir's suggestion](https://stackoverflow.com/a/17664115/205580) to use `bind(C)` and a subroutine is the better way to go. FYI, arrays become pointers in C function calls, so using `byref` is redundant. I needed `byref` and `pointer` in order to create a `double **` for the hidden argument.
2,512
5,762,766
I've created a little helper application using Python and GTK. I've never used GTK before. As per the comment on <http://www.pygtk.org/> I used the PyGObject interface. Now I would like to add spell checking to my Gtk.TextBuffer. I found a library called GtkSpell and an associated python-gtkspell in the package manager, but when I try to import it it fails with "ImportError: cannot import name TextView from gtk", I presume this means it is using PyGtk instead of PyGObject. Is there someway to get this working with PyGObject? Or some other premade GTK spellcheck system I can use instead?
2011/04/23
[ "https://Stackoverflow.com/questions/5762766", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10471/" ]
I wrote one yesterday, because I had the same problem, so it's a bit alpha, but it works fine. You can get the source from: <https://github.com/koehlma/pygtkspellcheck>.

It requires [pyenchant](http://packages.python.org/pyenchant/), and I have only tested it with Python 3 on Arch Linux. If something doesn't work, feel free to file a bug report on GitHub. You have to install it with `python3 setup.py install`.

It consists of two packages: `gtkspellcheck`, which does the spellchecking, and `pylocale`, which provides human-readable, internationalized names for language codes like `de_DE` or `en_US`.

Because there is no documentation yet, an example:

```python
# -*- coding:utf-8 -*-
import locale
from gtkspellcheck import SpellChecker, languages, language_exists
from gi.repository import Gtk as gtk

for code, name in languages:
    print('code: %5s, language: %s' % (code, name))

window = gtk.Window.new(gtk.WindowType(0))
view = gtk.TextView.new()

if language_exists(locale.getdefaultlocale()[0]):
    spellchecker = SpellChecker(view, locale.getdefaultlocale()[0])
else:
    spellchecker = SpellChecker(view)

window.set_default_size(600, 400)
window.add(view)
window.show_all()
window.connect('delete-event', lambda widget, event: gtk.main_quit())
gtk.main()
```
I'm afraid that the PyGObject interface is new enough that GtkSpell hasn't been updated to use it yet. As far as I know there is no other premade GTK spell checker.
2,513
61,893,719
I am trying to load my saved model from `s3` using `joblib` ``` import pandas as pd import numpy as np import json import subprocess import sqlalchemy from sklearn.externals import joblib ENV = 'dev' model_d2v = load_d2v('model_d2v_version_002', ENV) def load_d2v(fname, env): model_name = fname if env == 'dev': try: model=joblib.load(model_name) except: s3_base_path='s3://sd-flikku/datalake/doc2vec_model' path = s3_base_path+'/'+model_name command = "aws s3 cp {} {}".format(path,model_name).split() print('loading...'+model_name) subprocess.call(command) model=joblib.load(model_name) else: s3_base_path='s3://sd-flikku/datalake/doc2vec_model' path = s3_base_path+'/'+model_name command = "aws s3 cp {} {}".format(path,model_name).split() print('loading...'+model_name) subprocess.call(command) model=joblib.load(model_name) return model ``` But I get this error: ``` from sklearn.externals import joblib ImportError: cannot import name 'joblib' from 'sklearn.externals' (C:\Users\prane\AppData\Local\Programs\Python\Python37\lib\site-packages\sklearn\externals\__init__.py) ``` Then I tried installing `joblib` directly by doing ``` import joblib ``` but it gave me this error ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 8, in load_d2v_from_s3 File "/home/ec2-user/.local/lib/python3.7/site-packages/joblib/numpy_pickle.py", line 585, in load obj = _unpickle(fobj, filename, mmap_mode) File "/home/ec2-user/.local/lib/python3.7/site-packages/joblib/numpy_pickle.py", line 504, in _unpickle obj = unpickler.load() File "/usr/lib64/python3.7/pickle.py", line 1088, in load dispatch[key[0]](self) File "/usr/lib64/python3.7/pickle.py", line 1376, in load_global klass = self.find_class(module, name) File "/usr/lib64/python3.7/pickle.py", line 1426, in find_class __import__(module, level=0) ModuleNotFoundError: No module named 'sklearn.externals.joblib' ``` Can you tell me how to solve this?
2020/05/19
[ "https://Stackoverflow.com/questions/61893719", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13114945/" ]
Maybe your code is outdated. For anyone who aims to use `fetch_mldata` in a handwritten-digit project, you should use `fetch_openml` instead. ([link](https://stackoverflow.com/questions/47324921/cant-load-mnist-original-dataset-using-sklearn/52297457))

In old versions of sklearn:

```
from sklearn.externals import joblib
from sklearn.datasets import fetch_mldata

mnist = fetch_mldata('MNIST original')
```

In **sklearn 0.23** (stable release), `joblib` is a top-level package of its own:

```
import joblib
import numpy as np
from sklearn import datasets

dataset = datasets.fetch_openml("mnist_784")
features = np.array(dataset.data, 'int16')
labels = np.array(dataset.target, 'int')
```

For more info about the deprecation of `fetch_mldata` see the scikit-learn [doc](https://scikit-learn.org/0.20/modules/generated/sklearn.datasets.fetch_mldata.html)
If you get an error on **from sklearn.externals import joblib**, that import path is deprecated and only works in older versions. For newer versions do the following:

1. conda install -c anaconda scikit-learn (install using the "Anaconda Prompt")
2. import joblib (Jupyter Notebook)
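Once the top-level import works, persisting and restoring models is unchanged; a minimal sketch using the model name from the question (`model` stands in for whatever trained object you dumped):

```python
import joblib

joblib.dump(model, 'model_d2v_version_002')   # save any picklable object
model = joblib.load('model_d2v_version_002')  # restore it later
```

Note that a file pickled with the old `sklearn.externals.joblib` may still fail to load, because the pickle stream references the old module path (that is exactly the `ModuleNotFoundError` in the question's second traceback); re-dumping the model with plain `joblib` avoids that.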
2,514
65,590,149
I am trying to make a Python script that will make a payment automatically on [this](https://www.audiobooks.com/signup) site. I am able to fill the credit-card-number input, but I can't access the expiry month or CVV.

**Code I tried**

I used this to get the credit card number field:

```
WebDriverWait(browser, 20).until(EC.frame_to_be_available_and_switch_to_it((By.XPATH,"//iframe[@id='braintree-hosted-field-number']")))

WebDriverWait(browser, 20).until(EC.element_to_be_clickable((By.XPATH, "//input[@class='number' and @id='credit-card-number']"))).send_keys("0000000000000000")
```

I used the same approach for the expiry month field, like this:

```
WebDriverWait(browser, 20).until(EC.frame_to_be_available_and_switch_to_it((By.XPATH, '//iframe[@id="braintree-hosted-field-expirationMonth"]')))

WebDriverWait(browser, 60).until(EC.element_to_be_clickable((By.XPATH, "//input[@class='expirationMonth' and @id='expiration-month']"))).send_keys("12/2024")
```

But this code doesn't work.

So what I want is to detect the expiration field and also the CVV field; the method I used can't detect them.
2021/01/06
[ "https://Stackoverflow.com/questions/65590149", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10830982/" ]
If you switch to one iframe, you have to switch back to the default content before you can interact with another iframe outside the one the code is currently focused on. Use:

```
WebDriverWait(browser, 20).until(EC.frame_to_be_available_and_switch_to_it((By.XPATH,"//iframe[@id='braintree-hosted-field-number']")))

WebDriverWait(browser, 20).until(EC.element_to_be_clickable((By.XPATH, "//input[@class='number' and @id='credit-card-number']"))).send_keys("0000000000000000")

# leave the card-number iframe before entering the expiration-month iframe
browser.switch_to.default_content()

WebDriverWait(browser, 20).until(EC.frame_to_be_available_and_switch_to_it((By.XPATH, '//iframe[@id="braintree-hosted-field-expirationMonth"]')))

WebDriverWait(browser, 60).until(EC.element_to_be_clickable((By.XPATH, "//input[@class='expirationMonth' and @id='expiration-month']"))).send_keys("12/2024")
```
[![Try switch first, then catch the xpath](https://i.stack.imgur.com/Pp7gY.png)](https://i.stack.imgur.com/Pp7gY.png)

Try to switch to the iframe first; then you can locate the element with XPath.
2,524
54,058,184
I'm new to GCS and Cloud Functions and would like to understand how I can do a lightweight ETL using these two technologies combined with Python (3.7).

I have a GCS bucket called 'Test_1233' containing 3 files (all structurally identical). When a new file is added to this GCS bucket, I would like the following Python code to run, produce an 'output.csv' file, and save it in the same bucket.

The code I'm trying to run is below:

```
import pandas as pd
import glob
import os
import re
import numpy as np

path = os.getcwd()
files = os.listdir(path)  ## Originally this was intended for finding files in the local directory - I now need this adapted for finding files within GCS(!)

### Loading Files by Variable ###

df = pd.DataFrame()
data = pd.DataFrame()

for files in glob.glob('gs://test_1233/Test *.xlsx'):  ## attempts to find all relevant files within the GCS bucket
    data = pd.read_excel(files,'Sheet1',skiprows=1).fillna(method='ffill')
    date = re.compile(r'([\.\d]+ - [\.\d]+)').search(files).groups()[0]
    data['Date'] = date
    data['Start_Date'], data['End_Date'] = data['Date'].str.split(' - ', 1).str
    data['End_Date'] = data['End_Date'].str[:10]
    data['Start_Date'] = data['Start_Date'].str[:10]
    data['Start_Date'] =pd.to_datetime(data['Start_Date'],format ='%d.%m.%Y',errors='coerce')
    data['End_Date']= pd.to_datetime(data['End_Date'],format ='%d.%m.%Y',errors='coerce')
    df = df.append(data)

df

df['Product'] = np.where(df['Product'] =='BR: Tpaste Adv Wht 2x120g','ToothpasteWht2x120g',df['Product'])

## Stores cleaned data back into the same GCS bucket as a 'csv' file
df.to_csv('Test_Output.csv')
```

As I'm totally new to this, I'm not sure how to create the correct path to read all the files within the cloud environment (I used to read files from my local directory!). Any help would be most appreciated.
2019/01/06
[ "https://Stackoverflow.com/questions/54058184", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7638546/" ]
``` document.getElementById("loginField").getAttribute("name") ```
You can easily get it with the `attr` method:

```
var name = $("#id").attr("name");
```
2,526
23,653,147
I need to run a command as a different user in the %post section of an RPM. At the moment I am using a bit of a hack via python but it can't be the best way (it does feel a little dirty) ... ``` %post -p /usr/bin/python import os, pwd, subprocess os.setuid(pwd.getpwnam('apache')[2]) subprocess.call(['/usr/bin/something', 'an arg']) ``` Is there a proper way to do this?
2014/05/14
[ "https://Stackoverflow.com/questions/23653147", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2245703/" ]
If `/usr/bin/something` is something you are installing as part of the package, install it with something like ``` attr(4755, apache, apache) /usr/bin/something ``` When installed like this, `/usr/bin/something` will *always* run as user `apache`, regardless of what user actually runs it.
The accepted answer here is wrong IMO. It is not often at all that you want to set attributes allowing *anyone* to execute something as the owner.

If you want to run something as a specific user, and that user doesn't have a shell set, you can use `su -s` to set the shell to use. For example:

`su -s /bin/bash apache -c "/usr/bin/something an arg"`
2,527
7,988,772
I have already created a 64-bit program for Windows using cx_Freeze on a 64-bit machine. I am using Windows 7 64-bit Home Premium. py2exe is not an option because, as I understand it, it does not work with Python 3.2.2 yet. Is there an option I have to specify in cx_Freeze to compile in 32-bit instead of 64-bit? Thanks!
2011/11/02
[ "https://Stackoverflow.com/questions/7988772", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1026738/" ]
To produce 32 bit executables you need to install 32-bit versions of Python and cx\_freeze.
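Once the 32-bit interpreter and cx\_Freeze are installed, a minimal setup script would look like this (a sketch; the project name, script name and interpreter path are placeholders):

```
# setup.py -- run it with the 32-bit interpreter, e.g.:
#   C:\Python32\python.exe setup.py build
from cx_Freeze import setup, Executable

setup(
    name="myapp",                          # hypothetical project name
    version="0.1",
    executables=[Executable("myapp.py")],  # hypothetical entry script
)
```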
All the "produce an executable from Python code" methods I know of basically create a file that bundles up the Python interpreter with the Python code you want to execute inside a single file. It is nothing at all like compiling C code to an executable; Python is just about impossible to compile to machine code in any significantly more useful way than just gluing the Python bytecode to the machine code for a Python interpreter. So that's almost certainly why you can't produce a 32 bit exe from a 64 bit installation of Python; there isn't a 32 bit interpreter to embed in the output file.
2,528
41,448,447
I am trying to run a **list of tasks** (*here running airflow but it could be anything really*) that require to be executed in a existing Conda environment. I would like to do these tasks: ``` - name: activate conda environment # does not work, just for the sake of understanding command: source activate my_conda_env - name: initialize the database command: airflow initdb - name: start the web server command: 'airflow webserver -p {{ airflow_webserver_port }}' - name: start the scheduler command: airflow scheduler ``` Of course, this does not work as each task is independent and the `conda environment` activation in the first task is ignored by the following tasks. I guess the issue would be the same if using a `python virtualenv` instead of `conda`. How can I achieve each task being run in the Conda environment?
2017/01/03
[ "https://Stackoverflow.com/questions/41448447", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7370442/" ]
Each of your commands will be executed in a different process. The `source` command, on the other hand, is used for reading environment variables into the current process only (and its children), so it will apply only to the `activate conda environment` task.

What you can try to do is:

```
- name: initialize the database
  shell: source /full/path/to/conda/activate my_conda_env && airflow initdb
  args:
    executable: /bin/bash

- name: start the web server
  shell: 'source /full/path/to/conda/activate my_conda_env && airflow webserver -p {{ airflow_webserver_port }}'
  args:
    executable: /bin/bash

- name: start the scheduler
  shell: source /full/path/to/conda/activate my_conda_env && airflow scheduler
  args:
    executable: /bin/bash
```

First, check the full path to `activate` on the target machine with `which activate` (you need to do this before any environment is sourced). If Conda was installed in a user's space, you should use the same user for the Ansible connection.
I was looking for something similar and found a neater solution than having multiple actions:

```
- name: Run commands in conda environment
  shell: source activate my_conda_env && airflow {{ item }}
  with_items:
    - initdb
    - webserver -p {{ airflow_webserver_port }}
    - scheduler
```
2,531
51,273,827
I thought I read somewhere that python (3.x at least) is smart enough to handle this: ``` x = 1.01 if 1 < x < 0: print('out of range!') ``` However it is not working for me. I know I can use this instead: ``` if ((x > 1) | (x < 0)): print('out of range!') ``` ... but is it possible to fix the version above?
2018/07/10
[ "https://Stackoverflow.com/questions/51273827", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3126298/" ]
The chained comparison works fine; it is your expression that is always False. Try this one instead:

```
x = .99

if 1 > x > 0:
    print('out of range!')
```
You can do it in one *compound* expression, as you've already noted, and others have commented. You cannot do it in an expression with an implied conjunction (and / or), as you're trying to do with `1 < x < 0`. Your expression requires an `or` conjunction, but Python's implied operation in this case is `and`. Therefore, to get what you want, you have to reverse your conditional branches and apply deMorgan's laws: ``` if not(0 <= x <= 1): print('out of range!') ``` Now you have the implied `and` operation, and you get the control flow you wanted.
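A quick sanity check of that equivalence (a minimal sketch):

```
for x in (-0.5, 0.5, 1.01):
    # De Morgan: not (0 <= x and x <= 1)  ==  (x < 0) or (x > 1)
    assert (not (0 <= x <= 1)) == (x > 1 or x < 0)
    print(x, 'out of range!' if not (0 <= x <= 1) else 'in range')
```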
2,532
63,739,587
I've been following along with [Corey Schafer's awesome youtube tutorial](https://www.youtube.com/watch?v=MwZwr5Tvyxo&list=PL-osiE80TeTs4UjLw5MM6OjgkjFeUxCYH) on the basic flaskblog. In addition to Corey's code, I'd like to add logic where users have to verify their email address before being able to log in. I figured I'd do this with the URLSafeTimedSerializer from itsdangerous, as suggested by [prettyprinted here](https://www.youtube.com/watch?v=vF9n248M1yk). The whole token creation and verification process seems to work. Unfortunately, due to my very fresh Python knowledge, I can't figure out a clean way on my own to get that saved into the sqlite3 db. In my models I've created a Boolean column email\_confirmed with default=False, which I intend to change to True after the verification process. My question is: how do I best identify the user (for whom to alter the email\_confirmed column) when he clicks on his custom URL? Would it be good practice to also save the token inside a db column and then filter by that token to identify the user? Here is some of the relevant code:

**User Class in my models.py**

```
class User(db.Model, UserMixin):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(20), unique=True, nullable=False)
    email = db.Column(db.String(120), unique=True, nullable=False)
    image_file = db.Column(db.String(20), nullable=False, default='default_profile.jpg')
    password = db.Column(db.String(60), nullable=False)
    date_registered = db.Column(db.DateTime, nullable=False, default=datetime.utcnow)
    email_confirmed = db.Column(db.Boolean(), nullable=False, default=False)
    email_confirm_date = db.Column(db.DateTime)
    projects = db.relationship('Project', backref='author', lazy=True)

    def get_mail_confirm_token(self, expires_sec=1800):
        s = URLSafeTimedSerializer(current_app.config['SECRET_KEY'], expires_sec)
        return s.dumps(self.email, salt='email-confirm')

    @staticmethod
    def verify_mail_confirm_token(token):
        s = URLSafeTimedSerializer(current_app.config['SECRET_KEY'])
        try:
            return s.loads(token, salt='email-confirm', max_age=60)
        except SignatureExpired:
            return "PROBLEM"
```

**Registration Logic in my routes (using a users blueprint):**

```
@users.route('/register', methods=['GET', 'POST'])
def register():
    if current_user.is_authenticated:
        return redirect(url_for('dash.dashboard'))
    form = RegistrationForm()
    if form.validate_on_submit():
        hashed_password = bcrypt.generate_password_hash(form.password.data).decode('utf-8')
        user = User(username=form.username.data, email=form.email.data, password=hashed_password)
        db.session.add(user)
        db.session.commit()
        send_mail_confirmation(user)
        return redirect(url_for('users.welcome'))
    return render_template('register.html', form=form)

@users.route('/welcome')
def welcome():
    return render_template('welcome.html')

@users.route('/confirm_email/<token>')
def confirm_email(token):
    user = User.verify_mail_confirm_token(token)
    current_user.email_confirmed = True
    current_user.email_confirm_date = datetime.utcnow
    return user
```

The last parts, `current_user.email_confirmed = True` and `current_user.email_confirm_date = datetime.utcnow`, are probably the lines in question. As stated above, the desired entries aren't made because the user is not logged in at this stage yet. I'm grateful for any help on this! Thanks a lot in advance!
2020/09/04
[ "https://Stackoverflow.com/questions/63739587", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13828684/" ]
The key to your question is this: > > My question is: how do I best identify the user (for whom to alter the email\_confirmed Column) when he clicks on his custom url? > > > The answer can be seen [in the example on URL safe serialisation using itsdangerous](https://itsdangerous.palletsprojects.com/en/1.1.x/url_safe/). The token itself *contains* the e-mail address, because that's what you are using inside your `get_mail_confirm_token()` function. You can then use the serialiser to retrieve the e-mail address from that token. You can do that inside your `verify_mail_confirm_token()` function, but, because it's a static-method you still need a session. You can pass this in as a separate argument though without problem. You also should treat the `BadSignature` exception from `itsdangerous`. It would then become: ``` @staticmethod def verify_mail_confirm_token(session, token): s = URLSafeTimedSerializer(current_app.config['SECRET_KEY']) try: email = s.loads(token, salt='email-confirm', max_age=60) except (BadSignature, SignatureExpired): return "PROBLEM" user = session.query(User).filter(User.email == email).one_or_none() return user ``` > > Would it be a good practice to also save the token inside a db Column and then filter by that token to identify the user? > > > No. The token should be short-lived and should not be kept around. Finally, in your `get_mail_confirm_token` implementation you are not using the `URLSafeTimedSerializer` class correctly. You pass in a second argument called `expires_sec`, but if you [look at the docs](https://itsdangerous.palletsprojects.com/en/1.1.x/url_safe/#itsdangerous.url_safe.URLSafeTimedSerializer) you will see that the second argument is the salt, which might lead to unintended problems.
Thanks to @exhuma. Here is how I eventually got it to work; in addition, I'm posting the previously missing email-sending part.

**User Class in my models.py**

```
class User(db.Model, UserMixin):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(20), unique=True, nullable=False)
    email = db.Column(db.String(120), unique=True, nullable=False)
    image_file = db.Column(db.String(20), nullable=False, default="default_profile.jpg")
    password = db.Column(db.String(60), nullable=False)
    date_registered = db.Column(db.DateTime, nullable=False, default=datetime.utcnow)
    email_confirmed = db.Column(db.Boolean(), nullable=False, default=False)
    email_confirm_date = db.Column(db.DateTime)
    projects = db.relationship("Project", backref="author", lazy=True)

    def get_mail_confirm_token(self):
        s = URLSafeTimedSerializer(
            current_app.config["SECRET_KEY"], salt="email-confirm"
        )
        return s.dumps(self.email, salt="email-confirm")

    @staticmethod
    def verify_mail_confirm_token(token):
        try:
            s = URLSafeTimedSerializer(
                current_app.config["SECRET_KEY"], salt="email-confirm"
            )
            email = s.loads(token, salt="email-confirm", max_age=3600)
            return email
        except (SignatureExpired, BadSignature):
            return None
```

**Send Mail function in my utils.py**

```
def send_mail_confirmation(user):
    token = user.get_mail_confirm_token()
    msg = Message(
        "Please Confirm Your Email",
        sender="noreply@demo.com",
        recipients=[user.email],
    )
    msg.html = render_template("mail_welcome_confirm.html", token=token)
    mail.send(msg)
```

**Registration Logic in my routes.py (using a users blueprint):**

```
@users.route("/register", methods=["GET", "POST"])
def register():
    if current_user.is_authenticated:
        return redirect(url_for("dash.dashboard"))
    form = RegistrationForm()
    if form.validate_on_submit():
        hashed_password = bcrypt.generate_password_hash(form.password.data).decode(
            "utf-8"
        )
        user = User(
            username=form.username.data, email=form.email.data, password=hashed_password
        )
        db.session.add(user)
        db.session.commit()
        send_mail_confirmation(user)
        return redirect(url_for("users.welcome"))

    return render_template("register.html", form=form)


@users.route("/welcome")
def welcome():
    return render_template("welcome.html")


@users.route("/confirm_email/<token>")
def confirm_email(token):
    email = User.verify_mail_confirm_token(token)
    if email:
        user = db.session.query(User).filter(User.email == email).one_or_none()
        user.email_confirmed = True
        user.email_confirm_date = datetime.utcnow()
        db.session.add(user)
        db.session.commit()
        flash(
            "Your email has been verified and you can now login to your account",
            "success",
        )
        return redirect(url_for("users.login"))
    else:
        return render_template("errors/token_invalid.html")
```

**The only thing missing**, from my point of view, is simple conditional logic to check that email\_confirmed is True before logging in, as well as the same check inside the confirm\_email(token) function so that the process isn't repeatable if the user clicks the confirmation link several times. Thanks again! Hope this is of some help to anyone else!
2,535
17,457,608
I'm trying to time several things in Python, including upload time to Amazon's S3 Cloud Storage, and am having a little trouble. I can time my hash, and a few other things, but not the upload. I thought [this](https://stackoverflow.com/questions/7523767/how-to-use-python-timeit-when-passing-variables-to-functions) post would finally get me there, but I can't seem to find salvation. Any help would be appreciated. Very new to Python, thanks!

```
import timeit
import boto                  # needed for boto.connect_s3 below
from boto.s3.key import Key  # needed for Key below

accKey = r"xxxxxxxxxxx";
secKey = r"yyyyyyyyyyyyyyyyyyyyyyyyy";
bucket_name = 'sweet_data'
c = boto.connect_s3(accKey, secKey)
b = c.get_bucket(bucket_name);
k = Key(b);

p = '/my/aws.path'
f = 'C:\\my.file'

def upload_data(p, f):
    k.key = p
    k.set_contents_from_filename(f)
    return

t = timeit.Timer(lambda: upload_data(p, f), "from aws_lib import upload_data; p=%r; f = %r" % (p,f))
# Just calling the function works fine
#upload_data(p, f)
```
2013/07/03
[ "https://Stackoverflow.com/questions/17457608", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2407064/" ]
I know this is heresy in the Python community, but I actually recommend *not* to use `timeit`, especially for something like this. For your purposes, I believe it will be good enough (and possibly even better than `timeit`!) if you simply use `time.time()` to time things. In other words, do something like ``` from time import time t0 = time() myfunc() t1 = time() print t1 - t0 ``` Note that depending on your platform, you might want to try `time.clock()` instead (see Stack Overflow questions such as [this](https://stackoverflow.com/questions/85451/python-time-clock-vs-time-time-accuracy) and [this](https://stackoverflow.com/questions/1938048/high-precision-clock-in-python)), and if you're on Python 3.3, then you have [better options](http://docs.python.org/3/library/time.html), due to [PEP 418](http://www.python.org/dev/peps/pep-0418/).
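On Python 3.3+, the "better option" referred to above is `time.perf_counter()`; a minimal sketch, reusing the `myfunc` placeholder from the example:

```
from time import perf_counter  # Python 3.3+, see PEP 418

t0 = perf_counter()
myfunc()
print(perf_counter() - t0)
```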
You can use the command line interface to `timeit`. Just save your code as a module without the timing stuff. For example: ``` # file: test.py data = range(5) def foo(l): return sum(l) ``` Then you can run the timing code from the command line, like this: ``` $ python -mtimeit -s 'import test;' 'test.foo(test.data)' ``` See also: * <http://docs.python.org/2/library/timeit.html#command-line-interface> * <http://docs.python.org/2/library/timeit.html#examples>
2,536
48,344,035
**Scenario:** I am trying to work out a way to send a quick test message in Skype with Python code. From the documentation (<https://pypi.python.org/pypi/SkPy/0.1>) I got a snippet that should allow me to do that.

**Problem:** I filled in the information as expected, but I am getting an error when trying to create the connection to Skype in:

```
sk = Skype(username, password)
```

I get:

> SkypeAuthException: ("Couldn't retrieve t field from login response", )

I have no idea what this error means.

**Question:** Any idea on how to solve this?

**Code:** This is basically what I am using, plus my username and password:

```
from skpy import Skype
sk = Skype(username, password) # connect to Skype

sk.user # you
sk.contacts # your contacts
sk.chats # your conversations

ch = sk.contacts["joe.4"].chat # 1-to-1 conversation
ch.sendMsg(content) # plain-text message
```

**Question 2:** Is there any way to do this in which the password and username are not in the code? For example, would it be possible to use the Skype instance that is already open on that computer?
2018/01/19
[ "https://Stackoverflow.com/questions/48344035", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7321700/" ]
It may be that your server's IP has been blocked if you logged in from elsewhere recently. The following works for me.

```
from skpy import Skype
loggedInUser = Skype("userName", "password")
print(loggedInUser.users)     # logged-in user info
print(loggedInUser.contacts)  # logged-in user contacts
```

PS: skpy version: 0.8.1
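To send the test message from the question once logged in, a minimal sketch (the contact id `"joe.4"` is taken from the question and is an assumption here):

```
ch = loggedInUser.contacts["joe.4"].chat  # 1-to-1 conversation
ch.sendMsg("test message")                # plain-text message
```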
Try this:

```
def connect_skype(user, pwd, token):
    s = Skype(connect=False)
    s.conn.setTokenFile(token)
    try:
        s.conn.readToken()
    except SkypeAuthException:
        s.conn.setUserPwd(user, pwd)
        s.conn.getSkypeToken()
        s.conn.writeToken()
    finally:
        sk = Skype(user, pwd, tokenFile=token)
        return sk
```

The token parameter can be an empty file, but you need to create it before using this function. The function will write the client token to this file. If the problem persists, try signing in to Skype online; sometimes you need to update some account information, then try again.
2,537
34,004,510
I'm a beginner in the Python language. Is there a "try and except" function in Python to check whether the input is a LETTER or multiple LETTERS, and if it isn't, ask for input again? (I made one in which you have to enter an integer number.)

```
def validation(i):
    try:
        result = int(i)
        return(result)
    except ValueError:
        print("Please enter a number")

def start():
    x = input("Enter Number: ")
    z = validation(x)
    if z != None:
        #Rest of function code
        print("Success")
    else:
        start()

start()
```

When the above code is executed, and an integer number is entered, you get this:

```
Enter Number: 1
Success
```

If an invalid value, however, such as a letter or floating-point number, is entered, you get this:

```
Enter Number: Hello
Please enter a number
Enter Number: 4.6
Please enter a number
Enter Number: 
```

As you can see, it will keep looping until a valid **NUMBER** value is entered. So is it possible to use the "try and except" function to keep looping until a **letter** is entered? To make it clearer, I'll explain in vague structured English (not pseudocode, just to help):

```
print ("Hello this will calculate your lucky number")

# Note this isn't the whole program, it's just the validation section.

input (lucky number)

# English on what I want the code to do:

x = input (luckynumber)
```

So what I want is that if the variable "x" IS NOT a letter, or multiple letters, it should repeat this input (x) until the user enters a valid **letter** or multiple **letters**. In other words, if a letter(s) isn't entered, the program will not continue until the input is a letter(s). I hope this makes it clearer.
2015/11/30
[ "https://Stackoverflow.com/questions/34004510", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5622261/" ]
You can just call the same function again, in the try/except clause - to do that, you'll have to adjust your logic a bit: ``` def validate_integer(): x = input('Please enter a number: ') try: int(x) except ValueError: print('Sorry, {} is not a valid number'.format(x)) return validate_integer() return x def start(): x = validate_integer() if x: print('Success!') ```
Don't use recursion in Python when simple iteration will do. ``` def validate(i): try: result = int(i) return result except ValueError: pass def start(): z = None while z is None: x = input("Please enter a number: ") z = validate(x) print("Success") start() ```
2,538
4,393,830
In the process of trying to write a Python script that uses PIL today, I discovered I don't seem to have it on my local machine (OS X 10.5.8, default 2.5 Python install). So I run:

```
easy_install --prefix=/usr/local/python/ pil
```

and it complains a little about /usr/local/python/lib/python2.5/site-packages not yet existing, so I create it, and try again, and get this:

> TEST FAILED:
> /usr/local/python//lib/python2.5/site-packages
> does NOT support .pth files error: bad
> install directory or PYTHONPATH
>
> You are attempting to install a
> package to a directory that is not on
> PYTHONPATH and which Python does not
> read ".pth" files from. The
> installation directory you specified
> (via --install-dir, --prefix, or the
> distutils default setting) was:
>
> ```
> /usr/local/python//lib/python2.5/site-packages
>
> ```
>
> and your PYTHONPATH environment
> variable currently contains:
>
> ```
> ''
>
> ```

OK, fair enough -- I hadn't done anything to set the path. So I add a quick line to ~/.bash\_profile:

> PYTHONPATH="$PYTHONPATH:/usr/local/python/lib/python2.5"

and `source` it, and try again. Same error message. This is kind of curious, given that PYTHONPATH is clearly set; I can `echo $PYTHONPATH` and get back `:/usr/local/python/lib/python2.5`. I decided to check out what the include path looked like from inside:

```
import sys
print "\n".join(sys.path)
```

which yields:

> /System/Library/Frameworks/Python.framework/Versions/2.5/lib/python25.zip
> /System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5
> /System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/plat-darwin
> /System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/plat-mac
> /System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/plat-mac/lib-scriptpackages
> /System/Library/Frameworks/Python.framework/Versions/2.5/Extras/lib/python
> /System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/lib-tk
> /System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/lib-dynload
> /Library/Python/2.5/site-packages
> /System/Library/Frameworks/Python.framework/Versions/2.5/Extras/lib/python/PyObjC

from which `/usr/local/python/yadda/yadda` is notably missing. Not sure what I'm supposed to do here. How do I get Python to recognize this location as an include path?

**UPDATE**

As Sven Marnach suggested, I was neglecting to export PYTHONPATH. I've corrected that problem, and now see it show up when I print out `sys.path` from within Python. However, I still got the `TEST FAILED` error message I mentioned above, just with my new PYTHONPATH environment variable. So, I tried changing it from `/usr/local/python/lib/python2.5` to `/usr/local/python/lib/python2.5/site-packages`, exporting, and running the same `easy_install` command again.
This leads to an all-new result that at first *looked* like success (but isn't):

```
Creating /usr/local/python/lib/python2.5/site-packages/site.py
Searching for pil
Reading http://pypi.python.org/simple/pil/
Reading http://www.pythonware.com/products/pil
Reading http://effbot.org/zone/pil-changes-115.htm
Reading http://effbot.org/downloads/#Imaging
Best match: PIL 1.1.7
Downloading http://effbot.org/media/downloads/PIL-1.1.7.tar.gz
Processing PIL-1.1.7.tar.gz
Running PIL-1.1.7/setup.py -q bdist_egg --dist-dir /var/folders/XW/XWpClVq7EpSB37BV3zTo+++++TI/-Tmp-/easy_install-krj9oR/PIL-1.1.7/egg-dist-tmp--Pyauy
--- using frameworks at /System/Library/Frameworks
[snipped: compiler warnings]
--------------------------------------------------------------------
PIL 1.1.7 SETUP SUMMARY
--------------------------------------------------------------------
version       1.1.7
platform      darwin 2.5.1 (r251:54863, Sep  1 2010, 22:03:14)
              [GCC 4.0.1 (Apple Inc. build 5465)]
--------------------------------------------------------------------
--- TKINTER support available
--- JPEG support available
--- ZLIB (PNG/ZIP) support available
*** FREETYPE2 support not available
*** LITTLECMS support not available
--------------------------------------------------------------------
To add a missing option, make sure you have the required
library, and set the corresponding ROOT variable in the
setup.py script.

To check the build, run the selftest.py script.

zip_safe flag not set; analyzing archive contents...
Image: module references __file__
No eggs found in /var/folders/XW/XWpClVq7EpSB37BV3zTo+++++TI/-Tmp-/easy_install-krj9oR/PIL-1.1.7/egg-dist-tmp--Pyauy (setup script problem?)
```

Again, this looks good, but when I go to run my script:

> Traceback (most recent call last):
> File "checkerboard.py", line 1, in
> import Image, ImageDraw ImportError: No module named Image

When I check what's now under `/usr/local/python/` using `find .`, I get:

> ./lib ./lib/python2.5
> ./lib/python2.5/site-packages
> ./lib/python2.5/site-packages/site.py
> ./lib/python2.5/site-packages/site.pyc

So... nothing module-looking (I'm assuming site.py and site.pyc are metadata or helper scripts). Where did the install go? I note this:

> To check the build, run the
> selftest.py script.

But I don't really know what that is. And I also noticed the "No eggs found" message. Are either of these hints?
2010/12/09
[ "https://Stackoverflow.com/questions/4393830", "https://Stackoverflow.com", "https://Stackoverflow.com/users/87170/" ]
You are using the Apple-supplied Python 2.5 in OS X; it's a framework build and, by default, uses `/Library/Python/2.5/site-packages` as the location for installed packages, not `/usr/local`. Normally you shouldn't need to specify `--prefix` with an OS X framework build. Also beware that the `setuptools` (`easy_install`) supplied by Apple with OS X 10.5 is also rather old as is the version of Python itself. That said, installing `PIL` completely and correctly on OS X especially OS X 10.5 is not particularly simple. Search the archives or elsewhere for tips and/or binary packages. Particularly if you are planning to use other modules like MySQL or Django, my recommendation is to install everything (Python and PIL) using a package manager like [MacPorts](http://www.macports.org/).
Why did you specify `--prefix` in your `easy_install` invocation? Did you try just: ``` sudo easy_install pil ``` If you're only trying to install PIL to the default location, I would think `easy_install` could work out the correct path. (Clearly, `/usr/local/python` isn't it...) **EDIT**: Someone down-voted this answer, maybe because it was too terse . That's what I get for trying to post an answer from my cell phone, I guess. But the gist of it is perfectly valid, IMHO: if you're using `--prefix` to specify a custom install location with `easy_install`, you're kind of 'doing it wrong'. It might be *possible* to make this work, but the `easy_install` documentation has a section on [custom installation locations](http://peak.telecommunity.com/DevCenter/EasyInstall#custom-installation-locations) that doesn't even mention this as a possibility, except as a small tweak to the [virtual python](http://peak.telecommunity.com/DevCenter/EasyInstall#creating-a-virtual-python) option. I'd suggest following the [OS X instructions](http://peak.telecommunity.com/DevCenter/EasyInstall#mac-os-x-user-installation) if you want to install to a custom location on a Mac, `--prefix` just does not seem like the right tool for the job.
2,539
40,373,609
I am currently reading the [Oracle-cx\_Oracle](http://www.oracle.com/technetwork/articles/dsl/python-091105.html) tutorial. There I came across non-pooled connections and DRCP. Basically, I am not a DBA, so I searched with Google but couldn't find anything. Could somebody help me understand what they are and how they differ from each other? Thank you.
2016/11/02
[ "https://Stackoverflow.com/questions/40373609", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1293013/" ]
The error means that you're navigating to a view whose model is declared as typeof `Foo` (by using `@model Foo`), but you actually passed it a model which is typeof `Bar` (note the term *dictionary* is used because a model is passed to the view via a `ViewDataDictionary`). The error can be caused by **Passing the wrong model from a controller method to a view (or partial view)** Common examples include using a query that creates an anonymous object (or collection of anonymous objects) and passing it to the view ```cs var model = db.Foos.Select(x => new { ID = x.ID, Name = x.Name }; return View(model); // passes an anonymous object to a view declared with @model Foo ``` or passing a collection of objects to a view that expect a single object ```cs var model = db.Foos.Where(x => x.ID == id); return View(model); // passes IEnumerable<Foo> to a view declared with @model Foo ``` The error can be easily identified at compile time by explicitly declaring the model type in the controller to match the model in the view rather than using `var`. **Passing the wrong model from a view to a partial view** Given the following model ```cs public class Foo { public Bar MyBar { get; set; } } ``` and a main view declared with `@model Foo` and a partial view declared with `@model Bar`, then ```cs Foo model = db.Foos.Where(x => x.ID == id).Include(x => x.Bar).FirstOrDefault(); return View(model); ``` will return the correct model to the main view. However the exception will be thrown if the view includes ```cs @Html.Partial("_Bar") // or @{ Html.RenderPartial("_Bar"); } ``` By default, the model passed to the partial view is the model declared in the main view and you need to use ```cs @Html.Partial("_Bar", Model.MyBar) // or @{ Html.RenderPartial("_Bar", Model.MyBar); } ``` to pass the instance of `Bar` to the partial view. Note also that if the value of `MyBar` is `null` (has not been initialized), then by default `Foo` will be passed to the partial, in which case, it needs to be ```cs @Html.Partial("_Bar", new Bar()) ``` **Declaring a model in a layout** If a layout file includes a model declaration, then all views that use that layout must declare the same model, or a model that derives from that model. If you want to include the html for a separate model in a Layout, then in the Layout, use `@Html.Action(...)` to call a `[ChildActionOnly]` method initializes that model and returns a partial view for it.
**Passing the model value that is populated from a controller method to a view** ``` public async Task<IActionResult> Index() { //Getting Data from Database var model= await _context.GetData(); //Selecting Populated Data from the Model and passing to view return View(model.Value); } ```
2,540
54,706,513
According to the xgboost documentation (<https://xgboost.readthedocs.io/en/latest/python/python_api.html#module-xgboost.training>) the xgboost returns feature importances:

> **feature\_importances\_**
>
> Feature importances property
>
> **Note**
>
> Feature importance is defined only for tree boosters. Feature importance is only defined when the decision tree model is chosen as base learner
> ((booster=gbtree). It is not defined for other base learner types, such as linear learners (booster=gblinear).
>
> **Returns:** feature\_importances\_
>
> **Return type:** array of shape [n\_features]

However, this does not seem to be the case, as the following toy example shows:

```
import seaborn as sns
import xgboost as xgb

mpg = sns.load_dataset('mpg')

toy = mpg[['mpg', 'cylinders', 'displacement', 'horsepower', 'weight', 'acceleration']]

toy = toy.sample(frac=1)

N = toy.shape[0]

N1 = int(N/2)

toy_train = toy.iloc[:N1, :]
toy_test = toy.iloc[N1:, :]

toy_train_x = toy_train.iloc[:, 1:]
toy_train_y = toy_train.iloc[:, 1]

toy_test_x = toy_test.iloc[:, 1:]
toy_test_y = toy_test.iloc[:, 1]

max_depth = 6
eta = 0.3
subsample = 0.8
colsample_bytree = 0.7
alpha = 0.1

params = {"booster" : 'gbtree' , 'objective' : 'reg:linear' , 'max_depth' : max_depth, 'eta' : eta,\
     'subsample' : subsample, 'colsample_bytree' : colsample_bytree, 'alpha' : alpha}

dtrain_toy = xgb.DMatrix(data = toy_train_x , label = toy_train_y)
dtest_toy = xgb.DMatrix(data = toy_test_x, label = toy_test_y)
watchlist = [(dtest_toy, 'eval'), (dtrain_toy, 'train')]

xg_reg_toy = xgb.train(params = params, dtrain = dtrain_toy, num_boost_round = 1000, evals = watchlist, \
    early_stopping_rounds = 20)

xg_reg_toy.feature_importances_
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-378-248f7887e307> in <module>()
----> 1 xg_reg_toy.feature_importances_

AttributeError: 'Booster' object has no attribute 'feature_importances_'
```
2019/02/15
[ "https://Stackoverflow.com/questions/54706513", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8270077/" ]
If you set:

```
"moment": "^2.22.2"
```

the user will download the newest version compatible with `v2.22.2` (any `2.x.x` at or above it). In this case you will download `v2.24.0`.

If you set:

```
"moment": "2.22.2"
```

the user will download exactly that version.

If you set:

```
"moment": "~2.22.1"
```

the user will download the newest patch release compatible with `v2.22.1` (any `2.22.x` at or above it). In this case you will download `v2.22.2`.

You can rely on the functions of `v2.9.9` being present if and only if the module respects the [semver](https://semver.org/) standard. That is true 99.999% of the time.
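As a concrete sketch, this is how the caret range from above looks inside a package.json dependencies block:

```
{
  "dependencies": {
    "moment": "^2.22.2"
  }
}
```

Swapping `^2.22.2` for `~2.22.2` would pin to the newest 2.22.x instead, and a bare `2.22.2` to exactly that version.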
> can we use any of version 2.x.x functionality( i.e. we can use the new functions provided by 2.9.9 in our app, though we installed 2.22.2 on our computer)

Just to avoid confusion: you will not install version 2.22.2 on your computer. By saying ^2.22.2, npm will look for the highest 2.x.x version and install that. You *will never* install version 2.22.2. You *will* install version 2.24, and when moment updates its package to 2.25.0, you will install that version. So you will always have the latest 2.x.x version installed, and therefore you will get the functions of 2.9.9.

> are we saying that anyone else who uses our code of app can use any 2.x.x version of "moment" package ?

Yes, you can verify this by checking out package-lock.json, which is created by npm and describes the exact dependency tree. <https://docs.npmjs.com/files/package-lock.json>

If your package.json is at version 1.0.0 and you have a 2.22.2 dependency on moment, and you do npm install, you will see in package-lock:

```
{
  "name": "mypackage",
  "version": "1.0.0",
  "lockfileVersion": 1,
  "requires": true,
  "dependencies": {
    "moment": {
      "version": "2.24.0",
      "resolved": "https://registry.npmjs.org/moment/-/moment-2.24.0.tgz",
    }
  }
}
```

So everybody who installs version 1.0.0 of your package will get moment version 2.24.

> why do I need to install "moment.js" again (i.e. update it) once its installed on my computer –

You don't have to. But the common rule is to leave node\_modules out of repositories and only have package.json, so that when you publish your website to, for example, AWS, Azure or DigitalOcean, they will do npm install and therefore install everything every time you publish your website.

**To clarify how the flow of packages usually is**

1. You create a package/module with a specific version
2. I decide to use your package
3. So I will do npm install (to use your package)
4. npm will go through the dependency tree and install versions accordingly.
5. My website works and I am happy
6. In the meanwhile you are changing your code and updating your package.
7. A few months pass and I decide to change my website. So now when I do npm install (because I updated my code), I will get your updates as well.
2,550
50,750,688
In Python I can do:

```
>>> 5 in [2,4,6]
False
>>> 5 in [4,5,6]
True
```

to determine if the given value `5` exists in the list. I want to do the same in `jq`, but there is no `in`. Here is an example with a more realistic data set, and how I can check for 2 values. In my real use case I have to check for a few hundred and don't want to have all of those `or`ed together.

```
jq '.[] | select(.PrivateIpAddress == "172.31.6.209" or .PrivateIpAddress == "172.31.6.229") | .PrivateDnsName' <<EOF
[
  {
    "PrivateDnsName": "ip-172-31-6-209.us-west-2.compute.internal",
    "PrivateIpAddress": "172.31.6.209"
  },
  {
    "PrivateDnsName": "ip-172-31-6-219.us-west-2.compute.internal",
    "PrivateIpAddress": "172.31.6.219"
  },
  {
    "PrivateDnsName": "ip-172-31-6-229.us-west-2.compute.internal",
    "PrivateIpAddress": "172.31.6.229"
  },
  {
    "PrivateDnsName": "ip-172-31-6-239.us-west-2.compute.internal",
    "PrivateIpAddress": "172.31.6.239"
  }
]
EOF
```
2018/06/07
[ "https://Stackoverflow.com/questions/50750688", "https://Stackoverflow.com", "https://Stackoverflow.com/users/117471/" ]
using `,`
---------

I don't know where in <https://stedolan.github.io/jq/manual/v1.5/> this is documented, but the answer is that `jq` does implicit one-to-many and many-to-one munging.

```
jq '.[] | select(.PrivateIpAddress == ("172.31.6.209", "172.31.6.229")) | .PrivateDnsName' <<EOF
[
  {
    "PrivateDnsName": "ip-172-31-6-209.us-west-2.compute.internal",
    "PrivateIpAddress": "172.31.6.209"
  },
  {
    "PrivateDnsName": "ip-172-31-6-219.us-west-2.compute.internal",
    "PrivateIpAddress": "172.31.6.219"
  },
  {
    "PrivateDnsName": "ip-172-31-6-229.us-west-2.compute.internal",
    "PrivateIpAddress": "172.31.6.229"
  },
  {
    "PrivateDnsName": "ip-172-31-6-239.us-west-2.compute.internal",
    "PrivateIpAddress": "172.31.6.239"
  }
]
EOF
```

(the formatting/indenting of code was made to match that of the OP to simplify visual comparison)

The output is:

```
"ip-172-31-6-209.us-west-2.compute.internal"
"ip-172-31-6-229.us-west-2.compute.internal"
```

"Seems like voodoo to me."

using `| IN("a","b","c")`
-------------------------

**Update:** It's been 16 months, and I've finally learned how to use the `IN` function. Here is a demo that will produce the same results as above.

```
cat > filter.jq <<EOF
# Either of these work in jq < v1.5, but I've commented them out since I'm using v1.6
# def IN(s): first( if (s == .) then true else empty end ) // false;
# def IN(s): first(select(s == .)) // false;
.[] | select(.PrivateIpAddress | IN("172.31.6.209","172.31.6.229")) | .PrivateDnsName
EOF

jq -f filter.jq <<EOF
[
  {
    "PrivateDnsName": "ip-172-31-6-209.us-west-2.compute.internal",
    "PrivateIpAddress": "172.31.6.209"
  },
  {
    "PrivateDnsName": "ip-172-31-6-219.us-west-2.compute.internal",
    "PrivateIpAddress": "172.31.6.219"
  },
  {
    "PrivateDnsName": "ip-172-31-6-229.us-west-2.compute.internal",
    "PrivateIpAddress": "172.31.6.229"
  },
  {
    "PrivateDnsName": "ip-172-31-6-239.us-west-2.compute.internal",
    "PrivateIpAddress": "172.31.6.239"
  }
]
EOF
```
> > But, there is no `in`. > > > You could use `index/1`, as documented in the manual. Even better would be to use `IN`, which however was only introduced after the release of jq 1.5. If your jq does not have it, you can use this definition for `IN/1`: ``` # return true or false as . is in the stream s def IN(s): first( if (s == .) then true else empty end ) // false; ``` If you want to check membership in an array, say $a, simply use `IN( $a[] )`.
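With that definition in place (or on jq 1.6+, where `IN` is built in), a sketch of the question's filter using `IN` (here `instances.json` is a hypothetical file holding the JSON from the question):

```
jq '.[] | select(.PrivateIpAddress | IN("172.31.6.209","172.31.6.229")) | .PrivateDnsName' instances.json
```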
2,551
3,221,314
Is there a generic notion of asynchronous programming in python? Could I assign a callback to a function, execute it and return to the main program flow immediately, no matter how long the execution of that function would take?
2010/07/11
[ "https://Stackoverflow.com/questions/3221314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/282307/" ]
Take a look here: [Asynchronous Programming in Python](http://xph.us/2009/12/10/asynchronous-programming-in-python.html) [An Introduction to Asynchronous Programming and Twisted](http://krondo.com/blog/?p=1247) Worth checking out: [asyncio (previously Tulip) has been checked into the Python default branch](https://plus.google.com/103282573189025907018/posts/6gLX8Nhk5WM) ### Edited on 14-Mar-2018 Today Python has [asyncIO — Asynchronous I/O, event loop, coroutines and tasks](https://docs.python.org/3/library/asyncio.html) built in. Description taken from the link above: > > The **asyncIO** module provides infrastructure for writing single-threaded > concurrent code using coroutines, multiplexing I/O access over sockets > and other resources, running network clients and servers, and other > related primitives. Here is a more detailed list of the package > contents: > > > 1. a pluggable event loop with various system-specific implementations; > 2. transport and protocol abstractions (similar to those in Twisted); > 3. concrete support for TCP, UDP, SSL, subprocess pipes, delayed calls, > and others (some may be system-dependent); > 4. a Future class that mimics the one in the concurrent.futures module, but adapted for use with the event loop; > 5. coroutines and tasks based on yield from (PEP 380), to > help write concurrent code in a sequential fashion; > 6. cancellation support for Futures and coroutines; > 7. synchronization primitives for use > between coroutines in a single thread, mimicking those in the > threading module; > 8. an interface for passing work off to a threadpool, > for times when you absolutely, positively have to use a library that > makes blocking I/O calls. > > > Asynchronous programming is more complex > than classical “sequential” programming: see the [Develop with asyncio > page](https://docs.python.org/3/library/asyncio-dev.html#asyncio-dev) which lists common traps and explains how to avoid them. Enable > the debug mode during development to detect common issues. > > > Also worth checking out: [A guide to asynchronous programming in Python with asyncIO](https://medium.freecodecamp.org/a-guide-to-asynchronous-programming-in-python-with-asyncio-232e2afa44f6)
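A minimal asyncio sketch in the spirit of the material linked above (Python 3.7+; `asyncio.sleep` stands in for real non-blocking I/O):

```
import asyncio

async def fetch(tag, delay):
    await asyncio.sleep(delay)  # placeholder for a real I/O call
    return tag

async def main():
    # both coroutines run concurrently on the event loop
    results = await asyncio.gather(fetch("a", 1), fetch("b", 2))
    print(results)  # ['a', 'b'] after ~2 seconds, not 3

asyncio.run(main())
```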
The other respondents are pointing you to Twisted, which is a great and very comprehensive framework but in my opinion it has a very un-pythonic design. Also, AFAICT, you have to use the Twisted main loop, which may be a problem for you if you're already using something else that provides its own loop. Here is a contrived example that would demonstrate using the `threading` module: ``` from threading import Thread def background_stuff(): while True: print "I am doing some stuff" t = Thread(target=background_stuff) t.start() # Continue doing some other stuff now ``` However, in pretty much every useful case, you will want to communicate between threads. You should look into [synchronization primitives](http://en.wikipedia.org/wiki/Synchronization_primitive), and become familiar with the concept of [concurrency](http://en.wikipedia.org/wiki/Concurrency_%28computer_science%29) and the related issues. The `threading` module provides many such primitives for you to use, if you know how to use them.
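For the communication part, `queue.Queue` is one of the simplest of those primitives; a minimal sketch building on the example above:

```
import queue
from threading import Thread

q = queue.Queue()

def background_stuff():
    while True:
        item = q.get()
        if item is None:  # sentinel value tells the worker to stop
            break
        print("I am doing some stuff with", item)

t = Thread(target=background_stuff)
t.start()

q.put("work item")  # hand work to the background thread
q.put(None)         # ask it to stop
t.join()
```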
2,552
59,860,579
I used Postman to get URLs from an API so I can look at certain titles. The response was saved as a .json file. A snippet of my response.json file looks like this:

```
{
 "apiUrl":"https://api.ft.com/example/83example74-3c9b-11ea-a01a-example547046735",
 "title": {
     "title": "Example title example title example title"
           },
"lifecycle": {
     "initialPublishDateTime":"2020-01-21T22:54:57Z",
     "lastPublishDateTime":"2020-01-21T23:38:19Z"
             },
"location":{
    "uri":"https://www.ft.com/exampleurl/83example74-3c9b-11ea-a01a-example547046735"
           },
 "summary": "...",
 # ............(this continues for all different titles I found)
}
```

Since I want to look at the articles, I want to generate a list of all the URLs. I am not interested in the apiUrl, only in the uri. My current Python file looks like this:

```
with open ("My path to file/response.json") as file:
    for line in file:
        urls = re.findall('https://(?:[-\www.]|(?:%[\da-fA-F]{2}))+', line)
        print(urls)
```

This gives me the following output: `['https://api.ft.com', 'https://www.ft.com', 'https://api.ft.com', 'https://www.ft.com',........`

However, I want to be able to see the entire URL for www.ft.com (so not the api.ft.com URLs, since I'm not interested in those). For example, I want my program to extract something like: <https://www.ft.com/thisisanexampleurl/83example74-3c9b-11ea-a01a-example547046735>

I want the program to do this for the entire response file. Does anyone know a way to do this? All help would be appreciated.

Raymond
2020/01/22
[ "https://Stackoverflow.com/questions/59860579", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11197012/" ]
If you are using the Materialize CSS framework, make sure you initialize the select again after appending new options. This worked for me:

```
$.each(jsonArray , (key , value)=>{
    var option = new Option(value.name , value.id)
    $('#subcategory').append(option)
})
$('select').formSelect();
```
Try This : ``` function PopulateDropDown(jsonArray) { if (jsonArray != null && jsonArray.length > 0) { $("#subcategory").removeAttr("disabled"); $.each(jsonArray, function () { $("#subcategory").append($("<option></option>").val(this['id']).html(this['name'])); }); } } ```
2,562
49,091,870
I want a model with 5 choices, but I cannot enforce them and display the display value in the template. I am using CharField(choices=..) instead of ChoiceField or TypedChoiceField as in the [docs](https://docs.djangoproject.com/en/dev/ref/models/instances/#django.db.models.Model.get_FOO_display). I tried the solutions [here](https://stackoverflow.com/questions/1105638/django-templates-verbose-version-of-a-choice) but they don't work for me (see below).

model.py:

```
class Language(models.Model):
    language = models.CharField(max_length=20,blank=False)
    ILR_scale = (
        (5, 'Native'),
        (4, 'Full professional proficiency'),
        (3, 'Professional working proficiency'),
        (2, 'Limited professional proficiency'),
        (1, 'Elementary professional proficiency')
    )
    level = models.CharField(help_text='Choice between 1 and 5', default=5, max_length=25, choices=ILR_scale)

    def level_verbose(self):
        return dict(Language.ILR_scale)[self.level]

    class Meta:
        ordering = ['level','id']

    def __unicode__(self):
        return ''.join([self.language, '-', self.level])
```

view.py

```
..
def index(request):
    language = Language.objects.all()
..
```

mytemplate.html

```
<div class="subheading strong-underlined mb-3 my-3">
    Languages
</div>
{% regroup language|dictsortreversed:"level" by level as level_list %}
<ul>
    {% for lan_list in level_list %}
    <li>
        {% for lan in lan_list.list %}
        <strong>{{ lan.language }}</strong>: {{ lan.level_verbose }}{%if not forloop.last%},{%endif%}
        {% endfor %}
    </li>
    {% endfor %}
</ul>
```

From shell:

```
python3 manage.py shell
from resume.models import Language
l1=Language.objects.create(language='English',level=4)
l1.save()
l1.get_level_display() #This is good
Out[20]: 'Full professional proficiency'
```

As soon as I create a Language instance from the shell I cannot load the site. It fails at line 0 of the template with Exception Type: KeyError, Exception Value: '4', Exception Location: /models.py in level\_verbose, line 175 (which is the return line of the level\_verbose method).

Also, I was expecting a validation error here from the shell:

```
l1.level='asdasd'
l1.save() #Why can I save this instance with this level?
```

And I can also save as shown above when using ChoiceField, meaning that I do not understand what that field is used for. How do I force instances to take field values within choices, and display the display value in templates?
2018/03/04
[ "https://Stackoverflow.com/questions/49091870", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3592827/" ]
Well, this is a common issue; I ran into it too when I started with Django. So first let's look at the Django feature that lets you do this (note: your choice keys are going to be stored as integers, so you should use `models.IntegerField` instead of `models.CharField`):

* [get\_FOO\_display()](https://docs.djangoproject.com/en/2.0/ref/models/instances/#django.db.models.Model.get_FOO_display) : you are very close with this solution. As you can see in the documentation, `FOO` is the field name of your model; in your case it is `level`, so when you want to access the corresponding choice value in the shell or in a view you can call the method on a model instance, as you have already mentioned:

```
`l1.get_level_display()`
```

but when you want to access it in a template file you need to write it like below:

```
{{ l1.get_level_display }}
```

* Now let's look at your method `level_verbose()`. If you look again, your model is a class and `level_verbose()` is the method you have created, so you can access `self.ILR_scale` directly, just as you have used `self.level`. The main catch is that the dictionary you create from ILR\_scale has integer keys `(i.e. 1, 2, 3, 4, 5)`, but you have used a `CharField()` to store the level values, which returns string values `(i.e. '1', '2', '3', '4' or '5')`, and in a Python dictionary the keys 1 and '1' are different: one is an integer and the other is a string. So you may change your model field to `models.IntegerField()`, or you can access the keys like

```
dict(self.ILR_scale)[int(self.level)]
```
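A minimal sketch of the `IntegerField` variant of the question's model, per the first suggestion above:

```
class Language(models.Model):
    language = models.CharField(max_length=20, blank=False)
    ILR_scale = (
        (5, 'Native'),
        (4, 'Full professional proficiency'),
        (3, 'Professional working proficiency'),
        (2, 'Limited professional proficiency'),
        (1, 'Elementary professional proficiency')
    )
    # integer choice keys now match the integer values stored in the column,
    # so both get_level_display() and dict(ILR_scale)[self.level] work
    level = models.IntegerField(help_text='Choice between 1 and 5',
                                default=5, choices=ILR_scale)
```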
You can also use `models.CharField` but you have to set field option `choices` to your tuples. For exapmle: ``` FRESHMAN = 'FR' SOPHOMORE = 'SO' JUNIOR = 'JR' SENIOR = 'SR' LEVELS = ( (FRESHMAN, 'Freshman'), (SOPHOMORE, 'Sophomore'), (JUNIOR, 'Junior'), (SENIOR, 'Senior'), ) level = models.CharField( max_length=2, choices=LEVELS, default=FRESHMAN, ) ``` Then in your template you can use [get\_FOO\_display()](https://docs.djangoproject.com/en/2.0/ref/models/instances/#django.db.models.Model.get_FOO_display) for example: `{{l1.get_level_display}}` See more in [docs](https://docs.djangoproject.com/en/2.0/ref/models/fields/)
2,563
53,520,300
I am using the Python bindings for libVLC in an urwid music player I am building. libVLC keeps outputting some errors about converting time and such when pausing and resuming an mp3 file. As far as I can gather from various posts on the VLC mailing list and forums, these errors appear for mp3 files all the time, and as long as the file is playing like it should, one should not worry about them. That would be the end of it, but the errors keep getting written on top of the urwid interface, and that is a problem. How can I either stop libVLC from outputting these non-essential errors, or perhaps simply prevent them from showing on top of the urwid interface?
2018/11/28
[ "https://Stackoverflow.com/questions/53520300", "https://Stackoverflow.com", "https://Stackoverflow.com/users/150033/" ]
This is just a macro, `Privileged_Data`, that does nothing. The compiler will not even see it after the preprocessor pass. It's probably a readability or company-standards decision to tag some variables like this.
A preprocessor macro can be defined without an associated value. When that is the case, the macro is substituted with nothing after preprocessing. So given this: ``` #define Privileged_Data ``` Then this: ``` Privileged_Data static int dVariable ``` Becomes this after preprocessing: ``` static int dVariable ``` So this particular macro has no effect on the program, and was probably put in place for documentation purposes.
2,564
43,714,967
I found (lambda \*\*x: x) is very useful for defining a dict in a succinct way, e.g.

```
xxx = (lambda **x: x)(a=1, b=2, c=3)
```

Is there any pre-defined Python function that does that?
2017/05/01
[ "https://Stackoverflow.com/questions/43714967", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4927088/" ]
The `dict` function/constructor can be used in the same manner. ``` >>> (lambda **x: x)(a=1, b=2, c=3) == dict(a=1, b=2, c=3) True ``` See `help(dict)` for more ways to instantiate `dict`s. You are not limited to just defining them with `{'a': 1, 'b': 2, 'c': 3}`.
Try the `{}` literal dictionary syntax. It is quite succinct. See [5.5. *Dictionaries*](https://docs.python.org/3/tutorial/datastructures.html#dictionaries) in the **Data Structures tutorial**. ``` >>> xxx = {'a': 1, 'b': 2, 'c': 3} >>> xxx {'a': 1, 'b': 2, 'c': 3} ```
2,565
48,103,343
I was a little surprised to find that:

```
# fast_ops_c.pyx
cimport cython
cimport numpy as np

@cython.boundscheck(False) # turn off bounds-checking for entire function
@cython.wraparound(False)  # turn off negative index wrapping for entire function
@cython.nonecheck(False)
def c_iseq_f1(np.ndarray[np.double_t, ndim=1, cast=False] x, double val):
    # Test (x==val) except gives NaN where x is NaN
    cdef np.ndarray[np.double_t, ndim=1] result = np.empty_like(x)
    cdef size_t i = 0
    cdef double _x = 0
    for i in range(len(x)):
        _x = x[i]
        result[i] = (_x-_x) + (_x==val)
    return result
```

is orders of magnitude faster than:

```
@cython.boundscheck(False) # turn off bounds-checking for entire function
@cython.wraparound(False)  # turn off negative index wrapping for entire function
@cython.nonecheck(False)
def c_iseq_f2(np.ndarray[np.double_t, ndim=1, cast=False] x, double val):
    cdef np.ndarray[np.double_t, ndim=1] result = np.empty_like(x)
    cdef size_t i = 0
    cdef double _x = 0
    for _x in x: # Iterate over elements
        result[i] = (_x-_x) + (_x==val)
    return result
```

(for large arrays). I'm using the following to test the performance:

```
# fast_ops.py
try:
    import pyximport
    pyximport.install(setup_args={"include_dirs": np.get_include()}, reload_support=True)
except Exception:
    pass

from fast_ops_c import *
import math
import numpy as np

NAN = float("nan")

import unittest
class FastOpsTest(unittest.TestCase):

    def test_eq_speed(self):
        from timeit import timeit
        a = np.random.random(500000)
        a[1] = 2.
        a[2] = NAN

        a2 = c_iseq_f(a, 2.)

        def f1(): c_iseq_f2(a, 2.)
        def f2(): c_iseq_f1(a, 2.)

        # warm up
        [f1() for x in range(20)]
        [f2() for x in range(20)]

        n=1000

        dur = timeit(f1, number=n)
        print dur, "DUR1 s/iter", dur/n

        dur = timeit(f2, number=n)
        print dur, "DUR2 s/iter", dur/n
        dur = timeit(f1, number=n)

        print dur, "DUR1 s/iter", dur/n
        assert dur/n <= 0.005

        dur = timeit(f2, number=n)
        print dur, "DUR2 s/iter", dur/n

        print a2[:10]
        assert a2[0] == 0.
        assert a2[1] == 1.
        assert math.isnan(a2[2])
```

I'm guessing that `for _x in x` is interpreted as executing the Python iterator over x, while `for i in range(n):` is interpreted as a C for loop, and `x[i]` is interpreted as C's `x[i]` array indexing. However, I'm kinda guessing and trying to follow by example. In its [working with numpy](http://docs.cython.org/en/latest/src/tutorial/numpy.html) (and [here](http://docs.cython.org/en/latest/src/userguide/numpy_tutorial.html)) docs, Cython is a little quiet on what's optimized with respect to numpy, and what's not. Is there a guide to what *is* optimized?

---

Similarly, the following, which assumes contiguous array memory, is considerably faster than either of the above.

```
@cython.boundscheck(False) # turn off bounds-checking for entire function
@cython.wraparound(False)  # turn off negative index wrapping for entire function
def c_iseq_f(np.ndarray[np.double_t, ndim=1, cast=False, mode="c"] x not None, double val):
    cdef np.ndarray[np.double_t, ndim=1] result = np.empty_like(x)
    cdef size_t i = 0

    cdef double* _xp = &x[0]
    cdef double* _resultp = &result[0]
    for i in range(len(x)):
        _x = _xp[i]
        _resultp[i] = (_x-_x) + (_x==val)
    return result
```
2018/01/04
[ "https://Stackoverflow.com/questions/48103343", "https://Stackoverflow.com", "https://Stackoverflow.com/users/48956/" ]
Current versions of Cython (at least >=0.29.20) produce similarly performant C code for both variants. The answer below holds for older Cython versions.

---

The reason for this surprise is that `x[i]` is more subtle than it looks. Let's take a look at the following cython function:

```
%%cython
def cy_sum(x):
    cdef double res=0.0
    cdef int i
    for i in range(len(x)):
        res+=x[i]
    return res
```

And measure its performance:

```
import numpy as np
a=np.random.random((2000,))
%timeit cy_sum(a)

>>>1000 loops, best of 3: 542 µs per loop
```

This is pretty slow! If you look into the produced C code, you will see that `x[i]` uses the `__getitem__()` functionality, which takes a `C-double`, creates a python-Float object, casts it back to a `C-double` and destroys the temporary python-float. That is quite a lot of overhead for a single `double` addition!

Let's make it clear to cython that `x` is a [typed memory view](http://cython.readthedocs.io/en/latest/src/userguide/memoryviews.html):

```
%%cython
def cy_sum_memview(double[::1] x):
    cdef double res=0.0
    cdef int i
    for i in range(len(x)):
        res+=x[i]
    return res
```

with a much better performance:

```
%timeit cy_sum_memview(a)
>>> 100000 loops, best of 3: 4.21 µs per loop
```

So what happened? Because cython knows that `x` is a typed memory view (I would rather use a typed memory view than a numpy array in the signature of cython functions), it no longer must use the python functionality `__getitem__` but can access the `C-double` value directly, without the need to create an intermediate python-float.

But back to the numpy arrays. Numpy arrays can be interpreted by cython as typed memory views, and thus `x[i]` can be translated into direct/fast access to the underlying memory.

So what about for-range?

```
%%cython
cimport array
def cy_sum_memview_for(double[::1] x):
    cdef double res=0.0
    cdef double x_
    for x_ in x:
        res+=x_
    return res

%timeit cy_sum_memview_for(a)
>>> 1000 loops, best of 3: 736 µs per loop
```

It is slow again. So cython seems not to be clever enough to replace the for-range with direct/fast access, and once again uses python functionality with the resulting overhead.

I must confess I'm as surprised as you are, because at first sight there is no good reason why cython should not be able to use fast access in the case of the for-range. But this is how it is...

---

I'm not sure that this is the reason, but the situation is not that simple with two-dimensional arrays. Consider the following code:

```
import numpy as np
a=np.zeros((5,1), dtype=int)
for d in a:
    print(int(d)+1)
```

This code works, because `d` is a 1-length array and thus can be converted to a Python scalar via `int(d)`.

However,

```
for d in a.T:
    print(int(d)+1)
```

throws, because now `d`'s length is `5` and thus it cannot be converted to a Python scalar.

Because we want this code to have the same behavior as pure Python when cythonized, and it can be determined only at runtime whether the conversion to int is OK or not, we have to use a Python object for `d` first, and only then can we access the content of this array.
Cython can translate `range(len(x))` loops into almost pure C code:

```
for i in range(len(x)):
```

Generated code:

```
  __pyx_t_6 = PyObject_Length(((PyObject *)__pyx_v_x)); if (unlikely(__pyx_t_6 == -1)) __PYX_ERR(0, 17, __pyx_L1_error)
  for (__pyx_t_7 = 0; __pyx_t_7 < __pyx_t_6; __pyx_t_7+=1) {
    __pyx_v_i = __pyx_t_7;
```

But this remains Python:

```
for _x in x:        # Iterate over elements
```

Generated code:

```
  if (likely(PyList_CheckExact(((PyObject *)__pyx_v_x))) || PyTuple_CheckExact(((PyObject *)__pyx_v_x))) {
    __pyx_t_1 = ((PyObject *)__pyx_v_x); __Pyx_INCREF(__pyx_t_1); __pyx_t_6 = 0;
    __pyx_t_7 = NULL;
  } else {
    __pyx_t_6 = -1; __pyx_t_1 = PyObject_GetIter(((PyObject *)__pyx_v_x)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 12, __pyx_L1_error)
    __Pyx_GOTREF(__pyx_t_1);
    __pyx_t_7 = Py_TYPE(__pyx_t_1)->tp_iternext; if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 12, __pyx_L1_error)
  }
  for (;;) {
    if (likely(!__pyx_t_7)) {
      if (likely(PyList_CheckExact(__pyx_t_1))) {
        if (__pyx_t_6 >= PyList_GET_SIZE(__pyx_t_1)) break;
        #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
        __pyx_t_3 = PyList_GET_ITEM(__pyx_t_1, __pyx_t_6); __Pyx_INCREF(__pyx_t_3); __pyx_t_6++; if (unlikely(0 < 0)) __PYX_ERR(0, 12, __pyx_L1_error)
        #else
        __pyx_t_3 = PySequence_ITEM(__pyx_t_1, __pyx_t_6); __pyx_t_6++; if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 12, __pyx_L1_error)
        __Pyx_GOTREF(__pyx_t_3);
        #endif
      } else {
        if (__pyx_t_6 >= PyTuple_GET_SIZE(__pyx_t_1)) break;
        #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
        __pyx_t_3 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_6); __Pyx_INCREF(__pyx_t_3); __pyx_t_6++; if (unlikely(0 < 0)) __PYX_ERR(0, 12, __pyx_L1_error)
        #else
        __pyx_t_3 = PySequence_ITEM(__pyx_t_1, __pyx_t_6); __pyx_t_6++; if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 12, __pyx_L1_error)
        __Pyx_GOTREF(__pyx_t_3);
        #endif
      }
    } else {
      __pyx_t_3 = __pyx_t_7(__pyx_t_1);
      if (unlikely(!__pyx_t_3)) {
        PyObject* exc_type = PyErr_Occurred();
        if (exc_type) {
          if (likely(exc_type == PyExc_StopIteration || PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();
          else __PYX_ERR(0, 12, __pyx_L1_error)
        }
        break;
      }
      __Pyx_GOTREF(__pyx_t_3);
    }
    __pyx_t_8 = __pyx_PyFloat_AsDouble(__pyx_t_3); if (unlikely((__pyx_t_8 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 12, __pyx_L1_error)
    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
    __pyx_v__x = __pyx_t_8;
/* … */
  }
  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
```

Generating this output is typically the best way to find out.
2,566
51,584,994
In python if my list is ``` TheTextImage = [["111000"],["222999"]] ``` How would one loop through this list creating a new one of ``` NewTextImage = [["000111"],["999222"]] ``` Can use `[:]` but not `[::-1]`, and cannot use `reverse()`
2018/07/29
[ "https://Stackoverflow.com/questions/51584994", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10092065/" ]
You may not use `[::-1]`, but you can multiply each range index by -1.

```
t = [["111000"],["222999"]]

def rev(x):
    return "".join(x[(i+1)*-1] for i in range(len(x)))

>>> [[rev(x) for x in z] for z in t]
[['000111'], ['999222']]
```

---

If you may use the `step` arg in `range`, you can do AChampion's suggestion:

```
def rev(x):
    return ''.join(x[i-1] for i in range(0, -len(x), -1))
```
If you can't use any standard functionality such as `reversed` or `[::-1]`, you can use `collections.deque` and `deque.appendleft` in a loop. Then use a list comprehension to apply the logic to multiple items. ``` from collections import deque L = [["111000"], ["222999"]] def reverser(x): out = deque() for i in x: out.appendleft(i) return ''.join(out) res = [[reverser(x[0])] for x in L] print(res) [['000111'], ['999222']] ``` Note you *could* use a list, but appending to the beginning of a list is inefficient.
2,567
8,377,157
I want to find the fastest way to do the job of `switch` in C. I'm writing some Python code to replace C code, and it's all working fine except for a bottleneck. This code is used in a tight loop, so it really is quite crucial that I get the best performance.

**Optimisation Attempt 1:**

First attempt: previous questions such as [this](https://stackoverflow.com/questions/1429505/python-does-python-have-an-equivalent-to-switch) suggest using hash tables for lookups. This ended up being incredibly slow.

**Optimisation Attempt 2**

Another optimisation I have made is to create a run of `if ... return` statements, which gives me a 13% speed boost. It's still disappointingly slow.

**Optimisation Attempt 3**

I created an `array.array` of all possible input values and did an index lookup. This results in an over-all speed up of 43%, which is respectable.

I'm running over an `array.array` using `map` and passing a transform function to it. This function is doing the lookup. My switch is working on short integers (it's a typed array). If this were GCC C, the compiler would create a jump table. It's frustrating to know that Python is either hashing my value to look up a table entry or, in the case of if, performing lots of comparisons. I know from profiling it that the slow functions are precisely the ones that are doing the look-up.

What is the absolute fastest way of mapping one integer to another, mapped over an `array.array` if relevant? Anything faster than the above?

EDIT
----

Although it makes me look like an idiot for only just realising, I will say it anyway! Remember that running your code in a profiler slows your code down a *lot*. In my case, 19 times slower. Suddenly my bottleneck isn't so bad! Thanks very much everyone for all your answers.

The question is still valid. I'll leave the question open for a bit because there may be some interesting answers.

With profiler, for my test set of data:

```
real    0m37.309s
user    0m33.263s
sys     0m4.002s
```

without:

```
real    0m2.595s
user    0m2.526s
sys     0m0.028s
```
2011/12/04
[ "https://Stackoverflow.com/questions/8377157", "https://Stackoverflow.com", "https://Stackoverflow.com/users/148423/" ]
I think others are right to suggest numpy or pure c; but for pure python, here are some timings, for what they're worth. Based on these, I'm a bit surprised that `array.array` performed so much better than a `dict`. Are you creating these tables on the fly inside the loop? Or have I misunderstood something else about your question? In any case, this suggests that a `list` is actually the best way to go. ``` >>> def make_lookup_func(table): ... def lookup(val, t=table): ... return t[val] ... return lookup ... >>> lookup_tuple = make_lookup_func(tuple(range(10))) >>> lookup_list = make_lookup_func(list(range(10))) >>> lookup_array = make_lookup_func(array.array('i', range(10))) >>> lookup_dict = make_lookup_func(dict(zip(range(10), range(10)))) >>> %timeit lookup_tuple(9) 10000000 loops, best of 3: 177 ns per loop >>> %timeit lookup_list(9) 10000000 loops, best of 3: 158 ns per loop >>> %timeit lookup_array(9) 10000000 loops, best of 3: 181 ns per loop >>> %timeit lookup_dict(9) 10000000 loops, best of 3: 166 ns per loop ``` Scaling behavior: ``` >>> lookup_tuple = make_lookup_func(tuple(range(10000))) >>> lookup_list = make_lookup_func(list(range(10000))) >>> lookup_array = make_lookup_func(array.array('i', range(10000))) >>> lookup_dict = make_lookup_func(dict(zip(range(10000), range(10000)))) >>> %timeit lookup_tuple(9000) 10000000 loops, best of 3: 177 ns per loop >>> %timeit lookup_list(9000) 10000000 loops, best of 3: 158 ns per loop >>> %timeit lookup_array(9000) 10000000 loops, best of 3: 186 ns per loop >>> %timeit lookup_dict(9000) 10000000 loops, best of 3: 195 ns per loop ```
Branch logic in general can be painfully slow in python when used in this type of application, and you basically struck on one of the better ways of doing this for a tight inner loop where you are converting between integers.

One more thing to experiment with: working with [np.array](http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#arrays-indexing) or using [Cython](http://cython.org/) (or just straight C) for the tight loop. These require some additional setup (and possibly writing the inner loop in C), but can also give tremendous speedups for this type of application and let you take advantage of a good C optimizer.

Something that can go either way and is more of a micro-optimization is that you could try using a list comprehension instead of a map, or make sure you aren't using a lambda in your map. Not using a lambda in a `map()` is actually a pretty big one, while the difference between a list comprehension and a map tends to be relatively small otherwise.
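For the np.array suggestion above, a minimal sketch of a vectorized lookup table (the table contents here are made up for illustration):

```python
import numpy as np

# hypothetical mapping: index = input short int, value = mapped output
lut = np.array([0, 10, 20, 30], dtype=np.int16)
values = np.array([2, 0, 3, 3, 1], dtype=np.int16)
mapped = lut[values]   # fancy indexing does the whole "switch" in C
print(mapped)          # -> [20  0 30 30 10]
```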
2,571
19,174,634
**I found a better error message (see below).**

I have a model called App in core/models.py. The error occurs when trying to access a specific app object in django admin. Even on an empty database (after syncdb) with a single app object. Seems core\_app\_history is something django generated. Any help is appreciated.

Here is the exception:

```
NoReverseMatch at /admin/core/app/251/
Reverse for 'core_app_history' with arguments '(u'',)' and keyword arguments '{}' not found.

Request Method:     GET
Request URL:    http://weblocal:8001/admin/core/app/251/
Django Version:     1.5.4
Exception Type:     NoReverseMatch
Exception Value:    Reverse for 'core_app_history' with arguments '(u'',)' and keyword arguments '{}' not found.
Exception Location:     /opt/virtenvs/django_slice/local/lib/python2.7/site-packages/django/template/defaulttags.py in render, line 426
Python Executable:  /opt/virtenvs/django_slice/bin/python
Python Version:     2.7.3
Python Path:
['/opt/src/slicephone/cloud',
 '/opt/virtenvs/django_slice/local/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg',
 '/opt/virtenvs/django_slice/local/lib/python2.7/site-packages/pip-1.2.1-py2.7.egg',
 '/opt/virtenvs/django_slice/local/lib/python2.7/site-packages/distribute-0.6.35-py2.7.egg',
 '/opt/virtenvs/django_slice/lib/python2.7',
 '/opt/virtenvs/django_slice/lib/python2.7/plat-linux2',
 '/opt/virtenvs/django_slice/lib/python2.7/lib-tk',
 '/opt/virtenvs/django_slice/lib/python2.7/lib-old',
 '/opt/virtenvs/django_slice/lib/python2.7/lib-dynload',
 '/usr/lib/python2.7',
 '/usr/lib/python2.7/plat-linux2',
 '/usr/lib/python2.7/lib-tk',
 '/opt/virtenvs/django_slice/local/lib/python2.7/site-packages']
Server time:    Fri, 11 Oct 2013 22:06:43 +0000
```

And it occurs in /django/contrib/admin/templates/admin/change\_form.html

```
32 <li><a href="{% url opts|admin_urlname:'history' original.pk|admin_urlquote %}" class="historylink">{% trans "History" %}</a></li>
```

Here are the (possibly) relevant URLs:

```
/admin/core/app/ HANDLER: changelist_view
/admin/core/app/add/ HANDLER: add_view
/admin/core/app/(.+)/history/ HANDLER: history_view
/admin/core/app/(.+)/delete/ HANDLER: delete_view
/admin/core/app/(.+)/ HANDLER: change_view
```
2013/10/04
[ "https://Stackoverflow.com/questions/19174634", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1252307/" ]
I think this is not valid JSON. JSON should look like:

```
[
    {
        "id": 1,
        "src": "src1",
        "name": "name1"
    },
    {
        "id": 2,
        "src": "src2",
        "name": "name2"
    },
    {
        "id": 3,
        "src": "src3",
        "name": "name3"
    },
    {
        "id": 4,
        "src": "src4",
        "name": "name4"
    }
]
```

Validate your JSON @ <http://jsonlint.com/>
Your outer object in json does not have a key under which the internal list is stored. Also, your strings in json should be quoted: `src1`, `name1` are unquoted.
2,572
54,761,993
Passing the file as an argument and storing it to an object reference seems very straightforward and easy to understand for the open() function; however, the read() function does not take the file in as an argument, and uses the format file.read() instead. Why does the read function not take in the file as an argument, such as read(in\_file), and why is it not included in the Python Standard Library of built-in functions?

I've checked the list of built-in functions in the standard library: <https://docs.python.org/3/library/functions.html#open>

```
# calls the open function passing from_file argument and storing to in_file object reference
in_file = open(from_file)

# why is this not written as read(in_file) instead?
in_data = in_file.read()
```
2019/02/19
[ "https://Stackoverflow.com/questions/54761993", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11000101/" ]
It's not included there because it's not a *function*, it's a *method* of the object that's exposing a file-oriented API, which is, in this case, `in_file`.
Because `in_file = open(from_file)` gives you a file object reference, when you do `in_file.read()` you are calling `read` on that object itself. The file object is passed implicitly as `self`, the same way any method receives the instance it is called on, which in this case is the file object.
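A small sketch illustrating the point from both answers ("example.txt" is just a stand-in file name):

```python
in_file = open("example.txt")
text = in_file.read()                 # method call: in_file is passed as self
in_file.seek(0)                       # rewind so the second read sees the same data
same = type(in_file).read(in_file)    # equivalent unbound call, self passed explicitly
assert text == same
```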
2,573
49,314,270
I'm stuck creating a dbf file in python3 with the dbf lib. I tried this -

```
import dbf

Tbl = dbf.Table( 'sample.dbf', 'ID N(6,0); FCODE C(10)')
Tbl.open('read-write')
Tbl.append()
with Tbl.last_record as rec:
    rec.ID = 5
    rec.FCODE = 'GA24850000'
```

and got this error:

```
Traceback (most recent call last):
  File "c:\Users\operator\Desktop\2.py", line 3, in <module>
    Tbl.open('read-write')
  File "C:\Users\operator\AppData\Local\Programs\Python\Python36-32\lib\site-packages\dbf\__init__.py", line 5778, in open
    raise DbfError("mode for open must be 'read-write' or 'read-only', not %r" % mode)
dbf.DbfError: mode for open must be 'read-write' or 'read-only', not 'read-write'
```

If I remove 'read-write', then:

```
Traceback (most recent call last):
  File "c:\Users\operator\Desktop\2.py", line 4, in <module>
    Tbl.append()
  File "C:\Users\operator\AppData\Local\Programs\Python\Python36-32\lib\site-packages\dbf\__init__.py", line 5492, in append
    raise DbfError('%s not in read/write mode, unable to append records' % meta.filename)
dbf.DbfError: sample.dbf not in read/write mode, unable to append records
```

What am I doing wrong? If I don't try to append, I just get a .dbf with the right columns, so the dbf library works.
2018/03/16
[ "https://Stackoverflow.com/questions/49314270", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9394255/" ]
I had the same error. In the older versions of the dbf module, I was able to write dbf files by opening them just with `Tbl.open()`. However, with the new version (dbf.0.97), I have to open the files with `Tbl.open(mode=dbf.READ_WRITE)` in order to be able to write to them.
here's an append example: ``` table = dbf.Table('sample.dbf', 'cod N(1,0); name C(30)') table.open(mode=dbf.READ_WRITE) row_tuple = (1, 'Name') table.append(row_tuple) ```
2,574
14,633,952
I'm new to Elastic Search and to the non-SQL paradigm. I've been following the ES tutorial, but there is one thing I couldn't put to work.

In the following code (I'm using [PyES](http://packages.python.org/pyes/) to interact with ES) I create a single document with a nested field (subjects), which contains another nested field (concepts).

```
from pyes import *

conn = ES('127.0.0.1:9200') # Use HTTP

# Delete and Create a new index.
conn.indices.delete_index("documents-index")
conn.create_index("documents-index")

# Create a single document.
document = {
    "docid": 123456789,
    "title": "This is the doc title.",
    "description": "This is the doc description.",
    "datepublished": 2005,
    "author": ["Joe", "John", "Charles"],
    "subjects": [{
        "subjectname": 'subject1',
        "subjectid": [210, 311, 1012, 784, 568],
        "subjectkey": 2,
        "concepts": [
            {"name": "concept1", "score": 75},
            {"name": "concept2", "score": 55}
        ]
    },
    {
        "subjectname": 'subject2',
        "subjectid": [111, 300, 141, 457, 748],
        "subjectkey": 0,
        "concepts": [
            {"name": "concept3", "score": 88},
            {"name": "concept4", "score": 55},
            {"name": "concept5", "score": 66}
        ]
    }],
}

# Define the nested elements.
mapping1 = {
    'subjects': {
        'type': 'nested'
    }
}

mapping2 = {
    'concepts': {
        'type': 'nested'
    }
}

conn.put_mapping("document", {'properties': mapping1}, ["documents-index"])
conn.put_mapping("subjects", {'properties': mapping2}, ["documents-index"])

# Insert document in 'documents-index' index.
conn.index(document, "documents-index", "document", 1)

# Refresh connection to make queries.
conn.refresh()
```

I'm able to query the *subjects* nested field:

```
query1 = {
    "nested": {
        "path": "subjects",
        "score_mode": "avg",
        "query": {
            "bool": {
                "must": [
                    {
                        "text": {"subjects.subjectname": "subject1"}
                    },
                    {
                        "range": {"subjects.subjectkey": {"gt": 1}}
                    }
                ]
            }
        }
    }
}

results = conn.search(query=query1)
for r in results:
    print r # as expected, it returns the entire document.
```

but I can't figure out how to query based on the *concepts* nested field.

ES [documentation](http://www.elasticsearch.org/guide/reference/query-dsl/nested-query.html) states that

> Multi level nesting is automatically supported, and detected, resulting in an inner nested query to automatically match the relevant nesting level (and not root) if it exists within another nested query.

So, I tried to build a query with the following format:

```
query2 = {
    "nested": {
        "path": "concepts",
        "score_mode": "avg",
        "query": {
            "bool": {
                "must": [
                    {
                        "text": {"concepts.name": "concept1"}
                    },
                    {
                        "range": {"concepts.score": {"gt": 0}}
                    }
                ]
            }
        }
    }
}
```

which returned 0 results. I can't figure out what is missing and I haven't found any example with queries based on two levels of nesting.
2013/01/31
[ "https://Stackoverflow.com/questions/14633952", "https://Stackoverflow.com", "https://Stackoverflow.com/users/759733/" ]
Ok, after trying a ton of combinations, I finally got it using the following query:

```
query3 = {
    "nested": {
        "path": "subjects",
        "score_mode": "avg",
        "query": {
            "bool": {
                "must": [
                    {
                        "text": {"subjects.concepts.name": "concept1"}
                    }
                ]
            }
        }
    }
}
```

So, the nested **path** attribute (*subjects*) is always the same, no matter the nested attribute level, and in the query definition I used the attribute's full path (*subjects.concepts.name*).
Shot in the dark since I haven't tried this personally, but have you tried the fully qualified path to Concepts? ``` query2 = { "nested": { "path": "subjects.concepts", "score_mode": "avg", "query": { "bool": { "must": [ { "text": {"subjects.concepts.name": "concept1"} }, { "range": {"subjects.concepts.score": {"gt": 0}} } ] } } } } ```
2,575
22,444,378
I am looking for a simple solution to display thumbnails using wxPython. This is not about creating the thumbnails. I have a directory of thumbnails and want to display them on the screen. I am purposely not using terms like (Panel, Frame, Window, ScrolledWindow) because I am open to various solutions. Also note I have found multiple examples for displaying a single image, so referencing any such solution will not help me. The solution must be for displaying multiple images at the same time in wx. It seems that what I want to do is being done in ThumbnailCtrl, but Andrea's code is complex and I cannot find the portion that does the display to screen. I did find a simple solution in Mark Lutz's Programming Python book, but while his viewer\_thumbs.py example definitely has the simplicity that I am looking for, it was done using Tkinter. So please any wx solution will be greatly appreciated. EDIT: I am adding a link to one place where Mark Lutz's working Tkinter code can be found. Can anyone think of a wx equivalent? <http://codeidol.com/community/python/viewing-and-processing-images-with-pil/17565/#part-33>
2014/03/16
[ "https://Stackoverflow.com/questions/22444378", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3381864/" ]
I would recommend using the ThumbNailCtrl widget: <http://wxpython.org/Phoenix/docs/html/lib.agw.thumbnailctrl.html>. There is a good example in the wxPython demo. Or you could use this one from the documentation. Note that the ThumbNailCtrl requires the Python Imaging Library to be installed. ``` import os import wx import wx.lib.agw.thumbnailctrl as TC class MyFrame(wx.Frame): def __init__(self, parent): wx.Frame.__init__(self, parent, -1, "ThumbnailCtrl Demo") panel = wx.Panel(self) sizer = wx.BoxSizer(wx.VERTICAL) thumbnail = TC.ThumbnailCtrl(panel, imagehandler=TC.NativeImageHandler) sizer.Add(thumbnail, 1, wx.EXPAND | wx.ALL, 10) thumbnail.ShowDir(os.getcwd()) panel.SetSizer(sizer) # our normal wxApp-derived class, as usual app = wx.App(0) frame = MyFrame(None) app.SetTopWindow(frame) frame.Show() app.MainLoop() ``` Just change the line **thumbnail.ShowDir(os.getcwd())** so that it points at the right folder on your machine. I also wrote up an article for viewing photos here: <http://www.blog.pythonlibrary.org/2010/03/26/creating-a-simple-photo-viewer-with-wxpython/> It doesn't use thumbnails though.
I would just display them as wx.Image inside a frame. <http://www.wxpython.org/docs/api/wx.Image-class.html> From the class: "A platform-independent image class. An image can be created from data, or using wx.Bitmap.ConvertToImage, or loaded from a file in a variety of formats. Functions are available to set and get image bits, so it can be used for basic image manipulation." Seems it should be able to do what you want, unless I'm missing something.
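A minimal sketch of that idea, assuming a couple of thumbnail files exist next to the script (the file names are hypothetical):

```python
import wx

app = wx.App(False)
frame = wx.Frame(None, -1, "Thumbnails")
panel = wx.Panel(frame)
sizer = wx.BoxSizer(wx.HORIZONTAL)
for path in ["thumb1.png", "thumb2.png"]:   # hypothetical thumbnail files
    img = wx.Image(path, wx.BITMAP_TYPE_ANY)          # load from disk
    bmp = wx.StaticBitmap(panel, -1, img.ConvertToBitmap())
    sizer.Add(bmp, 0, wx.ALL, 5)
panel.SetSizer(sizer)
frame.Show()
app.MainLoop()
```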
2,578
39,053,393
I'm using the formula "the product of two numbers is equal to the product of their GCD and LCM". Here's my code:

```
# Uses python3
import sys

def hcf(x, y):
   while(y):
       x, y = y, x % y
   return x

a,b = map(int,sys.stdin.readline().split())
res=int(((a*b)/hcf(a,b)))
print(res)
```

It works great for small numbers. But when I give input as:

> Input: 226553150 1023473145
>
> My output: 46374212988031352
>
> Correct output: 46374212988031350

Can anyone please tell me where I am going wrong?
2016/08/20
[ "https://Stackoverflow.com/questions/39053393", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6032875/" ]
Elaborating on the comments. In Python 3, true division, `/`, converts its arguments to floats. In your example, the true answer of `lcm(226553150, 1023473145)` is `46374212988031350`. By looking at `bin(46374212988031350)` you can verify that this is a 56 bit number. When you compute `226553150*1023473145/5` (5 is the gcd) you get `4.637421298803135e+16`. Documentation suggests that such floats only have 53 bits of precision. Since 53 < 56, you have lost information. Using `//` avoids this. Somewhat counterintuitively, in cases like this it is "true" division which is actually false. By the way, a useful module when dealing with exact calculations involving large integers is [fractions](https://docs.python.org/3/library/fractions.html) (\*): ``` from fractions import gcd def lcm(a,b): return a*b // gcd(a,b) >>> lcm(226553150,1023473145) 46374212988031350 ``` (\*) I just noticed that the documentation on `fractions` says this about its `gcd`: "Deprecated since version 3.5: Use math.gcd() instead", but I decided to keep the reference to `fractions` since it is still good to know about it and you might be using a version prior to 3.5.
You should use a different method to find the **GCD**; that will be the issue:

Use:

```
def hcfnaive(a, b):
    if(b == 0):
        return abs(a)
    else:
        return hcfnaive(b, a % b)
```

You can try one more method:

```
import math

a = 13
b = 5
print((a*b)/math.gcd(a,b))
```
2,581
46,996,102
python is new to me and I'm facing this little, probably for most of you really easy to solve, problem. I am trying for the first time to use a class so I don't have to make so many functions and can just pick one out of the class!! So here is what I have written so far:

```
from tkinter import *
import webbrowser

class web_open3:
    A = "webbrowser.open(www.google.de")

    def open(self):
        self.A = webbrowser.open("www.google.de")

test = web_open3.open()

root = Tk()
b1 = Button(root, text="button", command=test)
b1.pack()
root.mainloop()
```

The error I get:

> Traceback (most recent call last):
> line 11, in
> test = web\_open3.open()
> TypeError: open() missing 1 required positional argument: 'self'

greetings Slake
2017/10/29
[ "https://Stackoverflow.com/questions/46996102", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8839994/" ]
You need to instantiate the class first: variable = web\_open3(). The `__init__` is a magic function that is run when you create an instance of the class. This is to show how to begin writing a class in python.

```
from tkinter import *
import webbrowser

class web_open3:
    def __init__(self):
        self.A = "http://www.google.de"

    def open(self):
        webbrowser.open_new(self.A)

test = web_open3()

root = Tk()
b1 = Button(root, text="button", command=test.open)
b1.pack()
root.mainloop()
```
In programming, a class is an object. What is an object? It's an instance. In order to use your object, you first have to create it. You do that by instantiating it: `web = web_open3()`. Then you can use the `open()` function.

Now, objects may also be static. A static object is an object that you don't instantiate. Any class, independent of being instantiated or not, may have static variables and functions.

Let's take a look at your code:

```
# Classes should be named with CamelCase convention: 'WebOpen3'
class web_open3:
    # This is a static variable. Variables should be named with lowercase letters
    A = "webbrowser.open(www.google.de"

    # This is an instance method
    def open(self):
        # You are accessing a static variable as an instance variable
        self.A = webbrowser.open("www.google.de")

# Here, you try to use an instance method without first initializing your object.
# That raises an error, the one you gave in the description.
test = web_open3.open()
```

Let's now look at a static example:

```
class WebOpen3:
    a = "webbrowser.open(www.google.de"

    @staticmethod
    def open():
        WebOpen3.a = webbrowser.open("www.google.de")

test = WebOpen3.open()
```

and an instance example:

```
class WebOpen3:
    def __init__(self):
        self.a = "webbrowser.open(www.google.de"

    def open(self):
        self.a = webbrowser.open("www.google.de")

web = WebOpen3()
test = web.open()
```

There is still one problem left. When saying `test = web.open()`, or `test = WebOpen3.open()`, you're trying to bind the return value from `open()` to `test`; however, that function doesn't return anything. So, you need to add a return statement to it. Let's use the instance method/function as an example:

```
def open(self):
    self.a = webbrowser.open("www.google.de")
    return self.a
```

or, instead of returning a value, just call the function straight-forward:

```
WebOpen3.open()
```

or

```
web.open()
```

> **Note**: functions belonging to instances are also called methods.
>
> **Note**: `self` refers to an instance of that class.
>
> **Note**: `def __init__(self)` is an instance's initializer. For your case, you call it by using `WebOpen3()`. You will later find more special functions defined as `def __func_name__()`.
>
> **Note**: For more on variables in a class, you should read this: [Static class variables in Python](https://stackoverflow.com/questions/68645/static-class-variables-in-python)

As for the case of your Tkinter window, to get a button in your view, you can use this code:

```
from tkinter import *

app = Tk()
button = Button(app, text='Open in browser')
button.bind("<Button-1>", web.open) # Using 'web.open', or 'WebOpen3.open', both without parenthesis, will send a reference to your function.
button.pack()
app.mainloop()
```
2,582
57,270,642
I have a program that uploads videos via the vimeo api. But every time I click run, the program that runs is not the current one; it's an old program, which I have now deleted and even deleted from the recycle bin. Yet every time I run my vimeo code it runs a completely different program that shouldn't even exist. It's driving me crazy!

I've tried to adjust my settings file, which currently looks like this:

```
{
    "configurations": [
        {
            "name": "Python: Current File",
            "type": "python",
            "request": "launch",
            "program": "${file}",
            "console": "internalConsole"
        }
    ]
}
```
2019/07/30
[ "https://Stackoverflow.com/questions/57270642", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8066094/" ]
I suspect you have a script cached somewhere. To troubleshoot please do the following:

* Restart VScode
* Restart PC (if on windows 10 use `shutdown /r /f /t 000` in cmd to force a full restart and avoid windows fast-boot saving anything.)
* Check what happens if you run the script manually via `python *your script*`.

Comment if this doesn't help and add more info such as your OS and how you are running your script.
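One more quick check, a small sketch you can paste at the top of the script you believe is running, to see exactly which file and interpreter are actually being used:

```python
import os
import sys

print("interpreter:", sys.executable)
print("script file:", os.path.abspath(__file__))
```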
If you are importing any module like `import some_module`, you could change it to `from some_module import *`, or import just the specific function you want.
2,583
10,076,075
I have a data structure like this:

```
{
    'key1':[
        [1,1,'Some text'],
        [2,0,''],
        ...
    ],
    ...
    'key99':[
        [1,1,'Some text'],
        [2,1,'More text'],
        ...
    ],
}
```

The size of this will be only like 100 keys and 100 lists in each key.

I'd like to store it and retrieve it (the entire list) based on the key. This is for use in a web server with not very high traffic. However, the back end must handle concurrent reads and writes. How can I do this in a safe way and without writing too much code?

I suppose storing the [pickled](http://docs.python.org/library/pickle) object in SQLite is a possible solution. Are there better ways?
2012/04/09
[ "https://Stackoverflow.com/questions/10076075", "https://Stackoverflow.com", "https://Stackoverflow.com/users/126463/" ]
You should use a [`ObservableCollection<SomeType>`](http://msdn.microsoft.com/en-us/library/ms668604.aspx) for this instead. `ObservableCollection<T>` provides the `CollectionChanged` event which you can subscribe to - the [`CollectionChanged`](http://msdn.microsoft.com/en-us/library/ms653375.aspx) event fires when an item is added, removed, changed, moved, or the entire list is refreshed.
`List` does not expose any events for that. You should consider using [`ObservableCollection`](http://msdn.microsoft.com/en-us/library/ms653375.aspx) instead. It has `CollectionChanged` event which occurs when an item is added, removed, changed, moved, or the entire list is refreshed.
2,584
70,234,520
I am new at python; I'm trying to write some code to print several lines after an if statement. For example, I have a file "test.txt" in this style:

```
Hello how are you?
fine thanks
how old are you?
24
good
how old are you?
i am 26
ok bye.
Hello how are you?
fine
how old are you?
13
good
how old are you?
i am 34
ok bye.
Hello how are you?
good
how old are you?
17
good
how old are you?
i am 19
ok bye.
Hello how are you?
perfect
how old are you?
26
good
how old are you?
i am 21
ok bye.
```

So I want to print one line after each "how old are you". My code is like this:

```
fhandle=open('test.txt')
for line in fhandle:
    if line.startswith('how old are you?'):
        print(line)
        # THIS IS THE PROBLEM
```

I want to print the next line after "how old are you" (maybe print two lines after "how old are you").
2021/12/05
[ "https://Stackoverflow.com/questions/70234520", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17594775/" ]
You can use the `readlines()` function, which returns the lines of a file as a list, and use the `enumerate()` function to loop through the list elements:

```
lines = open('test.txt').readlines()
for i,line in enumerate(lines):
    if line.startswith('how old are you?'):
        print(lines[i+1], lines[i+2])
```
You could convert the file to a list and use a variable which increases by 1 for each line:

```py
fhandle = list(open('test.txt'))
i = 1
for line in fhandle:
    if line.startswith('how old are you?'):
        print(fhandle[i])
    i += 1
```
2,593
2,009,379
```
import re
from decimal import *
import numpy
from scipy.signal import cspline1d, cspline1d_eval
import scipy.interpolate
import scipy
import math
import numpy
from scipy import interpolate


Y1 =[0.48960000000000004, 0.52736099999999997, 0.56413900000000006, 0.60200199999999993, 0.64071400000000001, 0.67668399999999995, 0.71315899999999999, 0.75050499999999998, 0.61494199999999999, 0.66246900000000009]

X1 =[0.024, 0.026000000000000002, 0.028000000000000004, 0.029999999999999999, 0.032000000000000001, 0.034000000000000002, 0.035999999999999997, 0.038000000000000006, 0.029999999999999999, 0.032500000000000001]

rep = scipy.interpolate.splrep(X1,Y1)
```

In the above code I am getting an error of:

```
Traceback (most recent call last):
  File "/home/vibhor/Desktop/timing_tool/timing/interpolation_cap.py", line 64, in <module>
    rep = scipy.interpolate.splrep(X1,Y1)
  File "/usr/lib/python2.6/site-packages/scipy/interpolate/fitpack.py", line 418, in splrep
    raise _iermess[ier][1],_iermess[ier][0]
ValueError: Error on input data
```

Don't know what is happening.
2010/01/05
[ "https://Stackoverflow.com/questions/2009379", "https://Stackoverflow.com", "https://Stackoverflow.com/users/240524/" ]
I believe it's due to the X1 values not being ordered from smallest to largest, plus you also have one duplicate x point, i.e., you need to sort the values for X1 and Y1 before you can use splrep, and remove the duplicate.

splrep, from the docs, seems to be low-level access to the FITPACK libraries, which expect a sorted, non-duplicate list; that's why it returns an error.

interpolate.interp1d might seem to work, but have you actually tried to use it to find a new point? I think you'll find an error when you call it, e.g. rep(2).
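A sketch of the sorting/de-duplication step described above, reusing the X1/Y1 arrays from the question:

```python
import numpy as np
from scipy import interpolate

x = np.asarray(X1)
y = np.asarray(Y1)
order = np.argsort(x)                            # splrep expects increasing x
x, y = x[order], y[order]
keep = np.concatenate(([True], np.diff(x) > 0))  # drop repeated x values
rep = interpolate.splrep(x[keep], y[keep])
```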
The X value 0.029999999999999999 occurs twice, with two different Y coordinates. It wouldn't surprise me if that caused a problem trying to fit a polynomial spline segment....
2,596
70,639,556
Recently I have started to use [hydra](https://hydra.cc/docs/intro/) to manage the configs in my application. I use [Structured Configs](https://hydra.cc/docs/tutorials/structured_config/intro/) to create a schema for .yaml config files. Structured Configs in Hydra use [dataclasses](https://docs.python.org/3/library/dataclasses.html) for type checking. However, I also want to use some kind of validators for some of the parameters I specify in my Structured Configs (something like [this](https://pydantic-docs.helpmanual.io/usage/validators/)).

Do you know if it is somehow possible to use Pydantic for this purpose? When I try to use Pydantic, OmegaConf complains about it:

```sh
omegaconf.errors.ValidationError: Input class 'SomeClass' is not a structured config. did you forget to decorate it as a dataclass?
```
2022/01/09
[ "https://Stackoverflow.com/questions/70639556", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10943470/" ]
For those of you wondering how this works exactly, here is an example of it: ```py import hydra from hydra.core.config_store import ConfigStore from omegaconf import OmegaConf from pydantic.dataclasses import dataclass from pydantic import validator @dataclass class MyConfigSchema: some_var: float @validator("some_var") def validate_some_var(cls, some_var: float) -> float: if some_var < 0: raise ValueError(f"'some_var' can't be less than 0, got: {some_var}") return some_var cs = ConfigStore.instance() cs.store(name="config_schema", node=MyConfigSchema) @hydra.main(config_path="/path/to/configs", config_name="config") def my_app(config: MyConfigSchema) -> None: # The 'validator' methods will be called when you run the line below OmegaConf.to_object(config) if __name__ == "__main__": my_app() ``` And `config.yaml` : ```yaml defaults: - config_schema some_var: -1 # this will raise a ValueError ```
See [pydantic.dataclasses.dataclass](https://pydantic-docs.helpmanual.io/usage/dataclasses/), which are a drop-in replacement for the standard-library dataclasses with some extra type-checking.
2,597
31,460,152
I am writing some python code that will work as a daemon in a Raspberry pi. However, the person I am writing this for wants to see the raw output it gets while it is running, not just my log files.

My first idea to do this was to use a bash script using the Screen program, but that has some features in it that I CANNOT have. Mainly the ability to kill the program through the Screen program.

Is there a way I can write a program (preferably python) or bash script that is able to read the output of another program running, but doesn't send anything to it?

Thanks.
2015/07/16
[ "https://Stackoverflow.com/questions/31460152", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3346931/" ]
In the latest seaborn, you can use the `countplot` function: ``` seaborn.countplot(x='reputation', data=df) ``` To do it with `barplot` you'd need something like this: ``` seaborn.barplot(x=df.reputation.value_counts().index, y=df.reputation.value_counts()) ``` You can't pass `'reputation'` as a column name to `x` while also passing the counts in `y`. Passing 'reputation' for `x` will use the *values* of `df.reputation` (all of them, not just the unique ones) as the `x` values, and seaborn has no way to align these with the counts. So you need to pass the unique values as `x` and the counts as `y`. But you need to call `value_counts` twice (or do some other sorting on both the unique values and the counts) to ensure they match up right.
Using just `countplot` you can get the bars in the same order as `.value_counts()` output too: ``` seaborn.countplot(data=df, x='reputation', order=df.reputation.value_counts().index) ```
2,598
35,230,093
When a terminal is opened, the environmental shell is set. If I then type "csh" it starts running a c shell as a program within the bash terminal. My question is: from a python script, how can I check whether csh has been executed prior to starting the python script?

Thanks
2016/02/05
[ "https://Stackoverflow.com/questions/35230093", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5889345/" ]
You can check the shell environment by using ``` import os shell = os.environ['SHELL'] ``` Then you can make sure `shell` is set to `/bin/csh`
You can use `os.getppid()` to find the [parent PID](https://unix.stackexchange.com/q/18166/3330), and `ps` to find the name of the command: ``` import subprocess import os ppid = os.getppid() out = subprocess.check_output(['ps', '--format', '%c', '--pid', str(ppid)]) print(out.splitlines()[-1]) ``` --- ``` % csh % script.py csh % bash (dev)13:53:04 unutbu@buster:~% script.py bash ``` Note that the parent process may not be a shell. If I run the code from an IPython session launched inside emacs, then the parent is emacs: ``` In [170]: ppid = os.getppid() out = subprocess.check_output(['ps', '--format', '%c', '--pid', str(ppid)]) print(out.splitlines()[-1]) In [172]: emacs ```
2,599
44,453,416
I'm packing a python application into docker with nix's `dockerTools` and all is good except for the image size. Python itself is about 40Mb, and if you add `numpy` and `pandas` it becomes a few hundred megabytes, while the application code is only ~100Kb.

The only solution I see is to pack the dependencies in a separate image and then inherit the main one from it. It won't fix the size, but at least I won't need to transfer huge images on every commit. Also I don't know how to do this: should I use some image with nix, or build an environment with `pythonPackages.buildEnv` and then attach my app to it?

It would be great to have some generic solution, but a python-specific one would be good too. Even if you have an imperfect solution, please share.

Ok, with the `fromImage` attr for `buildImage` I split one huge layer into a huge dependency layer and a small app code layer. I wonder if there is any way to move this fat dependency layer into a separate image, so I could share it among my other projects?
2017/06/09
[ "https://Stackoverflow.com/questions/44453416", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1307593/" ]
After googling a bit and reading the `dockerTools` code I ended up with this solution:

```
let
   deps = pkgs.dockerTools.buildImage {
     name = "deps";
     contents = [ list of all deps here ];
   };
in pkgs.dockerTools.buildImage {
   name = "app";
   fromImage = deps;
}
```

This will build a two-layer docker image: one of the layers is the dependencies, the other is the app.

Also it seems that the value for `fromImage` could be the result of `pullImage`, which should give you the same result (if I understood the code correctly), but I wasn't able to check it.
There is no need to package your dependencies in a separate image and inherit it, although that can't do harm. All you need to do is make sure that you add your application code as one of the last steps in the Dockerfile. Each command will have its own layer, so if you only change your application code, all layers above that change can be used from cache. Example from the [Docker Images and Layers](https://docs.docker.com/engine/userguide/storagedriver/imagesandcontainers/#images-and-layers.) documentation: The dockerfile ``` FROM ubuntu:15.10 COPY . /app RUN make /app CMD python /app/app.py ``` contains four distinct layers. If you only modify the last line, only that layer and all layers below that will have to be transferred. When pushing or pulling you will see `4b0ba2c4050a: Already exists` next to the layers being used from cache. Following this approach you don't end up with a smaller image, but as you say you don't have to pull large images on each change.
2,600
60,397,004
Hi, new to python and programming in general. I'm trying to find an element in an array based on user input; here's what I've done:

```
a =[31,41,59,26,41,58]
input = input("Enter number : ")
for i in range(1,len(a),1) :
    if input == a[i] :
        print(i)
```

The problem is that it doesn't print out anything. What am I doing wrong here?
2020/02/25
[ "https://Stackoverflow.com/questions/60397004", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12932148/" ]
`input` returns a string. To make the values comparable as integers, wrap them in `int`.

```
inp=int(input('enter :'))
for i in range(len(a)):
    if inp==a[i]:
        print(i)
```

Indices in a `list` go from *0* to *len(list)-1*. Instead of using `range(len(a))` it's preferred to use `enumerate`.

```
for idx,val in enumerate(a):
    if inp==val:
        print(idx)
```

---

To check if `inp` is in `a` you can do this:

```
>>> inp in a
True #if it exists or else False
```

---

You can use `try-except` also.

```
try:
    print(a.index(inp))
except ValueError:
    print('Element not Found')
```
`input` returns a string; `a` contains integers. Your loop starts at 1, so it will never test against `a[0]` (in this case, 31). And you shouldn't re-define the name `input`.
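Putting both fixes together, a corrected sketch of the loop:

```python
a = [31, 41, 59, 26, 41, 58]
value = int(input("Enter number : "))  # convert the typed string to an int
for i, item in enumerate(a):           # starts at index 0, covers every element
    if item == value:
        print(i)
```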
2,601
26,532,216
I am trying to install some additional packages that do not come with Anaconda. All of these packages can be installed using `pip install PackageName`. However, when I type this command at the Anaconda Command Prompt, I get the following error: ``` Fatal error in launcher: Unable to create process using '"C:\Python27\python.exe " "C:\python27\scripts\pip.exe" install MechanicalSoup' ``` I also tried to run the command from the python interpreter after `import pip` but that also did not work (I got a `SyntaxError: invalid syntax`). I am a noob and understand that this might be a very basic question so thanks for your help in advance! PS: I am using Windows 7, 64 bit, conda version: 3.7.1 and python version: 2.7.6.
2014/10/23
[ "https://Stackoverflow.com/questions/26532216", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4174494/" ]
When installing anaconda, you are asked if you want to include the installed python in your system PATH variable. Make sure you have it in your PATH. If everything is set up correctly, you can run pip from your regular command prompt as well.
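If the PATH is still ambiguous, one workaround (the install path here is an assumption about a default Anaconda setup) is to call pip through the exact interpreter you want, e.g. `C:\Anaconda\python.exe -m pip install MechanicalSoup`; `python -m pip` guarantees the package lands in that interpreter's site-packages.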
There is a way around the use of pip. From the anaconda terminal window you can run:

```
conda install PackageName
```

Because MechanicalSoup isn't in one of anaconda's package channels, you will have to do a bit of editing. See instructions near the bottom [on their blog](http://www.continuum.io/blog/conda).
2,604
61,302,203
``` File "<ipython-input-6-b985bbbd8c62>", line 21 cv2.rectangle(img,(ix,iy),(x,y),(255,0,0),-1) ^ IndentationError: expected an indented block ``` my code ``` import cv2 import numpy as np #variables #True while mouse button down, False while mouse button up drawing = False ix,iy = -1 #Function def draw_rectangle(event,x,y,param,flags): global ix,iy,drawing if event == cv2.EVENT_LBUTTONDOWN: drawing = True ix,iy = x,y elif event == cv2.EVENT_MOUSEMOVE: if drawing == True: cv2.rectangle(img,(ix,iy),(x,y),(255,0,0),-1) elif event == cv2.EVENT_LBUTTONUP: drawing = False cv2.rectangle(img,(ix,iy),(x,y),(255,0,0),-1) #Showing images with opencv #black img = np.zeros((612,612,3)) cv2.namedwindow(winname='draw_painting') cv2.setMouseCallback('draw_painting',draw_rectangle) while True: cv2.imshow('draw_painting',img) cv2.waitkey(20) & 0xFF = 27: break cv2.destryAllWindows() ```
2020/04/19
[ "https://Stackoverflow.com/questions/61302203", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12148825/" ]
Launch `npm install` and in your body add the class `<body class="mat-app-background">`. Or, if you want, you can try to add `import { MatSidenavModule } from "@angular/material/sidenav";` in your app.module.ts and put your html code in `<mat-sidenav-container>`.
Try adding `@import '@angular/material/prebuilt-themes/pink-bluegrey.css';` in your `styles.css` file.
2,609
38,274,695
Can anybody help me with this? I'm a beginner in python and programming. Thanks very much. I got this TypeError: 'dict' object is not callable when I execute this function. ``` def goodVsEvil(good, evil): GoodTeam = {'Hobbits':1, 'Men':2, 'Elves':3, 'Dwarves':3, 'Eagles':4, 'Wizards':10} EvilTeam = {'Orcs':1, 'Men':2, 'Wargs':2, 'Goblins':2, 'Uruk Hai':3, 'Trolls':5, 'Wizards':10} Gworth = 0 Eworth = 0 for k, val in GoodTeam(): Input = raw_input ('How many of {0} : ') Gworth = Gworth + int(Input) * val for k, val in EvilTeam(): inp = raw_input ('How many of {0} : ') Eworth = Eworth + int(inp) * val if Gworth > Eworth: return 'Battle Result: Good triumphs over Evil' if Eworth > Gworth: return 'Battle Result: Evil eradicates all trace of Good' if Eworth == Gworth: return 'Battle Result: No victor on this battle field' ```
2016/07/08
[ "https://Stackoverflow.com/questions/38274695", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6566925/" ]
Those parentheses are unnecessary. You intend to use `.items()`, which allows you to iterate over the keys and values of your dictionary:

```
for k, val in GoodTeam.items():
    # your code
```

You should replicate this change for `EvilTeam` also.
Like the error says, `GoodTeam` is a dict, but you're trying to call it. I think you mean to call its `items` method:

```
for k, val in GoodTeam.items():
```

The same is true for EvilTeam. Note you have other errors; you're using the string format method but haven't given it anything to actually format.
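Putting the fixes together, a short sketch of one corrected loop (keeping the question's Python 2 style with `raw_input`, and actually passing the key to `format`):

```python
GoodTeam = {'Hobbits': 1, 'Men': 2, 'Elves': 3, 'Dwarves': 3, 'Eagles': 4, 'Wizards': 10}
Gworth = 0
for k, val in GoodTeam.items():
    answer = raw_input('How many of {0} : '.format(k))  # format now receives k
    Gworth += int(answer) * val
```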
2,610
34,090,999
With python's [`logging`](https://docs.python.org/2/library/logging.html) module, is there a way to **collect multiple events into one log entry**?

An ideal solution would be an extension of python's `logging` module or a **custom formatter/filter** for it, so that collecting logging events of the same kind happens in the background and **nothing needs to be added to the code body** (e.g. at every call of a logging function).

Here is an **example** that generates a **large number of the same or very similar logging** events:

```
import logging

for i in range(99999):
    try:
        asdf[i] # not defined!
    except NameError:
        logging.exception('foo') # generates large number of logging events
    else:
        pass

# ... more code with more logging ...

for i in range(88888):
    logging.info('more of the same %d' % i)

# ... and so on ...
```

So we have the same exception **99999** times and log it. It would be nice if the log just said something like:

```
ERROR:root:foo (occured 99999 times)
Traceback (most recent call last):
  File "./exceptionlogging.py", line 10, in <module>
    asdf[i] # not defined!
NameError: name 'asdf' is not defined

INFO:root:foo more of the same (occured 88888 times with various values)
```
2015/12/04
[ "https://Stackoverflow.com/questions/34090999", "https://Stackoverflow.com", "https://Stackoverflow.com/users/789308/" ]
You should probably be writing a message aggregate/statistics class rather than trying to hook onto the logging system's [singletons](https://stackoverflow.com/questions/31875/is-there-a-simple-elegant-way-to-define-singletons-in-python) but I guess you may have an existing code base that uses logging. I'd also suggest that you should instantiate your loggers rather than always using the default root. The [Python Logging Cookbook](https://docs.python.org/2/howto/logging-cookbook.html) has extensive explanation and examples. The following class should do what you are asking. ``` import logging import atexit import pprint class Aggregator(object): logs = {} @classmethod def _aggregate(cls, record): id = '{0[levelname]}:{0[name]}:{0[msg]}'.format(record.__dict__) if id not in cls.logs: # first occurrence cls.logs[id] = [1, record] else: # subsequent occurrence cls.logs[id][0] += 1 @classmethod def _output(cls): for count, record in cls.logs.values(): record.__dict__['msg'] += ' (occured {} times)'.format(count) logging.getLogger(record.__dict__['name']).handle(record) @staticmethod def filter(record): # pprint.pprint(record) Aggregator._aggregate(record) return False @staticmethod def exit(): Aggregator._output() logging.getLogger().addFilter(Aggregator) atexit.register(Aggregator.exit) for i in range(99999): try: asdf[i] # not defined! except NameError: logging.exception('foo') # generates large number of logging events else: pass # ... more code with more logging ... for i in range(88888): logging.error('more of the same') # ... and so on ... ``` Note that you don't get any logs until the program exits. The result of running it this is: ``` ERROR:root:foo (occured 99999 times) Traceback (most recent call last): File "C:\work\VEMS\python\logcount.py", line 38, in asdf[i] # not defined! NameError: name 'asdf' is not defined ERROR:root:more of the same (occured 88888 times) ```
Create a counter and only log it for `count=1`, then increment thereafter and write out in a finally block (to ensure it gets logged no matter how bad the application crashes and burns). This could of course pose an issue if you have the same exception for different reasons, but you could always search for the line number to verify it's the same issue or something similar. A minimal example: ``` name_error_exception_count = 0 try: for i in range(99999): try: asdf[i] # not defined! except NameError: name_error_exception_count += 1 if name_error_exception_count == 1: logging.exception('foo') else: pass except Exception: pass # this is just to get the finally block, handle exceptions here too, maybe finally: if name_error_exception_count > 0: logging.exception('NameError exception occurred {} times.'.format(name_error_exception_count)) ```
2,611
57,907,518
So I'm trying to log in to a web-client wifi login page with python. The web-client keeps generating special octal characters for every login session. So what I'm trying to do is: requests.get(web-client).text -> get the octal code by looping over the text index -> combine it with the password.

The problem is: if I write

```
password="password"
special="\340" + password + "\043\242\062\374\062\365\062\266\201\323\145\251\200\303\025\315"
print(special)
```

it returns:

```
àpassword#¢2ü2õ2¶Óe©ÃÍ   #this is what I want, python translates it to chars
```

But if I index the webpage:

```
import requests

webtext= requests.get(web-client url).text
password= "password"
special1= ""
special2= ""

for i in range(3163, 3167):  #range of the first octal
    special1 = special1+webtext[i]
for i in range(3204, 3268):  #range of the second octal
    special2 = special2+webtext[i]

special=special1+password+special2
print(special)
```

it returns:

```
\340password\043\242\062\374\062\365\062\266\201\323\145\251\200\303\025\315
```

As you can see it's not decoded to chars; python treats it as a plain string. So what should I do to get the same result?

Btw I'm simulating the requests by opening the saved text file of the web-page html.
2019/09/12
[ "https://Stackoverflow.com/questions/57907518", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11561888/" ]
You need to order the comments. In the micropost view:

```
<% @post.comments.order(created_at: :asc).each do |comment| %>
 ..
<% end %>
```
Inside your show method:

```
@post = Micropost.find(params[:id])
@comments = @post.includes(:comments).order('comments.created_at DESC')
```

Then you can iterate over @comments in your HTML files.
2,616
73,770,461
I am using the Twitter API StreamingClient using the python module Tweepy. I am currently doing a short stream where I am collecting tweets and saving the entire ID and text from the tweet inside of a json object and writing it to a file. My goal is to be able to collect the Twitter handle from each specific tweet and save it to a json file (preferably print it in the output terminal as well). This is what the current code looks like: ```py KEY_FILE = './keys/bearer_token' DURATION = 10 def on_data(json_data): json_obj = json.loads(json_data.decode()) #print('Received tweet:', json_obj) print(f'Tweet Screen Name: {json_obj.user.screen_name}') with open('./collected_tweets/tweets.json', 'a') as out: json.dump(json_obj, out) bearer_token = open(KEY_FILE).read().strip() streaming_client = tweepy.StreamingClient(bearer_token) streaming_client.on_data = on_data streaming_client.sample(threaded=True) time.sleep(DURATION) streaming_client.disconnect() ``` And I have no idea how to do this, the only thing I found is that someone did this: ``` json_obj.user.screen_name ``` However, this did not work at all, and I am completely stuck.
2022/09/19
[ "https://Stackoverflow.com/questions/73770461", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11977945/" ]
So, a couple of things.

Firstly, I'd recommend using `on_response` rather than `on_data`, because StreamingClient already defines an `on_data` function to parse the json. (Then it will fire `on_tweet`, `on_response`, `on_error`, etc.)

Secondly, `json_obj.user.screen_name` is part of API v1 I believe, which is why it doesn't work.

---

To get extra data using Twitter API v2, you'll want to use Expansions and Fields ([Tweepy Documentation](https://docs.tweepy.org/en/stable/expansions_and_fields.html), [Twitter Documentation](https://developer.twitter.com/en/docs/twitter-api/expansions))

For your case, you'll probably want to use `"username"` which is under the `user_fields`.

```py
def on_response(response:tweepy.StreamResponse):
    tweet:tweepy.Tweet = response.data
    users:list = response.includes.get("users")
    # response.includes is a dictionary representing all the fields (user_fields, media_fields, etc)
    # response.includes["users"] is a list of `tweepy.User`
    # the first user in the list is the author (at least from what I've tested)
    # the rest of the users in that list are anyone who is mentioned in the tweet
    author_username = users and users[0].username

    print(tweet.text, author_username)


streaming_client = tweepy.StreamingClient(bearer_token)
streaming_client.on_response = on_response
streaming_client.sample(threaded=True, user_fields = ["id", "name", "username"]) # using user fields

time.sleep(DURATION)
streaming_client.disconnect()
```

Hope this helped.

*also tweepy documentation definitely needs more examples for api v2*
```py
KEY_FILE = './keys/bearer_token'
DURATION = 10

def on_data(json_data):
    json_obj = json.loads(json_data.decode())
    print('Received tweet:', json_obj)
    with open('./collected_tweets/tweets.json', 'a') as out:
        json.dump(json_obj, out)

bearer_token = open(KEY_FILE).read().strip()
streaming_client = tweepy.StreamingClient(bearer_token)
streaming_client.on_data = on_data
streaming_client.sample(threaded=True, expansions="author_id", user_fields="username", tweet_fields="created_at")
time.sleep(DURATION)
streaming_client.disconnect()
```
2,617
4,618,373
How do I tell Selenium to use HTMLUnit? I'm running selenium-server-standalone-2.0b1.jar as a Selenium server in the background, and the latest Python bindings installed with "pip install -U selenium". Everything works fine with Firefox. But I'd like to use HTMLUnit, as it is lighter weight and doesn't need X. This is my attempt to do so: ``` >>> import selenium >>> s = selenium.selenium("localhost", 4444, "*htmlunit", "http://localhost/") >>> s.start() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python2.6/dist-packages/selenium/selenium/selenium.py", line 189, in start result = self.get_string("getNewBrowserSession", start_args) File "/usr/local/lib/python2.6/dist-packages/selenium/selenium/selenium.py", line 223, in get_string result = self.do_command(verb, args) File "/usr/local/lib/python2.6/dist-packages/selenium/selenium/selenium.py", line 217, in do_command raise Exception, data Exception: Failed to start new browser session: Browser not supported: *htmlunit Supported browsers include: *firefox *mock *firefoxproxy *pifirefox *chrome *iexploreproxy *iexplore *firefox3 *safariproxy *googlechrome *konqueror *firefox2 *safari *piiexplore *firefoxchrome *opera *iehta *custom ``` So the question is, what is the HTMLUnit driver called? How do I enable it? The code for HTMLUnit seems to be in the source for Selenium 2, so I expected it to be available by default like the other browsers. I can't find any instructions on how to enable it.
2011/01/06
[ "https://Stackoverflow.com/questions/4618373", "https://Stackoverflow.com", "https://Stackoverflow.com/users/284340/" ]
As of the 2.0b3 release of the Python client you can create an HTMLUnit webdriver via a remote connection like so:

```
from selenium import webdriver

driver = webdriver.Remote(
    desired_capabilities=webdriver.DesiredCapabilities.HTMLUNIT)
driver.get('http://www.google.com')
```

You can also use the `HTMLUNITWITHJS` capability item for a browser with JavaScript support.

Note that you need to run the Selenium Java server for this to work, since HTMLUnit is implemented on the Java side.
I use it like this:

```
from selenium.remote import connect

b = connect('htmlunit')
b.get('http://google.com')

q = b.find_element_by_name('q')
q.send_keys('selenium')
q.submit()

for l in b.find_elements_by_xpath('//h3/a'):
    print('%s\n\t%s\n' % (l.get_text(), l.get_attribute('href')))
```
2,618
41,599,600
(The question was edited based on feedback received. I will continue to edit it based on input received until the issue is resolved) I am learning Pyhton and beautiful soup in particular and I am doing the Google Exercise on Regex using the set of html files that contains popular baby names for different years (e.g. baby1990.html etc). You can find this dataset if you are interested here: <https://developers.google.com/edu/python/exercises/baby-names> Each html file contains a table with baby names data that looks like this: [![enter image description here](https://i.stack.imgur.com/r4GpY.png)](https://i.stack.imgur.com/r4GpY.png) Before the table with the baby names there is another table. The html code in the Tags of the two tables is respectively the following ``` <table width="100%" border="0" cellspacing="0" cellpadding="4"> # Unwanted table <table width="100%" border="0" cellspacing="0" cellpadding="4" summary="formatting"> # targeted table ``` You may observe that the targeted differs from the unwanted table by the attribute: summary="formatting" The first table--the one we must skip -- has the following html code: ``` <table width="100%" border="0" cellspacing="0" cellpadding="4"> <tbody> <tr><td class="sstop" valign="bottom" align="left" width="25%"> Social Security Online </td><td valign="bottom" class="titletext"> <!-- sitetitle -->Popular Baby Names </td> </tr> <tr bgcolor="#333366"><td colspan="2" height="2"></td></tr> <tr><td class="graystars" width="25%" valign="top"> <a href="../OACT/babynames/">Popular Baby Names</a></td><td valign="top"> <a href="http://www.ssa.gov/"><img src="/templateimages/tinylogo.gif" width="52" height="47" align="left" alt="SSA logo: link to Social Security home page" border="0"></a><a name="content"></a> <h1>Popular Names by Birth Year</h1>September 12, 2007</td> </tr> <tr bgcolor="#333366"><td colspan="2" height="1"></td></tr> </tbody></table> ``` Within the targeted table the code is the following: ``` <table width="100%" border="0" cellspacing="0" cellpadding="4" summary="formatting"> <tr valign="top"><td width="25%" class="greycell"> <a href="../OACT/babynames/background.html">Background information</a> <p><br /> &nbsp; Select another <label for="yob">year of birth</label>?<br /> <form method="post" action="/cgi-bin/popularnames.cgi"> &nbsp; <input type="text" name="year" id="yob" size="4" value="1990"> <input type="hidden" name="top" value="1000"> <input type="hidden" name="number" value=""> &nbsp; <input type="submit" value=" Go "></form> </td><td> <h3 align="center">Popularity in 1990</h3> <p align="center"> <table width="48%" border="1" bordercolor="#aaabbb" cellpadding="2" cellspacing="0" summary="Popularity for top 1000"> <tr align="center" valign="bottom"> <th scope="col" width="12%" bgcolor="#efefef">Rank</th> <th scope="col" width="41%" bgcolor="#99ccff">Male name</th> <th scope="col" bgcolor="pink" width="41%">Female name</th></tr> <tr align="right"><td>1</td><td>Michael</td><td>Jessica</td> # Targeted row <tr align="right"><td>2</td><td>Christopher</td><td>Ashley</td> # Targeted row etc... ``` You can see that the distinctive attribute of the targeted rows is: align = "right". 
Now the code to extract the content of the targeted cells is the following: ``` with open("C:/Users/ALEX/MyFiles/JUPYTER NOTEBOOKS/google-python-exercises/babynames/baby1990.html","r") \ as f: soup = bs(f.read(), 'html.parser') print soup.tr print "number of elemenents in the soup:" , len(soup) right_table = soup.find("table", summary = "formatting") print(right_table.prettify()) print "right_table" , len(right_table) print(right_table[0].prettify()) for row in right_table[1].find_all("tr", allign = "right"): cells = row.find_all("td") try: print "cells[0]: " , cells[0] except: print "cells[0] : NaN" try: print "cells[1]: " , cells[1] except: print "cells[1] : NaN" try: print "cells[2]: " , cells[2] except: print "cells[2] : NaN" ``` The output is an error message: ``` <tr><td align="left" class="sstop" valign="bottom" width="25%"> Social Security Online </td><td class="titletext" valign="bottom"> <!-- sitetitle -->Popular Baby Names </td> </tr> number of elemenents in the soup: 4 --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-116-3ec77a65b5ad> in <module>() 6 right_table = soup.find("table", summary = "formatting") 7 ----> 8 print(right_table.prettify()) 9 10 print "right_table" , len(right_table) C:\users\alex\Anaconda2\lib\site-packages\bs4\element.pyc in prettify(self, encoding, formatter) 1198 def prettify(self, encoding=None, formatter="minimal"): 1199 if encoding is None: -> 1200 return self.decode(True, formatter=formatter) 1201 else: 1202 return self.encode(encoding, True, formatter=formatter) C:\users\alex\Anaconda2\lib\site-packages\bs4\element.pyc in decode(self, indent_level, eventual_encoding, formatter) 1164 indent_contents = None 1165 contents = self.decode_contents( -> 1166 indent_contents, eventual_encoding, formatter) 1167 1168 if self.hidden: C:\users\alex\Anaconda2\lib\site-packages\bs4\element.pyc in decode_contents(self, indent_level, eventual_encoding, formatter) 1233 elif isinstance(c, Tag): 1234 s.append(c.decode(indent_level, eventual_encoding, -> 1235 formatter)) 1236 if text and indent_level and not self.name == 'pre': 1237 text = text.strip() ... last 2 frames repeated, from the frame below ... C:\users\alex\Anaconda2\lib\site-packages\bs4\element.pyc in decode(self, indent_level, eventual_encoding, formatter) 1164 indent_contents = None 1165 contents = self.decode_contents( -> 1166 indent_contents, eventual_encoding, formatter) 1167 1168 if self.hidden: RuntimeError: maximum recursion depth exceeded while calling a Python object ``` The questions are the following: 1. Why the code returns the first table -- the unwanted one-- given that we have passed the argument summary = "formatting"? 2. What the error message implies? Why it is created? 3. What are other errors you can observe in the code -- if any? Your advice will be appreciated.
2017/01/11
[ "https://Stackoverflow.com/questions/41599600", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7128498/" ]
Your test is failing because those strings are not in ascending order. It fails at `word-e` of the first string and `wordc` of the second string, where `c` sorts before `e` and the hyphen is ignored by default. If you want the ordering to take the hyphen into account, use `StringComparer.Ordinal`:

```
Assert.That(anotherList, Is.Ordered.Ascending.Using((IComparer)StringComparer.Ordinal));
```

Now the test will succeed.
Thanks, abdul. In some cases, if your collection has upper-case items, you should use `StringComparer.OrdinalIgnoreCase` instead of `StringComparer.Ordinal`.
2,620
55,296,584
I am new to Python. I want to get the IP address of the system. I am connected over LAN. When I use the code below to get the IP, it shows 127.0.1.1 instead of 192.168.1.32. Why is it not showing the LAN IP, and how can I get my LAN IP? Every tutorial shows only this way. I also checked by connecting to a mobile hotspot; even then, it shows the same.

```
import socket
hostname = socket.gethostname()
IPAddr = socket.gethostbyname(hostname)
print("Your Computer Name is:" + hostname)
print("Your Computer IP Address is:" + IPAddr)
```

Output:

```
Your Computer Name is:smackcoders
Your Computer IP Address is:127.0.1.1
```

Required Output:

```
Your Computer Name is:smackcoders
Your Computer IP Address is:192.168.1.32
```
2019/03/22
[ "https://Stackoverflow.com/questions/55296584", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11090395/" ]
I got this same problem with my Raspberry Pi.

```
host_name = socket.gethostname()
host_addr = socket.gethostbyname(host_name)
```

Now if I print host_addr, it prints 127.0.1.1. So I found this: <https://www.raspberrypi.org/forums/viewtopic.php?t=188615#p1187999>

```
host_addr = socket.gethostbyname(host_name + ".local")
```

and it worked.
I ran into the same problem you are facing, but I found a solution with the help of my own idea. Don't worry, it is simple to use. If you are familiar with Linux you should have heard of the `ifconfig` command, which returns information about the network interfaces, and the `grep` command, which filters lines containing a specified word. Now just open the terminal and type

```
ifconfig | grep 255.255.255.0
```

and hit `enter`. You will get the wlan inet address line alone in your terminal, like below:

```
inet 192.168.43.248 netmask 255.255.255.0 broadcast 192.168.43.255
```

In your Python script just insert:

```
#!/usr/bin/env python
import subprocess

cmd = "ifconfig | grep 255.255.255.0"
inet = subprocess.check_output(cmd, shell=True)
# decode the grep output and split on whitespace
# (the original snippet referenced an undefined 'wlan' variable here)
inet = inet.decode("utf-8").split()
inet_addr = inet[inet.index("inet") + 1]
print(inet_addr)
```

This script returns your local IP address. It works for me, and I hope it will work for your Linux machine too. All the best.
2,621
46,004,408
I bought a Raspberry Pi yesterday and I am facing quite a large problem: I can't `sudo apt-get update`. I think this error comes from my DNS, because I am connected via Ethernet (physically). This is the message it prints when I execute the command:

```
pi@raspberrypi:~ $ sudo apt-get update
Err:1 http://goddess-gate.com/archive.raspbian.org/raspbian jessie InRelease
  Temporary failure resolving 'goddess-gate.com'
Err:2 http://archive.raspberrypi.org/debian stretch InRelease
  Temporary failure resolving 'archive.raspberrypi.org'
Reading package lists... Done
W: Failed to fetch http://goddess-gate.com/archive.raspbian.org/raspbian/dists/jessie/InRelease Temporary failure resolving 'goddess-gate.com'
W: Failed to fetch http://archive.raspberrypi.org/debian/dists/stretch/InRelease Temporary failure resolving 'archive.raspberrypi.org'
W: Some index files failed to download. They have been ignored, or old ones used instead.
```

So to resolve this problem I have tried a few things:

```
- Changing the /etc/apt/sources.list to a valid mirror of my country (France)
- Reinstalling Raspbian (1st try was with NOOBS) and now I installed Raspbian with the .img file
- Changing my /etc/resolv.conf and /etc/network/interfaces nameservers to these IPs: 8.8.8.8 8.8.4.4
```

Nothing worked... I am really stuck. There is something else: I can't browse any website with Chromium, but I do have an internet connection because I can pip install Python modules. Here is the Chromium message: 'This site can't be reached' ERR_NAME_RESOLUTION_FAILED

One other thing: my inet IP is not valid. Usually it should start with 192.168, but here it is 169.254.241.6... Here is my ifconfig:

```
pi@raspberrypi:~ $ ifconfig
enxb827ebaf69fc: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
 inet 169.254.241.6 netmask 255.255.0.0 broadcast 169.254.255.255
 inet6 fe80::5d8b:1a8c:c520:c339 prefixlen 64 scopeid 0x20<link>
 ether b8:27:eb:af:69:fc txqueuelen 1000 (Ethernet)
 RX packets 0 bytes 0 (0.0 B)
 RX errors 0 dropped 0 overruns 0 frame 0
 TX packets 995 bytes 61042 (59.6 KiB)
 TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
 inet 127.0.0.1 netmask 255.0.0.0
 inet6 ::1 prefixlen 128 scopeid 0x10<host>
 loop txqueuelen 1 (Local Loopback)
 RX packets 806 bytes 77318 (75.5 KiB)
 RX errors 0 dropped 0 overruns 0 frame 0
 TX packets 806 bytes 77318 (75.5 KiB)
 TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

wlan0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
 ether b8:27:eb:fa:3c:a9 txqueuelen 1000 (Ethernet)
 RX packets 0 bytes 0 (0.0 B)
 RX errors 0 dropped 0 overruns 0 frame 0
 TX packets 0 bytes 0 (0.0 B)
 TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
```
2017/09/01
[ "https://Stackoverflow.com/questions/46004408", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7767248/" ]
Type the following at the command line in order to edit `resolv.conf`, the Linux configuration file where the **domain-name to IP mapping** is stored for the purpose of **DNS resolution**:

```sh
sudo nano /etc/resolv.conf
```

then add these 2 lines:

```
nameserver 8.8.8.8
nameserver 8.8.4.4
```

Hope it helps.
The IP address range 169.254.0.0 to 169.254.255.255 is used by zeroconf. Probably there is no active DHCP server in the LAN; usually the router is also the DHCP server. You also have no public IPv6 address, but that could simply come from an IPv4-only internet connection.

Try to configure the interface completely manually with a corrected IP address. If there should be an active DHCP server, try to fix it; sometimes a reboot helps. You can show your gateway with `ip r`. It should be the address of the router. The important thing is that the IP address of the Pi is in the same subnet as the gateway.
2,631
62,585,234
It seems that the output of [`zlib.compress`](https://docs.python.org/3/library/zlib.html#zlib.compress) uses all possible byte values. Is it possible to restrict the output to 255 of the 256 byte values (for example, to avoid emitting `\n`)?

Note that I just use the Python manual as a reference, but the question is not specific to Python (i.e. it applies to any other language that has a `zlib` library).
2020/06/25
[ "https://Stackoverflow.com/questions/62585234", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1424739/" ]
No, this is not possible. Apart from the compressed data itself, there are standardized control structures which contain integers. Those integers may accidentally lead to any 8-bit character ending up in the bytestream.

Your only chance would be to encode the zlib bytestream into another format, e.g. base64.
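For illustration, a minimal sketch of that workaround in Python (the payload here is just an arbitrary example):

```
import base64
import zlib

data = b"some example payload\n" * 100
compressed = zlib.compress(data)

# base64 output only uses A-Z, a-z, 0-9, '+', '/' and '=', so no newline bytes
encoded = base64.b64encode(compressed)
assert b"\n" not in encoded

# round-trip back to the original bytes
assert zlib.decompress(base64.b64decode(encoded)) == data
```

The cost is a 4/3 size overhead on the compressed stream; `base64.b85encode` would be denser, but its larger alphabet has to be checked against the characters you need to avoid.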
As [@ypnos says](https://stackoverflow.com/a/62585291/3798897), this isn't possible within zlib itself. You mentioned that base64 encoding is too inefficient, but it's pretty easy to use an escape character to encode a character you want to avoid (like newlines).

This isn't the most efficient code in the world (and you might want to do something like finding the least used bytes to save a tiny bit more space), but it's readable enough and demonstrates the idea. You can losslessly encode/decode, and the encoded stream won't have any newlines.

```
def encode(data):
    # order matters
    return data.replace(b'a', b'aa').replace(b'\n', b'ab')

def decode(data):
    def _foo():
        pair = False
        for b in data:
            if pair:
                # yield b'a' if b==b'a' else b'\n'
                yield 97 if b == 97 else 10
                pair = False
            elif b == 97:  # b'a'
                pair = True
            else:
                yield b
    return bytes(_foo())
```

As some measure of confidence you can check this exhaustively on small bytestrings:

```
from itertools import *

all(
    bytes(p) == decode(encode(bytes(p)))
    for c in combinations_with_replacement(b'ab\nc', r=6)
    for p in permutations(c)
)
```
2,634
1,984,759
Is there any solution to force the RawConfigParser.write() method to export the config file alphabetically sorted? Even if the original/loaded config file is sorted, the module writes the sections, and the options within each section, in arbitrary order, and it is really annoying to manually edit a huge unsorted config file.

PD: I'm using Python 2.6
2009/12/31
[ "https://Stackoverflow.com/questions/1984759", "https://Stackoverflow.com", "https://Stackoverflow.com/users/235709/" ]
Three solutions:

1. Pass in a dict type (second argument to the constructor) which returns the keys in your preferred sort order.
2. Extend the class and overload `write()` (just copy this method from the original source and modify it).
3. Copy the file ConfigParser.py and add the sorting to the method `write()`.

See [this article](http://www.voidspace.org.uk/python/odict.html) for an ordered dict, or maybe use [this implementation](http://code.activestate.com/recipes/496761/) which preserves the original adding order.
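For option 2, a minimal sketch of what the subclass could look like, adapted from the shape of the stdlib's own `write()` in Python 2.x (treat the details as an approximation of that source, not a drop-in):

```
import ConfigParser

class SortedRawConfigParser(ConfigParser.RawConfigParser):
    """RawConfigParser that writes sections and options alphabetically."""
    def write(self, fp):
        if self._defaults:
            fp.write("[%s]\n" % ConfigParser.DEFAULTSECT)
            for key in sorted(self._defaults):
                fp.write("%s = %s\n" %
                         (key, str(self._defaults[key]).replace('\n', '\n\t')))
            fp.write("\n")
        for section in sorted(self._sections):
            fp.write("[%s]\n" % section)
            for key in sorted(self._sections[section]):
                if key != "__name__":
                    fp.write("%s = %s\n" %
                             (key, str(self._sections[section][key]).replace('\n', '\n\t')))
            fp.write("\n")
```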
The first method looked as the most easier, and safer way. But, after looking at the source code of the ConfigParser, it creates an empty built-in dict, and then copies all the values from the "second parameter" one-by-one. That means it won't use the OrderedDict type. An easy work around can be to overload the CreateParser class. ``` class OrderedRawConfigParser(ConfigParser.RawConfigParser): def __init__(self, defaults=None): self._defaults = type(defaults)() ## will be correct with all type of dict. self._sections = type(defaults)() if defaults: for key, value in defaults.items(): self._defaults[self.optionxform(key)] = value ``` It leaves only one flaw open... namely in ConfigParser.items(). odict doesn't support `update` and `comparison` with normal dicts. Workaround (overload this function too): ``` def items(self, section): try: d2 = self._sections[section] except KeyError: if section != DEFAULTSECT: raise NoSectionError(section) d2 = type(self._section)() ## Originally: d2 = {} d = self._defaults.copy() d.update(d2) ## No more unsupported dict-odict incompatibility here. if "__name__" in d: del d["__name__"] return d.items() ``` Other solution to the items issue is to modify the `odict.OrderedDict.update` function - maybe it is more easy than this one, but I leave it to you. PS: I implemented this solution, but it doesn't work. If i figure out, ConfigParser is still mixing the order of the entries, I will report it. PS2: Solved. The reader function of the ConfigParser is quite idiot. Anyway, only one line had to be changed - and some others for overloading in an external file: ``` def _read(self, fp, fpname): cursect = None optname = None lineno = 0 e = None while True: line = fp.readline() if not line: break lineno = lineno + 1 if line.strip() == '' or line[0] in '#;': continue if line.split(None, 1)[0].lower() == 'rem' and line[0] in "rR": continue if line[0].isspace() and cursect is not None and optname: value = line.strip() if value: cursect[optname] = "%s\n%s" % (cursect[optname], value) else: mo = self.SECTCRE.match(line) if mo: sectname = mo.group('header') if sectname in self._sections: cursect = self._sections[sectname] ## Add ConfigParser for external overloading elif sectname == ConfigParser.DEFAULTSECT: cursect = self._defaults else: ## The tiny single modification needed cursect = type(self._sections)() ## cursect = {'__name__':sectname} cursect['__name__'] = sectname self._sections[sectname] = cursect optname = None elif cursect is None: raise ConfigParser.MissingSectionHeaderError(fpname, lineno, line) ## Add ConfigParser for external overloading. else: mo = self.OPTCRE.match(line) if mo: optname, vi, optval = mo.group('option', 'vi', 'value') if vi in ('=', ':') and ';' in optval: pos = optval.find(';') if pos != -1 and optval[pos-1].isspace(): optval = optval[:pos] optval = optval.strip() if optval == '""': optval = '' optname = self.optionxform(optname.rstrip()) cursect[optname] = optval else: if not e: e = ConfigParser.ParsingError(fpname) ## Add ConfigParser for external overloading e.append(lineno, repr(line)) if e: raise e ``` Trust me, I didn't wrote this thing. I copy-pasted it entirely from ConfigParser.py So overall what to do? 1. Download odict.py from one of the links previously suggested 2. Import it. 3. Copy-paste these codes in your favorite utils.py (which will create the `OrderedRawConfigParser` class for you) 4. `cfg = utils.OrderedRawConfigParser(odict.OrderedDict())` 5. use cfg as always. it will stay ordered. 6. Sit back, smoke a havanna, relax. 
PS3: The problem I solved here exists only in Python 2.5. In 2.6 there is already a solution for it: they added a second optional parameter to the `__init__` function, a custom dict_type. So this workaround is needed only for 2.5.
2,636
36,958,167
I need to update a document in an array inside another document in MongoDB.

```
{
    "_id" : ObjectId("51cff693d342704b5047e6d8"),
    "author" : "test",
    "body" : "sdfkj dsfhk asdfjad ",
    "comments" : [
        {
            "author" : "test",
            "body" : "sdfkjdj\r\nasdjgkfdfj",
            "email" : "test@tes.com"
        },
        {
            "author" : "hola",
            "body" : "sdfl\r\nhola \r\nwork here"
        }
    ],
    "date" : ISODate("2013-06-30T09:12:51.629Z"),
    "permalink" : "mxwnnnqafl",
    "tags" : [ "ab" ],
    "title" : "cd"
}
```

If I try to update the first document in the comments array with the command below, it works:

```
db.posts.update({'permalink':"cxzdzjkztkqraoqlgcru"},{'$inc': {"comments.0.num_likes": 1}})
```

But if I put the same in Python code like below, I get a write error saying that it can't traverse the element. I don't understand what is missing! Can anyone help me out please.

```
post = self.posts.find_one({'permalink': permalink})
response = self.posts.update({'permalink': permalink},
                             {'$inc': {"comments.comment_ordinal.num_likes": 1}})

WriteError: cannot use the part (comments of comments.comment_ordinal.num_likes) to traverse the element
```
2016/04/30
[ "https://Stackoverflow.com/questions/36958167", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3123752/" ]
The moment you give your HTTP/HTTPS endpoint and create a subscription from the AWS console, Amazon sends a subscription message to that endpoint. This is a REST call, and your app must have a handler for this endpoint, otherwise you will miss the subscription message. The HTTP request object your handler is passed needs to expose its SNSMsgTypeHdr header field; its value will be "SubscriptionConfirmation". You need to catch this particular message first and then get the subscription URL. You can handle it in your app itself, or maybe print it out and then manually visit that URL to complete the subscription.

I would also suggest subscribing your email address to the same topic at the same time, so that every time your app gets a message pushed, your mail also gets it (albeit with a different token); at least you will be sure the message was pushed to your endpoint. All you then need to do is keep working on your app to handle the messages at that endpoint as per your requirements.
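As a rough illustration, the confirmation step could look like the sketch below (Python 3 standard library; the handler wiring and the `process_notification` helper are placeholders for your own framework code, not a real API):

```
import json
from urllib.request import urlopen

def handle_sns(headers, body):
    message = json.loads(body)
    msg_type = headers.get("x-amz-sns-message-type")
    if msg_type == "SubscriptionConfirmation":
        # visiting the SubscribeURL confirms the subscription with SNS
        urlopen(message["SubscribeURL"])
    elif msg_type == "Notification":
        process_notification(message["Message"])  # placeholder for your app logic
```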
There are 3 types of messages with SNS: Subscribe, Unsubscribe, and Notification. You will not get any Notification messages until you have correctly handled the Subscribe message, which involves making an API request to AWS when you receive the Subscribe request. The call in this case is ConfirmSubscription: <http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/SNS.html#confirmSubscription-property>

Once you do that, you will start receiving notification messages and you can handle those as your code allows.
2,645
67,347,499
I've error in Python Selenium. I'm trying to download all songs with Selenium, but there is some error. Here is code: ``` from selenium import webdriver import time driver = webdriver.Chrome('/home/tigran/Documents/chromedriver/chromedriver') url = 'https://sefon.pro/genres/shanson/top/' driver.get(url) songs = driver.find_elements_by_xpath('/html/body/div[2]/div[2]/div[1]/div[3]/div/div[3]/div[2]/a') for song in songs: song.click() time.sleep(5) driver.find_element_by_xpath('/html/body/div[2]/div[2]/div[1]/div[1]/div[2]/div/div[3]/div[1]/a[2]').click() time.sleep(8) driver.get(url) time.sleep(5) ``` And here is error: ``` Traceback (most recent call last): File "test.py", line 13, in <module> song.click() File "/home/tigran/.local/lib/python3.8/site-packages/selenium/webdriver/remote/webelement.py", line 80, in click self._execute(Command.CLICK_ELEMENT) File "/home/tigran/.local/lib/python3.8/site-packages/selenium/webdriver/remote/webelement.py", line 633, in _execute return self._parent.execute(command, params) File "/home/tigran/.local/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py", line 321, in execute self.error_handler.check_response(response) File "/home/tigran/.local/lib/python3.8/site-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.StaleElementReferenceException: Message: stale element reference: element is not attached to the page document (Session info: chrome=90.0.4430.72) ``` Any ideas why error comes?
2021/05/01
[ "https://Stackoverflow.com/questions/67347499", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14646178/" ]
You can try to use Image.fromarray:

```
Image.fromarray(matrice, mode=couleur)
```
Sorry for my late answer. The problem is the type being used when converting the matrix to an image: if you use a single-byte image mode, then the matrix dtype must also be single-byte. Example:

```
from PIL import Image
import numpy as np

size_x = 50
size_y = 8
m = "L"
matrix = np.array([[255] * 50 for _ in range(size_y)], dtype="uint8")

im = Image.fromarray(matrix, mode=m)
im.save('Degrade.jpg')
im.show()
```
2,650
53,719,606
Without changing any code, the graph plotted will be different. Correct at the first run in a fresh bash, disordered in the next runs. (maybe it can cycle back to correct order) To be specific: Environment: MacOS Mojave 10.14.2, python3.7.1 installed through homebrew. To do: Plot `scatter` for two or three sets of data on the same `axes`, each with a different `markertype` and different `colors`. Plot customised legend showing which data set each `markertype` represents. I am sorry I don't have enough time to prepare a testable code (for now), but this part seems to be the problem: ``` markerTypes = cycle(['o', 's', '^', 'd', 'p', 'P', '*']) strainLegends = [] strains = list(set([idx.split('_')[0] for idx in pca2Plot.index])) for strain in strains: # markerType is fixed here, and shouldn't be passed on to the next python run anyway. markerType = next(markerTypes) # strainSamples connects directly to strain variable, then data is generated from getting strainSamples: strainSamples = [sample for sample in samples if sample.split('_')[0] == strain] xData = pca2Plot.loc[strainSamples, 'PC1'] yData = pca2Plot.loc[strainSamples, 'PC2'] # See pictures below, data is correctly identified from source # both scatter and legend instance use the same fixed markerType ax.scatter(xData, yData, c=drawColors[strainSamples], s=40, marker=markerType, zorder=3) strainLegends.append(Line2D([0], [0], marker=markerType, color='k', markersize=10, linewidth=0, label=strain)) # print([i for i in ax.get_children() if isinstance(i, PathCollection)]) ax.legend(handles=strainLegends) ``` As you can see the `markerType` and `strain` data are correlated with the data. For the first run with `python3 my_code.py` in bash, it creates a correct picture: see the circle represents A, square represents B [![see the circle represents A, square represents B](https://i.stack.imgur.com/tAEN7.png)](https://i.stack.imgur.com/tAEN7.png) A = circle, B = square. See the square around `(-3, -3.8)`, this data point is from dataset B. While if I run the code again within the same terminal `python3 my_code.py` [![enter image description here](https://i.stack.imgur.com/TudM1.png)](https://i.stack.imgur.com/TudM1.png) Note A and B completely massed up, un-correlated. Now as the legend: A = square, B = circle. Again see the data point `(-3, -3.8)` which comes from dataset B, now annotated as A. If I run the code again, it might produce another result. Here is the code I used to generate annotation: ``` dictColor = {ax: pd.Series(index=pca2Plot.index), } HoverClick = interactionHoverClick( dictColor, fig, ax) fig.canvas.mpl_connect("motion_notify_event", HoverClick.hover) fig.canvas.mpl_connect("button_press_event", HoverClick.click) ``` In class `HoverClick`, I have ``` def hover(self, event): if event.inaxes != None: ax = event.inaxes annot = self.annotAxs[ax] # class matplotlib.collections.PathCollection, here refere to the scatter plotting event (correct?) drawingNum = sum(isinstance(i, PathCollection) for i in ax.get_children()) # print([i for i in ax.get_children() if isinstance(i, PathCollection)]) plotSeq = 0 jump = [] indInd = [] indIndInstances = [] for i in range(drawingNum): sc = ax.get_children()[i] cont, ind = sc.contains(event) jump.append(len(sc.get_facecolor())) indIndInstances.append(ind['ind']) if cont: plotSeq = i indInd.extend(ind['ind']) # here plotSeq is the index of last PathCollection instance that program find my mouse hovering on a datapoint of it. 
sc = ax.get_children()[plotSeq] cont, ind = sc.contains(event) if cont: try: exist = (indInd[0] in self.hovered) except: exist = False if not exist: hovered = indInd[0] pos = sc.get_offsets()[indInd[0]] textList = [] for num in range(plotSeq + 1): singleJump = sum(jump[:num]) textList.extend([self.colorDict[ax].index[i + singleJump] for i in indIndInstances[num]]) text = '\n'.join(textList) annot.xy = pos annot.set_text(text) annot.set_visible(True) self.fig.canvas.draw_idle() else: if annot.get_visible(): annot.set_visible(False) self.fig.canvas.draw_idle() # hover ``` Note that I annotated the code for print each instances. This is tested because I thought it might be the order of instances that has been changed throughout other part of code. But the result showed in both correct and wrong cases, the order was not changed. Does anyone knows what happened? Anyone have experienced this before? If I need to clean the memory in the end of the code, what should I do?
2018/12/11
[ "https://Stackoverflow.com/questions/53719606", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6823079/" ]
Since your code is incomplete it is difficult to say for sure, but it seems that the order of markers is being messed up by the `cycle` iterator. Why don't you just try: ``` markerTypes = ['o', 's', '^'] strainLegends = [] for strain, markerType in zip(strains, markerTypes): strainSamples = [sample for sample in samples if sample.split('_')[0] == strain] xData = pca2Plot.loc[strainSamples, 'PC1'] yData = pca2Plot.loc[strainSamples, 'PC2'] ax.scatter(xData, yData, c=drawColors[strainSamples], s=40, marker=markerType, zorder=3) strainLegends.append(Line2D([0], [0], marker=markerType, color='k', markersize=10, linewidth=0, label=strain)) ax.legend(handles=strainLegends) ``` This of course assumes that `strains` and `markerTypes` are of the same length and the markers are in the same position in the list as the strain value you want to assign them.
I found that this issue was caused by a deduplication step I had in `strains`: `set()` does not preserve order, so the strain order (and thus the marker assignment) could change between runs.

```
# wrong code:
strains = list(set([idx.split('_')[0] for idx in pca2Plot.index]))

# correct code:
strains = list(OrderedDict.fromkeys([idx.split('_')[0] for idx in pca2Plot.index]))
```

Thus the question I asked was not a valid question. Thanks and sorry to everyone who looked into this.
2,651
56,937,573
How do they run these python commands in python console within their django project. Here is [example](https://docs.djangoproject.com/en/2.2/intro/overview/#enjoy-the-free-api). I'm using Windows 10, PyCharm and python 3.7. I know how to run the project. But when I run the project, - console opens, which gives regular input/output for the project running. When I open python console - I can run commands, so that they execute immidiately, but how do I run python console, so that I can type some commands and they would execute immediately, but that would happen within some project? Example from [here](https://docs.djangoproject.com/en/2.2/intro/overview/#enjoy-the-free-api): ``` # Import the models we created from our "news" app >>> from news.models import Article, Reporter # No reporters are in the system yet. >>> Reporter.objects.all() <QuerySet []> # Create a new Reporter. >>> r = Reporter(full_name='John Smith') # Save the object into the database. You have to call save() explicitly. >>> r.save() # Now it has an ID. >>> r.id 1 ```
2019/07/08
[ "https://Stackoverflow.com/questions/56937573", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7402089/" ]
When you run the project you're using a management command: `python manage.py runserver`. To enter a console that has access to all your Django apps, the ORM, etc., use another management command: `python manage.py shell`. That will allow you to import models as shown in your example. As an additional tip, consider installing the [Django extensions](https://github.com/django-extensions/django-extensions) package, which includes a management command `shell_plus`. It's helpful, especially (but not only) in development, as it imports all your models, along with some other handy tools.
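For instance (borrowing the `news` app from the question's example; the output will differ in your project):

```
$ python manage.py shell
>>> from news.models import Reporter
>>> Reporter.objects.count()
0
```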
Django has a [Shell](https://docs.djangoproject.com/en/2.2/ref/django-admin/#shell) management command that allows you to open a Python shell with all the Django stuff bootstrapped and ready to be executed. So by using `./manage.py shell` you will get an interactive python shell where you can write code.
2,652
59,530,439
I am trying to [`save`](https://code.kx.com/q/ref/save/) a [matrix](https://code.kx.com/q4m3/3_Lists/#3112-formal-definition-of-matrices) to file in binary format in KDB as per below: ``` matrix: (til 10)*/:til 10; save matrix; ``` However, I get the error `'type`. I guess `save` only works with tables? In which case does anyone know of a workaround? Finally, I would like to read the matrix from the binary file into Python with [NumPy](https://docs.scipy.org/doc/numpy/reference/generated/numpy.fromfile.html), which I presume is just: ``` import numpy as np matrix = np.fromfile('C:/q/w32/matrix', dtype='f') ``` Is that right? *Note: I'm aware of [KDB-Python libraries](http://www.timestored.com/kdb-guides/python-api), but have been unable to install them thus far.*
2019/12/30
[ "https://Stackoverflow.com/questions/59530439", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1681681/" ]
`save` does work, you just have to reference it by name:

```
save`matrix
```

You can also save using:

```
`:matrix set matrix;
`:matrix 1: matrix;
```

But I don't think you'll be able to read this into Python directly using numpy, as it is stored in kdb format. It could be read into Python using one of the Python-kdb interfaces (e.g. PyQ) or by storing it in a common format such as CSV.
Another option is to save in KDB+ IPC format and then read it into Python with [qPython](https://github.com/exxeleron/qPython) as a Pandas DataFrame.

On the KDB+ side you can save it with:

```
matrix:(til 10)*/:til 10;
`:matrix.ipc 1: -8!matrix;
```

On the Python side you do:

```
from pandas import DataFrame
from qpython.qreader import QReader

with open('matrix.ipc', "rb") as f:
    matrix = DataFrame(QReader(f).read().data)
print(matrix)
```
2,653
3,526,748
Sometimes, when fetching data from the database either through the python shell or through a python script, the python process dies, and one single word is printed to the terminal: `Killed` That's literally all it says. It only happens with certain scripts, but it always happens for those scripts. It consistently happens with this one single query that takes a while to run, and also with a south migration that adds a bunch of rows one-by-one to the database. My initial hunch was that a single transaction was taking too long, so I turned on autocommit for Postgres. Didn't solve the problem. I checked the Postgres logs, and this is the only thing in there: `2010-08-19 22:06:34 UTC LOG: could not receive data from client: Connection reset by peer` `2010-08-19 22:06:34 UTC LOG: unexpected EOF on client connection` I've tried googling, but as you might expect, a one-word error message is tough to google for. I'm using Django 1.2 with Postgres 8.4 on a single Ubuntu 10.4 rackspace cloud VPS, stock config for everything.
2010/08/19
[ "https://Stackoverflow.com/questions/3526748", "https://Stackoverflow.com", "https://Stackoverflow.com/users/836/" ]
The only thing I can think of that will automatically kill a process on Linux like that is the OOM killer. What's in the system logs? If the OOM killer is responsible, `dmesg` (or `/var/log/syslog`) should contain "Out of memory" lines naming the killed process.
If psycopg is being used, the issue is probably that the db connection isn't being closed. As per the psycopg [docs](http://initd.org/psycopg/docs/usage.html) example:

```
# Connect to an existing database
>>> conn = psycopg2.connect("dbname=test user=postgres")

# Open a cursor to perform database operations
>>> cur = conn.cursor()

# Close communication with the database
>>> cur.close()
>>> conn.close()
```

Note that if you do delete the connection (using `dbcon.close()` or by deleting the connection object) you probably need to issue a commit or rollback, depending on what sort of transaction type your connection is working under. See [the close connection docs](http://initd.org/psycopg/docs/connection.html#connection.close) for more details.
2,654
2,876,337
I am currently learning PHP and want to learn about OOP.

1. I know Python is well-organized and fully OOP, so would learning Python be a wise choice for learning OOP? The thing is, I lean more towards web development than general programming, and I know Python is a general-purpose language, but there is Django.
2. So how should I go about learning Python if I am leaning towards web development? Are there any good books/websites that help me learn Python for web development?
3. Are there any free webhosting companies that allow Python? I have never used Python before, only PHP, and I'm not sure how it works. Is there something like "xampp" for Python?
2010/05/20
[ "https://Stackoverflow.com/questions/2876337", "https://Stackoverflow.com", "https://Stackoverflow.com/users/345690/" ]
As long as you stay within their quota, Google App Engine provides free hosting for Python.

Django is a great framework when you want to do web development with Python. Django also has great documentation, with <http://www.djangobook.com/> and the official Django website.
You could learn using books, but nothing beats a practical, hands-on approach, so make sure you have Python installed on a computer to help you learn.

If you decide to buy a Python book, I strongly suggest you **DO NOT** buy a copy of Vernon Ceder's [Python Book](http://valashiya.wordpress.com/2010/04/22/the-quick-python-book/); it has very bad reviews. I bought a copy and was also disappointed.

If you'd like to join a mailing list, we have a good community at [Python Tutor](http://mail.python.org/mailman/listinfo/tutor). Sign up and post your questions there as well. Good luck!
2,655
55,031,604
So I haven't been doing python for a while and haven't needed to deal with this before so if i'm making some stupid mistake don't go crazy. I have a list that is pulled from an SQLite database with `.fetchall()` on the end and it returns a list of one tuple and inside that tuple are all the results: ``` [('Bob', 'Science Homework Test', 'Science homework is a test about Crude Oil development', 'Science-Chemistry', '2019-03-06', '2019-02-27', None, 0)] ``` I want to get inside this tuple to get the items but if I loop the list it doesn't seem to do anything. I want to do this to `pop()` an item in the list which is `Science-Chemistry` for a HTML select option. I have had a look before but no one seems to have this same problem with only 1 tuple inside the list
2019/03/06
[ "https://Stackoverflow.com/questions/55031604", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6635590/" ]
If there is always going to be exactly one tuple in the returned list, you can unpack it into meaningfully named variables, the number of which should match the number of output columns in your query:

```
(name, test, description, subject, updated, created, flags, score), = cursor.fetchall()
```

Note the comma after the parentheses, which makes it unpack as the first tuple of a sequence.
I suggest working from the outermost element inwards. At the beginning you have a list containing one tuple:

```
>>> result = [('Bob', 'Science Homework Test', 'Science homework is a test about Crude Oil development', 'Science-Chemistry', '2019-03-06', '2019-02-27', None, 0)]
```

To get the tuple, just take the first item of the list:

```
>>> tuple_ = result[0]
('Bob', 'Science Homework Test', 'Science homework is a test about Crude Oil development', 'Science-Chemistry', '2019-03-06', '2019-02-27', None, 0)
```

Then you can loop over it, or index it like a list, to get the items:

```
for item in tuple_:
    # do stuff with the item
    print(item)
```

or

```
item = tuple_[0]
```
2,665
10,971,468
Similar posts such as the following do not answer my question. [Convert a string to integer with decimal in Python](https://stackoverflow.com/questions/1094717/convert-a-string-to-integer-with-decimal-in-python) Consider the following Python code. ``` >>> import decimal >>> s = '23.456' >>> d = decimal.Decimal(s) >>> d Decimal('23.456') # How do I represent this as simply 23.456? >>> d - 1 22 # How do I obtain the output to be 22.456? ``` How do I convert a string to a decimal number, so I am able to perform arithmetic functions on it and obtain an output with the correct precision?
2012/06/10
[ "https://Stackoverflow.com/questions/10971468", "https://Stackoverflow.com", "https://Stackoverflow.com/users/322885/" ]
If you want to stay in `decimal` numbers, the safest approach is to convert everything:

```
>>> s = '23.456'
>>> d = decimal.Decimal(s)
>>> d - decimal.Decimal('1')
Decimal('22.456')
>>> d - decimal.Decimal('1.0')
Decimal('22.456')
```

In Python 2.7, there's an implicit conversion for integers, but not floats:

```
>>> d - 1
Decimal('22.456')
>>> d - 1.0
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for -: 'Decimal' and 'float'
```
Use the built-in float function:

```
>>> d = float('23.456')
>>> d
23.456
>>> d - 1
22.456
```

See the docs here: <http://docs.python.org/library/functions.html#float>
2,667
49,519,789
I want to have a black box in python where * The input is a list A. * There is a random number C for the black box which is randomly selected the first time the black box is called and stays the same for the next times the black box is called. * Based on list A and number C, the output is a list B. I was thinking of defining this black box as a function but the issue is that a function cannot keep the selected number C for next calls. Note that the input and output of the black box are as described above and we cannot have C also as output and use it for next calls. Any suggestion?
2018/03/27
[ "https://Stackoverflow.com/questions/49519789", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9559925/" ]
Make it a class so C will persist:

```
import random

class BlackBox():
    def __init__(self):
        self.C = random.randint(0, 100)
    # etc...
```

*As a side note, using some pretty cool Python functionality...*

You can make objects of this class callable by implementing `__call__()` for your new class:

```
# inside the BlackBox class
def __call__(self, A):
    B = []
    # do something to B with A and self.C
    return B
```

You can then use this in your main code:

```
bb = BlackBox()
A = [1, 2, 3]
B = bb(A)
```
> the issue is that a function cannot keep the selected number C for next calls.

This may be true in other languages, but not so in Python. Functions in Python are objects like any other, so you can store things on them. Here's a minimal example of doing so:

```
import random

def this_function_stores_a_value():
    me = this_function_stores_a_value
    if 'value' not in me.__dict__:
        me.value = random.random()
    return me.value
```

This doesn't directly solve your list problem, but it should point you in the right direction.

---

*Side note:* You can also store persistent data in optional arguments, like

```
def this_function_also_stores_a_value(value = random.random()):
    ...
```

I don't, however, recommend this approach because users can tamper with your values by passing an argument explicitly.
2,677
66,030,433
I am having trouble setting up a GStreamer pipeline to forward a video stream over UDP via OpenCV. I have a laptop, and an AGX Xavier connected to the same network. The idea is to forward the webcam video feed to AGX which will do some OpenCV optical flow estimation on the GPU (in Python), draw flow vectors on the original image and send it back to my laptop. Up until now, I can configure two pipelines. As a minimum example, I have made two bash scripts and a Python script that ideally would function as pass-through over OpenCV's VideoCapture and VideoWriter objects. servevideo.bash: ``` #!/bin/bash gst-launch-1.0 v4l2src device=[device-fd] \ ! video/x-raw, width=800, height=600, framerate=24/1 \ ! jpegenc ! rtpjpegpay ! rtpstreampay \ ! udpsink host=[destination-ip] port=12345 ``` receivevideo.bash: ``` #!/bin/bash gst-launch-1.0 -e udpsrc port=12344 \ ! application/x-rtp-stream,encoding-name=JPEG \ ! rtpstreamdepay ! rtpjpegdepay ! jpegdec \ ! autovideosink ``` If I run these two scripts on either the same computer or on two different computers on the network, it works fine. When I throw my Python script (listed below) in the mix, I start to experience issues. Ideally, I would run the bash scripts on my laptop with the intended setup in mind while running the Python script on my Jetson. I would then expect to see the webcam video feed at my laptop after taking a detour around the Jetson. webcam\_passthrough.py: #!/usr/bin/python3.6 ``` import cv2 video_in = cv2.VideoCapture("udpsrc port=12345 ! application/x-rtp-stream,encoding-name=JPEG ! rtpstreamdepay ! rtpjpegdepay ! jpegdec ! videoconvert ! appsink", cv2.CAP_GSTREAMER) video_out = cv2.VideoWriter("appsrc ! videoconvert ! jpegenc ! rtpjpegpay ! rtpstreampay ! udpsink host=[destination-ip] port=12344", cv2.CAP_GSTREAMER, 0, 24, (800, 600), True) while True: ret, frame = video_in.read() if not ret: break video_out.write(frame) cv2.imshow('Original', frame) key = cv2.waitKey(1) & 0xff if key == 27: break cv2.destroyAllWindows() video_out.release() video_in.release() ``` With the following Python script, I can visualise the frames via `cv2.imshow` received from the pipeline set up by the `servevideo.bash` script. So I think my problem is connected to how I am setting up the VideoWriter `video_out` in OpenCV. I have verified my two bash scripts are working when I am relaying the webcam video feed between those two pipelines created, and I have verified that the `cv2.VideoCapture` receives the frames. I am no expert here, and my GStreamer knowledge is almost non-existent, so there might be several misunderstandings in my minimum example. It would be greatly appreciated if some of you could point out what I am missing here. I will also happily provide more information if something is unclear or missing. **EDIT:** So it seems the intention of my minimum example was not clearly communicated. The three scripts provided as a minimum example serve to relay my webcam video feed from my laptop to the Jetson AGX Xavier who then relays the video-feed back to the laptop. The `servevideo.bash` creates a GStreamer pipeline on the laptop that uses v4l2 to grab frames from the camera and relay it on to a UDP socket. The `webcam_passthrough.py` runs on the Jetson where it "connects" to the UDP socket created by the pipeline running on the laptop. The Python script serves a passthrough which ideally will open a new UDP socket on another port and relay the frames back to the laptop. 
The `receivevideo.bash` creates yet another pipeline on the laptop for receiving the frames that were passed through the Python script at the Jetson. The second pipeline on the laptop is only utilised for visualisation purposes. Ideally, this minimum example shows the "raw" video feed from the camera connected to the laptop.

The two bash scripts are working in isolation, both running locally on the laptop and running `receivevideo.bash` remotely on another computer. The `cv2.VideoCapture` configuration in the Python script also seems to work, as I can visualise the frames (with `cv2.imshow`) received over the UDP socket provided by the `servevideo.bash` script. This works locally and remotely as well. The part that is causing me some headache (I believe) is the configuration of `cv2.VideoWriter`; ideally, that should open a UDP socket which I can "connect" to via my `receivevideo.bash` script. I have tested this locally and remotely, but to no avail.

When I run `receivevideo.bash` to connect to the UDP socket provided by the Python script I get the following output:

```
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
```

This does not seem wrong to me. I have tried to run the different scripts with GST_DEBUG=3, which gave some warnings, but as the pipeline configurations are basically the same in the bash scripts and for the cv2 `VideoCapture` and `VideoWriter`, I do not attach much value to those warnings. As an example, I have included one such warning below:

```
0:00:06.595120595 8962 0x25b8cf0 WARN rtpjpegpay gstrtpjpegpay.c:596:gst_rtp_jpeg_pay_read_sof:<rtpjpegpay0> warning: Invalid component
```

This warning is printed continuously while running the Python script with `GST_DEBUG=3`. Running `receivevideo.bash` with the same debug level gave:

```
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
0:00:00.013911480 9078 0x55be0899de80 FIXME videodecoder gstvideodecoder.c:933:gst_video_decoder_drain_out:<jpegdec0> Sub-class should implement drain()
Setting pipeline to PLAYING ...
New clock: GstSystemClock
```

I hope my intention is clearer now. As I already pointed out, I believe something is wrong with my `cv2.VideoWriter` in the Python script, but I am no expert and GStreamer is far from something that I use every day, so I may have misunderstood something.

**EDIT 2:** I have now tried to split the two pipelines into two separate processes as suggested by @abysslover. I still see the same result, and I still have no clue why that is. My current implementation of the Python script is listed below.

webcam_passthrough.py:

```
#!/usr/bin/python3.6

import signal, cv2
from multiprocessing import Process, Pipe

is_running = True

def signal_handler(sig, frame):
    global is_running
    print("Program was interrupted - terminating ...")
    is_running = False

def produce(pipe):
    global is_running
    video_in = cv2.VideoCapture("udpsrc port=12345 ! application/x-rtp-stream,encoding-name=JPEG ! rtpstreamdepay ! rtpjpegdepay ! jpegdec ! videoconvert ! appsink", cv2.CAP_GSTREAMER)

    while is_running:
        ret, frame = video_in.read()
        if not ret:
            break
        print("Receiving frame ...")
        pipe.send(frame)

    video_in.release()

if __name__ == "__main__":
    consumer_pipe, producer_pipe = Pipe()
    signal.signal(signal.SIGINT, signal_handler)
    producer = Process(target=produce, args=(producer_pipe,))
    video_out = cv2.VideoWriter("appsrc ! videoconvert ! jpegenc ! rtpjpegpay ! rtpstreampay ! udpsink host=[destination-ip] port=12344", cv2.CAP_GSTREAMER, 0, 24, (800, 600), True)
    producer.start()

    while is_running:
        frame = consumer_pipe.recv()
        video_out.write(frame)
        print("Sending frame ...")

    video_out.release()
    producer.join()
```

The pipe that I have created between the two processes provides new frames as expected. When I try to listen to UDP port 12344 with `netcat`, I do not receive anything, which is the same as before. I also have a hard time seeing how separating the pipelines would change much, as I would expect them to already run in different contexts. Still, I could be wrong concerning this assumption.
2021/02/03
[ "https://Stackoverflow.com/questions/66030433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15008550/" ]
you were very close to the solution. The problem lies in the warning you yourself noticed `warning: Invalid component`. The problem is that rtp jpeg payloader gets stuck due to not supporting video format it is getting. Check [this](http://gstreamer-devel.966125.n4.nabble.com/ximagesrc-to-jpegenc-td4669619.html) However I was blind and missed what you wrote and went full debug mode into the problem. So lets just keep the debug how-to for others or for similar problems: 1, First debugging step - check with wireshark if the receiving machine is getting udp packets on port 12344. Nope it does not. 2, Would this work without opencv stuff? Lets check with replacing opencv logic with some random processing - say rotation of video. Also eliminate appsrc/appsink to simplify. Then I used this: `GST_DEBUG=3 gst-launch-1.0 udpsrc port=12345 ! application/x-rtp-stream,encoding-name=JPEG ! rtpstreamdepay ! rtpjpegdepay ! jpegdec ! videoconvert ! rotate angle=0.45 ! videoconvert ! jpegenc ! rtpjpegpay ! rtpstreampay ! queue ! udpsink host=[my ip] port=12344` Hm now I get weird warnings like: ``` 0:00:00.174424533 90722 0x55cb38841060 WARN rtpjpegpay gstrtpjpegpay.c:596:gst_rtp_jpeg_pay_read_sof:<rtpjpegpay0> warning: Invalid component WARNING: from element /GstPipeline:pipeline0/GstRtpJPEGPay:rtpjpegpay0: Invalid component ``` 3, Quick search yielded above mentioned GStreamer forum page. 4, When I added `video/x-raw,format=I420` after videoconvert it started working and my second machine started getting the udp packets. 5, So the solution to your problem is just limit the jpegenc to specific video format that the subsequent rtp payloader can handle: ``` #!/usr/bin/python3 import signal, cv2 from multiprocessing import Process, Pipe is_running = True def signal_handler(sig, frame): global is_running print("Program was interrupted - terminating ...") is_running = False def produce(pipe): global is_running video_in = cv2.VideoCapture("udpsrc port=12345 ! application/x-rtp-stream,encoding-name=JPEG ! rtpstreamdepay ! rtpjpegdepay ! jpegdec ! videoconvert ! appsink", cv2.CAP_GSTREAMER) while is_running: ret, frame = video_in.read() if not ret: break print("Receiving frame ...") pipe.send(frame) video_in.release() if __name__ == "__main__": consumer_pipe, producer_pipe = Pipe() signal.signal(signal.SIGINT, signal_handler) producer = Process(target=produce, args=(producer_pipe,)) # the only edit is here, added video/x-raw capsfilter: <------- video_out = cv2.VideoWriter("appsrc ! videoconvert ! video/x-raw,format=I420 ! jpegenc ! rtpjpegpay ! rtpstreampay ! udpsink host=[receiver ip] port=12344", cv2.CAP_GSTREAMER, 0, 24, (800, 600), True) producer.start() while is_running: frame = consumer_pipe.recv() rr = video_out.write(frame) print("Sending frame ...") print(rr) video_out.release() producer.join() ```
Note: I cannot write a comment due to the low reputation. According to your problem description, it is difficult to understand what your problem is. Simply, you will run two bash scripts (`servevideo.bash` and `receivevideo.bash`) on your laptop, which may receive and send web-cam frames from the laptop (?), while a Python script(`webcam_passthrough.py`) runs on a Jetson AGX Xavier. Your bash scripts work, so I guess you have some problems in the Python script. According to your explanation, you've already got the frames from the gst-launch in the bash scripts and visualized the frames. Thus, what is your real problem? What are you trying to solve using the Python script? The following statement is unclear to me. > > When I throw my Python script (listed below) in the mix, I start to experience issues. > > > How about the following configuration? servevideo.bash: ``` #!/bin/bash gst-launch-1.0 videotestsrc device=[device-fd] \ ! video/x-raw, width=800, height=600, framerate=20/1 \ ! videoscale ! videoconvert ! x264enc tune=zerolatency bitrate=500 speed-preset=superfast ! rtph264pay ! udpsink host=[destination-ip] port=12345 ``` receivevideo.bash ``` #!/bin/bash gst-launch-1.0 -v udpsrc port=12345 caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" \ ! rtph264depay \ ! decodebin \ ! videoconvert \ ! autovideosink ``` Python script: ``` import numpy as np import cv2 from multiprocessing import Process def send_process(): video_in = cv2.VideoCapture("videotestsrc ! video/x-raw,framerate=20/1 ! videoscale ! videoconvert ! appsink", cv2.CAP_GSTREAMER) video_out = cv2.VideoWriter("appsrc ! videoconvert ! x264enc tune=zerolatency bitrate=500 speed-preset=superfast ! rtph264pay ! udpsink host=[destination_ip] port=12345", cv2.CAP_GSTREAMER, 0, 24, (800,600), True) if not video_in.isOpened() or not video_out.isOpened(): print("VideoCapture or VideoWriter not opened") exit(0) while True: ret,frame = video_in.read() if not ret: break video_out.write(frame) cv2.imshow("send_process", frame) if cv2.waitKey(1)&0xFF == ord("q"): break video_in.release() video_out.release() def receive_process(): cap_receive = cv2.VideoCapture('udpsrc port=12345 caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! decodebin ! videoconvert ! appsink', cv2.CAP_GSTREAMER) if not cap_receive.isOpened(): print("VideoCapture not opened") exit(0) while True: ret,frame = cap_receive.read() if not ret: break cv2.imshow('receive_process', frame) if cv2.waitKey(1)&0xFF == ord('q'): break cap_receive.release() if __name__ == '__main__': s = Process(target=send_process) r = Process(target=receive_process) s.start() r.start() s.join() r.join() cv2.destroyAllWindows() ``` I cannot test with codes since I do not have your configuration. I think that the receiver and sender needs to be forked into two separate processes using multiprocessing.Process in Python. You may need to adjust some detailed parameters in order to work with these scripts in your configuration. Good luck to you.
2,682
53,055,563
The python `collections.Counter` object keeps track of the counts of objects.

```
>>> from collections import Counter
>>> myC = Counter()
>>> myC.update(["cat"])
>>> myC.update(["cat"])
>>> myC["dogs"] = 8
>>> myC["lizards"] = 0
>>> print(myC)
Counter({'dogs': 8, 'cat': 2, 'lizards': 0})
```

(Note that `update` takes an iterable, so the string is wrapped in a list here - `myC.update("cat")` would count the individual characters instead.)

Is there an analogous C++ object where I can easily keep track of the occurrence counts of a type? Maybe a `map` keyed on `string`? Keep in mind that the above is just an example, and in C++ this would generalize to other types to count.
2018/10/30
[ "https://Stackoverflow.com/questions/53055563", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5843327/" ]
You could use an `std::map` like:

```
#include <iostream>
#include <map>
#include <string>

int main() {
    std::map<std::string, int> counter;
    counter["dog"] = 8;
    counter["cat"]++;
    counter["cat"]++;
    counter["1"] = 0;
    for (const auto& pair : counter) {
        std::cout << pair.first << ":" << pair.second << std::endl;
    }
}
```

Output:

```
1:0
cat:2
dog:8
```

Note that `operator[]` value-initializes a missing key's count to `0`, which is what makes `counter["cat"]++` work on first use.
You can use [std::unordered\_map](https://en.cppreference.com/w/cpp/container/unordered_map) if you want constant-on-average lookup complexity (which is what you get with collections.Counter). [std::map](https://en.cppreference.com/w/cpp/container/map) is "usually implemented as red-black trees", so its lookup complexity is logarithmic in the size of the container. (Python's built-in library has no red-black tree implementation.)

```
#include <iostream>
#include <string>
#include <unordered_map>

int main() {
    std::unordered_map<std::string, int> counter;
    counter["dog"] = 8;
    counter["cat"]++;
    counter["cat"]++;
    counter["1"] = 0;
    for (const auto& pair : counter) {
        std::cout << pair.first << ":" << pair.second << std::endl;
    }
}
```
2,683
54,726,459
I'm working through an Exploit Development course on Pluralsight and in the lab I'm currently on we are doing a basic function pointer overwrite. The python script for the lab essentially runs the target executable with a 24 byte string input ending with the memory address of the "jackpot" function. Here's the code: ``` #!/usr/bin/python import sys import subprocess import struct # 20+4+8+4=36 would overwrite 'r', but we only want to hit the func ptr jackpot = 0x401591 # we only take 3 of the 4 bytes because strings cannot have a null, # but will be null terminated terminated to complete the dword address jackpot_packed = struct.pack('L', jackpot)[0:3] arg = "A" * 20 arg += jackpot_packed # or # arg += "\x91\x15\x40" subprocess.call(['functionoverwrite.exe', arg]) ``` The script runs without error and works as expected using python 2.7.8, but with 3.7.2 I get this error: > > Traceback (most recent call last): > File "c:/Users/rossk/Desktop/Pluralsight/Exploit Development/03/demos/lab2/solution/solution.py", line 14, in > arg += jackpot\_packed > TypeError: can only concatenate str (not "bytes") to str > > > So I've tried commenting out the "arg += jackpot\_packed" expression and using the "arg += "\x91\x15\x40" one instead, but apparently that doesn't result in the same string because when I run the script the target executable crashes without calling the jackpot function. I'm looking for a way to fix this program for python 3. How can this code be rewritten so that it works for 3.x?
2019/02/16
[ "https://Stackoverflow.com/questions/54726459", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9919300/" ]
The way this works is by processing the string from the end: each time you look at a character, you check its position in the array (I use a flipped array, as it's more efficient than calling `array_search()` each time). If the character is the last one in the alphabet, set it to the 0th element of the alphabet and carry on to increment the next digit to the left. If there is another letter in the alphabet to increment the current value to, just replace it and stop the loop.

The last bit is that if you have processed every character and the loop was still going, then there is a carry - so add the 0th digit to the start.

```
$characters = ['a', 'b', 'c'];
$string = 'cccc';

$index = array_flip($characters);
$alphabetCount = count($index)-1;
for ( $i = strlen($string)-1; $i >= 0; $i--)    {
    $current = $index[$string[$i]]+1;
    // Carry
    if ( $current > $alphabetCount )    {
        $string[$i] = $characters[0];
    }
    else    {
        // update and exit
        $string[$i] = $characters[$current];
        break;
    }
}
// As reached end of loop - carry
if ( $i == -1 ) {
    $string = $characters[0].$string;
}
echo $string;
```

gives

```
aaaaa
```

with

```
$characters = ['f', 'h', 'z', '@', 's'];
$string = 'ffff@zz';
```

you get

```
ffff@z@
```
I ended up with something like this:

```php
$string = 'ccc';
$alphabet = ['a', 'b', 'c'];

$numbers = array_keys($alphabet);

$numeric = str_replace($alphabet, $numbers, $string);

$base = count($alphabet) + 1;

$decimal = base_convert($numeric, $base, 10);

$string = base_convert(++$decimal, 10, $base);

strlen($decimal) !== strlen($string) and $string = str_replace('0', '1', $string);

echo str_replace($numbers, $alphabet, $string);
```

This one has the advantage of supporting multi-byte characters.
2,689
46,716,912
I am new to Scala. As the title says, I would like to create a mutable map `Map[Int, (Int, Int)]` with the default value of the tuple `(0, 0)` if a key does not exist. In Python, `defaultdict` makes this kind of thing easy. What is an elegant way to do it in Scala?
2017/10/12
[ "https://Stackoverflow.com/questions/46716912", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1269298/" ]
Use `withDefaultValue` after creating the map:

```
import scala.collection.mutable

val map = mutable.Map[Int,(Int, Int)]().withDefaultValue((0, 0))
```

Note that the default is only returned by lookups on missing keys; it is not actually inserted into the map.
you are probably looking for `.getOrElseUpdate`, which takes a key and, if that key is not present, inserts the given value and returns it (the transcripts below assume `import scala.collection.mutable.Map`).

```
scala> val googleMap = Map[Int, (Int, Int)]().empty
googleMap: scala.collection.mutable.Map[Int,(Int, Int)] = Map()

scala> googleMap.getOrElseUpdate(100, (0, 0))
res3: (Int, Int) = (0,0)

scala> googleMap
res4: scala.collection.mutable.Map[Int,(Int, Int)] = Map(100 -> (0,0))
```

You can also pass the `orElse` part implicitly,

```
scala> implicit val defaultValue = (0, 0)
defaultValue: (Int, Int) = (0,0)

scala> googleMap.getOrElseUpdate(100, implicitly)
res8: (Int, Int) = (0,0)

scala> googleMap
res9: scala.collection.mutable.Map[Int,(Int, Int)] = Map(100 -> (0,0))
```
2,690
31,110,801
I am learning recursion in python. I wrote a program but it is not working correctly. I am a beginner in python. I have two functions **scrabbleScore()** and **letterScore()**. **scrabbleScore()** calls **letterScore()** and also itself. Here is my code:

```
def letterScore( let ):
    if let in ['a','e','i','l','n','o','r','s','t','u']:
        return 1
    elif let in ['d','g']:
        return 2
    elif let in ['b','c','m','p']:
        return 3
    elif let in ['f','h','v','w','y']:
        return 4
    elif let=='k':
        return 5
    elif let in ['j','x']:
        return 8
    elif let in ['q','z']:
        return 10
    else:
        return 0

p = 0

def scrabbleScore( S ):
    if S == "":
        return 0
    else:
        global p
        p=p+letterScore(S[0])
        scrabbleScore(S[1:])
        return p

print "scrabbleScore('quetzal'): 25 ==", scrabbleScore('quetzal')
print "scrabbleScore('jonquil'): 23 ==", scrabbleScore('jonquil')
print "scrabbleScore('syzygy'): 25 ==", scrabbleScore('syzygy')
print "scrabbleScore('abcdefghijklmnopqrstuvwxyz'): 87 ==", scrabbleScore('abcdefghijklmnopqrstuvwxyz')
print "scrabbleScore('?!@#$%^&*()'): 0 ==", scrabbleScore('?!@#$%^&*()')
print "scrabbleScore(''): 0 ==", scrabbleScore('')
```
2015/06/29
[ "https://Stackoverflow.com/questions/31110801", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3222665/" ]
I would rewrite your scrabbleScore to something like this

```
def scrabbleScore(S):
    def helper(S, p):
        if S == "":
            return p
        else:
            p += letterScore(S[0])
            return helper(S[1:], p)
    return helper(S, 0)
```

Here `p` is called an accumulator, and it should not be global (in general, global variables should be avoided). Your version fails because the global `p` is never reset between calls, so the scores keep accumulating; this version prints the correct result.
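A quick sanity check of the rewrite against the expected scores listed in the question (no state survives between calls):

```python
print(scrabbleScore('quetzal'))  # 25
print(scrabbleScore('jonquil'))  # 23 -- still correct on the second call
print(scrabbleScore(''))         # 0
```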
Your variable `p` should be re-initialized to zero before each call to `scrabbleScore(S)`. This will solve your problem. Example:

```
print "scrabbleScore('quetzal'): 25 ==", scrabbleScore('quetzal')
p=0
print "scrabbleScore('jonquil'): 23 ==", scrabbleScore('jonquil')
```
2,693
1,826,824
On my ubuntu server I run the following command: ``` python -c 'import os; os.kill(5555, 0)' ``` This is done so that I can see if pid 5555 is running. From my understanding this should raise an OSError if the pid is not running. This is not raising an OSError for me which means it should be a running process. However when I run: ``` ps aux | grep 5555 ``` I see no process running with that pid. This also happens on several other pids in that general range, but it does not happen with say 555 or 55555. Does anyone have any insight as to why os.kill would not raise an OSError like it is expected to? Note: this is running under python 2.5.1.
2009/12/01
[ "https://Stackoverflow.com/questions/1826824", "https://Stackoverflow.com", "https://Stackoverflow.com/users/205114/" ]
Try installing htop (`sudo apt-get install htop`); it sometimes displays processes that ps doesn't.
I don't know why that OSError is not raised in some cases, but it's important to note that there is a maximum pid value on Linux and other Unix-based OSes:

```
$> cat /proc/sys/kernel/pid_max
32768
```
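As a minimal sketch of using that ceiling together with the signal-0 probe (Linux-only, since it reads the `/proc` path shown above; `pid_alive` is a hypothetical helper name):

```python
import errno
import os

# Linux-only: read the kernel's pid ceiling
with open('/proc/sys/kernel/pid_max') as f:
    pid_max = int(f.read())

def pid_alive(pid):
    """Probe a pid with signal 0; no signal is actually delivered."""
    if pid > pid_max:
        return False  # no process can ever be assigned this pid
    try:
        os.kill(pid, 0)
        return True
    except OSError as e:
        # EPERM means the process exists but belongs to another user
        return e.errno == errno.EPERM

print(pid_alive(5555))
```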
2,695
6,091,688
I have a Tkinter program and running it like: `python myWindow.py` starts it all right, but the window is behind the terminal that I use to start it. Is there a way to make it grab the focus and be the foreground application? Does it depend on the platform?
2011/05/23
[ "https://Stackoverflow.com/questions/6091688", "https://Stackoverflow.com", "https://Stackoverflow.com/users/175461/" ]
This might be a feature of your particular window manager. One thing to try is for your app to call `focus_force` at startup, after all the widgets have been created.
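For instance, a minimal sketch of that suggestion (written against Python 2's `Tkinter` to match the question's vintage; the extra `lift()` call is my own addition, not something `focus_force` requires):

```python
import Tkinter as tk  # 'import tkinter as tk' on Python 3

root = tk.Tk()
# ... build all widgets here first ...
root.lift()         # raise the window above the terminal
root.focus_force()  # then grab keyboard focus
root.mainloop()
```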
Have you tried this at the end of your script?

```
root.iconify()
root.update()
root.deiconify()
root.mainloop()
```
2,700
68,230,917
This is what I did; the code is down below. I have the music.csv dataset. The error is `Found input variables with inconsistent numbers of samples: [4, 1]`. The error details are after the code.

```py
# importing Data
import pandas as pd
music_data = pd.read_csv('music.csv')
music_data

# split into training and testing- nothing to clean
# genre = predictions
# Inputs are age and gender and output is genre
# method=drop
X = music_data.drop(columns=['genre'])
# has everything but genre
# X= INPUT
Y = music_data['genre']
# only genre
# Y=OUTPUT

# now select algorithm
from sklearn.tree import DecisionTreeClassifier
model = DecisionTreeClassifier()
# model
model.fit(X, Y)
prediction = model.predict([[21, 1]])

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2)
# 20% of data=testing
# first two input other output
model.fit(X_train, y_train)
from sklearn.metrics import accuracy_score
score = accuracy_score(y_test, predictions)
```

Then this error comes. It is a ValueError:

```
ValueError                                Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_28312/3992581865.py in <module>
    5 model.fit(X_train, y_train)
    6 from sklearn.metrics import accuracy_score
----> 7 score = accuracy_score(y_test, predictions)

c:\users\shrey\appdata\local\programs\python\python39\lib\site-packages\sklearn\utils\validation.py in inner_f(*args, **kwargs)
    61             extra_args = len(args) - len(all_args)
    62             if extra_args <= 0:
---> 63                 return f(*args, **kwargs)
    64 
    65         # extra_args > 0

c:\users\shrey\appdata\local\programs\python\python39\lib\site-packages\sklearn\metrics\_classification.py in accuracy_score(y_true, y_pred, normalize, sample_weight)
    200 
    201     # Compute accuracy for each possible representation
--> 202     y_type, y_true, y_pred = _check_targets(y_true, y_pred)
    203     check_consistent_length(y_true, y_pred, sample_weight)
    204     if y_type.startswith('multilabel'):

c:\users\shrey\appdata\local\programs\python\python39\lib\site-packages\sklearn\metrics\_classification.py in _check_targets(y_true, y_pred)
    81     y_pred : array or indicator matrix
    82     """
---> 83     check_consistent_length(y_true, y_pred)
    84     type_true = type_of_target(y_true)
    85     type_pred = type_of_target(y_pred)

c:\users\shrey\appdata\local\programs\python\python39\lib\site-packages\sklearn\utils\validation.py in check_consistent_length(*arrays)
    317     uniques = np.unique(lengths)
    318     if len(uniques) > 1:
--> 319         raise ValueError("Found input variables with inconsistent numbers of"
    320                          " samples: %r" % [int(l) for l in lengths])
    321 

ValueError: Found input variables with inconsistent numbers of samples: [4, 1]
```

Please help me. I don't know what's happening, but I think it has to do with this line: `score = accuracy_score(y_test, predictions)`.
2021/07/02
[ "https://Stackoverflow.com/questions/68230917", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14672478/" ]
I don't have your tables so I'll demonstrate it on Scott's EMP. ``` SQL> select empno, ename, to_char(hiredate, 'dd.mm.yyyy, dy') hiredate 2 from emp 3 where to_char(hiredate, 'dy', 'nls_date_language = english') in ('sat', 'sun'); EMPNO ENAME HIREDATE ---------- ---------- ------------------------ 7521 WARD 22.02.1981, sun 7934 MILLER 23.01.1982, sat SQL> ``` So, literally fetch rows whose date values fall into sat(urday) and sun(day). --- Your query would then be ``` select client_id, created_date from dba_clientlist where to_char(created_date, 'dy', 'nls_date_language = english') in ('sat', 'sun'); ```
You may use the ISO week as a starting point, which is culture-independent:

```
select * 
from your_table
where trunc(created_date) - trunc(created_date, 'IW') in (5,6)
```

The ISO week starts on Monday.
2,702
32,016,428
I'm getting the following error while running the script. The script gets the SPF records for a list of domains from a file, and I'm not sure what the error means. Can anyone please help me with this issue?

```
#!/usr/bin/python
import sys
import socket
import dns.resolver
import re

def getspf (domain):
    answers = dns.resolver.query(domain, 'TXT')
    for rdata in answers:
        for txt_string in rdata.strings:
            if txt_string.startswith('v=spf1'):
                return txt_string.replace('v=spf1','')

f=open('Input_Domains.txt','r')
a=f.readlines()
domain=a
print domain
x=0
while x<len(domain):
    full_spf=getspf(domain)
    print 'Initial SPF string : ', full_spf
    x=x+1
f.close()
```

Input\_Domains.txt

```
box.com
bhah.com
cnn.com
....
```

Error Message:

```
['box.com\n']
Traceback (most recent call last):
  File "sample.py", line 22, in <module>
    full_spf=getspf(domain)
  File "sample.py", line 10, in getspf
    answers = dns.resolver.query(domain, 'TXT')
  File "/usr/local/lib/python2.7/dist-packages/dns/resolver.py", line 1027, in query
    raise_on_no_answer, source_port)
  File "/usr/local/lib/python2.7/dist-packages/dns/resolver.py", line 817, in query
    if qname.is_absolute():
AttributeError: 'list' object has no attribute 'is_absolute'
```
2015/08/14
[ "https://Stackoverflow.com/questions/32016428", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5093018/" ]
`domain` is a list, not a string. You want to pass *elements* of `domain` to `getspf`, not the entire list.

```
f=open('Input_Domains.txt','r')
a=f.readlines()
domain=a
print domain
x=0
while x<len(domain):
    # domain[x], not domain (and strip the trailing newline left by readlines)
    full_spf=getspf(domain[x].strip())
    print 'Initial SPF string : ', full_spf
    x=x+1
f.close()
```

You also don't need to read the entire file into a list at once; you can iterate over the file one line at a time.

```
with open('Input_Domains.txt','r') as f:
    for line in f:
        full_spf = getspf(line.strip())
        print 'Initial SPF string : ', full_spf
```
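One hedged refinement (the exception classes are part of dnspython's public API, but whether you need this depends on your input file): a small wrapper that keeps one bad domain from aborting the whole loop.

```python
import dns.resolver

def safe_getspf(domain):
    # return None instead of raising when a lookup fails outright
    try:
        return getspf(domain)
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
```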
When you run `getspf(domain)`, `domain` is the whole list of domains in your file. Instead of

```
f=open('Input_Domains.txt','r')
a=f.readlines()
domain=a
print domain
x=0
while x<len(domain):
    full_spf=getspf(domain)
    print 'Initial SPF string : ', full_spf
    x=x+1
f.close()
```

do

```
with open('Input_Domains.txt','r') as domains_file:
    for domain in domains_file:
        full_spf = getspf(domain.strip())
        print 'Initial SPF string : ', full_spf
```
2,705