qid: int64 (46k to 74.7M)
question: string (54 to 37.8k chars)
date: string (10 chars)
metadata: sequence (length 3)
response_j: string (29 to 22k chars)
response_k: string (26 to 13.4k chars)
__index_level_0__: int64 (0 to 17.8k)
18,485,044
It's not under the supported libraries here: <https://developers.google.com/api-client-library/python/reference/supported_apis> Is it just not available with Python? If not, what language is it available for?
2013/08/28
[ "https://Stackoverflow.com/questions/18485044", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2721465/" ]
Andre's answer points you at a correct place to reference the API. Since your question was Python-specific, allow me to show you a basic approach to building your submitted search URL in Python. This example will get you all the way to search content in just a few minutes after you sign up for Google's free API key.

```
import urllib

ACCESS_TOKEN = <Get one of these following the directions on the places page>

def build_URL(search_text='', types_text=''):
    base_url = 'https://maps.googleapis.com/maps/api/place/textsearch/json'  # Can change json to xml to change output type
    key_string = '?key=' + ACCESS_TOKEN  # First thing after the base_url starts with ? instead of &
    query_string = '&query=' + urllib.quote(search_text)
    sensor_string = '&sensor=false'  # Presumably you are not getting location from device GPS
    type_string = ''
    if types_text != '':
        type_string = '&types=' + urllib.quote(types_text)  # More on types: https://developers.google.com/places/documentation/supported_types
    url = base_url + key_string + query_string + sensor_string + type_string
    return url

print(build_URL(search_text='Your search string here'))
```

This code will build and print a URL searching for whatever you put in the last line, replacing "Your search string here". You need to build one of those URLs for each search. In this case I've printed it so you can copy and paste it into your browser address bar, which will give you a return (in the browser) of a JSON text object, the same as you will get when your program submits that URL. I recommend using the Python **requests** library to get that within your program, and you can do that simply by taking the returned URL and doing this:

```
import requests

response = requests.get(url)
```

Next up you need to parse the returned response JSON, which you can do by converting it with the **json** library (look for [json.loads](http://docs.python.org/2/library/json.html) for example).
After running that response through json.loads you will have a nice python dictionary with all your results. You can also paste that return (e.g. from the browser or a saved file) into an [online JSON viewer](http://www.jsoneditoronline.org/) to understand the structure while you write code to access the dictionary that comes out of json.loads. Please feel free to post more questions if part of this isn't clear.
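To illustrate the parsing step, here is a small self-contained sketch using `json.loads` on a trimmed-down sample of the kind of JSON a Places text search returns (field names abbreviated; the real response has many more keys):

```python
import json

# Hypothetical, trimmed-down response body for illustration only
sample = '{"status": "OK", "results": [{"name": "Some Cafe", "types": ["cafe"]}]}'

data = json.loads(sample)                      # plain Python dict
names = [r["name"] for r in data["results"]]   # ["Some Cafe"]
```

Once you have the dictionary, walking `data["results"]` like this is all that's needed to pull out the fields you care about.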
Somebody has written a wrapper for the API: <https://github.com/slimkrazy/python-google-places> Basically it's just HTTP with JSON responses. It's easier to access through JavaScript but it's just as easy to use `urllib` and the `json` library to connect to the API.
17,315
37,659,072
I'm new to Python and I have to sort a voluminous text file by date, with lots of lines like these:

```
CCC!LL!EEEE!EW050034!2016-04-01T04:39:54.000Z!7!1!1!1
CCC!LL!EEEE!GH676589!2016-04-01T04:39:54.000Z!7!1!1!1
CCC!LL!EEEE!IJ6758004!2016-04-01T04:39:54.000Z!7!1!1!1
```

Can someone help me please? Thank you all!
2016/06/06
[ "https://Stackoverflow.com/questions/37659072", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4989650/" ]
Have you considered using the \*nix [`sort`](http://linux.die.net/man/1/sort) program? In raw terms, it'll probably be faster than most Python scripts. Use `-t \!` to specify that columns are separated by a `!` char, `-k n` to specify the field, where `n` is the field number, and `-o outputfile` if you want to output the result to a new file. Example:

```
sort -t \! -k 5 -o sorted.txt input.txt
```

This will sort `input.txt` on its 5th field, and output the result to `sorted.txt`.
I would convert the time in each line to a timestamp and then sort. First split the raw data into a list of lines:

```
rawData = '''CCC!LL!EEEE!EW050034!2016-04-01T04:39:54.000Z!7!1!1!1
CCC!LL!EEEE!GH676589!2016-04-01T04:39:54.000Z!7!1!1!1
CCC!LL!EEEE!IJ6758004!2016-04-01T04:39:54.000Z!7!1!1!1'''
a = rawData.split('\n')

>>> import dateutil.parser, time
>>> sorted(a, key=lambda line: time.mktime(dateutil.parser.parse(line.split('!')[4]).timetuple()))
['CCC!LL!EEEE!EW050034!2016-04-01T04:39:54.000Z!7!1!1!1',
 'CCC!LL!EEEE!GH676589!2016-04-01T04:39:54.000Z!7!1!1!1',
 'CCC!LL!EEEE!IJ6758004!2016-04-01T04:39:54.000Z!7!1!1!1']
```
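It's worth noting that these timestamps are ISO 8601 in a fixed-width layout, so they compare correctly as plain strings and no date parsing is actually needed. A sketch (with one timestamp altered from the sample data so the ordering is visible):

```python
lines = [
    "CCC!LL!EEEE!GH676589!2016-04-01T05:10:00.000Z!7!1!1!1",
    "CCC!LL!EEEE!EW050034!2016-04-01T04:39:54.000Z!7!1!1!1",
]

# Field 4 is the timestamp; ISO 8601 strings sort chronologically as text
lines.sort(key=lambda line: line.split('!')[4])
```

Skipping the parse makes a noticeable difference on a large file, since the key function becomes a single `split` per line.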
17,318
42,620,323
I am trying to parse many files found in a directory, however using multiprocessing slows my program.

```
# Calling my parsing function from Client.
L = getParsedFiles('/home/tony/Lab/slicedFiles') <--- 1000 .txt files found here. combined ~100MB
```

Following this example from the Python documentation:

```
from multiprocessing import Pool

def f(x):
    return x*x

if __name__ == '__main__':
    p = Pool(5)
    print(p.map(f, [1, 2, 3]))
```

I've written this piece of code:

```
from multiprocessing import Pool
from api.ttypes import *
import gc
import os

def _parse(pathToFile):
    myList = []
    with open(pathToFile) as f:
        for line in f:
            s = line.split()
            x, y = [int(v) for v in s]
            obj = CoresetPoint(x, y)
            gc.disable()
            myList.append(obj)
            gc.enable()
    return Points(myList)

def getParsedFiles(pathToFile):
    myList = []
    p = Pool(2)
    for filename in os.listdir(pathToFile):
        if filename.endswith(".txt"):
            myList.append(filename)
    return p.map(_parse, myList)
```

I followed the example, put all the names of the files that end with `.txt` in a list, then created Pools, and mapped them to my function. Then I want to return a list of objects. Each object holds the parsed data of a file. However it amazes me that I got the following results:

```
#Pool 32 ---> ~162(s)
#Pool 16 ---> ~150(s)
#Pool 12 ---> ~142(s)
#Pool 2 ---> ~130(s)
```

**Graph:**

[![enter image description here](https://i.stack.imgur.com/wVsZg.png)](https://i.stack.imgur.com/wVsZg.png)

Machine specification:

```none
62.8 GiB RAM
Intel® Core™ i7-6850K CPU @ 3.60GHz × 12
```

What am I missing here? Thanks in advance!
2017/03/06
[ "https://Stackoverflow.com/questions/42620323", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6530695/" ]
Looks like you're [I/O bound](https://en.wikipedia.org/wiki/I/O_bound):

> In computer science, I/O bound refers to a condition in which the time it takes to complete a computation is determined principally by the period spent waiting for input/output operations to be completed. This is the opposite of a task being CPU bound. This circumstance arises when the rate at which data is requested is slower than the rate it is consumed or, in other words, more time is spent requesting data than processing it.

You probably need to have your main thread do the reading and add the data to the pool when a subprocess becomes available. This will be different from using `map`.

As you are processing a line at a time, and the inputs are split, you can use [**`fileinput`**](https://docs.python.org/2/library/fileinput.html) to iterate over lines of multiple files, and map to a function processing lines instead of files.

Passing one line at a time might be too slow, so we can ask `imap` to ship lines in chunks via `chunksize`, and adjust until we find a sweet spot. Note that `chunksize` only batches how items are sent to the workers; the mapped function still receives one line per call, so we map the single-line parser (the chunk-level helper can assemble a `Points` from a list of lines afterwards):

```
def _parse_coreset_points(lines):
    return Points([_parse_coreset_point(line) for line in lines])

def _parse_coreset_point(line):
    s = line.split()
    x, y = [int(v) for v in s]
    return CoresetPoint(x, y)
```

And our main function:

```
import fileinput

def getParsedFiles(directory):
    pool = Pool(2)
    txts = [filename for filename in os.listdir(directory)
            if filename.endswith(".txt")]
    return pool.imap(_parse_coreset_point, fileinput.input(txts), chunksize=100)
```
In general it is never a good idea to read from the same physical (spinning) hard disk from different threads simultaneously, because every switch causes an extra delay of around 10ms to position the read head of the hard disk (this would be different on an SSD).

As @peter-wood already said, it is better to have one thread reading in the data, and have other threads processing that data.

Also, to really test the difference, I think you should do the test with some bigger files. For example: current hard disks should be able to read around 100MB/sec. So reading the data of a 100kB file in one go would take 1ms, while positioning the read head to the beginning of that file would take 10ms.

On the other hand, looking at your numbers (assuming those are for a single loop) it is hard to believe that being I/O bound is the only problem here. Total data is 100MB, which should take 1 second to read from disk plus some overhead, but your program takes 130 seconds. I don't know if that number is with the files cold on disk, or an average of multiple tests where the data is already cached by the OS (with 62 GiB of RAM all that data should be cached the second time) - it would be interesting to see both numbers.

So there has to be something else. Let's take a closer look at your loop:

```
for line in f:
    s = line.split()
    x, y = [int(v) for v in s]
    obj = CoresetPoint(x, y)
    gc.disable()
    myList.append(obj)
    gc.enable()
```

While I don't know Python, my guess would be that the `gc` calls are the problem here. They are called for every line read from disk. I don't know how expensive those calls are (or what if `gc.enable()` triggers a garbage collection for example) and why they would be needed around `append(obj)` only, but there might be other problems because this is multithreading: Assuming the `gc` object is global (i.e.
not thread local) you could have something like this:

```
thread 1 : gc.disable()
# switch to thread 2
thread 2 : gc.disable()
thread 2 : myList.append(obj)
thread 2 : gc.enable()
# gc now enabled!
# switch back to thread 1 (or one of the other threads)
thread 1 : myList.append(obj)
thread 1 : gc.enable()
```

And if the number of threads <= number of cores, there wouldn't even be any switching; they would all be calling this at the same time.

Also, if the `gc` object is thread safe (it would be worse if it isn't) it would have to do some locking in order to safely alter its internal state, which would force all other threads to wait. For example, `gc.disable()` would look something like this:

```
def disable():
    lock()  # all other threads are blocked for gc calls now
    alter internal data
    unlock()
```

And because `gc.disable()` and `gc.enable()` are called in a tight loop, this will hurt performance when using multiple threads.

So it would be better to remove those calls, or place them at the beginning and end of your program if they are really needed (or only disable `gc` at the beginning; there is no need to re-enable `gc` right before quitting the program).

Depending on the way Python copies or moves objects, it might also be slightly better to use `myList.append(CoresetPoint(x, y))`.

So it would be interesting to test the same on one 100MB file with one thread and without the `gc` calls.

If the processing takes longer than the reading (i.e. not I/O bound), use one thread to read the data in a buffer (should take 1 or 2 seconds on one 100MB file if not already cached), and multiple threads to process the data (but still without those `gc` calls in that tight loop).

You don't have to split the data into multiple files in order to be able to use threads. Just let them process different parts of the same file (even with the 14GB file).
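As a concrete sketch of hoisting the `gc` calls out of the tight loop (simplified parsing, hypothetical names):

```python
import gc

def parse_lines(lines):
    # Disable collection once for the whole batch instead of per append
    gc.disable()
    try:
        points = [tuple(int(v) for v in line.split()) for line in lines]
    finally:
        gc.enable()   # always re-enable, even if a line fails to parse
    return points
```

Pairing `disable()`/`enable()` in a `try`/`finally` keeps the interpreter state sane if parsing raises, and reduces the per-line overhead to zero.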
17,320
56,465,109
I am looking for an example of using python multiprocessing (i.e. a process-pool/threadpool, job queue etc.) with hylang.
2019/06/05
[ "https://Stackoverflow.com/questions/56465109", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7740698/" ]
The first example from the [`multiprocessing`](https://docs.python.org/3/library/multiprocessing.html) documentation can be literally translated to Hy like so:

```
(import multiprocessing [Pool])

(defn f [x]
  (* x x))

(when (= __name__ "__main__")
  (with [p (Pool 5)]
    (print (.map p f [1 2 3]))))
```
Note that a straightforward translation runs into a problem on macOS (which is not officially supported, but mostly works anyway): Hy sets `sys.executable` to the Hy interpreter, and `multiprocessing` relies on that value to start up new processes. You can work around that particular problem by calling `(multiprocessing.set_executable hy.sys_executable)`, but then it will fail to parse the file containing the Hy code itself, which it does again for some reason in the child process. So there doesn't seem to be a good solution for using multiprocessing with Hy running natively on a Mac. Which is why we have Docker, I suppose.
17,323
38,217,594
[Distinguishable objects into distinguishable boxes](https://math.stackexchange.com/questions/468824/distinguishable-objects-into-distinguishable-boxes?rq=1)

My question is very similar to the one posted above, but with a key difference: a bucket can be empty while the other buckets contain all the items, and even that case counts as a separate arrangement. I'm trying to get Python code for this.

For example: consider I have 3 items A,B,C and 3 buckets B1, B2, B3. The table below shows the expected result:

```
B1      B2      B3
(A,B,C) ()      ()
()      (A,B,C) ()
()      ()      (A,B,C)
(A)     (B)     (C)
(A)     (C)     (B)
(B)     (A)     (C)
(B)     (C)     (A)
(C)     (B)     (A)
(C)     (A)     (B)
(A,B)   (C)     ()
(A,B)   ()      (C)
(B,C)   (A)     ()
(B,C)   ()      (A)
(A,C)   (B)     ()
(A,C)   ()      (B)
()      (A,B)   (C)
(C)     (A,B)   ()
()      (B,C)   (A)
(A)     (B,C)   ()
()      (A,C)   (B)
(B)     (A,C)   ()
()      (C)     (A,B)
(C)     ()      (A,B)
()      (A)     (B,C)
(A)     ()      (B,C)
()      (B)     (A,C)
(B)     ()      (A,C)

Length is 27.
```

```
>>> def make_sets(items, num_of_baskets=3):
...     pass
>>> make_sets(('A', 'B', 'C', 'D', 'E'), 3)
```

I'm expecting the output of the function to give me these combinations in the form of a list of lists of tuples. I'm saying this again: the number of items is variable, and the number of buckets is variable too.

Please provide Python code for the `make_sets` function. If someone can explain the math combinatorics, I'd greatly appreciate that too. I spent more than 2 days on this problem without reaching a definite solution.
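For what it's worth, here is one way the requested `make_sets` could be sketched: each item independently chooses one of the buckets, which gives `num_of_baskets ** len(items)` assignments (3 ** 3 = 27 for the example table above):

```python
from itertools import product

def make_sets(items, num_of_baskets=3):
    results = []
    # Each assignment picks a basket index for every item independently
    for assignment in product(range(num_of_baskets), repeat=len(items)):
        baskets = [() for _ in range(num_of_baskets)]
        for item, b in zip(items, assignment):
            baskets[b] += (item,)
        results.append(baskets)
    return results

combos = make_sets(('A', 'B', 'C'), 3)   # 27 combinations, as in the table
```

The combinatorics: since both items and buckets are distinguishable and order within a bucket doesn't matter, an arrangement is exactly a function from items to buckets, hence the count k^n for n items and k buckets (243 for 5 items and 3 buckets).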
2016/07/06
[ "https://Stackoverflow.com/questions/38217594", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6055596/" ]
I think there is no way to combine more than one language in one editor. Please refer to this link: <https://www.tinymce.com/docs/configure/localization/#language> TinyMCE is made for simplicity and ease of use. If you want to have more than one language that points to one ID, please play around with your database design.
Actually, now you can add languages in TinyMCE by downloading different language packages and integrating them with your editor: <https://www.tiny.cloud/docs/configure/localization/> Here you will find the list of available language packages and how to use them.
17,324
50,311,713
Hello, I'm trying to make a Python script to loop text and toggle through it. I'm able to get Python to toggle through the text once, but what I can't get it to do is to keep toggling. After it toggles through the text once I get this message:

```
Traceback (most recent call last):
  File "test.py", line 24, in <module>
    hello()
  File "test.py", line 22, in hello
    hello()
TypeError: 'str' object is not callable
```

```
import time, sys, os
from colorama import init
from termcolor import colored

def hello():
    os.system('cls')
    init()
    hello = '''Hello!'''
    print(colored(hello, 'green',))
    time.sleep(1)
    os.system('cls')
    print(colored(hello, 'blue',))
    time.sleep(1)
    os.system('cls')
    print(colored(hello, 'yellow',))
    time.sleep(1)
    os.system('cls')
    hello()

hello()
```
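The traceback comes from name shadowing: inside `hello()` the assignment `hello = '''Hello!'''` rebinds the name to a string, so the recursive call at the end of the function tries to call that string. A minimal reproduction and fix (colorama omitted; names hypothetical):

```python
def hello():
    hello = "Hello!"      # shadows the function name inside its own body
    try:
        hello()           # this now calls the string -> TypeError
    except TypeError as err:
        return str(err)

error_message = hello()   # "'str' object is not callable"

def greet():
    greeting = "Hello!"   # distinct name: no shadowing
    return greeting

# Loop from the caller instead of recursing inside the function
messages = [greet() for _ in range(3)]
```

Using a loop rather than unbounded recursion also avoids eventually hitting Python's recursion limit.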
2018/05/13
[ "https://Stackoverflow.com/questions/50311713", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9394080/" ]
> > Is this not redundant?? > > > Maybe it is redundant for instance methods and constructors. It isn't redundant for static methods or class initialization pseudo-methods. --- It is also possible that the (supposedly) redundant reference gets optimized away by the JIT compiler. (Or maybe it isn't optimized away ... because they have concluded that the redundancy leads to faster execution *on average*.) Or maybe the actual implementation of the JVM1 is just different. Bear in mind that the JVM spec is describing an idealized stack frame. The actual implementation may be different ... provided that it *behaves* the way that the spec says it should. --- On @EJP's point on normativeness, the only normative references for Java are the JLS and JVM specifications, and the Javadoc for the class library. You can also consult the source code of the JVM itself. The specifications say what *should* happen, and the code (in a sense) says what *does* happen. An article you might find in a published paper or a web article is not normative, and may well be incorrect or out of date. --- 1 - The actual implementation may vary from one version to the next, or between vendors. Furthermore, I have heard of a JVM implementation where a bytecode rewriter transformed from standard bytecodes to another abstract machine language at class load time. It wasn't a great idea from a performance perspective ... but it was certainly within the spirit of the JVM spec.
> > The stack frame will contain the "current class constant pool reference" and also it will have the reference to the object in heap which in turn will also point to the class data. Is this not redundant?? > > > You missed the precondition of that statement, or you misquoted it, or it was just plainly wrong where you saw it. The "reference to the object in heap" is only added for non-static method, and it refers to the hidden `this` parameter. As it says in section "[Local Variables Array](http://blog.jamesdbloom.com/JVMInternals.html#local_variables_array)": > > The array of local variables contains all the variables used during the execution of the method, including a reference to `this`, all method parameters and other locally defined variables. For class methods (i.e. static methods) the method parameters start from zero, however, **for instance method the zero slot is reserved for `this`**. > > > So, for static methods, there is no redundancy. Could the constant pool reference be eliminated when `this` is present? Yes, but then there would need to be a different way to locate the constant pool reference, requiring different bytecode instructions, so that would be a different kind of redundancy. Always having the constant pool reference available in a well-known location in the stack frame, simplifies the bytecode logic.
17,325
69,416,562
I have this simple csv:

```
date,count
2020-07-09,144.0
2020-07-10,143.5
2020-07-12,145.5
2020-07-13,144.5
2020-07-14,146.0
2020-07-20,145.5
2020-07-21,146.0
2020-07-24,145.5
2020-07-28,143.0
2020-08-05,146.0
2020-08-10,147.0
2020-08-11,147.5
2020-08-14,146.5
2020-09-01,143.5
2020-09-02,143.0
2020-09-09,144.5
2020-09-10,143.5
2020-09-25,144.0
2021-09-21,132.4
2021-09-23,131.2
2021-09-25,131.0
2021-09-26,130.8
2021-09-27,130.6
2021-09-28,128.4
2021-09-30,126.8
2021-10-02,126.2
```

If I copy it into Excel and scatter plot it, it looks like this: [![excel](https://i.stack.imgur.com/ZNrCN.png)](https://i.stack.imgur.com/ZNrCN.png) This is correct; there should be a big gap in the middle (look carefully at the data; it jumps from 2020 to 2021). However if I do this in Python:

```
import matplotlib.pyplot as plt
import pandas as pd

data = pd.read_csv('data.csv')
data.plot.scatter('date', 'count')
plt.show()
```

It looks like this: [![matplotlib](https://i.stack.imgur.com/e872e.png)](https://i.stack.imgur.com/e872e.png) It evenly spaces them and the gap is gone. How do I stop that behavior? I tried to do

```
plt.xticks = data.date
```

But that didn't do anything different.
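The even spacing happens because the `date` column is read as plain strings, so the scatter treats each date as a category. Parsing the column as datetimes gives a real time axis; a sketch with a trimmed-down copy of the data:

```python
import pandas as pd
from io import StringIO

csv_text = "date,count\n2020-09-25,144.0\n2021-09-21,132.4\n2021-10-02,126.2\n"

# parse_dates turns the strings into datetime64 values, so
# data.plot.scatter('date', 'count') would place points on a time axis,
# preserving the 2020 -> 2021 gap.
data = pd.read_csv(StringIO(csv_text), parse_dates=['date'])
gap_days = (data['date'].iloc[1] - data['date'].iloc[0]).days   # 361
```

With the real file this is just `pd.read_csv('data.csv', parse_dates=['date'])` and the rest of the plotting code unchanged.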
2021/10/02
[ "https://Stackoverflow.com/questions/69416562", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7503046/" ]
I did some research and found this: [how to close server on ctrl+c when in no-daemon](https://github.com/Unitech/pm2/issues/2833#issuecomment-298560152)

```sh
pm2 kill && pm2 start ecosystem.json --only dev --no-daemon
```

It works if you run pm2 alone, but you are running 2 programs together, so give the script below a try:

```json
{
  "scripts": {
    "dev": "yarn pm2:del && yarn pm2:dev && yarn wp:dev && yarn pm2:del"
  }
}
```

**How does it work?**

* first, kill all pm2 daemons
* start a pm2 daemon
* start webpack
* finally, kill all pm2 daemons again; this will run when you press `CTRL + C`
I've created a `dev.sh` script:

```
#!/bin/bash
yarn pm2:del
yarn pm2:dev
yarn wp:dev
yarn pm2:del
```

And run it using `yarn dev`:

```
"scripts": {
  "dev": "sh ./scripts/dev.sh",
  "pm2:dev": "pm2 start ecosystem.config.js --only dev",
  "pm2:del": "pm2 delete all || exit 0",
  "wp:dev": "webpack --mode=development --watch"
}
```
17,328
61,819,993
I'm trying to run a Python script from a (Windows/C#) background process. I'm successfully getting python.exe to run with the script file, but it's erroring out on the first line, `import pandas as pd`. The exact error I'm getting from stderr is:

```
Traceback (most recent call last):
  File "predictX.py", line 1, in <module>
    import pandas as pd
ModuleNotFoundError: No module named 'pandas'
```

When I run the script from an Anaconda prompt, it runs fine. I copied the "Path" environment variable from the Anaconda prompt and replicated that in my background process. Might there be any other environment variables it's looking for? Any other thoughts? Thanks!! -- Curt
2020/05/15
[ "https://Stackoverflow.com/questions/61819993", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13507069/" ]
You should install pandas on your machine before using it:

```
$ pip install pandas
```

Then it should work fine. If not, try uninstalling and reinstalling it.

[EDIT] Anaconda is a Python distribution that includes many modules that aren't bundled with the original Python installer. That's why the script can run under Anaconda but not with the original Python interpreter.
Pilot error... Apparently there are at least two python.exe files on my computer. I changed the path to reflect the one under the Anaconda folder and everything came right up.
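When two interpreters coexist like this, a quick diagnostic is to have the background process run a script that reports which interpreter it actually launched and whether the module resolves; a sketch:

```python
import sys
import importlib.util

interpreter = sys.executable                  # full path of the running python(.exe)
spec = importlib.util.find_spec("pandas")     # None if pandas can't be imported here
location = spec.origin if spec else "pandas not found for this interpreter"
```

Printing `interpreter` and `location` from the background process immediately shows whether it picked up the Anaconda python.exe or a different one earlier on the Path.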
17,329
14,974,659
Please bear with me as I'm new to Python/Django/Unix in general. I'm learning how to use different `settings.py` files for local and production environments. The following is from the section on the `--settings` option in [the official Django docs page on `django-admin.py`](https://docs.djangoproject.com/en/1.5/ref/django-admin/):

> --settings Example usage:
>
> django-admin.py syncdb --settings=mysite.settings

My project is structured as follows:

```
mysite
├── manage.py
└── mysite
    ├── __init__.py
    ├── local.py
    ├── urls.py
    ├── production.py
    └── wsgi.py
```

However when I run the following command from the parent `mysite` directory:

> $ django-admin.py runserver --settings=mysite.local

I get the following error:

```
File "/Users/testuser/.virtualenvs/djdev/lib/python2.7/site-packages/django/conf/__init__.py", line 95, in __init__
    raise ImportError("Could not import settings '%s' (Is it on sys.path?): %s" % (self.SETTINGS_MODULE, e))
ImportError: Could not import settings 'mysite.local' (Is it on sys.path?): No module named mysite.local
```

From what I gathered in various articles on the web, I think I need to add my project directory path to the `PYTHONPATH` variable in my bash profile. Is this the right way to go?

EDIT: changed the slash to a dot, but the same error persists.
2013/02/20
[ "https://Stackoverflow.com/questions/14974659", "https://Stackoverflow.com", "https://Stackoverflow.com/users/312462/" ]
The `--settings` flag takes a dotted Python path, not a relative path on your filesystem. Meaning `--settings=mysite/local` should actually be `--settings=mysite.local`. If your current working directory is your project root when you run `django-admin`, then you shouldn't have to touch your `PYTHONPATH`.
You have to replace `/` with `.`:

```
$ django-admin.py runserver --settings=mysite.local
```

You can also update `PYTHONPATH` in `manage.py`. Inside `if __name__ == "__main__":` add the following:

```
import sys
sys.path.append(additional_path)
```
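Equivalently, the settings module can be supplied through the `DJANGO_SETTINGS_MODULE` environment variable instead of the `--settings` flag, which is what a generated `manage.py` does; a sketch:

```python
import os

# Same dotted-path convention as --settings=mysite.local
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.local")
```

With the variable set, a plain `django-admin.py runserver` picks up `mysite.local` without any flag.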
17,330
22,429,004
I have multiple forms in an HTML file, which all call the same Python CGI script. For example:

```
<html>
<body>
<form method="POST" name="form1" action="script.cgi" enctype="multipart/form-data">
....
</form>
...
<form method="POST" name="form2" action="script.cgi" enctype="multipart/form-data">
...
</form>
...
</body>
</html>
```

And in my CGI script I do the following:

```
#!/usr/bin/python
import os
import cgi

print "content-type: text/html; charset=utf-8\n\n"
form = cgi.FieldStorage()
...
```

I am unable to get the data from the second form. I have tried to call `FieldStorage` multiple times, but that did not seem to work. So my question is: how do I access different forms in the same CGI script?
2014/03/15
[ "https://Stackoverflow.com/questions/22429004", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2415118/" ]
You cannot. The browser submits one form, or the other, but not both. If you need data from both forms, merge the forms into one `<form>` tag instead.
First, `FieldStorage()` consumes standard input, so it should only be instantiated once. Second, only the data in the submitted form is sent to the server. The other forms may as well not exist. So while you can use the same CGI script to process both forms, if you need to process both forms at the same time, as Martijn suggested, merge the forms into one `<form>`.
17,331
46,511,011
This question has racked my brain. There are 26 underscores representing the English alphabet in sequence. It means that the letters a, b and g should be substituted by the letters k, j and r respectively, while all the other letters are not substituted. How do I do this? How can Python match each underscore to each English letter? I thought I could use `str.replace` to do this, but it's more difficult than I thought. Thanks.
2017/10/01
[ "https://Stackoverflow.com/questions/46511011", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You could use `str.translate`:

```
In [8]: from string import ascii_lowercase

In [9]: text.translate({ord(l): l if g == '_' else g for g, l in zip(guess, ascii_lowercase)})
Out[9]: 'i km jen .'
```

This maps elements of `string.ascii_lowercase` to elements of `guess` (by position). If an element of `guess` is the underscore, the corresponding letter from `ascii_lowercase` is used instead.
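The question doesn't show the actual `text` and `guess` values, so here is a self-contained version of the same idea with hypothetical ones:

```python
from string import ascii_lowercase

text = "i am ben ."
guess = "kj____r" + "_" * 19   # 26 chars: a->k, b->j, g->r, rest unchanged

# Build the translation table positionally; '_' means "keep the letter"
table = {ord(letter): letter if g == '_' else g
         for g, letter in zip(guess, ascii_lowercase)}

result = text.translate(table)   # 'i km jen .'
```

Characters without an entry in the table (spaces, punctuation) pass through `translate` untouched, which is exactly the behavior wanted here.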
If you had a list of the alphabet and the list of underscores, you could enter a for loop and compare the two values at each position, appending the substitute letter to a result list when there is one and the original letter when there isn't.
17,332
73,646,972
I am using the following function to estimate the Gaussian-window rolling average of my timeseries. Though it works great for small averaging windows, it crashes (or gets extremely slow) for larger averaging windows.

```
def norm_factor_Gauss_window(s, dt):
    numer = np.arange(-3*s, 3*s+dt, dt)
    multiplic_fac = np.exp(-(numer)**2/(2*s**2))
    norm_factor = np.sum(multiplic_fac)
    window = len(multiplic_fac)
    return window, multiplic_fac, norm_factor

# Create dataframe for MRE
aa = np.sin(np.linspace(0,2*np.pi,1000000))+0.15*np.random.rand(1000000)
df = pd.DataFrame({'x':aa})

hmany = 10
dt = 1  # ['seconds']
s = hmany*dt  # Define averaging window size ['s']

# Estimate multiplication factor, normalization factor etc.
window, multiplic_fac, norm_factor = norm_factor_Gauss_window(s, dt)

# averaged timeseries
res2 = (1/norm_factor)*df.x.rolling(window, center=True).apply(lambda x: (x * multiplic_fac).sum(), raw=True, engine='numba', engine_kwargs={'nopython': True, 'parallel': True}, args=None, kwargs=None)

# Plot
plt.plot(df.x[0:2000])
plt.plot(res2[0:2000])
```

I am aware that people usually speed up moving-average operations using convolve (e.g., [How to calculate rolling / moving average using python + NumPy / SciPy?](https://stackoverflow.com/questions/14313510/how-to-calculate-rolling-moving-average-using-python-numpy-scipy)). Would it be possible to use convolve here somehow to fix this issue? Also, are there any other suggestions that would help me speed up the operation for large averaging windows?
2022/09/08
[ "https://Stackoverflow.com/questions/73646972", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15353940/" ]
Using the [numba njit decorator](https://numba.pydata.org/numba-doc/latest/user/parallel.html?highlight=njit) on the `norm_factor_Gauss_window` function, on my PC I get a **10x** speedup (from 10µs to 1µs) in the execution time of this function.

```
import numba as nb

@nb.njit(nogil=True)
def norm_factor_Gauss_window(s, dt):
    numer = np.arange(-3*s, 3*s+dt, dt)
    multiplic_fac = np.exp(-(numer)**2/(2*s**2))
    norm_factor = np.sum(multiplic_fac)
    window = len(multiplic_fac)
    return window, multiplic_fac, norm_factor
```

This has little effect on the total execution time, which depends heavily on the rolling mean (about 900 ms on my PC). With some adjustments I was able to get down to 650 ms (**-25%** execution time) by removing the keyword `'parallel'`, since in this case there is nothing that can be parallelized with this approach, as evidenced by the `NumbaPerformanceWarning`. I also removed the other keywords, as they are the default values.

```
df.x.rolling(window, center=True).apply(lambda x: (x * multiplic_fac).sum(), raw=True, engine='numba')
```
I was able to drastically improve the speed of this code using the following:

```
from scipy import signal

def norm_factor_Gauss_window(s, dt):
    numer = np.arange(-3*s, 3*s+dt, dt)
    multiplic_fac = np.exp(-(numer)**2/(2*s**2))
    norm_factor = np.sum(multiplic_fac)
    window = len(multiplic_fac)
    return window, multiplic_fac, norm_factor

# Create dataframe for MRE
aa = np.sin(np.linspace(0,2*np.pi,1000000))+0.15*np.random.rand(1000000)
df = pd.DataFrame({'x':aa})

hmany = 10
dt = 1  # ['seconds']
s = hmany*dt  # Define averaging window size ['s']

# Estimate multiplication factor, normalization factor etc.
window, multiplic_fac, norm_factor = norm_factor_Gauss_window(s, dt)

# averaged timeseries
res2 = (1/norm_factor)*signal.fftconvolve(df.x.values, multiplic_fac[::-1], 'same')

# Plot
plt.plot(df.x[0:2000])
plt.plot(res2[0:2000])
```
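To sanity-check that the FFT convolution reproduces the rolling weighted average, a small comparison at an interior point can be run (the edges differ, since the rolling version has incomplete windows there and the convolution zero-pads):

```python
import numpy as np
from scipy import signal

dt, s = 1, 2
numer = np.arange(-3*s, 3*s + dt, dt)
kernel = np.exp(-numer**2 / (2*s**2))        # symmetric Gaussian, 13 taps
x = np.sin(np.linspace(0, 2*np.pi, 200))

conv = signal.fftconvolve(x, kernel, 'same') / kernel.sum()

# direct weighted average at one interior point
i, half = 100, len(kernel) // 2
direct = (x[i - half:i + half + 1] * kernel).sum() / kernel.sum()
```

Because the Gaussian kernel is symmetric, reversing it (as with `multiplic_fac[::-1]` above) makes no numerical difference, but it keeps the expression a true moving average rather than a correlation.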
17,336
8,765,568
I am trying to make a Windows executable from a Python script that uses matplotlib, and it seems that I am getting a common error:

```
File "run.py", line 29, in <module>
    import matplotlib.pyplot as plt
  File "matplotlib\pyplot.pyc", line 95, in <module>
  File "matplotlib\backends\__init__.pyc", line 25, in pylab_setup
ImportError: No module named backend_tkagg
```

The problem is that I didn't find a solution while googling all over the internet. Here is my `setup.py`:

```
from distutils.core import setup
import matplotlib
import py2exe

matplotlib.use('TkAgg')
setup(data_files=matplotlib.get_py2exe_datafiles(), console=['run.py'])
```
2012/01/06
[ "https://Stackoverflow.com/questions/8765568", "https://Stackoverflow.com", "https://Stackoverflow.com/users/842785/" ]
First, the easy question: is that backend installed? On my Fedora system I had to install it separately from the base matplotlib. At a Python console, can you:

```
>>> import matplotlib.backends.backend_tkagg
```

If that works, then force py2exe to include it. In your config:

```
opts = {
    'py2exe': {
        "includes": ["matplotlib.backends.backend_tkagg"]
    }
}
```
If you are using py2exe, it doesn't handle .egg-formatted Python modules. If you used easy\_install to install the troublesome module, then you might only have the .egg version. See the py2exe site for more info on how to fix it: <http://www.py2exe.org/index.cgi/ExeWithEggs>
17,337
46,006,513
I'm trying to evaluate the accuracy of an algorithm that segments regions in 3D MRI volumes (brain). I've been using Dice, Jaccard, FPR, TNR, precision, etc., but I've only done this pixelwise (i.e., FNs = number of false-negative pixels). Is there a Python package (or pseudocode) out there to do this at the lesion level? For example, calculating TPs as the number of lesions (3D disconnected objects in the ground truth) detected by my algorithm? This way the size of a lesion doesn't have as much of an effect on the accuracy metrics.
2017/09/01
[ "https://Stackoverflow.com/questions/46006513", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7914014/" ]
You could use scipy's [`label`](https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.ndimage.measurements.label.html) to find connected components in an image:

```
from scipy.ndimage.measurements import label

label_pred, numobj_pred = label(my_predictions)
label_true, numobj_true = label(my_groundtruth)
```

And then compare them using the metric of your choice.

PS: Or scikit-image's, with a demo [here](http://www.scipy-lectures.org/packages/scikit-image/auto_examples/plot_labels.html).
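Building on that, here is a minimal sketch of one lesion-level comparison (the overlap rule and the function name are assumptions for illustration, not part of the answer above): a ground-truth lesion counts as a TP if any predicted voxel falls inside it, and a predicted component that touches no ground-truth voxel counts as an FP.

```python
import numpy as np
from scipy.ndimage import label

def lesion_counts(pred, truth):
    # Label connected components in each binary mask
    lab_t, n_t = label(truth)
    lab_p, n_p = label(pred)
    # TP: ground-truth lesions touched by at least one predicted voxel
    tp = sum(1 for i in range(1, n_t + 1) if pred[lab_t == i].any())
    fn = n_t - tp
    # FP: predicted components that touch no ground-truth voxel
    fp = sum(1 for j in range(1, n_p + 1) if not truth[lab_p == j].any())
    return tp, fn, fp
```

Because only membership is checked, a 2-voxel lesion and a 200-voxel lesion weigh the same, which gives the size-independence asked about.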
Here is the code I ended up writing to do this task. Please let me know if anyone sees any errors.

```
import math
import numpy as np
from scipy.ndimage.measurements import label, center_of_mass

def distance(p1, p2, dim):
    if dim == 2:
        return math.sqrt((p2[0] - p1[0])**2 + (p2[1] - p1[1])**2)
    elif dim == 3:
        return math.sqrt((p2[0] - p1[0])**2 + (p2[1] - p1[1])**2 + (p2[2] - p1[2])**2)
    else:
        print 'error'

def closest(true_cntrd, pred_pts, dim):
    dist_list = []
    for pred_pt in pred_pts:
        dist_list.append(distance(true_cntrd, pred_pt, dim))
    min_idx = np.argmin(dist_list)
    return pred_pts[min_idx], min_idx

def eval_disconnected(y_true, y_pred, dim):
    y_pred = y_pred > 0.5
    label_pred, numobj_pred = label(y_pred)
    label_true, numobj_true = label(y_true)
    true_labels, pred_labels = np.arange(numobj_true+1)[1:], np.arange(numobj_pred+1)[1:]
    true_centroids = center_of_mass(y_true, label_true, true_labels)
    pred_centroids = center_of_mass(y_pred, label_pred, pred_labels)

    if len(pred_labels) == 0:
        TP, FN, FP = 0, len(true_centroids), 0
        return TP, FN, FP

    true_lbl_hit_list = []
    pred_lbl_hit_list = []
    for (cntr_true, lbl_t) in zip(true_centroids, np.arange(numobj_true+1)[1:]):
        closest_pred_cntr, idx = closest(cntr_true, pred_centroids, dim)
        closest_pred_cntr = tuple(int(coor) for coor in closest_pred_cntr)
        if label_true[closest_pred_cntr] == lbl_t:
            true_lbl_hit_list.append(lbl_t)
            pred_lbl_hit_list.append(pred_labels[idx])

    pred_lbl_miss_list = [pred_lbl for pred_lbl in pred_labels if not (pred_lbl in pred_lbl_hit_list)]
    true_lbl_miss_list = [true_lbl for true_lbl in true_labels if not (true_lbl in true_lbl_hit_list)]

    TP = len(true_lbl_hit_list)   # all the grd truth labels that were predicted
    FN = len(true_lbl_miss_list)  # all the grd trth labels that were missed
    FP = len(pred_lbl_miss_list)  # all of the predicted labels that didn't hit
    return TP, FN, FP
```
17,342
67,959,301
I want to print the time exactly every minute.

```
import time
from datetime import datetime

while True:
    time.sleep(1)
    now = datetime.now()
    current_datetime = now.strftime("%d-%m-%Y %H:%M:%S")
    if current_datetime == today.strftime("%d-%m-%Y") + "09:15:00":
        sec = 60
        time.sleep(sec)
        print("time : ", current_datetime)
```

I am trying to achieve these steps.

1. Start running the code at or before 09 am.
2. Check if it is exactly 09.15 am today.
3. Print the time.
4. Run after exactly 1 min and print the time.

Output:

```
'2021-06-14 09:15:00+05:30'
'2021-06-14 09:16:00+05:30'
'2021-06-14 09:17:00+05:30'
'2021-06-14 09:18:00+05:30'
'2021-06-14 09:19:00+05:30'
'2021-06-14 09:20:00+05:30'
```

and so on till '2021-06-14 14:30:00+05:30'. What is the best pythonic way to do this?
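The steps above can be sketched as follows (a hedged sketch: the function names are illustrative, and the loop aligns to whole minutes instead of sleeping a fixed 60 s, so processing time does not accumulate drift):

```python
import time
from datetime import datetime, timedelta

def next_minute(now):
    # First whole minute strictly after `now`
    return now.replace(second=0, microsecond=0) + timedelta(minutes=1)

def run_schedule(start, end, action):
    # Call `action` once per minute from `start` to `end` inclusive,
    # sleeping only until each scheduled tick.
    tick = start
    while tick <= end:
        delay = (tick - datetime.now()).total_seconds()
        if delay > 0:
            time.sleep(delay)
        action(tick)
        tick += timedelta(minutes=1)

# Example (not run here): print every minute between 09:15 and 14:30
# run_schedule(datetime(2021, 6, 14, 9, 15), datetime(2021, 6, 14, 14, 30),
#              lambda t: print("time :", t))
```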
2021/06/13
[ "https://Stackoverflow.com/questions/67959301", "https://Stackoverflow.com", "https://Stackoverflow.com/users/778942/" ]
No. They are different things.

Auto-incremented columns in MySQL are not guaranteed to be gapless. Gaps can occur for multiple reasons. The most common are:

* Concurrent transactions.
* Deletion.

It sounds like you have a unique identifier in Java which is either redundant or an item of data. If the latter, then add it as an additional column.

More likely, though, you might want to reconsider your design, so there is only one auto-incremented value for a given record. I would recommend using the one in the database, because that would apply regardless of how inserts are made into the database.
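A minimal sketch of that recommendation (SQLite stands in for MySQL here, and the table name is illustrative): let the database assign the id and read it back after the insert, instead of generating a second identifier in application code.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE companies ("
    " id INTEGER PRIMARY KEY AUTOINCREMENT,"
    " name TEXT)"
)

# The database assigns the id; the application just reads it back
cur = conn.execute("INSERT INTO companies (name) VALUES (?)", ("Acme",))
first_id = cur.lastrowid
cur = conn.execute("INSERT INTO companies (name) VALUES (?)", ("Globex",))
second_id = cur.lastrowid
```

With MySQL the same idea is an `AUTO_INCREMENT` column read back via `LAST_INSERT_ID()` (or the connector's `lastrowid`).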
It isn't compulsory to create a unique id field in the database. You can instead change the table like this:

```
CREATE TABLE companies (
  `COMPANYID` int NOT NULL,
  `NAME` varchar(200) DEFAULT NULL,
  `EMAIL` varchar(200) DEFAULT NULL,
  `PASSWORD` varchar(200) DEFAULT NULL,
  PRIMARY KEY (`COMPANYID`)
)
```

Since you are auto-incrementing the same value twice, it will create some problems. Your ID column will be like this:

```
Id
---
2
4
6
8
```
17,343
39,852,963
I have the following list of tuples, already sorted with "sorted" in Python:

```
L = [("1","blaabal"), ("1.2","bbalab"), ("10","ejej"), ("11.1","aaua"),
     ("12.1","ehjej"), ("12.2 (c)", "ekeke"), ("12.2 (d)", "qwerty"),
     ("2.1","baala"), ("3","yuio"), ("4","poku"), ("5.2","qsdfg")]
```

My problem is, as you can notice, that at first it is good, but after "12.2 (d)" the list restarts at "2.1". I don't know how to solve this problem. Thanks
2016/10/04
[ "https://Stackoverflow.com/questions/39852963", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6726377/" ]
Since the first element in each tuple is a string, Python is performing lexicographic sorting, in which all strings that start with `'1'` come before strings that start with a `'2'`. To get the sorting you desire, you'll want to treat the first entry *as a `float`* instead of a string.

We can use `sorted` along with a custom sorting function which converts the first entry to a float prior to sorting. It also keeps the second tuple element to handle the case when you may have non-unique first entries.

```
result = sorted(L, key = lambda x: (float(x[0].split()[0]), x[1]))

# [('1', 'blaabal'), ('1.2', 'bbalab'), ('2.1', 'baala'), ('3', 'yuio'),
#  ('4', 'poku'), ('5.2', 'qsdfg'), ('10', 'ejej'), ('11.1', 'aaua'),
#  ('12.1', 'ehjej'), ('12.2 (c)', 'ekeke'), ('12.2 (d)', 'qwerty')]
```

I had to add in a `x[0].split()[0]` so that we split the first tuple element at the space and only grab the first piece, since some have values such as `'12.2 (d)'` and we only want the `'12.2'`.

If the second part of that first element that we've discarded matters, then you could use a sorting function similar to the following, which breaks that first element into pieces, converts just the first piece to a float, and leaves the rest as strings.

```
def sorter(value):
    parts = value[0].split()
    # Convert the first part to a number and leave all other parts as strings
    parts[0] = float(parts[0])
    return (parts, value[1])

result = sorted(L, key = sorter)
```
The first value of each of your tuples is a string, and these are being sorted in lexicographic order. If you want them to remain strings, sort with:

```
sorted(L, key = lambda x: float(x[0]))
```
17,344
21,699,251
I have a function that calls exec on a **node.js** server. I'm really lost about getting the stdout back. This is the function:

```
function callPythonFile(args) {
  out = null
  var exec = require('child_process').exec,
      child;
  child = exec("../Prácticas/python/Taylor.py 'sin(w)' -10 10 0 10",
    function (error, stdout, stderr) {
      console.log('stderr: ' + stderr)
      if (error !== null)
        console.log('exec error: ' + error);
      out = stdout
    })
  return out
}
```

When I call `console.log(stdout)` inside the callback I actually get an output. But when I try to print it outside the function, it'll always be null. I can't really see how I can get it.
2014/02/11
[ "https://Stackoverflow.com/questions/21699251", "https://Stackoverflow.com", "https://Stackoverflow.com/users/742560/" ]
Because you return from the function before the exec is finished and the callback is executed. Exec in this case is asynchronous, and unfortunately there is no synchronous exec in node.js in the latest version (0.10.x).

There are two ways to do what you are trying to do.

Wait until the exec is done
---------------------------

```
var exec = require('child_process').exec;

function callPythonFile (args, callback) {
  exec("../Prácticas/python/Taylor.py 'sin(w)' -10 10 0 10",
    function (error, stdout, stderr) {
      if (error !== null)
        return callback(error);
      callback(null, stdout);
    });
}

//then you call the function like this:
callPythonFile(args, function (err, out) {
  console.log('output is', out);
});
```

You will see this pattern a lot in node.js: instead of returning something, you have to pass a callback.

Return a ChildProcess object
----------------------------

The exec function returns a [ChildProcess](http://nodejs.org/api/child_process.html#child_process_class_childprocess) object, which is basically an EventEmitter and has two important properties, `stdout` and `stderr`:

```
var exec = require('child_process').exec;

function callPythonFile (args) {
  return exec("../Prácticas/python/Taylor.py 'sin(w)' -10 10 0 10");
}

//then you call the function like this:
var proc = callPythonFile(args)

proc.stdout.on('data', function (data) {
  //do something with data
});

proc.on('error', function (err) {
  //handle the error
});
```

The interesting thing is that stdout and stderr are streams, so you can basically `pipe` to files, http responses, etc., and there are plenty of modules to handle streams. This is an http server that always calls the process and replies with the stdout of the process:

```
var http = require('http');

http.createServer(function (req, res) {
  callPythonFile(args).stdout.pipe(res);
}).listen(8080);
```
Have a look here about the `exec`: [nodejs doc](http://nodejs.org/api/child_process.html#child_process_child_process_exec_command_options_callback). The callback function does not really return anything. So if you want to "return" the output, why don't you just read the stream and return the resulting string ([nodejs doc](http://nodejs.org/api/stream.html#stream_readable_read_size))?
17,346
3,289,330
I have 5 Python CGI pages. I can navigate from one page to another. All pages get their data from the same database table; they just use different queries. The problem is that the application as a whole is slow. Though they connect to the same database, each page creates a new handle every time I visit it, and handles are not shared by the pages. I want to improve performance. Can I do that by setting up sessions for the user? Suggestions/advice are welcome. Thanks
2010/07/20
[ "https://Stackoverflow.com/questions/3289330", "https://Stackoverflow.com", "https://Stackoverflow.com/users/343409/" ]
CGI requires a new interpreter to start up for each request, and then all the resources such as DB connections have to be acquired and released. [fastcgi](http://en.wikipedia.org/wiki/FastCGI) or [wsgi](http://en.wikipedia.org/wiki/Wsgi) improve performance by allowing you to keep the same process running between requests.
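A minimal sketch of why this helps (SQLite stands in for the real database, and the names are illustrative): under WSGI the module below is imported once per worker process, so the connection is opened once and reused by every request, whereas CGI would pay the connection cost on each hit.

```python
import sqlite3

# Opened once at import time, then shared by all requests in this process
_conn = sqlite3.connect(":memory:", check_same_thread=False)
_conn.execute("CREATE TABLE IF NOT EXISTS hits (n INTEGER)")

def application(environ, start_response):
    # Reuses the module-level connection instead of reconnecting per request
    _conn.execute("INSERT INTO hits VALUES (1)")
    (count,) = _conn.execute("SELECT COUNT(*) FROM hits").fetchone()
    body = ("hits so far: %d" % count).encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]
```

Any WSGI server (e.g. `gunicorn module:application`) can serve this callable.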
Django and Pylons are both frameworks that solve this problem quite nicely, namely by abstracting the DB-frontend integration. They are worth considering.
17,347
24,863,576
I have a python script that have \_\_main\_\_ statement and took all values parametric. I want to import and use it in my own script. Actually I can import but don't know how to use it. As you see below, \_\_main\_\_ is a bit complicated and rewriting it will take time because I even don't know what does most of code mean. Want to know is there any way to import and use the code as a function? ``` import os import sys import time import base64 from urllib2 import urlopen from urllib2 import Request from urllib2 import HTTPError from urllib import urlencode from urllib import quote from exceptions import Exception from email.mime.multipart import MIMEMultipart from email.mime.base import MIMEBase from email.mime.application import MIMEApplication from email.encoders import encode_noop from api_util import json2python, python2json class MalformedResponse(Exception): pass class RequestError(Exception): pass class Client(object): default_url = 'http://nova.astrometry.net/api/' def __init__(self, apiurl = default_url): self.session = None self.apiurl = apiurl def get_url(self, service): return self.apiurl + service def send_request(self, service, args={}, file_args=None): ''' service: string args: dict ''' if self.session is not None: args.update({ 'session' : self.session }) print 'Python:', args json = python2json(args) print 'Sending json:', json url = self.get_url(service) print 'Sending to URL:', url # If we're sending a file, format a multipart/form-data if file_args is not None: m1 = MIMEBase('text', 'plain') m1.add_header('Content-disposition', 'form-data; name="request-json"') m1.set_payload(json) m2 = MIMEApplication(file_args[1],'octet-stream',encode_noop) m2.add_header('Content-disposition', 'form-data; name="file"; filename="%s"' % file_args[0]) #msg.add_header('Content-Disposition', 'attachment', # filename='bud.gif') #msg.add_header('Content-Disposition', 'attachment', # filename=('iso-8859-1', '', 'FuSballer.ppt')) mp = MIMEMultipart('form-data', None, 
[m1, m2]) # Makie a custom generator to format it the way we need. from cStringIO import StringIO from email.generator import Generator class MyGenerator(Generator): def __init__(self, fp, root=True): Generator.__init__(self, fp, mangle_from_=False, maxheaderlen=0) self.root = root def _write_headers(self, msg): # We don't want to write the top-level headers; # they go into Request(headers) instead. if self.root: return # We need to use \r\n line-terminator, but Generator # doesn't provide the flexibility to override, so we # have to copy-n-paste-n-modify. for h, v in msg.items(): print >> self._fp, ('%s: %s\r\n' % (h,v)), # A blank line always separates headers from body print >> self._fp, '\r\n', # The _write_multipart method calls "clone" for the # subparts. We hijack that, setting root=False def clone(self, fp): return MyGenerator(fp, root=False) fp = StringIO() g = MyGenerator(fp) g.flatten(mp) data = fp.getvalue() headers = {'Content-type': mp.get('Content-type')} if False: print 'Sending headers:' print ' ', headers print 'Sending data:' print data[:1024].replace('\n', '\\n\n').replace('\r', '\\r') if len(data) > 1024: print '...' 
print data[-256:].replace('\n', '\\n\n').replace('\r', '\\r') print else: # Else send x-www-form-encoded data = {'request-json': json} print 'Sending form data:', data data = urlencode(data) print 'Sending data:', data headers = {} request = Request(url=url, headers=headers, data=data) try: f = urlopen(request) txt = f.read() print 'Got json:', txt result = json2python(txt) print 'Got result:', result stat = result.get('status') print 'Got status:', stat if stat == 'error': errstr = result.get('errormessage', '(none)') raise RequestError('server error message: ' + errstr) return result except HTTPError, e: print 'HTTPError', e txt = e.read() open('err.html', 'wb').write(txt) print 'Wrote error text to err.html' def login(self, apikey): args = { 'apikey' : apikey } result = self.send_request('login', args) sess = result.get('session') print 'Got session:', sess if not sess: raise RequestError('no session in result') self.session = sess def _get_upload_args(self, **kwargs): args = {} for key,default,typ in [('allow_commercial_use', 'd', str), ('allow_modifications', 'd', str), ('publicly_visible', 'y', str), ('scale_units', None, str), ('scale_type', None, str), ('scale_lower', None, float), ('scale_upper', None, float), ('scale_est', None, float), ('scale_err', None, float), ('center_ra', None, float), ('center_dec', None, float), ('radius', None, float), ('downsample_factor', None, int), ('tweak_order', None, int), ('crpix_center', None, bool), # image_width, image_height ]: if key in kwargs: val = kwargs.pop(key) val = typ(val) args.update({key: val}) elif default is not None: args.update({key: default}) print 'Upload args:', args return args def url_upload(self, url, **kwargs): args = dict(url=url) args.update(self._get_upload_args(**kwargs)) result = self.send_request('url_upload', args) return result def upload(self, fn, **kwargs): args = self._get_upload_args(**kwargs) try: f = open(fn, 'rb') result = self.send_request('upload', args, (fn, f.read())) return 
result except IOError: print 'File %s does not exist' % fn raise def submission_images(self, subid): result = self.send_request('submission_images', {'subid':subid}) return result.get('image_ids') def overlay_plot(self, service, outfn, wcsfn, wcsext=0): from astrometry.util import util as anutil wcs = anutil.Tan(wcsfn, wcsext) params = dict(crval1 = wcs.crval[0], crval2 = wcs.crval[1], crpix1 = wcs.crpix[0], crpix2 = wcs.crpix[1], cd11 = wcs.cd[0], cd12 = wcs.cd[1], cd21 = wcs.cd[2], cd22 = wcs.cd[3], imagew = wcs.imagew, imageh = wcs.imageh) result = self.send_request(service, {'wcs':params}) print 'Result status:', result['status'] plotdata = result['plot'] plotdata = base64.b64decode(plotdata) open(outfn, 'wb').write(plotdata) print 'Wrote', outfn def sdss_plot(self, outfn, wcsfn, wcsext=0): return self.overlay_plot('sdss_image_for_wcs', outfn, wcsfn, wcsext) def galex_plot(self, outfn, wcsfn, wcsext=0): return self.overlay_plot('galex_image_for_wcs', outfn, wcsfn, wcsext) def myjobs(self): result = self.send_request('myjobs/') return result['jobs'] def job_status(self, job_id, justdict=False): result = self.send_request('jobs/%s' % job_id) if justdict: return result stat = result.get('status') if stat == 'success': result = self.send_request('jobs/%s/calibration' % job_id) print 'Calibration:', result result = self.send_request('jobs/%s/tags' % job_id) print 'Tags:', result result = self.send_request('jobs/%s/machine_tags' % job_id) print 'Machine Tags:', result result = self.send_request('jobs/%s/objects_in_field' % job_id) print 'Objects in field:', result result = self.send_request('jobs/%s/annotations' % job_id) print 'Annotations:', result result = self.send_request('jobs/%s/info' % job_id) print 'Calibration:', result return stat def sub_status(self, sub_id, justdict=False): result = self.send_request('submissions/%s' % sub_id) if justdict: return result return result.get('status') def jobs_by_tag(self, tag, exact): exact_option = 'exact=yes' if exact 
else '' result = self.send_request( 'jobs_by_tag?query=%s&%s' % (quote(tag.strip()), exact_option), {}, ) return result if __name__ == '__main__': import optparse parser = optparse.OptionParser() parser.add_option('--server', dest='server', default=Client.default_url, help='Set server base URL (eg, %default)') parser.add_option('--apikey', '-k', dest='apikey', help='API key for Astrometry.net web service; if not given will check AN_API_KEY environment variable') parser.add_option('--upload', '-u', dest='upload', help='Upload a file') parser.add_option('--wait', '-w', dest='wait', action='store_true', help='After submitting, monitor job status') parser.add_option('--wcs', dest='wcs', help='Download resulting wcs.fits file, saving to given filename; implies --wait if --urlupload or --upload') parser.add_option('--kmz', dest='kmz', help='Download resulting kmz file, saving to given filename; implies --wait if --urlupload or --upload') parser.add_option('--urlupload', '-U', dest='upload_url', help='Upload a file at specified url') parser.add_option('--scale-units', dest='scale_units', choices=('arcsecperpix', 'arcminwidth', 'degwidth', 'focalmm'), help='Units for scale estimate') #parser.add_option('--scale-type', dest='scale_type', # choices=('ul', 'ev'), help='Scale bounds: lower/upper or estimate/error') parser.add_option('--scale-lower', dest='scale_lower', type=float, help='Scale lower-bound') parser.add_option('--scale-upper', dest='scale_upper', type=float, help='Scale upper-bound') parser.add_option('--scale-est', dest='scale_est', type=float, help='Scale estimate') parser.add_option('--scale-err', dest='scale_err', type=float, help='Scale estimate error (in PERCENT), eg "10" if you estimate can be off by 10%') parser.add_option('--ra', dest='center_ra', type=float, help='RA center') parser.add_option('--dec', dest='center_dec', type=float, help='Dec center') parser.add_option('--radius', dest='radius', type=float, help='Search radius around RA,Dec center') 
parser.add_option('--downsample', dest='downsample_factor', type=int, help='Downsample image by this factor') parser.add_option('--parity', dest='parity', choices=('0','1'), help='Parity (flip) of image') parser.add_option('--tweak-order', dest='tweak_order', type=int, help='SIP distortion order (default: 2)') parser.add_option('--crpix-center', dest='crpix_center', action='store_true', default=None, help='Set reference point to center of image?') parser.add_option('--sdss', dest='sdss_wcs', nargs=2, help='Plot SDSS image for the given WCS file; write plot to given PNG filename') parser.add_option('--galex', dest='galex_wcs', nargs=2, help='Plot GALEX image for the given WCS file; write plot to given PNG filename') parser.add_option('--substatus', '-s', dest='sub_id', help='Get status of a submission') parser.add_option('--jobstatus', '-j', dest='job_id', help='Get status of a job') parser.add_option('--jobs', '-J', dest='myjobs', action='store_true', help='Get all my jobs') parser.add_option('--jobsbyexacttag', '-T', dest='jobs_by_exact_tag', help='Get a list of jobs associated with a given tag--exact match') parser.add_option('--jobsbytag', '-t', dest='jobs_by_tag', help='Get a list of jobs associated with a given tag') parser.add_option( '--private', '-p', dest='public', action='store_const', const='n', default='y', help='Hide this submission from other users') parser.add_option('--allow_mod_sa','-m', dest='allow_mod', action='store_const', const='sa', default='d', help='Select license to allow derivative works of submission, but only if shared under same conditions of original license') parser.add_option('--no_mod','-M', dest='allow_mod', action='store_const', const='n', default='d', help='Select license to disallow derivative works of submission') parser.add_option('--no_commercial','-c', dest='allow_commercial', action='store_const', const='n', default='d', help='Select license to disallow commercial use of submission') opt,args = parser.parse_args() if 
opt.apikey is None: # try the environment opt.apikey = os.environ.get('AN_API_KEY', None) if opt.apikey is None: parser.print_help() print print 'You must either specify --apikey or set AN_API_KEY' sys.exit(-1) args = {} args['apiurl'] = opt.server c = Client(**args) c.login(opt.apikey) if opt.upload or opt.upload_url: if opt.wcs or opt.kmz: opt.wait = True kwargs = dict( allow_commercial_use=opt.allow_commercial, allow_modifications=opt.allow_mod, publicly_visible=opt.public) if opt.scale_lower and opt.scale_upper: kwargs.update(scale_lower=opt.scale_lower, scale_upper=opt.scale_upper, scale_type='ul') elif opt.scale_est and opt.scale_err: kwargs.update(scale_est=opt.scale_est, scale_err=opt.scale_err, scale_type='ev') elif opt.scale_lower or opt.scale_upper: kwargs.update(scale_type='ul') if opt.scale_lower: kwargs.update(scale_lower=opt.scale_lower) if opt.scale_upper: kwargs.update(scale_upper=opt.scale_upper) for key in ['scale_units', 'center_ra', 'center_dec', 'radius', 'downsample_factor', 'tweak_order', 'crpix_center',]: if getattr(opt, key) is not None: kwargs[key] = getattr(opt, key) if opt.parity is not None: kwargs.update(parity=int(opt.parity)) if opt.upload: upres = c.upload(opt.upload, **kwargs) if opt.upload_url: upres = c.url_upload(opt.upload_url, **kwargs) stat = upres['status'] if stat != 'success': print 'Upload failed: status', stat print upres sys.exit(-1) opt.sub_id = upres['subid'] if opt.wait: if opt.job_id is None: if opt.sub_id is None: print "Can't --wait without a submission id or job id!" 
sys.exit(-1) while True: stat = c.sub_status(opt.sub_id, justdict=True) print 'Got status:', stat jobs = stat.get('jobs', []) if len(jobs): for j in jobs: if j is not None: break if j is not None: print 'Selecting job id', j opt.job_id = j break time.sleep(5) success = False while True: stat = c.job_status(opt.job_id, justdict=True) print 'Got job status:', stat if stat.get('status','') in ['success']: success = (stat['status'] == 'success') break time.sleep(5) if success: c.job_status(opt.job_id) # result = c.send_request('jobs/%s/calibration' % opt.job_id) # print 'Calibration:', result # result = c.send_request('jobs/%s/tags' % opt.job_id) # print 'Tags:', result # result = c.send_request('jobs/%s/machine_tags' % opt.job_id) # print 'Machine Tags:', result # result = c.send_request('jobs/%s/objects_in_field' % opt.job_id) # print 'Objects in field:', result #result = c.send_request('jobs/%s/annotations' % opt.job_id) #print 'Annotations:', result retrieveurls = [] if opt.wcs: # We don't need the API for this, just construct URL url = opt.server.replace('/api/', '/wcs_file/%i' % opt.job_id) retrieveurls.append((url, opt.wcs)) if opt.kmz: url = opt.server.replace('/api/', '/kml_file/%i/' % opt.job_id) retrieveurls.append((url, opt.kmz)) for url,fn in retrieveurls: print 'Retrieving file from', url, 'to', fn f = urlopen(url) txt = f.read() w = open(fn, 'wb') w.write(txt) w.close() print 'Wrote to', fn opt.job_id = None opt.sub_id = None if opt.sdss_wcs: (wcsfn, outfn) = opt.sdss_wcs c.sdss_plot(outfn, wcsfn) if opt.galex_wcs: (wcsfn, outfn) = opt.galex_wcs c.galex_plot(outfn, wcsfn) if opt.sub_id: print c.sub_status(opt.sub_id) if opt.job_id: print c.job_status(opt.job_id) #result = c.send_request('jobs/%s/annotations' % opt.job_id) #print 'Annotations:', result if opt.jobs_by_tag: tag = opt.jobs_by_tag print c.jobs_by_tag(tag, None) if opt.jobs_by_exact_tag: tag = opt.jobs_by_exact_tag print c.jobs_by_tag(tag, 'yes') if opt.myjobs: jobs = c.myjobs() print jobs 
#print c.submission_images(1) ```
2014/07/21
[ "https://Stackoverflow.com/questions/24863576", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2681662/" ]
No, there is no clean way to do so. When the module is imported, its code is executed and all global variables are set as attributes of the module object. So if part of the code is not executed at all (it is guarded by the `__main__` condition), there is no clean way to get access to that code.

You can, however, run the code of this module with a substituted `__name__`, but that's very hackish. You should instead refactor this module, move the whole `__main__` part into a function, and call it like this:

```
def main():
    do_everything()

if __name__ == '__main__':
    main()
```

This way consumer apps will be able to run the code without having to run it in a separate process.
From what you're saying, you want to call a function in the script that is importing the module, so try:

```
import __main__

__main__.myfunc()
```
17,348
43,754,065
I want to get the shade value of each circle from an image.

1. I try to detect circles using `HoughCircles`.
2. I get the center of each circle.
3. I put the text (the circle numbers) in each circle.
4. I set the pixel subset to obtain the shading values and calculate the averaged shading values.
5. I want to get the results of circle number, the coordinates of the center, and averaged shading values in CSV format.

But, in the 3rd step, the circle numbers were randomly assigned, so it's hard to find a given circle's number. How can I number circles in a sequence?

[![enter image description here](https://i.stack.imgur.com/w823U.jpg)](https://i.stack.imgur.com/w823U.jpg)

```
# USAGE
# python detect_circles.py --image images/simple.png

# import the necessary packages
import numpy as np
import argparse
import cv2
import csv

# define a function of ROI calculating the average value in specified sample size
def ROI(img, x, y, sample_size):
    Each_circle = img[y-sample_size:y+sample_size, x-sample_size:x+sample_size]
    average_values = np.mean(Each_circle)
    return average_values

# open the csv file named circles_value
circles_values = open('circles_value.csv', 'w')

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required = True, help = "Path to the image")
args = vars(ap.parse_args())

# load the image, clone it for output, and then convert it to grayscale
image = cv2.imread(args["image"])
output = image.copy()
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# detect circles in the image
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1.2, 50, 100, 1, 1, 20, 30)

# ensure at least some circles were found
if circles is not None:
    # convert the (x, y) coordinates and radius of the circles to integers
    circles = np.round(circles[0, :]).astype("int")
    number = 1
    font = cv2.FONT_HERSHEY_SIMPLEX
    # loop over the (x, y) coordinates and radius of the circles
    for (x, y, r) in circles:
        # draw the circle in the output image, then draw a rectangle
        # corresponding to the center of the circle
        number = str(number)
        cv2.circle(output, (x, y), r, (0, 255, 0), 4)
        cv2.rectangle(output, (x - 10, y - 10), (x + 10, y + 10), (0, 128, 255), -1)
        # number each circle, but its result shows irregular pattern
        cv2.putText(output, number, (x, y), font, 0.5, (0, 0, 0), 2, cv2.LINE_AA)
        # get the average value in specified sample size (20 x 20)
        sample_average_value = ROI(output, x, y, 20)
        # write the csv file with number, (x,y), and average pixel value
        circles_values.write(number + ',' + str(x) + ',' + str(y) + ',' + str(sample_average_value) + '\n')
        number = int(number)
        number += 1

# show the output image
cv2.namedWindow("image", cv2.WINDOW_NORMAL)
cv2.imshow("image", output)
cv2.waitKey(0)

# close the csv file
circles_values.close()
```
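One way to make the numbering deterministic (an assumption about the desired order, since the question leaves it open) is to sort the detected `(x, y, r)` triples top-to-bottom, then left-to-right, before the labelling loop. The `row_height` parameter below is illustrative; it groups circles whose centres fall in the same horizontal band.

```python
def sort_circles(circles, row_height=50):
    # Group centres into horizontal bands by y, then order left-to-right by x
    return sorted(circles, key=lambda c: (c[1] // row_height, c[0]))
```

Calling `circles = sort_circles(circles)` right after `np.round(...)` would then make `number` follow reading order.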
2017/05/03
[ "https://Stackoverflow.com/questions/43754065", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7955795/" ]
I can't answer completely, because it depends entirely on what's in `$HashVariable`. The easiest way to tell what's in there is:

```
use Data::Dumper;
print Dumper $HashVariable;
```

Assuming this is a hash *reference* - which it would be, if `print $HashVariable` gives `HASH(0xdeadbeef)` as an output - this *should* work:

```
#!/usr/bin/env perl
use strict;
use warnings;

my $HashVariable = { somekey => 'somevalue' };

foreach my $key ( keys %$HashVariable ) {
    print $key, " => ", $HashVariable->{$key}, "\n";
}
```

The only mistake you're making is that `$HashVariable{$key}` won't work - you need to dereference, because as it stands it refers to `%HashVariable`, not `$HashVariable`, which are two completely different things.

Otherwise - if it's not entering the loop - it may mean that `keys %$HashVariable` isn't returning anything. Which is why that `Dumper` test would be useful: is there any chance you're either not populating it correctly, or you're *writing* to `%HashVariable` instead? E.g.:

```
my %HashVariable;
$HashVariable{'test'} = "foo";
```
There's an obvious problem here, but it wouldn't cause the behaviour that you are seeing.

You think that you have a hash reference in `$HashVariable`, and that sounds correct given the `HASH(0xd1007d0)` output that you see when you print it. But setting up a hash reference and running your code gives slightly strange results:

```
my $HashVariable = {
    foo => 1,
    bar => 2,
    baz => 3,
};

foreach my $var (keys %{$HashVariable}) {
    print "In the loop \n";
    print "$var and $HashVariable{$var}\n";
}
```

The output I get is:

```
In the loop
baz and
In the loop
bar and
In the loop
foo and
```

Notice that the values aren't being printed out. That's because of the problem I mentioned above. Adding `use strict` to the program (which you should always do) tells us what the problem is.

```
Global symbol "%HashVariable" requires explicit package name (did you forget to declare "my %HashVariable"?) at hash line 14.
Execution of hash aborted due to compilation errors.
```

You are using `$HashVariable{$var}` to look up a key in your hash. That would be correct if you had a hash called `%HashVariable`, but you don't - you have a hash reference called `$HashVariable` (note the `$` instead of `%`). To look up a key from a hash reference, you need to use a dereferencing arrow - `$HashVariable->{$var}`.

Fixing that, your program works as expected.

```
use strict;
use warnings;

my $HashVariable = {
    foo => 1,
    bar => 2,
    baz => 3,
};

foreach my $var (keys %{$HashVariable}) {
    print "In the loop \n";
    print "$var and $HashVariable->{$var}\n";
}
```

And I see:

```
In the loop
bar and 2
In the loop
foo and 1
In the loop
baz and 3
```

The only way that you could get the results you describe (the `HASH(0xd1007d0)` output but no iterations of the loop) is if you have a hash reference but the hash has no keys. So (as I said in a comment) we need to see how your hash reference is created.
17,351
37,096,806
I have landed in quite a unique problem. I created the model **1.** 'message', used it for a while, then I changed it to **2.** 'messages', and after that I changed it back to **3.** 'message', but this time with many changes in the model fields. As I got to know afterwards, Django migrations run into some problems when renaming models, and some problems have arisen in my migrations. Although I had run all migrations in the right way, while running the 3rd migration for message I faced a few problems that I fixed manually. Now when I ran a migration for changes in other models, I found that this migration is still dependent on the 2nd migration of messages. However, the fields on account of which it depends on the 2nd migration were actually created in the third migration. The traceback I am getting: ``` ValueError: Lookup failed for model referenced by field activities.Enquiry.message_fk: chat.Message ``` and: ``` Applying contacts.0002_mailsend...Traceback (most recent call last): File "/home/sp/webapps/myenv/lib/python3.4/site-packages/django/apps/config.py", line 163, in get_model return self.models[model_name.lower()] KeyError: 'message' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/sp/webapps/myenv/lib/python3.4/site-packages/django/db/migrations/state.py", line 84, in render model = self.apps.get_model(lookup_model[0], lookup_model[1]) File "/home/sp/webapps/myenv/lib/python3.4/site-packages/django/apps/registry.py", line 202, in get_model return self.get_app_config(app_label).get_model(model_name.lower()) File "/home/sp/webapps/myenv/lib/python3.4/site-packages/django/apps/config.py", line 166, in get_model "App '%s' doesn't have a '%s' model." % (self.label, model_name)) LookupError: App 'chat' doesn't have a 'message' model. ``` What I want to ask is whether I should manually edit the dependencies in the migration file to change it from migration 2 to migration 3 in messages. 
PS: using django 1.7.2
2016/05/08
[ "https://Stackoverflow.com/questions/37096806", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4510252/" ]
Normally, you should not edit them manually. Once you start editing them you can land in cyclic-dependency problems, and if you do not remember what changes you made, your entire migration history will be messed up. What you can do is revert the migrations if you do not have any data to lose. If you are deleting migrations, you should take extra precautions to ensure that no entry remains in the migration table pointing to a migration that no longer exists. (I would suggest not deleting migrations manually, as it can get complicated.) Only if you have analyzed the migration files and have a clear idea of where the problem occurred should you think of editing a migration file, and only if you can handle it. In your case, yes, the problem might have been caused by the renaming: as you say, while running a migration you ran into a problem that you fixed manually, so the process may have been interrupted partway and left things inconsistent. You can change the dependency and run `makemigrations`. If there is a circular dependency it will show up immediately, and then you should revert the change; otherwise, do a little more analysis and remove the cyclic dependency by editing a few more files (keep a backup). If you are lucky, or you understand migrations deeply, you might end up with success.
No, I don't think so; you are better off deleting the migration files created after the last successful migration and running the migration again.
17,352
57,060,964
I am using `sklearn` modules to find the best-fitting models and model parameters. However, I get an unexpected IndexError, shown below: ``` > IndexError Traceback (most recent call > last) <ipython-input-38-ea3f99e30226> in <module> > 22 s = mean_squared_error(y[ts], best_m.predict(X[ts])) > 23 cv[i].append(s) > ---> 24 print(np.mean(cv, 1)) > IndexError: tuple index out of range ``` What I want to do is find the best-fitting regressor and its parameters, but I get the above error. I looked into `SO` and tried [this solution](https://stackoverflow.com/questions/20296188/indexerror-tuple-index-out-of-range-python), but the same error still comes up. Any idea how to fix this bug? Can anyone point out why this error is happening? **my code**: ``` from sklearn.model_selection import KFold from sklearn.metrics import mean_squared_error from sklearn.linear_model import LinearRegression, Ridge, Lasso from xgboost.sklearn import XGBRegressor from sklearn.datasets import make_regression models = [SVR(), RandomForestRegressor(), LinearRegression(), Ridge(), Lasso(), XGBRegressor()] params = [{'C': [0.01, 1]}, {'n_estimators': [10, 20]}] X, y = make_regression(n_samples=10000, n_features=20) with warnings.catch_warnings(): warnings.filterwarnings("ignore") cv = [[] for _ in range(len(models))] fold = KFold(5,shuffle=False) for tr, ts in fold.split(X): for i, (model, param) in enumerate(zip(models, params)): best_m = GridSearchCV(model, param) best_m.fit(X[tr], y[tr]) s = mean_squared_error(y[ts], best_m.predict(X[ts])) cv[i].append(s) print(np.mean(cv, 1)) ``` **desired output**: if there is a way to fix the above error, I expect to pick the best-fitted models with their parameters, then use them for estimation. Any idea how to improve the above attempt? Thanks
2019/07/16
[ "https://Stackoverflow.com/questions/57060964", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7302169/" ]
The root cause of your issue is that, while you ask for the evaluation of 6 models in `GridSearchCV`, you provide parameters only for the first 2 ones: ``` models = [SVR(), RandomForestRegressor(), LinearRegression(), Ridge(), Lasso(), XGBRegressor()] params = [{'C': [0.01, 1]}, {'n_estimators': [10, 20]}] ``` The result of `enumerate(zip(models, params))` in this setting, i.e: ``` for i, (model, param) in enumerate(zip(models, params)): print((model, param)) ``` is ``` (SVR(C=1.0, cache_size=200, coef0=0.0, degree=3, epsilon=0.1, gamma='auto', kernel='rbf', max_iter=-1, shrinking=True, tol=0.001, verbose=False), {'C': [0.01, 1]}) (RandomForestRegressor(bootstrap=True, criterion='mse', max_depth=None, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=1, oob_score=False, random_state=None, verbose=0, warm_start=False), {'n_estimators': [10, 20]}) ``` i.e the last 4 models are simply ignored, so you get empty entries for them in `cv`: ``` print(cv) # result: [[5950.6018771284835, 5987.293514740653, 6055.368320208183, 6099.316091619069, 6146.478702335218], [3625.3243553665975, 3301.3552182952058, 3404.3321983193728, 3521.5160621260898, 3561.254684271113], [], [], [], []] ``` which causes the downstream error when trying to get the `np.mean(cv, 1)`. 
The solution, as already correctly pointed out by Psi in their answer, is to go for empty dictionaries in the models in which you actually **don't** perform any CV search; omitting the `XGBRegressor` (have not installed it), here are the results: ``` models = [SVR(), RandomForestRegressor(), LinearRegression(), Ridge(), Lasso()] params2 = [{'C': [0.01, 1]}, {'n_estimators': [10, 20]}, {}, {}, {}] cv = [[] for _ in range(len(models))] fold = KFold(5,shuffle=False) for tr, ts in fold.split(X): for i, (model, param) in enumerate(zip(models, params2)): best_m = GridSearchCV(model, param) best_m.fit(X[tr], y[tr]) s = mean_squared_error(y[ts], best_m.predict(X[ts])) cv[i].append(s) ``` where `print(cv)` gives: ``` [[4048.660483326826, 3973.984055352062, 3847.7215568088545, 3907.0566348092684, 3820.0517432992765], [1037.9378737329769, 1025.237441119364, 1016.549294695313, 993.7083268195154, 963.8115632611381], [2.2948917095935095e-26, 1.971022007799432e-26, 4.1583774042712844e-26, 2.0229469068846665e-25, 1.9295075684919642e-26], [0.0003350178681602639, 0.0003297411022124562, 0.00030834076832371557, 0.0003355298330301431, 0.00032049282437794516], [10.372789356303688, 10.137748082073076, 10.136028304131141, 10.499159069700834, 9.80779910439471]] ``` and `print(np.mean(cv, 1))` works OK, giving: ``` [3.91949489e+03 1.00744890e+03 6.11665355e-26 3.25824479e-04 1.01907048e+01] ``` So, in your case, you should indeed change `params` to: ``` params = [{'C': [0.01, 1]}, {'n_estimators': [10, 20]}, {}, {}, {}, {}] ``` as already suggested by Psi.
When you define ``` cv = [[] for _ in range(len(models))] ``` it has an empty list for each model. In the loop, however, you go over `enumerate(zip(models, params))` which has only **two** elements, since your `params` list has two elements (because `list(zip(x,y))` [has length](https://docs.python.org/3.3/library/functions.html#zip) equal to `min(len(x),len(y)`). Hence, you get an `IndexError` because some of the lists in `cv` are empty (all but the first two) when you calculate the mean with `np.mean`. **Solution:** If you don't need to use `GridSearchCV` on the remaining models you may just extend the `params` list with empty dictionaries: ``` params = [{'C': [0.01, 1]}, {'n_estimators': [10, 20]}, {}, {}, {}, {}] ```
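The truncation behaviour of `zip` described above is easy to check in isolation. The snippet below is a minimal sketch (the model names are placeholder strings, not real estimators):

```python
# Stand-in model names; only the list lengths matter for this demo
models = ["SVR", "RandomForest", "Linear", "Ridge", "Lasso", "XGB"]
params = [{"C": [0.01, 1]}, {"n_estimators": [10, 20]}]

# zip() stops at the shorter input, so only two (model, param) pairs exist
pairs = list(zip(models, params))
assert len(pairs) == min(len(models), len(params))  # 2, not 6

# Padding params with empty dicts restores one pair per model
params = params + [{}] * (len(models) - len(params))
assert len(list(zip(models, params))) == len(models)
```

Running this confirms why four of the six `cv` sub-lists stay empty in the original loop.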
17,354
31,387,660
How can I use the Kivy framework in the QPython3 (Python 3.2 for Android) app? I know that the QPython (Python 2.7 for Android) app supports this framework. pip_console doesn't install Kivy; I get an error when I try to install it. Please help me.
2015/07/13
[ "https://Stackoverflow.com/questions/31387660", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5111676/" ]
``` Session["email"] = email; ``` This will store the value between response and postback. Let me know if this is what you were looking for.
**TempData** can work for you. Another option is to store it in hidden field and receive it back on POST but you should be aware that "bad users" can modify that (via browser developer tools for example).
17,355
7,391,689
Here is what I can read in the Python subprocess module documentation: ``` Replacing shell pipeline output=`dmesg | grep hda` ==> p1 = Popen(["dmesg"], stdout=PIPE) p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE) p1.stdout.close() # Allow p1 to receive a SIGPIPE if p2 exits. output = p2.communicate()[0] The p1.stdout.close() call after starting the p2 is important in order for p1 to receive a SIGPIPE if p2 exits before p1. ``` I don't really understand why we have to close p1.stdout after having created p2. When exactly is p1.stdout.close() executed? What happens if p2 never ends? What happens if neither p1 nor p2 ends?
2011/09/12
[ "https://Stackoverflow.com/questions/7391689", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
From [Wikipedia](http://en.wikipedia.org/wiki/SIGPIPE), **SIGPIPE** is the signal sent to a process when it attempts to write to a pipe without a process connected to the other end. When you first create `p1` using `stdout=PIPE`, there is one process connected to the pipe, which is your Python process, and you can read the output using `p1.stdout`. When you create `p2` using `stdin=p1.stdout` there are now two processes connected to the pipe `p1.stdout`. Generally when you are running processes in a pipeline you want all processes to end when any of the processes end. For this to happen automatically you need to close `p1.stdout` so `p2.stdin` is the only process attached to that pipe, this way if `p2` ends and `p1` writes additional data to stdout, it will receive a SIGPIPE since there are no longer any processes attached to that pipe.
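A runnable sketch of the same pipeline pattern, substituting portable stand-in commands (`printf` and `grep`) for `dmesg | grep hda`, since `dmesg` may not be available or readable everywhere:

```python
import subprocess
from subprocess import PIPE

# printf/grep are stand-ins for the documentation's dmesg | grep hda
p1 = subprocess.Popen(["printf", "hda ok\nsdb ok\nhda err\n"], stdout=PIPE)
p2 = subprocess.Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE)

# Drop our own handle on the pipe: p2 becomes its only reader, so if
# p2 exits first, further writes by p1 trigger SIGPIPE instead of
# blocking forever on a pipe nobody reads.
p1.stdout.close()

output = p2.communicate()[0]
assert output == b"hda ok\nhda err\n"
```

The `p1.stdout.close()` call runs immediately in the parent Python process; it only closes the parent's copy of the file descriptor, not the one `p2` inherited.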
OK I see. p1.stdout is closed from my python script but remains open in p2, and then p1 and p2 communicate together. Except if p2 is already closed, then p1 receives a SIGPIPE. Am I correct?
17,357
46,517,814
sudo python yantest.py 255,255,0 ``` who = sys.argv[1] print sys.argv[1] print who print 'Number of arguments:', len(sys.argv), 'arguments.' print 'Argument List:', str(sys.argv) yanon(strip, Color(who)) ``` The output from the above is ``` 255,255,0 255,255,0 Number of arguments: 2 arguments. Argument List: ['yantest.py', '255,255,0'] Traceback (most recent call last): File "yantest.py", line 46, in <module> yanon(strip, Color(who)) TypeError: Color() takes at least 3 arguments (1 given) Segmentation fault ``` How do I use the variable "who" inside the Color function? I've tried ('who') and ("who"), neither of which works either.
2017/10/01
[ "https://Stackoverflow.com/questions/46517814", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7509061/" ]
The problem with your implementation is that it does not distinguish original numbers from the squares that you have previously added. First, since you are doing this recursively, you don't need a `for` loop. Each invocation needs to take care of the initial value of the list alone. Next, `add(n)` adds the number at the end, while your example shows adding numbers immediately after the original value. Therefore, you should use `num.add(1, hold)`, and skip two initial numbers when making a recursive call. Here is how the fixed method should look: ``` public static int sumOfSquares(List<Integer> num) { if (num.isEmpty()) { return 0; } // Deal with only the initial element int hold= num.get(0)*num.get(0); // Insert at position 1, right after the squared number num.add(1, hold); // Truncate two initial numbers, the value and its square: return num.get(1) + sumOfSquares(num.subList(2, num.size())); } ``` [Demo.](https://ideone.com/ByylCV)
There are two ways to safely add (or remove) elements to a list while iterating it: 1. Iterate backwards over the list, so that the indexes of the upcoming elements don't shift. 2. Use an [`Iterator`](https://docs.oracle.com/javase/9/docs/api/java/util/Iterator.html) or [`ListIterator`](https://docs.oracle.com/javase/9/docs/api/java/util/ListIterator.html). You can fix your code using either strategy, but I recommend a `ListIterator` for readable code. ``` import java.util.ListIterator; public static void insertSquares(List<Integer> num) { ListIterator<Integer> iter = num.listIterator(); while (iter.hasNext()) { int value = iter.next(); iter.add(value * value); } } ``` Then, move the summing code into a separate method so that the recursion doesn't interfere with the inserting of squares into the list. Your recursive solution will work, but an iterative solution would be more efficient for Java.
17,358
45,765,946
I'm using some objects in Python with dynamic properties, all numbers and strings. I also created a simple method to make a copy of an object. One of the properties is a list, but I don't need it to be deep-copied. The method seems to work fine, but I found an odd problem, which this piece of code shows: ``` #!/usr/bin/env python3 # class used for the example class test(object): def copy(self): retval = test() # just create a new, empty object, and populate it with # my defined properties for element in dir(self): if element.startswith("_"): continue setattr(retval, element, getattr(self, element)) return retval test1 = test() # here I dynamically create an attribute (called "type") in this object setattr(test1, "type", "A TEST VALUE") # this print shows "A TEST VALUE", as expected print(test1.type) # Let's copy test1 as test2 test2 = test1.copy() # this print shows also "A TEST VALUE", as expected print(test2.type) test2.type = "ANOTHER VALUE" # this print shows "ANOTHER VALUE", as expected print(test2.type) # Let's copy test2 as test3 test3 = test2.copy() # this print shows "A TEST VALUE", but "ANOTHER VALUE" was expected print(test3.type) ``` Where is my conceptual error? Thanks.
2017/08/18
[ "https://Stackoverflow.com/questions/45765946", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1502508/" ]
Your `copy()` method copied the `copy` method (*not* the function from the class) from `test1`, which means that `self` in `test2.copy()` is still `test1`.
If you take a look at `dir(test1)`, you'll see that one of the elements is `'copy'`. In other words, you're not just copying the `type` attribute. **You're copying the `copy` method.** `test2` gets `test2.copy` set to `test1.copy`, a bound method that will copy `test1`. Don't use `dir` for this. Look at the instance's `__dict__`, which only contains instance-specific data.
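A minimal sketch of a `copy()` based on the instance `__dict__`, as suggested above (class and attribute names follow the question's example):

```python
class Test:
    """Like the question's class, but copies via __dict__."""
    def copy(self):
        retval = Test()
        # __dict__ holds only instance attributes, so the copy method
        # itself (a class attribute) is never dragged along
        retval.__dict__.update(self.__dict__)
        return retval

t1 = Test()
t1.type = "A TEST VALUE"

t2 = t1.copy()
t2.type = "ANOTHER VALUE"

t3 = t2.copy()
assert t3.type == "ANOTHER VALUE"  # copies t2's state, not t1's
```

Because `copy` is never copied onto the instances, `t2.copy()` is a method bound to `t2`, and the third copy behaves as expected.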
17,359
4,834,538
``` import os import sys os.environ['DJANGO_SETTINGS_MODULE'] = "trade.settings" from trade.turkey.models import * d = DemoRecs.objects.all() d.delete() ``` When I run this, it imports fine if I leave out the `d.delete()` line. It's erroring on that line. Why? If I comment that out, everything is cool. I can insert. I can update. But when I have that line everything screws up. The traceback is: ``` d.delete() File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 447, in delete obj._collect_sub_objects(seen_objs) File "/usr/local/lib/python2.6/dist-packages/django/db/models/base.py", line 585, in _collect_sub_objects for related in self._meta.get_all_related_objects(): File "/usr/local/lib/python2.6/dist-packages/django/db/models/options.py", line 347, in get_all_related_objects self._fill_related_objects_cache() File "/usr/local/lib/python2.6/dist-packages/django/db/models/options.py", line 374, in _fill_related_objects_cache for klass in get_models(): File "/usr/local/lib/python2.6/dist-packages/django/db/models/loading.py", line 167, in get_models self._populate() File "/usr/local/lib/python2.6/dist-packages/django/db/models/loading.py", line 61, in _populate self.load_app(app_name, True) File "/usr/local/lib/python2.6/dist-packages/django/db/models/loading.py", line 76, in load_app app_module = import_module(app_name) File "/usr/local/lib/python2.6/dist-packages/django/utils/importlib.py", line 35, in import_module __import__(name) ImportError: No module named turkey ```
2011/01/29
[ "https://Stackoverflow.com/questions/4834538", "https://Stackoverflow.com", "https://Stackoverflow.com/users/179736/" ]
The directory for the `trade` project is missing from `sys.path`.
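A minimal sketch of what fixing that would look like; `/path/to/projects` is a placeholder for the directory that actually contains the `trade` package:

```python
import os
import sys

# Hypothetical location: the directory that *contains* the "trade"
# package, so that "trade.settings" and "trade.turkey.models" resolve
project_root = "/path/to/projects"

if project_root not in sys.path:
    sys.path.insert(0, project_root)

os.environ["DJANGO_SETTINGS_MODULE"] = "trade.settings"
```

This has to run before any `from trade.turkey.models import *` line, since the import fails at lookup time.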
Try adding "trade" to the pythonpath... ``` import sys import os.path _pypath = os.path.realpath(os.path.dirname(__file__) + '/trade') sys.path.append(_pypath) ```
17,360
50,809,052
So in python, if I want to make an if statement I need to do something like this (where a,b,c are conditions): ``` if(a) x=1 elsif(b) x=1 elseif(c) x=1 ``` is there a way to simply do something like: ``` if(a or b or c) x=1 ``` this would save a huge amount of time, but it doesn't evaluate.
2018/06/12
[ "https://Stackoverflow.com/questions/50809052", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9928114/" ]
Turns out, whatever the issue was internally, it was actually triggered by this library in my `build.gradle` file: ``` implementation "com.github.bigfishcat.android:svg-android:2.0.8" ``` How a library can cause this, I do not know. Everything builds fine now, though.
My Android Studio version is 3.1.1 and it is working properly. Remove all the Kotlin libraries from your gradle file and use an app-level `build.gradle` like this: ``` apply plugin: 'com.android.application' apply plugin: 'kotlin-android' apply plugin: 'kotlin-android-extensions' android { compileSdkVersion 26 defaultConfig { applicationId "com.example.admin.myapplication" minSdkVersion 15 targetSdkVersion 26 versionCode 1 versionName "1.0" testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner" } buildTypes { release { minifyEnabled false proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro' } } } dependencies { implementation fileTree(dir: 'libs', include: ['*.jar']) implementation "org.jetbrains.kotlin:kotlin-stdlib-jre7:$kotlin_version" implementation 'com.android.support:appcompat-v7:26.1.0' implementation 'com.android.support.constraint:constraint-layout:1.1.0' testImplementation 'junit:junit:4.12' androidTestImplementation 'com.android.support.test:runner:1.0.2' androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.2' implementation 'com.android.support:recyclerview-v7:26.1.0' } ``` And in the top-level `build.gradle` (of your app) put: ``` buildscript { ext.kotlin_version = '1.2.30' repositories { google() jcenter() } dependencies { classpath 'com.android.tools.build:gradle:3.1.1' classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version" // NOTE: Do not place your application dependencies here; they belong // in the individual module build.gradle files } } ```
17,361
12,173,856
I'm trying to reimplement python [slice notation](https://stackoverflow.com/questions/509211/good-primer-for-python-slice-notation) in another language (php) and looking for a snippet (in any language or pseudocode) that would mimic the python logic. That is, given a list and a triple `(start, stop, step)` or a part thereof, determine correct values or defaults for all parameters and return a slice as a new list. I tried looking into [the source](http://hg.python.org/cpython/file/3d4d52e47431/Objects/sliceobject.c). That code is far beyond my c skills, but I can't help but agree with the comment saying: ``` /* this is harder to get right than you might think */ ``` Also, if something like this is already done, pointers will be greatly appreciated. This is my test bench (make sure your code passes before posting): ``` #place your code below code = """ def mySlice(L, start=None, stop=None, step=None): or <?php function mySlice($L, $start=NULL, $stop=NULL, $step=NULL) ... or function mySlice(L, start, stop, step) ... 
""" import itertools L = [0,1,2,3,4,5,6,7,8,9] if code.strip().startswith('<?php'): mode = 'php' if code.strip().startswith('def'): mode = 'python' if code.strip().startswith('function'): mode = 'js' if mode == 'php': var, none = '$L', 'NULL' print code, '\n' print '$L=array(%s);' % ','.join(str(x) for x in L) print "function _c($s,$a,$e){if($a!==$e)echo $s,' should be [',implode(',',$e),'] got [',implode(',',$a),']',PHP_EOL;}" if mode == 'python': var, none = 'L', 'None' print code, '\n' print 'L=%r' % L print "def _c(s,a,e):\n\tif a!=e:\n\t\tprint s,'should be',e,'got',a" if mode == 'js': var, none = 'L', 'undefined' print code, '\n' print 'L=%r' % L print "function _c(s,a,e){if(a.join()!==e.join())console.log(s+' should be ['+e.join()+'] got ['+a.join()+']');}" print n = len(L) + 3 start = range(-n, n) + [None, 100, -100] stop = range(-n, n) + [None, 100, -100] step = range(-n, n) + [100, -100] for q in itertools.product(start, stop, step): if not q[2]: q = q[:-1] actual = 'mySlice(%s,%s)' % (var, ','.join(none if x is None else str(x) for x in q)) slice_ = 'L[%s]' % ':'.join('' if x is None else str(x) for x in q) expect = eval(slice_) if mode == 'php': expect = 'array(%s)' % ','.join(str(x) for x in expect) print "_c(%r,%s,%s);" % (slice_, actual, expect) if mode == 'python': print "_c(%r,%s,%s);" % (slice_, actual, expect) if mode == 'js': print "_c(%r,%s,%s);" % (slice_, actual, expect) ``` how to use it: * save into a file (`test.py`) * place your python, php or javascript code between `"""`s * run `python test.py | python` or `python test.py | php` or `python test.py | node`
2012/08/29
[ "https://Stackoverflow.com/questions/12173856", "https://Stackoverflow.com", "https://Stackoverflow.com/users/989121/" ]
I've written a PHP port based on the C code, optimized for step sizes -1 and 1: ``` function get_indices($length, $step, &$start, &$end, &$size) { if (is_null($start)) { $start = $step < 0 ? $length - 1 : 0; } else { if ($start < 0) { $start += $length; if ($start < 0) { $start = $step < 0 ? -1 : 0; } } elseif ($start >= $length) { $start = $step < 0 ? $length - 1 : $length; } } if (is_null($end)) { $end = $step < 0 ? -1 : $length; } else { if ($end < 0) { $end += $length; if ($end < 0) { $end = $step < 0 ? - 1 : 0; } } elseif ($end >= $length) { $end = $step < 0 ? $length - 1 : $length; } } if (($step < 0 && $end >= $start) || ($step > 0 && $start >= $end)) { $size = 0; } elseif ($step < 0) { $size = ($end - $start + 1) / $step + 1; } else { $size = ($end - $start - 1) / $step + 1; } } function mySlice($L, $start = NULL, $end = NULL, $step = 1) { if (!$step) { return false; // could throw exception too } $length = count($L); get_indices($length, $step, $start, $end, $size); // optimize default step if ($step == 1) { // apply native array_slice() return array_slice($L, $start, $size); } elseif ($step == -1) { // negative step needs an array reversal first // with range translation return array_slice(array_reverse($L), $length - $start - 1, $size); } else { // standard fallback $r = array(); for ($i = $start; $step < 0 ? $i > $end : $i < $end; $i += $step) { $r[] = $L[$i]; } return $r; } } ```
I can't say there's no bug in the code, but it has passed your test program :) ``` def mySlice(L, start=None, stop=None, step=None): ret = [] le = len(L) if step is None: step = 1 if step > 0: #this situation might be easier if start is None: start = 0 else: if start < 0: start += le if start < 0: start = 0 if start > le: start = le if stop is None: stop = le else: if stop < 0: stop += le if stop < 0: stop = 0 if stop > le: stop = le else: if start is None: start = le-1 else: if start < 0: start += le if start < 0: start = -1 if start >= le: start = le-1 if stop is None: stop = -1 #stop is not 0 because we need L[0] else: if stop < 0: stop += le if stop < 0: stop = -1 if stop >= le: stop = le #(stop-start)*step>0 to make sure 2 things: #1: step != 0 #2: iteration will end while start != stop and (stop-start)*step > 0 and start >=0 and start < le: ret.append( L[start] ) start += step return ret ```
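For reference, CPython exposes the same defaulting and clamping logic through `slice.indices`, so a compact Python implementation can simply delegate to it. This is an alternative sketch, not the answer's code:

```python
def my_slice(L, start=None, stop=None, step=None):
    # slice.indices() applies the same rules that L[start:stop:step]
    # uses internally (and raises ValueError when step == 0)
    start, stop, step = slice(start, stop, step).indices(len(L))
    return [L[i] for i in range(start, stop, step)]

L = list(range(10))
assert my_slice(L, 2, 8, 2) == L[2:8:2]
assert my_slice(L, None, None, -1) == L[::-1]
assert my_slice(L, -100, 100) == L[-100:100]
```

Porting to PHP still requires reimplementing `slice.indices` by hand, but this pins down exactly which behaviour the port has to match.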
17,362
1,376,016
I was playing around with Python's subprocess module, trying a few examples but I can't seem to get heredoc statements to work. Here is the trivial example I was playing with: ``` import subprocess a = "A String of Text" p = subprocess.Popen(["cat", "<<DATA\n" + a + "\nDATA"]) ``` I get the following error when I run the code above: ``` cat: <<DATA\nA String of Text\nDATA: No such file or directory ``` Am I doing it wrong? Is this even possible? If so how would I go about doing it? --- Update Just wanted to say that this should never be performed in a real python program because there are better ways of doing this.
2009/09/03
[ "https://Stackoverflow.com/questions/1376016", "https://Stackoverflow.com", "https://Stackoverflow.com/users/124861/" ]
The shell "heredoc" support is a shell feature. `subprocess.Popen` does not run your command through the shell by default, so this syntax certainly won't work. However, since you're using pipes anyway, there isn't any need to use the heredoc support of the shell. Just write your string `a` to the stdin pipe of the process you just started. This is exactly what the shell would do with the heredoc anyway. You can do this with [`Popen.communicate()`](http://docs.python.org/library/subprocess.html#subprocess.Popen.communicate): ``` p.communicate(a) ``` The return value of the `communicate()` function contains the output of the process (in two streams, see the docs).
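A minimal runnable sketch of that approach, using `cat` as the child process; note that on Python 3 the payload must be bytes:

```python
import subprocess
from subprocess import PIPE

a = "A String of Text"

# cat just echoes stdin to stdout -- no shell heredoc involved
p = subprocess.Popen(["cat"], stdin=PIPE, stdout=PIPE)
out, _ = p.communicate(a.encode())  # writes stdin, closes it, waits

assert out.decode() == a
```

`communicate()` closes the child's stdin after writing, which is exactly what the shell does when the heredoc's terminator line is reached.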
You're passing shell syntax as an argument to the `cat` program. You can try to do it like this: ``` p = subprocess.Popen(["sh", "-c", "cat <<DATA\n" + a + "\nDATA"]) ``` But the concept itself is wrong. You should use Python features instead of calling shell scripts inside your Python scripts. And in this particular case you should be aware that the shell's heredoc syntax interpolates variables, so you'll need to escape all the text inside `a` and make sure there's no `DATA` line in it. --- As for a Python equivalent, I think the closest idea to this (assuming you don't want just `print(a)` ;-)) is passing the value of the variable to the stdin of a spawned process: ``` p = subprocess.Popen(["program", ...], stdin=subprocess.PIPE) p.communicate(a) ```
17,372
30,438,227
I am building an application in Python that uses a wrapper around a library that performs hardware communication. I would like to create some unit tests, and I am pretty new to unit testing, so I would like to mock the communications, but I really don't know how to do it. A quick example: this is the application code using the comm lib ``` def changeValue(id, val): current_value = comm.getval(id) if (current_value != val): comm.send(id, val) ``` I want to test this without performing communications, i.e. replacing the comm.getval return value with some mocked value, and directing comm.send to a mocked comm class. Can anyone give a hint on that? --- The thing is that comm is an object inside a class; the class is like this: ``` class myClass: comm = Comm() .... def __init__(): comm = comm.start() def changeValue(id, val): .... .... ```
2015/05/25
[ "https://Stackoverflow.com/questions/30438227", "https://Stackoverflow.com", "https://Stackoverflow.com/users/180699/" ]
You can use the [`mock`](https://docs.python.org/3/library/unittest.mock.html#module-unittest.mock) framework for this kind of job. First of all, you use `comm = Comm()` in `MyClass`, which means you have something like `from comm_module import Comm` in `MyClass`'s module. In these cases you need to patch the `Comm` reference in `MyClass`'s module to make your patch take effect. So an example of how you can test your code without making any connection could be: ``` @patch("my_class.Comm", autospec=True) def test_base(self, mock_comm_factory): mock_comm = mock_comm_factory.return_value MyClass() mock_comm.start.assert_called_with() @patch("my_class.Comm", autospec=True) def test_changeValue(self, mock_comm_factory): mock_comm = mock_comm_factory.return_value mock_comm.getval.return_value = 13 MyClass().changeValue(33, 23) mock_comm.getval.assert_called_with(33) mock_comm.send.assert_called_with(33, 23) mock_comm.reset_mock() mock_comm.getval.return_value = 23 MyClass().changeValue(33, 23) mock_comm.getval.assert_called_with(33) self.assertFalse(mock_comm.send.called) ``` Now I could start to explain all the details of my answer, like why to use [`autospec=True`](https://docs.python.org/3/library/unittest.mock.html#autospeccing) or [how to apply the patch to all methods](https://docs.python.org/3/library/unittest.mock.html#test-prefix), but that would mean rewriting a lot of the `mock` documentation and SO answers. So I hope this is enough as a starting point.
The trick is not to use global objects like `comm`. If you can, make it so that `comm` gets injected to your class or method by the caller. Then what you do is pass a mocked `comm` when testing and then real one when in production. So either you make a `comm` reference a field in your class (and inject it via a constructor or setter method) like so ``` class myClass: .... def __init__(myComm): comm = myComm; comm = comm.start() def changeValue(id, val): current_value = comm.getval(id) if (current_value != val): comm.send(id, val) .... ``` or you make it a parameter in the method where it is used, like so ``` def changeValue(id, val, myComm): current_value = myComm.getval(id) if (current_value != val): myComm.send(id, val) ``` Using global *anything* makes mocking a huge pain, try to use [Dependency Injection](https://stackoverflow.com/questions/130794/what-is-dependency-injection) whenever you need to mock something. This is another good post about DI. It is in java, but it should be the same in python <http://googletesting.blogspot.ca/2008/07/how-to-think-about-new-operator-with.html>
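A minimal sketch of this injection style with a hand-rolled fake (all names here are illustrative, not from the original code):

```python
class FakeComm:
    """Hand-rolled stand-in for the real hardware Comm object."""
    def __init__(self, values):
        self.values = dict(values)
        self.sent = []           # record of every send() call

    def getval(self, key):
        return self.values[key]

    def send(self, key, val):
        self.sent.append((key, val))
        self.values[key] = val

class Device:
    def __init__(self, comm):
        self.comm = comm         # injected, so tests never touch hardware

    def change_value(self, key, val):
        if self.comm.getval(key) != val:
            self.comm.send(key, val)

fake = FakeComm({33: 13})
dev = Device(fake)

dev.change_value(33, 23)         # value differs -> exactly one send
dev.change_value(33, 23)         # already set -> no new send
assert fake.sent == [(33, 23)]
```

In production you would pass a real `Comm()` to `Device` instead; the class under test never needs to know which one it got.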
17,377
247,301
Besides the syntactic sugar and expressive power, what are the differences in runtime efficiency? I mean, can plpgsql be faster than, let's say, plpythonu or pljava? Or are they all approximately equal? We are using stored procedures for the task of detecting nearly-duplicate records of people in a moderately sized database (around 10M records).
2008/10/29
[ "https://Stackoverflow.com/questions/247301", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18300/" ]
plpgsql provides greater type safety, I believe; you have to perform explicit casts if you want to perform operations using two columns of similar type, like varchar and text or int4 and int8. This is important because if you need your stored proc to use indexes, Postgres requires that the types match exactly between join conditions (edit: for equality checks too, I think). There may be a facility for this in the other languages, though; I haven't used them. In any case, I hope this gives you a better starting point for your investigation.
Without doing actual testing, I would expect plpgsql to be somewhat more efficient than other languages, because it's small. Having said that, remember that SQL functions are likely to be even faster than plpgsql, if a function is simple enough that you can write it in just SQL.
17,378
14,053,552
I am writing a webapp and I would like to start charging my users. What are the recommended billing platforms for a python/Django webapp? I would like something that keeps track of my users' purchase history, can elegantly handle subscription purchases, a la carte items, coupon codes, and refunds, makes it straightforward to generate invoices/receipts, and can easily integrate with most payment processors. Extra points if it comes with a fancy admin interface. I found this [django-billing project](https://github.com/gabrielgrant/django-billing), are there any others? Also, do you rely on your payment processor to handle these tasks or do you do all of them yourself? *Note: I am not asking what payment processors to use, but rather what middleware/libraries one should run on their webapp itself.*
2012/12/27
[ "https://Stackoverflow.com/questions/14053552", "https://Stackoverflow.com", "https://Stackoverflow.com/users/234270/" ]
The **[koalixcrm](https://github.com/scaphilo/koalixcrm)** is perhaps something you could start with. It offers some of the functionality you require. It is still in a pre-alpha stage, but it already provides PDF export for invoices and quotes, and there is already one included plugin for subscriptions. Also try the **[demo](https://github.com/scaphilo/koalixcrm/wiki)**. As I am the developer of koalixcrm, I'm very interested in working with you - perhaps we can merge our projects.
It's not really clear why the Django community hasn't come up with a complete billing system, or at least a generic one to start working on. There are many packages that can be used to get an idea of how to implement such a platform: <https://www.djangopackages.com/grids/g/payment-processing/>
17,381
67,996,181
So in Python, to call a parent class's function in a child class we use the `super()` method, but why do we use `super()` when we can just call the parent class's function directly? Suppose I have a `class Employee:` and another class which inherits from the Employee class, `class Programmer(Employee):`. To call any function of the Employee class in the Programmer class I can just use `Employee.functionName()`, and that does the job. Here is some code:

```
class Person:
    country = "India"

    def takeBreath(self):
        print("I am breathing...")

class Employee(Person):
    company = "Honda"

    def getSalary(self):
        print(f"Salary is {self.salary}")

    def takeBreath(self):
        print("I am an Employee so I am luckily breathing...")

class Programmer(Employee):
    company = "Fiverr"

    def getSalary(self):
        print(f"No salary to programmer's.")

    def takeBreath(self):
        Employee().takeBreath()
        print("I am a Programmer so i am breathing++..")

p = Person()
p.takeBreath()

e = Employee()
e.takeBreath()

pr = Programmer()
pr.takeBreath()
```

As you can see, I wanted to call the Employee class's `takeBreath()` method in the `Programmer()` class, so I just wrote `Employee().takeBreath()`, which also does the job. So can anyone explain why we need the `super()` method in Python?

### :)
2021/06/16
[ "https://Stackoverflow.com/questions/67996181", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15895348/" ]
With `super()` you don't need to define `takeBreath()` in each class inherited from the `Person()` class.
`super()` is a far more general method. Suppose you decide to change your superclass. Maybe you name it `Tom` instead of `Employee`. Now you have to go about and change every mention of your `Employee` call. You can think of `super()` as a "proxy" to get the superclass regardless of what it is. It enables you to write more flexible code. Though, what you are doing is different. You are creating a new instance of Employee each time, and then calling the method on it. If you change your `takeBreath` method to not take a `self` parameter, you will be able to do something like `Employee.takeBreath()`, or better, `super().takeBreath()`.
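To make the flexibility point concrete, here is a small sketch (a simplified variant of the question's classes, returning strings instead of printing so the result is easy to check): each `super()` call resolves through the method resolution order, so renaming `Employee` would require no edits inside `Programmer.takeBreath`:

```python
class Person:
    def takeBreath(self):
        return "breathing"

class Employee(Person):
    def takeBreath(self):
        # No hard-coded class name: super() resolves via the MRO.
        return "employee " + super().takeBreath()

class Programmer(Employee):
    def takeBreath(self):
        # Unlike Employee().takeBreath(), no new Employee instance
        # is created; the parent method runs on *this* object.
        return "programmer " + super().takeBreath()

print(Programmer().takeBreath())   # programmer employee breathing
```

If `Employee` were later renamed, only the `class Programmer(...)` line would need to change.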
17,382
59,475,157
I'm a beginner in python. I'm not able to understand what the problem is? ``` the runtime process for the instance running on port 43421 has unexpectedly quit ERROR 2019-12-24 17:29:10,258 base.py:209] Internal Server Error: /input/ Traceback (most recent call last): File "/var/www/html/sym_math/google_appengine/lib/django-1.3/django/core/handlers/base.py", line 178, in get_response response = middleware_method(request, response) File "/var/www/html/sym_math/google_appengine/lib/django-1.3/django/middleware/common.py", line 94, in process_response if response.status_code == 404: AttributeError: 'tuple' object has no attribute 'status_code' ```
2019/12/25
[ "https://Stackoverflow.com/questions/59475157", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12277769/" ]
Since the column in the first table is an identity field, you should use [`SCOPE_IDENTITY()`](https://learn.microsoft.com/en-us/sql/t-sql/functions/scope-identity-transact-sql?view=sql-server-ver15) immediately after the first INSERT statement to get the result. Then use that result in the subsequent INSERT statements.

```
Create Procedure spCustomerDetails
@FirstName nvarchar(30),
@LastName nvarchar(30),
@Phone Char(30),
@Email nvarchar(30)
As
Begin
Begin Try
  SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
  Begin Transaction
    DECLARE @NewBusEntityID int;

    -- Pass the parameters unquoted; quoting them would insert the
    -- literal text '@FirstName' instead of the parameter's value.
    INSERT INTO Person.Person(PersonType, NameStyle, Title, FirstName, MiddleName, LastName, Suffix, EmailPromotion, AdditionalContactInfo)
    VALUES('SC', 0, NULL, @FirstName, NULL, @LastName, NULL, 0, NULL);

    SELECT @NewBusEntityID = SCOPE_IDENTITY();

    INSERT INTO Person.PersonPhone(BusinessEntityID, PhoneNumber, PhoneNumberTypeID)
    VALUES(@NewBusEntityID, @Phone, 2);

    INSERT INTO Person.EmailAddress(BusinessEntityID, EmailAddressID, EmailAddress)
    VALUES(@NewBusEntityID, 1, @Email);

  COMMIT TRANSACTION
End Try
Begin Catch
  Rollback Transaction
  Print 'Roll back transaction'
End Catch
End
```

If it were not an identity field, you could instead use a [`SEQUENCE`](https://learn.microsoft.com/en-us/sql/t-sql/statements/create-sequence-transact-sql?view=sql-server-ver15). Then you could select the [`NEXT VALUE FOR`](https://learn.microsoft.com/en-us/sql/t-sql/functions/next-value-for-transact-sql?view=sql-server-ver15) the sequence at the beginning of the procedure and use that value for all three INSERT statements.
You can use MAX: ``` DECLARE @id int = (select max(BusinessEntityId) From Person.BusinessEntity) ```
17,384
23,382,499
I'm running a python script that makes modifications in a specific database. I want to run a second script once there is a modification in my database (local server). Is there anyway to do that? Any help would be very appreciated. Thanks!
2014/04/30
[ "https://Stackoverflow.com/questions/23382499", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2343621/" ]
Thanks for your answers, I found a solution here: <http://crazytechthoughts.blogspot.fr/2011/12/call-external-program-from-mysql.html> A trigger must be defined to call an external function once the DB table is modified:

```
DELIMITER $

CREATE TRIGGER Test_Trigger
AFTER INSERT ON SFCRoutingTable
FOR EACH ROW
BEGIN
DECLARE cmd CHAR(255);
DECLARE result int(10);
SET cmd = CONCAT('python /home/triggers.py');
SET result = sys_exec(cmd);
END;
$

DELIMITER ;
```

Here, to call my Python script, I use `sys_exec`, which is a UDF (User Defined Function). You can download the library from here: <https://github.com/mysqludf/lib_mysqludf_sys>
You can use 'Stored Procedures' in your database; a lot of RDBMS engines support one or multiple programming languages for writing them. AFAIK, PostgreSQL supports signals to call an external process too. Google something like 'Stored Procedures in Python for PostgreSQL' or 'postgresql trigger call external program'
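If defining a trigger or installing a UDF is not an option, a portable (if less elegant) fallback is to poll the table from Python and react when new rows appear. A minimal sketch of that idea, using an in-memory SQLite table as a stand-in for the real database (the `events` table and `value` column are made up for illustration):

```python
import sqlite3

def detect_changes(conn, last_rowid):
    """Return rows inserted since last_rowid -- a simple polling check."""
    cur = conn.execute(
        "SELECT rowid, value FROM events WHERE rowid > ?", (last_rowid,))
    return cur.fetchall()

# Stand-in for the real database: an in-memory SQLite table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (value TEXT)")
last_seen = 0

# Something modifies the table...
conn.execute("INSERT INTO events (value) VALUES ('first change')")

# ...and the watcher picks it up on its next poll.
new_rows = detect_changes(conn, last_seen)
for rowid, value in new_rows:
    # Here you would launch the second script, e.g. via subprocess.
    print("modification detected:", rowid, value)
    last_seen = rowid
```

In a real deployment the poll would run in a loop with a sleep, and the "launch the second script" step would be a `subprocess.run([...])` call.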
17,387
37,355,375
There is a dict (say `d`). `dict.get(key, None)` returns `None` if `key` doesn't exist in `d`. **How do I get the first value (i.e., `d[key]` is not `None`) from a list of keys (some of them might not exist in `d`)?** This post, [Pythonic way to avoid "if x: return x" statements](https://stackoverflow.com/questions/36117583/pythonic-way-to-avoid-if-x-return-x-statements), provides a concrete way.

```
for d in list_dicts:
    for key in keys:
        if key in d:
            print(d[key])
            break
```

I use the **`or` operator** (which returns its first truthy operand) to achieve it in one line, as demonstrated in,

```
# a list of dicts
list_dicts = [ {'level0' : (1, 2), 'col': '#ff310021'},
               {'level1' : (3, 4), 'col': '#ff310011'},
               {'level2' : (5, 6), 'col': '#ff312221'}]

# loop over the list of dicts, extract the tuple value whose key is like level*
for d in list_dicts:
    t = d.get('level0', None) or d.get('level1', None) or d.get('level2', None)
    col = d['col']
    do_something(t, col)
```

It works. In this way, I simply list all options (`level0` ~ `level2`). Is there a better way for a lot of keys (say, from `level0` to `level100`), like list comprehensions?
2016/05/20
[ "https://Stackoverflow.com/questions/37355375", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3067748/" ]
There's no convenient builtin, but you could implement it easily enough: ``` def getfirst(d, keys): for key in keys: if key in d: return d[key] return None ```
I would use `next` with a comprehension: ``` # build list of keys levels = [ 'level' + str(i) for i in range(3) ] for d in list_dicts: level_key = next(k for k in levels if d.get(k)) level = d[level_key] ```
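One caveat: `next` raises `StopIteration` when nothing matches. Supplying a default as the second argument avoids that, and also collapses the whole lookup into a single expression (using one of the question's dicts):

```python
d = {'level2': (5, 6), 'col': '#ff312221'}
keys = ['level0', 'level1', 'level2']

# The second argument to next() is returned when no key matches.
t = next((d[k] for k in keys if k in d), None)
print(t)        # (5, 6)

# No matching key at all: the default is returned instead of raising.
missing = next((d[k] for k in ['level7'] if k in d), None)
print(missing)  # None
```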
17,390
828,139
I'm trying to get the values from a pointer to a float array, but it returns as c\_void\_p in python The C code ``` double v; const void *data; pa_stream_peek(s, &data, &length); v = ((const float*) data)[length / sizeof(float) -1]; ``` Python so far ``` import ctypes null_ptr = ctypes.c_void_p() pa_stream_peek(stream, null_ptr, ctypes.c_ulong(length)) ``` The issue being the null\_ptr has an int value (memory address?) but there is no way to read the array?!
2009/05/06
[ "https://Stackoverflow.com/questions/828139", "https://Stackoverflow.com", "https://Stackoverflow.com/users/102018/" ]
My ctypes is rusty, but I believe you want POINTER(c\_float) instead of c\_void\_p. So try this: ``` null_ptr = POINTER(c_float)() pa_stream_peek(stream, null_ptr, ctypes.c_ulong(length)) null_ptr[0] null_ptr[5] # etc ```
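For illustration, here is a self-contained sketch of the same pointer handling with a buffer created in Python (no PulseAudio involved, so the buffer contents are made up): a `c_void_p` is recast to `POINTER(c_float)` and indexed just like the C code does:

```python
import ctypes

# Simulate the buffer pa_stream_peek would expose: a C array of floats.
samples = (ctypes.c_float * 4)(0.5, 1.5, 2.5, 3.5)
length = ctypes.sizeof(samples)                 # length in bytes, as in C

# A void pointer, as the C API hands back...
data = ctypes.cast(samples, ctypes.c_void_p)

# ...recast to a float pointer so it can be indexed.
fdata = ctypes.cast(data, ctypes.POINTER(ctypes.c_float))

# Mirrors: ((const float*) data)[length / sizeof(float) - 1]
n = length // ctypes.sizeof(ctypes.c_float)
last = fdata[n - 1]
print(last)                                     # 3.5
```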
You'll also probably want to be passing the null\_ptr using byref, e.g. ``` pa_stream_peek(stream, ctypes.byref(null_ptr), ctypes.c_ulong(length)) ```
17,397
4,960,777
The following Python code tries to create an SQLite database and a table, using the command line in Linux: ``` #!/usr/bin/python2.6 import subprocess args = ["sqlite3", "db.sqlite", "'CREATE TABLE my_table(my_column TEXT)'"] print(" ".join(args)) subprocess.call(args) ``` When I ran the code, it created a database with zero bytes and the following output: ``` sqlite3 db.sqlite 'CREATE TABLE my_table(my_column TEXT)' Error: near "'CREATE TABLE my_table(my_column TEXT)'": syntax error ``` But when I copied the command printed by the code (just above the error message), and pasted the command onto the command line, the command created a database with a table. What is wrong with the code?
2011/02/10
[ "https://Stackoverflow.com/questions/4960777", "https://Stackoverflow.com", "https://Stackoverflow.com/users/249487/" ]
Drop the `'` in the second argument to `sqlite3` (the third element of the `args` list). The `subprocess` module does the quoting on its own and ensures that the argument gets passed to the executable as one string. It works on the command line because, there, the `'` is necessary to tell the shell that it should treat the enclosed string as a single entity.

```
args = ["sqlite3", "db.sqlite", "CREATE TABLE my_table(my_column TEXT)"]
```

should work.
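The no-extra-quoting rule is easy to verify: each list element reaches the child process as exactly one `argv` entry. A small demonstration, using the Python interpreter itself as the child so no `sqlite3` binary is needed:

```python
import subprocess
import sys

# Each list element arrives as exactly one argv entry -- no shell
# quoting needed; extra quotes would become part of the argument.
args = [sys.executable, "-c",
        "import sys; print(sys.argv[1])",
        "CREATE TABLE my_table(my_column TEXT)"]

out = subprocess.check_output(args, text=True)
print(out.strip())   # CREATE TABLE my_table(my_column TEXT)
```

The SQL string, spaces and parentheses included, arrives in the child intact; wrapping it in an extra layer of `'...'` would make those quotes part of the statement and produce exactly the syntax error from the question.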
Besides the extra quoting that @Dirk mentions before, you can also create the database without spawning a subprocess: ``` import sqlite3 cnx = sqlite3.connect("e:/temp/db.sqlite") cnx.execute("CREATE TABLE my_table(my_column TEXT)") cnx.commit() cnx.close() ```
17,402
51,576,837
I have a dataset where one of the columns holds a total sq.ft value.

```
1151
1025
2100 - 2850
1075
1760
```

I would like to split values like `2100 - 2850` if they contain '-' and take the average (mean) as the new value. I am trying to achieve this using the apply method, but I run into an error when the statement containing `contains` executes. Please suggest how to handle this situation.

```
def convert_totSqft(s):
    if s.str.contains('-', regex=False) == True
       << some statements>>
    else:
       << some statements>>

X['new_col'] = X['total_sqft'].apply(convert_totSqft)
```

Error message:

```
File "<ipython-input-6-af39b196879b>", line 2, in convert_totSqft
    if s.str.contains('-', regex=False) == True:
AttributeError: 'str' object has no attribute 'str'
```
2018/07/29
[ "https://Stackoverflow.com/questions/51576837", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10148648/" ]
IIUC ``` df.col.str.split('-',expand=True).apply(pd.to_numeric).mean(1) Out[630]: 0 1151.0 1 1025.0 2 2475.0 3 1075.0 4 1760.0 dtype: float64 ```
IIUC, you can `split` by `-` anyway and just `transform` using `np.mean`, once the mean of a single number is just the number itself ``` df.col.str.split('-').transform(lambda s: np.mean([int(x.strip()) for x in s])) 0 1151.0 1 1025.0 2 2475.0 3 1075.0 4 1760.0 ``` Alternatively, you can `sum` and divide by `len` (same thing) ``` df.col.str.split('-').transform(lambda s: sum([int(x.strip()) for x in s])/len(s)) ``` If want results back necessarily as `int`, just wrap it with `int()` ``` df.col.str.split('-').transform(lambda s: int(np.mean([int(x.strip()) for x in s]))) 0 1151 1 1025 2 2475 3 1075 4 1760 ```
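The string handling that both pandas approaches rely on can also be written as plain Python, which makes the '-' splitting and averaging explicit (a sketch independent of pandas; `float()` tolerates the surrounding spaces):

```python
def sqft_to_number(s):
    """Convert '2100 - 2850' to its midpoint, or '1151' to a float."""
    parts = [float(p) for p in s.split('-')]
    return sum(parts) / len(parts)   # mean of one number is the number

values = ['1151', '1025', '2100 - 2850', '1075', '1760']
print([sqft_to_number(v) for v in values])
# [1151.0, 1025.0, 2475.0, 1075.0, 1760.0]
```

Applied to a pandas column, this same function would be the body of the `apply` call, with no need for the `contains` branch at all.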
17,403
74,057,953
Vue browser build and Python (Flask) backend. As far as I understand, everything should work: the DOM is identical in both and doesn't change after that, but Vue ignores the server-side rendered DOM and generates it from scratch. What surprises me even more is the fact that it does not delete the server's initially rendered DOM, but doubles it in exactly the same way. How do I make Vue work with the prerendered DOM? Console message:

```
vue.esm-browser.js:1617 [Vue warn]: Hydration node mismatch:
- Client vnode: Symbol(Comment)
- Server rendered DOM: "
        " (text)
  at <RouterView>
  at <App>
```

```
Hydration complete but contains mismatches.
```

Minimal, reproducible example: [on code pen](https://codepen.io/MarcelDev-u/pen/qBYgPZz?editors=1001 "thanks"). My code is quite complex and messy, so I isolated the bug to HTML and JS only.
2022/10/13
[ "https://Stackoverflow.com/questions/74057953", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15810660/" ]
It turned out that the issue for me was formatting... It was working:

```
<div id="app">{{ server rendered html }}</div>
```

It was not:

```
<div id="app">
   {{ server rendered html}}
</div>
```
[This answer](https://stackoverflow.com/a/67978474/8816585) is explaining the use case with a Nuxt configuration but is totally valid for your code too. The issue here is that you probably have:

* some hardcoded HTML string
* SSR content generated by Vue
* client-side hydrated content by Vue

All of them can have the same content; they will still not behave properly if some of them need to overwrite the previous one. If you make your markup like this

```html
<div id="app">
  <h1></h1>
</div>
```

you will not have any issue, because it will only keep Vue SSR + client-side Vue, and it will be plugging itself into the `app` id. If it were in another context, I would recommend disabling JS to try that one out, or clicking on "View page source" with your mouse, but you will not have a clean result on Codepen. SSR is a broad topic with a lot of quirky issues; I recommend that you try that with either Nuxt or at least an SFC Vue setup with Vite to get the full scope and an easier time debugging the whole thing: <https://vuejs.org/guide/quick-start.html#creating-a-vue-application> Trying it out in Codepen is nice but will add more sneaky things, I'd say. So don't spend too much time trying to fix that in this context; try directly in your VSCode editor with some Vue devtools for a full explanation of what's happening.
17,404
48,033,519
``` import pygame as pg, sys from pygame.locals import * import os pg.mixer.pre_init(44100, 16, 2, 4096) pg.init() a = pg.mixer.music.load("./Sounds/ChessDrop2.wav") a.play() ``` The code above is what I have written to test whether sound can be played through pygame. My 'ChessDrop2.wav' file is a 16 bit wav-PCM file because when the file was 32 bit PCM, pygame recognised it as an unknown format. Now that error is gone when I run the code but the error below pops up on my shell instead. I have assigned the sound file to the variable 'a' so shouldn't the sound play? My version of python is 3.6.1 and pygame is 1.9.3. ``` a.play() AttributeError: 'NoneType' object has no attribute 'play' ```
2017/12/30
[ "https://Stackoverflow.com/questions/48033519", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8965922/" ]
This function doesn't return an object to be used; check the documentation: <https://www.pygame.org/docs/ref/music.html#pygame.mixer.music.load> After loading the file you should use

```
pg.mixer.music.play()
```
As @CaMMelo stated `pygame.mixer.music.load(filename)` method doesn't return an object. However, if you are looking for an return object after the load, you may want to try [pygame.mixer.Sound](https://www.pygame.org/docs/ref/mixer.html#pygame.mixer.Sound) . > > pygame.mixer.Sound > > Create a new Sound object from a file or buffer object > > > ``` from pygame import mixer mixer.init() sound = mixer.Sound("yourWaveFile.wav") sound.play() ```
17,405
60,754,120
Does anyone know a solution to this? EDIT: This question was closed, because the problem didn't seem clear. So the problem was the error "AttributeError: module 'wx' has no attribute 'adv'", although everything seemed right. And actually, everything was right, the problem was individual to another PC, where "import wx.adv" resulted in a segmentation fault. ``` $ python Python 3.6.7 (default, Oct 22 2018, 11:32:17) [GCC 8.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import wx >>> wx.version() '4.0.7.post2 gtk3 (phoenix) wxWidgets 3.0.5' >>> wx.adv.NotificationMessage Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: module 'wx' has no attribute 'adv' >>> ``` Or is there any other "non intrusive" notification method?
2020/03/19
[ "https://Stackoverflow.com/questions/60754120", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1647509/" ]
Try importing this and running it again

```
import wx.adv
```
As @arvind8 points out it is a separate import. At its simplest: ``` import wx import wx.adv app = wx.App() frame = wx.Frame(parent=None, title="Hello, world!") frame.Show() m=wx.adv.NotificationMessage("My message","The text I wish to show") #m.Show(timeout = m.Timeout_Never) m.Show(timeout = m.Timeout_Auto) #m.Show(timeout = 5) app.MainLoop() ``` Note: the timeout function for the message takes either a number of seconds or one of 2 pre-set values `Never` or `Auto`. `Auto` is the default.
17,406
54,683,892
I have a python project with multiple files and a cmd.py which uses argparse to parse the arguments, in the other files there are critical functions. What I want to do is: I want to make it so that if in the command line I were to put `cmd -p hello.txt` it runs that python file. I was thinking that I could just simply move the cmd.py file to somewhere like `/usr/bin/` or some other directory included in the `$PATH`, however since I have other files which work with my `cmd.py`, there will be multiple files in my `/usr/bin`. Another thing that I could do is to make a symbolic link between the `cmd.py` and `/usr/bin/cmd` like this: `ln -s /path/to/cmd.py /usr/bin/cmd`, but then where do i put the cmd.py? and is this best practice? Note: I intend for this to work on Linux and MacOS X, not windows
2019/02/14
[ "https://Stackoverflow.com/questions/54683892", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8585864/" ]
The usual way to do this is to define a set of entry points in `setup.py` and let the packaging infrastructure do the heavy lifting for you. ``` setup( # ... entry_points = { 'console_scripts': ['cmd = cmd:main'], } ) ``` This requires `setuptools`. Here is some documentation for this facility: <https://python-packaging.readthedocs.io/en/latest/command-line-scripts.html>
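For completeness, here is a minimal sketch of what the `cmd.py` module behind `cmd = cmd:main` could look like (the `-p` option mirrors the question; all names are illustrative):

```python
# cmd.py -- the module named on the left-hand side of "cmd = cmd:main".
import argparse

def main(argv=None):
    # argv=None makes setuptools' generated wrapper (which calls
    # main() with no arguments) fall back to sys.argv.
    parser = argparse.ArgumentParser(prog="cmd")
    parser.add_argument("-p", "--path", help="python file to run")
    args = parser.parse_args(argv)
    print("would run:", args.path)
    return args.path

if __name__ == "__main__":
    main()
```

After `pip install .`, setuptools generates a `cmd` executable on `$PATH` that imports this module and calls `main()`, so the helper modules stay inside the installed package instead of being copied into `/usr/bin`.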
For one thing I don't recommend installation in `/usr/bin` as that's where system programs go. `/usr/local/bin` or another custom directory added to `$PATH` could be appropriate. As for getting it to run like a typical program, name it `cmd`, wherever you put it, as the extension is not necessary, and add this line to the top of the program: ```sh #!/usr/bin/env python ``` (You may want to specify `python3` instead of just `python` if you want to ensure Python 3.x is used.) Then it can be made executable with `chmod +x <path to your program>`. Ensure that you have the necessary privileges to do this (i.e. `sudo` may be necessary).
17,407
23,021,864
I've added Python's logging module to my code to get away from a galloping mess of print statements and I'm stymied by configuration errors. The error messages aren't very informative. ``` Traceback (most recent call last): File "HDAudioSync.py", line 19, in <module> logging.config.fileConfig('../conf/logging.conf') File "/usr/lib64/python2.6/logging/config.py", line 84, in fileConfig handlers = _install_handlers(cp, formatters) File "/usr/lib64/python2.6/logging/config.py", line 162, in _install_handlers h = klass(*args) TypeError: __init__() takes at most 5 arguments (21 given) ``` Nothing in my config file gives 21 arguments. Here is the config file ``` [loggers] keys=root,main, sftp, jobapi [handlers] keys=console, logfile, syslog [formatters] keys=simple, timestamp [logger_root] level=NOTSET handlers=logfile [logger_main] level=DEBUG handlers=console, logfile, syslog propagate=1 qualname=main [logger_sftp] level=DEBUG handlers=console, logfile, syslog propagate=1 qualname=sftp [logger_jobapi] level=DEBUG handlers=console, logfile, syslog propagate=1 qualname=jobapi [handler_console] class=StreamHandler level=DEBUG formatter=simple args=(sys.stdout,) [handler_logfile] class=FileHandler level=DEBUG formatter=timestamp args=('../log/audiosync.log') [handler_syslog] class=FileHandler level=WARN formatter=timestamp args=('../log/audiosync.sys.log') [formatter_simple] format=%(levelname)s - %(message)s [formatter_timestamp] format=%(asctime)s - %(name)s -%(levelname)s - %(message)s ``` and here is the logging init code in my main module: ``` import logging import logging.config import logging.handlers logging.config.fileConfig('../conf/logging.conf') logger = logging.getLogger('main') ``` I'm not so much looking for what I did wrong here (though that would be nice) as for a methodology for debugging this. Thanks.
2014/04/11
[ "https://Stackoverflow.com/questions/23021864", "https://Stackoverflow.com", "https://Stackoverflow.com/users/811299/" ]
You can dig into the Python source code to investigate these sorts of problems. Much of the library is implemented in Python and is pretty readable without needing to know the inner details of the interpreter. [hg.python.org](http://hg.python.org/cpython/file/a8f3ca72f703/Lib/logging/config.py) provides a web interface to the repository that is convenient for browsing. I couldn't find the branch for 2.6 but the relevant statement is on line 147 in the current revision. You can see that `args` is generated from an eval which is getting the value of the `args` key from each `handler_*` section of the config file. That args variable is then expanded with an unpack (\*) operation to create arguments for klass(). In your config file you have this line: ``` args=('../log/audiosync.log') ``` It is a 20-character string that is being unpacked into a tuple of individual characters which, along with the `self` object passed to `__init__`, accounts for the 21 arguments in the error message. You are missing a trailing comma needed to make a 1-element tuple: ``` args=('../log/audiosync.log',) ^-- missing ``` The same bug is in the `handler_syslog` section.
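The root cause is easy to reproduce in isolation: parentheses alone do not make a tuple, the trailing comma does, and unpacking the bare string produces one argument per character:

```python
args_wrong = ('../log/audiosync.log')    # parentheses only: still a str
args_right = ('../log/audiosync.log',)   # trailing comma: a 1-tuple

print(type(args_wrong).__name__, len(args_wrong))   # str 20
print(type(args_right).__name__, len(args_right))   # tuple 1

def handler(filename, mode='a'):
    # Stand-in for a handler class __init__ taking few arguments.
    return filename

try:
    # Unpacking the bare string explodes into 20 one-character args.
    handler(*args_wrong)
except TypeError as exc:
    print("TypeError:", exc)

print(handler(*args_right))   # ../log/audiosync.log
```

With `self` counted as well, those 20 characters are exactly the "21 given" from the original traceback.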
### Look for keywords

The last two lines of the traceback contain the word `handler` (`handler = ...` and `_install_handlers`). That gives you a starting point to look at the handler definitions in your config file.

### Look for matching values *everywhere*

If a function takes 5 arguments, but you've somehow given over 4x that amount, something was not parsed the way you expected it to be. Especially when a quick glance at your config file doesn't show anything near that number. I've found one of the biggest causes of this kind of discrepancy is passing a string when a function is expecting a list, tuple, or object. The underlying code may split that string into characters and use that for arguments. In your case, the first option I can find is in this block in your config:

```
[handler_syslog]
class=FileHandler
level=WARN
formatter=timestamp
args=('../log/audiosync.sys.log')
```

There are no 21-character strings by themselves, but if you strip out the leading `../` from the args, you are left with `log/audiosync.sys.log`, which is a 21-character string.

### Use a debugger

That's what they're there for. Use [pdb](https://docs.python.org/2/library/pdb.html), or a visual debugger like [PyCharm](http://www.jetbrains.com/pycharm/) or [PyDev](http://pydev.org/). Then, you can step through the code line by line, and check variable values throughout.

### Change the logging level

Some modules allow you to set their logging level. You can set it to `DEBUG` to see everything the developers have ever set to log. It can help you to follow the flow of the application as it runs. I don't think this is available for the ConfigParser module, but it is available at times.

### If all else fails, read the source

[The Python source code is available online](http://hg.python.org/). If you're getting semi-cryptic tracebacks and find it difficult to obtain context, you can always download the source, and run through the code manually.
17,409
65,367,490
I have a python data frame like this ``` ID ID_1 ID_2 ID_3 ID_4 ID_5 ID_1 1.0 20.1 31.0 23.1 31.5 ID_2 3.0 1.0 23.0 90.0 21.5 ID_3. 7.0 70.1 1.0 23.0 31.5 ID_4. 9.0 90.1 43.0 1.0 61.5 ID_5 11.0 10.1 11.0 23.0 1.0 ``` I need to update values where COLUMN NAMES are equal to the ID values and then set the values to zero. for example in the first row the ID value (ID\_1) matches with first column ID\_1 and I need to reset the value of 1.0 to zero , and similarly for second row , the ID value (ID\_2) matches with second column ID\_2 and reset the value of 1.0 to zero. How do I do this in Python ? I am very new in python. Can anyone please help. the expected output would be like this - ``` ID ID_1 ID_2 ID_3 ID_4 ID_5 ID_1 0.0 20.1 31.0 23.1 31.5 ID_2 3.0 0.0 23.0 90.0 21.5 ID_3. 7.0 70.1 0.0 23.0 31.5 ID_4. 9.0 90.1 43.0 0.0 61.5 ID_5 11.0 10.1 11.0 23.0 0.0 ```
2020/12/19
[ "https://Stackoverflow.com/questions/65367490", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4726029/" ]
Consider `df`: ``` In [1479]: df Out[1479]: ID ID_1 ID_2 ID_3 ID_4 ID_5 ID_6 0 ID_1 1.0 20.1 31.0 23.0 31.5 24.6 1 ID_2 3.0 1.0 23.0 90.0 21.5 24.6 2 ID_3 7.0 70.1 1.0 23.0 31.5 24.6 3 ID_4 9.0 90.1 43.0 1.0 61.5 24.6 4 ID_5 11.0 10.1 11.0 23.0 1.0 24.6 5 ID_6 7.0 20.1 31.0 33.0 87.5 1.0 ``` Use [`pd.get_dummies`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.get_dummies.html) with [`df.combine_first`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.combine_first.html): ``` In [1477]: import numpy as np In [1497]: df.iloc[:, 1:] = pd.get_dummies(df['ID']).replace({0: np.nan, 1: 0}).combine_first(df.iloc[:, 1:]) In [1498]: df Out[1498]: ID ID_1 ID_2 ID_3 ID_4 ID_5 ID_6 0 ID_1 0.0 20.1 31.0 23.0 31.5 24.6 1 ID_2 3.0 0.0 23.0 90.0 21.5 24.6 2 ID_3 7.0 70.1 0.0 23.0 31.5 24.6 3 ID_4 9.0 90.1 43.0 0.0 61.5 24.6 4 ID_5 11.0 10.1 11.0 23.0 0.0 24.6 5 ID_6 7.0 20.1 31.0 33.0 87.5 0.0 ```
Let's try broadcasting: ``` df[:] = np.where(df['ID'].values[:,None] == df.columns.values,0, df) ``` Output: ``` ID ID_1 ID_2 ID_3 ID_4 ID_5 0 ID_1 0.0 20.1 31.0 23.1 31.5 1 ID_2 3.0 0.0 23.0 90.0 21.5 2 ID_3 7.0 70.1 0.0 23.0 31.5 3 ID_4 9.0 90.1 43.0 0.0 61.5 4 ID_5 11.0 10.1 11.0 23.0 0.0 ```
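The same masking idea works without NumPy; stripped of the DataFrame machinery, the operation is just "zero the cell whose column name equals the row's ID" (a plain-Python sketch over a shortened version of the data):

```python
# Each dict is one row; keys other than 'ID' are the columns.
rows = [
    {'ID': 'ID_1', 'ID_1': 1.0, 'ID_2': 20.1, 'ID_3': 31.0},
    {'ID': 'ID_2', 'ID_1': 3.0, 'ID_2': 1.0,  'ID_3': 23.0},
    {'ID': 'ID_3', 'ID_1': 7.0, 'ID_2': 70.1, 'ID_3': 1.0},
]

# Zero the cell whose column name equals the row's ID value.
for row in rows:
    row[row['ID']] = 0.0

print(rows[0]['ID_1'], rows[1]['ID_2'], rows[2]['ID_3'])   # 0.0 0.0 0.0
```

The NumPy broadcast above does exactly this comparison for every (row, column) pair at once instead of looping.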
17,410
30,982,532
I'm trying to connect to JIRA using a Python wrapper for the Rest interface and I can't get it to work at all. I've read everything I could find so this is my last resort. I've tried a lot of stuff including

> verify=False

but nothing has worked so far. The strange thing is that with urllib.request it does work without any SSL cert (it's just some internal cert) but the goal is to use the Python Jira wrapper so it's not really an option... I've tried Python 3.4 and 2.7... getting desperate... Any ideas? The code is very simple:

```
import requests
r = requests.get('https://jiratest.myurl.com/rest/api/2/serverInfo')
print(r.content)
```

Error:

```
C:\Python34\python.exe C:/projects/jirascriptsx/delete_worklogs.py
Traceback (most recent call last):
  File "C:\Python34\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 544, in urlopen
    body=body, headers=headers)
  File "C:\Python34\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 341, in _make_request
    self._validate_conn(conn)
  File "C:\Python34\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 761, in _validate_conn
    conn.connect()
  File "C:\Python34\lib\site-packages\requests\packages\urllib3\connection.py", line 238, in connect
    ssl_version=resolved_ssl_version)
  File "C:\Python34\lib\site-packages\requests\packages\urllib3\util\ssl_.py", line 279, in ssl_wrap_socket
    return context.wrap_socket(sock, server_hostname=server_hostname)
  File "C:\Python34\lib\ssl.py", line 365, in wrap_socket
    _context=self)
  File "C:\Python34\lib\ssl.py", line 583, in __init__
    self.do_handshake()
  File "C:\Python34\lib\ssl.py", line 810, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:600)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Python34\lib\site-packages\requests\adapters.py", line 370, in send
    timeout=timeout
  File "C:\Python34\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 574, in urlopen
    raise SSLError(e)
requests.packages.urllib3.exceptions.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:600)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:/projects/jirascriptsx/delete_worklogs.py", line 4, in <module>
    r = requests.get('https://jiratest.uniqa.at/rest/api/2/serverInfo')
  File "C:\Python34\lib\site-packages\requests\api.py", line 69, in get
    return request('get', url, params=params, **kwargs)
  File "C:\Python34\lib\site-packages\requests\api.py", line 50, in request
    response = session.request(method=method, url=url, **kwargs)
  File "C:\Python34\lib\site-packages\requests\sessions.py", line 465, in request
    resp = self.send(prep, **send_kwargs)
  File "C:\Python34\lib\site-packages\requests\sessions.py", line 573, in send
    r = adapter.send(request, **kwargs)
  File "C:\Python34\lib\site-packages\requests\adapters.py", line 431, in send
    raise SSLError(e, request=request)
requests.exceptions.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:600)

Process finished with exit code 1
```
2015/06/22
[ "https://Stackoverflow.com/questions/30982532", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2314427/" ]
The actual enum behavior of instantiating the instance [doesn't have an issue with thread safety](https://stackoverflow.com/a/2531881/1424875). However, you will need to make sure that the instance state itself is thread-safe. The risk lies in the interactions with the fields and methods of `Application`: using either careful synchronization and locking, or purely concurrent data structures with careful verification that other inconsistencies can't happen, will be your best bet here.
A singleton ensures you only have one instance of a class per class loader. You only have to take care about concurrency if your singleton has mutable state, i.e. if the singleton persists some kind of mutable data. In that case you should use some kind of synchronization/locking mechanism to prevent concurrent modification of the state, and/or use thread-safe data structures.
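To make the mutable-state caveat concrete, here is a minimal sketch in Python (the principle is language-agnostic; the `Counter` class, thread count, and iteration count are all illustrative, not from the question): a single shared instance whose mutable field is guarded by a lock.

```python
import threading

class Counter:
    """A toy shared/singleton-style object with mutable state."""

    def __init__(self):
        self._lock = threading.Lock()
        self.value = 0

    def increment(self):
        # Without the lock, the read-modify-write of `value` could
        # interleave across threads and lose updates.
        with self._lock:
            self.value += 1

shared = Counter()  # the one shared instance

threads = [
    threading.Thread(target=lambda: [shared.increment() for _ in range(1000)])
    for _ in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the lock held around each increment, shared.value is exactly 4000.
```

If the shared state were a collection mutated from several threads, a thread-safe structure such as `queue.Queue` would serve the same purpose as the explicit lock.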
17,411
45,425,026
--- *tldr:* How is Python set up on a Mac? Is there a ton of senseless copying going on even before I start wrecking it? -------------------------------------------------------------------------------------------------------------------- I am hoping to get some guidance regarding Python system architecture on Mac (perhaps the answer is OS agnostic, but I assume for safety's sake that it is not). I can run a variety of commands that *seem* to give me multiple Python binaries. In truth, there may be more this is just what I have come across so far. 1. `ls /usr/local/bin/ | grep 'python\|pyd'` `pydoc pydoc2 pydoc2.7 python python-32 python-config python2 python2-32 python2-config python2.7 python2.7-32 python2.7-config pythonw pythonw-32 pythonw2 pythonw2-32 pythonw2.7 pythonw2.7-32` 2. `ls /usr/bin | grep 'python\|pyd'` `pydoc pydoc2.6 pydoc2.7 python python-config python2.6 python2.6-config python2.7 python2.7-config pythonw pythonw2.6 pythonw2.7` 3. `ls /Library/Frameworks/Python.framework/Versions/` `2.7 Current` 4. `ls /System/Library/Frameworks/Python.framework/Versions/` `2.3 2.5 2.6 2.7 Current` As far as which one runs when executing a `.py`; when I run `which python` I get back `/Library/Frameworks/Python.framework/Versions/2.7/bin/python` This seems consistent when I use the REPL. The `site-packages` relative to this install are available (not that I tinkered with other site package locs) I have not made any serious modifications to my python environment on my Mac so I am assuming this is what is given to users out of the box. If anyone understands how all these binaries fit together and why they all exist please let me know. If the answer is RTM please simply point me to a page as <https://docs.python.org/2/using/mac.html> did not suffice. Thanks for making me smarter! SPECS: Mac OS: 10.12.5
2017/07/31
[ "https://Stackoverflow.com/questions/45425026", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5619510/" ]
Sadly that's not how Bootstrap works; you get a single row that you can place columns within and you can't float another column underneath others and have it all automatically adjust like your diagram. I would suggest checking out the jQuery plugin called [Masonry](https://masonry.desandro.com/) which does help with layouts like you are looking for.
[Bootstrap4](https://v4-alpha.getbootstrap.com/) might help with [flexbox](https://v4-alpha.getbootstrap.com/utilities/flexbox/) inbricated. Not too sure this is the best example, it still does require some extra CSS to have it run properly: ```css .container>.d-flex>.col { box-shadow: 0 0 0 3px turquoise; min-height: 1.5rem; height: auto!important; margin: 0 0 20px; } .w-100, .flex-row.d-flex.col>.border { box-shadow: 0 0 0 3px turquoise; } .w-100 { margin: -10px 10px 20px } .container>.col { margin: 10px 0; } .d-flex.flex-column.col>div { flex: 1 0 auto; } .d-flex.flex-column.col>div.big {/* big or whatever classname you think meaningfull */ flex: 1 1 100%; } .col .col:hover { font-size: 0.5em; } ``` ```html <link href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.0.0-alpha.6/css/bootstrap.min.css" rel="stylesheet" /> <div class="container d-flex flex-wrap flex-row"> <div class="d-flex flex-column col "> <div class="d-flex col rounded"> AAAA </div> <div class="flex-row d-flex col rounded"> <div class="col">1</div> <div class="col border">2</div> <div class="col">3</div> </div> <div class="d-flex col rounded"> CCCC </div> <div class="d-flex col rounded"> DDDD </div> </div> <div class="d-flex flex-column col"> <div class="d-flex flex-column col rounded"> <h1>hover me </h1><br/>EEEE<br/>EEEE<br/>EEEE<br/>EEEE<br/>EEEE<br/>EEEE<br/>EEEE<br/>EEEE<br/>EEEE<br/>EEEE<br/>EEEE<br/>EEEE<br/>EEEE<br/>EEEE </div> </div> <div class="d-flex flex-column col"> <div class="d-flex col rounded"> <p>FF<br/>FF</p> </div> <div class="flex-row d-flex col rounded"> <div class="col">1</div> <div class="col border">2</div> <div class="col">3</div> </div> <div class="d-flex col big rounded"> HHHH </div> </div> <div class="w-100 rounded"> IIII</div> ``` <https://codepen.io/gc-nomade/pen/LjNroE> --- **Another** example from a **boostrap3** structure and **`flex` rules added in the CSS** ```css .flex, .flexcol { display:flex;/* rule around which the flex layout is build, remove 
it to fall back to regular bootstrap */ } .row { padding:0 20px ; margin:auto; text-align:center; } .flexcol { padding:0; } .colchild , .footer{ border:solid gray; color:white; margin:10px; background:linear-gradient(20deg, rgba(0,0,0,0.4), rgba(255,255,255,0.4)) tomato; box-shadow:inset 0 0 3px, 0 0 0 3px orange , 0 0 0 6px turquoise; } .flexcol { flex-direction:column; } .col { flex:1; padding:1em; } .colchild.flex .col:nth-child(2) { border-left:solid gray; border-right:solid gray; } .rounded { border-radius:0.5em; } ``` ```html <link href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/css/bootstrap.min.css" rel="stylesheet"/> <div class="row container flex"> <div class="col-xs-4 col-sm-4 col-md-4 col-lg-4 flexcol "> <div class="colchild rounded"> <p>boxe</p> </div> <div class="colchild rounded flex"> <div class="col"> 1 </div> <div class="col"> 2 </div> <div class="col"> 3 </div> </div> <div class="colchild rounded"> <p>boxe</p> </div> <div class="colchild rounded"> <p>boxe</p> </div> </div> <div class="col-xs-4 col-sm-4 col-md-4 col-lg-4 flexcol "> <div class="colchild rounded col"> <p>boxe</p> </div> </div> <div class="col-xs-4 col-sm-4 col-md-4 col-lg-4 flexcol "> <div class="colchild rounded"> <p>boxe</p> </div> <div class="colchild rounded flex"> <div class="col"> 1 </div> <div class="col"> 2 </div> <div class="col"> 3 </div> </div> <div class="colchild rounded col "> <p>bottom</p> </div> </div> </div> <div class="row container flex"> <div class="footer w-100 col rounded">footer</div> ``` <https://codepen.io/gc-nomade/pen/VzKvKv/>
17,412
23,449,320
How do I write something like `!(str.endswith())` in Python? I mean, I want to check if a string is NOT ending with something. My code is

```
if text == text. upper(): and text.endswith("."):
```

But when I want to put IS NOT after the `and`, writing

```
if text == text. upper(): and not text.endswith("."):
```

or

```
if text == text. upper(): and not(text.endswith(".")):
```

gives me **Invalid syntax**
2014/05/03
[ "https://Stackoverflow.com/questions/23449320", "https://Stackoverflow.com", "https://Stackoverflow.com/users/678855/" ]
You can use `not` ``` if not str.endswith(): ``` your code can be modified to: ``` if text == text.upper() and not text.endswith("."): ```
You can just use the `not` operator:

```
not(str.endswith())
```

EDIT: Like so:

```
if text == text. upper() and not(text.endswith(".")):
    do stuff
```
17,414
43,148,235
I want python with selenium webdriver to do the following:- 1. Open Facebook 2. Login 3. Click and open the user pane which has the "Logout" option A small arrow opens the user pane I wrote the following script ``` from selenium import webdriver from selenium.webdriver.common.keys import Keys from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC driver=webdriver.Firefox() def open_url(url): driver.get(url) assert "Facebook" in driver.title def login(user_id,user_pass): elem_id=driver.find_element_by_name("email") elem_id.clear() elem_id.send_keys(user_id) elem_pass=driver.find_element_by_name("pass") elem_pass.clear() elem_pass.send_keys(user_pass) elem_pass.send_keys(Keys.RETURN) def search(): wait=WebDriverWait(driver,30) pane=driver.find_element_by_id("userNavigationLabel").click() open_url("https://www.fb.com") login("myuserid","mypass") search() ``` The following error is what i get ``` selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: [id="userNavigationLabel"] ``` How should i locate the element?
2017/03/31
[ "https://Stackoverflow.com/questions/43148235", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7236897/" ]
How about using macros only instead of using variables and enumerations. ``` //libnameConf.h #define LIBNAME_A // #define LIBNAME_B // Uncomment this line and both comment the above line while changing libs. ``` Then we use several conditional compilation statements like this. ``` //libname.c double coef_arr_a[100] = {...}; double coef_arr_b[100] = {...}; #ifdef LIBNAME_A somestruct.waveCoefs = coef_arr_a; //do other coef_arr_a specific stuff #endif #ifdef LIBNAME_B somestruct.waveCoefs = coef_arr_b; #endif ```
Sounds like you need [conditional compilation](https://en.wikipedia.org/wiki/Conditional_compilation) in C, but the way you're doing it is not correct. Conditional compilation is a concept of the [preprocessor](https://en.wikipedia.org/wiki/Preprocessor), which is run before the compiler. The point of the preprocessor is to manipulate the source code before it is fed to the compiler. Since you haven't used any preprocessor conditionals, all the source code is fed to the compiler regardless of your compile-time conditionals. You should definitely use preprocessor conditionals to reduce your binary.
17,417
3,093,352
Is there a method to pass a [variable number of arguments](http://docs.python.org/tutorial/controlflow.html#arbitrary-argument-lists) to a function and have it change those arguments using the `( *args, **keywords )` style of argument passing? I've tried a few things but either see no change or have an error raised by the compiler: ``` def foo( *args ): args[0] = 4 ``` This gets me `TypeError: object does not support assignment` (they're tuples.) Or ``` def foo( *args ): plusOne = [ item+1 for item in args ] args = plusOne ``` which has no effect what so ever. If there is no mechanism nor work around I can admit defeat. Edit: To clarify why I'm trying to go this route, consider the case here: ``` class bar(object): def __init__(self,obj): self.obj = obj def foo( input ): input.obj = "something else" ``` If I pass my `bar` object into foo, I get a change in the state. To create a decorator which performs a `deepcopy` which resets all such state I'm currently customizing it for N arguments. I'd like to create one which accepts any number of arguments. Hence, the question.
2010/06/22
[ "https://Stackoverflow.com/questions/3093352", "https://Stackoverflow.com", "https://Stackoverflow.com/users/178060/" ]
No - Python uses [call by object-sharing](http://en.wikipedia.org/wiki/Evaluation_strategy#Call_by_sharing), also known as call-by-value. To clarify the terminology: you are not receiving a deep copy of the object, but a copy of the object **reference**. Note: this is not the same as [call-by-reference](http://en.wikipedia.org/wiki/Evaluation_strategy#Call_by_reference)! You can think of it as call by value, and that the values are references to objects. So to answer your question, you receive a copy of the arguments (object references). You cannot modify the object references as if they were passed by reference. You can make a new modified copy of them if you want, but judging from your examples that isn't what you are looking for. The calling scope won't see your changes. If instead you *mutate* the objects you receive, the client *can* see those changes.
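A short runnable illustration of that distinction (the names are illustrative): mutating the object a parameter refers to is visible to the caller, while rebinding the parameter name is not.

```python
def rebind(lst):
    lst = [99]        # rebinds the local name only; the caller's list is untouched

def mutate(lst):
    lst.append(99)    # mutates the shared list object; the caller sees this

a = [1, 2]
rebind(a)
assert a == [1, 2]        # unchanged: the assignment inside rebind() was local
mutate(a)
assert a == [1, 2, 99]    # changed: append() mutated the object both names share
```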
The reason ``` args[0] = 4 ``` doesn't work is because, as the error message says, `args` a tuple, which is immutable. So, you'll need it convert it to the mutable object first, for example like this: ``` >>> def foo( *args ): print(args) args = list(args) args[0] = 42 print(args) >>> foo(23) (23,) [42] ``` If you give more information, it would be possible to provide more pythonic solution, because what you're doing seems strange. Also, second code seem to work just fine. For example the following works just fine and changes calling scope variable: ``` >>> def spam(*a): a[0][0] = 42 >>> l = [23, 32] >>> spam(l) >>> l [42, 32] ``` The reason being exactly the same: mutability of the `l` object. The same can be shown on your example: ``` >>> def foo( *input ): input[0].obj = "something else" >>> b = bar('abc') >>> foo(b) >>> b.obj 'something else' ```
17,418
10,621,021
I am a newbie to Python. Everywhere I read about list methods I see one thing: **The slice method returns a "new" list** What is meant here by a "new" list, and why is it faster than changing the original list? Does it really matter if Python manipulates the original list? I mean, I can't use it anyway.
2012/05/16
[ "https://Stackoverflow.com/questions/10621021", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1251851/" ]
I hope that this helps explain what it means by making a new list: ``` >>> lista = [1, 2, 3, 4] >>> listb = lista >>> print lista [1, 2, 3, 4] >>> print listb [1, 2, 3, 4] >>> lista[0] = 3 >>> print listb [3, 2, 3, 4] >>> listc = lista[:] >>> print listc [3, 2, 3, 4] >>> lista[0] = 1 >>> print listc [3, 2, 3, 4] ``` When doing `listb = lista` you are not making a new list, you are making an additional reference to the same list. This is shown by changing the first element in lista with `lista[0] = 3`, this also changes the first element in listb. However, when slicing lista into listc with `listc = lista[:]` you are copying over the values. When changing the first element of lista back to 1 with `lista[0] = 1`, the first element of listc is still 3. For speed I would expect slicing to be slower but this should **not** be a consideration for which one to use. As I've shown they both have a very different implication and it depends on what you are going to do with the list, rather than on speed (*this is in general. There are occasion where the speed might be important*).
With lists, you can do both: 1) create a new list (the original is left intact): ``` In [1]: l = [1, 2, 3, 4, 5] In [2]: l[:3] Out[2]: [1, 2, 3] In [3]: l Out[3]: [1, 2, 3, 4, 5] ``` 2) modify the list in-place: ``` In [6]: del l[3:] In [7]: l Out[7]: [1, 2, 3] In [8]: l.append(15) In [9]: l Out[9]: [1, 2, 3, 15] ``` It's up to you to choose which way makes more sense for your problem. In contrast to lists, tuples are immutable, which means that you can slice them, but you cannot modify them in place.
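A quick demonstration of that last point: slicing a tuple works and yields a new tuple, while in-place modification raises.

```python
t = (1, 2, 3, 4, 5)

head = t[:3]                   # slicing builds a brand-new tuple
assert head == (1, 2, 3)
assert t == (1, 2, 3, 4, 5)    # the original is untouched

try:
    t[0] = 99                  # tuples reject item assignment
except TypeError:
    print("tuples do not support item assignment")
```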
17,421
62,833,614
I am working on a project with OpenCV and python but stuck on this small problem. I have end-points' coordinates on many lines stored in a list. Sometimes a case is appearing that from a single point, more than one line is detected. From among these lines, I want to keep the line of shortest length and eliminate all the other lines thus my image will contain no point from where more than one line is drawn. My variable which stores the information(coordinates of both the end-points) of all the lines initially detected is as follows: ``` var = [[Line1_EndPoint1, Line1_EndPoint2], [Line2_EndPoint1, Line2_EndPoint2], [Line3_EndPoint1, Line3_EndPoint2], [Line4_EndPoint1, Line4_EndPoint2], [Line5_EndPoint1, Line5_EndPoint2]] ``` where, LineX\_EndPointY(line number "X", endpoint "Y" of that line) is of type [x, y] where x and y are the coordinates of that point in the image. Can someone suggest me how to solve this problem. **You can modify the way data of the lines are stored. If you modify, please explain your data structure and how it is created** Example of such data: ``` [[[551, 752], [541, 730]], [[548, 738], [723, 548]], [[285, 682], [226, 676]], [[416, 679], [345, 678]], [[345, 678], [388, 674]], [[249, 679], [226, 676]], [[270, 678], [388, 674]], [[472, 650], [751, 473]], [[751, 473], [716, 561]], [[731, 529], [751, 473]]] ``` Python code would be appreciable.
2020/07/10
[ "https://Stackoverflow.com/questions/62833614", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11651779/" ]
I still having the same problems ``` @EnableIntegration @Configuration @TestPropertySource(locations="classpath:/msc-test.properties") @Slf4j @RunWith(SpringRunner.class) @ActiveProfiles("test") @ContextConfiguration(classes = MessagingListenerTestConfig.class) @Import(TestChannelBinderConfiguration.class) @SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE) @DirtiesContext public class MessagingListenerTest { @Autowired private MessagingListener listener; @Autowired private InputDestination inputDestination; @Autowired private OutputDestination outputDestination; @Mock private RestTemplate restTemplate; private static final String EXPECTED_URL = "http://localhost:11000/test/v2/verification/messaging"; @Before public void setup() { restTemplate = mock(RestTemplate.class); ReflectionTestUtils.setField(listener, "restTemplate", restTemplate); ResponseEntity<String> mockResponse = new ResponseEntity<>("{}", HttpStatus.ACCEPTED); when(restTemplate.postForEntity(any(), any(), eq(String.class))).thenReturn(mockResponse); } @Test public void testHundleMessage() { JSONObject obj1 = new JSONObject() .put("id", 1) .put("targetClass", "/test/v2/verification"); Message<String> request = MessageBuilder.withPayload(obj1.toString()).build(); log.info("request Test : "+ request.getPayload()); inputDestination.send(new GenericMessage<byte[]>(request.getPayload().getBytes(StandardCharsets.UTF_8))); listener.handle(request); //Verificar la url del restTemplate Mockito.verify(restTemplate, Mockito.times(1)).postForEntity(eq(EXPECTED_URL), any(), eq(String.class)); //Verificar la recepción de los mensajes assertThat(outputDestination.receive()).isNotNull(); assertThat(outputDestination.receive().getPayload().toString()).contains("topicName"); } } ``` **just in this line** ``` inputDestination.send(new GenericMessage<byte[]>(request.getPayload().getBytes(StandardCharsets.UTF_8))); ``` **and this the error Junit** ``` java.lang.IndexOutOfBoundsException: Index: 0, Size: 0 
at java.util.ArrayList.rangeCheck(ArrayList.java:653) at java.util.ArrayList.get(ArrayList.java:429) at org.springframework.cloud.stream.binder.test.AbstractDestination.getChannel(AbstractDestination.java:34) at org.springframework.cloud.stream.binder.test.InputDestination.send(InputDestination.java:37) at com.common.messaging.MessagingListenerTest.testHundleMessage(MessagingListenerTest.java:93) ``` **and the console error** ``` 2020-07-14 11:29:16.850 INFO 25240 --- [ main] c.b.a.m.c.m.MessagingListenerTest : The following profiles are active: test 2020-07-14 11:29:18.171 INFO 25240 --- [ main] faultConfiguringBeanFactoryPostProcessor : No bean named 'errorChannel' has been explicitly defined. Therefore, a default PublishSubscribeChannel will be created. 2020-07-14 11:29:18.192 INFO 25240 --- [ main] faultConfiguringBeanFactoryPostProcessor : No bean named 'taskScheduler' has been explicitly defined. Therefore, a default ThreadPoolTaskScheduler will be created. 2020-07-14 11:29:18.212 INFO 25240 --- [ main] faultConfiguringBeanFactoryPostProcessor : No bean named 'integrationHeaderChannelRegistry' has been explicitly defined. Therefore, a default DefaultHeaderChannelRegistry will be created. 
2020-07-14 11:29:18.392 INFO 25240 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'integrationChannelResolver' of type [org.springframework.integration.support.channel.BeanFactoryChannelResolver] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2020-07-14 11:29:18.429 INFO 25240 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'integrationDisposableAutoCreatedBeans' of type [org.springframework.integration.config.annotation.Disposables] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2020-07-14 11:29:20.113 INFO 25240 --- [ main] o.s.s.c.ThreadPoolTaskScheduler : Initializing ExecutorService 'taskScheduler' 2020-07-14 11:29:20.356 INFO 25240 --- [ main] o.s.i.e.EventDrivenConsumer : Adding {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel' channel 2020-07-14 11:29:20.358 INFO 25240 --- [ main] o.s.i.c.PublishSubscribeChannel : Channel 'application.errorChannel' has 1 subscriber(s). 2020-07-14 11:29:20.361 INFO 25240 --- [ main] o.s.i.e.EventDrivenConsumer : started bean '_org.springframework.integration.errorLogger' 2020-07-14 11:29:20.382 INFO 25240 --- [ main] c.b.a.m.c.m.MessagingListenerTest : Started MessagingListenerTest in 4.629 seconds (JVM running for 9.331) 2020-07-14 11:29:23.255 INFO 25240 --- [ main] c.b.a.m.c.m.MessagingListenerTest : request Test : {"targetClass":"/test/v2/verification","id":1} 2020-07-14 11:29:28.207 INFO 25240 --- [ main] o.s.i.e.EventDrivenConsumer : Removing {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel' channel 2020-07-14 11:29:28.207 INFO 25240 --- [ main] o.s.i.c.PublishSubscribeChannel : Channel 'application.errorChannel' has 0 subscriber(s). 
2020-07-14 11:29:28.207 INFO 25240 --- [ main] o.s.i.e.EventDrivenConsumer : stopped bean '_org.springframework.integration.errorLogger' 2020-07-14 11:29:28.208 INFO 25240 --- [ main] o.s.s.c.ThreadPoolTaskScheduler : Shutting down ExecutorService 'taskScheduler' Picked up JAVA_TOOL_OPTIONS: -agentpath:"C:\windows\system32\Aternity\Java\JavaHookLoader.dll"="C:\ProgramData\Aternity\hooks" ``` **what could be the problem**
I think the problem is that you are calling `outputDestination.receive()` two times. First time you are getting the message and when trying to reach it second time it's not there. For me was working this approach: ``` String messagePayload = new String(outputDestination.receive().getPayload()); assertThat(messagePayload).contains("topicName"); ```
17,426
25,572,574
Hello I've installed a local version of pip using ``` python get-pip.py --user ``` After that I can't find the path of pip, so I run: ``` python -m pip install --user Cython ``` Finally I can't import Cython ``` import Cython Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named 'Cython' ```
2014/08/29
[ "https://Stackoverflow.com/questions/25572574", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2486641/" ]
You need to filter each date field individually within the range, like so: ``` WHERE (Date1 >= ISNULL(@DateFrom,'17531231') AND Date1 <= ISNULL(@dateTo,'20991231')) OR (Date2 >= ISNULL(@DateFrom,'17531231') AND Date2 <= ISNULL(@dateTo,'20991231')) OR (Date3 >= ISNULL(@DateFrom,'17531231') AND Date3 <= ISNULL(@dateTo,'20991231')) ``` Otherwise you aren't checking the range for each date field, just that a date in that row matches one of the criteria.
Just for another way to look at it. This solution would also work. It makes the where clause simpler at the expense of an additional block of code and a join. ``` CREATE TABLE #dates (id INT, date1 DATE, date2 DATE, date3 DATE) INSERT INTO #dates VALUES ('1','12/13/1945','11/4/1930',NULL), ('2','9/12/1970','9/13/1971','9/14/1972'), ('3',NULL,NULL,NULL), ('4','1/1/2000','1/1/2001','1/1/2002') DECLARE @dateFrom datetime = '1940-01-01', @dateTo datetime = '1950-01-01' ;WITH dateFilter AS ( SELECT id,[Date],DateIndex FROM (SELECT id, date1, date2, date3 FROM #dates) p UNPIVOT([DATE] FOR DateIndex IN ([date1],[date2],[date3])) AS up WHERE up.[DATE] BETWEEN @dateFrom AND @dateTo ) SELECT d.* FROM #dates d INNER JOIN dateFilter df ON df.id = d.id DROP TABLE #dates ```
17,427
26,569,498
I am new to Python. I want to store each HTML tag as an item of a list.

```
from bs4 import BeautifulSoup

text = """
<body>
<div class="product">
<div class="x">orange</div>
<div class="x">apple</div>
<p> This is text </p>
</div>
</body>"""

soup = BeautifulSoup(text)
y=[]
for i in (soup.find_all("div", class_="product")):
    y.append(i.get_text().encode("utf-8").strip())
```

With the above code the length of y is 1 and all the text is stored in one list item. However, it is necessary to parse using "div product" and store the text inside each HTML tag as a separate list item. So y should be:

```
y =['orange', 'apple', 'This is text']
```

instead of:

```
y=['orange\napple\n This is text']
```
2014/10/26
[ "https://Stackoverflow.com/questions/26569498", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2291434/" ]
Assuming your HTML code looks something like this: ``` <ul class="ulComprar"> <li>Milk</li> <li class="liEspecial">Eggs</li> <li>Bread</li> </ul> ``` Then you could use the following query snippet to show and hide element li.liEspecial: ``` $('.ulComprar').hover( function() { $('.liEspecial', this).hide(); // hides li on mouse enter }, function() { $('.liEspecial', this).show(); // shows li on mouse leave } ); ```
We don't have `display: normal;` in CSS; the default `display` for [li](http://www.w3schools.com/tags/tag_li.asp) is `list-item`. Try this code:

```
$('.ulComprar').on('mouseenter', function () {
    $('.liEspecial').css("display", "list-item");
}).on('mouseleave', function () {
    $('.liEspecial').css("display", "none");
});
```

[jsfiddle](http://jsfiddle.net/yhhoL6ce/1/)
17,430
36,306,938
I want to generate colors that go well with a given `UIColor` (Triadic, Analogues, Complement etc). I have read a lot of posts like [this](https://stackoverflow.com/questions/14095849/calculating-the-analogous-color-with-python/14116553#14116553) and [this](https://stackoverflow.com/questions/180/function-for-creating-color-wheels) and [this](https://stackoverflow.com/questions/4235072/math-behind-the-colour-wheel). In the last post, The answerer suggested going to easyrgb.com. So I went there. I learnt that I need to "rotate" the hue by some degrees if I want to generate those color schemes. For example, for Triadic colors, I need to rotate it by ± 120 degrees. I know that I can get a color's hue by calling `getHue(:saturation:brightness:)`, but how can I "rotate" it? Isn't the hue returned by the method a number? A number between 0 and 1? This makes no sense to me! I think the first post might have the answer but the code is written in python. I only learnt a little bit of python so I don't quite know what this means: ``` h = [(h+d) % 1 for d in (-d, d)] # Rotation by d ``` From the comment, I see that this line somehow rotates the hue by d degrees. But I don't know what that syntax means. Can anyone tell me how to rotate the hue, or translate the above code to swift?
2016/03/30
[ "https://Stackoverflow.com/questions/36306938", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5133585/" ]
The hue component ranges from 0.0 to 1.0, which corresponds to the angle from 0º to 360º in a color wheel (compare [Wikipedia: HSL and HSV](http://en.wikipedia.org/wiki/HSL_and_HSV)). To "rotate" the hue component by `n` degrees, use: ``` let n = 120 // 120 degrees as an example hue = fmod(hue + CGFloat(n)/360.0, 1.0) ``` The `fmod()` function is used to normalize the result of the addition to the range 0.0 to 1.0. The hue for the complementary color would be ``` let hueComplement = fmod(hue + 0.5, 1.0) ```
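As for the Python line quoted in the question, `h = [(h+d) % 1 for d in (-d, d)]` builds a two-element list holding the hue rotated by -d and by +d, where `d` has already been scaled from degrees into the 0.0 to 1.0 range; `% 1` does the same wrap-around that `fmod(..., 1.0)` does here. A spelled-out version (the function name is mine, not from either post):

```python
def rotated_hues(h, degrees):
    # Scale the angle to the 0.0-1.0 hue range, then wrap with modulo.
    d = degrees / 360.0
    return [(h + delta) % 1.0 for delta in (-d, d)]

# Triadic partners of hue 0.5 (180 degrees): rotated -120 and +120 degrees.
left, right = rotated_hues(0.5, 120)   # approximately 0.1667 and 0.8333
```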
> **In SwiftUI you can do this by using Apple's documentation code:**

```
struct HueRotation: View {
    var body: some View {
        HStack {
            ForEach(0..<6) {
                Rectangle()
                    .fill(.linearGradient(
                        colors: [.blue, .red, .green],
                        startPoint: .top,
                        endPoint: .bottom))
                    .hueRotation(.degrees(Double($0 * 36)))
                    .frame(width: 60, height: 60, alignment: .center)
            }
        }
    }
}
```
17,431
70,298,164
I have this python coded statement: ``` is_headless = ["--headless"] if sys.argv[0].find('console.py') != -1 else [""] ``` 1. In what way does the blank between `["--headless"]` and `if` control the code line? 2. How and would `"--headless"` ever be an element in the `is_headless` variable? 3. Using the variable name `is_headless` suggests the final value would be `True` or `False`. Is this correct thinking? In what case would `True` or `False` be assigned? 4. Is `[""]` a way to indicate `False`? 5. A little confused...
2021/12/09
[ "https://Stackoverflow.com/questions/70298164", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17640238/" ]
There is not much to it. Just increment the pointer. → `p++` ``` void printArray(int *s_ptr, int *e_ptr) { for (int *p = s_ptr; p <= e_ptr; p++) { printf("%d\n", *p); } } ```
> > *How can I can print the whole array using only the addreses of the first element and the last element?* > > > To start with, couple of things about array that you should know (if not aware of): 1. An array is a collection of elements of the same type placed in **contiguous memory locations**. 2. An array name, when used in an expression, **will convert to pointer to first element of that array** (there are few *exceptions* to this rule). E.g.: ``` #include <stdio.h> int main (void) { int arr[] = {1, 2, 3, 4, 5}; // Hardcoding the size of array (5) in for loop condition // Note that you can use expression - sizeof (arr) / sizeof (arr[0]) to get size // of array instead of hardcoding it. for (size_t i = 0; i < 5; ++i) { printf ("%d\n", arr[i]); } return 0; } ``` Here the `arr[i]`, in `printf()` statement, will be interpreted as1) - ``` (*((arr) + (i))) ``` and, in this expression, the `arr` will be converted to address of first element of array `arr` i.e. `&arr[0]` and to that address the value of `i` will be added and the resultant pointer will be then dereferenced to get the value at that location. Since, the array element placed in contiguous memory locations, adding `0`, `1`, `2`, .., and so on, to the address of first element of array and dereferencing it will give the value of `1`st, `2`nd, `3`rd, .., and so on, elements of array respectively. Now, you are passing the address of first and last element of given array to function `printArray()`. Using the address of first element we can get the value of all the elements of array. The only thing we need is the size of array in `printArray()` function. From C Standard#6.5.6p9 *[emphasis added]* > > **When two pointers are subtracted, both shall point to elements of the same array object**, or one past the last element of the array object; **the result is the difference of the subscripts of the two array elements**. 
The size of the result is implementation-defined, and **its type (a signed integer type) is ptrdiff\_t** defined in the <stddef.h> header. > > > The two argument to `printArray()` function are pointer to first and last element of same array. As you have shown in question, the `printArray()` is called like this: ``` printArray(&array[0], &array[MaxSize-1]); ^ ^^^^^^^^^ | | subscript subscript of first of last element element ``` When we subtract them the result we get is the difference of the subscripts of the first and last elements of that array. That means, in `printArray()` function() this ``` e_ptr - s_ptr ``` will give result same as ``` MaxSize-1 - 0 = MaxSize-1 ``` if we add one to this result, we will get size of array ``` MaxSize-1 + 1 = MaxSize ``` Now, we know how to get size of array in `printArray()` and also know how to get the value of all elements of array using the address of first element. Lets implement `printArray()`: ``` void printArray(int *s_ptr, int *e_ptr) { // To get the size of array subtract e_ptr and s_ptr and add 1 ptrdiff_t size = e_ptr - s_ptr + 1; for (ptrdiff_t i = 0; i < size; ++i) { printf ("%d\n", *((s_ptr) + (i))); } } ``` This will output all the elements of array, provided `s_ptr` should be pointer to first element and `e_ptr` should be pointer to last element of same array. This `*((s_ptr) + (i))` looks bit cluttered. As per the definition of subscript operator `[]`1), `*((s_ptr) + (i))` is same as `s_ptr[i]`. So, the `printf()` statement in the for loop can be written as - `printf ("%d\n", s_ptr[i]);`. Alternatively, you can use the pointer of type same as type of array element. Assign the first element address to this pointer and dereferencing it to get value at that location. Increment the pointer and print till it reaches to address of last element of array. The implementation is shown in *@Cheatah* post. --- 1). 
C11 standard#6.5.2.1p2 > > The definition of the subscript operator [] is that E1[E2] is identical to (\*((E1)+(E2))) > > >
17,432
12,193,803
On Windows 7, I am using the command line ``` python -m SimpleHTTPServer 8888 ``` to invoke a simple web server to serve files from a directory, for development. The problem is that the server seems to keep the files in cache. Old versions of files are served despite newer ones being available. Is there a way to specify the "no cache" option from the command line directly?
2012/08/30
[ "https://Stackoverflow.com/questions/12193803", "https://Stackoverflow.com", "https://Stackoverflow.com/users/605337/" ]
I suggest that you press Ctrl+F5 when refreshing the browser. Just ran into [this](https://gist.github.com/3300372); it just might be the thing you are looking for (it's in Ruby, by the way).
Maybe it's the browser caching your files not the SimpleHTTPServer. Try deactivating the browser cache first.
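Neither answer gives a server-side switch, and `SimpleHTTPServer` has no built-in no-cache option on the command line, but the request handler can be subclassed to send no-cache headers. A minimal sketch (written for Python 3, where the module became `http.server`; the class name `NoCacheHandler` is made up for illustration):

```python
import http.server
import socketserver
import threading
import urllib.request

class NoCacheHandler(http.server.SimpleHTTPRequestHandler):
    """Serve files like SimpleHTTPServer, but tell the browser not to cache."""
    def end_headers(self):
        # Headers are buffered until end_headers(), so extras can be added here
        self.send_header('Cache-Control', 'no-cache, no-store, must-revalidate')
        self.send_header('Expires', '0')
        super().end_headers()

# Serve the current directory on a free port in a background thread
httpd = socketserver.TCPServer(('127.0.0.1', 0), NoCacheHandler)
threading.Thread(target=httpd.serve_forever, daemon=True).start()

resp = urllib.request.urlopen('http://127.0.0.1:%d/' % httpd.server_address[1])
print(resp.headers['Cache-Control'])
```

With this in place every response carries `Cache-Control: no-cache`, so the browser re-fetches files on each reload instead of serving stale copies.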
17,433
10,361,714
I mostly spend time on Python/Django and Objective-C/CocoaTouch and js/jQuery in the course of my daily work. My editor of choice is `vim` for Python/Django and js/jQuery and `xcode` for Objective-C/CocoaTouch. One of the bottlenecks on my development speed is the pace at which I read existing code, particularly open source libraries which I use. In Python/Django for example, when I encounter some new features introduced by django developers, I get curious and begin exploring the code base manually. For example, when class-based views were introduced from django 1.3 onwards, reference - <https://docs.djangoproject.com/en/dev/topics/class-based-views/> - I will check out the example code shown: ``` from django.views.generic import TemplateView class AboutView(TemplateView): template_name = "about.html" ``` And try it out on one of my projects. More importantly, I am curious about what goes on behind the scenes, so I will dig into the source code - ``` # django/views/generic/__init__.py file from django.views.generic.base import View, TemplateView, RedirectView from django.views.generic.dates import (ArchiveIndexView, YearArchiveView, MonthArchiveView, WeekArchiveView, DayArchiveView, TodayArchiveView, DateDetailView) from django.views.generic.detail import DetailView from django.views.generic.edit import FormView, CreateView, UpdateView, DeleteView from django.views.generic.list import ListView class GenericViewError(Exception): """A problem in a generic view.""" pass ``` From here, I will trace it backwards to the django/views/generic/base.py file and find out exactly what `TemplateView` class does:- ``` class TemplateView(TemplateResponseMixin, View): """ A view that renders a template. 
 """ def get_context_data(self, **kwargs): return { 'params': kwargs } def get(self, request, *args, **kwargs): context = self.get_context_data(**kwargs) return self.render_to_response(context) ``` And here it shows that the `TemplateView` class inherits from the `TemplateResponseMixin` and `View` classes... and I continue digging further... and so on... **The problem is, this is an extremely inefficient and slow process (to "follow" class hierarchies manually and open up each file along the way).** So the question is - is there an easy way/UI tool (or other visual solution) that parses Python code in a particular project and visualizes class hierarchies which I can then inspect easily by "clicking" on a specific class I am interested to read about? **Note** that I am aware of the IPython shell but that doesn't seem as user-friendly as a visual display tool. For example, there's `F-Script` in the world of Objective-C/iOS/Mac programming, which not only provides a shell (much like the python or IPython shell), but provides a visual way for developers to introspect class hierarchies. Reference screenshot:- ![enter image description here](https://i.stack.imgur.com/kJzqZ.jpg) So is there a class-hierarchy visualization tool (for Python specifically, but even better if it's generic and can be used for different languages)??? What are your methods of getting up to speed efficiently when reading open source source code??? **UPDATED** Per advice below, I tried out `ctags` and the vim plugin `taglist` and I was able to use `:TlistOpen` to open up a side buffer in vim like this:- ![enter image description here](https://i.stack.imgur.com/wD0x8.png) This looks really cool as `:TlistOpen` now essentially shows me all the classes and functions that are available in my currently open buffer.
My problem now is that when I attempt to do `Ctrl``]` while my cursor is on `TemplateView`, I get the following error:- ![enter image description here](https://i.stack.imgur.com/heVFi.png) What am I doing wrong? Is it because my django source code is in a `virtualenv`? Or is there something specific I have to do to make `ctags`/`taglist` "aware" of the django source code?
2012/04/28
[ "https://Stackoverflow.com/questions/10361714", "https://Stackoverflow.com", "https://Stackoverflow.com/users/482506/" ]
Tags are a very good start indeed. (There's too much stuff all over the place on it, so I'll just provide you with one extra keyword to search with: ctags.) In Vim, it ends up (in the basic case) with `Ctrl+]` to go to a class/function definition and `Ctrl+T` to return.
I've been using [exuberant ctags](http://ctags.sourceforge.net/) with [taglist](http://www.vim.org/scripts/script.php?script_id=273) for vim. Use `ctrl``]` to jump to class definition in the current window, `ctrl``w``]` to jump to the definition in a split window. You can install exuberant ctags via homebrew: ``` brew install ctags ``` Be sure to use the one installed at `/usr/local/bin` by homebrew, not the old `ctags` in `/usr/bin`. It is also helpful to put `--python-kinds=-vi` in `~/.ctags` to skip indexing variables and imports for Python files. Another alternative would be to use a variant of `cscope` or `pyscope` though you must have your vim compiled with `cscope` option enabled.
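Short of an external visualization tool, a class's ancestry can also be dumped from within Python itself via `__bases__`. A small sketch; the `TemplateView`/`TemplateResponseMixin`/`View` classes below are bare stand-ins for the Django ones, just to show the shape of the output:

```python
def class_tree(cls, depth=0):
    """Return an indented list naming cls and all of its ancestor classes."""
    lines = ['%s%s' % ('  ' * depth, cls.__name__)]
    for base in cls.__bases__:
        lines.extend(class_tree(base, depth + 1))
    return lines

# Stand-ins mimicking the Django classes discussed in the question
class View(object): pass
class TemplateResponseMixin(object): pass
class TemplateView(TemplateResponseMixin, View): pass

print('\n'.join(class_tree(TemplateView)))
```

For the linearized lookup order rather than the raw tree, `TemplateView.__mro__` gives the same information in the order Python actually resolves attributes.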
17,442
31,846,508
I'm new to Python and I'm trying to dynamically create new instances of a class. So let me give you an example: if I have a class like this: ``` class Person(object): def __init__(self, name, age, job): self.name = name self.age = age self.job = job ``` As far as I know, for each new instance I have to insert, I would have to declare a variable and attach it to the person object, something like this: ``` variable = Person(name, age, job) ``` Is there a way in which I can dynamically do this? Let's suppose that I have a dictionary like this: ``` persons_database = { 'id' : ['name', age, 'job'], ..... } ``` Can I create a piece of code that can iterate over this db and automatically create new instances of the `Person` class?
2015/08/06
[ "https://Stackoverflow.com/questions/31846508", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5196412/" ]
Just iterate over the dictionary using a for loop. ``` people = [] for id in persons_database: info = persons_database[id] people.append(Person(info[0], info[1], info[2])) ``` Then the List `people` will have `Person` objects with the data from your persons\_database dictionary If you need to get the Person object from the original id you can use a dictionary to store the Person objects and can quickly find the correct Person. ``` people = {} for id, data in persons_database.items(): people[id] = Person(data[0], data[1], data[2]) ``` Then you can get the person you want from his/her id by doing `people[id]`. So to increment a person with id = 1's age you would do `people[1].increment_age()` ------ Slightly more advanced material below ---------------- Some people have mentioned using list/dictionary comprehensions to achieve what you want. Comprehensions would be slightly more efficient and more pythonic, but a little more difficult to understand if you are new to programming/python As a dictionary comprehension the second piece of code would be `people = {id: Person(*data) for id, data in persons_database.items()}` And just so nothing here goes unexplained... The `*` before a List in python unpacks the List as separate items in the sequential order of the list, so for a List `l` of length n, `*l` would evaluate to `l[0], l[1], ... , l[n-2], l[n-1]`
Sure, a simple [list comprehension](https://docs.python.org/2/tutorial/datastructures.html#list-comprehensions) should do the trick: ``` people = [Person(*persons_database[pid]) for pid in persons_database] ``` This just loops through each key (id) in the person database and creates a person instance by passing through the list of attributes for that id directly as args to the `Person()` constructor.
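Putting the pieces together, a runnable sketch of the dictionary approach; the sample database entries here are invented for illustration:

```python
class Person(object):
    def __init__(self, name, age, job):
        self.name = name
        self.age = age
        self.job = job

# Hypothetical data in the shape the question describes: id -> [name, age, job]
persons_database = {
    1: ['Alice', 30, 'engineer'],
    2: ['Bob', 45, 'teacher'],
}

# One Person per database row, keyed by id so lookups stay cheap
people = {pid: Person(*data) for pid, data in persons_database.items()}
```

Now `people[2].job` gives `'teacher'`, and adding a row to the dictionary just means one more pass through the comprehension.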
17,444
3,580,520
To add gtk-2.0 to my virtualenv I did the following: ``` $ virtualenv --no-site-packages --python=/usr/bin/python2.6 myvirtualenv $ cd myvirtualenv $ source bin/activate $ cd lib/python2.6/ $ ln -s /usr/lib/pymodules/python2.6/gtk-2.0/ ``` [Virtualenv on Ubuntu with no site-packages](https://stackoverflow.com/questions/249283/virtualenv-on-ubuntu-with-no-site-packages) Now in the Python interpreter when I do import gtk it says: No module named gtk. When I start the interpreter with sudo it works. Any reason why I need to use sudo and is there a way to prevent it? **Update:** Forgot to mention that cairo and pygtk work but it's not the one I need. **Update2:** Here the directory to show that I ain't crazy. <http://www.friendly-stranger.com/pictures/symlink.jpg>
2010/08/27
[ "https://Stackoverflow.com/questions/3580520", "https://Stackoverflow.com", "https://Stackoverflow.com/users/145117/" ]
`sudo python` imports it just fine because that interpreter isn't using your virtual environment. So don't do that. You only linked in one of the necessary items. Do the others mentioned in the answer to the question you linked as well. (The pygtk.pth file is of particular importance, since it tells python to actually put that directory you linked onto the python path) Update ------ Put that stuff in $VIRTUALENV/lib/python2.6/**site-packages/** rather than the directory above that. Looks like the .pth files aren't read from that directory - just from site-packages
This works for me (Ubuntu 11.10): once you activate your virtualenv directory make sure 'dist-packages' exists: ``` mkdir -p lib/python2.7/dist-packages/ ``` Then, make links: For GTK2: ``` ln -s /usr/lib/python2.7/dist-packages/glib/ lib/python2.7/dist-packages/ ln -s /usr/lib/python2.7/dist-packages/gobject/ lib/python2.7/dist-packages/ ln -s /usr/lib/python2.7/dist-packages/gtk-2.0* lib/python2.7/dist-packages/ ln -s /usr/lib/python2.7/dist-packages/pygtk.pth lib/python2.7/dist-packages/ ln -s /usr/lib/python2.7/dist-packages/cairo lib/python2.7/dist-packages/ ``` For GTK3: ``` ln -s /usr/lib/python2.7/dist-packages/gi lib/python2.7/dist-packages/ ```
17,445
10,350,765
Here is my basic problem: I have a Python file with an import of ``` from math import sin,cos,sqrt ``` I need this file to still be 100% CPython compatible to allow my developers to write 100% CPython code and employ the great tools developed for Python. Now enter Cython. In my Python file, the trig functions get called millions of times (fundamental to the code, can't change this). Is there any way that through some Python-fu in the main python file, or Cython magic otherwise I can instead use the C/C++ math functions using some variation on the Cython code ``` cdef extern from "math.h": double sin(double) ``` That would give me near-C performance, which would be awesome. [Stefan's talk](http://www.behnel.de/cython200910/talk.html) says specifically this can't be done, but the talk is two years old, and there are many creative people out there
2012/04/27
[ "https://Stackoverflow.com/questions/10350765", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1360263/" ]
I'm not a Cython expert, but AFAIK, all you could do is write a Cython wrapper around `sin` and call that. I can't imagine that's really going to be faster than `math.sin`, though, since it's still using Python calling semantics -- the overhead is in all the Python stuff to call the function, not the actual trig calculations, which are done in C when using CPython too. Have you considered using [Cython pure mode](http://docs.cython.org/src/tutorial/pure.html), which makes the source CPython-compatible?
I may have misunderstood your problem, but the [Cython documentation on interfacing with external C code](http://docs.cython.org/src/userguide/external_C_code.html#resolving-naming-conflicts-c-name-specifications) seems to suggest the following syntax: ``` cdef extern from "math.h": double c_sin "sin" (double) ``` which gives the function the name `sin` in the C code (so that it correctly links to the function from `math.h`), and `c_sin` in the Python module. I'm not really sure I understand what this achieves in this case, though - why would you want to use `math.sin` in Cython code? Do you have some statically typed variables, and some dynamically typed ones?
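The call-overhead point made in the first answer is visible even in plain CPython. A rough micro-benchmark sketch; absolute numbers vary by machine, so treat the comparison only qualitatively:

```python
import timeit

# Attribute lookup on the module object happens on every call
t_attr = timeit.timeit('math.sin(0.5)', setup='import math', number=200000)

# Binding the function to a local name up front skips that lookup
t_local = timeit.timeit('sin(0.5)', setup='from math import sin', number=200000)

print('math.sin: %.4fs  sin: %.4fs' % (t_attr, t_local))
```

Either way, the time is dominated by Python's function-call machinery rather than the C trig computation itself, which is the reason a thin Cython wrapper around `sin` buys little on its own.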
17,454
2,844,365
I'm a novice developing an application using Python (2.5) as the backend and Qt(3) as the front-end GUI designer. I have 5 different dialogs to implement the scripts. I only know how to load the (main) window: ``` from qt import * from dialogselectkernelfile import * from formcopyextract import * import sys if __name__ == "__main__": app = QApplication(sys.argv) f = DialogSelectKernelFile() f.show() app.setMainWidget(f) app.exec_loop() ``` The main dialog opens on running. I have a set of Back, Next, Cancel buttons; pushing each one should open the next or previous dialog. I use the pyuic compiler for source translation. How can I do this from Python? Please reply, I'm running out of time. I don't know how to load another dialog from a push button's signal in another dialog. Help me please. Thanks a lot.
2010/05/16
[ "https://Stackoverflow.com/questions/2844365", "https://Stackoverflow.com", "https://Stackoverflow.com/users/995052/" ]
As Ryan Bigg suggested, `simple_format` is the best tool for the job: it's html-safe and much neater than other solutions. So for `@var`: ``` <%= simple_format(@var) %> ``` If you need to sanitize the text to get rid of HTML tags, you should do this *before* passing it to `simple_format`. <http://api.rubyonrails.org/classes/ActionView/Helpers/TextHelper.html#method-i-simple_format>
The best way I can figure to go about this is using the sanitize method to strip all but the BR tag we want. Assume that we have `@var` with the content `"some\ntext"`: Trying `<%= @var.gsub(/\n/, '<br />') %>` doesn't work. Trying `<%= h @var.gsub(/\n/, '<br />').html_safe %>` doesn't work and is unsafe. Trying `<%= sanitize(@var.gsub(/\n/, '<br />'), :tags => %w(br) %>` WORKS. I haven't tested this very well, but it allows the BR tag to work, and replaced a dummy script alert I added with white space, so it seems to be doing its job. If anyone else has an idea or can say if this is a safe solution, please do. Update: Another idea suggested by Jose Valim: `<%= h(@var).gsub(/\n/, '<br />') %>` Works
17,459
56,674,550
I want to split a text that contains numbers ``` text = "bla bla 1 bla bla bla 142 bla bla (234.22)" ``` and want to add a `'\n'` before and after each number. ``` > "bla bla \n1\n bla bla bla \n142\n bla bla (234.22)" ``` The following function gives me the sub strings, but it throws away the pattern, i.e. the numbers. What is the best way to replace a pattern with something that contains the pattern in python? ``` re.split(' [0123456789]+ ', text) ```
2019/06/19
[ "https://Stackoverflow.com/questions/56674550", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5452008/" ]
Use ``` s = re.sub(r' \d+ ', '\n\\g<0>\n', s) ``` See the [regex demo](https://regex101.com/r/081OkV/1). To replace only standalone numbers as whole words use ``` s = re.sub(r'\b\d+\b', '\n\\g<0>\n', s) ``` If you want to match the numbers enclosed with whitespaces only use either of ``` re.sub(r'(?<!\S)\d+(?!\S)', '\n\\g<0>\n', s) # also at the start/end of string re.sub(r'(?<=\s)\d+(?=\s)', '\n\\g<0>\n', s) # only between whitespaces ``` Actually, the replacement can be specified as `'\n\g<0>\n'`, as `\g` is an undefined escape sequence and the backslash will be treated as literal char in this case and will be preserved in the resulting string to form the regex backreference construct. [Python demo](https://ideone.com/KAhtZj): ``` import re s = "bla bla 1 bla bla bla 142 bla bla" s = re.sub(r'\b\d+\b', '\n\\g<0>\n', s) print(s) # => bla bla \n1\n bla bla bla \n142\n bla bla ```
Try this code!! This might help! ``` import re text = "bla bla 1 bla bla bla 142 bla bla" replaced = re.sub('([0-9]+)', r'\n\1\n',text) print(replaced) Output: 'bla bla \n1\n bla bla bla \n142\n bla bla' ```
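The question's original `re.split` approach can also be salvaged: wrapping the pattern in a capturing group makes `re.split` keep the delimiters in the result, so the pieces can simply be rejoined with newlines. A sketch using the string from the question:

```python
import re

text = "bla bla 1 bla bla bla 142 bla bla (234.22)"

# A capturing group makes re.split return the matched separators too
parts = re.split(r'( \d+ )', text)
result = '\n'.join(p.strip() for p in parts)
print(result)
```

Note that `(234.22)` is untouched, matching the expected output in the question, because the pattern only matches bare space-delimited integers.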
17,462
63,610,350
I have an int in Python whose byte order I want to reverse: `x = int(1234567899)`. I want the result to be `3674379849`. Explanation: `1234567899` = `0x499602DB` and `3674379849` = `0xDB029649`. How do I do that in Python?
2020/08/27
[ "https://Stackoverflow.com/questions/63610350", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13767076/" ]
``` >>> import struct >>> struct.unpack('>I', struct.pack('<I', 1234567899))[0] 3674379849 >>> ``` This converts the integer to a 4-byte array (`I`), then decodes it in reverse order (`>` vs `<`). Documentation: [`struct`](https://docs.python.org/3/library/struct.html)
If you just want the result, use [sabiks approach](https://stackoverflow.com/a/63610471/7505395) - if you want the intermediate steps for bragging rights, you would need to * create the hex of the number (#1) and maybe add a leading 0 for correctness * reverse it 2-byte-wise (#2) * create an integer again (#3) f.e. like so ``` n = 1234567899 # 1 h = hex(n) if len(h) % 2: # fix for uneven lengthy inputs (f.e. n = int("234",16)) h = '0x0'+h[2:] # 2 (skips 0x and prepends 0x for looks only) bh = '0x'+''.join([h[i: i+2] for i in range(2, len(h), 2)][::-1]) # 3 b = int(bh, 16) print(n, h, bh, b) ``` to get ``` 1234567899 0x499602db 0xdb029649 3674379849 ```
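On Python 3.2+ there is also a route with neither `struct` nor string slicing, via `int.to_bytes`/`int.from_bytes`. A sketch assuming, as in the question, that the value fits in 4 bytes:

```python
n = 1234567899  # 0x499602DB

# Serialize little-endian, reinterpret big-endian: the bytes come back reversed
swapped = int.from_bytes(n.to_bytes(4, 'little'), 'big')
print(hex(swapped))
```

The same trick generalizes to other widths by changing the byte count passed to `to_bytes`.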
17,463
71,632,619
I am new to Python. I have an XML file ("topstocks.xml") with some elements and attributes, which looks as below. I was trying to pass the attribute "id" as a function parameter, so that I can dynamically fetch the data. ``` <properties> <property id="H01" cost="106000" state="NM" percentage="0.12">2925.6</property> <property id="H02" cost="125000" state="AZ" percentage="0.15">4500</property> <property id="H03" cost="119000" state="NH" percentage="0.13">3248.7</property> </properties> ``` My Python code goes like this: ``` import xml.etree.cElementTree as ET tree = ET.parse("topstocks.xml") root = tree.getroot() def find_all(id ='H02'): # I am trying to pass attribute "id" stocks = [] for child in root.iter("property"): data = child.attrib.copy() data["cost"] = float(data["cost"]) data["percentage"] = float(data["percentage"]) data["netIncome"] = float(child.text) stocks.append(data) return stocks def FindAll(id ='H02'): settings = find_all(id) return settings if __name__=="__main__": idSelection = "H02" result= FindAll(id=idSelection) print(result) ``` Its output should print: {'id': 'H02', 'cost': 125000.0, 'state': 'AZ', 'percentage': 0.15, 'netIncome': 4500.0} Thank you in advance.
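For reference, the filtering the question is after only needs one `if` on the attribute inside the loop. A self-contained sketch, using the XML from the question inline instead of the file:

```python
import xml.etree.ElementTree as ET

XML = """<properties>
<property id="H01" cost="106000" state="NM" percentage="0.12">2925.6</property>
<property id="H02" cost="125000" state="AZ" percentage="0.15">4500</property>
<property id="H03" cost="119000" state="NH" percentage="0.13">3248.7</property>
</properties>"""

def find_all(root, id='H02'):
    stocks = []
    for child in root.iter('property'):
        if child.get('id') != id:   # skip every property except the requested one
            continue
        data = child.attrib.copy()
        data['cost'] = float(data['cost'])
        data['percentage'] = float(data['percentage'])
        data['netIncome'] = float(child.text)
        stocks.append(data)
    return stocks

result = find_all(ET.fromstring(XML), id='H02')
print(result)
```

With a file on disk, `ET.parse("topstocks.xml").getroot()` would replace `ET.fromstring(XML)`.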
2022/03/26
[ "https://Stackoverflow.com/questions/71632619", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14810351/" ]
You cannot combine make constructs, like `ifeq`, with shell constructs, like setting a shell variable. Makefiles are not scripts, like a shell script or a python script or whatever. Make works in two distinct phases: first ALL the makefiles are parsed, all make variables are assigned, all `ifeq` statements are resolved, and all targets and prerequisites are parsed and make constructs an internal graph of all the relationships between them. Basically, everything that is not indented with a TAB is parsed in this phase. Second, after ALL makefiles are parsed, make will walk (some part of) the graph created in the first step and, for targets that are out of date, it will expand the recipe (which is a shell script) then run a shell and give it the recipe. Everything that's indented with a TAB is handled in this phase. So, clearly you can't have ifeq conditions (expanded during the first phase) that depend on actions in recipes (run in the second phase). You can write your makefile like this: ``` VALUE_TO_TEST = bad_value MISMATCH = ifeq ($(VALUE_TO_TEST), expected_value) MISMATCH = yes endif all: ifdef MISMATCH # Do thing because there is a mismatch else # Do thing where there is no mismatch endif ``` I expect you don't want to do this, but although this second example is better than the first it still doesn't really explain enough about what you want to do, and why it's not acceptable to do things this way.
Sigh - so the answer after MANY permutations is the tab mistake: ``` a = MISMATCH= all: ifeq ($(a),) MISMATCH=yes endif ifdef MISMATCH $(info fooz) else $(info bark) endif ``` (make files are so frustrating)
17,464
66,109,204
I have a file called `setup.sh` which basically has this ``` python3 -m venv env source ./env/bin/activate # other setup stuff ``` When I run `sh setup.sh`, the environment folder `env` is created, and it will run my `#other setup stuff`, but it will skip over `source ./env/bin/activate`, which puts me in my environment. The command runs just fine if I do so in the terminal(I have a macbook), but not in my bash file. Three ideas I've tried: 1. ensuring execute priviges: `chmod +x setup.sh` 2. change the line `source ./env/bin/activate` to `. ./env/bin/activate` 3. run the file using `bash setup.sh` instead of `sh setup.sh` Is there a smarter way to go about being put in my environment, or some way I can get the `source` to run?
2021/02/08
[ "https://Stackoverflow.com/questions/66109204", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14745324/" ]
### global variable and change listener You can add an event listener listening for changes to the checkbox. You can use a global variable which keeps track of the unchecked boxes. ``` let countUnchecked = 0; ``` Initially its *value is 0*; when you add a new checkbox the count *increases by one*. When a box gets selected the count decreases by one, and when it gets deselected it increases by one. ``` itemCheck.addEventListener('change', function() { if (itemCheck.checked) { countUnchecked--; document.getElementById('unchecked-count').innerText = countUnchecked; } else { countUnchecked++; document.getElementById('unchecked-count').innerText = countUnchecked; } }) ``` ```js let todos = [] let countUnchecked = 0; function reflectToHTML(i) { let count = todos.length let listItem = document.createElement('li'), itemCheck = document.createElement('input') itemCheck.type = 'checkbox' itemCheck.id = 'check' + todos.length listItem.append(itemCheck, i['text']) itemCheck.addEventListener('change', function() { if (itemCheck.checked) { countUnchecked--; document.getElementById('unchecked-count').innerText = countUnchecked; } else { countUnchecked++; document.getElementById('unchecked-count').innerText = countUnchecked; } }) document.getElementById('todo-list').appendChild(listItem) document.getElementById('item-count').innerText = todos.length countUnchecked++; document.getElementById('unchecked-count').innerText = countUnchecked; } function createTodo() { let newTodo = { text: '', checked: 0 } newTodo['text'] = prompt('Item description') if (newTodo['text'] != null) { todos.push(newTodo) reflectToHTML(newTodo) } } ``` ```html <div class="container center"> <h1 class="center title">My TODO App</h1> <div class="flow-right controls"> <span>Item count: <span id="item-count">0</span></span> <span>Unchecked count: <span id="unchecked-count">0</span></span> </div> <button class="button center" onClick="createTodo();">New TODO</button> <ul id="todo-list" class="todo-list"> </ul> </div> ```
You can add an event listener just for the `<ul>` element and not for each `type='checkbox'` element ex: ```js document.querySelector("#todo-list").onchange = function() { document.querySelector("#unchecked-count").textContent = this.querySelectorAll("[type=checkbox]:not(:checked)").length; } ``` so here on each change we get the node list of the unchecked checkbox elements of that ul element and set the counter the length of that list.
17,465
22,146,205
### Context: I have been playing around with Python's wrapper for opencv2. I wanted to play with a few ideas and use a wide-angle camera similar to the 'rear view' cameras in cars. I got one from a scrapped crash car (it's got 4 wires). I took an educated guess from the wires' color coding and connected it up so that I power the power and ground lines from a USB type A and feed the NTSC composite+ composite- from an RCA connector. I bought an NTSC to USB converter [like this one](http://www.ebay.com/itm/like/231153493687?lpid=97). It came with drivers and some off-the-shelf VHStoDVD software. ### the problem: I used the run-of-the-mill examples online to trial test it like this: ``` import numpy as np import cv2 cam_index=0 cap=cv2.VideoCapture(cam_index) print cap.isOpened() ret, frame=cap.read() #print frame.shape[0] #print frame.shape[1] while (cap.isOpened()): ret, frame=cap.read() #gray=cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) cv2.imshow('frame', frame) if cv2.waitKey(1) & 0xFF == ord('q'): break #release and close cap.release() cv2.destroyAllWindows() ``` this is the output from the shell: ``` True Traceback (most recent call last): File "C:/../cam_capture_.py", line 19, in <module> cv2.imshow('frame', frame) error: ..\..\..\..\opencv\modules\highgui\src\window.cpp:261: error: (-215) size.width>0 && size.height>0 in function cv::imshow >>> ``` ### key Observations: [SCREENSHOTS](http://imgur.com/a/gXqr3#1) 1. in the control panel the USB dongle is shown as 'OEM capture' in Sound Video & Game controllers. So it's not seen as a simple plug-and-play webcam in 'Imaging devices' 2. If I open the VHStoDVD software I need to configure 2 aspects: * set as Composite * set encoding as NTSC then the camera feed from the analog camera is shown OK within the VHStoDVD application 3. When I open the device video channel in FLV (device capture). 
The device stream is just a black screen, but IF I open the VHStoDVD software WHILE FLV is streaming I get the camera's feed to stream in FLV and a black screen is shown on the VHStoDVD feed. Another important difference is that there is huge latency of approx. 0.5 sec when the feed is in FLV as opposed to running in VHStoDVD. 4. When running "cam_capture.py" as per the sample code above, at some point during runtime I will eventually get a stop error code 0x0000008e: detail: ``` stop: 0x0000008E (0xC0000005, 0xB8B5F417, 0X9DC979F4, 0X00000000 ) ks.sys - Address B8B5F417 base at B8B5900, Datestamp... beg mem dump phy mem dump complete ``` 5. if I try to print frame.shape[0] or frame.shape[1] I get a TypeError saying I cannot print type None 6. if I try another cam_index the result is always False ### TLDR: In 'control panel' the camera device is under 'sound video & game controllers', not under 'imaging devices'; The cam_index==zero; The capture.isOpened()=True; The frame size is None; If VHStoDVD is running with composite NTSC configured, the camera works. Obviously you can't see the image with the printscreen in the attachment, but trust me! ;) Is there any form of initialisation of the start of communication with the dongle that could fix this, i.e. emulate the VHStoDVD settings (composite+NTSC)? I thought I could buspirate the start of comms between VHStoDVD and the dongle, but it feels like I am going above and beyond to do something I thought was a key-turn solution. Any constructive insights, suggestions, corrections are most welcome! Thanks Cheers
2014/03/03
[ "https://Stackoverflow.com/questions/22146205", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3380927/" ]
OK, so after deeper investigation the initial suspicion was confirmed, i.e. because the NTSC dongle is not handled as an imaging device (it's seen as a Video Controller, similar to an emulation of a TV tuner card) it means that although we are able to call cv2.VideoCapture with cam_index=0, the video channel itself is not transmitting, because we are required to define a bunch of parameters: 1. encoding 2. frame size 3. fps rate etc The problem is, because the device is not supported as an imaging device, calling cv2.VideoCapture.set(parameter, value) doesn't seem to change anything on the original video feed. I didn't find a solution but I found a workaround. There seem to be quite a few options online. Search for keywords DV to webcam or camcorder as a webcam. I used DVdriver (<http://www.trackerpod.com/TCamWeb/download.htm>) (I used the trial because I am cheap!). Why does it work? ----------------- As far as I can tell, DVdriver receives the data from the device which is set as a Video Controller (similar to a capture from "Windows Movie Maker" or ffmpeg) and then through "fairydust" outputs the frames on cam_index=0 (assuming no other cam is connected) as an 'imaging device' webcam. Summary ------- TLDR use DVdriver or similar. I found a workaround, but I would really like to understand it from first principles and possibly generate a similar initialisation of the NTSC dongle from within Python, without any other software dependencies, but until then, hopefully this will help others who were also struggling or assuming it was a hardware issue. I will now leave you with some Beckett: Ever tried. Ever failed. No matter. Try again. Fail again. Fail better. (!)
It's a few months late, but might be useful. I was working on a Windows computer and had installed the drivers that came with the device, I tried the same code as your question with an Ezcap from Somagic and got the same error. Since "frame is None," I decided to try an if statement around it - in case it was an initialization error. Placing into the loop: ``` if frame is None: print 0 else: print 1 ``` The result is: 01110111111111111111111111111... And if the frame = cap.read(), above the loop is commented out - I get: 00111111111111111... So for my device capture device it appears to be working for all frames beyond the 5th are captured. I'm not sure why this is, but it might be a useful work around for now. *Disclaimer: Unfortunately, my camera input is currently in a radiation field so I can't get to it for a couple of weeks to make sure it works for sure. However, the images are currently a black frame (which is expected without proper input).*
17,466
10,002,937
I have some pom files in my project with the following structure ``` <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <parent> <artifactId>xparent</artifactId> <groupId>y</groupId> <version>2.0.0</version> </parent> <modelVersion>4.0.0</modelVersion> <artifactId>someparent</artifactId> <version>x.x.x-needs_change</version> <packaging>pom</packaging> <name>some name</name> <description>some description</description> <url>myurl</url> <modules> <module>mymodules</module> </modules> <properties> <my.version>x.x.x-needs_change</my.version> </properties> <dependencies> <dependency> <groupId>hhhhh</groupId> <artifactId>hhhhh</artifactId> <version>x.x.x-should not change</version> </dependency> </dependencies> </project> ``` I am using sed to give the current version as input and change it to a new version given. But I do not want to change version within dependency block. How do I do this? I do not want to go the maven version plugin route. I have tried it and it does not suit my requirements. I prefer sed / python script. Thanks
2012/04/03
[ "https://Stackoverflow.com/questions/10002937", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1164061/" ]
You could try: ``` sed -e '/<dependencies>/,/<\/dependencies>/ !{ s!<version>[0-9.]\+</version>!<version>'"$NEWVERSION"'</version>! }' MY_FILE ``` The `/<dependencies>/,/<\/dependencies>/` says "find all lines between `<dependencies>` and `</dependencies>`". The `!` after that says "perform the following actions everywhere **but** those lines (i.e. do this for all lines *except* between `<dependencies>` and `</dependencies>`)". The `s!<version>[0-9.]\+</version>!<version>'"$NEWVERSION"'</version>!` says "replace `<version>...</version>` with `<version>$NEWVERSION</version>`, where `$NEWVERSION` is some environment variable that contains the new version number. The farting around with the quotes (`'"$NEWVERSION"'`) was because I wanted single quotes for the main part of the sed command (so I don't have to worry about the exclamation mark & backslashes), but I *do* want `$NEWVERSION` to be expanded. Modify to suit your script.
``` nawk '{ a=$0; getline; if($0!~/depend/ && a!~/version/) {gsub(/2.0.0/,"1.0.0",$0);print a"\n"$0} else print a"\n"$0 }' file3 ``` Below is the test: ``` pearl.302> cat file3 <parent> <aritifactID> </artifactID> <groupID> </groupID> <version>2.0.0</version> </parent> <properties> <version>2.0.0</version> </properties> <dependencies> <dependency> <version>2.0.0</version> </dependency> </dependencies> pearl.303> nawk '{a=$0; getline; if($0!~/depend/ && a!~/version/) {gsub(/2.0.0/,"1.0.0",$0);print a"\n"$0} else print a"\n"$0 }' file3 <parent> <aritifactID> </artifactID> <groupID> </groupID> <version>1.0.0</version> </parent> <properties> <version>1.0.0</version> </properties> <dependencies> <dependency> <version>2.0.0</version> </dependency> </dependencies> ```
17,469
10,211,188
I am using python2.7 and lxml. My code is as below ``` import urllib from lxml import html def get_value(el): return get_text(el, 'value') or el.text_content() response = urllib.urlopen('http://www.edmunds.com/dealerships/Texas/Frisco/DavidMcDavidHondaofFrisco/fullsales-504210667.html').read() dom = html.fromstring(response) try: description = get_value(dom.xpath("//div[@class='description item vcard']")[0].xpath(".//p[@class='sales-review-paragraph loose-spacing']")[0]) except IndexError, e: description = '' ``` The code crashes inside the try, giving an error ``` UnicodeDecodeError at / 'utf8' codec can't decode byte 0x92 in position 85: invalid start byte ``` The string that could not be encoded/decoded was: ouldn�t be I have tried using a lot of techniques including .encode('utf8'), but none does solve the problem. I have 2 question: 1. How to solve this problem 2. How can my app crash when the problem code is between a try except
2012/04/18
[ "https://Stackoverflow.com/questions/10211188", "https://Stackoverflow.com", "https://Stackoverflow.com/users/952787/" ]
Your except clause only handles exceptions of the IndexError type. The problem was a UnicodeDecodeError, which is not an IndexError - so the exception is not handled by that except clause. It's also not clear what 'get\_value' does, and that may well be where the actual problem is arising.
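A small sketch of both points — catching the right exception type and decoding defensively (the sample bytes mirror the `0x92` byte from the error message; in cp1252 that byte is a right single quotation mark):

```python
raw = b"couldn\x92t be"   # 0x92 is the byte from the error message

# 1) The failing decode raises UnicodeDecodeError -- a different type
#    than the IndexError the except clause was written for.
try:
    raw.decode("utf-8")
except (IndexError, UnicodeDecodeError) as exc:
    print(type(exc).__name__)        # UnicodeDecodeError

# 2) Decode defensively: replace undecodable bytes, or try cp1252.
print(raw.decode("utf-8", errors="replace"))
print(raw.decode("cp1252"))
```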
1. * skip chars on Error, or decode it correctly to unicode. 2. * you only catch IndexError, not UnicodeDecodeError
17,474
19,637,346
I have a Python project that is already built with SCons. I am trying to use the Eclipse IDE and PyDev to fix some bugs in the source code. I have installed the Eclipse Sconsolidator plugin. My project is laid out like below: Project A holds all the source code, including the SConscript file which defines all the targets, environment, etc. Eclipse provides an "Add SCons nature" option for the project. Once added, SCons automatically picks up my SConscript file and executes. ``` == Running SCons at 10/28/13 1:59 PM == Command line: /opt/gcdistro/app/scons/2.3.0/bin/scons -u --jobs=16 scons: Reading SConscript files. ``` I want to know how I can place breakpoints in some of the .py files that are part of my project which SCons is executing.
2013/10/28
[ "https://Stackoverflow.com/questions/19637346", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1845278/" ]
**Gateway Pattern** > > A gateway encapsulates the semantic gap between the object-oriented > domain layer and the relation-oriented persistence layer. > > > Definition taken from [here](http://www.cs.sjsu.edu/~pearce/modules/patterns/enterprise/persistence/gateway.htm). The Gateway in your example is also called a "Service". The service layer is important because it provides a higher abstraction and a more "holistic" way in dealing with a Person entity. The reason for this "extra" layer is the other objects in the system that are connected to a Person. For example, say there are `Car` objects and each Person may have a Car. Now, when we sell a car we should update the "owner" field, further you'll want to do the same for the Person objects that are involved (seller/buyer). In order to achieve this "cascading" in an OO manner (without coupling the objects implementations) `BuyCarService` will update the new owners: the service will call `CarDAO` and `PersonDAO` in order to update the relevant fields in the DB so that the DAOs won't have to "know" each other and hence decouple the implementations. Hope this makes things clearer.
Most design pattern explanations become confusing at some point, because a pattern is originally named and explained by someone, and in due course several similar patterns come into existence with similar usage and explanation but very little difference. This subtle difference then becomes a source of debates :-). Concerning the Gateway pattern, here is what Martin Fowler mentions in his Catalog of Patterns of Enterprise Application Architecture. I am quoting straight from [here](http://martinfowler.com/eaaCatalog/gateway.html) > "Gateway - An object that encapsulates access to an external system or > resource." > > > Interesting software rarely lives in isolation. Even the purest > object-oriented system often has to deal with things that aren't > objects, such as relational database tables, CICS transactions, and > XML data structures. > > > When accessing external resources like this, you'll usually get APIs > for them. However, these APIs are naturally going to be somewhat > complicated because they take the nature of the resource into account. > Anyone who needs to understand a resource needs to understand its API > - whether JDBC and SQL for relational databases or W3C or JDOM for XML. Not only does this make the software harder to understand, it > also makes it much harder to change should you shift some data from a > relational database to an XML message at some point in the future. > > > The answer is so common that it's hardly worth stating. Wrap all the > special API code into a class whose interface looks like a regular > object. Other objects access the resource through this Gateway, which > translates the simple method calls into the appropriate specialized > API. > > >
17,479
35,601,754
I want to encrypt a string in python. Every character in the char is mapped to some other character in the secret key. For example `'a'` is mapped to `'D'`, 'b' is mapped to `'d'`, `'c'` is mapped to `'1'` and so forth as shown below: ``` char = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789" secretkey="Dd18Abz2EqNPWhYTOjBvtVlpXaH6msFUICg4o0KZwJeryQx3f9kSinRu5L7cGM" ``` If I choose the string `"Lets meet at the usual place at 9 am"` the output must be `"oABjMWAABMDBMB2AMvjvDPMYPD1AMDBMGMDW"`
2016/02/24
[ "https://Stackoverflow.com/questions/35601754", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5948577/" ]
**As for replacing multiple characters in a string** You can use [`str.maketrans`](https://docs.python.org/3.5/library/stdtypes.html#str.maketrans) and [`str.translate`](https://docs.python.org/3.5/library/stdtypes.html#str.translate): ``` >>> char = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789" >>> secretkey = "Dd18Abz2EqNPWhYTOjBvtVlpXaH6msFUICg4o0KZwJeryQx3f9kSinRu5L7cGM" >>> trans = str.maketrans(char, secretkey) >>> s = "Lets meet at the usual place at 9 am" >>> s.translate(trans) '0AvB WAAv Dv v2A tBtDP TPD1A Dv M DW' ``` or if you prefer to preserve only those in `char`: ``` >>> ''.join(c for c in s if c in char).translate(trans) '0AvBWAAvDvv2AtBtDPTPD1ADvMDW' ``` **As for encrypting** I would recommend using a dedicated library for that, such as [pycrypto](https://pypi.python.org/pypi/pycrypto).
Ok, I am making two assumptions here. 1. I think the output you expect is wrong, for instance `L` should be mapped to `0`, not to `o`, right? 2. I am assuming you want to ignore whitespace, since it is not included in your mapping. So then the code would be: ``` to_encrypt = "Lets meet at the usual place at 9 am" char = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789" secretkey = "Dd18Abz2EqNPWhYTOjBvtVlpXaH6msFUICg4o0KZwJeryQx3f9kSinRu5L7cGM" encrypted = "" for c in to_encrypt: if c in char: encrypted += secretkey[char.index(c)] print(encrypted) ``` The output would be: ``` 0AvBWAAvDvv2AtBtDPTPD1ADvMDW ```
17,482
15,750,681
I'm writing a simple game in python(2.7) in pygame. In this game, I have to store 2D coordinates. The number of these items will start from 0 and increase by 2 in each step. They will increase up to ~6000. In each step I have to check whether 9 specific coordinates are among them, or not. I've tried to store them simply in a list as (x,y), but it is not efficient to search in such a list. **How can I store these coordinates so it will be more efficient to search among them?** What I was trying to do in each step: ``` # Assuming: myList = [] co1 = (12.3,20.2) # and so on.. valuesToCheck = [co1,co2,co3,co4,co5,co6,co7,co8,co9] # In each step: # Adding 2 coordinates myList.append((x1,y1)) myList.append((x2,y2)) # Searching 9 specific coordinates among all for coordinate in valuesToCheck: if coordinate in myList: print "Hit!" break # Note that the valuesToCheck will change in each step. del valuesToCheck[0] valuesToCheck.append(co10) ``` Coordinates are floating point numbers, and their highest values are limited. They start from (0.0,0.0) to (1200.0,700.0). I've searched about this but stored values were either string or constant numbers.
2013/04/01
[ "https://Stackoverflow.com/questions/15750681", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2049320/" ]
Maintain a [set](http://docs.python.org/3.3/tutorial/datastructures.html#sets) alongside your list, or replacing the list entirely if you have no other use for it. Membership checking and adding are [O(1) on average](http://wiki.python.org/moin/TimeComplexity) for sets, so your overall algorithm will be O(N) compared to the O(N^2) of just using a list. ``` myList = [] mySet = set() co1 = (12,20) # and so on.. valuesToCheck = [co1,co2,co3,co4,co5,co6,co7,co8,co9] # In each step: # Adding 2 coordinates myList.append((x1,y1)) myList.append((x2,y2)) mySet.add((x1, y1)) mySet.add((x2, y2)) # Searching 9 specific coordinates among all for coordinate in valuesToCheck: if coordinate in mySet: print "Hit!" break # Note that the valuesToCheck will change in each step. del valuesToCheck[0] valuesToCheck.append(co10) ```
If I understand correctly, you're adding elements to `myList`, but never removing them. You're then testing every element of `valuesToCheck` for membership in `myList`. If that's the case, you could boost performance by converting `myList` to a set instead of a list. Testing for membership in a list is O(n), while testing for membership in a set is typically O(1). Your syntax will remain mostly unchanged: ``` mySet = set() # your code # Adding 2 coordinates mySet.add((x1,y1)) mySet.add((x2,y2)) # Searching 9 specific coordinates among all for coordinate in valuesToCheck: if coordinate in mySet: print "Hit!" break # Note that the valuesToCheck will change in each step. del valuesToCheck[0] valuesToCheck.append(co10) ```
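The complexity difference is easy to observe empirically; a rough sketch (absolute timings are machine-dependent, but at ~6000 coordinates the set lookup should be orders of magnitude faster):

```python
import timeit

coords = [(float(i), float(i)) for i in range(6000)]
coord_set = set(coords)
target = (5999.0, 5999.0)          # worst case for the list: the last element

# time 2000 membership tests against each container
t_list = timeit.timeit(lambda: target in coords, number=2000)
t_set = timeit.timeit(lambda: target in coord_set, number=2000)
print('list: %.4fs  set: %.4fs' % (t_list, t_set))
```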
17,483
37,866,313
I did `ls -l /usr/bin/python` I got [![enter image description here](https://i.stack.imgur.com/wvA2p.png)](https://i.stack.imgur.com/wvA2p.png) How can I fix that red symbolic link ?
2016/06/16
[ "https://Stackoverflow.com/questions/37866313", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4480164/" ]
`ls -l /usr/bin/python` will only show the symbolic link. Use `ls -l /usr/bin/ | grep python2.7` to see if `python2.7` is in the directory. The output should be something like this: ``` lrwxrwxrwx 1 root root 9 Jun 3 16:39 python -> python2.7 lrwxrwxrwx 1 root root 9 Jun 3 16:39 python2 -> python2.7 -rwxr-xr-x 1 root root 3550168 Jun 3 02:29 python2.7 ``` The above shows the binary `python2.7` and two symbolic links pointing to it.
You can enter ``` $ which python ``` to see where your Python path is. You can then use ``` $ ln -s /thepathfromabove/python2.7 python ```
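If the link itself is broken (a red entry in `ls --color` output usually means a dangling symlink), it can be re-pointed with `ln -sf`. A safe way to rehearse this in a scratch directory first (paths are purely illustrative):

```shell
cd "$(mktemp -d)"
touch python2.7                   # stand-in for the real interpreter binary
ln -sf "$PWD/python2.7" python    # -f replaces an existing (broken) link
readlink python                   # prints the target the link now points to
```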
17,484
66,406,182
I'm not the best with python and am trying to cipher shift text entered by the user. The way this cipher should work is disregarding symbols, numbers, etc. It also converts full stops to X's and must all be upper case. I currently have the code for that but am unsure as to how to take that converted text and shift it by a number given by the user. Not sure if this all makes sense, but any help would be greatly appreciated! Here is my code: ``` def convert_to_Caesar(t): #Remove all special characters and only show A-Z t = re.sub("[^A-Za-z.]",'', t) cipherText = "" # Full stops are replaced with X's for letter in t: if letter == '.': cipherText += 'X' # Lower case is converted to upper case else: cipherText += letter.upper() # Plain text is ciphered and returned return cipherText # User enters plain text to cipher text = input("What do you want to cipher? ") shift = int(input("How many positions to shift by? ")) print(convert_to_Caesar(text)) ``` Thank you
2021/02/28
[ "https://Stackoverflow.com/questions/66406182", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15299466/" ]
You can use ord()/chr() as suggested by @Girish Srivatsa: ``` alphabet_len = ord('Z') - ord('A') + 1 new_letter = chr((ord(letter.upper()) - ord('A') + shift) % alphabet_len + ord('A')) ``` But it might be cleaner if you just create a variable that holds your alphabet: ``` import string alphabet = "".join(list(string.ascii_uppercase)) ``` Then you look what you look up the position in your alphabet, add the positions, then look up what the new letter is: ``` pos = alphabet.find(letter.upper()) if pos == -1: if letter == '.': new_letter = 'X' else: # discard other symbols new_letter = '' else: new_pos = (pos + shift) % len(alphabet) new_letter = alphabet[new_pos] ``` Note, you cannot tell if 'X' in your cipher text a shifted letter or '.'. If you need to fix that, add '.' to your alphabet and remove the special case for '.' under `if pos == -1`. This becomes messy with chr()/ord() method.
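Putting the alphabet lookup and the question's cleanup rules together, a complete sketch might look like this (whether the 'X' substituted for full stops should itself be shifted is a design choice; here it is left as a literal X):

```python
import string

def caesar_cipher(text, shift):
    """Uppercase letters are shifted; '.' becomes a literal 'X';
    every other character (spaces, digits, symbols) is discarded."""
    alphabet = string.ascii_uppercase
    out = []
    for ch in text:
        up = ch.upper()
        if up in alphabet:
            out.append(alphabet[(alphabet.index(up) + shift) % 26])
        elif ch == '.':
            out.append('X')
    return ''.join(out)

print(caesar_cipher('Attack at dawn.', 3))   # DWWDFNDWGDZQX
```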
If you want it to be formatted even more correctly, following all your rules but formatting in capitals and lowercase too. This shifts the dictionary, and runs if loops. I know you asked for all letters to be capitals, but this improves the code a little. Output of Code: ``` Do you want to... 1. Encode, or 2. Decode? 1 This must be encoded! Please, Work! 5 Ymnx rzxy gj jshtiji! Uqjfxj, Btwp! ``` Code: ``` import time def shift_dict(Caesar, Shift): dic_len = len(Caesar) Shift = Shift % dic_len list_dic = [(k,v) for k, v in iter(Caesar.items())] Shifted = { list_dic[x][0]: list_dic[(x - Shift) % dic_len][1] for x in range(dic_len) } return Shifted def shift_dict2(Caesar, Shift): dic_len = len(Caesar) Shift = Shift % dic_len list_dic = [(k,v) for k, v in iter(Caesar.items())] Shifted = { list_dic[x][0]: list_dic[(x - Shift) % dic_len][-1] for x in range(dic_len) } return Shifted UpperList = { "A":0, "B":1, "C":2, "D":3, "E":4, "F":5, "G":6, "H":7, "I":8, "J":9, "K":10, "L":11, "M":12, "N":13, "O":14, "P":15, "Q":16, "R":17, "S":18, "T":19, "U":20, "V":21, "W":22, "X":23, "Y":24, "Z":25 } UpperCaesar = { "A":"A", "B":"B", "C":"C", "D":"D", "E":"E", "F":"F", "G":"G", "H":"H", "I":"I", "J":"J", "K":"K", "L":"L", "M":"M", "N":"N", "O":"O", "P":"P", "Q":"Q", "R":"R", "S":"S", "T":"T", "U":"U", "V":"V", "W":"W", "X":"X", "Y":"Y", "Z":"Z" } LowerList = { "a":0, "b":1, "c":2, "d":3, "e":4, "f":5, "g":6, "h":7, "i":8, "j":9, "k":10, "l":11, "m":12, "n":13, "o":14, "p":15, "q":16, "r":17, "s":18, "t":19, "u":20, "v":21, "w":22, "x":23, "y":24, "z":25 } LowerCaesar = { "a":"a", "b":"b", "c":"c", "d":"d", "e":"e", "f":"f", "g":"g", "h":"h", "i":"i", "j":"j", "k":"k", "l":"l", "m":"m", "n":"n", "o":"o", "p":"p", "q":"q", "r":"r", "s":"s", "t":"t", "u":"u", "v":"v", "w":"w", "x":"x", "y":"y", "z":"z" } UpperList1 = { "A":0, "B":1, "C":2, "D":3, "E":4, "F":5, "G":6, "H":7, "I":8, "J":9, "K":10, "L":11, "M":12, "N":13, "O":14, "P":15, "Q":16, "R":17, "S":18, "T":19, "U":20, "V":21, 
"W":22, "X":23, "Y":24, "Z":25 } UpperCaesar1 = { "A":"A", "B":"B", "C":"C", "D":"D", "E":"E", "F":"F", "G":"G", "H":"H", "I":"I", "J":"J", "K":"K", "L":"L", "M":"M", "N":"N", "O":"O", "P":"P", "Q":"Q", "R":"R", "S":"S", "T":"T", "U":"U", "V":"V", "W":"W", "X":"X", "Y":"Y", "Z":"Z" } LowerList1 = { "a":0, "b":1, "c":2, "d":3, "e":4, "f":5, "g":6, "h":7, "i":8, "j":9, "k":10, "l":11, "m":12, "n":13, "o":14, "p":15, "q":16, "r":17, "s":18, "t":19, "u":20, "v":21, "w":22, "x":23, "y":24, "z":25 } LowerCaesar1 = { "a":"a", "b":"b", "c":"c", "d":"d", "e":"e", "f":"f", "g":"g", "h":"h", "i":"i", "j":"j", "k":"k", "l":"l", "m":"m", "n":"n", "o":"o", "p":"p", "q":"q", "r":"r", "s":"s", "t":"t", "u":"u", "v":"v", "w":"w", "x":"x", "y":"y", "z":"z" } Asker = int(input("Do you want to... 1. Encode, or 2. Decode? ")) if Asker == 1: Plaintext = str(input("")) OriginalShift = int(input("")) Shift = OriginalShift*-1 UpperCaesar = shift_dict(UpperCaesar, Shift) LowerCaesar = shift_dict(LowerCaesar, Shift) Lister = [] X = 0 for i in range(len(Plaintext)): if Plaintext[X].isalpha(): if Plaintext[X].isupper(): Lister.append(UpperCaesar[Plaintext[X]]) else: Lister.append(LowerCaesar[Plaintext[X]]) else: Lister.append(Plaintext[X]) X += 1 print(*Lister, sep = "") elif Asker == 2: Asker1 = int(input("Do you have the key (1), or not(2): ")) if Asker1 == 1: Plaintext = str(input("")) OriginalShift = int(input("")) Shift = OriginalShift*-1 UpperCaesar = shift_dict(UpperCaesar, 26 - Shift) LowerCaesar = shift_dict(LowerCaesar, 26 - Shift) Lister = [] X = 0 for i in range(len(Plaintext)): if Plaintext[X].isalpha(): if Plaintext[X].isupper(): Lister.append(UpperCaesar[Plaintext[X]]) else: Lister.append(LowerCaesar[Plaintext[X]]) else: Lister.append(Plaintext[X]) X += 1 print(*Lister, sep = "") elif Asker1 == 2: Plaintext = str(input("")) OriginalShift = 0 for i in range(26): UpperCaesar = shift_dict2(UpperCaesar, -1) LowerCaesar = shift_dict2(LowerCaesar, -1) Lister = [] X = 0 for i in 
range(len(Plaintext)): if Plaintext[X].isalpha(): if Plaintext[X].isupper(): Lister.append(UpperCaesar[Plaintext[X]]) else: Lister.append(LowerCaesar[Plaintext[X]]) else: Lister.append(Plaintext[X]) X += 1 time.sleep(0.01) print("With a shift of ", 25 - (OriginalShift*-1), ": ", *Lister, sep = "") OriginalShift -= 1 ```
17,485
66,894,868
My result is only empty loop logs. If I run this command manually in a terminal: ``` python3 -m PyInstaller --onefile --name SOCIAL_NETWORK_TEST --distpath packages/projectTest --workpath .cache/ app.py ``` then the packaging works fine. Any suggestions? ``` bashCommand = "python3 -m PyInstaller --onefile --name " + self.engineConfig.currentProjectName + " --distpath " + "projects/" + self.engineConfig.currentProjectName + "/Package/" + " --workpath .cache/ main.py" print("PACK DONE,") # no expirience import subprocess process = subprocess.Popen(bashCommand.split(), stderr=subprocess.STDOUT, stdout=subprocess.PIPE) # self.myLogs = [] for line in iter(process.stdout.readline, b'\n'): # self.testLog = str(line) # self.LOGS.text = self.testLog print ("PACKAGE:", str(line)) print("Package application for linux ended.") ```
2021/03/31
[ "https://Stackoverflow.com/questions/66894868", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1513187/" ]
It seems you're running server and client in the same directory, and the server truncates the file before the client gets to read from it.
It works perfectly for me with a "hello world" payload, but if you want to send a binary file, maybe you can try base64-encoding it first.
17,486
21,940,911
I'm trying to apply a ripple effect to an image in python. I found Pillow's im.transform(im.size, Image.MESH,.... is it possible? Maybe I have to load the image with numpy and apply the algorithm. I also found this: <http://www.pygame.org/project-Water+Ripples-1239-.html> ![ripple](https://i.stack.imgur.com/iIWa0.png) another way manually, but I don't know any algorithm. This is my start; it doesn't do anything yet... ``` #!/usr/bin/env python3 from PIL import Image import sys import numpy import math im = Image.open(sys.argv[1]) im.show() matrix = numpy.asarray(im) width = im.size[0] height = im.size[1] amplitude = ? # parameters frequency = ? matrix_dest = numpy.zeros((im.size[0],im.size[1],3)) for x in range(0, width): for y in range(0, height): pass # ç_ç im2 = Image.fromarray(numpy.uint8(matrix_dest)) im2.show() ``` **EDIT:** I'd really like to keep this structure (using Pillow; I already use it extensively in my project and if I can I wouldn't add any other dependency) and not include scipy or matplotlib. With the following code I have the distortion I wanted, but **colors** are screwed up. Maybe I have to apply the distortion to the **R,G,B planes** separately and then compose the result into one image. Or **palettize** the image and then apply the original palette. (Btw the image would be used as a texture to display moving water in a 3D environment.) ``` im = Image.open(sys.argv[1]) im.show() m = numpy.asarray(im) m2 = numpy.zeros((im.size[0],im.size[1],3)) width = im.size[0] height = im.size[1] A = m.shape[0] / 3.0 w = 1.0 / m.shape[1] shift = lambda x: A * numpy.sin(2.0*numpy.pi*x * w) for i in range(m.shape[0]): print(int(shift(i))) m2[:,i] = numpy.roll(m[:,i], int(shift(i))) im2 = Image.fromarray(numpy.uint8(m2)) im2.show() ```
2014/02/21
[ "https://Stackoverflow.com/questions/21940911", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1210984/" ]
You could use `np.roll` to rotate each row or column according to some sine function. ``` from scipy.misc import lena import numpy as np import matplotlib.pyplot as plt img = lena() A = img.shape[0] / 3.0 w = 2.0 / img.shape[1] shift = lambda x: A * np.sin(2.0*np.pi*x * w) for i in range(img.shape[0]): img[:,i] = np.roll(img[:,i], int(shift(i))) plt.imshow(img, cmap=plt.cm.gray) plt.show() ``` ![enter image description here](https://i.stack.imgur.com/InvvO.png)
Why don't you try something like: ``` import numpy as np from scipy import ndimage # im is assumed to be a 2-D (grayscale) numpy array rows, cols = im.shape out = np.empty_like(im) for x in range(cols): column = im[:, x] kernel = np.zeros(21) y = int(np.floor(np.sin(x * 0.05) * 10)) + 10 # shifted-delta index in 0..20 kernel[y] = 1 # convolving with a shifted delta kernel shifts the column out[:, x] = ndimage.convolve(column, kernel, mode='nearest') ``` I threw this together just right now, so you'll need to tweak it a bit. The frequency of the sine may still need adjusting, check [here](http://en.wikipedia.org/wiki/Sine_wave). But I think overall this should work.
17,487
64,553,669
Does anyone know why I get an indentation error even though it (should) be correct? ``` while not stop: try: response += sock.recv(buffer_size) if header not in response: print("error in message format") return # this is where I get the error except socket.timeout: stop = True ``` Error Code `python3 ueb02.py localhost 31000 File "ueb02.py", line 40 return ^ SyntaxError: 'return' outside function make: *** [run] Error 1` **edit:** Thanks for the answers, @balderman's approach solved my problem. Thanks to everyone who contributed here :D
2020/10/27
[ "https://Stackoverflow.com/questions/64553669", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12781947/" ]
If you want to delete the command executing message, like `prefix test_welcome`, you can use `await ctx.message.delete()`.
You can use `await ctx.message.delete()`. Either way, I recommend you read the [documentation](https://discordpy.readthedocs.io/en/latest/).
17,490
17,610,811
I want to make crontab where script occurs at different minutes for each hour like this `35 1,8,12,15,31 16,18,21 * * 0,1,2,3,4,5,6 python backup.py` I want script to run at `16hour and 31 minutes` but it is giving me error bad hour i want the cron occur at `1:35am` , then `16:31`, then `21:45`
2013/07/12
[ "https://Stackoverflow.com/questions/17610811", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1667349/" ]
As there is not a pattern that can match the three times, it is not possible to schedule that just with one crontab expression. You will have to use three: ``` 45 21 * * * python backup.py 31 16 * * * python backup.py 35 1 * * * python backup.py ``` Note also that `python backup.py` will probably not work. You have to define full path for both files and binaries: ``` 35 1 * * * /usr/bin/python /your/dir/backup.py ``` Where `/usr/bin/python` or similar can be obtained with `which python`.
If the system which you are on has systemd, You can look into systemd timers(<https://www.freedesktop.org/software/systemd/man/systemd.time.html>). Then you might be able to achieve the randomness using the RandomizedDelaySec setting and an OnCalendar setting which will schedule the service to run every hour or interval you set plus will generate a RandomizedDelaySec at every run so that the interval is random.
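For concreteness, a hypothetical timer unit (the unit name, description, and intervals are made up for illustration) could look like:

```ini
# /etc/systemd/system/backup.timer  (hypothetical unit name)
[Unit]
Description=Run backup with a randomized delay

[Timer]
OnCalendar=hourly
RandomizedDelaySec=45min
Persistent=true

[Install]
WantedBy=timers.target
```

A matching `backup.service` unit would carry the `ExecStart=` line that runs the script; the timer is then enabled with `systemctl enable --now backup.timer`.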
17,491
21,192,133
Let's say I have a program that uses a .txt file to store data it needs to operate. Because it's a very large amount of data (just go with it) in the text file, I want to use a generator rather than an iterator to go through the data in it so that my program uses as little memory as possible. Let's just say (I know this isn't secure) that it's a list of usernames. So my code would look like this (using python 3.3). ``` for x in range LenOfFile: id = file.readlines(x) if username == id: validusername = True #ask for a password if validusername == True and validpassword == True: pass else: print("Invalid Username") ``` Assume that valid password is set to True or False where I ask for a password. My question is, since I don't want to take up all of the RAM I don't want to use readlines() to get the whole thing, and with the code here I only take a very small amount of RAM at any given time. However, I am not sure how I would get the number of lines in the file (assume I cannot find the number of lines and add to it as new users arrive). Is there a way Python can do this without reading the entire file and storing it at once? I already tried `len()`, which apparently doesn't work on text files but was worth a try. The one way I have thought of to do this is not too great: it involves using readlines one line at a time in a range so big the text file must be smaller, and then continuing when I get an error. I would prefer not to use this way, so any suggestions would be appreciated.
2014/01/17
[ "https://Stackoverflow.com/questions/21192133", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2945577/" ]
You can just iterate over the file handle directly, which will then iterate over it line-by-line: ``` for line in file: if username == line.strip(): validusername = True break ``` Other than that, you can’t really tell how many lines a file has without looking at it completely. You do know how big a file is, and you could make some assumptions based on the character count for example (UTF-8 ruins that though :P); but you don’t know how long each line is without seeing it, so you don’t know where the line breaks are and as such can’t tell how many lines there are in total. You would still have to look at every character one-by-one to see if a new line begins or not. So instead of that, we just iterate over the file and pause whenever we read a whole line—that’s when the loop body executes—and then we continue looking from that position in the file for the next line break, and so on.
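If the number of lines is ever needed as well (say, for progress reporting), it can be computed the same streaming way without loading the whole file — a small sketch:

```python
import os
import tempfile

def count_lines(path):
    """Count lines by streaming the file, without reading it all at once."""
    with open(path) as f:
        return sum(1 for _ in f)

# demo with a throwaway file
tmp = tempfile.NamedTemporaryFile('w', delete=False, suffix='.txt')
tmp.write('alice\nbob\ncarol\n')
tmp.close()
print(count_lines(tmp.name))   # 3
os.unlink(tmp.name)
```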
If you want the number of lines in a file that badly, why don't you just use `len` (note that this reads the whole file into memory at once): ``` with open("filename") as f: num = len(f.readlines()) ```
17,492
29,191,405
I'm a little confused about when I need to explicitly copy an object in Python in order to make changes without altering the original. The [Python doc page](https://docs.python.org/3.4/library/copy.html) doesn't have too much detail, and simply says that "assignment statements do not create copies". Example 1: ``` >>> a = 4 >>> b = a >>> b += 1 >>> print(b) >>> print(a) # Results are what you'd "expect" them to be: 5 4 ``` Example 2: ``` >>> class Test(): >>> def __init__(self, x): >>> self.x = x >>> >>> A = Test(4) >>> B = A >>> B.x += 1 >>> print(B.x) >>> print(A.x) # Unexpected results because B isn't actually a copy of A: 5 5 ``` Why do the two examples behave differently? ------------------------------------------- Edit: In order to get Example 2 to work, I basically did the following. I'm asking why this isn't necessary for Example 1: ``` >>> import copy >>> A = Test(4) >>> B = copy.copy(A) >>> B.x += 1 >>> print(B.x) >>> print(A.x) ```
2015/03/22
[ "https://Stackoverflow.com/questions/29191405", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2856558/" ]
For starters, `(?<!...)` is **PCRE** in which the `perl = TRUE` parameter needs to be enabled. The trick is to use lookahead here instead of lookbehind and add [**word boundaries**](http://www.rexegg.com/regex-boundaries.html#wordboundary) to force the regular expression engine to match whole words. Also, you broadly stated **words**; in my vocabulary this could mean any kind of word so I am going to use the Unicode property [`\pL`](http://www.regular-expressions.info/unicode.html), which matches any kind of letter from any language, if this matches more than expected you can simply change it back to `[a-zA-Z]` or use the POSIX named class `[[:alpha:]]` instead. ``` gsub("(?i)\\b(?!one)(\\pL+)\\b", "'\\1'", text, perl=T) # [1] "one 'two' 'three' 'four' 'five' one 'six' one 'seven' one 'eight' 'nine' 'ten' one" ```
You could try the below the PCRE regex ``` > gsub('\\bone\\b(*SKIP)(*F)|([A-Za-z]+)', "'\\1'", text, perl=TRUE) [1] "one 'two' 'three' 'four' 'five' one 'six' one 'seven' one 'eight' 'nine' 'ten' one" ``` `\\bone\\b` matches the text `one` and the following `(*SKIP)(*F)` makes the match to skip and then fail. Now it uses the pattern which was on the right side of `|` operator to select characters from the remaining string (ie, except the skipped part) [DEMO](https://regex101.com/r/lK9zP7/1)
17,495
68,199,583
As you can see [here](https://i.stack.imgur.com/knIlJ.png), after I attempt to train my model in this cell, the asterisk disappears and the brackets are blank instead of containing a number. Do you know why this is happening, and how I can fix it? I'm running python 3.7 and TensorFlow 2.5.0.
2021/06/30
[ "https://Stackoverflow.com/questions/68199583", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13004323/" ]
Unfortunately, that is indeed an **issue of Eclipse 2021-06 (4.20)** that happens inside conditions and loops when there is trailing code not separated by a semicolon `;` ([similar but not the same as in this question](https://stackoverflow.com/q/68258236/6505250)). Example: ``` class Sample { void sample(String foo) { if (foo != null) { sys // content assist here System.out.println(); } } } ``` I created the [minimal reproducible example](https://stackoverflow.com/help/minimal-reproducible-example) above and reported it here for you: [**Eclipse bug 574267** - [content assist] [regression] No content assist for templates in conditional blocks](https://bugs.eclipse.org/bugs/show_bug.cgi?id=574267#c2) As workaround you can add a `;` after the location where to use the content assist. **Update:** After less than 4 hours after [reporting a reproducible example](https://bugs.eclipse.org/bugs/show_bug.cgi?id=574267#c2), the [**issue has been fixed**](https://bugs.eclipse.org/bugs/show_bug.cgi?id=574267#c6). So, as an alternative to the above mentioned workaround, you can wait for the upcoming release **Eclipse 2021-09 (4.21)** on September 15, 2021 or at least for the first milestone build of it on July 16, 2021.
Could it be the same as [here](https://stackoverflow.com/a/68265945/6167720)? (would have added comment, but too little rep)
17,498
37,422,530
Working my way through a beginners Python book and there's two fairly simple things I don't understand, and was hoping someone here might be able to help. The example in the book uses regular expressions to take in email addresses and phone numbers from a clipboard and output them to the console. The code looks like this: ``` #! python3 # phoneAndEmail.py - Finds phone numbers and email addresses on the clipboard. import pyperclip, re # Create phone regex. phoneRegex = re.compile(r'''( (\d{3}|\(\d{3}\))? #[1] area code (\s|-|\.)? #[2] separator (\d{3}) #[3] first 3 digits (\s|-|\.) #[4] separator (\d{4}) #[5] last 4 digits (\s*(ext|x|ext.)\s*(\d{2,5}))? #[6] extension )''', re.VERBOSE) # Create email regex. emailRegex = re.compile(r'''( [a-zA-Z0-9._%+-]+ @ [\.[a-zA-Z0-9.-]+ (\.[a-zA-Z]{2,4}) )''', re.VERBOSE) # Find matches in clipboard text. text = str(pyperclip.paste()) matches = [] for groups in phoneRegex.findall(text): phoneNum = '-'.join([groups[1], groups[3], groups[5]]) if groups [8] != '': phoneNum += ' x' + groups[8] matches.append(phoneNum) for groups in emailRegex.findall(text): matches.append(groups[0]) # Copy results to the clipboard. if len(matches) > 0: pyperclip.copy('\n'.join(matches)) print('Copied to Clipboard:') print('\n'.join(matches)) else: print('No phone numbers of email addresses found') ``` Okay, so firstly, I don't really understand the phoneRegex object. The book mentions that adding parentheses will create groups in the regular expression. If that's the case, are my assumed index values in the comments wrong and should there really be two groups in the index marked one? Or if they're correct, what does groups[7,8] refer to in the matching loop below for phone numbers? Secondly, why does the emailRegex use a mixture of lists and tuples, while the phoneRegex uses mainly tuples? **Edit 1** Thanks for the answers so far, they've been helpful. Still kind of confused on the first part though. 
Should there be eight indexes like rock321987's answer or nine like sweaver2112's one? **Edit 2** Answered, thank you.
2016/05/24
[ "https://Stackoverflow.com/questions/37422530", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5195054/" ]
every opening left `(` marks the beginning of a capture group, and you can nest them: ``` ( #[1] around whole pattern (\d{3}|\(\d{3}\))? #[2] area code (\s|-|\.)? #[3] separator (\d{3}) #[4] first 3 digits (\s|-|\.) #[5] separator (\d{4}) #[6] last 4 digits (\s*(ext|x|ext.)\s*(\d{2,5}))? #[7,8,9] extension ) ``` You should use [named groups](https://docs.python.org/2/howto/regex.html#non-capturing-and-named-groups) here `(?<groupname>pattern)`, along with clustering only parens `(?:pattern)` that don't capture anything. And remember, you should capture quantified constructs, not quantify captured constructs: ``` (?<areacode>(?:\d{3}|\(\d{3}\))?) (?<separator>(?:\s|-|\.)?) (?<exchange>\d{3}) (?<separator2>\s|-|\.) (?<lastfour>\d{4}) (?<extension>(?:\s*(?:ext|x|ext.)\s*(?:\d{2,5}))?) ```
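A note on syntax: in Python's `re` module, named groups are written `(?P<name>...)` — the bare `(?<name>...)` form shown above is the .NET/PCRE flavor. A minimal runnable sketch of the same idea (a simplified pattern, not the book's full regex):

```python
import re

# named groups capture the quantified construct; (?:...) clusters without capturing
phone_re = re.compile(r'''
    (?P<areacode>(?:\d{3}|\(\d{3}\))?)   # optional 3 digits, maybe parenthesized
    (?P<sep1>[\s.-]?)
    (?P<exchange>\d{3})
    (?P<sep2>[\s.-])
    (?P<lastfour>\d{4})
''', re.VERBOSE)

m = phone_re.search('415-555-1011')
print(m.group('areacode'), m.group('exchange'), m.group('lastfour'))
```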
``` ( #[1] around whole pattern (\d{3}|\(\d{3}\))? #[2] area code (\s|-|\.)? #[3] separator (\d{3}) #[4] first 3 digits (\s|-|\.) #[5] separator (\d{4}) #[6] last 4 digits (\s*(ext|x|ext.)\s*(\d{2,5}))? #[7] extension <----------> <-------> ^^ ^^ || || [8] [9] ) ``` **Second Question** You are understanding it entirely wrong. You are mixing python with regex. In regex > > `[]` character class (and not list) > > > `()` capturing group (and not tuple) > > > So whatever is inside these have nothing to do with `list` and `tuple` in python. Regex can be considered itself as a language and `()`, `[]` etc. are part of regex
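The numbering can be confirmed directly: with nine opening parentheses, `findall` returns one 9-tuple per match, and tuple indexes 1, 3, 5, and 8 (0-based) are exactly the ones the book's loop uses:

```python
import re

phone_re = re.compile(r'''(
    (\d{3}|\(\d{3}\))?                # tuple index 1: area code
    (\s|-|\.)?                        # tuple index 2: separator
    (\d{3})                           # tuple index 3: first 3 digits
    (\s|-|\.)                         # tuple index 4: separator
    (\d{4})                           # tuple index 5: last 4 digits
    (\s*(ext|x|ext.)\s*(\d{2,5}))?    # tuple indexes 6, 7, 8: extension
)''', re.VERBOSE)

groups = phone_re.findall('call 415-555-1011 x123 today')[0]
print(len(groups))                      # 9 -- one entry per opening paren
print(groups[1], groups[3], groups[5])  # 415 555 1011
print(groups[8])                        # 123 (the extension digits)
```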
17,499
14,074,149
I'm having a bit of difficulty figuring out what my next steps should be. I am using tastypie to create an API for my web application. From another application, specifically ifbyphone.com, I am receiving a POST with no headers that looks something like this: ``` post data:http://myapp.com/api/ callerid=1&someid=2&number=3&result=Answered&phoneid=4 ``` Now, I see in my server logs that this is hitting my server.But tastypie is complaining about the format of the POST. > > {"error\_message": "The format indicated > 'application/x-www-form-urlencoded' had no available deserialization > method. Please check your `formats` and `content_types` on your > Serializer.", "traceback": "Traceback (most recent call last):\n\n > File \"/usr/local/lib/python2.7/dist-packages/tastypie/resources.py\" > > > I also receive the same message when I try to POST raw data using curl, which I "believe" is the same basic process being used by ifbyphone's POST method: ``` curl -X POST --data 'callerid=1&someid=2&number=3&duration=4&phoneid=5' http://myapp.com/api/ ``` So, assuming my problem is actually what is specified in the error message, and there is no deserialization method, how would I go about writing one? 
**Update** With some help from this commit ( <https://github.com/toastdriven/django-tastypie/commit/7c5ea699ff6a5e8ba0788f23446fa3ac31f1b8bf> ) I've been playing around with writing my own serializer, copying the basic framework from the documentation ( <https://django-tastypie.readthedocs.org/en/latest/serialization.html#implementing-your-own-serializer> ) ``` import urlparse from tastypie.serializers import Serializer class urlencodeSerializer(Serializer): formats = ['json', 'jsonp', 'xml', 'yaml', 'html', 'plist', 'urlencode'] content_types = { 'json': 'application/json', 'jsonp': 'text/javascript', 'xml': 'application/xml', 'yaml': 'text/yaml', 'html': 'text/html', 'plist': 'application/x-plist', 'urlencode': 'application/x-www-form-urlencoded', } def from_urlencode(self, data,options=None): """ handles basic formencoded url posts """ qs = dict((k, v if len(v)>1 else v[0] ) for k, v in urlparse.parse_qs(data).iteritems()) return qs def to_urlencode(self,content): pass ```
2012/12/28
[ "https://Stackoverflow.com/questions/14074149", "https://Stackoverflow.com", "https://Stackoverflow.com/users/170352/" ]
This worked as expected when I edited my resource model to actually use the serializer class I created. This was not clear in the documentation. ``` class urlencodeSerializer(Serializer): formats = ['json', 'jsonp', 'xml', 'yaml', 'html', 'plist', 'urlencode'] content_types = { 'json': 'application/json', 'jsonp': 'text/javascript', 'xml': 'application/xml', 'yaml': 'text/yaml', 'html': 'text/html', 'plist': 'application/x-plist', 'urlencode': 'application/x-www-form-urlencoded', } def from_urlencode(self, data,options=None): """ handles basic formencoded url posts """ qs = dict((k, v if len(v)>1 else v[0] ) for k, v in urlparse.parse_qs(data).iteritems()) return qs def to_urlencode(self,content): pass MyModelResource(ModelResoucre): class Meta: ... serializer = urlencodeSerializer() # IMPORTANT ```
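The heart of that serializer is just `parse_qs` plus list flattening. In Python 3 the function lives in `urllib.parse`, and the same trick looks like this (a standalone sketch, independent of tastypie):

```python
from urllib.parse import parse_qs

def from_urlencode(data):
    """Flatten the single-item lists that parse_qs produces."""
    return {k: v if len(v) > 1 else v[0]
            for k, v in parse_qs(data).items()}

body = 'callerid=1&someid=2&number=3&tag=a&tag=b'
print(from_urlencode(body))
# {'callerid': '1', 'someid': '2', 'number': '3', 'tag': ['a', 'b']}
```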
I would add a modification to the from\_urlencode mentioned in Brandon Bertelsen's post to work better with international characters: ``` def from_urlencode(self, data, options=None): """ handles basic formencoded url posts """ qs = {} for k, v in urlparse.parse_qs(data).iteritems(): value = v if len(v)>1 else v[0] value = value.encode("latin-1").decode('utf-8') qs[k] = value return qs ``` I'm not sure if this is because of an environmental reason on my side, but I found that when using the following string "ÁáÄäÅåÉéÍíÑñÓóÖöÚúÜü" and the original function, I ran into some problems. When this string gets URL encoded, it turns into: "%C3%81%C3%A1%C3%84%C3%A4%C3%85%C3%A5%C3%89%C3%A9%C3%8D%C3%AD%C3%91%C3%B1%C3%93%C3%B3%C3%96%C3%B6%C3%9A%C3%BA%C3%9C%C3%BC" When this gets URL decoded, we have: u'\xc3\x81\xc3\xa1\xc3\x84\xc3\xa4\xc3\x85\xc3\xa5\xc3\x89\xc3\xa9\xc3\x8d\xc3\xad\xc3\x91\xc3\xb1\xc3\x93\xc3\xb3\xc3\x96\xc3\xb6\xc3\x9a\xc3\xba\xc3\x9c\xc3\xbc' The problem here is that this string appears to be unicode, but it actually isn't, so the above string gets converted to: "ÃáÃäÃÃ¥ÃéÃíÃñÃóÃÃ" I found that if I interpreted the URL decoded value as latin-1, and then decoded it for UTF-8, I got the correct original string.
17,501
65,934,494
I have three boolean arrays: shift\_list, shift\_assignment, work。 shift\_list:rows represent shift, columns represent time. shift\_assignment:rows represent employee, columns represent shifts work: rows represent employee, columns represent time. **I want to change the value in work by changing the value in shift\_assignment, for example:** if I set shift\_assignment[0,2]==1 then work's Row e0 should be [0,0,1,1,1,0,0] , the [0,0,1,1,1,0,0] row shoud come from shift\_list's row s2. my purpose is to control work array through shift\_assignment,and the value of work must come from shift\_list. sorry,my english! [![enter image description here](https://i.stack.imgur.com/IPlXM.png)](https://i.stack.imgur.com/IPlXM.png) [![enter image description here](https://i.stack.imgur.com/70FeC.png)](https://i.stack.imgur.com/70FeC.png) [![enter image description here](https://i.stack.imgur.com/w1RJX.png)](https://i.stack.imgur.com/w1RJX.png) ```py from ortools.sat.python import cp_model model = cp_model.CpModel() solver = cp_model.CpSolver() shift_list=[[1,1,1,0,0,0,0], [0,1,1,1,0,0,0], [0,0,1,1,1,0,0], [0,0,0,1,1,1,0], [0,0,0,0,1,1,1]] shift_assignment={} for i in range(5): for j in range(5): shift_assignment[i,j] = model.NewBoolVar("shifts(%i,%i)" % (i,j)) work={} for i in range(5): for j in range(7): work[i,j] = model.NewBoolVar("work(%i,%i)" % (i,j)) for i in range(5): model.Add(sum(shift_assignment[i,j] for j in range(5))==1) for i in range(5): model.Add(how can i do?).OnlyEnforceIf(shift_assignment[i,j]) model.Add(shift_assignment[0,2]==1) model.Add(shift_assignment[1,1]==1) model.Add(shift_assignment[2,3]==1) model.Add(shift_assignment[3,4]==1) model.Add(shift_assignment[4,0]==1) res=np.zeros([5,7]) status = solver.Solve(model) print("status:",status) for i in range(5): for j in range(7): res[i,j]=solver.Value(work[i,j]) print(res) ```
2021/01/28
[ "https://Stackoverflow.com/questions/65934494", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13863269/" ]
thank to @Laurent Perron! ``` from ortools.sat.python import cp_model model = cp_model.CpModel() solver = cp_model.CpSolver() shift_list=[[1,1,1,0,0,0,0], [0,1,1,1,0,0,0], [0,0,1,1,1,0,0], [0,0,0,1,1,1,0], [0,0,0,0,1,1,1]] num_emp = 5 num_shift=5 num_time = 7 work={} shift_assignment={} for e in range(num_emp): for s in range(num_shift): shift_assignment[e,s] = model.NewBoolVar("shifts(%i,%i)" % (e,s)) for e in range(num_emp): for t in range(num_time): work[e,t] = model.NewBoolVar("work(%i,%i)" % (e,t)) for e in range(num_emp): model.Add(sum(shift_assignment[e,s] for s in range(num_shift))==1) for e in range(num_emp): for s in range(num_shift): and_ls=[] or_ls=[] for t in range(num_time): if shift_list[s][t]: and_ls.append(work[e,t]) or_ls.append(work[e,t].Not()) else: and_ls.append(work[e,t].Not()) or_ls.append(work[e,t]) or_ls.append(shift_assignment[e,s]) model.AddBoolAnd(and_ls).OnlyEnforceIf(shift_assignment[e,s]) model.AddBoolOr(or_ls) model.Add(shift_assignment[0,2]==1) model.Add(shift_assignment[1,1]==1) model.Add(shift_assignment[2,3]==1) model.Add(shift_assignment[3,4]==1) model.Add(shift_assignment[4,0]==1) status = solver.Solve(model) print("status:",status) res=np.zeros([num_emp,num_time]) for e in range(num_emp): for t in range(num_time): res[e,t]=solver.Value(work[e,t]) print(res) ``` [![enter image description here](https://i.stack.imgur.com/zZU2H.png)](https://i.stack.imgur.com/zZU2H.png)
Basically you need a set of implications. looking only at the first worker: work = [w0, w1, w2, w3, w4, w5, w6] shift = [s0, s1, s2, s3, s4] ``` shift_list=[[1,1,1,0,0,0,0], [0,1,1,1,0,0,0], [0,0,1,1,1,0,0], [0,0,0,1,1,1,0], [0,0,0,0,1,1,1]] ``` so ``` w0 <=> s0 w1 <=> or(s0, s1) w2 <=> or(s0, s1, s2) w3 <=> or(s1, s2, s3) w4 <=> or(s2, s3, s4) w5 <=> or(s3, s4) w6 <=> s4 ``` where you encode `l0 <=> or(l1, ..., ln)` by writing ``` # l0 implies or(l1, .., ln) or(l0.Not(), l1, .., ln) # or(l1, .., ln) implies l0 forall i in 1..n: implication(li, l0) ```
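The clause encoding at the end can be sanity-checked without the solver: enumerate every truth assignment and verify the clauses are satisfied exactly when `l0 == or(l1..ln)` (plain Python, no OR-Tools required):

```python
from itertools import product

def clauses_hold(l0, ls):
    # clause: or(l0.Not(), l1, ..., ln)   => "l0 implies or(ls)"
    c1 = (not l0) or any(ls)
    # clauses: implication(li, l0) each   => "or(ls) implies l0"
    c2 = all((not li) or l0 for li in ls)
    return c1 and c2

# the clauses hold exactly when l0 equals or(l1..ln)
for l0, l1, l2, l3 in product([False, True], repeat=4):
    assert clauses_hold(l0, [l1, l2, l3]) == (l0 == any([l1, l2, l3]))
print('encoding verified for all 16 assignments')
```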
17,502
30,893,843
I've the same issue as asked by the OP in [How to import or include data structures (e.g. a dict) into a Python file from a separate file](https://stackoverflow.com/questions/2132985/how-to-import-or-include-data-structures-e-g-a-dict-into-a-python-file-from-a). However for some reason i'm unable to get it working. My setup is as follows: file1.py: ``` TMP_DATA_FILE = {'a':'val1', 'b':'val2'} ``` file2.py: ``` from file1 import TMP_DATA_FILE var = 'a' print(TMP_DATA_FILE[var]) ``` When i do this and run the script from cmd line, it says string indices must be integers. When i do `type(TMP_DATA_FILE)`, i get class 'str'. I tried to convert this to dict to be able to use dict operations, but couldn't get it working. If i do `print(TMP_DATA_FILE.get(var))`, since i'm developing using PyCharm, it lists dict operations like get(), keys(), items(), fromkeys() etc, however when i run the program from command line it says 'str' object has no attributes 'get'. When i do `print(TMP_DATA_FILE)` it just lists 'val1'. It doesn't list 'a', 'b', 'val2'. However the same script when run from PyCharm works without any issues. It's just when i run the script from command line as a separate interpreter process it gives those errors. I'm not sure if it's PyCharm that's causing the errors or if i'm doing anything wrong. Originally i had only one key:value in the dict variable and it worked, when i added new key:value pair that's when it started giving those errors. I've also tried using `ast.literal_eval` & `eval`, neither of them worked. Not sure where i'm going wrong. Any help would be highly appreciated. Thanks.
2015/06/17
[ "https://Stackoverflow.com/questions/30893843", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3149936/" ]
There are two ways you can access variable `TMP_DATA_FILE` in file `file1.py`: ``` import file1 var = 'a' print(file1.TMP_DATA_FILE[var]) ``` or: ``` from file1 import TMP_DATA_FILE var = 'a' print(TMP_DATA_FILE[var]) ``` `file1.py` is in a directory contained in the python search path, or in the same directory as the file importing it. Check [this answer](https://stackoverflow.com/questions/3144089/expand-python-search-path-to-other-source#answer-3144107) about the python search path.
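A self-contained way to see that the import really yields a `dict` — the snippet writes a throwaway `file1.py` first, so the file name is only illustrative:

```python
import os
import sys
import tempfile

# create file1.py on the fly so the example runs anywhere
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, 'file1.py'), 'w') as fh:
    fh.write("TMP_DATA_FILE = {'a': 'val1', 'b': 'val2'}\n")
sys.path.insert(0, tmpdir)

from file1 import TMP_DATA_FILE

print(type(TMP_DATA_FILE).__name__)  # dict -- not str
print(TMP_DATA_FILE['a'])            # val1
```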
You're calling it the wrong way. It should be like this (as a function call, since the question uses Python 3's `print()`): ``` print(file1.TMP_DATA_FILE[var]) ```
17,503
60,992,072
I have a mini-program that can read text files and turn simple phrases into python code, it has Lexer, Parser, everything, I managed to make it play sound using "winsound" but for some reason, it plays the sound as long as the function does not return, this specific part in the code looks like this: ``` winsound.PlaySound(self.master.files.get(args[1]), winsound.SND_ASYNC | winsound.SND_LOOP | winsound.SND_NODEFAULT) time.sleep(10) return True ``` I used the time.sleep(10) just to experiment when the sound didn't play, and what I noticed is that it plays UNTIL the "return True" line occurs, so doing this time.sleep(10) will do it so the music will play only for 10 seconds. My question is: How can I make this play function without making the music stop whenever the function returns? **Edit**: I made is so the function will return True or False so that the superclass that manages all the commands will know whether each command ran successfully or not **Note** This is just a small portion of the code that is relevant to this question. If you suspect there's more to see in the code to understand my problem please write it in the comments :)
2020/04/02
[ "https://Stackoverflow.com/questions/60992072", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12420682/" ]
Add `update` to your `ChangeNotifierProxyProvider` and change `build` to `create`. ``` ChangeNotifierProxyProvider<MyModel, MyChangeNotifier>( create: (_) => MyChangeNotifier(), update: (_, myModel, myNotifier) => myNotifier ..update(myModel), child: ... ); ``` See: <https://github.com/rrousselGit/provider/blob/master/README.md#ProxyProvider> and <https://pub.dev/documentation/provider/latest/provider/ChangeNotifierProxyProvider-class.html> Edit: Try this ``` ChangeNotifierProxyProvider<Auth, Products>( create: (c) => Products(Provider.of<Auth>(c, listen: false).token), update: (_, auth, products) => products.authToken = auth.token, ), ```
You can use it like this: ``` ListView.builder( physics: NeverScrollableScrollPhysics(), scrollDirection: Axis.vertical, itemCount: rrr.length, itemBuilder: (ctx, index) => ChangeNotifierProvider.value( value: rrr[index], child: ChildItem()), ), ``` Information about the provider content is in `ChildItem()`
17,508
16,903,936
How can I change the location of the .vim folder and the .vimrc file so that I can use two (or more) independent versions of vim? Is there a way to configure that while compiling vim from source? (maybe an entry in the feature.h?) Why do I want to do such a thing?: I have to work on project that use python2 as well as python3, therefore I want to have two independent vim setups with different plugins, configurations etc. Moreover, one version has to be compiled with +python, the other with +python3.
2013/06/03
[ "https://Stackoverflow.com/questions/16903936", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2344834/" ]
You can influence which `~/.vimrc` is used via the `-u vimrc-file` command-line argument. Since this is the first initialization, you can then influence from where plugins are loaded (i.e. the `.vim` location) by modifying `'runtimepath'` in there. Note that for editing Python files of different versions, those settings (like indent, completion sources, etc.) are taken from *filetype* plugins which are sourced for every buffer separately, so it should be possible to even edit both Python 2 and 3 in the same Vim instance. (Unless you have some badly written plugins that define global stuff.) So for that, some sort of per-buffer configuration (some `:autocmd`s on the project path, or some more elaborate solution (search for *localrc* plugins or questions about *project vimrc* here) might suffice already. Also note that when the Python interpreter (which you'd only need for Python-based plugins and some interactive `:py` commands, not for editing Python) is compiled in with *dynamic linking* (which is the default at least on Windows), you can have both Python 2 **and** 3 support in the same Vim binary.
I think the easiest solution would be just to let pathogen handle your runtimepath for you. `pathogen#infect()` can take paths that specify different directories that you can use for your bundle directory. So if your `.vim` directory would look like this ``` .vim/ autoload/ pathogen.vim bundle_python2/ <plugins> bundle_python3/ <other plugins> ``` Then inside one of your vimrc for python 2 specific stuff you would have ``` call pathogen#infect('bundle_python2/{}') ``` and for python 3 specific stuff you would have ``` call pathogen#infect('bundle_python3/{}') ``` Since each plugin folder is really just a `.vim` folder you can place your python specific configuration stuff in a folder of the corresponding bundle and pretend its a `.vim`. This structure also has the added benefit that you can change both configurations at the same time if you feel like it by putting common stuff in `.vim` normally. You can also pass multiple bundle directories if you feel like to pathogen so you can have plugins that are shared without duplicating files. You just do this by passing multiple paths to `pathogen#infect('bundle/{}', 'bundle_python3/{}')` After this is all done you can just create aliases for vim to call the correct vimrc file.
17,509
16,130,549
I've got an internet site running on Tornado, with video features (convert, cut, merge). The video processing is quite long, so I want to move it to another Python process and keep the Tornado process as light as possible. I use MongoDB for common db functionality, synchronously, as the db will stay light.
2013/04/21
[ "https://Stackoverflow.com/questions/16130549", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1538095/" ]
There are several options: * [jQuery UI](http://jqueryui.com/) * [YUI](http://yuilibrary.com/) * [ninjaui](http://ninjaui.com/)
Use [kendo UI](http://www.kendoui.com/) Comprehensive HTML5/JavaScript framework for modern web and mobile app development Kendo UI is everything professional developers need to build HTML5 sites and mobile apps. Today, productivity of an average HTML/jQuery developer is hampered by assembling a Frankenstein framework of disparate JavaScript libraries and plug-ins. Kendo UI has it all: rich jQuery-based widgets, a simple and consistent programming interface, a rock-solid DataSource, validation, internationalization, a MVVM framework, themes, templates and the list goes on. WEB DEMOS are [here](http://demos.kendoui.com/web/overview/index.html) Stackoverflow question are [here](https://stackoverflow.com/questions/tagged/kendo-ui) about Kendo UI
17,512
38,888,714
What is the Python syntax to insert a line break after every occurrence of the character "X"? The code below gave me a `'list' object has no attribute 'split'` error: ``` for myItem in myList.split('X'): myString = myString.join(myItem.replace('X','X\n')) ```
2016/08/11
[ "https://Stackoverflow.com/questions/38888714", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6284097/" ]
**Python 3.X** ``` myString.translate({ord('X'):'X\n'}) ``` `translate()` accepts a dict, so you can replace more than one different character at a time. Why `translate()` over `replace()`? Check [translate vs replace](https://stackoverflow.com/questions/31143290/python-str-translate-vs-str-replace) **Python 2.7** In Python 2, `str.translate` only maps single characters to single characters (equal-length tables), so for a one-to-many replacement fall back to: ``` myString.replace('X','X\n') ```
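A runnable sketch of the Python 3 dict form, including the `str.maketrans` spelling, which builds the same translation table:

```python
s = 'aXbXc'

# dict form: code point -> replacement string (may be longer than 1 char)
print(s.translate({ord('X'): 'X\n'}))

# str.maketrans builds the identical mapping from a plain dict
table = str.maketrans({'X': 'X\n'})
print(s.translate(table))
```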
A list has no `split` method (as the error says). Assuming `myList` is a list of strings and you want to replace `'X'` with `'X\n'` in each once of them, you can use list comprehension: ``` new_list = [string.replace('X', 'X\n') for string in myList] ```
17,513
72,432,540
As you can see, `python --version` shows Python 3.10.4 but the interpreter shows Python 3.7.3. [![enter image description here](https://i.stack.imgur.com/RUqlc.png)](https://i.stack.imgur.com/RUqlc.png) How can I change the environment in VSCode?
2022/05/30
[ "https://Stackoverflow.com/questions/72432540", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16776924/" ]
If you click on the interpreter version being used by VSCode, you should be able to select different versions across your device. [![Interpreter version](https://i.stack.imgur.com/6tWBe.png)](https://i.stack.imgur.com/6tWBe.png)
Selecting the interpreter in VSCode: <https://code.visualstudio.com/docs/python/environments#_work-with-python-interpreters> To run `streamlit` in `vscode`: Open the `launch.json` file of your project. Copy the following: ``` { "configurations": [ { "name": "Python:Streamlit", "type": "python", "request": "launch", "module": "streamlit", "args": [ "run", "${file}" ] } ] } ```
17,522
70,971,382
I want to compare two files and display the differences and the missing records in both files. Based on suggestions on this forum, I found awk is the fastest way to do it. Comparison is to be done based on composite key - match\_key and issuer\_grid\_id **Code:** ``` BEGIN { FS="[= ]" } { match(" "$0,/ match_key="[^"]+"/) key = substr($0,RSTART,RLENGTH) } NR==FNR { file1[key] = $0 next } { if ( key in file1 ) { nf = split(file1[key],tmp) for (i=1; i<nf; i+=2) { f1[key,tmp[i]] = tmp[i+1] } msg = sep = "" for (i=1; i<NF; i+=2) { if ( $(i+1) != f1[key,$i] ) { msg = msg sep OFS ARGV[1] "." $i "=" f1[key,$i] OFS FILENAME "." $i "=" $(i+1) sep = "," } } if ( msg != "" ) { print "Mismatch in row " FNR msg } delete file1[key] } else { file2[key] = $0 } } END { for (key in file1) { print "In file1 only:", key, file1[key] } for (key in file2) { print "In file2 only:", key, file2[key] } } ``` **file1:** ``` period="2021-02-28" book_base_ent_cd="U0028" intra_group_acc_scope="Issuer is not part of the reporting group" frbrnc_stts="Not forborne or renegotiated" src_prfrmng_stts="KC10.1" dflt_stts_issr="Not in default" src_dflt_stts_issr="KC10.1" dflt_stts_instrmnt="Not in default" src_mes_accntng_clssfctn="AMC" prdntl_prtfl="Non-trading book" imprmnt_stts="Stage 1 (IFRS)" src_imprmnt_stts="1" imprmnt_assssmnt_mthd="Collectively assessed" unit_measure="USD" issuer_grid_id="2" match_key="PLCHS252SA20" period="2021-02-28" book_base_ent_cd="U0027" intra_group_acc_scope="Issuer is not part of the reporting group" frbrnc_stts="Not forborne or renegotiated" src_prfrmng_stts="KC10.1" dflt_stts_issr="Not in default" src_dflt_stts_issr="KC10.1" dflt_stts_instrmnt="Not in default" src_mes_accntng_clssfctn="AMC" prdntl_prtfl="Non-trading book" imprmnt_stts="Stage 1 (IFRS)" src_imprmnt_stts="1" imprmnt_assssmnt_mthd="Collectively assessed" unit_measure="EUR" issuer_grid_id="3" match_key="PLCHS252SA20" period="2021-02-28" book_base_ent_cd="U0027" intra_group_acc_scope="Issuer is not part of 
the reporting group" frbrnc_stts="Not forborne or renegotiated" src_prfrmng_stts="KC10.1" dflt_stts_issr="Not in default" src_dflt_stts_issr="KC10.1" dflt_stts_instrmnt="Not in default" src_mes_accntng_clssfctn="AMC" prdntl_prtfl="Non-trading book" imprmnt_stts="Stage 1 (IFRS)" src_imprmnt_stts="1" imprmnt_assssmnt_mthd="Collectively assessed" unit_measure="EUR" issuer_grid_id="2" match_key="PLCHS252SA22" period="2021-02-28" book_base_ent_cd="U0027" intra_group_acc_scope="Issuer is not part of the reporting group" frbrnc_stts="Not forborne or renegotiated" src_prfrmng_stts="KC10.1" dflt_stts_issr="Not in default" src_dflt_stts_issr="KC10.1" dflt_stts_instrmnt="Not in default" src_mes_accntng_clssfctn="AMC" prdntl_prtfl="Non-trading book" imprmnt_stts="Stage 1 (IFRS)" src_imprmnt_stts="1" imprmnt_assssmnt_mthd="Collectively assessed" unit_measure="EUR" issuer_grid_id="2" match_key="PLCHS252SA21" ``` **file2:** ``` period="2021-02-28" book_base_ent_cd="U0027" intra_group_acc_scope="Issuer is not part of the reporting group" frbrnc_stts="Not forborne or renegotiated" src_prfrmng_stts="KC10.1" dflt_stts_issr="Not in default" src_dflt_stts_issr="KC10.1" dflt_stts_instrmnt="Not in default" src_mes_accntng_clssfctn="AMC" prdntl_prtfl="Non-trading book" imprmnt_stts="Stage 1 (IFRS)" src_imprmnt_stts="1" imprmnt_assssmnt_mthd="Collectively assessed" unit_measure="EUR" issuer_grid_id="3" match_key="PLCHS252SA20" period="2021-02-28" book_base_ent_cd="U0027" intra_group_acc_scope="Issuer is not part of the reporting group" frbrnc_stts="Not forborne or renegotiated" src_prfrmng_stts="KC10.1" dflt_stts_issr="Not in default" src_dflt_stts_issr="KC10.1" dflt_stts_instrmnt="Not in default" src_mes_accntng_clssfctn="AMC" prdntl_prtfl="Non-trading book" imprmnt_stts="Stage 1 (IFRS)" src_imprmnt_stts="1" imprmnt_assssmnt_mthd="Collectively assessed" unit_measure="EUR" issuer_grid_id="2" match_key="PLCHS252SA20" period="2021-02-28" book_base_ent_cd="U0027" intra_group_acc_scope="Issuer 
is not part of the reporting group" frbrnc_stts="Not forborne or renegotiated" src_prfrmng_stts="KC10.1" dflt_stts_issr="Not in default" src_dflt_stts_issr="KC10.1" dflt_stts_instrmnt="Not in default" src_mes_accntng_clssfctn="AMC" prdntl_prtfl="Non-trading book" imprmnt_stts="Stage 1 (IFRS)" src_imprmnt_stts="1" imprmnt_assssmnt_mthd="Collectively assessed" unit_measure="EUR" issuer_grid_id="2" match_key="PLCHS252SA23" period="2021-02-28" book_base_ent_cd="U0027" intra_group_acc_scope="Issuer is not part of the reporting group" frbrnc_stts="Not forborne or renegotiated" src_prfrmng_stts="KC10.1" dflt_stts_issr="Not in default" src_dflt_stts_issr="KC10.1" dflt_stts_instrmnt="Not in default" src_mes_accntng_clssfctn="AMC" prdntl_prtfl="Non-trading book" imprmnt_stts="Stage 1 (IFRS)" src_imprmnt_stts="1" imprmnt_assssmnt_mthd="Collectively assessed" unit_measure="EUR" issuer_grid_id="2" match_key="PLCHS252SA21" ``` **file 3 (it has only one row but number of fields are more)** ``` period="2021-02-28" book_base_ent_cd="U0027" other_inst_ident="PLCHS258Q463" rep_nom_curr="PLN" reporting_basis="Unit" src_instr_class="Debt" mat_date="2026-08-25" nom_curr="PLN" primary_asset_class="Bond" seniority_type="931" security_status="alive" issuer_name="CUST38677608" intra_group_prud_scope="Issuer is not part of the reporting group" intra_group_acc_scope="Issuer is not part of the reporting group" frbrnc_stts="Not forborne or renegotiated" src_frbrnc_stts="NOFRBRNRNGT" prfrmng_stts="Performing" src_prfrmng_stts="KC10.1" dflt_stts_issr="Not in default" src_dflt_stts_issr="KC10.1" dflt_stts_instrmnt="Not in default" src_mes_accntng_clssfctn="AMC" prdntl_prtfl="Non-trading book" imprmnt_stts="Stage 1 (IFRS)" src_imprmnt_stts="1" imprmnt_assssmnt_mthd="Collectively assessed" src_imprmnt_assssmnt_mthd="COLLECTIVE" accmltd_imprmnt="78.54" accmltd_chngs_fv_cr="0" expsr_vl="0" unit_measure="EUR" unit_measure_nv="EUR" crryng_amnt="24565.13" issuer_grid_id="38677608" 
match_key="PLCHS258Q463" ``` **Expected output:** ``` In file1 only : issuer_grid_id="2" match_key="PLCHS252SA22" In file2 only : issuer_grid_id="2" match_key="PLCHS252SA23" Mismatch for issuer_grid_id="2" match_key="PLCHS252SA20" : file1.book_base_ent_cd="U0028" file2.book_base_ent_cd="U0027", file1.unit_measure="USD" file2.unit_measure="EUR" ``` **Actual Output** ``` awk -f compare.awk file1 file2 Mismatch in row 1 for file1.issuer_grid_id="2" file2.issuer_grid_id="3", file1.match_key="PLCHS252SA21" file2.match_key="PLCHS252SA20" In file2 only: period="2021-02-28" book_base_ent_cd="U0027" intra_group_acc_scope="Issuer is not part of the reporting group" frbrnc_stts="Not forborne or renegotiated" src_prfrmng_stts="KC10.1" dflt_stts_issr="Not in default" src_dflt_stts_issr="KC10.1" dflt_stts_instrmnt="Not in default" src_mes_accntng_clssfctn="AMC" prdntl_prtfl="Non-trading book" imprmnt_stts="Stage 1 (IFRS)" src_imprmnt_stts="1" imprmnt_assssmnt_mthd="Collectively assessed" unit_measure="EUR" issuer_grid_id="2" match_key="PLCHS252SA21" ``` I am not able to find a way to do the multifield comparison? Any suggestion is appreciated. I tagged python too, if any way to do it in faster way in it. Best Regards.
2022/02/03
[ "https://Stackoverflow.com/questions/70971382", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17742463/" ]
Just tweak the setting of `key` at the top to use whatever set of fields you want, and the printing of the mismatch message to be `from key ... key` instead of `from line ... FNR`: ``` $ cat tst.awk BEGIN { FS="[= ]" } { match(" "$0,/ issuer_grid_id="[^"]+"/) key = substr($0,RSTART,RLENGTH) match(" "$0,/ match_key="[^"]+"/) key = key substr($0,RSTART,RLENGTH) } NR==FNR { file1[key] = $0 next } { if ( key in file1 ) { nf = split(file1[key],tmp) for (i=1; i<nf; i+=2) { f1[key,tmp[i]] = tmp[i+1] } msg = sep = "" for (i=1; i<NF; i+=2) { if ( $(i+1) != f1[key,$i] ) { msg = msg sep OFS ARGV[1] "." $i "=" f1[key,$i] OFS FILENAME "." $i "=" $(i+1) sep = "," } } if ( msg != "" ) { print "Mismatch for key " key msg } delete file1[key] } else { file2[key] = $0 } } END { for (key in file1) { print "In file1 only:", key, file1[key] } for (key in file2) { print "In file2 only:", key, file2[key] } } ``` ``` $ awk -f tst.awk file1 file2 Mismatch for key issuer_grid_id="2" match_key="PLCHS252SA20" file1.book_base_ent_cd="U0028" file2.book_base_ent_cd="U0027", file1.unit_measure="USD" file2.unit_measure="EUR" In file1 only: issuer_grid_id="2" match_key="PLCHS252SA22" period="2021-02-28" book_base_ent_cd="U0027" intra_group_acc_scope="Issuer is not part of the reporting group" frbrnc_stts="Not forborne or renegotiated" src_prfrmng_stts="KC10.1" dflt_stts_issr="Not in default" src_dflt_stts_issr="KC10.1" dflt_stts_instrmnt="Not in default" src_mes_accntng_clssfctn="AMC" prdntl_prtfl="Non-trading book" imprmnt_stts="Stage 1 (IFRS)" src_imprmnt_stts="1" imprmnt_assssmnt_mthd="Collectively assessed" unit_measure="EUR" issuer_grid_id="2" match_key="PLCHS252SA22" In file2 only: issuer_grid_id="2" match_key="PLCHS252SA23" period="2021-02-28" book_base_ent_cd="U0027" intra_group_acc_scope="Issuer is not part of the reporting group" frbrnc_stts="Not forborne or renegotiated" src_prfrmng_stts="KC10.1" dflt_stts_issr="Not in default" src_dflt_stts_issr="KC10.1" dflt_stts_instrmnt="Not in 
default" src_mes_accntng_clssfctn="AMC" prdntl_prtfl="Non-trading book" imprmnt_stts="Stage 1 (IFRS)" src_imprmnt_stts="1" imprmnt_assssmnt_mthd="Collectively assessed" unit_measure="EUR" issuer_grid_id="2" match_key="PLCHS252SA23" ```
You can use ruby sets: ``` $ cat tst.rb def f2h(fn) data={} File.open(fn){|fh| fh. each_line{|line| h=line.scan(/(\w+)="([^"]+)"/).to_h k=h.slice("issuer_grid_id", "match_key"). map{|k,v| "#{k}=#{v}"}.join(", ") data[k]=h} } data end f1=f2h(ARGV[0]) f2=f2h(ARGV[1]) mis=Hash.new { |hash, key| hash[key] = [] } (f2.keys & f1.keys).each{|k| f1[k].each{|ks,v| template="#{ks}: #{ARGV[0]}.#{f1[k][ks]}, #{ARGV[1]}.#{f2[k][ks]}" mis[k] << template if f1[k][ks]!=f2[k][ks]}} mis.each{|k,v| puts "Mismatch for key #{k} #{v.join(" ")}"} f1only=(f1.keys-f2.keys).join(", ") f2only=(f2.keys-f1.keys).join(", ") puts "Only in #{ARGV[0]}: #{f1only}\nOnly in #{ARGV[1]}: #{f2only}" ``` Then calling like so: ``` ruby tst.rb file1 file2 ``` Prints: ``` Mismatch for key issuer_grid_id=2, match_key=PLCHS252SA20 book_base_ent_cd: file1.U0028, file2.U0027 unit_measure: file1.USD, file2.EUR Only in file1: issuer_grid_id=2, match_key=PLCHS252SA22 Only in file2: issuer_grid_id=2, match_key=PLCHS252SA23 ``` (If you want quotes around the values, they are easily added.) It works because ruby support set arithmetic on arrays (this is from the ruby interactive shell): ``` irb(main):033:0> arr1=[1,2,3,4] => [1, 2, 3, 4] irb(main):034:0> arr2=[2,3,4,5] => [2, 3, 4, 5] irb(main):035:0> arr1-arr2 => [1] # only in arr1 irb(main):036:0> arr2-arr1 => [5] # only in arr2 irb(main):037:0> arr1 & arr2 => [2, 3, 4] # common between arr1 and arr2 ``` Since we are using `(f2.keys & f1.keys)` we are guaranteed to only be looping over shared keys. 
It therefore works just fine with your example `file3`:

```
$ ruby tst.rb file1 file3
Only in file1: issuer_grid_id=2, match_key=PLCHS252SA20, issuer_grid_id=3, match_key=PLCHS252SA20, issuer_grid_id=2, match_key=PLCHS252SA22, issuer_grid_id=2, match_key=PLCHS252SA21
Only in file3: issuer_grid_id=38677608, match_key=PLCHS258Q463
```

Since Python also has sets, this is easily written in Python too (note that the two file names `fn1` and `fn2` must be set first, e.g. from `sys.argv`):

```python
import re
import sys

def f2h(fn):
    di = {}
    k1, k2 = "issuer_grid_id", "match_key"
    with open(fn) as f:
        for line in f:
            matches = dict(re.findall(r'(\w+)="([^"]+)"', line))
            di[f"{k1}={matches[k1]}, {k2}={matches[k2]}"] = matches
    return di

fn1, fn2 = sys.argv[1], sys.argv[2]
f1 = f2h(fn1)
f2 = f2h(fn2)

mis = {}
for k in set(f1.keys()) & set(f2.keys()):
    for ks, v in f1[k].items():
        if f1[k][ks] != f2[k][ks]:
            mis.setdefault(k, []).append(
                f"{ks}: {fn1}.{f1[k][ks]}, {fn2}.{f2[k][ks]}")

for k, v in mis.items():
    print(f"Mismatch for key {k} {' '.join(v)}")

print(f"Only in {fn1}: {';'.join(set(f1.keys()) - f2.keys())}")
print(f"Only in {fn2}: {';'.join(set(f2.keys()) - f1.keys())}")
```

While `awk` does not support sets, the set operations `and` and `minus` are trivial to write with associative arrays.
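For completeness, the Python counterpart of the irb demonstration above is just as short; a standalone sketch of the set operations the script relies on (the keys here are made-up placeholders):

```python
f1_keys = {"k1", "k2", "k3"}  # keys parsed from the first file
f2_keys = {"k2", "k3", "k4"}  # keys parsed from the second file

common = f1_keys & f2_keys  # in both files -> compare field by field
only_1 = f1_keys - f2_keys  # only in file1
only_2 = f2_keys - f1_keys  # only in file2

print(common, only_1, only_2)
```

Dict key views support the same operators directly, so `f1.keys() & f2.keys()` works without wrapping them in `set()`.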
Which then allows a `GNU awk` version of this same method:

```
function set_and(a1, a2, a3) {
    delete a3
    for (e in a1)
        if (e in a2)
            a3[e]
}
function set_minus(a1, a2, a3) {
    delete a3
    for (e in a1)
        if (!(e in a2))
            a3[e]
}
function proc_line(s, data) {
    delete data
    # this is the only GNU specific portion and easily rewritten for POSIX
    patsplit(s,matches,/\w+="[^"]+"/)
    for (m in matches) {
        split(matches[m],kv, /=/)
        data[kv[1]]=kv[2]
    }
}
{
    proc_line($0, data)
    key=sprintf("issuer_grid_id=%s, match_key=%s", data["issuer_grid_id"], data["match_key"])
}
FNR==NR{a1[key]=$0}
FNR<NR{a2[key]=$0}
END{
    set_and(a1,a2, a3)
    for (key in a3) {
        ft=sprintf("Mismatch for key %s ", key)
        proc_line(a1[key],d1)
        proc_line(a2[key],d2)
        for (sk in d1)
            if (d1[sk]!=d2[sk]) {
                printf("%s %s %s.%s; %s.%s", ft, sk, ARGV[1], d1[sk], ARGV[2], d2[sk])
                ft=""
            }
        if (ft=="") print ""
    }
    set_minus(a1,a2, a3)
    for (e in a3)
        printf("In %s only: %s\n", ARGV[1], e)
    set_minus(a2,a1, a3)
    for (e in a3)
        printf("In %s only: %s\n", ARGV[2], e)
}
```

This works the same as the Ruby and Python version and also supports the third file example. Good luck!
17,524
72,337,348
I would like to get all text separated by double quotes and commas using python Beautifulsoup. The sample has no classes or ids. Could use the div with "Information:" for the parent, like this:

```
try:
    test_var = soup.find(text='Information:').find_next('ul').find_next('li')
    for li in test_var.find_all:
        test_var = print(li.text, end=","
except:
    test_var = ''
```

Sample:

```
<body>
    <div>Information:</div>
    <ul>
        <li>Text 1</li>
        <li>Text 2</li>
        <li>Text 3</li>
    </ul>
</body>
```

The end result should be like this:

"Text 1", "Text 2", "Text 3"

Thank you.
2022/05/22
[ "https://Stackoverflow.com/questions/72337348", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2615887/" ]
Just use the [:not](https://api.jquery.com/not-selector/) selector like this:

```js
$('.one:not([data-id="two"])').on('click', function() {
  $('.A').show();
});

$("[data-id='two'].one").on('click', function() {
  $('.B').show();
});
```

```css
.one {width: 50px;margin: 10px;padding: 10px 0;text-align: center;outline: 1px solid black}
.A, .B {display: none;background: yellow;width: 50px;margin: 10px;padding: 10px 0;text-align: center;outline: 1px solid black}
```

```html
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<div class="one">one</div>
<div data-id="two" class="one">two one</div>
<div class="A">A</div>
<div class="B">B</div>
```
Change it to accept one or the other when any `$('.one')` is clicked:

```
$('.one').on('click', function() {
  if ($(this).data('id')) {
    $('.B').show();
  } else {
    $('.A').show();
  }
});
```

```js
if ($(this).data('id')) {... // if the `data-id` has a value ex. "2", then it is true
```

```js
$('.one').on('click', function() {
  if ($(this).data('id')) {
    $('.B').show();
  } else {
    $('.A').show();
  }
});
```

```css
.one {
  width: 50px;
  margin: 10px;
  padding: 10px 0;
  text-align: center;
  outline: 1px solid black
}

.A, .B {
  display: none;
  background: yellow;
  width: 50px;
  margin: 10px;
  padding: 10px 0;
  text-align: center;
  outline: 1px solid black
}
```

```html
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<div class="one">one</div>
<div data-id="two" class="one">two one</div>
<div class="A">A</div>
<div class="B">B</div>
```
17,525
67,018,079
I have a problem with this code, why? The code:

```
import cv2
import numpy as np
from PIL import Image
import os
import numpy as np
import cv2
import os
import h5py
import dlib
from imutils import face_utils
from keras.models import load_model
import sys
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D,Dropout
from keras.layers import Dense, Activation, Flatten
from keras.utils import to_categorical
from keras import backend as K
from sklearn.model_selection import train_test_split
from Model import model
from keras import callbacks

# Path for face image database
path = 'dataset'

recognizer = cv2.face.LBPHFaceRecognizer_create()
detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml");

def downsample_image(img):
    img = Image.fromarray(img.astype('uint8'), 'L')
    img = img.resize((32,32), Image.ANTIALIAS)
    return np.array(img)

# function to get the images and label data
def getImagesAndLabels(path):
    path = 'dataset'
    imagePaths = [os.path.join(path,f) for f in os.listdir(path)]
    faceSamples = []
    ids = []
    for imagePath in imagePaths:
        # if there is an error saving any jpegs
        try:
            PIL_img = Image.open(imagePath).convert('L') # convert it to grayscale
        except:
            continue
        img_numpy = np.array(PIL_img,'uint8')
        id = int(os.path.split(imagePath)[-1].split(".")[1])
        faceSamples.append(img_numpy)
        ids.append(id)
    return faceSamples,ids

print("\n [INFO] Training faces now.")
faces,ids = getImagesAndLabels(path)
K.clear_session()
n_faces = len(set(ids))
model = model((32,32,1),n_faces)
faces = np.asarray(faces)
faces = np.array([downsample_image(ab) for ab in faces])
ids = np.asarray(ids)
faces = faces[:,:,:,np.newaxis]
print("Shape of Data: " + str(faces.shape))
print("Number of unique faces : " + str(n_faces))
ids = to_categorical(ids)
faces = faces.astype('float32')
faces /= 255.
x_train, x_test, y_train, y_test = train_test_split(faces,ids, test_size = 0.2, random_state = 0)
checkpoint = callbacks.ModelCheckpoint('trained_model.h5', monitor='val_acc',
                                       save_best_only=True, save_weights_only=True, verbose=1)
model.fit(x_train, y_train,
          batch_size=32,
          epochs=10,
          validation_data=(x_test, y_test),
          shuffle=True, callbacks=[checkpoint])

# Print the number of faces trained and end program
print("\n [INFO] " + str(n_faces) + " faces trained. Exiting Program")
```

---

The output:

```
File "D:\my hard sam\ماجستير\سنة ثانية\البحث\python\Real-Time-Face-Recognition-Using-CNN-master\Real-Time-Face-Recognition-Using-CNN-master\02_face_training.py", line 16, in <module>
    from keras.utils import to_categorical
ImportError: cannot import name 'to_categorical' from 'keras.utils' (C:\Users\omar\PycharmProjects\SnakGame\venv\lib\site-packages\keras\utils\__init__.py)
```
2021/04/09
[ "https://Stackoverflow.com/questions/67018079", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15558831/" ]
**Keras** is now fully integrated into **Tensorflow**. So, importing only **Keras** causes an error. It should be imported as:

```
from tensorflow.keras.utils import to_categorical
```

**Avoid** importing it as:

```
from keras.utils import to_categorical
```

It is safe to use `from tensorflow.keras.` instead of `from keras.` while importing all the necessary modules.

```py
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout
from tensorflow.keras.layers import Dense, Activation, Flatten
from tensorflow.keras.utils import to_categorical
from tensorflow.keras import backend as K
from sklearn.model_selection import train_test_split
from tensorflow.keras import callbacks
```
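For intuition, `to_categorical` simply one-hot encodes integer class ids. A pure-NumPy sketch of the same transformation (illustrative only; `one_hot` is a made-up name, not Keras' actual implementation):

```python
import numpy as np

def one_hot(ids, num_classes=None):
    """One-hot encode integer class ids, mimicking keras' to_categorical."""
    ids = np.asarray(ids, dtype=int)
    if num_classes is None:
        num_classes = int(ids.max()) + 1  # infer the class count from the data
    out = np.zeros((ids.size, num_classes), dtype="float32")
    out[np.arange(ids.size), ids] = 1.0  # set one column per sample
    return out

# e.g. one_hot([0, 2, 1]) yields rows [1,0,0], [0,0,1], [0,1,0]
```

Seeing the output shape (`n_samples x n_classes`) also explains why the script's final `Dense` layer needs one unit per face id.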
First, you can try installing `keras.utils` with

```
pip install keras.utils
```

or, more simply, just import the `to_categorical` module as

```
from tensorflow.keras.utils import to_categorical
```

because keras comes under the tensorflow package.
17,528
4,424,004
I'm new to python programming and GUIs. I searched the internet about GUI programming and saw that there are a lot of ways to do this. I see that the easiest way to do a GUI in python might be tkinter (which is included in Python, and it's just a GUI library, not a GUI builder)? I also read a lot about GLADE+PyGTK (and the XML format); what is so special about them (glade is a GUI builder)? Can anyone offer some "personal opinion" about these choices? I have python code, and I need to make a simple GUI (2 buttons: open, close, read, write, and some "print" work) and then make an .exe file (is py2exe the best choice?). Does making a GUI require a lot of changes to the code? Many thanks
2010/12/12
[ "https://Stackoverflow.com/questions/4424004", "https://Stackoverflow.com", "https://Stackoverflow.com/users/530877/" ]
```
bool perfectNumber(number);
```

This does not call the `perfectNumber` function; it declares a local variable named `perfectNumber` of type `bool` and initializes it with the value of `number` converted to type `bool`.

In order to call the `perfectNumber` function, you need to use something along the lines of:

```
bool result = perfectNumber(number);
```

or:

```
bool result(perfectNumber(number));
```

On another note: if you are going to read input from a stream (e.g. `cin>>number`), you must check to be sure that the extraction of the value from the stream succeeded. As it is now, if you typed in `asdf`, the extraction would fail and `number` would be left uninitialized. The best way to check whether an extraction succeeds is simply to test the state of the stream:

```
if (cin >> number)
{
    bool result = perfectNumber(number);
}
else
{
    // input operation failed; handle the error as appropriate
}
```

You can learn more about how the stream error states are set and reset in [Semantics of flags on `basic_ios`](https://stackoverflow.com/questions/4258887/semantics-of-flags-on-basic-ios). You should also consult [a good, introductory-level C++ book](https://stackoverflow.com/questions/388242/the-definitive-c-book-guide-and-list) for more stream-use best practices.
```
void primenum(long double x)
{
    bool prime = true;
    int number2;
    number2 = (int) floor(sqrt(x)); // Calculates the square-root of 'x'

    for (int i = 1; i <= x; i++)
    {
        for (int j = 2; j <= number2; j++)
        {
            if (i != j && i % j == 0)
            {
                prime = false;
                break;
            }
        }
        if (prime)
        {
            cout << " " << i << " ";
            c += 1;
        }
        prime = true;
    }
}
```
17,533
66,413,002
I'm attempting to translate the following curl request to something that will run in django.

```
curl -X POST https://api.lemlist.com/api/hooks --data '{"targetUrl":"https://example.com/lemlist-hook"}' --header "Content-Type: application/json" --user ":1234567980abcedf"
```

I've run this in git bash and it returns the expected response.

What I have in my django project is the following:

```
apikey = '1234567980abcedf'
hookurl = 'https://example.com/lemlist-hook'
data = '{"targetUrl":hookurl}'
headers = {'Content-Type': 'application/json'}
response = requests.post(f'https://api.lemlist.com/api/hooks/', data=data, headers=headers, auth=('', apikey))
```

Running this python code returns this as a json response

```
{}
```

Any thoughts on where there might be a problem in my code?

Thanks!
2021/02/28
[ "https://Stackoverflow.com/questions/66413002", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7609684/" ]
One way you can do this at the *word* level is:

```
select t.*
from t cross apply
     (select count(*) as cnt
      from string_split(t.text, ' ') s1 join
           string_split(@sentence, ' ') s2
           on s1.value = s2.value
     ) ss
order by ss.cnt desc;
```

Notes:

* This only looks for exact word matches in the two phrases.
* This requires that words are separated by spaces, both in `text` and in "the sentence".
* Duplicate words might throw the count off. This can be managed (say by using `count(distinct s1.value) as cnt`) if you need to.
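For intuition, the word-level overlap the query counts can be sketched outside SQL as well; a hedged Python illustration of the same idea (`word_overlap` and `best_match` are made-up helper names, not part of any library):

```python
def word_overlap(text, sentence):
    """Count distinct words shared by two space-separated strings."""
    return len(set(text.lower().split()) & set(sentence.lower().split()))

def best_match(rows, sentence):
    """Return rows sorted by descending word overlap with `sentence`,
    mirroring the query's ORDER BY cnt DESC."""
    return sorted(rows, key=lambda t: word_overlap(t, sentence), reverse=True)
```

Using `set` intersection here corresponds to the `count(distinct ...)` variant from the last note, which is immune to duplicated words.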
There are a lot of ways to select the two items. For example:

```
SELECT 'I want to buy a ' + A.BrandName + ' cellphone and the model should be ' + A.ModelName
FROM
(
    SELECT SUBSTRING(TEXT, 1, LEN('sumsung')) AS BrandName
         , SUBSTRING(TEXT, LEN(SUBSTRING(TEXT, 1, LEN('sumsung')))+1, LEN(TEXT)) AS ModelName
    FROM TABLE_NAME
    WHERE TEXT LIKE N'%samsung%'
      AND TEXT LIKE N' %galaxy s9%'
) AS A
```
17,538
66,755,583
I've tried all the installing methods in geopandas' [documentation](https://geopandas.org/getting_started/install.html) and nothing works.

`conda install geopandas` gives

```
UnsatisfiableError: The following specifications were found to be incompatible with each other:
Output in format: Requested package -> Available versions

The following specifications were found to be incompatible with your CUDA driver:
  - feature:/win-32::__cuda==10.1=0
Your installed CUDA driver is: 10.1
```

`conda install --channel conda-forge geopandas` gives the same error.

Created a new environment with conda:

```
Package python conflicts for:
python=3
geopandas -> python[version='2.7.*|3.5.*|3.6.*|>=3.5|>=3.6|3.4.*|>=3.6,<3.7.0a0|>=3.7,<3.8.0a0|>=2.7,<2.8.0a0|>=3.5,<3.6.0a0']
geopandas -> pandas[version='>=0.24'] -> python[version='>=3.7|>=3.8,<3.9.0a0|>=3.9,<3.10.0a0']

The following specifications were found to be incompatible with your CUDA driver:
  - feature:/win-32::__cuda==10.1=0
Your installed CUDA driver is: 10.1
```

I tried installing from source, with no luck:

```
A GDAL API version must be specified. Provide a path to gdal-config using a GDAL_CONFIG environment variable or use a GDAL_VERSION environment variable.
```

I also followed [this answer](https://stackoverflow.com/a/58943939/13083530), which gives similar errors for all package installs:

```
Package `geopandas` found in cache
Downloading package . . .
https://download.lfd.uci.edu/pythonlibs/z4tqcw5k/geopandas-0.8.1-py3-none-any.whl
geopandas-0.8.1-py3-none-any.whl
Traceback (most recent call last):
  File "C:\Users\\AppData\Local\Programs\Python\Python38\lib\urllib\request.py", line 649, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 404: Not Found
```

I also followed [this tutorial](https://towardsdatascience.com/geopandas-installation-the-easy-way-for-windows-31a666b3610f), downloaded 5 dependencies' binary wheels, and pip installed them.

I get this error when installing `Fiona`, `geopandas`, and `pyproj`:

```
A GDAL API version must be specified. Provide a path to gdal-config using a GDAL_CONFIG environment variable or use a GDAL_VERSION environment variable.
```

I'm in my venv with Python 3.8.7 on Windows 10. I have GDAL installed and set `GDAL_DATA` and `GDAL_DRIVER_PATH` as environment vars.
2021/03/23
[ "https://Stackoverflow.com/questions/66755583", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13083530/" ]
@duckboycool and @Ken Y-N are right, downgrading to Python 3.7 did the trick!

Downgrade with conda using `conda install python=3.7` and then `conda install geopandas`.
You need to create a new environment first, then install GeoPandas inside it:

```none
1- conda create -n geo_env
2- conda activate geo_env
3- conda config --env --add channels conda-forge
4- conda config --env --set channel_priority strict
5- conda install python=3 geopandas
```

See also the following video and the install guide:

<https://youtu.be/k-MWeAWEta8>

<https://geopandas.org/getting_started/install.html>
17,539
6,767,990
So, I use [SPM](http://www.fil.ion.ucl.ac.uk/spm/) to register fMRI brain images within the same patient; however, I am having trouble registering images between patients.

Essentially, I want to register a brain atlas to a patient-specific scan so that I can do some image patching: register once, then apply that warping and transformation to any number of images. SPM was unsuccessful at such a registration; it cannot warp the atlas into the same brain shape as the patient brain.

Would software such as [freesurfer](http://surfer.nmr.mgh.harvard.edu/) be good for this? Or is there something better out there in either matlab or python (but preferably python)?

Thanks!

tylerthemiler
2011/07/20
[ "https://Stackoverflow.com/questions/6767990", "https://Stackoverflow.com", "https://Stackoverflow.com/users/402632/" ]
Freesurfer segments and annotates the brain in the patient's native space, resulting in patient-specific regions, like [so](http://dl.dropbox.com/u/2467665/freesurfer_segmentation.png). I'm not sure what you mean by patching, or to what other images you'd like to apply this transformation, but it seems like the software most compatible for working with individual patient data, rather than normalized data across patients.
I think [ITK](http://www.itk.org/) is made for this kind of purpose. A Python wrapper exists ([Paul Novotny](http://www.paulnovo.org/) distributes binaries for Ubuntu on his site), but it is mainly C++. If you work under Linux, it is quite simple to compile if you are familiar with cmake.

As this toolkit is a very low-level framework, I can advise you to try [elastix](http://elastix.isi.uu.nl/index.php), a command-line utility allowing one to register images using multiscale B-spline dense registration.

Another interesting tool, based on Maxwell's demons and improved with diffeomorphic capabilities, is [MedINRIA](http://www-sop.inria.fr/asclepios/software/MedINRIA/).
17,542
23,533,566
I want to use /etc/sudoers to change the owner of a file from bangtest (user) to root.

Reason for the change: when I upload an image from bangtest (user) to my server using a Django application, the image file permissions look like

```
ls -l /home/bangtest/alpha/media/products/image_2093.jpg
-rw-r--r-- 1 bangtest bangtest 28984 May 6 02:47
```

but when I try to access that file from the server via //myhost/media/products/image_2093.jpg, I get a 404 error. When I log the error it looks like

```
Caught race condition abuser. attacker: 0, victim: 502 open file owner: 502, open file: /home/bangtest/alpha/media/products/image_2093.jpg
```

After I changed the owner of the file from bangtest to root, I was able to access the image perfectly. For that reason I want to change the owner of the file dynamically using a python script.

I have tried changing the sudoers file as mentioned below, but I am still getting an error like

```
chown: changing ownership of `image.jpg': Operation not permitted
```

My sudoers code:

```
root ALL=(ALL) ALL
bangtest ALL=(ALL) /bin/chown root:bangtest /home/bangtest/alpha/*
```

Any clues why the sudoers entries are not working?

Note: the operating system is Linux.

Thanks
2014/05/08
[ "https://Stackoverflow.com/questions/23533566", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2479352/" ]
I strongly suggest you use a browser such as Firefox with Firebug installed. Load any page, hit Tools > Web Developer > Inspector (or its hot key equivalent), then click on your object, the HTML code inspector will reference the exact line of the css file that is governing the style being generated (either the style directly, or the computed style). Time and sanity saver.
After several attempts and some help from Zurb support, the CSS I needed was:

```
.top-bar-section .dropdown li:not(.has-form) a:not(.button) {
    color: white;
    background: #740707;
}
```

Thanks for the help
17,550
60,103,642
I already know how to open the windows command prompt through python, but I was wondering if there is a way to open a windows powershell x86 window and run commands through it with python 3.7 on windows 10?
2020/02/06
[ "https://Stackoverflow.com/questions/60103642", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You can just call out to powershell.exe using `subprocess.run`:

```
import subprocess

subprocess.run('powershell.exe Get-Item *')
```
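If you also want the command's output back in Python, and to fail gracefully on machines where PowerShell isn't on the PATH, a hedged sketch might look like this (`run_powershell` is an illustrative helper name, not part of `subprocess`):

```python
import shutil
import subprocess

def run_powershell(command):
    """Run a PowerShell command and return its stdout, or None if PowerShell is absent."""
    # Look for Windows PowerShell first, then cross-platform PowerShell Core.
    exe = shutil.which("powershell.exe") or shutil.which("pwsh")
    if exe is None:
        return None
    result = subprocess.run(
        [exe, "-NoProfile", "-Command", command],
        capture_output=True, text=True,
    )
    return result.stdout
```

Passing the executable and arguments as a list avoids shell quoting issues, and `text=True` gives you `stdout` as a string instead of bytes.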
If you know how to run the command prompt (CMD.EXE) then you should be able to use the same method to run PowerShell (PowerShell.EXE). PowerShell.EXE is located in c:\windows\system32\windowspowershell\v1.0\ by default. To run the shell with commands use: ``` c:\windows\system32\windowspowershell\v1.0\PowerShell.exe -c {commands} ``` To launch a .ps1 script file, use ``` c:\windows\system32\windowspowershell\v1.0\PowerShell.exe -f Path\Script.ps1 ``` Good luck.
17,553